Introduction {#S0001} ============ Recently, contamination of the environment by toxic substances has attracted the attention of several researchers in both the developed and the developing countries of the world. Many industrial processes, especially recycling industries, have contributed to the contamination of the lithosphere, thereby causing adverse effects on human health (Wang, [@CIT0065]; Dautremepuits *et al*., [@CIT0025]). Heavy metals can accumulate in the soil and percolate into water bodies and aquifer systems. This can be of public health concern to both animals and humans if ingested via drinking water or through other routes of exposure (Kalay *et al*., [@CIT0040]; Ashraf, [@CIT0014]). Because of the diverse functions the kidney carries out and its small mass in relation to the share of resting cardiac output it receives, it is a target organ both for pharmacologically active chemicals and for toxic chemicals (Schröder, [@CIT0060]). The nephron and its related cells perform multiple physiological functions. The kidney serves as a major mechanism for the excretion and homeostasis of water-soluble molecules (Innocentre *et al*., [@CIT0036]). This is because it is a metabolically active organ which actively concentrates certain substances. In addition, its cells have the potential to biotransform chemicals and metabolically activate a variety of compounds (Innocentre *et al*., [@CIT0036]). Specific physiological characteristics are localized to specific cell types, which makes these cells susceptible to, and the target tissue for, toxic chemicals (Innocentre *et al*., [@CIT0036]; Schröder, [@CIT0060]). Chemicals may therefore cause severe damage to these cells upon exposure. However, renal cells respond to injury by repair, so the kidney as a whole can often recover from cellular lesions. Although there is a substantial capacity within the kidney for repair, there are also several circumstances in which damage may be irreversible. This depends on the exposure level and the exposure time, which may extend over a long period or be limited to a single event, and the damage may be due to a single substance or to multiple chemicals (IPCS-UNEP-ILO-WHO, [@CIT0037]). Leachate is a liquid generated during the process of lead-acid battery recycling. It contains a mixture of metals. The Elewi Odo municipal battery recycling industry is located in the ancient city of Ibadan, in the Ibadan North Local Government Area (LGA) of Oyo State, Nigeria. The liquid leaches from heaps of auto-battery recycling waste into nearby water bodies. The components of the leachate may also percolate through the soil, polluting these water bodies and entering food chains. Experimental investigations of nephrotoxicity caused by mixed-chemical and/or mixed-metal exposures have been few, and the underlying mechanisms are poorly elucidated. In addition, the contribution of mixed, multiple chemicals to the overall incidence of nephropathy and sub-chronic renal failure is not well defined (Schröder, [@CIT0060]). Investigations aimed at improving the basic understanding of the mechanisms linked with the nephrotoxicity of mixed metals and the pathophysiology of renal injury are therefore highly needed. Materials and methods {#S0002} ===================== Sampling industry and leachate preparation {#S20003} ------------------------------------------ The leachate was obtained from the Elewi Odo municipal battery recycling industry, located in the Ibadan North LGA of Oyo State, Nigeria (latitudes 7°25.08′N to 7°25.11′N and longitudes 3°56.42′E to 3°56.45′E). 
The site is largely used for auto-battery waste recycling activities. It lies behind a stream in a residential area and covers about 2 acres of land. A randomized sampling technique (Houk, [@CIT0033]; Li *et al*., [@CIT0044]; Siddique *et al*., [@CIT0061]) was employed to collect first-horizon solid soils (0--15 cm deep) from different points on the municipal auto-battery recycling site. Five randomly collected samples from each site were pooled to make a single representative sample. The sample was air-dried, finely ground with a mortar and pestle, and sifted through a 63-μm (pore size) sieve to obtain a homogenous mixture. Leachate (100%) was prepared from the homogenous mixture according to a standard procedure (ASTM, [@CIT0012]; Ferrari, [@CIT0028]). Briefly, 100 g of the sample (homogenous mixture) was added to 100 ml of distilled water (w/v) and shaken for 48 hr at 32 °C. After shaking, the sample was allowed to settle for 30 minutes to sediment visible particles, and the supernatant was then filtered with a 2.5-μm filter (Whatman No. 42) to remove suspended particles. Finally, the sample was stored at 4 °C until use. It was designated Elewi Odo municipal auto-battery recycling industrial leachate (EOMABRIL). Water samples were collected from the nearby stream and wells and designated STREAM, WELL-A and WELL-B, respectively. Drinking water was also collected at a far distance (8 km away) as a control and designated POW. Heavy metal analysis {#S20004} -------------------- Nine metals, namely copper (Cu), lead (Pb), cadmium (Cd), cobalt (Co), chromium (Cr), zinc (Zn), iron (Fe), nickel (Ni) and manganese (Mn), were analyzed in EOMABRIL, the wells and the control water sample. Briefly, 100 ml each of EOMABRIL and the water samples was digested by heating with concentrated HNO~3~ and the volume was reduced to 2--3 ml. This volume was made up to 10 ml with 0.1 N HNO~3~ and the concentrations of the metals were estimated using an atomic absorption spectrophotometer (AOAC, [@CIT0013]). Chemicals and reagents {#S20005} ---------------------- Epinephrine, reduced GSH, 5,5-dithio-bis-2-nitrobenzoic acid, hydrogen peroxide and thiobarbituric acid (TBA) were purchased from Sigma (St Louis, MO, USA). Except where stated otherwise, all other chemicals and reagents were of analytical grade and were obtained from the British Drug Houses (Poole, Dorset, UK), and the water used was glass-distilled. Experimental design {#S20006} ------------------- ### Sub-acute exposure {#S30007} Healthy adult male Wistar rats weighing approximately 200--220 g, obtained from the Department of Biochemistry, University of Ibadan, Ibadan, Nigeria, were randomly assigned to 4 groups. The rats were acclimatized for a period of 2 weeks. The animals were kept in wire-mesh cages under a controlled light cycle (12 h light/12 h dark), 50% humidity and 30±2 °C, and were given commercially available feed and water *ad libitum* during the periods of acclimatization and treatment. ### Sub-chronic exposure {#S30008} A total of 30 healthy adult male Wistar rats weighing approximately 160--220 g were randomly assigned to 5 groups of 5 animals per group. This sample size was chosen because, in conventional laboratory experiments involving inbred rodents, sample sizes of 5--7 are typical (Hsieh *et al*., [@CIT0034]; Kubota and Wakana, [@CIT0041]). 
Five different concentrations (20, 40, 60, 80 and 100%) of EOMABRIL were prepared according to the groups, and the rats in each group were administered 1 ml of EOMABRIL orally for 60 consecutive days. The study period (60 days) was selected because the conventional duration for sub-chronic exposure to toxicants ranges between 30 and 90 days. In addition, previous work used 60 days when rats were exposed to 500 mg/L of lead (Pb) via drinking water (Deveci *et al*., [@CIT0026]). A corresponding group of animals was administered the same volume of distilled water via the same route and served as the control. All the animals in the various groups had free access to standard laboratory rat pellets and drinking water. Rats were killed by cervical dislocation 24 h after the final treatment; the kidneys were removed, cleared of adhering tissues, washed in ice-cold 1.15% potassium chloride, dried with blotting paper and placed on an ice bath. Animal ethics {#S20009} ------------- All of the animals received humane care according to the criteria outlined in the Guide for the Care and Use of Laboratory Animals prepared by the National Academy of Sciences and published by the National Institutes of Health (USA). Ethical regulations were followed in accordance with national and institutional guidelines for the protection of animal welfare during experiments (PHS, [@CIT0055]). The analysis was carried out at the Laboratory, Department of Biochemistry, Bells University of Science and Technology, Sango Ota, Ogun State, Nigeria. Three different concentrations (20, 40, and 80%) of EOMABRIL (used as a pilot study) were prepared according to the groups, and each rat in each group was administered 1 ml of EOMABRIL per day orally for 7 consecutive days (Brusick, [@CIT0021]). A corresponding group of animals was administered the same volume of distilled water via the same route and served as the control. Rats were killed by cervical dislocation 24 h after the final treatment. The kidneys were quickly removed, weighed and placed on an ice bath. Biochemical assay {#S20010} ----------------- The kidneys were homogenized in 50 mM Tris-HCl buffer (pH 7.4) containing 1.15% KCl and the homogenate was centrifuged at 10,000 *g* for 15 min at 4 °C. The supernatant was collected for the estimation of CAT activity using hydrogen peroxide as substrate according to the method of Clairborne ([@CIT0023]). The H~2~O~2~ level was estimated using the method described by Clairborne ([@CIT0023]). Briefly, 50 μl of the test sample was added to a reaction mixture containing 500 μl of 59 mM H~2~O~2~ and 950 μl of 50 mM phosphate buffer (pH 7.0). The reaction was carried out at 25 °C and the decrease in absorbance at 570 nm was monitored for 3 min at 150-sec intervals. A unit of enzyme activity is defined as the amount of enzyme catalyzing the decomposition of 1 μmol of H~2~O~2~ per minute at 25 °C and pH 7.0 under the specified conditions. SOD activity was determined by measuring the inhibition of the autoxidation of epinephrine at pH 10.2 and 30±1 °C according to Misra and Fridovich ([@CIT0047]). Briefly, 0.1 ml of the kidney homogenate was diluted in 0.9 ml of distilled water to make a 1 in 10 dilution. An aliquot of 0.2 ml of the diluted homogenate was added to 2.5 ml of 0.05 M carbonate buffer (pH 10.2) to equilibrate in a cuvette, and the reaction was started by the addition of 0.3 ml of 0.3 M adrenaline. 
The reference cuvette contained 2.5 ml of carbonate buffer, 0.3 ml of substrate (adrenaline) and 0.2 ml of distilled water. The increase in absorbance at 480 nm was monitored every 30 seconds for 150 seconds. Protein concentration was determined by the method of Lowry *et al*. ([@CIT0045]). Briefly, 20 μl of the supernatant was mixed with 1 ml of Biuret reagent (100 mM NaOH, 16 mM sodium-potassium tartrate, 15 mM potassium iodide and 6 mM CuSO~4~). Thereafter, the mixture was incubated for 30 min at 25 °C and the absorbance was read at 546 nm. Bovine serum albumin was used as the standard protein and the total protein was subsequently calculated. GSH assay {#S20011} --------- Reduced GSH was determined using the method described by Jollow *et al*. ([@CIT0039]). Briefly, 1 ml of supernatant was treated with 500 μl of Ellman's reagent (19.8 mg of 5,5-dithio-bis-2-nitrobenzoic acid in 100 ml of 0.1% sodium citrate) and 3.0 ml of 0.2 M phosphate buffer (pH 8.0). The absorbance was read at 412 nm in a spectrophotometer. Lipid peroxidation assay {#S20012} ------------------------ Lipid peroxidation was quantified as malondialdehyde (MDA) according to the method described by Ohkawa *et al*. ([@CIT0051]) and expressed as mmol/mg tissue. Briefly, 100 μl of homogenate from rat kidneys was mixed with a reaction mixture containing 30 μl of 0.1 M Tris-HCl buffer (pH 7.4). The volume was made up to 300 μl with water before incubation at 37 °C for 2 hours. The colour reaction was developed by adding 300 μl of 8.1% SDS (sodium dodecyl sulphate) to the reaction mixture containing the homogenate, followed by the addition of 600 μl of acetic acid/HCl (pH 3.4) and 600 μl of 0.8% thiobarbituric acid (TBA). This mixture was incubated at 100 °C for 1 hour. The absorbance of the thiobarbituric acid reactive species (TBARS) produced was measured at 532 nm in a UV-Visible spectrophotometer (Model 6305; Jenway, Barloworld Scientific, Dunmow, United Kingdom). The MDA (malondialdehyde) produced was then calculated. Histopathological evaluation {#S20013} ---------------------------- The kidneys were fixed in 10% formalin. They were then dehydrated in a graded series of ethanol and embedded in paraffin. Thin sections, 5--6 micrometres, were cut using a microtome, mounted on albumenized glass slides and stained with haematoxylin and eosin. Morphological examination of the kidneys was done using an ocular micrometer scale under a light microscope. Statistical analysis {#S20014} -------------------- The results of the replicates were pooled and expressed as mean ± standard deviation. A one-way analysis of variance (ANOVA) was used to analyze the results and Duncan's multiple range test was used for post hoc comparisons (Zar, [@CIT0071]). The Statistical Package for the Social Sciences (SPSS) 17.0 for Windows was used for the analysis and the least significant difference (LSD) was accepted at *p*\<0.05. Results {#S0015} ======= Antioxidant status in the kidney {#S20016} -------------------------------- The malondialdehyde (MDA) content in kidney homogenates of the rats treated with EOMABRIL was significantly (*p*\<0.05) elevated compared to the corresponding control rats ([Figure 1a](#F0001){ref-type="fig"}) during the sub-acute exposure (7 days), by 12.53%, 15.92% and 20.63%, respectively. There was no significant difference (*p*\>0.05) between the 20% and 40% doses following sub-acute exposure. 
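A minimal sketch of the statistical treatment described under Statistical analysis is shown below, using SciPy's one-way ANOVA purely for illustration (the study itself used SPSS 17.0 with Duncan's multiple range test, which SciPy does not provide); the replicate values are invented placeholders, not data from this study.

```python
# Illustrative only: one-way ANOVA across dose groups, significance at p < 0.05.
# The study used SPSS 17.0 with Duncan's post hoc test; the values below are made up.
from scipy import stats

groups = {
    "control": [2.1, 2.3, 2.0, 2.2, 2.1],
    "20%":     [2.4, 2.5, 2.6, 2.4, 2.5],
    "40%":     [2.6, 2.7, 2.8, 2.6, 2.7],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one dose group differs significantly from the others")
```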
As shown in [Figure 1b](#F0001){ref-type="fig"}, the rats exposed to EOMABRIL for sixty (60) days (sub-chronic exposure) had a significant (*p*\<0.05) increase in malondialdehyde (MDA) content when compared to the control group. Effects of EOMABRIL on nephritic antioxidant status are shown in [Figures 2](#F0002){ref-type="fig"}--[6](#F0006){ref-type="fig"}. Following exposure to EOMABRIL, a dose-dependent significant (*p*\<0.05) decrease in kidney glutathione (GSH) level and an increase in the activities of SOD and catalase (CAT) were observed in all treated groups. Treatment with 20, 40, 60, 80 and 100% EOMABRIL decreased GSH levels by 20.0, 22.5, 30.0, 40.0 and 48.75%, respectively, while SOD activity increased by 13.85, 50.77, 35.38, 30.77 and 55.38%. Hydrogen peroxide levels were markedly elevated in a non-dose-dependent manner following EOMABRIL administration, by 118.6, 87.2, 65.1, 116.3 and 77.9%, respectively, when compared with the control group. CAT activity was increased by 1.53, 30.71, 23.60, 42.48 and 45.04% after dosing the animals with 20, 40, 60, 80 and 100% EOMABRIL, respectively. Lastly, total protein was significantly (*p*\<0.05) depleted in rats exposed to EOMABRIL, by 4.05, 24.64, 16.21, 31.12 and 32.25%, respectively, relative to the control group. ![(a & b) Effect of EOMABRIL on lipid peroxidation in sub-acute (7 days) and sub-chronic (60 days) exposure. Group 1 received 0%, Group 2 received 20%, Group 3 received 40%, Group 4 received 60%, Group 5 received 80% and Group 6 received 100%. Values represent mean ± standard deviation, n=5. Values with different superscripts are significantly (*p*\<0.05) different.](ITX-9-1-g001){#F0001} ![Effect of EOMABRIL on renal reduced glutathione (GSH) in a sub-chronic (60 days) exposure. Group 1 received 0%, Group 2 received 20%, Group 3 received 40%, Group 4 received 60%, Group 5 received 80% and Group 6 received 100%. Values represent mean ± standard deviation, n=5. Values with different superscripts are significantly (*p*\<0.05) different.](ITX-9-1-g002){#F0002} ![Effect of EOMABRIL on superoxide dismutase activity in a sub-chronic (60 days) exposure. Group 1 received 0%, Group 2 received 20%, Group 3 received 40%, Group 4 received 60%, Group 5 received 80% and Group 6 received 100%. Values represent mean ± standard deviation, n=5. Values with different superscripts are significantly (*p*\<0.05) different.](ITX-9-1-g003){#F0003} ![Effect of EOMABRIL on hydrogen peroxide (H~2~O~2~) level in a sub-chronic (60 days) exposure. Group 1 received 0%, Group 2 received 20%, Group 3 received 40%, Group 4 received 60%, Group 5 received 80% and Group 6 received 100%. Values represent mean ± standard deviation, n=5. Values with different superscripts are significantly (*p*\<0.05) different.](ITX-9-1-g004){#F0004} ![Effect of EOMABRIL on the activity of catalase in a sub-chronic (60 days) exposure. Group 1 received 0%, Group 2 received 20%, Group 3 received 40%, Group 4 received 60%, Group 5 received 80% and Group 6 received 100%. Values represent mean ± standard deviation, n=5. Values with different superscripts are significantly (*p*\<0.05) different.](ITX-9-1-g005){#F0005} ![Effect of EOMABRIL on total protein in a sub-chronic (60 days) exposure. Group 1 received 0%, Group 2 received 20%, Group 3 received 40%, Group 4 received 60%, Group 5 received 80% and Group 6 received 100%. Values represent mean ± standard deviation, n=5. 
Values with different superscripts are significantly (*p*\<0.05) different.](ITX-9-1-g006){#F0006} Nephritic cell damage {#S20017} --------------------- The photomicrographs in [Figure 7(a--f)](#F0007){ref-type="fig"} illustrate the different histopathologic changes that were observed in the kidneys of animals given various doses of EOMABRIL. Administration of EOMABRIL caused severe histopathologic lesions such as renal cortical congestion, medullary damage, numerous abnormal proximal tubules with protein casts in their lumens, and eosinophilic intranuclear inclusions of debris in proximal tubular cells. ![Microscopic findings of kidneys after EOMABRIL administration for 60 days, sub-chronic exposure (× 400). (**Control**) No visible lesions (NVL), or the lesion was very mild. (**20%**) EOMABRIL-exposed rats showed severe renal cortical congestion (cc) and hypertrophy, proliferation and swelling in the lining endothelium of the glomerulus (g). (**40%**) EOMABRIL-exposed rats showed glomerular tubular degeneration with degeneration in the lining epithelial cells of the renal tubules (d), with protein casts (pc) and debris in the lumen of the degenerated tubules. (**60%**) EOMABRIL-exposed rats showed cortical congestion (cc) with protein casts (pc) in the lumen of the tubules. (**80%**) EOMABRIL-exposed rats showed severe renal cortical congestion and numerous tubules with protein casts in their lumens. (**100%**) EOMABRIL-exposed rats showed cortical congestion (cc) and the presence of numerous abnormal tubules with protein casts (pc) in their lumens. Generally, all rats treated with EOMABRIL showed necrosis of the glomerular tubules.](ITX-9-1-g007){#F0007} Discussion {#S0018} ========== Notably, toxic metals are widely generated in the environment and some of them can cause physiological, biochemical and histological disorders. Mammals are exposed to these hazardous substances from innumerable sources, including contaminated air, water, soil and food. However, the physiological effect of chemicals on living subjects depends on dose, duration, route of administration and other physiological factors (Roy Chowdhury, [@CIT0058]). The present work revealed that rats exposed to leachate from a battery recycling industry displayed pronounced impairment of kidney function, which was confirmed by histopathological alterations. The cortex appeared to be more damaged than the medulla in EOMABRIL-exposed rats. This may be due to long-term exposure or may be linked to the uneven distribution of the mixed metals (contained in the leachate) within the nephrons of the kidney, where about 90% of the total renal blood flow enters the cortex via the bloodstream (Atef, 2011). This is because a relatively higher proportion of inorganic substances reaches the cortex via the bloodstream than the medulla (Atef, 2011). This finding supports several observations which reported that experimental animals intoxicated with Pb, Hg, Cd, Cu and other heavy metals developed renal histological alterations (Goran *et al*., [@CIT0030]; Al-madani *et al*., [@CIT0011]; Sarena *et al*., [@CIT0059]; Mission *et al*., 2011). Also, as observed in this study, EOMABRIL-exposed rats showed glomerular tubular degeneration, with protein casts and debris in the lumen of the degenerated proximal tubules. 
The eosinophilic intranuclear inclusions of debris in proximal tubular cells following the treatment may be traced to the formation of metal-protein complexes (Innocentre *et al*., [@CIT0036]). Additionally, much of the kidney pathology is associated with the decrease in intracellular GSH concentration (Atef, 2011). Hence, GSH concentration is important for the survival of the cells. It is also a substrate for glutathione peroxidase. The tripeptide glutathione is involved in one of the most important modulatory mechanisms for free-radical scavenging and for inhibiting the attack of electrophilic xenobiotics on cellular macromolecules (Cnubben *et al*., 2001). As revealed by the present investigation, GSH declined remarkably following EOMABRIL treatment. Reports have shown, in several different animal models as well as in humans, that a decrease in GSH concentration may be associated with nephropathy and the pathogenesis of kidney diseases (Yashpal *et al*., [@CIT0070]; Palsamy and Subramanian, [@CIT0053]; Tomino, [@CIT0064]). The activities of SOD and CAT were markedly increased by EOMABRIL treatment. The increased activity of SOD may be linked to the high level of superoxide anions (O~2~^•−^) induced by EOMABRIL, which results in the accumulation of hydrogen peroxide (H~2~O~2~) in the renal cells. Similarly, the high activity of CAT at the extreme dose confirms the accumulation of the reactive oxygen species H~2~O~2~ in the kidney. The accumulation of hydrogen peroxide (H~2~O~2~) in nephritic tissue causes hydroxyl radical (OH•) generation. This eventually causes damage to renal proteins, biomembranes and DNA molecules. The rise in the activity of CAT could also be linked to its induction to counter the effect of oxidative stress. Therefore, the significant increase in the level of hydrogen peroxide (H~2~O~2~) indicates oxidative stress and nephritic necrosis. Our observation is consistent with the earlier report of Guangke *et al*. ([@CIT0031]). Also, the level of protein was significantly depleted following EOMABRIL exposure. This may be linked to direct inhibition of protein synthesis, or perhaps the protein produced formed complexes with the mixture of metals in EOMABRIL, thereby reducing its measured level. Administration of EOMABRIL during sub-acute and sub-chronic exposure resulted in an increase in MDA levels in treated animals. This result is consistent with previous studies on oxidative damage induced in the liver, brain, heart, kidney and spleen of animals treated with leachate (Li *et al*., [@CIT0043]; Akintunde & Oboh, [@CIT0008]; Akintunde *et al*., [@CIT0006]; Akintunde & Oboh, [@CIT0004]; Akintunde & Oboh, [@CIT0005]). As suggested by this study, EOMABRIL is hypothesized to inhibit the kidney membrane (Na^+^-K^+^) ATPase, thereby disrupting the homeostasis of Na^+^ and K^+^ flux. Similarly, other ATPases, including the (Ca^2+^-Mg^2+^) ATPase and kidney mitochondrial ATPases, may also be targets of the mixed metal ions (M^2+^). Hence, in this case, more diverse effects on cellular function might be anticipated. This is in line with previous studies which found that metal ions (Pb^2+^, Fe^2+^, Cd^2+^, etc.) bind to the renal medullary (Na^+^-K^+^) ATPase (Jefferies *et al*., [@CIT0038]; Masashi *et al*., [@CIT0046]), inhibiting its activity and ATPase-driven transport (Hinton *et al*., [@CIT0032]). Binding of metals to the ATPase is reversible and occurs at a site distinct from the ouabain binding site (Jefferies *et al*., [@CIT0038]). 
This affects the interaction between the α and β subunits of the ATPase protein complex (Hinton *et al*., [@CIT0032]). The binding site occurs at the cytosolic domain of the ATPase (Jefferies *et al*., [@CIT0038]; Masashi *et al*., [@CIT0046]). The (Na^+^-K^+^) ATPase is the energy-requiring step in the development of the electrochemical gradients that drive solute and water transport in the proximal tubule. Moreover, inhibition of the (Na^+^-K^+^) ATPase would not only impair solute and water re-absorption in the proximal tubule but would also impair the transport of substrates for energy metabolism and synthesis in the kidney (e.g., amino acids, citrate, fatty acids, glucose, lactate) (Benard, [@CIT0018]). An earlier report from our laboratory revealed a dose-dependent decrease in the body weights of EOMABRIL-treated animals compared with controls (Akintunde & Oboh, [@CIT0007]). This finding supports the observation of Farombi *et al*. ([@CIT0027]), who reported a significant reduction in rat body weight following intraperitoneal injection with leachate from a landfill. In contrast, our result is inconsistent with the earlier reports of Guangke *et al*. ([@CIT0031]) and Li *et al*. ([@CIT0043]), who reported an increase in the body weight of mice treated with municipal landfill leachate. The discrepancy in these results may be linked to leachate composition, which varies with recycling industry or site and with season, or to species differences. Moreover, studies have shown that renal toxicity in rats is a good predictor of renal toxicity in human subjects (Rosner *et al*., [@CIT0056]). The present findings suggest that mixed metals can cause considerably greater nephropathy when exposure occurs together than when it occurs singly. The nephrotoxic properties of the elements contained in EOMABRIL might be connected to the tubular re-absorption of metal-protein complexes, which increases the epithelial burden of element interactions with organic macromolecules, thus causing a cascade of events leading to cell membrane damage and oxidative stress (Flora *et al*., [@CIT0029]). Previous research showed that cadmium (greater than 0.003 mg/L) and chromium caused severe impairment of different nephronic sub-units and subsequently encouraged abnormal excretion of β2-microglobulin following chromium administration and chronic exposure to cadmium (Osfor *et al*., [@CIT0052]). In this study, sub-chronic exposure of rats to EOMABRIL at all concentrations (20, 40, 60, 80 and 100%) significantly damaged the cell membrane and caused oxidative damage. The toxic response may be because the cadmium detected in the leachate (0.006 mg/L), which was higher than the WHO permissible limit (0.003 mg/L) (WHO [@CIT0067]), together with the other metals, additively damaged kidney membrane integrity and fluidity, as reflected by the increased levels of malondialdehyde (MDA). Thus, the damage observed might have occurred at the initial segment of the proximal convoluted tubule (S1), while damage to the distal segments (S2--S3) by the intermediate metal (lead) has been documented (Bergamaschi *et al*., [@CIT0017]). The concentration of lead (0.015 mg/L) detected in the leachate of the present investigation was far higher than the WHO permissible limit (0.01 mg/L) ([Table 1](#T0001){ref-type="table"}). Earlier findings showed that workers exposed to lead (Pb) alone showed severe damage to both the glomeruli and the tubules (Cardenas & Roels, [@CIT0022]). Also, renal biopsy findings in chronic lead nephropathy with minimal inflammatory response have been documented. 
Mitochondrial swelling, loss of cristae, and increased lysosomal dense bodies within proximal tubule cells (Kutlubay & Oguz, [@CIT0042]) were also observed. It was further reported that arteriolar changes were indistinguishable from nephrosclerosis. Experimental studies also showed that Pb acetate at high doses (0.5%) in drinking water for 12 months resulted in early stages of intoxication, such as kidney cortex hypertrophy, an increase in glomerular filtration rate (GFR) and a comparable increase in tubular antigen excretion (ATSDR, [@CIT0003]). It has also been reported that exposure to Pb above permissible limits (0.01 mg/L) was characterized mainly by tubulointerstitial changes leading to kidney remodelling and progressive glomerulo-angiosclerosis (ATSDR, [@CIT0003]; Kutlubay & Oguz, [@CIT0042]). The lead concentration (0.015 mg/L) contained in the leachate administered to rats in this study, a 50% increase over the WHO permissible limit (0.01 mg/L), caused similar nephrosis. In addition, the glycosaminoglycans (GAGs), which are polysaccharides composed of repetitive disaccharide units, and the urinary beta-N-acetylglycosaminidase activity (NAG) are of interest here (Bastogi, [@CIT0016]). They are found in the glomeruli and the tubules, and their leakage into the urine has been suggested to be a marker of injury to the nephron (Bastogi, [@CIT0016]). Further study also showed that increased excretion of GAGs and NAG is an early indicator of damage to the renal papilla, which is rich in GAG (Bastogi, [@CIT0016]). As revealed by the present study, the rats exposed to the leachate showed increased urination compared with the corresponding controls. Our observation corroborates a recent study which reported that the presence of high Pb could trigger increased urinary excretion of sialic acids, GAGs and NAG, which indicates an effect of exposure to lead (Bastogi, [@CIT0016]) and is an early index of distal nephrotoxicity.

###### Concentration of heavy metals detected in EOMABRIL, STREAM, WELL-A, WELL-B and POW (adapted from Akintunde *et al*. [@CIT0006]; Akintunde *et al*. [@CIT0010])

  Parameter   EOMABRIL        STREAM           WELL-A         WELL-B          POW     WHO
  ----------- --------------- ---------------- -------------- --------------- ------- -------
  Cadmium     0.006 (100%)    0.002            0.002          0.003           BDL     0.003
  Cobalt      0.049           0.004            0.003          0.002           BDL     0.05
  Chromium    0.068 (36%)     0.011            0.015          0.014           BDL     0.05
  Copper      0.341           0.012            0.010          0.010           BDL     2.00
  Iron        2.667 (789%)    1.076 (259%)     0.011          0.030           0.050   0.30
  Manganese   7.842 (1861%)   0.223            0.239          0.239           BDL     0.40
  Nickel      0.050 (150%)    0.048 (140%)     0.044 (120%)   0.049 (145%)    0.027   0.02
  Lead        0.015 (50%)     1.548 (15380%)   0.068 (580%)   0.306 (2960%)   BDL     0.01
  Zinc        0.010           0.126            0.053          0.011           0.010   3.00

EOMABRIL: Elewi Odo municipal battery recycling industrial leachate; POW: drinking water sample used as control. All values are in mg/L. The contents of heavy metals detected in EOMABRIL, STREAM and the WELLS around the site were higher than in the drinking water sample (POW). BDL: below detection level (Source: WHO, [@CIT0069]; WHO [@CIT0067]; Akintunde *et al*. [@CIT0006]; Akintunde *et al*. [@CIT0010]); Least Observable Effective Concentration (LOEC) set by the World Health Organisation (WHO, [@CIT0068]); values in brackets: % increase compared with the WHO permissible limits in drinking water.

The absorption of nickel is dependent on its physicochemical form, with water-soluble forms being absorbed more readily. The metabolism of nickel involves conversion to various chemical forms and binding to various ligands (Daldrup *et al*., [@CIT0024]). 
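The bracketed percentages in Table 1 are simple exceedances of the WHO drinking-water limits. The short sketch below reproduces that arithmetic from the reported concentrations; it is illustrative only and not part of the original analysis.

```python
# Percent exceedance over the WHO drinking-water limit, as shown in brackets
# in Table 1 (all values in mg/L, taken from the table above).
who_limit = {"Cd": 0.003, "Cr": 0.05, "Fe": 0.30, "Mn": 0.40, "Ni": 0.02, "Pb": 0.01}
eomabril  = {"Cd": 0.006, "Cr": 0.068, "Fe": 2.667, "Mn": 7.842, "Ni": 0.050, "Pb": 0.015}

for metal, conc in eomabril.items():
    pct_increase = (conc - who_limit[metal]) / who_limit[metal] * 100
    print(f"{metal}: {conc} mg/L is {pct_increase:.0f}% above the WHO limit")
# e.g. Fe: 2.667 mg/L is 789% above the WHO limit; Pb: 0.015 mg/L is 50% above
```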
Most nickel enters the body via food and water consumption, although inhalation exposure in occupational settings is a primary route for nickel-induced kidney toxicity. In large doses (\>0.02 mg/L), some forms of nickel may be acutely toxic to humans when taken orally (Sunderman *et al*., [@CIT0063]; WHO, [@CIT0069]). The present findings indicate that the nickel (0.05 mg/L) detected in EOMABRIL, a 150% increase over the WHO limit (0.02 mg/L), contributed to the renal damage. Similarly, there have been occasional cases of acute tubular necrosis (ATN) following massive absorption of chromium. Chromate-induced ATN has been extensively studied in experimental animals following parenteral administration of large doses of potassium chromate (hexavalent) (15 mg/kg body weight) (Wedeen & Qjan, [@CIT0066]). It was reported that chromate is selectively accumulated in the convoluted proximal tubule, where necrosis occurs (Wedeen & Qjan, [@CIT0066]). There were also long-term adverse effects of low-dose chromium exposure on the kidneys in chromium workers (Wedeen & Qjan, [@CIT0066]). In this study, chromium, at a 36% increase over the WHO tolerable limit, contributed to nephritic cell damage. As observed in this study, the iron concentration (2.667 mg/L) in the leachate, a 789% increase over the WHO permissible limit (0.3 mg/L), caused kidney dysfunction in the exposed rats. Similarly, the manganese concentration (7.842 mg/L) in the EOMABRIL administered to rats, a 1861% increase over the WHO permissible limit (0.4 mg/L), caused kidney dysfunction. This supports earlier reports which indicated that an elevated level of iron is capable of inducing multiple changes in renal tubular epithelial functions. The effect of iron could be related to diminished expression of the beta 1 integrin subunit and impaired proliferation (Sponsel *et al*., [@CIT0062]). High levels of manganese can bind either to a substrate (such as adenosine triphosphate, ATP) or directly to a protein, thereby causing conformational changes (Huang and Lin, [@CIT0035]). In addition, reports have shown that high doses of manganese can damage the kidneys (inflammation and kidney stone formation) and the urinary tract in rats fed high doses (Ponnapakkpam *et al*., [@CIT0054]). Additionally, tubulointerstitial nephritis with tubular proteinaceous casts and glomerulosclerosis was observed in animal groups treated with manganese (Ponnapakkpam *et al*., [@CIT0054]). In the present findings, the Co (0.049 mg/L), Zn (0.01 mg/L) and Cu (0.341 mg/L) detected in EOMABRIL were considerably lower than the WHO exposure limits (0.05 mg/L, 3 mg/L and 2 mg/L, respectively). Low levels or deficiencies of these metals (Co, Zn and Cu) have been implicated in enhanced expression of angiotensin II, a protein that constricts the blood vessels in the kidneys and further aggravates the condition of individuals with obstructive kidney disease (ATSDR, [@CIT0002]; Naura & Sharma, [@CIT0050]; Brewer, [@CIT0020]; ATSDR, [@CIT0001]). The present finding also supports the result of Bing ([@CIT0019]), which revealed that low levels of these beneficial metals can impair the development and maturation of the kidneys in the fetus during pregnancy and at both pre- and post-weaning phases. This in turn increases the risk of renal dysfunction in adult individuals (Naura & Sharma, [@CIT0050]). Generally, the levels of heavy metals in EOMABRIL were higher than in STREAM, WELL-A, and WELL-B. 
The generally higher levels in EOMABRIL may be because soil readily forms ligands with metals, or because it has a higher capacity to retain heavy metals than inorganic solvents (Akintunde *et al*., [@CIT0010]). The considerably higher concentrations of lead (Pb) in the stream, well-A and well-B (1.548 mg/L, 0.068 mg/L and 0.306 mg/L, respectively) than in the leachate (EOMABRIL, 0.015 mg/L) in the present study may be linked to the direct discharge of effluent from the factory into the stream. A previous study suggested that when lead passes through the soil, its complex ligand formation or adhesion capacity with soil and other materials may be weak (Monroe, [@CIT0049]). In addition, the large concentrations of manganese (7.842 mg/L) and iron (2.667 mg/L) in the leachate, and of lead (1.548 mg/L) in the stream, suggest that most of the waste batteries recycled at the industry were made of electrolytes based on manganese, iron and lead sulphate. Collectively, the necrosis of renal tubular epithelial cells and the injuries induced by EOMABRIL in the present findings could be linked to the individual, additive, synergistic or antagonistic interactions of the metals with renal biomolecules (Akintunde *et al*., [@CIT0006]; Akintunde & Oboh, [@CIT0007]; Akintunde & Oboh, [@CIT0010]). Conclusion {#S0019} ========== Following exposure, EOMABRIL induced systemic toxicity at the doses tested, causing a significant (*p*\<0.05) alteration in the enzymatic antioxidants catalase (CAT) and superoxide dismutase (SOD) in the kidneys, which resulted in elevated levels of malondialdehyde (MDA). Reduced glutathione (GSH) levels were significantly (*p*\<0.05) depleted relative to the control group. Considerable renal cortical congestion and numerous tubules with protein casts in their lumens were observed in EOMABRIL-treated rats. These findings suggest that the possible mechanisms by which EOMABRIL elicits nephrotoxicity at the investigated doses could be linked to the individual, additive, synergistic or antagonistic interactions of the metals with renal biomolecules, alteration of kidney detoxifying enzymes and necrosis of nephritic tubular epithelial cells. We are grateful to the members of the Biochemistry laboratory, Bells University, Ota, Nigeria, including Ajiboye John, Chimyenum Bernice, and Siemuri Ese, for their support and scientific advice throughout the experiments. This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
North Tetagouche, New Brunswick North Tetagouche (Tétagouche-Nord in French) is a community in New Brunswick, Canada. It is situated 7 km west of the centre of Bathurst. It lies to the north of the Tetagouche River; its territory is roughly rectangular and borders Dunlop on the northwest. Most of its territory is forest, with a residential neighbourhood by the river that is linked to Route 322. Advisory Committee Within the local service district, North Tetagouche is administered by the Department of Local Government (New Brunswick), assisted by an advisory committee of five members with a president. Representation The electoral district of Nepisiguit is represented in the Legislative Assembly of New Brunswick by Cheryl Lavoie, a member of the New Brunswick Liberal Association. See also List of communities in New Brunswick
The women, known collectively as "sister survivors," who spoke out against Nassar were honored Wednesday for their "strength and resolve" in bringing "the darkness of sexual abuse into the light." Sarah Klein, who identified herself as the first to be abused by Nassar, said that she and the more than 140 other survivors on the ESPYS stage "represent hundreds more who are not with us tonight. Make no mistake, we are here on this stage to present an image for the world to see: a portrait of survival, a new vision of courage." "Telling our stories of abuse over and over and over again in graphic detail is not easy ... it's grueling and it's painful, but it is time," Klein said. The courage award is given annually to those who embody the spirit of its namesake: tennis legend and longtime human rights campaigner Arthur Ashe. Nassar, the disgraced USA Gymnastics and Michigan State team doctor, was sentenced in January to 40 to 175 years in prison after seven days of impact statements from more than 150 girls and women who said he sexually abused them, in what amounts to the biggest case of sexual abuse in the history of American sports. "We stand here and it feels like we're finally winning," said Tiffany Thomas Lopez, a softball player who was abused by Nassar. "1997, 1998, 1999, 2000, 2004, 2011, 2013, 2014, 2015, 2016. These were the years we spoke up about Larry Nassar's abuse," Olympic gold medal gymnast Aly Raisman said Wednesday. "All those years we were told, 'You are wrong. You misunderstood. He's a doctor. It's OK. Don't worry, we've got it covered. Be careful. There are risks involved.' The intention: to silence us in favor of money, medals and reputation." Jim Kelly, a member of the Pro Football Hall of Fame, is a cancer survivor, and on Wednesday he encouraged people to use positivity in the face of adversity. "Make a difference today for someone who is fighting for their tomorrow," said the former Buffalo Bills quarterback, who was joined onstage by his daughters Erin and Camryn. "When I look across this arena, and when I talk to people, you don't need to be a Russell Wilson or an Aaron Rodgers to make a difference out there. Every single person in this room can be a difference-maker. You can be a normal person that gets up every morning and goes to work. But you can be a difference-maker, putting smiles on those faces." In other special awards, Jake Wood of Team Rubicon was honored with the Pat Tillman Award for Service. A former Wisconsin Badgers football player and Marine, Wood is the co-founder of Team Rubicon, an organization that helps veterans re-acclimate to life back at home through service projects and disaster relief. The award for best coach went to Aaron Feis, Scott Beigel and Chris Hixon of Marjory Stoneman Douglas High School. The three men were among those who died in the February school shooting in Parkland, Florida.
A carbon tax backed by some big businesses and former Republican officials has the support of most voters, a survey commissioned by the group backing it found. The Climate Leadership Council's survey found that its proposal for taxing carbon dioxide emissions — a plan it calls "carbon dividends" — has the support of 56 percent of voters surveyed, including 55 percent of Republicans and 58 percent of Democrats. Twenty-six percent of respondents oppose the plan, the group said. Poll-takers didn't tell respondents the amount of the tax the group is proposing, which would be $43 per metric ton starting in 2021. Nonetheless, the coalition launched by GOP elder statesmen including former secretaries of State James Baker and George Shultz is touting the survey to demonstrate support for its plan. "This shows that the Baker-Shultz carbon dividend plan is the most popular, ambitious and — most of all — politically viable plan to solve climate change," said Ted Halstead, the council's CEO. Halstead chalked up the support to the fact that taxpayers would get the money collected back, likely through regular payments. "This is not your old carbon tax. This is carbon dividends, and the vast majority of American families would win." The group's goal is to present a climate plan that Republicans can get behind, even if it can't be enacted in the immediate future. It faces significant headwinds, including that the GOP has consistently rejected policies that directly target greenhouse gas emissions, and the House voted this summer to denounce carbon taxes. As part of the plan, federal agencies would be mostly blocked from writing greenhouse gas regulations, a provision that is designed to help attract GOP support. In addition to the support from former GOP statesmen, a broad array of businesses like BP, Royal Dutch Shell, PepsiCo, Unilever and General Motors support it. The council is making a major push for its climate plan this week to coincide with the Global Climate Action Summit in California. In addition to the poll, the group is releasing an internal analysis showing that its plan would reduce U.S. carbon emissions by 32 percent by 2025. That would exceed the U.S. pledge in the Paris agreement, which President Trump has committed to pulling out of. It would also exceed the reductions that would have been accomplished if all of former President Obama's climate policies had been implemented, like the Clean Power Plan and methane regulations.
The film crew were warned before they left port that heading to this island was like stepping back in time. There's no electricity, there are no paved roads and, in many cases, no plumbing. That island – called Lasqueti – is home to 400 people and less than an hour away from Vancouver. I highly recommend you check out this 14-minute mini-documentary. You'll see everything from yurts and earthships to solar panel systems and small-scale hydropower. My favorite? 83-year-old Al. He's definitely an inspiring character. (Want to know about the 13 proven and best ways to generate power off-the-grid? Click to grab The 13 Proven Ways to Generate Power Off-the-Grid) As one guy mentions in the video though, not everyone can handle off-the-grid living. He estimated there's less than a 40% success rate for first-timers moving to the island community because they couldn't take giving up all the comforts that the average American may be used to. I bet the same goes for other off-the-grid communities around the world. You'll find two philosophies on living off the grid on Lasqueti. "Some believe moving here is about translating city life to the island, while others insist it's about abandoning the unnecessary." I'd be interested to know: what do you believe going off the grid is all about?
1. Field The following description relates to wired/wireless network technology, and more particularly, to wavelength division multiplexing. 2. Description of the Related Art Recently, due to the introduction of portable multi-function devices, such as smart phones, smart TVs, etc., excessive traffic is generated in wired/wireless networks. In order to cope with such excessive traffic, studies into applying Wavelength Division Multiplexing (WDM) to a wired subscriber network or an integrated wired/wireless subscriber network are actively being conducted. WDM is a method of multiplexing multiple optical wavelengths and transporting them over a single optical fiber at the same time, so that WDM can greatly reduce line costs by a factor equal to the number of optical wavelengths, as well as having many advantages in terms of security, Quality of Service (QoS), and protocol transparency, since each data channel is carried on its own unique wavelength. In order to use WDM, each subscriber device has to be allocated its own wavelength for communication with other parties. This requires optical sources with a number of unique wavelengths corresponding to the number of subscribers belonging to a wired subscriber network that is spread across remote nodes, or to the number of separated-type base stations that exist in an integrated wired/wireless network. The need for optical sources with various unique wavelengths means that several different specific kinds of optical sources must additionally be prepared, in view of fabrication, installation, and equipment management, in case failure occurs. This further requirement may be a considerable burden to providers. For these reasons, studies into the development and commercialization of a wavelength-independent optical source are being actively conducted. Wavelength-independent optical sources can be broadly classified into two types: one is a reflective optical source, such as a Reflective Semiconductor Optical Amplifier (RSOA) or a Fabry-Perot laser diode; and the other is a wavelength-tunable optical source whose lasing wavelength can be tuned. The transmission performance of the reflective optical source strongly depends on the power level of the injected seed light. Therefore, the link would have some constraints, such as scalability and transmission distance. The wavelength-tunable optical source is considered an attractive solution due to its flexibility. However, the output wavelength of the wavelength-tunable optical source is variable; therefore, a wavelength initialization process is indispensable before starting communication. A straightforward and simple way to achieve wavelength initialization is to use a lookup table, usually predetermined and loaded into the tunable transmitter module. A lookup table has to be generated for each of the lasers because of manufacturing variations. Moreover, the values of the control parameters in the lookup table need to be adjusted due to either laser aging or temperature changes. Although the time for generating the lookup table depends on the tuning mechanisms of the laser diodes, and there are some proposals to generate lookup tables in a short time, the overall generation process is exhaustive and requires a time-consuming scanning process. This can increase the devices' packaging cost. Korean Patent Registration No. 10-0945423 discloses a tunable external cavity laser which tunes an output wavelength using the Littman-Metcalf scheme, Korean Patent Registration No. 
10-0945422 discloses a tunable external cavity laser which applies heat near a waveguide configuring Bragg gratings, and Korean Laid-Open Patent Application No. 10-2011-00732232 discloses a tunable laser module which tunes a wavelength by integrating a narrow-band wavelength tunable laser.
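As a rough illustration of the lookup-table approach described above, the sketch below models a per-laser table that maps each calibrated wavelength to the control parameters (here, hypothetical injection-current and temperature set-points) that produce it. The class names and fields are illustrative assumptions, not taken from any cited patent; a real transmitter would populate the table from a calibration scan and re-adjust it for laser aging and temperature drift.

```python
from dataclasses import dataclass

@dataclass
class ControlPoint:
    """Hypothetical control parameters that set one lasing wavelength."""
    current_mA: float      # injection-current set-point
    temperature_C: float   # thermo-electric cooler set-point

class WavelengthLookupTable:
    """Per-laser table: target wavelength (nm) -> control parameters.

    Each laser needs its own table because of manufacturing variation;
    entries are assumed to come from a one-time calibration scan.
    """
    def __init__(self):
        self._table: dict[float, ControlPoint] = {}

    def add_entry(self, wavelength_nm: float, point: ControlPoint) -> None:
        self._table[wavelength_nm] = point

    def initialize(self, target_nm: float, tolerance_nm: float = 0.05) -> ControlPoint:
        """Return control parameters for the calibrated channel closest to target_nm."""
        if not self._table:
            raise ValueError("lookup table has not been calibrated")
        best = min(self._table, key=lambda w: abs(w - target_nm))
        if abs(best - target_nm) > tolerance_nm:
            raise ValueError(f"no calibrated entry within {tolerance_nm} nm of {target_nm} nm")
        return self._table[best]

# Example: calibrate two C-band channels, then initialize to 1550.12 nm.
table = WavelengthLookupTable()
table.add_entry(1550.12, ControlPoint(current_mA=48.3, temperature_C=25.1))
table.add_entry(1550.92, ControlPoint(current_mA=51.7, temperature_C=24.8))
print(table.initialize(1550.12))
```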
Following one man's task of building a virtual world from the comfort of his pajamas. Discusses Procedural Terrain, Vegetation and Architecture generation. Also OpenCL, Voxels and Computer Graphics in general. Tuesday, July 30, 2013 I know I said no more posts about children's toys, but this one was a really good find. If you work with 3D entities, drawing on paper will get you only so far. At some point you really need to look at it from all angles, hold it in your hand. Last time it was a voxel playset. This one is a "simplex" playset: A simplex is the minimum geometric unit you can have. In 2D they are triangles, in 3D tetrahedrons, and in 4D, well, you do not really want to go there. They matter because when you are looking for a solution to a problem, it is often best to target the simplest element possible. If your solution is based on them, it is likely to be the simplest solution as well. It is no coincidence we use triangles extensively for rendering. In 3D simplexes are equally useful. For instance, Perlin rewrote his famous noise function to work over simplexes instead of cubes. It resulted in a faster, better looking noise, which Perlin aptly named -you have guessed- "Simplex Noise". At this point in time, there is no reason why someone would use Perlin noise when Simplex noise is available. We also have Marching Tetrahedrons, which improves over Marching Cubes. In my case I was looking at them because of their role in interpolations. Trilinear interpolation is often done in a cube. If you do it over simplexes you can shave off a few multiplications. When this is in a hot area of your code, simplexes can make a difference. And above all you also have an excuse to play with these cool toys. Did I mention they are magnetic? Wednesday, July 24, 2013 Here is the latest video update. If you are keeping count you will notice I skipped the one for June. I would have done it in time, but one of my twins snapped the microphone I use for such recordings. I did not have time to get a new one until last week. For that reason this update is a bit longer. Tuesday, July 9, 2013 I like skirts. I hope one day men are able to wear them without being judged by the square minds out there. Even miniskirts. I think the Wimbledon tournament should require male players to wear white miniskirts, it would bring us to a new level of tennis. It was equally great when women were liberated from the skirt and got to wear pants last century. But we will be talking about a different type of skirt. Here is the story. When generation algorithms run in parallel you have to deal with multiple world chunks at the same time. You can think of a chess board and imagine all black squares are generated at once. You could put anything you want in these squares and it would be alright; you would never get discontinuities along the edges because black squares never share edges. Now comes the time when you need to generate the white squares. At this point you need to think about the edges, and make sure anything you place in the white square will connect properly with the adjacent black square. You have two options here: (1) you remember what was in the black squares, or (2) your generation algorithm is able to produce content "locally", that is, the value obtained for one point does not depend on the neighboring points. In most cases we opt for (2). This is how noise functions like Perlin's and Worley's work. This is also how Wang Tiles and derivative methods work. 
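To make option (2) concrete, here is a minimal sketch (my own illustration, not the engine's actual code) of a "local" 2D value-noise function: the value at any point is derived purely from its integer lattice coordinates and a seed, so any chunk can be generated in any order and still agree along shared edges.

```python
import math

def lattice_hash(x: int, y: int, seed: int = 1337) -> float:
    """Deterministic pseudo-random value in [0, 1) from integer coordinates only."""
    h = (x * 374761393 + y * 668265263 + seed * 2147483647) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def value_noise(px: float, py: float, seed: int = 1337) -> float:
    """Bilinear interpolation of lattice hashes; depends only on (px, py, seed)."""
    x0, y0 = math.floor(px), math.floor(py)
    fx, fy = px - x0, py - y0
    v00 = lattice_hash(x0,     y0,     seed)
    v10 = lattice_hash(x0 + 1, y0,     seed)
    v01 = lattice_hash(x0,     y0 + 1, seed)
    v11 = lattice_hash(x0 + 1, y0 + 1, seed)
    top = v00 + (v10 - v00) * fx
    bot = v01 + (v11 - v01) * fx
    return top + (bot - top) * fy

# Two chunks generated independently agree on their shared edge at x = 16:
edge_from_chunk_a = [value_noise(16.0, y * 0.25) for y in range(8)]
edge_from_chunk_b = [value_noise(16.0, y * 0.25) for y in range(8)]
assert edge_from_chunk_a == edge_from_chunk_b
```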
Once your generation function is "local", it does not really matter in which order you generate your chunks. They will always line up correctly along the edges. This choice of (2) may seem a no-brainer at this point, but we will come back to this decision later. Now, if instead of a checkerboard arrangement you have multiple levels of detail next to each other (a clipmap), you soon run into a problem. Running the same local function at different resolutions creates discontinuities. They will appear as holes in the resulting world mesh. The following screenshot shows some of them: The clean, nice solution for this is to create a thin mesh that connects both levels of detail. This is usually called a "seam". This is not difficult for 2D clipmaps. For a full 3D clipmap it can get a bit messy. In general your way out of this is to extend the same algorithm you use for meshing. For instance, if you are using marching cubes, you will need a modified marching cubes that runs at one resolution on one end, and at a different resolution on the other end. This is exactly what the guys in the C4 engine have done with their Transvoxel algorithm: http://www.terathon.com/voxels/ In my case I chose not to use seams in the beginning at all, but a different technique called skirts. This is a technique that was often applied to 2D clipmaps as well. The idea is to create a thin mesh that is perpendicular to the edge where the discontinuity appears. While this does not connect to the neighboring cell, it does hide the holes you get, just like the seams. Just like seams, skirts in 3D clipmaps are kind of complicated as well. Imagine you are doing a thin vertical column. You need to make sure the skirts go at the right angle and never go too far. You don't want these skirts protruding out of the other side of your mesh. Skirts have a big problem. Since the vertices in the skirt mesh do not connect to the other end of the edge, you will have some polygons overlapping on screen. This can produce z-fighting at render time. This is not a big deal, you can always shift the skirts in the Z-buffer and make sure they will never fight with the main geometry in your clipmap cells. But this works only if the geometry is opaque. If you are rendering water or glass, skirts make rendering transparent meshes a lot more difficult. Still, skirts have a massive advantage over seams. In order to produce seams you must evaluate the same function at two different resolutions for adjacent cells. If your function has a time penalty per cell, let's say you need to load some data, or access some generation cache, you will be paying this penalty twice for every cell that has a seam. You pay it once when you generate the contents of the cell, then again when you generate the seam. A properly generated seam creates a serial link between two neighboring cells. For a system you want to be massively parallel, any serial elements come at a price. There is no way around this; you either pay the price in processing time or in memory (where you cache the results of earlier processing). Skirts, on the other hand, can be computed with no knowledge of neighboring cells. They are inherently parallel. Back to the checkerboard example: even if you chose option (2), when you are doing seams you will be forced to look into the black squares when you are generating the white ones. Skirts have yet another advantage. Nothing is really forcing you to use the same function from one square to the next. 
Even if the function has discontinuities, the skirts will mask them. You may think this never happens, and that is true while you are using simpler local functions like Perlin noises or tilesets. But at some point you may be generating something that your standard seaming cannot mend; it just takes the generation function producing slightly different results for different levels of detail. Anyway, in my case it was time to get properly connecting seams. They would be nice for water, ice crystals, glass and other transparent materials in the world. I run the dual contouring mesh generation over the seam space. Like in the Transvoxel algorithm, one side of the voxels has double the resolution of the other side. Instead of going back and generating portions of the neighboring cells, I just store their boundaries. So there is a little bit of option (1) from the checkerboard example. It adds some memory overhead but it is worth the savings in processing time. Here you can see some results in wireframe: The seams appear in yellow. I am actually not finished with this yet. I still need to bring the right materials and normals into the seams. But I would say the hard part is over.
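In the same spirit, here is a rough sketch of the boundary-caching idea described above: each cell stores the density samples along one of its faces when it is meshed, and the seam pass reads those cached boundaries instead of re-evaluating the neighbor at a different resolution. Everything here (the names, the dict-based cache, the sampling step) is my own simplification for illustration, not the actual dual-contouring code.

```python
# Sketch: cache per-cell boundary samples so the seam pass never re-runs generation.
# A cell is identified by (cx, cy, cz, lod); boundaries are sampled on its +X face.

boundary_cache: dict[tuple, list[float]] = {}

def generate_cell(cell, density, resolution=8):
    """Mesh one cell (meshing omitted) and cache its +X face density samples."""
    cx, cy, cz, lod = cell
    step = 2 ** lod  # coarser LODs cover more world units per sample
    face = [density(cx + resolution * step, cy + j * step, cz + k * step)
            for j in range(resolution + 1) for k in range(resolution + 1)]
    boundary_cache[cell] = face
    # ... run dual contouring over the cell's interior here ...
    return face

def generate_seam(fine_cell, coarse_cell, density):
    """Stitch a fine cell to its coarser +X neighbor using cached boundaries only."""
    fine = boundary_cache.get(fine_cell) or generate_cell(fine_cell, density)
    coarse = boundary_cache.get(coarse_cell) or generate_cell(coarse_cell, density)
    # ... build seam polygons between `fine` (full res) and `coarse` (half res) ...
    return fine, coarse

# Example with a trivial density function (a sphere of radius 20):
sphere = lambda x, y, z: (x * x + y * y + z * z) ** 0.5 - 20.0
generate_seam((0, 0, 0, 0), (1, 0, 0, 1), sphere)
```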
BECOMING a father has made me reflect a lot on my own childhood and how different it was to the way many children live today. I grew up in Givendale, where my own father was a farm manager, and pretty much all my waking hours were spent out of doors, whatever the weather. My brother and I roamed the fields, climbed trees, fished in ponds, waded in mud, ferreted for rabbits and generally got as filthy and mucky as it was possible to get. We only came home at meal times. According to research published by the National Trust, very few children these days are given the freedom to explore the countryside. Instead, they lead unhealthy “couch potato” lives, increasingly disconnected from nature and the outside world. It is terrible to hear that there has been a rise in childhood obesity. And as a wildlife artist, it is desperately sad to learn that children nowadays are more likely to recognise a dalek from TV’s Dr Who than a barn owl. TVs and computers are robbing children of a healthy and active childhood and apparently parents allow this to happen because they have become frightened and confused by perceived risks of things like “stranger danger” or “germs”. I was lucky to be able to handle livestock on the farm as a child and our garden was littered with coops and aviaries where we kept partridges, pheasants, ferrets, rabbits, chickens, ducks, as well as orphaned birds of prey or injured foxes and even a deer that I had adopted. I once nursed a little owl which would perch on the clock in the kitchen and even ride on the handlebars of my bicycle. I taught it to hunt by letting a beetle loose across the living room carpet. Wild animals interested me the most and I spent so much time watching a badger sett as a teenager that I think I was accepted as part of the clan. This year, the National Trust has called on a number of different organisations to reach out to children and encourage them to get in touch with nature. I thought I would try to do my bit by offering them the experiences that first grasped my attention as a child. So, next month I have invited a mobile petting farm to visit the gallery. The gallery courtyard will be transformed for the day with pens containing new-born lambs, calves, piglets, goslings and ducklings. It is bound to be a noisy event and I hope as many families as possible will bring their children to handle the animals. They are all invited into the gallery afterwards to see my paintings of some wilder species and I’ll make sure there are plenty of pencils and paper available so that they can try their hand at drawing their favourites. The event takes place here at the gallery in Thixendale on Saturday, March 16, noon-3.30pm. It is a free event, but numbers are limited so if you want to come register online at www.robertefuller.com
<?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.arm.malideveloper.openglessdk.graphicssetup" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:debuggable="true" android:label="@string/app_name"> <activity android:label="GraphicsSetup" android:name="GraphicsSetup"> <intent-filter> <action android:name="android.intent.action.MAIN"></action> <category android:name="android.intent.category.LAUNCHER" /> <category android:name="android.intent.category.DEFAULT"></category> </intent-filter> </activity> </application> <uses-feature android:glEsVersion="0x00020000"></uses-feature> </manifest>
Structures of the dehydrogenation products of methane activation by 5d transition metal cations. The activation of methane by gas-phase transition metal cations (M(+)) has been studied extensively, both experimentally and using density functional theory (DFT). Methane is exothermically dehydrogenated by several 5d metal ions to form [M,C,2H](+) and H2. However, the structure of the dehydrogenation product has not been established unambiguously. Two types of structures have been considered: a carbene structure where an intact CH2 fragment is bound to the metal (M(+)-CH2) and a carbyne (hydrido-methylidyne) structure with both a CH and a hydrogen bound to the metal separately (H-M(+)-CH). For metal ions with empty d-orbitals, an agostic interaction can occur that could influence the competition between carbene and carbyne structures. In this work, the gas phase [M,C,2H](+) (M = Ta, W, Ir, Pt) products are investigated by infrared multiple-photon dissociation (IR-MPD) spectroscopy using the Free-Electron Laser for IntraCavity Experiments (FELICE). Metal cations are formed in a laser ablation source and react with methane pulsed into a reaction channel downstream. IR-MPD spectra of the [M,C,2H](+) species are measured in the 300-3500 cm(-1) spectral range by monitoring the loss of H (2H in the case of [Ir,C,2H](+)). For each system, the experimental spectrum closely resembles the calculated spectrum of the lowest energy structure calculated using DFT: for Pt, a classic C(2v) carbene structure; for Ta and W, carbene structures that are distorted by agostic interactions; and a carbyne structure for the Ir complex. The Ir carbyne structure was not considered previously. To obtain this agreement, the calculated harmonic frequencies are scaled with a scaling factor of 0.939, which is fairly low and can be attributed to the strong redshift induced by the IR multiple-photon excitation process of these small molecules. These four-atomic species are among the smallest systems studied by IR-FEL based IR-MPD spectroscopy, and their spectra demonstrate the power of IR spectroscopy in resolving long-standing chemical questions.
This invention relates to an improved valve pin for a valve gating injection molding system. In the past, a wide variety of valve gating systems have been used with varying success for injection molding different materials in different applications. However, while having a variety of different arrangements for providing heat in the gate area, these previous systems have primarily emphasized the valving system and failed to appreciate the degree to which heat transfer to the gate area is critical to the operation of the whole system. While these previous systems do operate for some materials under favourable conditions, their performance is not optimum and their performance is particularly unsatisfactory for more difficult materials under difficult conditions. Without sufficient heat transfer to the gate area, increased injection pressures are required as well as increased force on the valve pin which both in turn lead to subsequent operating difficulties and increased cost. In addition, the heaters in many of these previous systems are subject to the problems of burn out and overheating. More recently, it has become highly desirable to mold new high density materials such as up to 60% glass-filled nylon to replace aluminum molded products. Conventional valve gating systems which do not provide sufficient uniform heat to the gate area have been unable to mold these types of materials. With conventional thermoplastic materials which gradually soften with increased temperatures, closing problems have been overcome by increasing the valve pin force to the area of 400 to 800 pounds. With crystalline materials even this short sighted solution is not available because the melt solidifies very sharply with reduced temperatures. By providing sufficient heat to the gate area the present invention will enable most applications to be run with reduced valve pin forces in the area of from 150 to 300 pounds. Furthermore, in some molding applications it is desirable to form a hole in the molded product to coincide with the gate opening. This may be done by providing for the valve pin to penetrate through the plastic part and seat against the core of the mold as well as in the gate itself. This requires an even transfer of sufficient heat from the upper portion of the heater cast and the melt to the tip of the valve pin where in the past it has been very difficult to maintain and particularly to control temperatures at the necessary level.
/* * * Copyright 2018 Asylo authors * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * */ #include <sys/time.h> #include <time.h> #include <atomic> #include <cstring> #include "asylo/platform/common/time_util.h" #include "asylo/platform/host_call/trusted/host_calls.h" using asylo::NanosecondsToTimeSpec; using asylo::NanosecondsToTimeVal; using asylo::TimeSpecToNanoseconds; namespace { // Ensure the library provides support for atomic operations on a int64_t. static_assert(sizeof(std::atomic<int64_t>) == sizeof(int64_t), "lockfree int64_t is unavailable."); } // namespace extern "C" { // Custom in-enclave nanosleep that will leave the enclave for the standard // nanosleep. int nanosleep(const struct timespec *requested, struct timespec *remainder) { return enc_untrusted_nanosleep(requested, remainder); } int enclave_gettimeofday(struct timeval *__restrict time, void *timezone) { // The timezone parameter is deprecated by POSIX. Fail if non-null. if (timezone) { return -1; } struct timeval tval {}; int result = enc_untrusted_gettimeofday(&tval, nullptr); time->tv_sec = tval.tv_sec; time->tv_usec = tval.tv_usec; return result; } int enclave_times(struct tms *buf) { return enc_untrusted_times(buf); } int clock_gettime(clockid_t clock_id, struct timespec *time) { int result = enc_untrusted_clock_gettime(clock_id, time); if (clock_id == CLOCK_MONOTONIC) { int64_t clock_monotonic = TimeSpecToNanoseconds(time); thread_local static int64_t last_tick = clock_monotonic; // CLOCK_MONOTONIC should never go backwards. if (clock_monotonic < last_tick) abort(); last_tick = clock_monotonic; } return result; } int clock_getcpuclockid(pid_t pid, clockid_t *clock_id) { return enc_untrusted_clock_getcpuclockid(pid, clock_id); } int getitimer(int which, struct itimerval *curr_value) { return enc_untrusted_getitimer(which, curr_value); } int setitimer(int which, const struct itimerval *new_value, struct itimerval *old_value) { return enc_untrusted_setitimer(which, new_value, old_value); } } // extern "C"
Hundreds of tourists were barred from visiting the Athens Acropolis on Christmas Eve after the site's guards called a strike to demand overdue weekend pay. Visitors had to resort to taking photos of themselves outside the monument's shuttered gates on Saturday, peering through the bars to get a look at the 5th century BC temple.

"It kind of sucks because this is one of your main sites here ... It throws off our whole weekend," said Anita Amin, 25, a tourist from the United States.

Greece has been hit by a wave of strikes provoked by cuts imposed by its debt-laden government to meet the terms of lifeline bailout deals from the European Union and the International Monetary Fund. The country's vital tourism industry has already taken a hit from walkouts by taxi drivers and other key workers.

Guards at many other archaeological sites across Greece went on strike on Saturday, saying they would stay home every weekend until the government hands out the two months of weekend pay it owes them. "We are working people. We have seen our salaries greatly reduced because of the economic crisis and we can't keep working without getting paid," said the president of the guards' union, Yannis Mavrikopoulos.

The Acropolis is the leading attraction in a tourism industry which accounts for almost a fifth of the country's ailing economy. "Considering that tourism is one of the main incomes for the country, I think that they should find another way to express their disappointment with their employers," said Eduardo Gouveia, 34, a visitor from Brazil. [Reuters]
It’s that time of year again! Actually, it was that time of year again last month for our annual X-Mas in July “Epic Southern Living Magazine Christmas Cake Baking Party!” However, this year one of my nieces couldn’t make it for any dates in July so we moved it to August. And, once again, it was a chaotic, but fun, success! Here is the cake we decided to make this year: Ta Da! We did it! The Southern Living Magazine Chocolate Citrus Orange Cake with Candied Oranges and Chocolate Ganache Filling! I am thrilled to be debuting my new comedy variety show this October at the lovely, historic Virginia Samford Theater in Birmingham AL. If you are in the area I hope you will come “see” me! Click here for tickets and info. ~ Sunny xo
01 February 2010

The investment map of Russia shows the business potential of each region and the risk connected with possible investment. The fourteenth investment ranking of the Russian regions, for 2008/2009, has been prepared by Expert RA Rating Agency. You can also compare the results with a previous ranking.

The investment map and the preferences of investors have changed radically. The economic crisis has forced investors to rethink priorities on both sides: potential and risk. Nowadays the most competitive regions are those with the lowest "human" risks: criminal, administrative and social. Consumer potential, which held the leading position in previous rankings, is less important for investors than in the past. The importance of infrastructure potential remains stable.
The Black Death was a devastating global epidemic of bubonic plague that struck Europe and Asia in the mid-1300s. The plague arrived in Europe in October 1347, when 12 ships from the Black Sea docked at the Sicilian port of Messina. People gathered on the docks were met with a horrifying surprise: Most sailors aboard the ships were dead, and those still alive were gravely ill and covered in black boils that oozed blood and pus. Sicilian authorities hastily ordered the fleet of “death ships” out of the harbor, but it was too late: Over the next five years, the Black Death would kill more than 20 million people in Europe—almost one-third of the continent’s population.

How Did The Black Plague Start?

Even before the “death ships” pulled into port at Messina, many Europeans had heard rumors about a “Great Pestilence” that was carving a deadly path across the trade routes of the Near and Far East. Indeed, in the early 1340s, the disease had struck China, India, Persia, Syria and Egypt. The plague is thought to have originated in Asia over 2,000 years ago and was likely spread by trading ships, though recent research has indicated the pathogen responsible for the Black Death may have existed in Europe as early as 3000 B.C.

Symptoms of the Black Plague

Europeans were scarcely equipped for the horrible reality of the Black Death. “In men and women alike,” the Italian poet Giovanni Boccaccio wrote, “at the beginning of the malady, certain swellings, either on the groin or under the armpits…waxed to the bigness of a common apple, others to the size of an egg, some more and some less, and these the vulgar named plague-boils.” Blood and pus seeped out of these strange swellings, which were followed by a host of other unpleasant symptoms—fever, chills, vomiting, diarrhea, terrible aches and pains—and then, in short order, death. The bubonic plague attacks the lymphatic system, causing swelling in the lymph nodes. If untreated, the infection can spread to the blood or lungs.

How Did The Black Death Spread?

The Black Death was terrifyingly, indiscriminately contagious: “the mere touching of the clothes,” wrote Boccaccio, “appeared to itself to communicate the malady to the toucher.” The disease was also terrifyingly efficient. People who were perfectly healthy when they went to bed at night could be dead by morning.

Did you know? Many scholars think that the nursery rhyme “Ring around the Rosy” was written about the symptoms of the Black Death.

Understanding the Black Death

Today, scientists understand that the Black Death, now known as the plague, is spread by a bacillus called Yersinia pestis. (The French biologist Alexandre Yersin discovered this germ at the end of the 19th century.) They know that the bacillus travels from person to person through the air, as well as through the bite of infected fleas and rats. Both of these pests could be found almost everywhere in medieval Europe, but they were particularly at home aboard ships of all kinds—which is how the deadly plague made its way through one European port city after another. Not long after it struck Messina, the Black Death spread to the port of Marseilles in France and the port of Tunis in North Africa. Then it reached Rome and Florence, two cities at the center of an elaborate web of trade routes.
By the middle of 1348, the Black Death had struck Paris, Bordeaux, Lyon and London. Today, this grim sequence of events is terrifying but comprehensible. In the middle of the 14th century, however, there seemed to be no rational explanation for it. No one knew exactly how the Black Death was transmitted from one patient to another, and no one knew how to prevent or treat it. According to one doctor, for example, “instantaneous death occurs when the aerial spirit escaping from the eyes of the sick man strikes the healthy person standing near and looking at the sick.”

How Do You Treat the Black Death?

Physicians relied on crude and unsophisticated techniques such as bloodletting and boil-lancing (practices that were dangerous as well as unsanitary) and superstitious practices such as burning aromatic herbs and bathing in rosewater or vinegar. Meanwhile, in a panic, healthy people did all they could to avoid the sick. Doctors refused to see patients; priests refused to administer last rites; and shopkeepers closed their stores. Many people fled the cities for the countryside, but even there they could not escape the disease: It affected cows, sheep, goats, pigs and chickens as well as people. In fact, so many sheep died that one of the consequences of the Black Death was a European wool shortage. And many people, desperate to save themselves, even abandoned their sick and dying loved ones. “Thus doing,” Boccaccio wrote, “each thought to secure immunity for himself.”

Black Plague: God’s Punishment?

Because they did not understand the biology of the disease, many people believed that the Black Death was a kind of divine punishment—retribution for sins against God such as greed, blasphemy, heresy, fornication and worldliness. By this logic, the only way to overcome the plague was to win God’s forgiveness. Some people believed that the way to do this was to purge their communities of heretics and other troublemakers—so, for example, many thousands of Jews were massacred in 1348 and 1349. (Thousands more fled to the sparsely populated regions of Eastern Europe, where they could be relatively safe from the rampaging mobs in the cities.) Some people coped with the terror and uncertainty of the Black Death epidemic by lashing out at their neighbors; others coped by turning inward and fretting about the condition of their own souls.

Flagellants

Some upper-class men joined processions of flagellants that traveled from town to town and engaged in public displays of penance and punishment: They would beat themselves and one another with heavy leather straps studded with sharp pieces of metal while the townspeople looked on. For 33 1/2 days, the flagellants repeated this ritual three times a day. Then they would move on to the next town and begin the process over again. Though the flagellant movement did provide some comfort to people who felt powerless in the face of inexplicable tragedy, it soon began to worry the Pope, whose authority the flagellants had begun to usurp. In the face of this papal resistance, the movement disintegrated.

How Did The Black Death End?

The plague never really ended and it returned with a vengeance years later.
But officials in the Venetian-controlled port city of Ragusa were able to slow its spread by keeping arriving sailors in isolation until it was clear they were not carrying the disease—creating social distancing that relied on isolation to slow the spread of the disease. The sailors were initially held on their ships for 30 days (a trentino), a period that was later increased to 40 days, or a quarantine—the origin of the term “quarantine” and a practice still used today.

Does The Black Plague Still Exist?

The Black Death epidemic had run its course by the early 1350s, but the plague reappeared every few generations for centuries. Modern sanitation and public-health practices have greatly mitigated the impact of the disease but have not eliminated it. While antibiotics are available to treat the Black Death, according to the World Health Organization, there are still 1,000 to 3,000 cases of plague every year.
Evolution of efficient methods to sample lead sources, such as house dust and hand dust, in the homes of children. Efficient sampling methods to recover lead-containing house dust and hand dust have been evolved so that sufficient lead is collected for analysis, and to ensure that correlational analyses linking these two parameters to blood lead are not dependent on the efficiency of sampling. Precise collection of loose house dust from a 1-unit area (484 cm2) with a Tygon or stainless steel sampling tube connected to a portable sampling pump (1.2 to 2.5 liters/min) required repetitive sampling (three times). The Tygon tube sampling technique for loose house dust less than 177 microns in diameter was around 72% efficient with respect to dust weight and lead collection. A representative house dust contained 81% of its total weight in this fraction. A single handwipe for applied loose hand dust was not acceptably efficient or precise, and at least three wipes were necessary to achieve recoveries of greater than 80% of the lead applied. House dusts of different particle sizes less than 246 microns adhered equally well to hands. Analysis of lead-containing material usually required at least three digestions/decantations using hot plate or microwave techniques to allow at least 90% of the lead to be recovered. It was recommended that other investigators validate their handwiping, house dust sampling, and digestion techniques to facilitate comparison of results across studies. The final methodology for the Cincinnati longitudinal study was three sampling passes for surface dust using a stainless steel sampling tube; three microwave digestions/decantations for analysis of dust and paint; and three wipes with handwipes with one digestion/decantation for the analysis of six handwipes together.
Q: Is it possible to build a cheap memcached server?

A: It's always "cheap" to build a "server" (I use those terms loosely, as you'll see below), but we can't answer this question for you. Only you can, by making decisions based on the following questions:

- Your definition of "cheap" (your budgetary needs may differ from others')
- Are you happy to go with commodity hardware and wear the risks?
- Are you concerned with hardware support?
- Are you concerned with hardware replacement service agreements?
- How long do you want the hardware to last?

Once you've got these items figured out, then you need to shop around and see what you can get within your budget, and that will answer the question for you.
There are a number of applications for techniques for optical measurement through light scattering materials. Most notably, such measurements can be performed through biological tissues and therefore can be used for noninvasive medical diagnostic tests. Cancer tissue and healthy tissue, for example, can be distinguished by means of different optical properties. Scanning the optical measurement can yield high contrast and high magnification images of biological tissues. For example, imaging techniques could be used to examine plaque on the interior walls of arteries and vessels or other small biological structures. Related applications extend to the examination and troubleshooting of integrated optical circuits, fiber optic devices, and semiconductor structures. All these applications require that the measuring technique have a relatively high spatial resolution (microns or tens of microns), high sensitivity, and low noise.

Optical time domain reflectometry (OTDR) and optical frequency domain reflectometry (OFDR) are techniques which are used to examine optical systems and are generally not capable of performing high resolution measurements through a light scattering material. For example, these methods are generally designed for finding and locating (to within 1 meter) flaws in a fiber optic system.

Optical coherence domain reflectometry (OCDR) is a technique which has been used to image an object within or behind light scattering media. The technique uses short coherence length light (typically with a coherence length of about 10-100 microns) to illuminate the object. Light reflected from a region of interest within the object is combined with a coherent reference beam. Interference occurs between the two beams only when the reference beam and reflected beam have traveled the same distance. This allows the OCDR to discriminate against light scattered from outside the region of interest.

FIG. 1 shows a typical OCDR setup similar to ones disclosed in several U.S. Pat. Nos. (5,465,147, 5,459,570, and 5,321,501 issued to Swanson et al.; 5,291,267, 5,365,335, and 5,202,745 issued to Sorin et al.). FIG. 1 shows the device made with fiber optic components, but OCDR devices can also be made with bulk optical components. Light having a short coherence length l_c (given by l_c = c/Δf, where Δf is the spectral bandwidth and c is the speed of light) is produced by a light source 20 and travels through a 50/50 coupler 22 where it is divided into two paths. One path goes to the sample 24 to be analyzed and the other path goes to a movable reference mirror 26. Extra fiber length in the reference path is shown as fiber loop 31. The probe beam reflected from the sample 24 and reference beam reflected from the reference mirror 26 are combined at the coupler 22 and sent to a detector 28. The optical paths traversed by the reflected probe beam and reference beam are matched to within one coherence length such that coherent interference can occur upon recombination at the coupler. A phase modulator 30 (such as a piezoelectric fiber stretcher) produces sideband frequencies in the probe beam which produce a temporal interference pattern (beats) when recombined with the reference beam. The detector 28 measures the amplitude of the beats. The amplitude of the detected interference signal is a measure of the amount of light scattered from within a coherence gate interval 32 inside the sample 24 that provides equal path lengths for the probe and reference beams.
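As a rough numerical check (an illustration added here, not part of the patent text, and ignoring lineshape-dependent prefactors), the 10-100 micron coherence lengths quoted above correspond to source bandwidths on the order of terahertz:

$$l_c = \frac{c}{\Delta f}, \qquad \Delta f = 10\ \mathrm{THz} \ \Rightarrow\ l_c = \frac{3 \times 10^{8}\ \mathrm{m/s}}{10^{13}\ \mathrm{Hz}} = 30\ \mu\mathrm{m}$$

which falls within the quoted range; a narrower bandwidth gives a proportionally longer coherence length and therefore a coarser coherence gate.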
Interference is produced only for light scattered from the sample 24 which has traveled the same distance (to within approximately one coherence length) as light reflected from the mirror 26. The coherence gate interval 32 has a width of approximately one coherence length. This feature of OCDR allows the apparatus to discriminate against light which is scattered from outside the coherence gate interval 32, and which is usually incoherent compared to the reference beam. This discrimination (a `coherence gate`) results in improved sensitivity of the device. One negative consequence of the geometry of FIG. 1 is that 50% of the light reflected from the sample 24 is lost. On its return trip through the coupler 22, half the reflected probe beam enters the light source 20 and does not enter the detector 28. This is undesired because it decreases the signal to noise ratio of the device and results in a more powerful light source being required. Another negative feature of the device of FIG. 1 is that it requires the use of a moving mirror to scan longitudinally in and out of the sample 24. The use of a moving mechanical mirror is a disadvantage because moving mechanical parts often have alignment and reliability problems. Another disadvantage of the device of FIG. 1 is the requirement for a large depth of focus of the probe beam in sample 24. A large depth of focus is necessary to allow longitudinal scanning of the coherence gate interval 32 while maintaining the coherence gate interval in the region of the beam having a reasonably small spot size. This requirement increases the minimum spot size of the beam, and thus limits the spatial resolution of the device when acquiring images. A further disadvantage of the device of FIG. 1 is the long integration time typically necessary for each measurement point (pixel) when acquiring an image. This is due to the low power of the backreflected signal when imaging deep within a scattering medium. Under these conditions, the slow acquisition time does not allow in-vivo imaging of live tissue which is usually in motion. U.S. Pat. No. 5,291,267 to Sorin et al. discloses a technique for OCDR which uses the light source as a light amplifier in order to boost the reflected signal from the sample. Light reflected from the sample is returned through the light source in a reverse direction and is amplified as it passes through. However, Sorin's device requires a coupler in the light path between the source and sample and so necessarily wastes 50% of the light reflected from the sample. In other words, only 50% of the light reflected by the sample is amplified and contributes to the interference signal. Consequently, Sorin's device produces less than optimum signal to noise ratio resulting in less accurate measurements.
Identifying risk factors for peripartum cesarean hysterectomy. A population-based study. To determine the incidence of, and obstetric risk factors for, emergency peripartum hysterectomy. A population-based study comparing all singleton deliveries between the years 1988 and 1999 that were complicated with peripartum hysterectomy to deliveries without this complication. Statistical analysis was performed with multiple logistic regression analysis. Emergency peripartum hysterectomy complicated 0.048% (n = 56) of deliveries in the study (n = 117,685). Independent risk factors for emergency peripartum hysterectomy from a backward, stepwise, multivariable logistic regression model were: uterine rupture (OR = 521.4, 95% CI 197.1-1379.7), placenta previa (OR = 8.2, 95% CI 2.2-31.0), postpartum hemorrhage (OR = 33.3, 95% CI 12.6-88.1), cervical tears (OR = 18.0, 95% CI 6.2-52.4), placenta accreta (OR = 13.2, 95% CI 3.5-50.0), second-trimester bleeding (OR = 9.5, 95% CI 2.3-40.1), previous cesarean section (OR = 6.9, 95% CI 3.7-12.8) and grand multiparity (> 5 deliveries) (OR = 3.4, 95% CI 1.8-6.3). Newborns delivered after peripartum hysterectomy had lower Apgar scores (< 7) at 1 and 5 minutes than did others (OR = 11.5, 95% CI 6.2-20.9 and OR = 27.4, 95% CI 11.2-67.4, respectively). In addition, higher rates of perinatal mortality were noted in the uterine hysterectomy vs. the comparison group (OR = 15.9, 95% CI 7.5-32.6). Affected women were more likely than the controls to receive packed-cell transfusions (OR = 457.7, 95% CI 199.2-1105.8) and had lower hemoglobin levels at discharge from the hospital (9.9 +/- 1.3 vs. 12.8 +/- 5.7, P < .001). Cesarean deliveries in patients with suspected placenta accreta, specifically those performed due to placenta previa in women with a previous uterine scar, should involve specially trained obstetricians. In addition, detailed informed consent about the possibility of emergency peripartum hysterectomy and its associated morbidity should be obtained.
The Hitchhiker's Guide to the Galaxy

Storyline: Earthman Arthur Dent is having a very bad day. His house is about to be bulldozed, he discovers that his best friend is an alien--and to top things off, Planet Earth is about to be demolished to make way for a hyperspace bypass. Arthur's only chance for survival: hitch a ride on a passing spacecraft. For the novice space traveler, the greatest adventure in the universe begins when the world ends. Arthur sets out on a journey in which he finds that nothing is as it seems: he learns that a towel is just the most useful thing in the universe, finds the meaning of life, and discovers that everything he needs to know can be found in one book: "The Hitchhiker's Guide to the Galaxy".

Trivia & Production Notes: Sam Rockwell will play Zaphod, the two-headed president of the galaxy. Mos Def plays Ford Prefect, an alien disguising himself as an out-of-work actor who sets out on an intergalactic journey with his best friend, mild-mannered earthling Arthur Dent (Martin Freeman). The duo hitch a ride through space with Sam Rockwell's Zaphod, the beautiful and brilliant scientist Trillian (Zooey Deschanel) and a depressed robot while on a quest to discover the meaning of life.
Information Session - Are You Conducting Research, Scholarship, or Creative Work and want to Present at the April 2015 Whalen Symposium?
http://www.ithaca.edu/intercom/article.php/20141006120654260
For interested faculty and students who have questions or need more information regarding participation in the Whalen Academic Symposium on April 9, 2015. The session will introduce students and faculty to the Symposium, the application process, the abstract guidelines, and presentation options. Please follow the link to sign up.
Nancy Pierce, Thu, 23 Oct 2014 15:20:00 GMT

Applying to the Ithaca College Honors Program
http://www.ithaca.edu/intercom/article.php/20141023112046972
Interested first and second year students are invited to apply to the Ithaca College Honors Program. The Honors Program engages a community of scholars in a highly enhanced liberal arts program that offers a variety of curricular, co-curricular, and extracurricular opportunities to foster critical thinking, intellectual curiosity, and lifelong learning.
Thomas J. Pfaff, Thu, 23 Oct 2014 14:43:00 GMT

Chemistry Alumnus Marcos Pires, Ph.D. '03 presenting on November 4th
http://www.ithaca.edu/intercom/article.php/2014102310435293
Marcos Pires, Ph.D. '03 will be the Chemistry Department's seminar guest speaker Tuesday, November 4th at 4:15 p.m. in CNS 333. The title of the presentation is: "Unnatural D-Amino Acids as Novel Antibiotic Agents and Diagnostic Tools".
Maria Russell, Wed, 22 Oct 2014 19:57:00 GMT

If you like Music, Math and children, consider helping out
http://www.ithaca.edu/intercom/article.php/20141022155741660
Consider volunteering to help out with a Math (and Music) day at IC on April 15th, 10:00 - 1:00, in Williams Hall.

As a member of NERCOMP (NorthEast Regional Computing Program), all Ithaca College administration, faculty, staff and students are invited to submit proposals for the April 2015 annual conference. The deadline for proposals has been extended to this Wednesday, October 22, 12:00pm ET.

As a member of NERCOMP (NorthEast Regional Computing Program), all Ithaca College administration, faculty, staff and students are invited to submit proposals for the April annual conference. The deadline is Friday, October 17th. The announcement from NERCOMP is posted below.
Maria Russell, Thu, 09 Oct 2014 20:17:04 GMT

Change of Major is now part of Workflow!
http://www.ithaca.edu/intercom/article.php/20141009160528469
Looking to change your major? Similar to the Online Course Override Form, the Change of Major (COM) process is now completely electronic! This is a project developed by the Academic Workflow Implementation Group.
Bryan Roberts, Thu, 09 Oct 2014 19:22:00 GMT

Roundtable on Studying Abroad in Cuba
http://www.ithaca.edu/intercom/article.php/20141009152227754
Latin American Studies will host a roundtable discussion on studying abroad in Cuba. Facilitated by Dr. Gonzalez-Conty, Journalism majors Max Ocean ('15) and Candace King ('15) will share their experiences and insights from a semester studying in Havana, Cuba. Please join us in Gannett 112 at 6pm on Monday, October 13.
Jonathan Ablard, Mon, 06 Oct 2014 04:00:00 GMT

Majors in the department of Physics and Astronomy give talks on their summer research Tuesday October 7, 2014 at 12:10 in CNS 204
http://www.ithaca.edu/intercom/article.php/20141003125021352
Please join us for the Fall 2014 Seminar Series as majors in the department of Physics and Astronomy give talks on their summer research Tuesday October 7, 2014 at 12:10 in CNS 204.
Jill Ackerman, Mon, 06 Oct 2014 03:24:00 GMT

Earn Credits While Doing Field Research in Namibia, Botswana, Patagonia, or British Columbia!
http://www.ithaca.edu/intercom/article.php/20141003152408247
Round River Conservation Studies is a research and education organization based in Utah. They offer field-based study-abroad programs where students work with local people to study, protect and restore wild places. Study areas include conservation biology, natural history and environmental policy; and students are immersed in the cultures of indigenous peoples.

The Latin American Studies program invites you to a public lecture by Dr. Ernesto Bassi of Cornell University entitled "Captains, Sailors, and the Creation of a Trans-imperial Greater Caribbean during the Age of Revolutions." Wednesday, October 8 at 6 pm in Textor 101.

Sponsored and organized by the School of Humanities and Sciences in celebration of the 50th anniversary of the C.P. Snow Lecture Series, THATCamp Humanities + Science 2014 will be an exploration and demonstration of the bridge, integration of, and/or collaboration between the Humanities and Sciences. The event will be on the Ithaca College campus on Friday and Saturday, November 7-8, 2014.

The School of Humanities and Sciences and the Department of Writing are pleased to present best-selling biographer D.T. Max as part of this semester's Distinguished Visiting Writers Series.
Eleanor Henderson, Fri, 03 Oct 2014 12:00:00 GMT

Medieval-Renaissance Colloquium presents "Chaucer's Houses in the Tales of the Miller and the Reeve: The Architecture of Satire"
http://www.ithaca.edu/intercom/article.php/20141002104841151
The Ithaca College Medieval-Renaissance Colloquium announces a presentation by Michael Twomey (English) and Scott Stull (Anthropology) titled "Chaucer's Houses in the Tales of the Miller and the Reeve: The Architecture of Satire" on Tuesday, October 7, from 5:00 to 6:30 in the Cayuga Lake Room of the Campus Center.
Michael Twomey, Fri, 26 Sep 2014 18:41:00 GMT

Nominations for Sigma Xi honors
http://www.ithaca.edu/intercom/article.php/20140926144143160
Nominations are open for student or faculty membership in Sigma Xi, The Scientific Research Society. Sigma Xi is a national honor society of scientists and engineers who are elected to the society because of their research achievements or potential. The Paulen A. Smith Chapter at Ithaca College was founded in 1965.
Jill Ackerman, Wed, 17 Sep 2014 19:27:00 GMT

Letter from the Dean
http://www.ithaca.edu/hs/news/letter-from-the-dean-38093/
Greetings! The early fall is a busy time on the Ithaca College campus. We've just welcomed our first-year students in H&S and they are becoming acclimated to their new lives. As you might expect, at our opening celebrations and events, we set expectations for our students. I remind them that studies must always...

Please join us during the fall of 2014 for a series of events celebrating the 50th anniversary of the School of Humanities and Sciences' lecture series inspired by Charles Percy Snow, events that highlight what it means to engage in an integrative liberal arts education.
•Lead up events begin in September and include lectures, art exhibitions, and a staged reading
•C.P. Snow Lecture -- Alan Lightman, "At the Crossroads of Science and Art," November 7
Restructuring: Emeka Anyaoku recommends 8 regions for Nigeria

Former Commonwealth Secretary-General, Chief Emeka Anyaoku, has said that Nigeria should be restructured into eight regions. He identified lack of good leadership as one of the challenges facing the country, but stressed that Nigeria’s present structure is the actual problem confronting her. He made this assertion in Umuahia yesterday, on the occasion of the Chief Emeka Anyaoku lecture series on good governance and book presentation with the theme: “Leadership and Good Governance in Nigeria”, organised by the Youth Affairs International Foundation.

He said that the principal issue in the country is the structure of governance, stating, “I do not believe that the best leadership will make Nigeria well. Not even the best in the world will make it well”. He described the country as an artificial creation, unlike the US, where most of the population are immigrants and it is easy to create one country because all immigrants owe allegiance to the US government. Until 1953, he said, there was no country called Nigeria, and the people were of different cultures. The country, he pointed out, is a place of diversity and established cultures that are also diverse, stating, “what we want to do is to create a nation out of diversity”.

“Nigeria is still a country trying to create a nation for itself.

“What do we do? The 1960 – 1963 constitution created three regions and subsequently four regions which were developing at their own pace and competing among themselves. That made the Nigerian economy one of the fastest-growing in the world”.

He continued: “today, the reality of the present information is that we cannot return to the six regions based on six geographic zones in the country”.

Anyaoku, who is the Ichie Adazie of Obosi in Anambra State, recommended that the country should return to eight regions based on the six geographic zones of the country, “but to be modified by restructuring what used to be mid-west region”.

“So, you have the south-south region, and the old regions as the bases for the creation of mid-western region.

“The killing in Benue, Plateau and Taraba, in my view, has made it impossible for one region to be a full northern region.

“So, the eight regions modelled after the 1960 and 1963 constitution will serve.

“I believe that the challenges of Nigeria begin with economic challenges, security, and poverty”, he said, disclosing that the country is among the 20 poorest countries in the world, with an average income of less than $2 a day.

The country, he said, has the largest concentration of very poor people. He expressed his belief that the challenges include health, power and education, stating that these can be solved if the country has a true federation of eight regions, where each competes with the others.

Abia State, he said, was lucky to have a governor that has managed to pay salaries and pensions, adding that most other states don’t.

On what happens to the existing 36 states, he said that not less than two-thirds of the states in the country are not viable, are unable to pay salaries, and should therefore be retained as development/investment zones within the regions. The structure of governance, he also said, should be decentralized to take account of the needs of each zone.

Incremental restructuring, he said, will create more development, urging that all should be clamouring for a major restructuring. This restructuring, he said, should start in one major swoop.
Chief Anyaoku concluded by saying “we must set our goals high in order to achieve more detailed result”.
Q: Android menu forward-compatibility

I created an app with minimum SDK 7 in order to get maximum compatibility with circulating devices. On Android phones (GB 2.3), pressing the Menu button pops up a menu strip on the bottom of the screen, and that is correct. However, on HC 3.2 tablets, where no hardware menu key is present, I expected a soft menu key at the bottom of the screen, but it didn't appear, so I can't open my menu. I don't know where to investigate or which portion of my code to share, so could you please show me where I have to look for the menu soft button? After reading that menus are deprecated in the most recent Android versions, I don't know whether ICS 4 has a soft menu button or not. I never tested my app on such a device. Can you give me advice? Thanks

A: The link you provide tells you how to correctly provide action bars in your app so that the presence or otherwise of a physical menu button is irrelevant, so that's a good start. Now, you need to combine that with a little runtime detection of the SDK version (just check the Build.VERSION.SDK_INT constant for Android 1.5 or above), along with some appropriate reflection to enable the same APK to run on any Android version starting with your minSDK version.
1. Introduction {#s1} =============== A brain-computer interface (BCI) enables communication without movement based on brain signals measured with electroencephalography (EEG). One of the most widespread BCI paradigms relies on the P300 event-related potentials, and is referred to as P300 BCIs. The P300 is an event-related potential elicited by oddball paradigm (see Figure [1](#F1){ref-type="fig"}). It exhibits larger amplitudes in target (rare) stimuli (Fazel-Rezai et al., [@B9]). Because the P300 component can also be observed for stimuli that are selected by the user e.g., because of his/her intention, many different BCIs can be designed based on this principle. The P300 speller introduced by Farwell and Donchin ([@B8]) can serve as one of the examples. Furthermore, P300 BCIs have consistently exhibited several useful features---they are relatively fast, straightforward, and require practically no training of the user (Fazel-Rezai et al., [@B9]). Unfortunately, the detection of the P300 is challenging because the P300 component is usually hidden in underlying EEG signal (Luck, [@B17]). Therefore, well-trained machine learning system is one of the most important parts of any P300 BCI system. Its task is to read EEG patterns and discriminate them into two classification classes (i.e., P300 detected, P300 not detected). ![Comparison of averaged EEG responses to common (non-target) stimuli and rare (target) stimuli. There is a clear P300 component following the target stimuli.](fnins-11-00302-g0001){#F1} Typically, the P300 detection requires preprocessing, feature extraction, and classification (depicted in more detail in Figure [2](#F2){ref-type="fig"}). The objective of preprocessing is to increase signal to noise ratio. Bandpass filtering of raw EEG signals is a common preprocessing method in P300 detection systems. Since the P300 component is stimulus-locked and the background activity is randomly distributed, the P300 waveform can be extracted using averaging (Luck, [@B17]). Averaging gradually improves signal to noise ratio. On the other hand, averaging also slows down the bit-rate of P300 BCI systems and distorts the shape of ERPs (Luck, [@B17]). Then, features are extracted from EEG signals. Different methods have been used for this purpose, e.g., discrete wavelet transform, independent component analysis, or principal component analysis. The final step is classification. Farwell and Donchin used step-wise discriminant analysis (SWDA) followed by peak picking and covariance evaluation (Farwell and Donchin, [@B8]). Other methods have also been used for the P300 detection such as support vector machine (SVM) (Thulasidas et al., [@B27]), and linear discriminant analysis (LDA) (Guger et al., [@B10]). Although different features and classifiers have been compared (Mirghasemi et al., [@B20]), comparisons of all different features extraction and classification methods applied to the same data set have only been published rarely. One study has, however, examined this issue. In Krusienski et al. ([@B14]) it was shown that SWDA and Fisher\'s linear discriminant (FLD) provided the best overall performance and implementation characteristics for practical classification, as compared to Pearson\'s correlation method (PCM), a linear support vector machine (LSVM), and a Gaussian kernel support vector machine (GSVM) (Fazel-Rezai et al., [@B9]). In Manyakov et al. 
([@B18]), the authors demonstrated that LDA and Bayesian linear discriminant analysis (BLDA) were able to beat other classification algorithms. For comparison purposes, there is a benchmark P300 speller dataset from the BCI Competition 2003 (Blankertz et al., [@B4]) and some papers report results achieved on this dataset. Several approaches were able to reach 100% accuracy using only 4--8 averaged trials on the BCI Competition 2003 data (Cashero, [@B5]). In single trial P300 detection (i.e., detection without averaging the trials), the performance reported in the literature is lower---typically between 65 and 70% (Jansen et al., [@B12]; Haghighatpanah et al., [@B11]). ![Diagram of the P300 BCI system. The EEG signal is captured, amplified and digitized using equidistant time intervals. Then, the parts of the signal time-locked to stimuli (i.e., epochs or ERP trials) must be extracted. Preprocessing and feature extraction methods are applied to the resulting ERP trials in order to extract relevant features. Classification uses learned parameters (e.g., distribution of different classes in the training set) to translate the feature vectors into commands for different device types.](fnins-11-00302-g0002){#F2} Recent development in the field of deep learning neural networks has opened new research possibilities regarding P300 BCI systems. Using a combination of unsupervised pre-training and subsequent fine-tuning, deep neural networks have become one of the most reliable classification methods, in some pattern recognition cases even outperforming other state-of-the-art methods (Pound et al., [@B24]). P300 feature vectors reflect the nature of EEG signal. They are high-dimensional, not linearly separable, consisting of both time samples and spatial information (by concatenating multiple EEG channels). Therefore, deep learning models seem appealing since they are especially powerful for high-dimensional and complex feature vectors. Furthermore, to the authors\' best knowledge, there are very few papers that study deep neural networks for EEG/ERP data (Deng and Yu, [@B6]). The objective of this paper is to verify if one of the deep learning models suitable for real-valued inputs---stacked autoencoders---is suitable for the detection of the P300 component, and to compare it with traditional classification approaches. The datasets used were previously freely provided to public. The paper is organized as follows: Section 2 introduces deep learning models including stacked autoencoders. Then the proposed experiment is described: in Section 2.3, details about the obtained data and experimental conditions are described. Section 2.4.1 explains feature extraction and Section 2.4.2 describes the procedure used to train stacked autoencoders and classification models that were used for comparison. Results are presented in Section 3 and discussed in Section 4. 2. Materials and methods {#s2} ======================== 2.1. Deep learning ------------------ The main goal of this paper is to evaluate the benefits of using new deep learning models for P300 BCIs. Therefore, in this section, deep learning models are introduced. Deep learning models have emerged as a new area of machine learning since 2006 (Deng and Yu, [@B6]). For complex and non-linear problems, deep learning models have proven to outperform traditional classification approaches (e.g., SVM) that are affected by the curse of dimensionality (Arnold et al., [@B1]). 
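For orientation, the conventional baseline against which the deep models in this paper are compared (band-pass filtering, stimulus-locked trial averaging, and a linear classifier such as LDA, as outlined in the Introduction) can be sketched in a few lines. The snippet below is only an illustration on synthetic data with assumed array shapes (trials × channels × samples); it is not the processing code used in this study.

```python
# Illustrative only: a conventional P300 baseline (band-pass filter, trial
# averaging, LDA) run on synthetic stand-in data; shapes and parameters are
# assumptions for the sketch, not this study's settings.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 1000.0  # sampling rate in Hz

def preprocess(epochs, band=(0.1, 30.0)):
    """Band-pass filter stimulus-locked epochs of shape (trials, channels, samples)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    return filtfilt(b, a, epochs, axis=-1)

def average_trials(epochs, n_avg=4):
    """Average groups of n_avg trials to raise SNR at the cost of bit-rate."""
    n = (len(epochs) // n_avg) * n_avg
    return epochs[:n].reshape(-1, n_avg, *epochs.shape[1:]).mean(axis=1)

# Synthetic stand-in data: target epochs carry a P300-like bump around 300 ms.
rng = np.random.default_rng(0)
t = np.arange(1000) / fs
bump = 0.5 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
raw_target = rng.standard_normal((80, 19, 1000)) + bump
raw_nontarget = rng.standard_normal((80, 19, 1000))

X_t = average_trials(preprocess(raw_target))       # -> (20, 19, 1000)
X_n = average_trials(preprocess(raw_nontarget))
X = np.concatenate([X_t, X_n]).reshape(40, -1)     # flatten channels x samples
y = np.array([1] * 20 + [0] * 20)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```

The stacked autoencoder introduced in the next subsection replaces the linear classifier in this chain while operating on the same kind of epoch-based feature vectors.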
These problems cannot be efficiently solved by using neural networks with many layers (commonly referred to as deep neural networks) trained using backpropagation. The more layers the neural network contains, the smaller the impact of backpropagation on the first layers. Gradient descent then tends to get stuck in local minima or plateaus, which is why no more than two layers were used in most practical applications (Deng and Yu, [@B6]). In deep learning, each layer is treated separately and successively trained in a greedy way: once the previous layers have been trained, a new layer is trained from the encoding of the input data by the previous layers. Then, a supervised fine-tuning stage of the whole network can be performed (Arnold et al., [@B1]). Deep network models generally fall into the following categories (Arnold et al., [@B1]):

-   Deep belief networks (stacked restricted Boltzmann machines)

-   Stacked autoencoders

-   Deep Kernel Machines

-   Deep Convolutional Networks

The main goal of this paper is to explore stacked autoencoders for this task. Deep belief networks, another model from the deep learning family, have already been successfully applied to P300 classification (Sobhani, [@B26]). However, to the authors' best knowledge, stacked autoencoders have so far not been used for the P300 detection. Furthermore, they accept real-valued inputs, which is suitable for this application.

2.2. Stacked autoencoders
-------------------------

A single autoencoder (AA) is a two-layer neural network (see Figure [3](#F3){ref-type="fig"}). The encoding layer encodes the inputs of the network and the decoding layer decodes (reconstructs) the inputs. Consequently, the number of neurons in the decoding layer is equal to the input dimensionality. The goal of an AA is to compute a code *h* of an input instance *x* from which *x* can be recovered with high accuracy. This models a two-stage approximation to the identity function (Arnold et al., [@B1]):

$$f_{dec}(f_{enc}(x)) = f_{dec}(h) = \hat{x} \approx x$$

with *f*~*enc*~ being the function computed by the encoding layer and *f*~*dec*~ being the function computed by the decoding layer.

![Autoencoder. The input layer (*x*~1~, *x*~2~, .. , *x*~6~) has the same dimensionality as the output (decoding layer). The encoding layer ($h_{1}^{(1)}$, .., $h_{4}^{(1)}$) has a lower dimensionality and performs the encoding (Ng et al., [@B22]).](fnins-11-00302-g0003){#F3}

The number of neurons in the encoding layer is lower than the input dimensionality. Therefore, in this layer, the network is forced to remove redundancy from the input by reducing dimensionality. The single autoencoder (being a shallow neural network) can easily be trained using the standard backpropagation algorithm with random weight initialization (Ng et al., [@B22]). Stacking of autoencoders in order to boost the performance of deep networks was originally proposed in Bengio et al. ([@B2]). A key feature of stacked autoencoders is unsupervised pre-training, layer by layer, as the input is fed through. Once the first layer is pre-trained (neurons $h_{1}^{(1)}$, $h_{2}^{(1)}$, .., $h_{4}^{(1)}$ in Figure [3](#F3){ref-type="fig"}), it can be used as an input of the next autoencoder. The final layer can deal with traditional supervised classification, and the pre-trained neural network can be fine-tuned using backpropagation. A stacked autoencoder is depicted in Figure [4](#F4){ref-type="fig"} (Ng et al., [@B22]).

![**Stacked autoencoder (Ng et al., [@B22])**.](fnins-11-00302-g0004){#F4}

2.3.
Experimental design ------------------------ ### 2.3.1. Introduction To compare stacked autoencoders with traditional classification models, an ERP experiment was designed and conducted in our laboratory to obtain P300 data for training and testing of the classifiers used. The data with corresponding metadata and detailed description are available in Vareka et al. ([@B28]). ### 2.3.2. Stimulation device The stimulation device (Dudacek et al., [@B7]) was designed at our department. The main part of the stimulation device is a box containing three high-power Light-Emitting Diodes (LEDs) differing in their color: red, green, and yellow. The core of the stimulator is an 8bit micro-controller that generates the required stimuli. The control panel consists of a LCD display and a set of push-buttons that are used to set the parameters of the stimulation protocol. The stimulator also generates additional synchronization signals for the EEG recorder. The stimulator has typically been used for modified odd-ball paradigm experiments (three stimulus paradigm Dudacek et al., [@B7]). Apart from traditional target and non-target stimuli, the device can also randomly insert distractor stimuli. The distractor stimuli are usually used to elicit the subcomponent of the P3 waveform (called P3a) (Polich, [@B23]). Figure [5](#F5){ref-type="fig"} shows the LED module with the yellow diode flashing. ![Stimulation device with flashing diodes.](fnins-11-00302-g0005){#F5} ### 2.3.3. Stimulation protocol The stimulation protocol uses the device described above. For our experiments, the following setting of the stimulation device was used: each diode flashes once a second and each flash takes 500 ms. The probabilities of the red, green and yellow diodes flashing were 83, 13.5, and 3.5%, respectively. Consequently, the green diode was the target stimulus and the red diode the non-target stimulus. The yellow diode was the distractor stimulus, and was ignored in the subsequent processing. The participants were sitting 1 m from the stimulation device for 20 min. The experimental protocol was divided into three phases, each containing 30 target stimuli and each about 5 min long. There was a short break between each two phases. The participants were asked to sit comfortably, not to move and to limit their eye blinking. They were instructed to pay attention to the stimulation device and not to perform another task-relevant cognitive or behavioral activity. ### 2.3.4. Procedure The following experimental procedure was applied: Each participant was acquainted with the course of the experiment and answered questions concerning his/her health. Then he or she was given the standard EEG cap made by the Electro-Cap International. The international 10--20 system of electrode placement was used. The participant was subsequently taken to the soundproof and electrically shielded cabin. The reference electrode was located at the root of his/her nose. The participant was told to watch the stimulator. ### 2.3.5. Recording of the data The BrainVision amplifier and related software for recording were used in the subsequent experiments. The data were obtained with the following parameters: the sampling rate was set to 1 kHz, the number of channels was set to 19, the resolution was set to 0.1 μV and the recording low-pass filter was set with the cut-off frequency of 250 Hz. The impedance threshold was set to 10 *kΩ*. ### 2.3.6. 
Measured subjects

A group of 25 healthy individuals (university students, aged 20--26) participated in our experiments. However, only the data from 15 subjects were used in subsequent experiments. Five subjects were rejected even before storing the data, so they are unavailable in our data publication (Vareka et al., [@B29]). These subjects blinked excessively or were inattentive, and in some cases the experiment was ended early. High impedance was also one of the reasons for rejection, because it was typically associated with a complete loss of data on one or more electrodes. The other five subjects were rejected based on their lack of a P300 response. All subjects signed an agreement with the conditions of the experiment and with the sharing of their EEG/ERP data.

### 2.3.7. Availability of the measured data

The experimental protocol and datasets supporting the results of this article are described in more detail in Vareka et al. ([@B28]). The datasets are available for download in the EEG/ERP Portal at <http://eegdatabase.kiv.zcu.cz/> (Moucek and Jezek, [@B21]). Supporting material for this paper can also be found in the GigaScience database, GigaDB (Vareka et al., [@B29]).

2.4. Pattern recognition
------------------------

### 2.4.1. Preprocessing and feature extraction

For feature extraction, the Windowed means paradigm (Blankertz et al., [@B3]) was used. It is a modern method that includes features from multiple channels and the most significant time intervals. Its use for P300 BCIs was encouraged in Blankertz et al. ([@B3]). The method is based on selecting epoch time windows that contain the components of interest (e.g., the P300 component). The following steps were taken:

1.  Each dataset was split into epochs (trials) using the stimulus markers of target events---the green diode flashing (S 2)---and non-target events---the red diode flashing (S 4). Each trial started 500 ms before the stimulus, and ended 1,000 ms after the stimulus.

2.  Baseline correction was performed by subtracting the average of the 500 ms before the stimulus onset from each trial.

3.  For averaging, 50 ms long time windows between 150 ms and 700 ms after the stimulus onset were selected. The intervals used were based on expected locations of the P300 and other cognitive ERP components (Luck, [@B17]) and further adjusted experimentally. Subsequently, 11 averages were extracted from all available 19 EEG channels.

4.  Averages from all 19 channels were concatenated. As a result, each feature vector had a dimensionality of 209.

5.  Finally, each individual feature vector was normalized by its length.

The procedure for finding suitable parameters for the P300 detection based on the Windowed means paradigm is described in detail in Vareka and Mautner ([@B30]). For example, it was investigated how to choose time intervals for averaging to maximize classification performance.
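To make the windowed-means computation above concrete, the following minimal sketch extracts the 209-dimensional feature vector for a single epoch. It is purely illustrative (Python/NumPy): the original analysis was implemented with other tools, and details such as the array layout and the order of concatenation are assumptions rather than the authors' code.

```python
import numpy as np

def windowed_means_features(epoch, fs=1000, pre_ms=500):
    """epoch: array of shape (19, 1500) holding one trial from -500 ms to
    +1000 ms around the stimulus, sampled at fs Hz (assumed layout)."""
    onset = int(pre_ms * fs / 1000)                     # sample index of stimulus onset
    baseline = epoch[:, :onset].mean(axis=1, keepdims=True)
    epoch = epoch - baseline                            # baseline correction
    feats = []
    for start_ms in range(150, 700, 50):                # 11 windows of 50 ms between 150 and 700 ms
        a = onset + start_ms * fs // 1000
        b = a + 50 * fs // 1000
        feats.append(epoch[:, a:b].mean(axis=1))        # one mean per channel and window
    x = np.concatenate(feats)                           # 19 channels x 11 windows = 209 values
    return x / np.linalg.norm(x)                        # normalize the vector by its length

# toy usage with random data standing in for one recorded epoch
print(windowed_means_features(np.random.randn(19, 1500)).shape)   # (209,)
```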
### 2.4.2. Classification

For classification, the state-of-the-art methods for P300 BCIs mentioned, e.g., in Lotte et al. ([@B16]), namely linear discriminant analysis (LDA) and the multi-layer perceptron (MLP), were compared with stacked autoencoders (SAEs). The feature vectors were extracted as described in Section 2.4.1. The training set was concatenated using the data from four subjects (experimental IDs 99, 100, 104, and 105, none of them included in the testing dataset). The datasets used for training were selected manually to contain an observable P300 component with different amplitudes and latencies. From each subject, all target trials were used. The corresponding number of non-targets was randomly selected from each subject. Consequently, the training dataset contained 366 target and 366 non-target trials. Finally, all training trials were randomly shuffled. Only the training set was used for both unsupervised pre-training and supervised fine-tuning. There were no further weight updates in the testing mode. Therefore, it could be observed whether classifiers, once trained, can generalize to other subjects.

To optimize the parameters of the classification models, a randomly selected 20% subset of the training dataset was used for validation. Then, manually selected parameters were inserted, and the process of training and evaluating the results was repeated ten times to average the performance for each configuration. After the parameters were found, the models were trained on the whole training dataset and subsequently tested.

The MATLAB Neural Network Toolbox was used for the implementation of stacked autoencoders (MATLAB, [@B19]). The parameters of the stacked autoencoder (number of layers, number of neurons in each layer, and number of iterations for the hidden layers) were empirically optimized using the results on the validation set. The experimentation started with two layers; then either new neurons were added to a layer, or a new layer was added, until the performance of the classifier stopped increasing. Finally, the following procedure was used to train the network. The maximum number of training epochs was limited to 200.

1.  The first autoencoder with 130 hidden neurons was trained.

2.  The second autoencoder with 100 hidden neurons was connected with the first autoencoder to form a 209-130-100-209 neural network, and trained.

3.  The third autoencoder with 50 hidden neurons was connected with the second autoencoder to form a 209-130-100-50-209 neural network, and trained.

4.  The fourth autoencoder with 20 hidden neurons was connected with the third autoencoder to form a 209-130-100-50-20-209 neural network, and trained.

Furthermore, the following parameters were set for the network globally to reduce overfitting and adjust the weight update: L2WeightRegularization was set to 0.004, SparsityRegularization was set to 4, and SparsityProportion was set to 0.2. These values were set according to common recommendations (MATLAB, [@B19]) and then slightly adjusted when tuning the training. After the training of each autoencoder, the input feature vectors were encoded using that autoencoder to form the input vectors of the next autoencoder. Using the output of the last autoencoder, a supervised softmax classifier was trained with 200 training iterations. Finally, the whole pre-trained 209-130-100-50-20-2 network was fine-tuned using backpropagation. The structure of the stacked autoencoder is depicted in Figure [6](#F6){ref-type="fig"}.

![The structure of the SAE neural network.](fnins-11-00302-g0006){#F6}

The same numbers of neurons for each layer were used for the MLP. However, the phase of unsupervised pre-training was not included. Instead, the randomly initialized network was trained using backpropagation. The number of training iterations was empirically set to 1,000. For the training of LDA, shrinkage regularization was used as recommended by Blankertz et al. ([@B3]) to reduce the impact of the curse of dimensionality.
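As a concrete point of reference for the procedure above, the sketch below reproduces the 209-130-100-50-20-2 architecture with greedy layer-wise pre-training followed by supervised fine-tuning. It is illustrative only: the study used the MATLAB Neural Network Toolbox, whereas this sketch assumes Python with Keras, and details such as the optimizer, batch size, and the L1 activity penalty standing in for the toolbox's sparsity settings are assumptions rather than the authors' choices.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

hidden_sizes = [130, 100, 50, 20]                        # layer sizes reported in the paper
x_train = np.random.randn(732, 209).astype("float32")    # placeholder for the 366 + 366 feature vectors
y_train = keras.utils.to_categorical(np.random.randint(0, 2, 732), 2)

# 1) Greedy unsupervised pre-training: one autoencoder per hidden layer,
#    each trained on the codes produced by the previous one.
encoding_layers, data = [], x_train
for size in hidden_sizes:
    inp = keras.Input(shape=(data.shape[1],))
    code = layers.Dense(size, activation="sigmoid",
                        activity_regularizer=keras.regularizers.l1(1e-4))(inp)  # crude sparsity stand-in
    recon = layers.Dense(data.shape[1], activation="linear")(code)
    autoencoder = keras.Model(inp, recon)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(data, data, epochs=200, batch_size=32, verbose=0)
    encoder = keras.Model(inp, code)
    encoding_layers.append(encoder.layers[-1])           # keep the trained Dense encoding layer
    data = encoder.predict(data, verbose=0)              # its codes feed the next autoencoder

# 2) Stack the pre-trained encoders, add a softmax output, and fine-tune end to end.
inp = keras.Input(shape=(209,))
h = inp
for enc_layer in encoding_layers:
    h = enc_layer(h)                                     # weights are shared with the pre-trained layers
out = layers.Dense(2, activation="softmax")(h)
model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=200, batch_size=32, verbose=0)
```

The point mirrored from the text is that each autoencoder is trained on the codes produced by the previous one, and only afterwards is the whole stack fine-tuned with labels.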
3. Results {#s3}
==========

To evaluate the results of classification, accuracy, precision, and recall were calculated. Suppose that we have *t*~*p*~, the number of true positive detections, *t*~*n*~, the number of true negative detections, *f*~*p*~, the number of false positive detections, and *f*~*n*~, the number of false negative detections. The following values were calculated:

$$ACCURACY = \frac{t_{p} + t_{n}}{t_{p} + t_{n} + f_{p} + f_{n}}$$

$$PRECISION = \frac{t_{p}}{t_{p} + f_{p}}$$

$$RECALL = \frac{t_{p}}{t_{p} + f_{n}}$$

In the testing phase, the data from each experiment were evaluated. As with the training dataset, all target trials were included, but only the first non-target trials, up to the number of target trials, were used. The number of trials varied slightly for each subject. However, for each subject, \~90 target and 90 non-target trials were extracted. The results achieved are shown in Table [1](#T1){ref-type="table"}. For each classifier, average accuracy, precision, and recall are listed. Figures [7](#F7){ref-type="fig"}--[9](#F9){ref-type="fig"} depict the achieved classification accuracy, precision, and recall for each testing dataset, respectively.

###### Average classification performance for different classifiers.

  **Classifier**   **Accuracy (%)**   **Precision (%)**   **Recall (%)**
  ---------------- ------------------ ------------------- ----------------
  LDA              65.9               68.1                58.4
  SAE              69.2               73.6                58.8
  MLP              64.9               67.8                56.2

![For each dataset from the testing set, the achieved accuracy for LDA, MLP, and SAE classifiers is depicted.](fnins-11-00302-g0007){#F7}

![For each dataset from the testing set, the achieved precision for LDA, MLP, and SAE classifiers is depicted.](fnins-11-00302-g0008){#F8}

![For each dataset from the testing set, the achieved recall for LDA, MLP, and SAE classifiers is depicted.](fnins-11-00302-g0009){#F9}

SAE, when configured as described, outperformed both LDA and MLP on the testing dataset (McNemar statistical tests; *p* \< 0.01).

4. Discussion {#s4}
=============

The aim of the experiments was to evaluate whether stacked autoencoders perform better for the P300 detection than two other classifiers. Unlike common P300-based BCI systems, the classifiers were trained on a dataset merged from four subjects and subsequently tested on 11 different subjects without any further training. Therefore, it can be observed how the P300 detection system performs when dealing with data from previously unknown subjects. Most parameters for classification were manually adjusted during a time-consuming, mainly empirically driven process of trying different settings and observing the results on the validation set.

As the results indicate, stacked autoencoders were consistently able to outperform multi-layer perceptrons. The improvement can be seen in both Figure [7](#F7){ref-type="fig"} and Table [1](#T1){ref-type="table"}. This difference can probably be explained by the improved training of the SAE, which also includes unsupervised pre-training. The improvement is more pronounced in precision than in recall. Furthermore, SAEs were also able to outperform LDA. Tests revealed that both differences were statistically significant for the testing dataset used (*p* \< 0.01). As Figure [7](#F7){ref-type="fig"} illustrates, SAEs yielded higher accuracy than the other classifiers in 9 out of the 11 subjects. Consequently, it appears that stacked autoencoders were able to match or outperform current state-of-the-art classifiers for the P300 detection in accuracy. These results are consistent with the promising results reported by Sobhani ([@B26]). For deep belief networks, the authors reported 60--90% for other methods compared with 69 and 97% for deep learning.
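As a methodological aside, the pairwise McNemar comparisons reported above can be reproduced along the following lines. This is a minimal sketch: the paper does not state which variant of the test was used, so the continuity correction and the per-trial bookkeeping shown here are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_p(correct_a, correct_b):
    """Continuity-corrected McNemar test on two boolean arrays that mark,
    trial by trial, whether classifier A and classifier B were correct."""
    correct_a, correct_b = np.asarray(correct_a, bool), np.asarray(correct_b, bool)
    b = int(np.sum(correct_a & ~correct_b))   # A correct, B wrong
    c = int(np.sum(~correct_a & correct_b))   # A wrong, B correct
    if b + c == 0:
        return 1.0                            # no discordant trials, no evidence of a difference
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return float(chi2.sf(stat, df=1))         # chi-squared with one degree of freedom

# toy usage: per-trial correctness vectors for two classifiers on the same test trials
rng = np.random.default_rng(0)
a_correct = rng.random(1980) < 0.69           # roughly the SAE accuracy from Table 1
b_correct = rng.random(1980) < 0.65           # roughly the MLP accuracy
print(mcnemar_p(a_correct, b_correct))
```

The test only considers trials on which the two classifiers disagree, which makes it a natural choice for paired comparisons on the same test trials.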
The achieved results encourage using deep learning models for the P300 component detection with applications to P300-based BCIs. Furthermore, during the process of manually adjusting parameters, it was observed that the comparative benefits of SAE increased with the increase in the dimensionality of feature vectors. This may be because linear classifiers such as LDA suffer from the curse of dimensionality (Ji and Ye, [@B13]). In contrast, SAE by itself also performs dimensionality reduction (Zamparo and Zhang, [@B31]). Although classification accuracy is very important for the reliability of P300 BCI systems, only BCIs with reasonably fast bit-rate are comfortable to use for disabled users. Since real-world BCI systems should be able to evaluate ERP trials on-line, computational time for the processing and classification of feature vectors should not be higher than inter-stimulus intervals. According to our experience, to be comfortable to use, inter-stimulus intervals should be at least 200 ms. In the literature, only slightly lower inter-stimulus intervals are used for the P300 speller (for example, 175 ms in Sellers et al., [@B25]). Fortunately, once the BCI system is trained, classifying a single feature vector is usually not very time consuming. This is also relevant for feed-forward neural networks. Therefore, according to our experience, SAE, MLP and LDA can all be used in on-line BCI systems. For the future work, more issues remain to be addressed. Although stacked autoencoders are less prone to overtraining than MLPs, during the fine-tuning phase, accuracy peaked after approximately 100 iterations and then leveled off slowly. Therefore, more regularization techniques for avoiding overfitting may be used. Despite being better than LDA and MLP, still, only four participants reached an accuracy above 70% which is often seen as a minimum to use a P300-based BCI (Lakey et al., [@B15]). It can therefore be evaluated how an individualized BCI system (i.e., the system trained on the data from the particular user) would perform and if better performance of SAEs outweighs their increased training times. In Sobhani ([@B26]), pre-training possibilities for deep belief networks are discussed. The authors proposed that the weights of a new neural network could be initialized using the results of pre-training based on another subject. The same principle could be applied to stacked autoencoders. This could lead to possibly increased classification performance. Another possible strategy for increasing accuracy and bitrate would be to shorten the inter-stimulus interval. Although shorter intervals could lead to lower P300 amplitudes, SAE can classify high-dimensional feature vectors and could detect only slight differences in the feature vectors. Furthermore, it could also be interesting to explore stacked denoising autoencoders, deep belief networks or other deep learning training models. Finally, we plan to apply the presented methods to on-line BCI for both healthy and paralyzed subjects. Ethics statement {#s5} ================ The manuscript uses only previously published datasets. In that case, no ethic committee was involved because the University of West Bohemia does not have an ethic committee. All participants signed the informed consent. Author contributions {#s6} ==================== LV and PM designed the experiment used to obtain the data. LV proposed, designed, and implemented the algorithm based on stacked autoencoders. LV wrote the manuscript. PM corrected the manuscript. 
Both authors read and agree with the final version of the manuscript. Conflict of interest statement ------------------------------ The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. This publication was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports under the program NPU I. [^1]: Edited by: Patrick Ruther, University of Freiburg, Germany [^2]: Reviewed by: Xiaoli Li, Beijing Normal University, China; Quentin Noirhomme, Maastricht University, Netherlands [^3]: This article was submitted to Neural Technology, a section of the journal Frontiers in Neuroscience
Next week, the Republican Party will convene in Tampa to plot world domination. And you're feeling left out. Yes, you badly want in on the ground floor of the next culture war or invasion of a small, preferably Muslim country. Yet the GOP speaks an elusive language only its followers understand. With just a few coded words, it's able to mobilize the loyalists — while simultaneously dismissing everyone else as un-American and quite possibly queer. Yet a recently leaked glossary lays bare the mystery of the Republican tongue. Now you too can speak with the superiority of talk radio hosts and pissed-off old guys who live in mobile home parks on the outskirts of Jacksonville, Fla. Enjoy your seat at the right hand of God! Abortion: Reproductive issue best decided by preachers from rural Georgia who believe babies are conceived by using public restrooms. American: True patriot who hates all the right things, including but not exclusive to: taxes, unbreaded chicken, California, female sportscasters, the Toyota Prius, people who speak Mexican, BET, free-range vegetables, public radio, Al Sharpton, and whales. Apologizing: The treasonous admission that America is not always perfect. Usually committed by people who can't even match their cowboy boots to their firearms. Bain Capital: Massachusetts investment firm celebrated for providing investors with huge returns by laying off thousands of workers, cutting healthcare benefits, and shipping jobs to those places where foreigners live. Will serve as the model for U.S. economic recovery once the infidel is smited. Barack Hussein Obama: Muslim foreigner illegally elected president to pursue the socialist agenda of Karl Marx, regarded as the least funny brother of the famed comedic troupe. Bible: Historical novel starring omnipotent being who sentences others to eternal damnation unless they do what he says. Think of Pat Robertson, only with a hillbilly beard and the ability to part seas. Chick-fil-A: Baptist version of eating kosher. Only sells chickens that have provided a documented history of heterosexuality to a commission of small-town Chamber of Commerce officials. Christian: GOP delegate who's devoted his life to Jesus, handguns, and repealing the Clean Water Act. Will be doing missionary work at Tampa gentlemen's clubs next week. At least, that's what he'll tell his wife when the MasterCard bill arrives. Christian Persecution: When the school board bars a teacher from conducting faith healing sessions in his seventh grade biology class. Class Warfare: Indefensible act of pitting America against the wealthy; perfectly reasonable when mocking moms on welfare. College: American Maoist reeducation camp, where liberal professors encourage impressionable youth to enjoy critical thinking, Jäger shots, and recreational intercourse. Constitutional Conservatism: Belief that our founding document should be strictly interpreted — even though it was written by guys who wore wigs and culottes, but were definitely not transvestites, since that hadn't even been invented yet. Corporation: Most evolved species of mammal. Designated by Supreme Court as the legal equivalent to people, only better, because they can afford to buy congressmen and box seats to the Texas Rangers. Entitlement Society: Large corporations who demand public subsidies every time they build a facility, move their headquarters, or threaten to relocate to Botswana or Mississippi. Wait. No. Scratch that. Environment: Convenient place to dump car batteries and kitchen appliances. 
While lamestream media insists on its preservation, studies by the business faculty at Liberty University business faculty prove that beavers actually like swimming in hydrochloric acid because it improves their skin tone. Evolution: Fraudulent theory that man evolved from ape. Have you ever seen an ape with jugs like Jessica Simpson's? Feminazis: Ingrate women who use the word "Ewwww!" when Rush Limbaugh tries to buy them a Sex on the Beach at hotel bars in Boca Raton. Food Stamp President: Did we mention that Obama's a negro? And that he's probably a Muslim? Founding Fathers: Early visionaries who built a start-up country to escape the tyranny of England. Based on the theory that we could do our own tyranny more cost-effectively. Free Market: Utopian world where corporations are allowed to conduct business without interference from price-fixing, consumer protection, or child labor laws. Global Warming: Theory shared by 99 percent of the world's scientists that man-made pollution is warming the Earth's atmosphere. Easily discredited by pointing to that one day in February when it was pretty cold. Gotcha Journalism: Shameful media practice pioneered by Katie Couric in which she used duplicitous interview tactics — often called "questions" — to get vice presidential candidates to admit they can't read. Homosexual Agenda: Conspiracy co-chaired by Satan and Neil Patrick Harris to convince America's youth to quit football and pursue careers as botanists and defense lawyers. Illegals: American slang for "Mexican." Also: Anyone skilled in the operation of a leaf-blower. Jesus: Celebrated deity who preached that "the poor should get a damned job already" and that all human suffering could be averted by simply lowering the capital gains tax. Jews: The guys who killed Christ. Occasionally have the audacity to apply for membership to your country club, despite genetic deficiencies and an inadequate short game. Job Creators: People who pay half the tax rate you do because God likes them way better. Deserving of further deductions because the gardener is asking for $4.25 an hour and Sundays off. Lamestream Media: All media with the exception of Fox News, the Wall Street Journal, and the non-gay parts of PornHub.com. Note: Gay parts can be mildly educational if your wife's at Bible study and the door is locked. Liberal Agenda: Set of effete East Coast values written by Sean Penn and the Dixie Chicks to destroy the American family by getting our children to suck at math and listen to John Mayer. Liberal Elite: Immoral foe nearly crushed to extinction by the superiority of the conservative agenda. Membership believed to consist of three elderly men recently expelled from the Newport Yacht club for publicly expressing fond memories of FDR. Mormons: Creepy sex cult perverts from Utah who have arranged marriages to 13-year-old girls named Edna. Still better than negroes, but scarier than Jews. Muslims: Swarthy apostates who hate freedom. Believe that blowing up grandmas and blond children will be rewarded with 72 virgins in a jacuzzi suite at the Heaven Best Western. Covert cells mostly operating in Iran and the U.S. State Department. Obamacare: Theory that all Americans deserve health coverage, when they could just as easily rub some dirt on it. Radical Feminists: Secret cabal of WNBA season ticketholders seeking to usurp the natural role of men as the boss of everything. Need to shut up and vacuum the living room. 
Science: Discredited field of study practiced by sissies at northern liberal arts schools that suck at football. Second Amendment: The God-given right to carry an assault rifle to Sunday brunch at Applebee's in case there's a kid wearing a hoodie. Tea Party: People who hate socialism and government entitlements but live off Social Security and Medicare because that stuff doesn't count. Traditional Marriage: A union between a man and woman who argue over a period of three to seven years, then separate and file unflattering paperwork about each other. Repeat cycle as necessary. Values Voters: People willing to be economically fucked over as long as we keep bagging on the minorities.
A new method for fast blood cell counting and partial differentiation by flow cytometry.

A new blood counting method by flow cytometry is described that determines absolute counts and relative proportions of erythrocytes, reticulocytes, thrombocytes, lymphocytes and granulocytes from one sample of saline-diluted human or animal blood. Staining time is 2 to 5 min, and measurement takes an additional 1 to 2 min. Measured simultaneously are the electrical cell volume, the green and optionally also the red fluorescence of the transmembrane-potential-sensitive dye 3,3-dihexyloxacarbocyanine (DiOC6(3)) and of the RNA/DNA stain acridine orange (AO). Work is under way to fully automate staining, measurement and data evaluation. The use of stains by which blood cell counting and biochemical analysis can be combined offers new possibilities for routine blood cell counting without requiring additional time. The potential of such stains is that pathologic cell conditions which are not, or not yet, reflected in the cell count may be detectable earlier by biochemical stains.
70 S.E.2d 264 (1952) EYE v. NICHOLS. No. 10373. Supreme Court of Appeals of West Virginia. Submitted January 15, 1952. Decided April 22, 1952. Mahan, White & Higgins, and S. C. Higgins, Jr., Fayetteville, R. A. Clapperton, Summersville, for appellant. Wolverton & Callaghan, Richwood, G. D. Herold, Summersville, for appellee. GIVEN, Judge. Ira Eye instituted a chancery cause in the Circuit Court of Nicholas County against Cecil Nichols, appellant here, the object of which was to have cancelled two certain written agreements, executed by the named parties, dissolving a partnership arrangement previously existing, and which agreements effected a settlement as to the partnership property. As basis for the relief sought by Eye, the bill of complaint charged fraud on the part of Nichols in making certain representations as to *265 the value of the business belonging to the partnership and managed solely by Nichols, located as Summersville, West Virginia, and that there was a mutual mistake of fact as to the value of that business at the time of the dissolution of the partnership. The circuit court found that no fraud existed, but held that there was a mutual mistake of fact, decreed that the dissolution agreements be cancelled, and that Eye recover of and from Nichols the sum of $3,210.40, being one half of the difference, as found by that court, in the value of the Summersville business, and another business owned by the partnership and managed solely by Eye, located at Webster Springs, West Virginia. This Court granted an appeal from that decree. About June 7, 1946, Eye and Nichols became equal partners in a business located in Summersville, known as the Summersville Heating and Plumbing Company and, about September 7, 1948, they became equal owners of a business located at Webster Springs known as the Standard Plumbing and Heating Company. Thereafter Eye managed the Webster Springs business and Nichols managed the Summersville business. Neither of the partners had full or accurate information as to the business managed by the other, but trusted and relied upon the manager of the particular business as to the management thereof. This mutual arrangement was continued until about November, 1948, when Nichols suggested a dissolution of the partnership. After several conferences the parties agreed to a dissolution, executed written agreements effecting the dissolution and, as a settlement of the partnership affairs, Nichols transferred unto Eye all interest in and to the Webster Springs business, "including the stock in trade, monies in the bank, accounts receivable, and in that certain 1947 Model Ford Truck Motor No. 1516691, and in all other assets" used in connection with or as part of that business, and Eye transferred unto Nichols all of his interest in the Snmmersville business, "including the stock in trade, monies in the bank, accounts receivable, and in that certain 1948 Model F2 Ford Truck, Serial No. 18922, and in all other assets" used in connection with or as part of that business. The assignments were made by separate contracts and were to be effective as of December 31, 1948. In reaching an agreement as to the basis of the settlement no actual audit or inventory of the assets of either of the businesses was made. Eye relied upon the representations of Nichols as to the probable value of the Summersville business and Nichols relied upon representations of Eye as to the probable value of the Webster Springs business. 
From such representations, and the discussions, Eye concluded that the value of the Summersville business was approximately $20,000, and Nichols concluded that the value of the Webster Springs business was approximately $18,000. As before indicated, however, the parties agreed that Eye should have the Webster Springs business as his full share of the assets of the partnership property and that Nichols should have the Summersville business as his full share of the assets of the partnership property. In reaching the conclusion as to the method of settling the partnership affairs, it is apparent that the parties considered factors other than the actual or net inventory value of the merchandise and accounts, such as the necessity of the dissolution, the place of residence of the parties, the location of the respective businesses, and the potential earnings of each business. The parties appear to have been entirely satisfied with the dissolution agreements, and the division of the assets of the partnership property, until Eye discovered that the partnership return of the Summersville business, made for federal income tax purposes for the year 1947, disclosed a net income of $5,402.14; that an amended return filed by Nichols subsequent to the dissolution disclosed a net income of $10,297.57; and that an audit made later by the United States Treasury, Internal Revenue Service, disclosed a net income for that year of $19,572.37. Such a tax return for the year 1948 disclosed a net income of $14,298.54 but, upon an audit by the Internal Revenue Service, was increased to $17,539.12. These audits indicated a much larger income from the Summersville business *266 than alleged by Eye to have been represented to him by Nichols. The trial court directed that an audit be made of the assets of the two businesses as of the date of the dissolution, which was done, and the result of such audits is reflected in the evidence. That court, after hearing the evidence offered by the parties, found and decreed that the actual value of the Summersville business, at the time of the dissolution, was $18,756.77, and that the actual value of the Webster Springs business as of that time was $12,335.96; that no fraud had been proved; and that there was a mutual mistake of fact as to the respective values of the two businesses. The circuit court cancelled the dissolution agreements and decreed that Eye have sole ownership of the Webster Springs business and that Nichols have sole ownership of the Snmmersville business, as of December 31, 1948, as their respective interests in the partnership property, and that Eye recover of and from Nichols the sum of $3,210.40, being one half of the difference between the value of the two businesses, as determined by that court. Eye, cross-assigning error here, contends that the circuit court was in error in finding and decreeing that no fraud was established and that the clear weight of the evidence establishes fraud on the part of Nichols with respect to the representations made by him as to the value of the Summersville business. We are of the opinion, however, that the record discloses substantial basis for the court's finding in that respect, and are not disposed to disturb that finding. The contention that Nichols made false representations as to the value of the Summersville business rests largely upon the tax returns showing a larger income for the Summersville business than alleged to have been represented to Eye by Nichols. 
Nichols, however, denies making any such representations, and testifies to the effect that the amounts of such income were unknown to him until after the preparation of the tax returns by an accountant some time subsequent to the dissolution of the partnership. Moreover, the finding of the trial court that the Summersville business was actually worth only $18,756.77, considerably less than the value represented by Nichols, would indicate strongly that no fraudulent intent existed on the part of Nichols in making the representations. In reaching the conclusion that the circuit court correctly found that no fraud on the part of Nichols had been proved, we have not overlooked the rule requiring that partners, in their dealings with each other, must always observe the highest degree of good faith. See Fouse v. Shelly, 64 W.Va. 425, 63 S.E. 208; Benedetto v. Di Bacco, 83 W.Va. 620, 99 S.E. 170. The rule, however, must not be so applied as to defeat the rights of partners to effect a dissolution of a partnership by contract, if fairly entered into. The burden of establishing good faith in a transaction between partners should not be required of a partner merely upon a charge of fraud. "* * * In a sale between living partners, if the seller seeks to have the sale set aside on the ground that the purchaser has fraudulently withheld information affecting the sale, the burden is upon the seller to establish the charge by clear and cogent evidence. When the purchaser has proved a complete sale, the law implies good faith and honesty in the contract, in the absence of evidence to the contrary. In every transaction lawful in itself, the law supports a presumption of honesty and good faith. * * *." 14 M.J., Partnership, Section 26. See Benedetto v. Di Bacco, supra; Welch Publishing Co. v. Johnson Realty Co., 78 W.Va. 350, 89 S.E. 707, L.R.A.1917A, 200. The controlling question relates to the contention of Eye that there existed a mutual mistake of fact as to the relative value of the two businesses. The facts relied upon by Eye as disclosing a mutual mistake of fact are practically the same as the facts upon which Eye relies as establishing fraud on the part of Nichols. It will be recalled that the partners entered into the dissolution agreements upon the basis of the businesses being of approximate equal values, the representations of Eye being to the effect that the value of the Webster Springs business was approximately $18,000, and the representations of *267 Nichols being to the effect that the value of the Summersville business was approximately $20,000, and that the subsequent tax returns of the auditor disclosed large profits from the Summersville business. It will also be recalled that Eye testified to the effect that Nichols represented the profits earned by the Summersville business for the year 1948 to have been only $7,000, whereas the profits for that year, as disclosed by the tax returns, were approximately $10,000 more. Like charges were based on tax returns for the year 1947. The contention of Eye is that the large profits indicated by the tax returns establish conclusively that the Summersville business was worth at least $40,000. The circuit court, however, heard the witnesses of both parties, in open court, testify as to the actual value of the two businesses, and found that the value of the Summersville business was $18,756.77 and that the value of the Webster Springs business was $12,335.96 as of December 31, 1948. 
The finding of the trial court as to the Summersville business discloses a value of approximately $2,000 less than the value represented by Nichols, while the value of the Webster Springs business, as found by the trial court, as of the same date, was approximately $6,000 less than the value thereof represented by Eye. The actual difference in the two values, as fixed by the trial court, was $6,420.81. From this finding it will be noticed that if Nichols made any mistake in estimating or in representing to Eye the value of the Summersville business it was to his own prejudice, not to the prejudice of Eye. In fact, the recovery allowed by the trial court to Eye was not based on any mistake of Nichols, but on the mistake of Eye, and as to a matter concerning which Eye alone made representations, the accuracy of which representations were known, or should have been known, to him, as sole manager of the Webster Springs business. In Simmons v. Looney, 41 W.Va. 738, at page 742, 24 S.E. 677, at page 678, in considering the effects of a unilateral mistake, this Court stated: "* * * The only mistake he pleads is that he did not know how much timber Simmons had furnished. This fact he was bound to know; and, as it was readily ascertainable, not to know it was negligence. It was not misrepresented to him by Simmons. By no means is it the rule that in every instance money paid in mistake or ignorance of fact may be recovered back. The fact not known must be material in the matter. And, even where the fact is material, that alone is not always enough. `It must be such as the party could not by reasonable diligence get knowledge of when he was put upon inquiry; for if, by such reasonable diligence, he could have obtained knowledge of the fact, equity even will not relieve him, since that would be to encourage culpable negligence.' So the law is stated by Judge Haymond in Harner v. Price, 17 W.Va. 523, 545, on the text of Judge Story. * * *" See Benedetto v. Di Bacco, supra. It is clear from the record that Nichols had no knowledge as to the accuracy of the representations made by Eye concerning the value of the Webster Springs business, and we find no reason why controlling weight should be given to the tax returns or audits. Conceivably the tax returns and audits would not necessarily reflect the value of the businesses and neither the tax returns nor audits would reflect factors considered by the partners other than inventory values. In Biggs v. Bailey, 49 W.Va. 188, 38 S.E. 499, syllabus, Point 2, this Court held: "A court of equity will relieve against a mutual mistake of law as well as of fact, when such mistake is established by clear and convincing proof, and the rights of innocent third parties do not interfere." In 58 C.J.S., Mistake, p. 832, mutual mistake is defined as follows: "The term `mistake' is often employed in the sense of `mutual mistake,' which means a mistake reciprocal and common to both parties, when each alike labored under the same misconception in respect of the terms of a written instrument; a mistake common to all the parties to a written instrument, and it usually relates to a mistake concerning the contents or legal effect. Where there is a mutual mistake as to antecedent private rights, it has been held that the mistake partakes of the nature of a mistake of fact. *268 `Mutual mistake' is distinguishable from `unilateral mistake.'" In Welch Publishing Co. v. Realty Co., 78 W.Va. 350, 89 S.E. 
707, L.R.A.1917A, 200, specific performance of a contract to sell real estate was sought. The description of the lot gave rise to the controversy. The lot shown upon the map as Lot No. 4 had a frontage of seventy feet, but the grantor mistakenly believed it to have had a frontage of fifty feet. The Court held, Point 3, syllabus: "A mistake on the part of the vendor of such a lot as to its area or dimensions, inducing a sale thereof at a smaller price than he would have asked had he been cognizant of its size, not in any way occasioned or concealed by conduct of the vendee, constitutes no ground for rescission of the contract, nor does his inadvertant failure to specify a portion of the lot as the subject of sale at the price named." In the opinion in that case the Court stated: "Mistakes under the influence of which parties make contracts, or mistakes in the formulation thereof, often afford ground for relief in equity. But to have such effect the mistake ordinarily must be mutual, or the mistake of one party must have been induced or caused by fraud on the part of the other. In either of these cases, a court of equity will rescind the contract. Biggs v. Bailey, 49 W.Va. 188, 38 S.E. 499; Brown v. Rice, 26 Grat., Va., 467; Kerr on Fraud, 413. A mistake made in the expression of the contract is always fatal; for in that case the written contract is not the one actually made. Kerr on Fraud, 413. `If the mistake is not in the expression of the agreement, but in some fact materially inducing it, the mere knowledge in the one party of the mistake in the other does not, in the absence of a duty to disclose, or other special circumstances, constitute a sufficient ground in equity for avoiding it.' Kerr, Fraud & Acc. 414; Merchants' Bank et al., v. Campbell et al., 75 Va. 455. But in granting such relief equity does not declare there was no contract. Its jurisdiction stands upon the assumption that there is one and is interposed to relieve from it." In the Benedetto case [83 W.Va. 620, 99 S.E. 170] this Court held, syllabus, Points 3 and 4: "3. A settlement of the affairs of a partnership entered into between the parties, by which their respective interests are fully determined and fixed, is presumptively correct, and a party thereto who would overthrow the same has the burden of showing that it was brought about by fraud, accident, or mistake." "4. A member of a partnership seeking to re-open a settlement of the affairs thereof, upon the ground of accident, mistake, or fraud therein, must allege and prove the particular facts wherein such accident, mistake, or fraud exists, failing in which his bill will be dismissed." In 9 Am.Jur., Cancellation of Instruments, Section 34, it is stated: "While equity will relieve against a plain mistake, such a mistake cannot be said to arise in a matter which was considered doubtful and treated accordingly. Nor can a party, who enters into a contract in conscious ignorance of facts which, he apparently concluded would not influence his action, or induce him to refrain from entering into the contract, be relieved therefrom, on the ground of mutual mistake, when revelation of the true state of facts disappoints his anticipations. 
Negligence on part of the complainant, contributing to the mistake, will also prevent the securing of relief; the cases are practically unanimous in holding that mistake which results from failure to exercise that degree of care and diligence which would be exercised by persons of reasonable prudence under the same circumstances will not be relieved against. * * *." Authorities are not unanimous as to whether recovery may be had where a mistake is unilateral, not mutual. See 9 Am.Jur., Cancellation of Instruments, Section 33, and cases there cited. We find, however, that the authorities are practically unanimous in holding that where a mistake *269 was the result of lack of ordinary diligence on the part of the person seeking recovery, in connection with a duty charged to him, and the person against whom recovery is sought had no knowledge relating thereto, and did not induce the improper action, recovery should be denied. See 30 C.J.S., Equity, § 47, and cases there cited. In Simmons v. Looney, supra, Point 2, syllabus, this Court held: "If one under legal duty to ascertain, and with means to ascertain, a fact, pays money in ignorance of it, he cannot recover back." See Holt v. Holt, 46 W.Va. 397, 35 S.E. 19; Bank of Williamson v. McDowell County Bank, 66 W. Va. 545, 66 S.E. 761, 36 L.R.A.,N.S., 605; Welch Publishing Co. v. Realty Co., supra. In applying these applicable rules to the instant case, we must necessarily hold that Eye failed to establish any basis for relief as to any mistake of fact. If a mistake existed it related alone to the Webster Springs business, and was due solely to the failure of Eye to inform himself as to the accuracy of the representations. Nichols relied and acted solely upon the representations made by Eye, which he had a right to do, since Eye was the sole manager of the business and knew, or should have known, the value of that business before making the representations. There is no contention that Nichols had any knowledge as to the inaccuracy of the representations or that he in any manner induced them. The contracts entered into between partners dissolving the partnership can not be lightly ignored. Benedetto v. Di Bacco, supra; Holt v. Holt, supra. Appellant contends that the decree of the circuit court should be reversed for two further reasons: (1) That the proof of a mistake on the part of Nichols as to the value of the Summersville business does not correspond to the allegations of the bill of complaint charging a mutual mistake as to the value of the Webster Springs business; and (2) where a court decrees the dissolution of a partnership a division of the assets of the partnership cannot be decreed until after the payment of debts of the partnership has been properly provided for. To sustain these propositions appellant relies upon the holdings pronounced in Doonan v. Glynn, 26 W.Va. 225; Hyre v. Lambert, 37 W.Va. 26, 16 S.E. 446; Floyd v. Duffy, 68 W.Va. 339, 69 S.E. 993, 33 L.R.A.,N.S., 883; Jones v. Rose, 81 W. Va. 177, 94 S.E. 41. In view of the conclusions of the Court announced above, however, we need not determine whether such holdings apply to the facts in the instant case. The final decree of the circuit court complained of is reversed and the bill of complaint dismissed. Reversed and dismissed.
A framework for producing deterministic canonical bottom-up parsers

Abstract

A general framework for producing deterministic canonical bottom-up parsers is described, and very general conditions on the means of construction are presented which guarantee that the parsing methods work correctly. These conditions cover all known types of deterministic canonical bottom-up parsers.
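The abstract gives no construction details, so the following toy sketch is not taken from the paper; it only illustrates, under simplifying assumptions, what a deterministic (shift-reduce) bottom-up parser does for the small grammar E -> E '+' n | n: tokens are shifted onto a stack, and a reduction is applied as soon as the handle on top of the stack is recognized.

```python
def parse(tokens):
    """Toy deterministic bottom-up (shift-reduce) evaluator for E -> E '+' n | n."""
    stack, i = [], 0
    while True:
        # Reduce E '+' n -> E whenever that handle sits on top of the stack.
        if len(stack) >= 3 and isinstance(stack[-3], tuple) and stack[-2] == '+' \
                and isinstance(stack[-1], int):
            stack[-3:] = [('E', stack[-3][1] + stack[-1])]
            continue
        # Reduce n -> E for the very first number (it cannot be part of an E '+' n handle).
        if len(stack) == 1 and isinstance(stack[-1], int):
            stack[-1] = ('E', stack[-1])
            continue
        if i < len(tokens):                    # otherwise shift the next token
            stack.append(tokens[i])
            i += 1
            continue
        break
    if len(stack) == 1 and isinstance(stack[0], tuple):
        return stack[0][1]
    raise SyntaxError("input does not derive from E")

assert parse([1, '+', 2, '+', 39]) == 42
```

A production LR parser would drive the same shift/reduce decisions from a precomputed table rather than hand-written checks.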
I’m looking forward to my thirties – a third of the way to my goal of being a totally awesome 90-year-old! I’m almost done with reviewing the past ten years and updating my collection of blog highlights, and I’m looking forward to getting some clarity on what’s coming up next too. Birthday celebrations are an excellent excuse to get together with people. I feel a little weird inviting people to come and spend a few hours with me and a bunch of other people I know. I tend to get stressed out by the process of getting other people gifts (or guiltily donating things people have given me), so I’d rather not receive gifts. But I’ve been part of wonderful parties before, so I can think about what made those parties awesome, and what I can learn to have even better parties. My favourite parties were the ones I had with my closest friends back home. We never needed an excuse. Sometimes I’d invite people over to hang out, or to watch a movie, or to play a game. I really liked those because my friends were all good friends with each other, so there were lots of crazy conversations and in-jokes. Even after I moved to Canada, I loved how they’d sometimes have ice cream parties and other get-togethers, patching me in through Skype. I miss them a lot. When I lived at Graduate House, I often invited people over for a barbecue. There was a large outdoor party area with plenty of seats. Since many of my friends were also in graduate school, we had relaxed conversations under the stars. Graduate House was really convenient because most of the people I knew lived there or close by, and it was a short walk from a downtown subway stop. I moved to my first apartment and celebrated my 24th birthday there. I didn’t have chairs and the bare walls echoed the noise, but people sat on cushions on the floor and we had a lot of fun. After I moved in with W-, it took me a while to get around to having parties. Still, I had the occasional tea party – a casual, conversation-filled open house that was usually my excuse to bake far too many goodies. I had one of these every 2-3 months, which felt pretty infrequent (but it’s still more often than people invite me over, so I guess that counts for something). My favourite of these was when the conversation gelled and I got to learn all sorts of interesting things about my new friends. I’ve had larger parties here as well. I remember scrambling to wash extra saucers! =) We set out mats and cushions on the deck, and people hung out there as well as in the kitchen. Our home has more space than my first apartment. (The kitchen’s about the size of the main living area I had back then!) We have two bathrooms. So why am I not having more people over? Let me think about my excuses and how to work around them. It’s cluttered. Having people over is a good excuse to declutter and clean up, and people are fine with a lived-in home. Besides, moving things around can work wonders for opening up space. Maybe we can move the kitchen table outside, for example? That requires disassembly, but it might be worth it. People can stand around in the kitchen or hang out on the deck. There’s not enough seating. In a pinch, we manage to fit ten people around the kitchen table. Now that we’ve rebuilt the deck stairs (I helped!), we can put a few more chairs on the deck as well. Maybe we can get extra chairs and store them in the shed when they’re not in use. The cats might get in the way. 
You’d expect the cats to hide with unfamiliar company – except Luke loves attention and Neko’s curious (but still tetchy, so guests sometimes get nipped if they get too close to her). And then cat hair! We’ve thought about keeping them in the basement with some food and water during parties, although some of our guests like playing with the cats, so maybe they can join us at the end. Food. I like cooking, although sometimes it’s hit-or-miss, and I’m never quite sure about inflicting my experiments on people. (Although I guess that’s how you know who your friends are! ) Since I don’t get firm RSVPs, I tend to prepare things that we can enjoy throughout the week even if no one shows up. I should stop worrying about filling everyone up. I’ve gone to fun parties that had mostly chips to snack on. People are used to pot luck or barbecue. I can always pick up party platters or order in. Drink. Neither W- nor I drink alcohol (or intend to any time soon), so I’m pretty clueless about something that a lot of people enjoy or expect at parties. BYOB can help, I guess, especially if we can get someone to take stuff home afterwards. (Alternatively, we could cook with the remaining alcohol, I guess…) I rarely drink anything other than water, so I don’t have a good handle on Frugality on behalf of others. I keep projecting my frugality onto other people, especially as other people might be in more difficult situations. =) It’s much cheaper to cook rather than to eat out, so I don’t want to organize a party at a restaurant where everyone will be eating out – I’d rather cook for everyone, or have a potluck dinner. Timing. I asked a friend for advice, and he said many good parties run until 2 AM or something like that. I’m usually in bed by midnight. So… maybe I’m an afternoon or dinner party sort of person, even if it means not being able to join the deep discussions that often happen late at night. Don’t want to accidentally offend someone. Sleeping Beauty’s problem? Her parents forgot to invite one fairy, who then threw a fit. While I don’t think anyone’s going to be quite that vindictive (or magical), I still worry about forgetting to invite someone and accidentally sending the wrong message. A good number of excuses… I have to remember that even though I regularly feel insecure about hosting, I still have get-togethers pretty frequently, and people come. (Even though I’m usually semi-anxiously twiddling my thumbs at 1pm – maybe I should move to a 2pm start time?) I live ten minutes from the subway station, even if it’s a subway station a bit far from downtown. I think it will help to reflect on why I want to bring people together in the first place. What are my reasons for having birthday parties and other get-togethers? To thank people. People are awesome and helpful and inspiring. Feeding them and sharing what I’ve learned from them are small things I can do to say thanks. I don’t have one-on-one lunches or coffees with people nearly enough because I don’t want to impose on their schedule (although maybe that’s something else I should practise), but an open house is voluntary. I’m working on a big gratitude map thanking people for various ways they’ve helped me over the past ten years, and I’m looking forward to having that printed at a large scale. =) (That’ll also answer the “How do you know Sacha?” question!) To hear from people. People don’t blog nearly as often as I do, so if I want to find out what’s going on in their lives, I have to ask, or I have to give them an opportunity to tell me. 
Sometimes I can help out, sometimes I learn things, sometimes it’s just interesting to find out what’s going on with other people. To bring awesome people together, and to learn from their conversations. Maybe it’s weird, but I’m usually the quiet one at my own parties. =) I like listening, especially as people bring out aspects in other people that I might never come across myself. I sometimes prompt people with questions if I know they know something that other people might find useful. To pick people’s brains for ideas and next steps. It’s good to let people know what you’re planning, since they’ll often have great ideas and tips. =) To celebrate with lots of good food. Salads! Fruits! Baked yummies! Things that people would probably not make for themselves (or things I might not make on my own)! Many of my friends are single, so cooking can be difficult, but we enjoy cooking and are set up well for it. If there’s anything left over, I can always pack it up and stash it in the freezer. What would it look like if I could get better at having parties? I have a flexible plan for having parties. I know where the table and chairs go, where I’m going to put food, where to put drinks and snacks so that conversations flow, what some go-to snacks are so that I can get that sorted out easily. I have checklists so that I don’t forget things in the scramble. (Must remember to get ice next time…) I have parties more regularly. Maybe once every two months, and maybe with a core group that also hosts during the other times? I trust people more. I don’t have to worry too much about keeping conversations balanced or food flowing. I trust that people will adapt, taking care of newcomers and bringing them in without pushing them too hard. We have a few more seats available, and can sustain conversation in another seating area – maybe on the deck, with the deck chairs that we built. We tend to crowd the kitchen because the living room is too dark, although maybe we can sort that out with better lighting (must replace the bare light that’s in that room). I want to have virtual parties too, like the ones we had back then… I wonder what that would be like, especially with something like Google Hangout. So, party. =) I don’t know what life will quite be like in the next couple of weeks, but maybe if I’m ambitious, I could try having an in-person party near my birthday. More conservatively, I could have it closer to the end of the month. Summer, so we can snack on plenty of fruits, and the barbecue will be handy too. Thoughts? Tips? Does everyone else just Get It when it comes to parties, and am I the only one geekily trying to figure stuff out? =) Some people have a hard time deciding what gift to bring. The most-fun parties I’ve had were those where I indicated what gifts I would appreciate getting – although I was quick to add that if they don’t or can’t bring anything, their presence would be enough and would be most appreciated. At one party, I asked for fruits for Maali and Sally (elephant and giraffe at the zoo). It was fun to see friends’ individual differences – some friends brought fruits in supermarket bags, while others had fruit and flower arrangements. A single bunch of bananas was fine, too. At another party, I asked for plants and specified that they should not be rare, expensive or hard to take care of – the plants that I preferred were those that could survive lack of care (I don’t have a green thumb). I’ve also once asked for paperback books – I supplied the titles – nothing too expensive. 
I know that celebrants are not supposed to ask for gifts, but I just wanted to help. Some people – like me – can have an easier time if only we knew what our friends or loved ones wanted to receive from us. So why not help make the gift-giving easier?

Comments:

sachac (http://sachachua.com): I like receiving people’s stories, and I like sharing potluck meals. =)

Rachelle: I tend to figure things out like this in advance for parties too. I don’t have them very often – maybe twice a year, so your habit of making lists sounds very useful for reminding me of how to work things. Something else I find important for party prep is to have a low-key activity planned for myself for the period of time between when I am totally ready for the guests and when they actually arrive. I tend to feel growing anxiety waiting for the guests without anything else to prepare, so reading a book or having some knitting handy allows me to distract myself without getting so involved in something I can’t just put down the minute someone arrives!

sachac (http://sachachua.com): In that tense hour or so before the first guest shows up, I can usually be found reading or blogging at the kitchen table, while mumbling something like, “Well, if nobody shows up, I’ll have yummy food for a week, and this is totally okay. Awesome, even.” =)
Nike Elite Socks (Black / Blue)

Product Data
Style No.: sx3693-004
MSRP: $14.00
Color: Black / Blue
Popularity: 0 / 100
Return Rate: 0.00%
Lifespan: 01/26/2012 ~ 02/15/2013
Pricing History: 01/24/2013: $12.99; 01/27/2012: $13.99

Description
Nike Elite socks keep your feet comfortable and dry. They feature Nike's Dri-FIT fabric that wicks away moisture, have a superior fit due to anatomical left and right construction, and implement cushioning on the midfoot, ankle and Achilles. (One pair of socks.) Details: 62% polyester - 21% nylon - 15% cotton - 2% spandex

Product Reviews
Overall Rating for Nike Elite Socks (based on 1 review)
By Corbin from Cedar Park, Texas on 1/29/2012: These are the most comfortable socks that I have ever put my feet in. They also feel good when you are wearing them in games and they look great!

Sneakerhead.com Reviews
Excellent – By a Sneakerhead.com customer via post-transaction survey: I think Sneakerhead is a wonderful website. It's the only one I trust to buy shoes online; it's very convenient, like the mall comes to you, so you no longer have to take that 15-20 minute drive to get there. Instead you just take a trip to sneakerhead.com, spend 15 minutes placing your order, and in a matter of days you have what you wanted. I will always recommend sneakerhead.com to a friend or two because I completely trust it. I just have a simple suggestion: I think they should categorize the shoes by size; it would make people's experience on the website so much simpler. But overall I love it here and will continue to come back. Thank you.
Origins and evolution of Huntington disease chromosomes. Huntington disease (HD) is one of five neurodegenerative disorders resulting from an expansion of a CAG repeat located within the coding portion of a novel gene. CAG repeat expansion beyond a particular repeat size has been shown to be a specific and sensitive marker for the disease. A strong inverse correlation is evident between CAG length and age of onset. Sporadic cases of HD have been shown to arise from intermediate sized alleles in the unaffected parent. The biochemical pathways underlying the relationship between CAG repeat length and specific cell death are not yet known. However, there is an increasing understanding of how and why specific chromosomes and not others expand into the disease range. Haplotype analysis has demonstrated that certain normal chromosomes, with CAG lengths at the high range of normal, are prone to further expansion and eventually result in HD chromosomes. New mutations preferentially occur on normal chromosomes with these same haplotypes associated with higher CAG lengths. The distribution of different haplotypes on control chromosomes in different populations is thus one indication of the frequency of new mutations for HD within that population. Analysis of normal chromosomes in different populations suggests that genetic factors contribute to expansion and account for the variation in prevalence rates for HD worldwide.
Infrared Sauna Heaters

The Far Infrared Sauna heater produced by Lux Saunas is a brand new approach to at-home sauna equipment. The Black Bio spectrum ceramic heaters are produced by Lux Saunas and combined with carbon heating technology to generate all-around far infrared heat to penetrate the body and provide the maximum health and detoxification benefits. The secret to the technology is in a deep and healthy sweat that removes toxins from the body. Our sauna heat technology is vastly superior to other heating technologies used in saunas – such as incoloy heating, concave heating and fiberglass-based carbon sauna heaters. Older sauna heaters generate much more heat than necessary and waste energy – much of it not penetrating deep enough into the body to provide any sauna health benefits. Steam or hot-rock saunas work by generating bursts of heat that can be intensely uncomfortable. These saunas are designed to heat the air within the room rather than heating the body. Infrared saunas use dry heat that mimics that of the human body to provide the maximum detoxification and deep heat penetration – at a fraction of the cost of operating more traditional saunas.

How Infrared Saunas are the Perfect Alternative

Infrared saunas do not waste energy releasing heat into the air inside the unit. With traditional saunas, most of the heat does not make it past the skin – as much as 97% simply bounces off and is lost in the air. However, infrared sauna heaters penetrate deep into muscle tissue to melt fat and remove toxins without resorting to extreme temperatures. Most toxins are stored in body fat. Lux Saunas uses this principle to its advantage by melting this fat (at 104 degrees) and removing these toxins through heavy sweating. As these toxins are expelled through sweat, you lose weight and become healthier through a cleaner body.

Why is Toxin Removal Important?

As healthy as you might feel, your body is constantly bombarded by toxins throughout the day. It is estimated that the average living environment contains more than 65,000 harmful chemicals that cause disease and expedite the effects of aging. No age group is safe from these chemicals. A study performed by the American Red Cross found that the average baby had 287 chemicals in its bloodstream. Of these chemicals, 217 are known to be toxic to the human body and 180 have been shown to cause damage to DNA. The author of the book Detoxify or Die, Sherry A. Rogers, has found that the use of infrared saunas is the sole way to remove many of the damaging or harmful toxins that infect our bodies – especially toxins such as phthalates (often found in common plastics). All of the toxins that an infrared sauna removes are often associated with diseases such as obesity and cancer. One of the most important aspects of using an infrared sauna is that you need to drink a great deal of water while using it. The health benefits of drinking enough water every day alone can contribute to a healthy body – especially when the water is used to expel toxins through sweat. An infrared sauna can be compared to a shower for the inside of your body. Just as it is important to regularly remove toxins and chemicals from the outside of your body, it is important to regularly flush the inside of your body using an infrared sauna.

The True Power of Infrared Saunas

Infrared rays instantly penetrate through your exterior skin to melt fat located as deep as 1 1/2 inches below the surface.
This heat causes chemicals like toxins and acid to leak through your exterior skin in the form of sweat and leave your body naturally. Your body responds to this heat in much the same way as it does to a fever – but without the ill feeling. Your body naturally switches over to disease-fighting mode and starts cleansing chemicals from your fat cells in order to recover from the perceived illness – an entirely safe, natural and healthy process. This entire process strengthens your immune system and stimulates cardiovascular activity – resulting in an exercise effect without the hard work. Far infrared heater technology was first used for NASA astronauts, who found that, using infrared heat, they could achieve an effective workout without needing to exercise. You might be surprised just how effective infrared saunas from Lux Saunas can be. In fact, the average 39-minute infrared sauna session can help you burn as much as 600 calories. Infrared rays are naturally generated by the Sun every day. They are absolutely safe for humans. Infrared saunas simply harness this energy and use it in a positive way to promote a healthy body. The Sun’s energy is mimicked by state-of-the-art ceramic heaters that produce heat waves actually safer than prolonged exposure to the Sun.

“How Do The LuxSauna Far Infrared Saunas Produce Such Amazing Results?”

Far Infrared Saunas – Spas: Dr. Gabriel Cousens, MD, has a lot to say on the subject: “At The Tree Of Life a lot of people come from the world with mental, emotional and physical toxicities, and the LuxSauna we like for two particular reasons: they are the highest grade spas and saunas at the best prices. No small deal, this is important. We encourage people to buy LuxSauna saunas when they go home, so they can continue an ongoing detox program, because we’re always accumulating toxins in the world. The food we eat. The water we drink and the air we breathe. Filled with toxins continuously. And this is a continual way to undo this.”
Monday, March 03, 2014

Its disastrous flirtation with neoliberalism has had anything but a romantic ending for Qantas. The once Australian government-owned national carrier, with its exemplary safety record, had its wings clipped courtesy of Labor Prime Minister Paul Keating’s privatisation decision. It is now destined for foreign ownership, or worse, at the hands of the nation-wreckers in the Liberal Party who, in the space of months, have overseen the closure or threatened closure of General Motors-Holden, Toyota, SPC Ardmona and anything else that doesn’t taste of chocolate. Under CEO Alan Joyce and his scary band of Directors, morale at Qantas has plummeted and jobs have been thrown out the window. At least two of its directors bear that out: Garry Hounsell and Paul Rayner are both on the board of the very underperforming Treasury Wine Estates Ltd. Before we look more into the Board, the case for re-nationalising Qantas must be stated. An island nation requires the security of controlling the means by which its citizens may depart from or return to the country. It cannot place itself at the commercial or political mercy of foreign-owned carriers. A government that placed the interests of its citizens before those of foreign capital would maintain a national airline. We do not have such a government, or the remotest prospect of having one, given that Liberal and Labor are two arms on the same body, the brain of which is incapable of thinking from the perspective of the independence of the nation and the rights of the people. Too much attention in debates around Qantas’ performance is focussed on the CEO, Alan Joyce. Sure, his ancestors probably sold gunpowder to the British, but look at his collaborators. Notice their interlocking directorships and ties to finance capital, to mining and energy, to construction, to the reactionary anti-working class legal firm Freehills and corporate criminals like the tobacco companies. These are profiles of the ruling class, the class whose members only have one vote like the rest of us, but who rule by virtue of the great mismatch between the political and economic forms of democracy. It is a mismatch that can only be reconciled through the ownership of economic assets by a state led by the working class, imho.

Leigh Clifford, AO BEng, MEngSci
Chairman and Independent Non-Executive Director
Leigh Clifford was appointed to the Qantas Board in August 2007 and as Chairman in November 2007. He is Chairman of the Qantas Nominations Committee. Mr Clifford is a Director of Bechtel Group Inc. and Chairman of Bechtel Australia Pty Ltd and the Murdoch Childrens Research Institute. He is a Senior Advisor to Kohlberg Kravis Roberts & Co, a Board Member of the National Gallery of Victoria Foundation and a Member of the Council of Trustees of the National Gallery of Victoria. Mr Clifford was previously a Director of Barclays Bank plc. Mr Clifford was Chief Executive of Rio Tinto from 2000 to 2007. He retired from the Board of Rio Tinto in 2007 after serving as a Director of Rio Tinto plc and Rio Tinto Limited for 13 and 12 years respectively. His executive and board career with Rio Tinto spanned some 37 years, in Australia and overseas. Age: 66
________________________________________
Alan Joyce BApplSc(Phy)(Math)(Hons), MSc(MgtSc), MA, FRAeS, FTSE
Chief Executive Officer
Alan Joyce was appointed Chief Executive Officer and Managing Director of Qantas in November 2008. He is a Member of the Safety, Health, Environment and Security Committee.
Mr Joyce is a Director of the Business Council of Australia and a Member of the International Air Transport Association's Board of Governors (having served as Chairman of IATA from 2012 to 2013). He is also a Director of a number of controlled entities of the Qantas Group. Mr Joyce was the CEO of Jetstar from 2003 to 2008. Before that, Mr Joyce spent over 15 years in leadership positions with Qantas, Ansett and Aer Lingus. At both Qantas and Ansett, he led the network planning, schedules planning and network strategy functions. Prior to that, Mr Joyce spent eight years at Aer Lingus, where he held roles in sales, marketing, IT, network planning, operations research, revenue management and fleet planning. Age: 47

Maxine Brenner BA, LLB
Independent Non-Executive Director
Maxine Brenner was appointed to the Qantas Board in August 2013. She is a Member of the Remuneration Committee and the Audit Committee. Ms Brenner is a Director of Origin Energy Limited, Orica Limited, Growthpoint Properties Australia Limited and the State Library of NSW Foundation. She is also a Member of the Advisory Panel of the Centre for Social Impact at the University of New South Wales. Ms Brenner was previously a Managing Director of Investment Banking at Investec Bank (Australia) Limited, the Deputy Chairman of Federal Airports Corporation and a Director of Neverfail Springwater Limited, Bulmer Australia Limited and Treasury Corporation of NSW. She also served as a Member of the Australian Government's Takeovers Panel. Earlier, she practised as a lawyer with Freehills and was a law lecturer at the Universities of New South Wales and Sydney. Age: 51

Richard Goodmanson BEng(Civil), BCom, BEc, MBA
Independent Non-Executive Director
Richard Goodmanson was appointed to the Qantas Board in June 2008. He is Chairman of the Safety, Health, Environment and Security Committee and a Member of the Nominations Committee. Mr Goodmanson is a Director of Rio Tinto plc and Rio Tinto Limited. From 1999 to 2009 he was Executive Vice President and Chief Operating Officer of E.I. du Pont de Nemours and Company. Previous to this role, he was President and Chief Executive Officer of America West Airlines. Mr Goodmanson was also previously Senior Vice President of Operations for Frito-Lay Inc. and was a Principal at McKinsey & Company Inc. He spent 10 years in heavy civil engineering project management, principally in South East Asia. Mr Goodmanson was born in Australia and is a citizen of both Australia and the United States. Age: 66

Jacqueline Hey BCom, Assoc Dip (Marketing), GAICD
Independent Non-Executive Director
Jacqueline Hey was appointed to the Qantas Board in August 2013. She is a Member of the Audit Committee. Ms Hey is a Director of Bendigo and Adelaide Bank Limited and is Chairman of its Change & Technology Committee and a Member of its Audit and Risk Committees. She is also a Director of the Australian Foundation Investment Company Limited, Special Broadcasting Service, Melbourne Business School and Cricket Australia, and a Member of the ASIC Director Advisory Panel. Ms Hey is the Honorary Consul for Sweden in Victoria. Between 2004 and 2010, Ms Hey was Managing Director of various Ericsson entities in Australia and New Zealand, the United Kingdom and Ireland and the Middle East. Her executive career with Ericsson spanned more than 20 years, during which she held finance, marketing, sales and leadership roles.
Age: 47

Garry Hounsell BBus(Acc), FCA, CPA, FAICD
Independent Non-Executive Director
Garry Hounsell was appointed to the Qantas Board in January 2005. He is Chairman of the Audit Committee and a Member of the Nominations Committee. Mr Hounsell is Chairman of PanAust Limited and a Director of DuluxGroup Limited and Treasury Wine Estates Limited. He is also Chairman of Investec Global Aircraft Fund and a Director of Ingeus Limited. Mr Hounsell was formerly a Director of Orica Limited and Nufarm Limited and Deputy Chairman of Mitchell Communication Group Limited. He was also a former Senior Partner of Ernst & Young, Chief Executive Officer and Country Managing Partner of Arthur Andersen and a Board Member of law firm Herbert Smith Freehills. Age: 59

William Meaney BScMEng, MSIA
Independent Non-Executive Director
William Meaney was appointed to the Qantas Board in February 2012. He is a Member of the Safety, Health, Environment and Security Committee and the Remuneration Committee. Mr Meaney is the President and Chief Executive Officer of Iron Mountain Inc. He is a Member of the Asia Business Council and also serves as Trustee of Carnegie Mellon University and Rensselaer Polytechnic Institute. Mr Meaney was formerly the Chief Executive Officer of The Zuellig Group and a Director of moksha8 Pharmaceuticals, Inc. He was also the Managing Director and Chief Commercial Officer of Swiss International Airlines and Executive Vice President of South African Airways responsible for sales, alliances and network management. Prior to these roles, Mr Meaney spent 11 years providing strategic advisory services at Genhro Management Consultancy, as the Founder and Managing Director, and as a Principal with Strategic Planning Associates. Mr Meaney holds United States, Swiss and Irish citizenships. Age: 53

Paul Rayner BEc, MAdmin, FAICD
Independent Non-Executive Director
Paul Rayner was appointed to the Qantas Board in July 2008. He is Chairman of the Remuneration Committee and a Member of the Nominations Committee. Mr Rayner is Chairman of Treasury Wine Estates Limited and a Director of Centrica plc. He is also a Director of Boral Limited and Chairman of its Audit Committee. From 2002 to 2008, Mr Rayner was Finance Director of British American Tobacco plc based in London. Mr Rayner joined Rothmans Holdings Limited in 1991 as its Chief Financial Officer and held other senior executive positions within the Group, including Chief Operating Officer of British American Tobacco Australasia Limited from 1999 to 2001. Previously Mr Rayner worked for 17 years in various finance and project roles with General Electric, Rank Industries and the Elders IXL Group. Age: 60

Barbara Ward, AM BEc, MPolEc
Independent Non-Executive Director
Barbara Ward was appointed to the Qantas Board in June 2008. She is a Member of the Safety, Health, Environment and Security Committee and the Audit Committee. Ms Ward is a Director of a number of Brookfield Multiplex Group companies, O'Connell Street Associates Pty Ltd and the Sydney Children's Hospital Foundation. She was formerly a Director of the Commonwealth Bank of Australia, Lion Nathan Limited, Brookfield Multiplex Limited, Allco Finance Group Limited, Rail Infrastructure Corporation, Delta Electricity, Ausgrid, Endeavour Energy and Essential Energy. She was also Chairman of Country Energy and NorthPower, a Board Member of Allens Arthur Robinson and on the Advisory Board of LEK Consulting. Ms Ward was Chief Executive Officer of Ansett Worldwide Aviation Services from 1993 to 1998.
Before that, Ms Ward held various positions at TNT Limited, including General Manager Finance, and also served as a Senior Ministerial Adviser to The Hon PJ Keating. Age: 60

So, away with all these parasites! Let’s soar through the skies on the wings of independence and socialism.
--- author: - 'R. Tylenda' - 'M. Hajduk' - 'T. Kami[ń]{}ski' - 'A. Udalski' - 'I. Soszy[ń]{}ski' - 'M. K. Szyma[ń]{}ski' - 'M. Kubiak' - 'G. Pietrzy[ń]{}ski' - 'R. Poleski' - '[Ł]{}. Wyrzykowski' - 'K. Ulaczyk' date: 'Received; accepted' title: 'V1309 Scorpii: merger of a contact binary [^1]' --- Introduction \[intro\] ====================== Stellar mergers have for a long time been recognized to play an important role in the evolution of stellar systems. High stellar densities in globular clusters can often lead to collisions and mergers of stars [@leon89]. In this way the origin of blue stragglers can be explained. In dense cores of young clusters multiple mergers of protostars have been suggested as a way of forming the most massive stars [@bonn98]. Some binary stars, in particular contact binaries, are suggested to end their evolution as stellar mergers [@robeggl]. ![image](l_curve.eps){height="\hsize"} The powerful outburst of V838 Mon in 2002 [@mun02], accompanied by a spectacular light echo [@bond03], raised interest in a class of stellar eruptions named “red novae”, “optical transients” or “V838 Mon type eruptions”. These objects, which typically reach a maximum luminosity of $\sim10^6~{\rm L}_\odot$, evolve to low effective temperatures and decline as very cool (super)giants. Apart from V838 Mon, in our Galaxy the class also includes V4332 Sgr, whose outburst was observed in 1994 [@martini], and V1309 Sco, which erupted in 2008 [@mason10]. As extragalactic eruptions of this kind one can mention M31 RV [eruption in 1989, @mould], M85 OT2006 [@kulk07], and NGC300 OT2008 [@berger09]. Several interpretations of the eruptions were proposed. They include an unusual classical nova [@tutu; @shara], a late He-shell flash [@lawlor], or a thermonuclear shell flash in an evolved massive star [@muna05]. @tylsok06 presented numerous arguments against these mechanisms. They showed that all the main observational characteristics of the V838 Mon-type eruptions can be consistently understood as resulting from stellar collisions and mergers, as originally proposed in @soktyl03. For these reasons @st07 proposed to call this type of eruption [*mergebursts*]{}. Recently, @kfs10 and @ks11 have suggested that some of the V838 Mon-type eruptions are of the same nature as the eruptions of luminous blue variables, and that they can be powered by mass-transfer events in binary systems. In the present paper, we show that the recent red nova, V1309 Sco, is a Rosetta stone in the studies of the nature of the V838 Mon type eruptions. Archival photometric data collected for the object in the OGLE project during about six years before the outburst allow us to conclude that the progenitor of V1309 Sco was a contact binary. The system quickly evolved towards its merger, which resulted in the eruption observed in 2008. V1309 Scorpii ============= V1309 Sco, also known as Nova Sco 2008, was discovered on 2.5 September 2008 (JD 2454712) [@nak08]. The subsequent evolution, however, showed that, as pointed out in @mason10, this was not a typical classical nova. Early spectroscopy revealed an F-type giant [@ruda]. On a time scale of a month the object evolved to K- and early M-types [@mason10; @rudy]. Eight months after the discovery it was observed as a late M-type giant [@mason10]. As described in @mason10, the object developed a complex and rapidly evolving line spectrum of neutral and singly ionized elements.
In early epochs, absorption features dominated the spectrum; emission components developed later and their intensities quickly increased with time. The strongest emission lines were those of the hydrogen Balmer series. The emission components had FWHM of $\sim$150 km s$^{-1}$ and broader wings, which in the case of H$\alpha$ extended even beyond 1000 km s$^{-1}$ [@mason10]. Narrow absorption components were superimposed on the emission components, so that the line profiles mimicked those of P-Cyg type or inverse P-Cyg in some cases. @mason10 interpret these line profiles as produced in an expanding shell that is denser in the equatorial plane. In late epochs, when the object evolved to M-type, absorption features of TiO, VO, CO, and H$_2$O appeared in the spectrum [@mason10; @rudy]. V1309 Sco thus shares the principal characteristic of the V838 Mon type eruptions, i.e. evolution to very low effective temperatures after maximum brightness and during the decline [@tylsok06]. Other common features of V1309 Sco and the V838 Mon type eruptions include: outburst time scale of the order of months, outburst amplitude (maximum minus progenitor) of 7 – 10 magnitudes, complete lack of any high-ionization features (coronal lines, in particular), expansion velocities of a few hundred km s$^{-1}$ (instead of a few thousand as in classical novae), and oxide bands observed in later epochs, which imply that oxygen-rich (C/O $<$ 1) matter was involved in the eruption. Observations ============ Owing to the position of V1309 Sco close to the Galactic centre (l = 359.8°, b = –3.1°), the object appears to be situated within a field monitored in the OGLE-III and OGLE-IV projects [@udal03][^2]. As a result V1309 Sco was observed on numerous occasions since August 2001. Altogether more than 2000 measurements were obtained predominantly in the $I$ Cousins photometric band. Among them, $\sim$1340 observations were made before the discovery of the object as a nova in September 2008. A few observations were also made in the $V$ band (seven before the discovery). The data were reduced and calibrated using standard OGLE procedures [@udal08]. A typical precision of the measurements was 0.01 magnitude. Results \[result\] ================== The full set of measurements of V1309 Sco derived from the OGLE-III and IV surveys in the $I$ photometric band is displayed in Fig. \[lightcurve\]. The gaps in the data are due to conjunctions of the object with the Sun. Apart from 2001 and 2009 most of the data were obtained between February and October of each year. In 2001 the OGLE-III project was just starting to operate, hence only 11 measurements were obtained in that year (they were omitted from the analysis described below). In the period 2002–2008, from 52 (2002, 2003) to 367 (2006) observations were made each year. Near the maximum of the 2008 eruption, i.e. when the object was brighter than $I \simeq 11$, its image was overexposed in the OGLE frames, hence there are no data for this period. Near maximum brightness, the object attained $I \simeq 6.8$, according to the data gathered by AAVSO[^3]. In May 2009 the OGLE-III phase ended, and therefore only 64 data points were collected during the 2009 observing season. The OGLE project resumed regular observations of the Galactic centre in March 2010 with a new instrumental setup, namely a 32-chip mosaic camera, which started the OGLE-IV phase. In 2010, 655 measurements were obtained. As can be seen from Fig.
\[lightcurve\], the progenitor of V1309 Sco was initially slowly increasing in brightness on a time scale of years and reached a local maximum in April 2007. Subsequently, the star faded by $\sim$1 magnitude during a year. In March 2008, the main eruption started, which led to the object’s discovery, as Nova Sco 2008, half a year later. Remarkable is the smooth, roughly exponential rise in brightness. During this event the object brightened by $\sim$10 magnitudes, i.e. by a factor of $10^4$. An analysis of the data for the progenitor (2002–2007) is presented in Sect. \[progen\], while the outburst and the decline of the object are discussed in Sect. \[sect\_burst\]. The progenitor \[progen\] ========================= The data \[prog\_dat\] ---------------------- A short-term variability, resulting in a $\sim$0.5 magnitude scatter of the points in Fig. \[lightcurve\], is the most remarkable and interesting feature of the V1309 Sco progenitor. This variability is strictly periodic. To show this, we used a method that employs periodic orthogonal polynomials to fit the observations and an analysis of variance to evaluate the quality of the fit, as described in @schwarz. This method is particularly suitable for analysing unevenly sampled data of non-sinusoidal periodic variations, such as eclipses in binary systems. The resulting periodograms obtained for the particular observational seasons are presented in Appendix \[periodograms\]. They show that the principal periodicity in the variations observed in seasons 2002 – 2007 corresponds to a period of $\sim$1.4 day. However, the derived period was not constant, but slowly decreasing with time, as presented in Fig. \[fig\_period\]. It decreased by 1.2% during the pre-outburst observations. When deriving the period plotted in Fig. \[fig\_period\], we divided the data sequence available for a given season into subsamples, each containing $\sim$50 data points, and constructed periodograms for each subsample separately. In this way we can see that during later seasons, particularly in 2006 and 2007, the period was significantly varying on a time scale of months. We find that the time evolution of the observed period can be fitted by an exponential formula. A result of a least-squares fit (inverse errors used as weights) is shown with the full line in Fig. \[fig\_period\]. The obtained formula is $$P = 1.4456\ {\rm exp}\ \Big( \frac{15.29}{t - t_0} \Big), \label{per_fit}$$ where $P$ is the period in days, $t$ is the time of observations in Julian Dates, and $t_0$ = 2455233.5. ![Evolution of the period of the photometric variations of the V1309 Sco progenitor. The line shows a least-squares fit of an exponential formula to the data (see text and Eq. \[per\_fit\]).[]{data-label="fig_period"}](period.eps){height="\hsize"} Using the above period fit we folded the observations and obtained light curves of the object in particular seasons. The results are displayed in Fig. \[fig\_lc\]. In 2002 – 2006 the light curves were obtained from all available data for each particular season. The results are shown in the upper part of Fig. \[fig\_lc\]. In 2007 the light curve significantly evolved during the season and therefore we plotted the results for the subsamples (the same as when deriving the period) of the 2007 season in the lower part of Fig. \[fig\_lc\] (time goes from subsample a to e in the figure). ![Light curves obtained from folding the data with the period described by Eq. (\[per\_fit\]). Upper part: seasons 2002 – 2006.
Lower part: season 2007 divided into five subsamples (time goes from a to e). The zero point of the magnitude (ordinate) scale is arbitrary.[]{data-label="fig_lc"}](lc_2_6.eps "fig:"){height="\hsize"} ![Light curves obtained from folding the data with the period described by Eq. (\[per\_fit\]). Upper part: seasons 2002 – 2006. Lower part: season 2007 divided into five subsamples (time goes from a to e). The zero point of the magnitude (ordinate) scale is arbitrary.[]{data-label="fig_lc"}](lc_7.eps "fig:"){height="\hsize"} As can be seen from Fig. \[fig\_lc\], the light curve in 2002 – 2006 displays two maxima and two minima. They are practically equally spaced in the phase and have similar shapes in the early seasons. This is the reason why the periodograms for these seasons (see Appendix \[periodograms\]) show a strong peak at a frequency twice as high (a period twice as short) as the main peak. In other words, the observations could have been interpreted with a period of $\sim$0.7 day and a light curve with one maximum and minimum. However, the periodograms show, already in the 2002 season, that the period that is twice as long ($\sim$1.4 day) better reproduces the observations. The reason is obvious in the later seasons, when the first maximum (at phase 0.25 in Fig. \[fig\_lc\]) in the light curve becomes increasingly stronger than the second one. As a result the peak corresponding to the 0.7 day period decreases and practically disappears in 2007. Interpretation \[interpret\] ---------------------------- In principle one can consider three possible interpretations of the observed light variability, i.e. stellar pulsation, single-star rotation, and an eclipsing binary system. ### Pulsation The light curve in the early seasons may imply a stellar pulsation with a period of $\sim$0.7 day and an amplitude of $\sim$0.15 magnitude. This interpretation, however, encounters severe problems when it is used to explain the observed evolution of V1309 Sco. As we show below, the progenitor was probably a K-type star. There is no class of pulsating stars that could be reconciled with these characteristics, i.e. a K-type star pulsating with the above period and amplitude. Moreover, to explain the observed evolution of the light curve displayed in Fig. \[fig\_lc\], one would have to postulate a switch of the star pulsation to a period exactly twice as long on a time scale of a few years, together with a gradual shortening of the period. This would be very difficult, if not impossible, to understand within the present theory of stellar pulsation. Finally, there is no physical mechanism involving or resulting from a pulsation instability that could explain the powerful 2008 eruption, when the object brightened by a factor of $10^4$. ### Rotation of a single star The observed light curve could have been explained by a single K-type star rotating with a period of $\sim$1.4 day and with two spots (or rather two groups of spots) on its surface in early seasons. The two spots would be replaced by one spot in 2007. The object would thus belong to the FK Com class of rapidly rotating giants [e.g. @bopp81]. There are, however, several differences between the progenitor of V1309 Sco and the FK Com stars. The 1.4 day period is indeed short, as for the FK Com stars. The very good phase stability of the observed light curve, shown in Fig. \[fig\_lc\], implies that the spot(s) would have had to keep the same position(s) on the star surface over a time span of a few years.
This seems to be very improbable given differential rotation and meridional circulation, which are expected to be substantial for a rapidly rotating star. Indeed, for the FK Com stars, the spots are observed to migrate and change their position on much shorter time scales [@korh07]. The systematic decrease of the rotational period is also difficult to explain for a single star, and is not observed in the FK Com stars. Finally, no eruption such as that of V1309 Sco in 2008 was observed for the FK Com stars. Indeed, there is no known mechanism that could produce such a huge eruption in a single giant, even if it were fast rotating. ### A contact binary evolving to its merger \[sect\_cb\] We are thus left with an eclipsing binary system. As we show below, this possibility allows us to explain all principal characteristics of the observed evolution of the V1309 Sco progenitor, as well as the 2008 outburst. The shape of the light curve, especially in early seasons (two rounded maxima of comparable brightness and two equally spaced minima), implies that the progenitor of V1309 Sco was a contact binary. The orbital period of $\sim$1.4 day does not allow us to classify the object as a W UMa-type binary, because the classical W UMa stars have periods $<1$ day. Nevertheless, contact binaries with periods beyond 1 day are also observed [@pacz06]. The exponentially decreasing period (Fig. \[fig\_period\]) can be interpreted as resulting from an unstable phase of evolution of the system, which leads to the shrinkage of the binary orbit and finally to the merger of the system. Such a situation was predicted in theoretical studies [@webb76; @robeggl; @rasio95] and it is now commonly believed that the W UMa binaries end their evolution by merging into a single star. Dissipation of the orbital energy in the initial, violent phase of the merger [@soktyl06] resulted in the V1309 Sco eruption observed in 2008. One of the possible ways that can lead a binary system to merge is the so-called Darwin instability. This happens when the spin angular momentum of the system is more than a third of the orbital angular momentum. As a result, tidal interactions in the system cannot maintain the primary component in synchronization anymore. The orbital angular velocity is then higher than the primary’s angular velocity, so the tidal forces increase their action and rapidly transport angular momentum from the orbital motion to the primary’s rotation. For contact binaries this takes place when the binary mass ratio $q \equiv M_2/M_1 \la 0.1$ [@rasio95]. Another possibility arises if a binary system enters deep contact, as discussed in @webb76. This can happen, for instance, if the primary attempts to cross the Hertzsprung gap. The system then starts losing mass and angular momentum through the outer Lagrangian point, $L_2$. This shrinks the binary, which further deepens the contact and increases mass and angular momentum loss. In addition the system starts orbiting faster than the components rotate. As a result, similarly to the Darwin instability, the tidal forces transport angular momentum from the orbit to the components’ spins, which further accelerates the orbit shrinkage. A maximum in the light curve of a typical contact binary is observed when we look at the system more or less perpendicularly to the line that joins the two components. As a result, we observe two maxima during each orbital revolution. The maxima are of similar brightness, because the system looks similar from both sides.
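To make the Darwin instability criterion discussed above more concrete, the following is a minimal numerical sketch (not part of the original analysis). It compares the spin and orbital angular momenta of an assumed contact binary, using the Eggleton (1983) approximation for the Roche-lobe radii and an assumed gyration radius $k^2 \simeq 0.1$ for both components; the masses, mass ratio, and $k^2$ are purely illustrative assumptions.

```python
import numpy as np

G = 6.674e-8                     # cgs
M_sun, R_sun = 1.989e33, 6.957e10

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation: R_L/A for a star with q = M_self/M_other."""
    return 0.49 * q**(2.0/3.0) / (0.6 * q**(2.0/3.0) + np.log(1.0 + q**(1.0/3.0)))

def darwin_ratio(M1, M2, P_days, k2=0.1):
    """Ratio J_spin/J_orb for a synchronously rotating contact binary.

    Both stars are assumed to fill their Roche lobes; k2 is an assumed
    gyration radius.  The Darwin instability sets in roughly when this
    ratio exceeds 1/3.
    """
    M1, M2 = M1 * M_sun, M2 * M_sun
    P = P_days * 86400.0
    M = M1 + M2
    A = (G * M * P**2 / (4.0 * np.pi**2))**(1.0/3.0)   # Kepler's third law
    Omega = 2.0 * np.pi / P
    R1 = roche_lobe_fraction(M1 / M2) * A
    R2 = roche_lobe_fraction(M2 / M1) * A
    J_spin = k2 * (M1 * R1**2 + M2 * R2**2) * Omega
    J_orb = M1 * M2 * np.sqrt(G * A / M)
    return J_spin / J_orb

# Illustrative numbers only: a 1.5 + 0.15 M_sun pair at the observed 1.4-day period.
print(darwin_ratio(1.5, 0.15, 1.4))   # ~0.38, slightly above the ~1/3 threshold
```

With these assumed numbers the ratio comes out slightly above 1/3, which is why mass ratios $q \la 0.1$ are the regime quoted from @rasio95; the exact threshold depends on the internal structure of the components (i.e. on $k^2$), so this is only a rough check.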
The situation changes when one of the above instabilities sets in. The secondary starts orbiting faster than the primary’s envelope rotates. The stars are in contact, which means that the difference in velocities is partly dissipated near the contact between the components. This should lead to the formation of a brighter (hotter) region that is visible when we look at the leading side of the secondary. The system is then brighter than when we look from the opposite direction. The maxima in the light curve start to differ. This is what we observe in the case of the V1309 Sco progenitor. As can be seen from Fig. \[fig\_lc\], the first maximum (at phase $\sim$0.25) becomes progressively stronger than the second one (phase $\sim$0.75) from 2002 to 2006. In 2007, the second maximum disappears, and at the end of this season we observe a light curve with only one maximum and one minimum. Apparently the system evolved to a more spherical configuration with a bright (hot) spot covering a large fraction of the system’s surface along the orbital plane. Basic parameters of the progenitor ---------------------------------- ### Interstellar reddening, spectral type, and effective temperature \[sect\_redd\] The effective temperature of the progenitor can be derived from the $V - I$ colour. In 2006 a few measurements were obtained in the $V$ band. They are displayed in Fig. \[fig\_VI\] together with the $I$ data obtained in the same time period (JD 2453880 – 2453910). ![$V$ (open points) and $I$ (asterisks) measurements obtained in JD 2453880 – 2453910 (season 2006) and folded with the period described by Eq. (\[per\_fit\]). The zero point of the magnitude (ordinate) scale is arbitrary. []{data-label="fig_VI"}](lc_VI.eps){height="\hsize"} In order to determine the effective temperature from a colour, an estimate of the interstellar reddening is also necessary. @mason10 obtained $E_{B-V} \simeq 0.55$ from interstellar lines observed in their spectra of V1309 Sco in outburst. We constrained spectral types of the object in outburst from the UVES spectra of @mason10 obtained in their epochs 1 – 3 (downloaded from the VLT archive). Comparing them with the $BVRI$ photometry of the object obtained by the AAVSO team near the above spectral observations, we obtained $0.7 \la E_{B-V} \la 0.9$. The OGLE data also allowed us to derive a $V - I$ colour, equal to 5.27, 223 days after the outburst discovery date. This happened to be in the epoch of the SOAR spectroscopy of @mason10, from which these authors derived a spectral type of M6–7. To reconcile the colour with the spectral type, $E_{B-V} > 0.7$ is required. An upper limit to the extinction can be obtained from the Galactic Dust Extinction Service[^4], which estimates $E_{B-V}$ from the 100 $\mu$m dust emission mapped by IRAS and COBE/DIRBE. For the position of V1309 Sco, we obtain $E_{B-V} \la 1.26\pm0.11$. Below we assume that V1309 Sco is reddened with $E_{B-V} \simeq 0.8$. As can be seen from Fig. \[fig\_VI\], two $V$ measurements were taken close to the first maximum in the light curve (phase $\sim$0.26), two others near the following minimum (phase $\sim$0.55). The resulting colour, $V - I$, was 2.09 and 2.21 for the two phases, respectively. The mean value of the observed colour, i.e. $V - I \simeq 2.15$, corrected for the reddening and compared with the standard colours of giants (luminosity class III), results in a spectral class K1–2 and an effective temperature $T_{\rm eff} \simeq 4500$ K for the progenitor of V1309 Sco.
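As a rough consistency check of this colour-based estimate (not spelled out in the text above), one can deredden the mean colour assuming a standard extinction law with $E_{V-I} \simeq 1.3\,E_{B-V}$ (an approximate, assumed coefficient for the Cousins $I$ band):

$$(V-I)_0 \simeq (V-I) - 1.3\,E_{B-V} \simeq 2.15 - 1.3\times0.8 \simeq 1.1,$$

which corresponds to an early-K giant and is thus consistent with the K1–2 spectral class and $T_{\rm eff} \simeq 4500$ K quoted above.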
The difference in the colour, as mentioned above, implies that at the maximum the object was on average $\sim$200 K hotter than in the minimum. This agrees with the interpretation of the evolution of the light curve given in Sect. \[sect\_cb\]. ### Luminosity and distance \[sect\_ld\] The hypothesis that the progenitor of V1309 Sco was a contact binary allows us to evaluate other parameters of the system. Adopting a total mass of the binary, $(M_1 + M_2)$, of 1.0 – 3.0 M$_\odot$ (observed range of masses of W UMa binaries) and an orbital period of 1.43 day, one obtains from the third Kepler law that the separation of the components, $A$, is $(3.7 - 5.4)\times 10^{11}$ cm (5.4 – 7.7 R$_\odot$). Taking the mass ratio, $q$, within the range observed in the W UMa systems, i.e. 0.07 – 1.0, we can derive the radii of the Roche lobes of the components [@eggl83]. Assuming that both components fill their Roche lobes, a total effective surface of the system can be obtained. This, along with the above estimate of $T_{\rm eff}$, results in a luminosity of the system of $(1.15 - 3.3) \times 10^{34}$ erg s$^{-1}$ (3.0 – 8.6 L$_\odot$). Taking into account bolometric correction, interstellar reddening, and the mean observed brightness of the V1309 Sco progenitor in 2006, $V = 18.22$, we derive a distance to the object of $3.0 \pm 0.7$ kpc. If the outer Roche lobes [dimensions can be calculated from @yakegg] are assumed to be filled by the components, the luminosity increases by $\sim$35%, while the distance becomes $3.5 \pm 0.7$ kpc. The distance estimate is almost independent of the reddening value. The above estimates of the luminosity and effective temperature are consistent with a $\sim$1.0 M$_\odot$ star at the beginning of the red giant branch when they are compared with theoretical tracks of stellar evolution [e.g. @girardi]. The outburst \[sect\_burst\] ============================ The data \[burst\_data\] ------------------------ Figure \[fig\_rise\] (upper part) displays the rise of V1309 Sco during its eruption in 2008 (open points near maximum show the $I$ results taken from AAVSO[^5]). ![$I$ light curve of V1309 Sco during its rise to maximum in 2008 (upper part). Full points: data from OGLE III, open points: data from AAVSO. The line shows a fit of an exponential formula (Eq. \[eq\_rise\]). The time scale used in the formula is plotted in the lower part of the figure.[]{data-label="fig_rise"}](rise.eps "fig:") ![$I$ light curve of V1309 Sco during its rise to maximum in 2008 (upper part). Full points: data from OGLE III, open points: data from AAVSO. The line shows a fit of an exponential formula (Eq. \[eq\_rise\]). The time scale used in the formula is plotted in the lower part of the figure.[]{data-label="fig_rise"}](tsc.eps "fig:") This phase started in March 2008, and it took the object about six months to reach the maximum brightness. The rise was remarkably smooth for an eruption that led the object to brighten by a factor of $\sim 10^4$. We found that the light curve can be fitted by an exponential formula, i.e. $$\label{eq_rise} I = -2.5\ {\rm log} (F_I), ~~~~~~~F_I = F_0 + F_1\ {\rm exp} (t/\tau(t)),$$ where $F_0 = 1.6 \times 10^{-7}$, $F_1 = 2.1 \times 10^{-8}$ are fluxes in Vega units, $t$ = JD – 2454550, and $\tau(t)$ is a time scale, allowed to evolve with time. The time evolution of $\tau$, resulting in the fit shown with the full line in the upper part of Fig. \[fig\_rise\], is displayed in the lower part of the figure.
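As an illustration of this kind of fit (the exact procedure used by the authors is not specified beyond Eq. \[eq\_rise\]), a minimal sketch in Python could look as follows. The photometry is replaced here by synthetic stand-in data, and a constant time scale $\tau$ is assumed, which, as described below, only holds for the first $\sim$5 months of the rise.

```python
import numpy as np
from scipy.optimize import curve_fit

def rise_model(t, F0, F1, tau):
    """Exponential rise of Eq. (eq_rise): I = -2.5 log10(F0 + F1 exp(t/tau)),
    with t = JD - 2454550 and the fluxes in Vega units (constant tau assumed)."""
    return -2.5 * np.log10(F0 + F1 * np.exp(t / tau))

# Synthetic stand-in for the early-rise photometry
# (the actual OGLE/AAVSO JDs and I magnitudes would be used instead).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 140.0, 80)                          # days since JD 2454550
I_obs = rise_model(t, 1.6e-7, 2.1e-8, 27.0) + rng.normal(0.0, 0.01, t.size)

# Least-squares fit; p0 gives rough starting values for F0, F1 and tau.
popt, pcov = curve_fit(rise_model, t, I_obs, p0=(1e-7, 1e-8, 20.0))
print("F0 = %.2e, F1 = %.2e, tau = %.1f d" % tuple(popt))
```

A time-dependent $\tau(t)$, as used in the paper, can then be obtained, for example, by repeating such a fit in a sliding window along the light curve.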
These results show that during the initial $\sim$5 months of the rise, i.e. up to JD $\simeq$ 2454690, the time scale was constant and equal to $\sim$27 days. During this phase the object brightened by $\sim$3.5 magnitudes. Then the time scale quickly decreased and the object brightened by $\sim$6.5 magnitudes during a month. The $I$ light curve of V1309 Sco during the decline observed by OGLE-IV in 2010 is presented in Fig. \[fig\_decl\]. Evidently the decline was relatively smooth at the beginning of the presented phase. A similar behaviour (smooth evolution) was also observed in 2009 (see Fig. \[lightcurve\]). However, as the decline continues in 2010, oscillations of a few hundredths of a magnitude on a time scale of hours appear. A periodogram analysis shows that there is no significant periodicity in these light variations. ![$I$ light curve of V1309 Sco during the decline in 2010.[]{data-label="fig_decl"}](decl.eps){height="\hsize"} In a few cases multiband photometric observations are available during the outburst and decline. They allow us to estimate the spectral type and effective temperature of the object. This is the case for seven dates in September 2008, when multiband photometry was made by the AAVSO team. On 4 and 16 October 2008 the object was observed by @rudy using the Infrared Telescope Facility, and $JHK$ magnitudes were derived. These results can be combined with $BV$ AAVSO measurements and an $I$ OGLE magnitude obtained in the same time period. On 13 April 2009, as well as in a few cases in March–April 2010 and October 2010, $V$ photometric measurements (together with $I$) were obtained by OGLE. On 28 August 2010 the object was observed by one of us (M.H.) with the SAAO 1.0-m Elizabeth Telescope with the Bessell $BVRI$ filters. The observations were reduced using standard procedures in the IRAF package. The photometry of the star was obtained with the DAOPHOT package implemented in IRAF. The resulting magnitudes are $B = 22.21 \pm 0.10$, $V = 20.42 \pm 0.13$, $R = 18.84 \pm 0.10$, and $I = 16.67 \pm 0.10$. The latter value agrees very well with an OGLE result ($I = 16.69 \pm 0.01$) obtained on the same date.

  -------- ---------------- --------- --------------- ------------------------------ ---------------------
  JD       data source      sp.type   $T_{\rm eff}$   $R_{\rm eff}({\rm R}_\odot)$   L (${\rm L}_\odot$)
  4718.9   AAVSO            F9        5830            174.                           $3.16\ 10^4$
  4721.0   AAVSO            G1        5410            177.                           $2.40\ 10^4$
  4722.9   AAVSO            G5        4870            209.                           $2.22\ 10^4$
  4724.9   AAVSO            K1        4360            252.                           $2.06\ 10^4$
  4730.9   AAVSO            K3        4150            297.                           $2.36\ 10^4$
  4735.0   AAVSO            K4        3980            312.                           $2.21\ 10^4$
  4737.9   AAVSO            K5        3900            297.                           $1.84\ 10^4$
  4750.    AAVSO/OGLE/ITF   M4        3510            155.                           $3.27\ 10^3$
  4934.8   OGLE             M7        3130            34.                            98.5
  5282.0   OGLE             M5        3370            9.5                            10.4
  5437.0   SAAO             M4        3420            6.9                            5.9
  5474.5   OGLE             M3        3565            5.4                            4.2
  -------- ---------------- --------- --------------- ------------------------------ ---------------------

  : Basic parameters of V1309 Sco in outburst and decline[]{data-label="tab_evol"}

![Evolution of V1309 during outburst and decline (see Table \[tab\_evol\]). The abscissa displays a logarithm of time in days counted from the date of the discovery.[]{data-label="fig_evol"}](tem.eps "fig:") ![Evolution of V1309 during outburst and decline (see Table \[tab\_evol\]). The abscissa displays a logarithm of time in days counted from the date of the discovery.[]{data-label="fig_evol"}](lum.eps "fig:") ![Evolution of V1309 during outburst and decline (see Table \[tab\_evol\]).
The abscissa displays a logarithm of time in days counted from the date of the discovery.[]{data-label="fig_evol"}](rad.eps "fig:") Observed evolution ------------------ Taking our estimates of the distance (3.0 kpc, see Sect. \[sect\_ld\]) and reddening ($E_{B-V} = 0.8$, see Sect. \[sect\_redd\]), we can derive basic parameters of V1309 Sco during the outburst from multiband photometry with the same method as in @tyl05. The resulting values of the effective temperature, radius, and luminosity are presented in Table \[tab\_evol\] and Fig. \[fig\_evol\]. They show that the evolution of V1309 Sco during outburst and decline was essentially of the same sort as those of V838 Mon [@tyl05] and V4332 Sgr [@tcgs05]. In all these cases the main decline in luminosity was accompanied by a decline in the effective temperature. In later phases of the decline the objects resumed a slow increase in the effective temperature. There is no doubt that V1309 Sco was not a classical nova. We know from the present study that its eruption resulted from a merger of a contact binary. Below we show that the energy budget of the eruption can also be well accounted for by dissipation of the orbital energy of the progenitor. Merger-powered outburst – mergerburst ------------------------------------- The orbital energy and angular momentum of a binary system can be calculated from $$\label{eorb_eq} E_{\rm orb} = -\frac{G\ M_1\ M_2}{2\ A} = -\frac{G\ (M_1 + M_2)^2}{2\ A} \frac{q}{(1 + q)^2}$$ and $$\label{jorb_eq} J_{\rm orb} = \Big( \frac{G\ A}{M_1 + M_2} \Big)^{1/2} M_1\ M_2 = \big( G\ (M_1 + M_2)^3\ A \big)^{1/2} \frac{q}{(1+q)^2}.$$ Taking the values assumed and obtained in Sect. \[sect\_ld\] for the V1309 Sco progenitor, one obtains $E_{\rm orb} = (0.2 - 5.6)\ 10^{47}$ ergs and $J_{\rm orb} = (0.9 - 22.)\ 10^{51}$ g cm$^2$ s$^{-1}$. As derived in Sect. \[prog\_dat\], the orbital period decreased by 1.2% during $\sim$6 years. Thus the system contracted by 0.8% and its orbital energy decreased by $\sim 10^{45}$ ergs during this time period, which gives a mean rate of energy dissipation of $\sim 10^{37}\ {\rm erg\ s}^{-1}$. Already a small portion of this can account for the observed brightening of the object between 2002 and the beginning of the 2007 season (see Fig. \[lightcurve\]). During the same time period the system also lost 0.4% of its orbital angular momentum. If the system started shrinking because of the Darwin instability, the orbital angular momentum loss was primarily absorbed by the spinning-up primary. If, however, the system entered a deep contact resulting from evolutionary changes of the components, mass loss through the $L_2$ point would be the main source of the orbital angular momentum loss. Assuming that the outflowing mass carries away a specific angular momentum of $\Omega r_{L_2}^2$, where $\Omega$ is the angular rotational velocity of the system and $r_{L_2}$ is a distance of $L_2$ from the mass centre, we can estimate that the system lost $(0.16 - 2.0) 10^{-3}~M_\odot$ in 2002–2007. This is very much an upper limit on the mass loss. First, judging from the K spectral type, the system might have had magnetically active components. Then the outflowing matter might have kept corotation to radii larger than $r_{L_2}$ and thus have carried away a specific angular momentum larger than assumed in the above estimate. Second, matter already lost through $L_2$ may have formed an excretion disc around the system.
Tidal interactions between the binary system and the disc can transfer angular momentum directly from the system to the disc. Third, a part of the orbital angular momentum probably went to accelerating the spins of the components. An excretion disc is expected to be formed even if the orbital shrinkage was initiated by the Darwin instability. Decreasing separation between the components results in a deeper contact, and mass loss through $L_2$ must finally occur. Apparently we see in the light curve of the V1309 Sco progenitor a signature of an excretion disc formation. We observed the system near the equatorial plane (eclipsing binary). Therefore, when the disc became sufficiently massive and thick, it could dim the object for the observer. This is a likely explanation of the fading of the V1309 Sco progenitor observed during a year before the beginning of the 2008 eruption (see Fig. \[lightcurve\]). The progressive shortening of the orbital period finally led to the secondary being engulfed by the primary’s envelope. This probably took place at some point in February 2008, when the signs of the binary motion disappeared from the light curve. The secondary, or rather its core, now spiralling in the common envelope, started to release the orbital energy and angular momentum at an increasing rate. The result was the gradual and relatively gentle brightening of the object, doubling the brightness every $\sim$19 days (Fig. \[fig\_rise\]). This phase lasted until $\sim$20 August (JD $\simeq$2454700), when the eruption abruptly accelerated and the object brightened by a factor of $\sim$300 during $\sim$10 days. Perhaps this was a signature of a final disruption of the secondary’s core deep in the envelope. In the merger process, especially during its initial, relatively gentle phases until about August 2008, mass loss probably occurred mainly in directions close to the orbital plane of the progenitor. As a result, it is likely that an extended disc-like envelope was then formed, where a significant portion of the angular momentum of the progenitor was probably stored. The main eruption in August 2008, partly blocked by the envelope, was then likely to occur mainly along the orbital axis. @mason10 interpret the line profiles observed during the outburst, especially those of the Balmer series, as produced in a partially collimated outflow and a slowly expanding shell that is denser in the equatorial plane. As can be seen from Table \[tab\_evol\], V1309 Sco attained a maximum luminosity of $\sim 3\ 10^4~{\rm L}_\odot$. The OGLE data show that the period when V1309 Sco was brighter than $I \simeq 11$ magnitude lasted 40 days. Both the maximum luminosity and the time scale of the outburst are close to the theoretical expectations for a solar-mass star merging with a low-mass companion [@soktyl06]. During $\sim$30 days, when the luminosity of V1309 Sco was $\ga 10^4~{\rm L}_\odot$, the object radiated energy of $\sim 3\ 10^{44}$ ergs. This is $\la$1% of the energy available in the binary progenitor (see above). Of course, a considerable amount of energy was also lost during the six-month rise, as well as in the decline after maximum. Certainly a significant amount was carried away in mass loss. Finally, energy was also stored in the inflated remnant. Estimates made for V838 Mon show that the total energy involved in the outburst can be a factor of 10 – 20 higher than the energy observed in radiation [@tylsok06]. Even this case can be accounted for by the available orbital energy.
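For readers who want to reproduce the energy-budget numbers above, the following is a minimal sketch (not the authors’ code) that evaluates Eqs. (\[eorb\_eq\]) and (\[jorb\_eq\]) and the energy released by the observed 1.2% period decrease; the total masses and mass ratios are the assumed extremes of the ranges from Sect. \[sect\_ld\].

```python
import numpy as np

G = 6.674e-8        # cgs
M_sun = 1.989e33
day = 86400.0

def binary_orbit(M_tot_msun, q, P_days):
    """Separation, orbital energy and angular momentum of a circular binary (cgs).

    M_tot_msun : total mass in solar masses; q = M2/M1; P_days : orbital period.
    """
    M = M_tot_msun * M_sun
    P = P_days * day
    A = (G * M * (P / (2.0 * np.pi))**2)**(1.0 / 3.0)    # Kepler's third law
    E_orb = -G * M**2 / (2.0 * A) * q / (1.0 + q)**2      # Eq. (eorb_eq)
    J_orb = np.sqrt(G * M**3 * A) * q / (1.0 + q)**2      # Eq. (jorb_eq)
    return A, E_orb, J_orb

# Assumed extremes from Sect. sect_ld: M_tot = 1-3 M_sun, q = 0.07-1.0, P = 1.43 d.
for M_tot, q in [(1.0, 0.07), (3.0, 1.0)]:
    A, E, J = binary_orbit(M_tot, q, 1.43)
    # A 1.2% period decrease implies a ~0.8% orbit shrinkage (A ~ P^(2/3)),
    # i.e. a ~0.8% increase of |E_orb| dissipated over ~6 yr.
    dE = 0.008 * abs(E)
    rate = dE / (6.0 * 365.25 * day)
    print(f"M={M_tot} q={q}: A={A/6.957e10:.1f} R_sun, |E_orb|={abs(E):.2e} erg, "
          f"J_orb={J:.2e} g cm^2/s, dE={dE:.1e} erg, <dE/dt>={rate:.1e} erg/s")
```

The two printed cases bracket the values quoted in the text: separations of roughly 5 – 8 R$_\odot$, $|E_{\rm orb}|$ of a few $10^{46}$ to a few $10^{47}$ ergs, and a released energy around $10^{45}$ ergs over the six pre-outburst years.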
Our SAAO photometry performed in August 2010, as well as the OGLE $V$ and $I$ measurements obtained in 2010 (see Sect. \[burst\_data\]), i.e. about two years after the outburst maximum, show that V1309 Sco reached a luminosity (see Table \[tab\_evol\]) comparable with its preoutburst value (see Sect. \[sect\_ld\]). The object was, however, significantly cooler than before the outburst (early M-type spectrum versus K1–2). The latter agrees with what was observed in V838 Mon and V4332 Sgr, which displayed M-type spectra several years after their outbursts [@tyl05; @tcgs05]. However, the drop in luminosity of V1309 Sco was unexpectedly deep. Both V838 Mon and V4332 Sgr at present (many years after outburst) remain significantly more luminous than their progenitors. Most likely the remnant of V1309 Sco, contracting after the outburst, partly disappeared from our view behind the disc-like envelope. Moving small-scale blobs of the envelope matter, absorbing and scattering the light from the central object, could have been responsible for the short-term variability shown in Fig. \[fig\_decl\]. The situation thus looks to be similar to that of the V4332 Sgr remnant, where the central object is most likely hidden in an opaque dusty disc [@kst10; @kt11]. Conclusions =========== The principal conclusion of our study is that all observed properties of V1309 Sco, i.e. the light curve of the progenitor during six years before the 2008 eruption, as well as the outburst itself, can be consistently explained by the merger of a contact binary. This is the first direct observational evidence that contact binary systems indeed end their evolution by merging into single objects, as predicted in numerous theoretical studies of these systems. Our study also provides conclusive evidence in favour of the hypothesis that the V838 Mon-type eruptions (red novae) result from stellar mergers, as originally proposed in @soktyl03 and @tylsok06. In particular, long- and short-term variabilities of the progenitors, such as those of V838 Mon and V4332 Sgr [@goran07; @kimes07], which were sometimes raised as evidence against the merger hypothesis, now appear natural in view of our data for the V1309 Sco progenitor. We do not claim that all observed eruptions of the V838 Mon type are mergers of contact binaries. There can be different ways leading to stellar mergers. What the case of V1309 Sco evidently shows is that the observational appearances of a stellar merger are indeed the same as those observed in the V838 Mon-type eruptions. The outburst of V1309 Sco was shorter and less luminous than those of V838 Mon and the extragalactic red novae. The latter objects attained luminosities of about or above $10^6~{\rm L}_\odot$ and their eruptions lasted a few months. These differences are most likely caused by the masses of the merging stars. For V838 Mon, an $\sim8~{\rm M}_\odot$ primary was probably involved [@tylsok06], instead of the $\sim1~{\rm M}_\odot$ primary of V1309 Sco. As noted in Sect. \[sect\_cb\], the orbital period of the V1309 Sco progenitor of $\sim$1.4 days is long for the observed population of contact binaries. From a study of contact binaries discovered by the OGLE project in a sky region very close to the position of V1309 Sco, @rucin98 concluded that the W UMa-type sequence sharply ends at an orbital period of 1.3–1.5 days [see also @pacz06], i.e. just at the orbital period of the V1309 Sco progenitor.
This can be a pure coincidence (just one case observed), but can also indicate that binaries passing through contact at periods $\ga$1 day are not rare, but that the contact phase in their case is relatively short and quickly leads to a merger. V1309 Sco, an overlooked object [only one research paper published so far, i.e. @mason10], deserves much more attention of the observers and astrophysicists, as do the other V838 Mon-type objects. Apart from supernovae, they belong to the most powerful stellar cataclysms. As often happens in nature, cataclysms destroy old worlds, but also give birth to new ones. What will develop from the stellar mergers? Fast rotating giants, similar to FK Com? Peculiar stars with circumstellar discs, when new generation planets can be formed? To answer these questions, we just have to follow the evolution of the V838 Mon-type objects, V1309 Sco in particular. Berger, E., Soderberg, A. M., Chevalier, R. A. et al. 2009, , 699, 1850 Bond, H. E., Henden, A., Levay, Z. G. et al. 2003, 422, 405 Bonnell, I. A., Matthew, R. B. & Zinnacker, H. 1998, , 298, 93 Bopp, B. W. & Stencel, R. E. 1981, , 247, L131 Eggleton, P. P. 1983, , 268, 368 Girardi, L., Bressan, A., Bertelli, G., & Chiosi, C. 2000, , 141, 371 Goranskij, V. P., Metlova, N. V., Shugarov, S. Yu. et al. 2007, in “The Nature of V838 Mon and its Light Echo”, R. L. M. Corradi & U. Munari eds., ASP Conf. Ser., 363, 214 Iben, I., Tutukov, A. V. 1992, , 389, 369 Kamiński, T., Schmidt, M., & Tylenda, R. 2010, , 522, A75 Kamiński, T. & Tylenda, R. 2011, , 527, A75 Kashi, A., Frankowski, A., & Soker, N. 2010, , 709, L11 Kashi, A. & Soker, N. 2011, arXiv:1011.1222 Kimeswenger, S. 2007, in “The Nature of V838 Mon and its Light Echo”, R. L. M. Corradi & U. Munari eds., ASP Conf. Ser., 363, 197 Korhonen, H., Berdyugina, S. V., Hackman, T., Ilyin, I. V., Strassmeier, K. G. & Tuominen, I. 2007, , 476, 881 Kulkarni, S. R., Ofek, E. O., Rau, A. et al. 2007, , 447, 458 Lawlor, T. M. 2005, , 361, 695 Leonard, P. J. T. 1989, , 98, 217 Martini, P., Wagner, R. M., Tomaney A, et al. 1999, , 118, 1034 Mason, E., Diaz, M., Williams, R. E., Preston, G. & Bensby, T. 2010 , 516, A108 Mould, J., Cohen, J., Graham, J. R. et al. 1990, , 353, L35 Munari, U., Henden, A., Kiyota, S. et al. 2002, , 389, L51 Munari, U., Henden, A., Vallenari, A. et al. 2005, , 434, 1107 Nakano, S. 2008, , 8972 Paczyński, B., Szczygieł, D. M., Pilecki, B. & Pojmański, G. 2006, , 368, 1311 Rasio, F. A. 1995, , 444, L41 Robertson, J. A. & Eggleton, P. P. 1977, , 179, 359 Rucinski, S. M. 1998, , 115, 1135 Rudy, R. J., Lynch, D. K., Russell, R. W., Kaneshiro, B., Sitko, M., & Hammel, H. 2008a, , 8976 Rudy, R. J., Lynch, D. K., Russell, R. W., Sitko, M., Woodward, C. E., & Aspin, C. 2008b, , 8997 Schwarzenberg-Czerny, A. 1996, , 460, L107 Shara, M. M., Yaron, O., Prialnik, D., Kovetz, A., Zurek, D. 2010, , in press, arXiv:1009.3864 Soker, N. & Tylenda, R. 2003, , 582, L105 Soker, N. & Tylenda, R. 2006, , 373, 733 Soker, N. & Tylenda, R. 2007, in “The Nature of V838 Mon and its Light Echo”, R. L. M. Corradi & U. Munari eds., ASP Conf. Ser., 363, 280 Tylenda, R. 2005, , 436, 1009 Tylenda, R., Crause, L. A., Górny, S. K., & Schmidt, M. R. 2005, , 439, 651 Tylenda, R. & Soker, N. 2006, , 451, 223 Udalski, A. 2003, , 53, 291 Udalski, A., Szymański, M. K., Soszyński, I. & Poleski, R. 2008, , 58, 69 Webbink, R. F. 1976, , 209, 829 Yakut, K. & Eggleton, P. P. 
2005, , 629, 1055 Periodograms from the preoutburst observations {#periodograms} ============================================== Figures \[per\_fig1\] and \[per\_fig2\] present periodograms derived from the OGLE observations of the progenitor of V1309 Sco in 2002–2007. The periodograms were obtained using the method of @schwarz, which fits periodic orthogonal polynomials to the observations and evaluates the quality of the fit with an analysis of variance (AOV). The resulting AOV statistic is plotted on the ordinate of the figures. In all seasons the dominant peak is at a frequency of $\sim$0.7 day$^{-1}$, which we interpret as the frequency of the orbital period of the contact binary progenitor (Sect. \[interpret\]). In earlier seasons, there is also a strong peak at a frequency of $\sim$1.4 day$^{-1}$, which arises because the observations can also be reasonably fitted with quasi-sinusoidal variations having a period half as long as the orbital period. In 2006 this peak is much weaker and it practically disappears in 2007, which reflects the evolution of the light curve as displayed in Fig. \[fig\_lc\]. All other peaks are aliases resulting from combinations of the two frequencies above with a frequency of 1 day$^{-1}$, or subharmonics of the main peaks. ![Periodograms from the observations of the progenitor of V1309 Sco in 2002–2004 (panels: per_2.eps, per_3.eps, per_4.eps).[]{data-label="per_fig1"}](per_2.eps "fig:") ![Periodograms from the observations of the progenitor of V1309 Sco in 2005–2007 (panels: per_5.eps, per_6.eps, per_7.eps).[]{data-label="per_fig2"}](per_5.eps "fig:") [^1]: Based on observations obtained with the 1.3-m Warsaw telescope at the Las Campanas Observatory of the Carnegie Institution of Washington. [^2]: http://ogle.astrouw.edu.pl [^3]: available at http://www.aavso.org/ [^4]: http://irsa.ipac.caltech.edu/applications/DUST/ [^5]: available at http://www.aavso.org/
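The periodograms above were computed with the orthogonal-polynomial AOV method of @schwarz. As a rough illustration of the underlying idea only (a simple phase-binned analysis of variance, not a reimplementation of that method), the following sketch searches a synthetic light curve for a periodic signal; the function name, bin count, and test data are invented for the example.

```python
import numpy as np

def aov_periodogram(t, y, freqs, n_bins=10):
    """Simple phase-binned analysis-of-variance (AOV) statistic.

    For each trial frequency the data are folded in phase and grouped into
    n_bins phase bins; the statistic is the ratio of between-bin variance to
    within-bin variance (large values indicate a strong periodic signal).
    """
    stats = np.empty(len(freqs))
    y_mean = y.mean()
    for i, f in enumerate(freqs):
        phase = (t * f) % 1.0
        bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        s_between, s_within, used = 0.0, 0.0, 0
        for b in range(n_bins):
            yb = y[bins == b]
            if yb.size == 0:
                continue
            used += 1
            s_between += yb.size * (yb.mean() - y_mean) ** 2
            s_within += ((yb - yb.mean()) ** 2).sum()
        stats[i] = (s_between / (used - 1)) / (s_within / (y.size - used))
    return stats

# Synthetic test: a 1.43-day signal sampled like a sparse seasonal campaign
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 200.0, 400))          # days
y = 0.1 * np.sin(2 * np.pi * t / 1.43) + 0.02 * rng.standard_normal(t.size)

freqs = np.linspace(0.05, 3.0, 3000)               # day^-1
aov = aov_periodogram(t, y, freqs)
print("peak at %.3f day^-1 (expected ~%.3f)" % (freqs[aov.argmax()], 1 / 1.43))
```

For a 1.43-day test period the peak lands near 0.7 day$^{-1}$, the same region as the dominant peak discussed above; aliases at combinations with 1 day$^{-1}$ appear for sparse, nightly sampling, as in the real data.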
In the RF transmission of digital information, sampled data sequences are converted to analog signals and processed, subsequently, by various operations containing unwanted nonlinearities. The primary source of nonlinearity is the power amplifier (PA). Nonlinear behavior of the PA (or other devices) can be compensated using digital predistortion (DPD). That is, the correction signal is a sampled sequence applied prior to the PA to create a corrected signal which compensates for nonlinear modes in the transmitter. The nonlinear behavior of the PA transfer characteristics can be classified as memoryless or memory based. For a memoryless nonlinear device, the nonlinear modes are functions of the instantaneous input value, x(t), only. In contrast, for a PA exhibiting memory effects, the nonlinear modes are functions of both instantaneous and past input values. In general, memory effects exist in any PA; however, the effect becomes more apparent when the bandwidth of the input signal is large. As a result, the correction of memory effects will become increasingly more important as wide bandwidth modulation formats are put in use. Therefore a need presently exists for an improved digital predistortion system where, in addition to correcting memoryless nonlinearities, the specific problem of compensating for memory effects associated with the power amplifier is addressed.
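To make the memoryless-versus-memory distinction concrete, here is a minimal sketch of a memory-polynomial predistorter, one common way such a correction signal is parameterized. It is illustrative only and is not the specific predistortion scheme described here: the toy PA model, coefficient values, and the indirect-learning fit are assumptions made for the example.

```python
import numpy as np

def memory_polynomial(x, coeffs):
    """Apply y[n] = sum_{k,m} c[k,m] * x[n-m] * |x[n-m]|^(2k)  (odd orders 1, 3, 5, ...).

    x      : complex baseband samples
    coeffs : array of shape (K, M) -- K nonlinear orders, M memory taps.
             With M = 1 the model is memoryless (depends on x[n] only).
    """
    K, M = coeffs.shape
    y = np.zeros_like(x, dtype=complex)
    for m in range(M):
        xm = np.roll(x, m)       # delayed copy x[n-m]
        xm[:m] = 0.0             # discard wrapped samples for this illustration
        for k in range(K):
            y += coeffs[k, m] * xm * np.abs(xm) ** (2 * k)
    return y

def fit_predistorter(x, y_pa, K=3, M=3):
    """Indirect-learning fit: least-squares coefficients mapping the PA output
    back to the PA input, reused as the predistorter."""
    cols = []
    for m in range(M):
        ym = np.roll(y_pa, m); ym[:m] = 0.0
        for k in range(K):
            cols.append(ym * np.abs(ym) ** (2 * k))
    B = np.column_stack(cols)                    # basis matrix
    c, *_ = np.linalg.lstsq(B, x, rcond=None)
    return c.reshape(M, K).T                     # reshape back to (K, M)

# Toy demonstration with an invented, mildly nonlinear PA that has one memory tap
rng = np.random.default_rng(0)
x = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) * 0.2
pa = lambda s: s - 0.15 * s * np.abs(s) ** 2 + 0.05 * np.roll(s, 1)   # assumed PA model
dpd_coeffs = fit_predistorter(x, pa(x))
corrected = pa(memory_polynomial(x, dpd_coeffs))
print("rms error before DPD:", np.sqrt(np.mean(np.abs(pa(x) - x) ** 2)))
print("rms error after  DPD:", np.sqrt(np.mean(np.abs(corrected - x) ** 2)))
```

Setting M = 1 reduces the same model to a purely instantaneous corrector, which is exactly the memoryless case contrasted above; additional taps (M > 1) are what allow the predistorter to track memory effects that grow with signal bandwidth.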
R K Damodaran R K Damodaran (Malayalam: ആർ കെ ദാമോദരൻ) (born 1 August 1953) is a poet and lyricist who has worked predominantly in the Malayalam film industry. He also worked as a journalist at Mathrubhumi from 1982 to 2013. He has written lyrics for almost 3600 songs in the devotional, political, environmental, drama, and light music genres, including two Sanskrit songs. He has worked in more than 100 Malayalam films and written songs such as "Ravivarma Chithrathin", "Thalam Thettiya Tharatt", "Manjil Chekkerum", "Sukham", "Chandrakiranathin Chandanamunnum", "Thani Thankakkinapponkal", and "Pakalppoove". Career He studied for a BA in Malayalam at Maharaja's College, Kochi, and Sanskrit at Bharatiya Vidya Bhavan, Kochi. He entered the film music world in 1977, when he was a second-year BA student at Maharaja's College. "Ravivarma Chithrathin Rathi Bhavame", written for the 1978 movie 'Raju Rahim' (recorded on Wednesday, 2 November 1977, at the AVM-C theatre, Chennai), was his debut song. Soon he carved out a name for himself in the Malayalam film industry. During a career spanning over four decades, RK has penned 118 film songs and has worked with music masters such as Dakshinamurthy, Devarajan Master, M. S. Viswanathan, Ilayaraja, Arjunan Master, Johnson, Raveendran, Syam, S. P. Venkitesh, Jerry Amaldev, Perumbavoor G. Raveendranath, Vidyadharan, Mohan Sithara, T. S. Radhakrishnan, Vidya Sagar, K. P. Udayabhanu, M. Jayachandran, Deepak Dev, and Berny-Ignatious. The film 'Cleopatra', released in 2013, stands as his last film. Besides writing lyrics for Malayalam movies, he has also published four books. Of these, two are collections of his poetry, "Athunathanam" and "Kadharaavaneeyam"; the other two are devotional song collections, "Amme Narayana" and "Aravana Madhuram". Apart from the books, RK wrote two dramas, Poorapparambu and Kannakiyude Mula. He has also learned the chenda, Kerala's traditional percussion instrument, from Babu Kanjilasseri of Kozhikode. RK was a member of the Kerala Sangeetha Nataka Academy, which is run by the Government of Kerala, from 2001 to 2004. Since 2012 he has been an executive member of Bharat Bhavan, which comes under the Department of Culture, Government of Kerala. He has also been an executive member of the Samastha Kerala Sahithya Parishad since 2016. Personal life RK Damodaran was born to Manjapra Kothanath Chirayil Kalathil Ramankutty Nair and Palakkad Pallatheri Kappadathu Puthanveettil Kalyani Kutti Amma on 1 August 1953 in Kochi. His family is said to have shifted from Palakkad to Kochi, and RK was deeply influenced by the fertile cultural landscape of his ancestral place. RK married Rajalakshmy (a native of North Paravoor) on 7 June 1985. They have a daughter named Anagha and currently reside in Kochi, Kerala. Awards These are some of the awards and achievements in the career of R. K. Damodaran.
Kerala Sangeetha Nataka Academy Kalasree Award - 2013
Kunjunni Master Award for poetry - 2008
Vaadya Mithra Award with Suvarna Mudra - 2006
Kesava Poduval Smaraka Puraskaram - 2018
Pavakulathamma Award - 2018
P. Gokulapalan Sangam Kala Group Award - 2017
Thirumantham Kunnu Neerajanam Award - 2014
Parasseri Meen Kulathi Bhagavathi Temple’s Bhadrapriya Award - 2014
Paloor Sree Subrahmanya Swami Temple Award - 2014
Akhila Bharathiya Ayappa Samithi Award - 2014
Pattambi Sreethali Mahadeva Puraskaram - 2013
Oottoor Unni Namboothirippadu Smaraka Puraskaram - 2011
Kashypa Veda Research Foundation Award - 2010
Mayilppeeli Award from Guruvayoor - 2010
Thathvamasi Award - 2010
Pambadi Pambumkavu Sree Nagaraja Puraskaram - 2009
Jaycey Foundation Award - 2005
Kerala Film Audience Council Award - 2004 & 2005
Sangam Kala Group Award - 2003
MTV Award and Smrithi Award - 2002
Harivarasanam Award - 2001
Drisya Award - 2000, 2002, 2004 & 2007
Malayalam Tele Viewers Association Award - 2000
Ayyappa Ganasree Award - 1994
IPTA Award for the Best National Integration Song - 1992
Nana Miniscreen Award - 1991
Chottanikkara Narayana Marar Memorial Nava Naaraayam Award - 2018
References
External links
R K Damodaran Famous Movie Songs
40 Years of R K Damodaran
Kerala Sangeetha Nataka Akademi Awards
Category:1953 births Category:Living people Category:Indian male poets Category:Maharaja's College, Mysore alumni Category:Malayalam poets Category:People from Kochi
The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com. How many times must the left tell Americans what it thinks of them before Americans realize a simple fact: Leftist leaders simply don't like half the country? In 2012, the media lost its mind over former Gov. Mitt Romney's statement that 47 percent of Americans "who are dependent upon government, who believe that they are victims, who believe the government has a responsibility to care for them" would vote for President Obama. This apparently demonstrated that Romney hates everyday Americans. Disdains them. Sees them as moochers. In 2008, then-Sen. Barack Obama claimed that small-town Americans in the Midwest are benighted hicks. "It's not surprising then they get bitter, they cling to guns or religion or antipathy toward people who aren't like them or anti-immigrant sentiment or anti-trade sentiment as a way to explain their frustrations," he said. This received attention from the conservative press, but was downplayed by the mainstream media, or brushed off as accurate. This weekend, Hillary Clinton echoed Obama. She said: "To just be grossly generalistic, you could put half of Trump's supporters into what I call the basket of deplorables. Right? The racist, sexist, homophobic, xenophobic, Islamophobic -- you name it. And unfortunately there are people like that." The other half of Trump supporters, Clinton said, are little better: "But that other basket of people are people who feel that the government has let them down, the economy has let them down, nobody cares about them, nobody worries about what happens to their lives and their futures, and they're just desperate for change. ... Those are people we have to understand and empathize with as well." Clinton's language is far more telling than Obama's. Democrats routinely see voters they don't understand as morally deficient. That provides them the comforting illusion that disagreement reflects lack of virtue. And that means that their policies need not succeed -- success or failure is irrelevant to the ethical question of how to vote. Good people will vote for them regardless of track record, while bad people will oppose them. But Clinton's language goes further. Where Obama simply labels his opponents as bad guys, Clinton suggests that Romney was right: Those who are her potential supporters are pathetic losers waiting for government to save them. They are disappointed with the economy. They think the government must do more. They just need some tender, loving care from Clinton, and then they'll realize that Trump isn't the man for them. This means that the sneering tone so many people detected in Romney exists among Democrats for their own constituents. Clinton doesn't label her potential voters self-sufficient Americans seeking an equal opportunity. No. They're grievance-mongers, ne'er-do-wells and people who believe they are victims, who believe government has an obligation to take care of them. And she thinks she can draw them to the Democratic Party. So, where are all the good Americans? To Democrats they don't exist. There are just the deplorables and the needies -- and the elites who control them. That's the scariest thing about the Clinton vision for America. Nobody deserves freedom because nobody wants freedom. Everyone is either a racist or in need of saving; everyone needs a cure, either of their soul or their material well-being. 
And Clinton thinks she can provide that cure, by crushing half of Trump's supporters and co-opting the other half. She's only missing one thing: Most Trump supporters, and most Americans, aren't bitter clingers or victims. They're independent human beings, waiting for a candidate who wants to grant them that independence -- if any elite is willing to stand up for it.
UNITED STATES DISTRICT COURT FOR THE DISTRICT OF COLUMBIA ADRIENNE DURSO, et al., Plaintiffs, v. JANET NAPOLITANO, in her official capacity as Secretary of Homeland Security, Civil Action 10-02066 (HHK) and JOHN S. PISTOLE, in his official capacity as Administrator of the Transportation Safety Administration, Defendants. MEMORANDUM OPINION Plaintiffs Adrienne Durso, D. Chris Daniels, and Michelle Nemphos (on behalf of her minor child C.N.) bring this action against Secretary of Homeland Security Janet Napolitano and Administrator of the Transportation Safety Administration (“TSA”) John S. Pistole, challenging TSA’s use of advanced imaging technology (“AIT”) and aggressive pat-downs to screen airline passengers at airports. Plaintiffs allege that TSA’s use of these measures violates the Fourth Amendment’s ban on unreasonable searches and seizures. Before the Court is defendants’ motion to dismiss [#5], which argues that, because the challenged screening procedures are employed pursuant to a TSA order, the U.S. courts of appeals have exclusive jurisdiction over plaintiffs’ challenge thereto. Upon consideration of the motion, the opposition thereto, and the record of this case, the Court concludes that the motion must be granted. I. BACKGROUND Following the September 11, 2001, attacks, Congress created TSA “to prevent terrorist attacks and reduce the vulnerability of the United States to terrorism within the nation’s transportation networks.” Def.’s Mot. to Dismiss Ex. 1 (“Kair Decl.”) ¶ 8. TSA’s responsibilities include civil aviation security. See 49 U.S.C. §§ 114(d)(1), 44901 et seq. To aid in TSA’s aviation security mission, Congress has directed the Secretary of Homeland Security to “give a high priority to developing, testing, improving, and deploying, at airport screening checkpoints, equipment that detects nonmetallic, chemical, biological, and radiological weapons, and explosives, in all forms, on individuals and in their personal property.” Id. § 44925(a). TSA’s operations are guided in part by Standard Operating Procedures (“SOPs”), which provide “uniform procedures and standards” that TSA must follow. Kair Decl. ¶ 10. At issue here is TSA’s Screening Checkpoint SOP, which “sets forth in detail the mandatory procedures that [Transportation Security Officers] must apply in screening passengers at all airport checkpoints, and which passengers must follow in order to enter the sterile area of any airport.” Kair Decl. ¶ 10. The SOP was revised on September 17, 2010 to “direct[] the use of AIT machines as part of TSA’s standard security screening procedures, as well as the use of revised procedures for the standard pat-down.” Kair Decl. ¶ 11. Pursuant to the revised Screening Checkpoint SOP, TSA uses two types of AIT systems: backscatter x-ray machines, and millimeter wave scanners. Kair Decl. ¶¶ 16–17. Because the SOP in question contains sensitive security information, it has not been publicly released and is not part of the record before the Court. See Def.’s Mem. in Supp. of Mot. to Dismiss (“Def.’s Mem.”) at 4 n.2. Each plaintiff alleges that he or she has been required to undergo AIT screening or the 2 revised pat-down procedure at an airport checkpoint. Durso, who had undergone a mastectomy as part of breast cancer treatment, describes a humiliating and painful patdown in which a TSA agent “repeatedly and forcefully . . . prodded” at her chest. Compl. ¶¶ 5, 24–36. 
Daniels experienced “an aggressive and invasive pat-down of his genitals,” an experience exacerbated by a childhood injury. Compl. ¶¶ 6, 37–54. And Nemphos asserts that C.N., her twelve-year-old daughter, was pulled out of the security screening line and forced to undergo an AIT scan without the knowledge or consent of her parents and without being given an opportunity to refuse. Compl. ¶¶ 8, 55–63. Nemphos alleges that this process violated her family’s religious beliefs, by allowing a TSA agent to view an image of C.N.’s naked body, and exposed C.N. to dangerous radiation. Compl. ¶ 60. Plaintiffs filed this action on December 6, 2010, alleging that TSA’s screening procedures violate the Fourth Amendment’s ban on unreasonable searches and seizures. See U.S. CONST . amend. IV. II. LEGAL STANDARD Under Federal Rule of Civil Procedure 12(b)(1), a defendant may move to dismiss a complaint, or a claim therein, for lack of subject-matter jurisdiction. FED . R. CIV . P. 12(b)(1); see Kokkonen v. Guardian Life Ins. Co. of Am., 511 U.S. 375, 377 (1994) (“Federal courts are courts of limited jurisdiction. . . . It is to be presumed that a cause lies outside this limited jurisdiction . . . .”). In response to such a motion, the plaintiff must establish that the Court has subject-matter jurisdiction over the claims in the complaint. See Shuler v. United States, 531 F.3d 930, 932 (D.C. Cir. 2008). If the plaintiff is unable to do so, the Court must dismiss the action. Steel Co. v. Citizens for a Better Env’t, 523 U.S. 83, 94 (1998) (citing Ex parte 3 McCardle, 7 U.S. 506, 514 (1868)). When resolving a motion made under Rule 12(b)(1), a court may consider material beyond the allegations in the plaintiff’s complaint. Jerome Stevens Pharm., Inc. v. FDA, 402 F.3d 1249, 1253–54 (D.C. Cir. 2005). III. ANALYSIS Defendants move to dismiss this action on the ground that it challenges a final TSA order — namely, the Screening Checkpoint SOP — and thus, pursuant to 49 U.S.C. § 46110, falls within the exclusive jurisdiction of the U.S. courts of appeals. In relevant part, § 46110 provides that a person disclosing a substantial interest in an order issued by the Secretary of Transportation (or the Under Secretary of Transportation for Security with respect to security duties and powers designated to be carried out by the Under Secretary . . . ) . . . may apply for review of the order by filing a petition for review in the United States Court of Appeals for the District of Columbia Circuit or in the court of appeals of the United States for the circuit in which the person resides or has its principal place of business. 49 U.S.C. § 46110(a).1 The court of appeals in which such a petition is filed “has exclusive jurisdiction to affirm, amend, modify, or set aside any part of the order and may order the Secretary, Under Secretary, or Administrator to conduct further proceedings.” Id. § 46110(c). Defendants contend that this language divests this Court of jurisdiction to adjudicate plaintiffs’ Fourth Amendment claim. Plaintiffs make a number of responses. First, they contend that the Screening Checkpoint SOP is not an “order” that is subject to § 46110. Second, they argue that § 46110 does not apply 1 Thanks to TSA’s move from the Department of Transportation to the Department of Homeland Security in 2002, statutory references to the “Under Secretary of Transportation for Security” are now understood to refer to the TSA Administrator. See, e.g., In re Sept. 11 Litig., 236 F.R.D. 164, 174 (S.D.N.Y. 2006). 
4 to this case because their constitutional challenge to TSA’s procedures is distinct from a challenge to the SOP. And third, they contend that forcing them to proceed in a court of appeals would constitute a denial of due process. The Court addresses each issue in turn. A. The Screening Checkpoint SOP is an Order Subject to § 46110 Although § 46110 does not define the term “order,” the D.C. Circuit has explained what constitutes an order thereunder: To be deemed ‘final’ and thus reviewable as an order under 49 U.S.C. § 46110, an agency disposition ‘must mark the consummation of the agency’s decisionmaking process,’ and it ‘must determine rights or obligations or give rise to legal consequences.’ As a general principle, ‘the term order in [section 46110] should be read expansively.’ Safe Extensions, Inc. v. FAA, 509 F.3d 593, 598 (D.C. Cir. 2007) (quoting City of Dania Beach v. FAA, 485 F.3d 1181, 1187 (D.C. Cir. 2007)) (alteration in original) (internal citation omitted); see City of Dania Beach, 485 F.3d at 1188 (holding an agency letter to be a final order where nothing therein “indicate[d] that the [agency’s] statements and conclusions [we]re tentative, open to further consideration, or conditional on future agency action”). Here, defendants contend that the Screening Checkpoint SOP meets both of these criteria. It is final, they aver, because it sets forth firm requirements that apply to TSA and airline passengers alike, with no further agency action required to trigger those requirements. Likewise, they contend that it “give[s] rise to legal consequences” because it lays out procedures that passengers must follow if they wish to gain access to the restricted areas of an airport terminal. Plaintiffs do not dispute that the SOP gives rise to legal consequences. They do, however, assert that the SOP cannot constitute an order for three separate reasons: first, the SOP is not final; second, the SOP is not supported by an adequate administrative record; and third, 5 TSA did not provide public notice of the SOP’s issuance. The Court addresses each argument in turn. 1. Finality Plaintiffs first contend that the Screening Checkpoint SOP cannot be an order reviewable under § 46110 because it is not final. See Safe Extensions, 509 F.3d at 598. In support of this argument, plaintiffs point to defendants’ statement that the SOP can be “revised as necessary — and often upon short notice,” Kair Decl. ¶ 12, and to the fact that it has already been revised once since September 2010. Plaintiffs infer from the SOP’s revisability that it is not final. The Court does not agree. Simply put, plaintiffs provide no authority for the proposition that an otherwise- authoritative order is not final for the purposes of § 46110 simply because it is subject to revision. The rule that an order is not final unless it marks the “consummation” of the agency’s decisionmaking process does not mean that an order must be set in stone to be considered final; rather, it must have immediate effect. See Dania Beach, 485 F.3d at 1188; Vill. of Bensenville v. FAA, 457 F.3d 52, 69 (D.C. Cir. 2006) (holding that an FAA letter was not final because its adverse effect on the petitioners’ rights was contingent on future administrative action); Gilmore v. Gonzales, 435 F.3d 1125, 1130, 1133 (9th Cir. 2006) (holding a TSA security directive to be a final order because it had a direct and immediate effect, notwithstanding the fact that such directives were “revised frequently, as often as weekly”). 
Here, it is uncontested that the Screening Checkpoint SOP had immediate effect. Upon its adoption by TSA, the SOP mandated certain procedures that TSA and air travelers alike were required to follow. See Kair Decl. ¶¶ 10–11. Accordingly, the Court concludes that the SOP is final. 6 2. Adequate Record for Review Plaintiffs next argue that the Screening Checkpoint SOP is not subject to § 46110 because it is not supported by an adequate administrative record. Plaintiffs contend that the language of 49 U.S.C. § 46105 assumes that TSA orders reviewable under § 46110 will be supported by comprehensive administrative records. See id. § 46105(b) (stating that “[a]n order of the Secretary, Under Secretary, or Administrator shall include the findings of fact on which the order is based and shall be served on the parties to the proceeding and the persons affected by the order.” (emphasis added)). Plaintiffs further point to case law suggesting that an adequate record is a prerequisite for review by a court of appeals. See City of Rochester v. Bond, 603 F.2d 927, 932 (D.C. Cir. 1979) (“[T]he administrative record compiled by the FAA in the course of its proceedings is adequate for review in the court of appeals, a circumstance we have frequently held to be a principal indicium of ‘orders’ reviewable within the meaning of direct review statutes . . . .”); see also Gilmore, 435 F.3d at 1133 (“The existence of a reviewable administrative record is the determinative element in defining an FAA decision as an ‘order’ for purposes of Section [46110].” (quoting Sierra Club v. Skinner, 885 F.2d 591, 593 (9th Cir. 1989)) (alteration in original) (internal quotation marks omitted)). As defendants point out, however, the D.C. Circuit rejected plaintiffs’ position in Safe Extensions. There, the FAA argued that “to qualify as an order, an agency decision must not only be final, but also ‘be accompanied by a record sufficient to permit judicial review.’” Safe Extensions, 509 F.3d at 598 (quoting the FAA’s brief). The court responded: “This argument ignores our cases interpreting section 46110. In both Dania Beach and Bensenville we held that agency actions are reviewable as orders under section 46110 so long as they are final . . . .” Id. 7 (citing Dania Beach, 485 F.3d at 1187; Bensenville, 457 F.3d at 68) (emphasis added). As explained above, the Screening Checkpoint SOP is final. Thus, it need not be supported by an administrative record of any particular comprehensiveness to fall within the scope of § 46110.2 3. Public Notice Lastly, plaintiffs contend that the Screening Checkpoint SOP cannot be an agency order reviewable under § 46110 because it was not preceded by public notice. They argue that because § 46110 requires petitions thereunder to be filed within sixty days of an order’s issuance, see 49 U.S.C. § 46110(a), the public must be notified of any order’s promulgation. Likewise, they argue that § 46105’s requirement that orders include factual findings and be served on affected parties, see id. § 46105(b), establishes that public notice is a necessary step. Defendants respond that plaintiffs misread these provisions. Defendants are correct. In Avia Dynamics, Inc. v. FAA, 2011 WL 1466330 (D.C. Cir. Apr. 19, 2011), the D.C. Circuit explained that § 46110 and § 46105 do not use the term “order” in the same way; the former’s use of the term is broader, “because of its function in providing for judicial review.” Id. at *4. 
Thus, the fact that § 46105(b) requires “orders” to include factual findings and be served on affected parties does not mean that an agency determination made without those steps is not an “order” for the purposes of § 46110. See id. (explaining that “informal orders” that are “not subject to the procedural requirements laid out in . . . 49 U.S.C. § 46105(b)” can be reviewable orders under § 46110); see also Redfern v. Napolitano, 2011 WL 1750445, at *5 (D. Mass. May 9, 2011) (finding that courts, including the D.C. Circuit, “have rejected [the position] that the term ‘order’ as used in Section 46110 requires that persons receive notice” (citing Safe Extensions, 509 F.3d at 598, 599)). [Footnote 2: To the extent that Safe Extensions’s holding on this point contradicts City of Rochester — which is far from clear — the Court must follow the former. See IRS v. FLRA, 862 F.2d 880, 882 (D.C. Cir. 1988) (recent decisions of D.C. Circuit panels are controlling unless withdrawn or overruled en banc), rev’d on other grounds, 494 U.S. 922 (1990).] Likewise, defendants are correct that § 46110’s sixty-day deadline for the filing of petitions thereunder does not assume that orders will be preceded by public notice. Because the sixty-day clock does not begin to tick until an order is “issued,” 49 U.S.C. § 46110(a), i.e., “made public,” a plaintiff has sixty days to file a petition starting on “the date the order is officially made public.” Avia Dynamics, 2011 WL 1466330, at *3. Thus, plaintiffs’ concern that an order could take effect and trigger the sixty-day window without anyone knowing, thereby precluding any judicial review thereof, is unfounded: if an order is kept secret, the sixty-day period will be tolled until plaintiffs receive some notice of the order’s contents or effect. See id. at *3–4; Redfern, 2011 WL 1750445, at *6. And, as defendants observe, the Avia Dynamics court held that “the sixty-day deadline [in § 46110] does not constitute a jurisdictional bar.” 2011 WL 1466330, at *3. That holding further bolsters the conclusion that the sixty-day deadline does not affect the jurisdictional boundaries drawn in the other provisions of § 46110. See id. Accordingly, the Court concludes that TSA’s failure to provide public notice of the Screening Checkpoint SOP or its contents prior to its effective date does not prevent the SOP from being an order under § 46110. B. Plaintiffs’ Claim is Inescapably Intertwined with a Review of the SOP For the foregoing reasons, the Court concludes that the Screening Checkpoint SOP is an “order” in the meaning of § 46110.[3] Thus, the Court must next consider whether plaintiffs’ Fourth Amendment claim is inescapably intertwined with review of that order. The awkwardly named inescapable-intertwinement doctrine gives the courts of appeals jurisdiction over not only direct challenges to final agency orders but also any claims inescapably intertwined with the review of those orders. See Breen v. Peters, 474 F. Supp. 2d 1, 4 (D.D.C. 2007) (citing Merritt v. Shuttle, Inc., 245 F.3d 182, 187 (2d Cir. 2001)); see also Beins v. United States, 695 F.2d 591, 597–98 & n.11 (D.C. Cir. 1982).[4] The doctrine serves to prevent plaintiffs from collaterally attacking agency proceedings by presenting ostensibly independent claims. See Americopters, LLC v. FAA, 441 F.3d 726, 736 (9th Cir. 2006).
[Footnote 3: Other courts have reached the same conclusion as to this SOP, see Redfern, 2011 WL 1750445, at *6, and as to similar TSA orders. See Gilmore, 435 F.3d at 1133; Scherfen v. U.S. Dep’t of Homeland Sec., 2010 WL 456784, at *10–11 (M.D. Pa. Feb. 2, 2010); Tooley v. Bush, 2006 WL 3783142, at *26 (D.D.C. Dec. 21, 2006), rev’d in part on other grounds, Tooley v. Napolitano, 556 F.3d 836 (D.C. Cir. 2009), rev’d on rehearing, 586 F.3d 1006 (D.C. Cir. 2009); Green v. TSA, 351 F. Supp. 2d 1119, 1125 (W.D. Wash. 2005).] [Footnote 4: The inescapable-intertwinement doctrine applies only where a claim does not directly challenge the order in question; if it does, intertwinement is a moot point because § 46110 clearly applies. Here, defendants argue that plaintiffs’ claim presents such a direct challenge, but offer no authority to support that proposition; plaintiffs merely assume that their claim is not a direct challenge without saying so, or why. Regardless, the Court need not determine whether plaintiffs’ claim is actually a direct challenge because that claim is inescapably intertwined with review of the screening procedure SOP. Cf. Redfern, 2011 WL 1750445, at *6 (to the extent that the plaintiffs’ claims arose from the SOP but did not challenge it directly, those claims were inescapably intertwined).] In the inescapable-intertwinement inquiry, a “critical point” is whether review of the order by a court of appeals would allow for adjudication of the plaintiff’s claims and could result in the relief that the plaintiff requests. Breen, 474 F. Supp. 2d at 5 (citing Beins, 695 F.2d at 598 n.11); see also Merritt, 245 F.3d at 187 (“A claim is inescapably intertwined . . . if it alleges that the plaintiff was injured by [the] order and that the court of appeals has authority to hear the claim on direct review of the agency order.”). Here, defendants contend that plaintiffs’ Fourth Amendment claim is inescapably intertwined with a review of the SOP because the injuries that plaintiffs assert — their allegedly unconstitutional scans and pat-downs — were caused by the SOP. Defendants further argue that review in the court of appeals is appropriate because that court would be able to hear and rule on plaintiffs’ constitutional argument, and could provide the relief that plaintiffs seek, i.e., the termination of TSA’s current screening procedures.[5] Defendants are correct that a court of appeals could, in ruling on a § 46110 petition challenging the SOP, decide whether TSA’s screening procedures are consistent with the Fourth Amendment. See, e.g., Gilmore, 435 F.3d at 1135–39 (reaching the merits of the plaintiff’s constitutional claims, including his Fourth Amendment claim, on § 46110 review). But defendants’ argument that the court of appeals could give plaintiffs the relief they seek elides the distinction between the remedy sought by plaintiffs — a permanent injunction barring the use of AIT scanners or enhanced pat-downs as a primary means of screening air travelers — and the relief that the court of appeals can provide: “affirm[ing], amend[ing], modify[ing], or set[ting] aside any part of” the SOP. 49 U.S.C. § 46110(c). If plaintiffs prevailed before the court of appeals, the screening SOP would be modified or set aside; if plaintiffs prevailed before this Court, they would earn an injunction prohibiting TSA from employing the practices challenged by plaintiffs, on pain of contempt. See Firefighters Local Union No. 1784 v. Stotts, 467 U.S. 561, 600 n.5 (1984) (“An enjoined party is required to obey an injunction issued by a federal court within its jurisdiction . . . and failure to obey such an injunction is punishable by contempt.”). This difference is not trivial.[6] [Footnote 5: Defendants assert correctly that plaintiffs’ damages claims should not be part of the Court’s inescapable-intertwinement analysis because they are barred by sovereign immunity. See Hamrick v. Brusseau, 80 F. App’x 116, 116 (D.C. Cir. 2003) (“[T]he United States has not waived sovereign immunity with respect to actions for damages based on violations of constitutional rights by federal officials, whether brought against the United States directly, or against officers sued in their official capacities.” (citing Clark v. Library of Cong., 750 F.2d 89, 103 n.31 (D.C. Cir. 1984); Laswell v. Brown, 683 F.2d 261, 268 (8th Cir. 1982)) (internal citations omitted)); Beins, 695 F.2d at 598 n.11 (distinguishing City of Rochester on the ground that court of appeals review could not result in an award of damages, which was sought by the Beins plaintiff).] [Footnote 6: The Court does not mean to suggest that, if the court of appeals held TSA’s screening practices to be unconstitutional, TSA would flout that judgment by reinstituting those practices, under a new SOP or otherwise.] The question remains, however, whether this distinction is sufficient to place this case beyond the reach of the inescapable-intertwinement doctrine. The Court concludes that it is not. A basic purpose of the doctrine is to prevent plaintiffs from avoiding special review statutes through creative pleading. See Americopters, 441 F.3d at 736; United Transp. Union v. Norfolk & W. Ry. Co., 822 F.2d 1114, 1120 (D.C. Cir. 1987). If a plaintiff could proceed in the district court merely by asking for an injunction barring the agency from taking the action required by the order in question, then that purpose would be defeated. Thus, the Court concludes that this case fits the basic criteria for inescapable intertwinement: the court of appeals could hear plaintiffs’ constitutional claim, and could remedy the injury they allege by setting aside or modifying the SOP. Plaintiffs allege, however, that the doctrine does not apply to their claim for two further reasons: first, because there has been no true administrative process here, merely unilateral agency action.
Circuit has explained that a prime rationale therefor is that “coherence and economy are best served if all suits pertaining to designated agency decisions are segregated in particular courts.” City of Rochester, 603 F.2d at 936. Those goals are served by exclusive jurisdiction in the courts of appeals, regardless of whether there has been an administrative process. Second, plaintiffs contend that the inescapable-intertwinement doctrine does not apply to broad constitutional challenges, as opposed to claims focusing on individual agency decisions or adjudications. Plaintiffs again rely on the Ninth Circuit’s decision in Americopters, but again their position is contradicted by the later decision in Gilmore, where the court found that the plaintiff’s claims were inescapably intertwined even though those claims were, like plaintiffs’ Fourth Amendment claim here, broad constitutional challenges to airport security measures. See 13 Gilmore, 435 F.3d at 1133 n.9, 1135–39. Further, Americopters itself said that broad constitutional challenges are not inescapably intertwined if they seek damages. Americopters, 441 F.3d at 736. As noted, see supra note 5, that is not the case here. Thus, plaintiffs’ argument that broad constitutional challenges are categorically exempt from intertwinement analysis is without support.7 In sum: plaintiffs’ constitutional claim is inescapably intertwined with a review of the Screening Checkpoint SOP because a court of appeals reviewing the SOP could rule on that claim and could, by setting aside or modifying the SOP, provide approximately the remedy that plaintiffs request. Neither of the putative exceptions to the intertwinement doctrine proffered by plaintiffs is supported by authority. Thus, pursuant to that doctrine, plaintiffs’ claim must proceed before the court of appeals. C. Application of § 46110 Would Not Offend Due Process In a final effort to save this Court’s jurisdiction over their case, plaintiffs argue that an application of § 46110’s jurisdictional bar (either directly or via the inescapable-intertwinement doctrine) would violate their Fifth Amendment due process rights by foreclosing meaningful judicial review of TSA’s screening procedures. This is so, plaintiffs contend, for two reasons. First, the record before the court of appeals would consist solely of materials produced by TSA, and would not be geared to address a constitutional challenge to TSA’s procedures. Second, 7 Contrary to plaintiffs’ assertions, the Second Circuit’s first decision in Merritt v. Shuttle, Inc., 187 F.3d 263 (2d Cir. 1999) — which plaintiffs mistakenly attribute to the Ninth Circuit — does not confirm that a constitutional-challenge exception to the intertwinement doctrine exists. Rather, that court said: “We need not decide whether a broad-based, facial constitutional attack on an FAA policy or procedure — in contrast to a complaint about the agency’s particular actions in a specific case — might constitute appropriate subject matter for a stand-alone federal suit.” Id. at 271 (emphasis added). 14 § 46110 provides that agency findings of fact, “if supported by substantial evidence, are conclusive.” 49 U.S.C. § 46110(c). Plaintiffs assert that these factors combine to “tilt the playing field so heavily in Defendants’ favor that it would effectively deprive Plaintiffs of meaningful judicial review.” Pls.’ Opp’n at 19. There are two problems with plaintiffs’ due process argument. 
First, the cases on which plaintiffs rely do not stand for the proposition that special review statutes like § 46110 can effect a denial of due process by channeling cases directly into the courts of appeals. Plaintiffs rely primarily on McNary v. Haitian Refugee Center, Inc., 498 U.S. 479 (1991), in which the Supreme Court held that a special review provision of the Immigration and Nationality Act did not preclude district court jurisdiction over a procedural due process challenge to the Immigration and Naturalization Service’s administration of an unlawful-immigrant amnesty program. See id. at 483, 494. But McNary’s holding was statutory, not constitutional: the Court explained that the language of the provision in question did not reveal a congressional intent to restrict the type of claim at issue. See id. at 494; Gen. Elec. Co. v. Jackson, 610 F.3d 110, 126 (D.C. Cir. 2010) (stating that McNary’s holding “rested entirely on the Court’s analysis of the jurisdictional provision’s text”). Similarly, the cases that plaintiffs cite in attacking § 46110’s substantial-evidence standard were not constitutional decisions. See Aircraft Owners & Pilots Ass’n v. FAA, 600 F.2d 965, 970 (D.C. Cir. 1979).8 Second, and more fundamentally, plaintiffs’ due process arguments would not, even if 8 The Court does not suggest that Congress is free to restrict judicial review of constitutional challenges to agency action however it sees fit; a “‘serious constitutional question’ . . . would arise if an agency statute were construed to preclude all judicial review of a constitutional claim.” Thunder Basin Coal Co. v. Reich, 510 U.S. 200, 215 n.20 (1994) (quoting Bowen v. Mich. Acad. of Family Physicians, 476 U.S. 667, 681 n.12 (1986)). 15 correct, be sufficient to allow this Court to retain jurisdiction over their case. As plaintiffs concede, a court of appeals reviewing an agency determination has the authority to supplement the record. See 28 U.S.C. § 2347(c) (providing that, upon a proper showing, a court of appeals reviewing agency action “may order . . . additional evidence” to be accepted by the agency and filed with the reviewing court); Am. Wildlands v. Kempthorne, 530 F.3d 991, 1002 (D.C. Cir. 2008) (noting exceptions to the usual administrative-record-only rule). Thus, a court of appeals reviewing the SOP would be capable of addressing plaintiffs’ concerns that TSA’s administrative record will be incomplete or one-sided.9 Likewise, plaintiffs’ arguments regarding the standard of review are properly directed to the reviewing court of appeals. In Aircraft Owners, one of the parties objected to the application of the substantial-evidence standard, which was mandated by a special review statute similar to § 46110. See 600 F.2d at 969–72. The D.C. Circuit carefully considered whether applying the standard would be appropriate under the circumstances. See id. There is no reason to believe that it could not do the same before ruling on a petition challenging the Screening Checkpoint SOP.10 Thus, plaintiffs’ due process arguments, even if adequately supported by authority, would not be sufficient to save this Court’s jurisdiction over their case.11 9 Plaintiffs protest that courts of appeals “rarely” permit a party to supplement the administrative record, but the fact remains that such measures are allowed upon a proper showing. 10 Moreover, the substantial-evidence standard is contained in § 46110(c) and is separate from the jurisdictional language in § 46110(a). 
Thus, even if application of the substantial-evidence standard were somehow unconstitutional, this Court would still lack jurisdiction to hear plaintiffs’ Fourth Amendment claim. 11 The Court does not further address plaintiffs’ assertions (which are ostensibly part of their due process argument) that § 46110 is not meant to apply in the absence of a true record 16 IV. CONCLUSION TSA’s Screening Checkpoint SOP is an “order” in the meaning of 49 U.S.C. § 46110. Because plaintiffs’ Fourth Amendment claim is inescapably intertwined with a review of that order, and because an application of § 46110’s jurisdictional bar to that claim would present no due process problem, defendants’ motion to dismiss must be granted. An appropriate order accompanies this memorandum opinion. Henry H. Kennedy, Jr. United States District Judge and that district court review is necessary to ensure adequate fact-finding because those arguments simply retread the claims discussed here and above. 17
How to Sign Up for Chinese Online Games Recently, a large number of new free-to-play games have been released in China. With Monster Hunter Online and Call of Duty Online, there are several top-tier free-to-play games in the Middle Kingdom. Although these games are officially available only in China, that does not mean enterprising players abroad cannot play them. Playing Chinese games from overseas is entirely possible. These games currently do not check IP addresses, so you can play them from any IP address. The real concern is the sign-up process. With that said, let's begin. Apart from the requirement for a Chinese ID, it is actually not hard to play these games. Sure, you may not understand the language, and it may be hard to get through the game that way, but therein lies the extra challenge. Admittedly, not being able to read Chinese in a massively multiplayer online game may make menu navigation nearly impossible, but that does not make the game unplayable. Besides, many Chinese players can write and understand English. So, with all of that said and done, let's get started and sign up for a Chinese game. For this guide, let's go with the superbly cutesy MMO, Tao Yuan Online. After downloading the client, the next step is giving up your personal data! Yay! To do that, we have to hit a button saying "????", which means "to register an account". This is what the account registration form looks like. After that is all completed, it is as simple as launching the client, hitting the login button, and starting the game. I have been told by several game companies that they do accept foreign passports. However, being the paranoid person that I am, I am unwilling to give anyone apart from the United States Government my US-issued passport number. Some Chinese games require a bit more work to sign up for, but for the most part, signing up is this easy. For Tencent-made games, for example, you can use an even easier method: simply sign up for a Chinese QQ chat account.
Pheochromocytoma and tetralogy of Fallot: a rare but potentially dangerous combination. To describe a case of pheochromocytoma (PHEO) with tetralogy of Fallot (TOF) and discuss the difficulties encountered during the management of this patient, with a review of the literature. We report the clinical course, imaging, and management issues of our patient and review relevant literature. A 14-year-old female who was known to have TOF presented with classical paroxysmal symptoms and worsening dyspnea. She was diagnosed as having epinephrine-secreting PHEO based on biochemical, radiologic, and functional imaging. She was treated with an α-1 blocker for control of paroxysms but developed severe cyanotic spells. She required addition of a calcium-channel blocker for control of the paroxysms and underwent successful cardiac repair. Treatment of the combination of cyanotic congenital heart disease (CCHD) and PHEO requires an individualized and multidisciplinary approach with judicious use of available medications. This is the first case of uncorrected TOF and epinephrine-secreting PHEO. Our case also reiterates the need for further studies to better understand the pathophysiologic link between PHEO/paraganglioma and CCHD.
Effect of long-term contact lens wear on corneal endothelial cell morphology and function. Anterior segment fluorophotometry (topical) and central endothelial cell photography were performed on 40 long-term (2-23 years) contact lens wearers (four groups of ten each: hard, soft, gas permeable, and gas permeable plus prior lens usage) and 40 non-contact lens wearers of similar ages. Morphologically, the endothelial cells of contact lens wearers showed greater variability in size and shape compared to controls. The mean endothelial cell size in contact lens wearers (307 +/- 35 micron2) was smaller than that of controls (329 +/- 38 micron2, P less than 0.01). There was an increase in the coefficient of variation of cell size of the contact lens group (0.35 +/- 0.06 versus 0.25 +/- 0.04 for controls, P less than 0.0001). The endothelial cell mosaic contained a smaller percentage of hexagonal cells in contact lens wearers (66 +/- 8) compared to controls (71 +/- 7, P less than 0.01). There was a compensatory increase in five-sided cells. Functionally, there was no difference in corneal clarity, central corneal thickness or endothelial permeability to fluorescein (3.78 +/- 0.57 X 10(-4) cm/min versus 3.85 +/- 0.55 X 10(-4) cm/min for controls) between the two groups. Aqueous humor flow was increased 7% in contact lens wearers. We found no correlation between oxygen transmissibility, estimated underlying oxygen tension, or duration of wear of the contact lenses and any morphologic or functional variable. We also found no differences between the four groups of contact lens wearers except that the gas permeable lens wearers had more hexagonal and less pentagonal cells. Long-term contact lens wear induces morphologic changes in the corneal endothelium.(ABSTRACT TRUNCATED AT 250 WORDS)
Carbon dioxide capture is the first step in Carbon Capture and Sequestration processes. Several methods of carbon capture are in use on a semi-commercial basis. These can be described as Amine Capture, Ammonia Capture, and Water/Alkaline Capture. The Ammonia Capture process is the carbon capture process which relates to this invention.
Ammonia Capture Process
In the ammonia capture process, a concentrated solution of ammonia in water, either cooled or at ambient or higher temperature, is contacted with a gas stream containing carbon dioxide, such as power plant flue gas, cement kiln gas or even possibly air. Carbon dioxide reacts with the ammonium-based ions in the water and ammonia solution, producing in effect a mixture of ammonia, ammonium carbonate, ammonium bicarbonate, and ammonium carbamate in water. In this discussion we will use the term ammonium carbonates to refer to all species formed during the reaction of ammonia with carbon dioxide. Given sufficient ammonia added to the solution, eventually a very high concentration of ammonium carbonate species can be reached and either solid crystals or a very concentrated solution is produced. The crystals of ammonium carbonates along with the concentrated solution can be decomposed under mild conditions, releasing ammonia and carbon dioxide as gases. Ammonia can be separated from the CO2 using a variety of means, including cold condensing surfaces which liquefy the ammonia. This separation allows the carbon dioxide to pass through the system as a gas and be compressed into a liquid form for later sequestration or use. The liquid ammonia is then available for recycle to capture additional carbon dioxide. No desalination is accomplished in this standalone process. Typical examples of this approach include the ECO2 process from Powerspan or the Alstom CAP (Chilled Ammonia Process). The presentation entitled "ECO2 Technology—Basin Electric Power Cooperative's 120 MWe CCS Demonstration," Alix et al., MIT Carbon Sequestration Forum IX, 2008, provides a very detailed overview of the ammonia-based carbon capture process and economics. Also, the presentation "Chilled Ammonium Process (CAP) for Post Combustion CO2 Capture," Gal et al., 2nd Annual Carbon Capture and Transportation Workshop, California, March 2006, provides details of the chilled ammonia process and economics.
Forward Osmosis Process
An entirely different process called "forward osmosis" is currently being developed to desalinate saline and contaminated waters. In this forward osmosis process a "draw solution" is used to create an osmotic pressure differential and the water to be desalinated is "drawn" through an osmotic membrane into the draw solution. In osmotic membranes the water passes preferentially through the membrane over salts dissolved in the water, resulting in desalination. The water is then separated from the draw solution as purified or desalinated water and the draw solution is reused. The water to be desalinated, as is amply described in the references, may range from seawater, to oil or gas produced waters, to industrial and municipal wastewaters. The common feature of these waters to be desalinated is that they all contain dissolved salts above the level at which the water can be used for any particular purpose such as potable water, agricultural irrigation water, or cooling tower makeup.
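Both processes introduced above turn on straightforward, quantifiable chemistry and physics. As a rough back-of-the-envelope illustration of the ammonia capture step described earlier, the sketch below treats the product as pure ammonium bicarbonate (NH3 + CO2 + H2O -> NH4HCO3) and assumes complete conversion; the numbers and the function name are illustrative assumptions, not figures taken from the patent or the cited presentations.

```python
# Idealized ammonia-capture stoichiometry: NH3 + CO2 + H2O -> NH4HCO3
# Assumes complete conversion to ammonium bicarbonate; real absorbers form a
# mixture of carbonate, bicarbonate and carbamate species, so treat this as a
# rough order-of-magnitude estimate only.

M_NH3 = 17.03       # g/mol
M_CO2 = 44.01       # g/mol
M_NH4HCO3 = 79.06   # g/mol

def ammonia_per_tonne_co2(tonnes_co2: float = 1.0) -> dict:
    """Mass of NH3 consumed and NH4HCO3 formed per tonne of CO2 absorbed."""
    mol_co2 = tonnes_co2 * 1e6 / M_CO2        # moles of CO2 in the captured mass
    nh3_tonnes = mol_co2 * M_NH3 / 1e6        # 1:1 molar ratio NH3 : CO2
    salt_tonnes = mol_co2 * M_NH4HCO3 / 1e6   # 1:1 molar ratio NH4HCO3 : CO2
    return {"NH3_tonnes": nh3_tonnes, "NH4HCO3_tonnes": salt_tonnes}

print(ammonia_per_tonne_co2(1.0))
# -> roughly 0.39 t of NH3 circulating and 1.80 t of ammonium bicarbonate per t CO2
```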
“Forward osmosis: Principles, applications, and recent developments,” Elimelech et al., Journal of Membrane Science 281 (2006) 70-87, provides a basic review and detailed discussion of the process and its applications. In one variation of this forward osmosis technique the “draw solution” is based on ammonium bicarbonate (in this application I treat the term ammonium carbonate solution as a mixture of ammonia, ammonium, carbonate, bicarbonate, carbamate, and CO2 species, as will be readily apparent to anyone skilled in the art). Ammonium carbonate in high concentration exhibits a very high osmotic pressure, and when separated from seawater by an osmotic membrane, water (but not salts) permeates the membrane and flows into the draw solution. The draw solution is now somewhat diluted. To recover the water, the ammonia and carbon dioxide need to be recovered from the draw solution. This is typically accomplished by heating the solution, causing the ammonia and carbon dioxide to vaporize from the solution, where they can be recovered and re-dissolved in water to create more draw solution. Of course, in a large scale setting this will be done on a continuous basis. The paper “A novel ammonia-carbon dioxide forward (direct) osmosis desalination process,” J. R. McCutcheon et al., Desalination 174 (2005) 1-11, describes this system in detail. U.S. Pat. Nos. 6,391,205 and 7,560,029 as well as US Patent Application No. 20050145568 describe similar processes.
Osmotic Power
Yet another entirely different but related process, called Osmotic Power (also direct osmosis, pressure retarded osmosis or salinity gradient osmosis), is also currently being developed and used to generate osmotic power. In this process, fresh water is contacted through a semi-permeable membrane against a more concentrated solution. Water flows from the freshwater into the more concentrated water. If the concentrated water side is constrained in volume, a pressure develops which can ultimately equal the osmotic pressure differential between the two solutions. Typically this process is applicable to areas where fresh water rivers empty into the sea. The osmotic power process is currently in large scale prototype development, primarily in Europe. The Norwegian company StatKraft is the current leader in the process. Background information can be found in references such as Stein Erik Skilhagen—Osmotic Power presentation March 2008 at Wirec 2008, or in “Salinity Power Plants May be the Next Eco-Power Generating Tech,” by Kit Eaton, Feb. 26, 2009, in FastCompany (www.fastcompany.com). Osmotic Power Heat Engines are also described by Elimelech et al. in “A novel ammonia-carbon dioxide osmotic heat engine for power generation,” Journal of Membrane Science 305 (2007) 13-19. In these applications, power is produced through forward osmosis of high purity distilled water into a concentrate of ammonium carbonates. This produces a pressurized, but now diluted, ammonium carbonate stream. The pressure energy is recovered via a turbine or work exchanger device and the diluted draw solution is reconstituted using heat in the typical manner of forward osmosis. No net desalinated water is produced in the process, and the recovered water from the draw solution is recycled back to the process.
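The osmotic driving force that both the forward osmosis and osmotic power schemes rely on can be put in rough numbers with the ideal van't Hoff relation, π = i·M·R·T. The sketch below is a simplification under stated assumptions: roughly a 2 M ammonium bicarbonate draw solution, seawater approximated as 0.6 M NaCl, full dissociation (i = 2 for both), and ideal-solution behaviour, which concentrated solutions do not actually follow. None of the numbers come from the cited papers.

```python
# Ideal van't Hoff estimate of osmotic pressure: pi = i * M * R * T
# Illustrative concentrations only; concentrated solutions deviate from ideality.

R = 0.083145   # L*bar/(mol*K)
T = 298.15     # K, assumed ambient temperature

def osmotic_pressure_bar(molarity: float, vant_hoff_i: float) -> float:
    """Ideal osmotic pressure in bar for a solute at the given molar concentration."""
    return vant_hoff_i * molarity * R * T

pi_draw = osmotic_pressure_bar(2.0, 2)   # ~2 M ammonium bicarbonate draw solution (assumed)
pi_sea = osmotic_pressure_bar(0.6, 2)    # seawater approximated as 0.6 M NaCl (assumed)

print(f"draw ~{pi_draw:.0f} bar, seawater ~{pi_sea:.0f} bar, "
      f"driving force ~{pi_draw - pi_sea:.0f} bar")
# -> roughly 99 bar vs 30 bar, i.e. about 69 bar of osmotic driving force
```

Even with these idealizations, the estimate illustrates why an ammonium carbonate draw solution can pull water out of seawater: its nominal osmotic pressure is several times that of the feed.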
• Hard Times: According to police in Idaho Falls, Idaho, Mark Carroll, 18, masked and armed with a handgun, is the one who threatened and robbed the night-shift clerk at the Maverik convenience store on New Year's morning. The clerk was Donna Carroll, Mark's mother, but police said that it was not an "inside" job and that she still does not believe the man behind the mask was her son. [KIDK-TV (Idaho Falls), 1-23-2013] • Major Crimes Unit: (1) Sheriff's deputies in Tampa were searching in January for the thief who stole a wallet from a car and used the victim's debit card three times -- once at a gas station and twice to wash clothes in the laundry room of the Countrywood Apartments. (2) Edward Lucas, 33, was arrested in Slidell, La., in November and charged with theft from the sheriff's department headquarters. Lucas reportedly had walked in and requested a file, and while he was waiting (as surveillance video later confirmed), he furtively swiped three ball-point pens from the reception area. [WTSP-TV (St. Petersburg), 1-10-2013] [Associated Press via Clarion-Ledger (Jackson, Miss.), 11-21-2012] • Judges in Danger: (1) Sheriff's deputies in Ozaukee County, Wis., identified Shelly Froelich, 48, as the woman who allegedly called the jail in January and asked if Judge Thomas Wolfgram was in, and when informed that he wasn't but that he'd be in court the following morning, said, "Good. Tell him I have a hit on him." Deputies said Froelich's son was in lockup and that his mom had several times before issued threats to judges after her son had been arrested. (2) James Satterfield, 58, was arrested in Cobb County, Ga., in December after police said he wrote a letter to the wife of Judge Reuben Green vowing to eat the couple's children after "cook(ing) them first to make them more palatable." [Milwaukee Journal Sentinel, 1-23-2013] [Atlanta Journal-Constitution, 1-14-2013]
Growth Groups
What is a Growth Group? Growth Groups are relational Bible study groups that meet regularly to share, study, and support one another.
Why Growth Groups? Our desire is to see people connected to God and to one another in the Body of Christ. Building real connections with each other centred on fellowship with God goes beyond participating in the weekly worship service. Real connections are developed as people engage with each other in meaningful relationships. People grow as they learn from God’s word and apply it in their lives. Growth Groups provide an important context for people to grow together in living out the implications of the Word.
When and Where Do the Groups Meet? Growth Groups meet at various times and at various locations throughout the week/month. The groups that meet following the Worship Service on Sunday have the benefit of Sunday School classes and child care for their children.
What is the Required Commitment? Joining a Growth Group requires a commitment to regular participation. Obviously, allowances are made for sickness, vacation, work conflicts and other special events, but to get the most out of the study you really need to be present each week and to do the homework. This commitment is the key to a strong Growth Group and to getting the most out of the study.
What Will We Study? Each session there are new Bible studies/groups to choose from. Choose a study group that piques your interest and one you hope will help your life. Registration is available for the following groups: Love and Respect; Beatitudes: How’s Your Attitude?; Thirst For Christ; The Ten Commandments; Faith Walk—Living What You Believe; Women of the Bible; Proverbs To Live By.
SPRING 2015
LOVE AND RESPECT
Leaders: Mark and Debra Fugit
Meets: Sundays following the Worship Service
Location: Church. Youth/child classes and care available.
This is a class tailored for married couples and singles planning to be married. We will go through the DVD series from the book Love and Respect by Emerson Eggerichs, which is based on Ephesians 5:33. This series reveals the power of unconditional love and respect and how husbands and wives can reap the benefits of the marriage that God intended. Please purchase one Small Group Discussion Guide per person ($10), available through the church.
THIRST FOR CHRIST
Leader: Fred Berish
Meets: Sundays following the Worship Service
Location: Church. Youth/child classes and care available.
We live in a world with much religion without a change in the way we believe. In this study, we will be taking a journey through the Bible and looking at several of God’s children. Through the Word of God, the goal is to discover what it means to “walk in the Spirit” and develop a real thirst for our Savior Jesus Christ.
BEATITUDES: HOW’S YOUR ATTITUDE?
Leader: Debbie Marko
Meets: Sundays following the Worship Service
Location: Church. Youth/child classes and care available.
Are you tired, stressed, discouraged? Are you worried about the future? The right attitude can invigorate the spirit, relieve stress, and produce joy. With this study of Matthew 5:3–12, you’ll discover the Beatitudes—the attitudes you need so you can be the happy, Christlike Christian God wants you to be. Please purchase one study guide per person ($10), available through the church.
WOMEN OF THE BIBLE
Leader: Beth Ellis
Meets: First Mondays @ 7-8p
Location: Starbucks (83rd & Bell Rd). Child care not available.
This women’s group is studying the lives of various women found in the Bible.
PROVERBS TO LIVE BY
Leader: Pastor Dave Crichton
Meets: First & Third Tuesdays @ 8-9a, Location: Starbucks (99th & Bell); Fourth Wednesdays @ 11a-12:00p, Location: Various Restaurants
Child care not available.
This men’s study focuses on the wisdom made practical for everyday living found in the book of Proverbs.
Reduction of platelet cytosolic phospholipase A2 activity by atorvastatin and simvastatin: biochemical regulatory mechanisms. Statins have demonstrated effects beyond reducing cholesterol level that may contribute to their clinical benefit, including effects on platelet biochemistry and function. To explore and compare the antiplatelet effect of two lipophilic statins (atorvastatin and simvastatin) and one hydrophilic statin (pravastatin) concerning: a) collagen-induced platelet aggregation and thromboxane A2 (TXA2) synthesis; b) the additive effect of statins on TXA2 synthesis in platelets treated with a submaximally effective concentration of aspirin and c) the biochemical mechanisms involved. Washed human platelets were incubated with statins (1-20μM), and stimulated with collagen (1μg/ml) or arachidonic acid (AA) (200μM) and TXB2 was quantified by ELISA. Incubation with simvastatin or atorvastatin reduced (36.2% and 31.0%, respectively) collagen-induced TXB2 synthesis (p<0.05) and platelet aggregation (p<0.001), whereas pravastatin had no effects. Simultaneous incubation with a submaximally effective concentration of aspirin (1μM) and atorvastatin or simvastatin significantly increased the inhibition of TXB2 synthesis by aspirin by 4.4- and 4.1-fold, respectively. Statins did not affect AA-induced TXB2 synthesis, excluding an effect on COX-1/TXA2 synthase activities. Atorvastatin and simvastatin concentration-dependently inhibited the collagen-induced increase in cytosolic calcium and the kinetics of cPLA2 phosphorylation. Lipophilic statins reduced phosphorylation of both ERK1/2 and p38 MAPK, which regulate cPLA2 phosphorylation and calcium movement. We report for the first time a direct downregulation by atorvastatin and simvastatin of platelet cPLA2 activity through effects on calcium and MAPK, which reduce collagen-induced TXA2 synthesis. These mechanisms might contribute to their beneficial effects, even in aspirin-treated patients.
Essential Oils For Diabetic Neuropathy
Diabetes can cause long-term problems throughout your body, especially if you do not control your blood sugar effectively and leave it high for years. High blood sugar can damage the nerves that send signals from the hands and feet. This damage is called diabetic neuropathy. Diabetic neuropathy can cause numbness or tingling in your fingers and toes. Other symptoms include burning or sharp, stabbing pain (diabetic nerve pain). The pain may be mild at first, but it can get worse over time and can spread to your legs or arms. Walking can be very painful, and even a soft touch may hurt. According to the American Academy of Family Physicians, 10-20% of people with diabetes experience nerve pain. Nerve damage can affect your ability to sleep and your overall quality of life. Having a chronic illness can also cause depression.
What is the treatment for diabetic neuropathy? Although damaged nerves cannot be replaced, there are ways to prevent further damage and reduce pain. The first step in treating your pain is controlling your blood sugar so the damage does not progress. Talk to your doctor about your blood sugar regulation, and learn how to monitor it. You may be asked to keep your blood sugar at 70-130 mg/dL before meals and below 180 mg/dL after meals. Use diet, exercise, and medications to bring your blood sugar into the targeted range. Also watch other health risks that can make your diabetes worse. Keep your weight under control. If you smoke, ask your doctor about effective ways to quit smoking. Your doctor may first suggest that you try painkillers such as acetaminophen (Tylenol), aspirin, or ibuprofen (Motrin, Advil). These drugs are available without a prescription but may cause side effects. Try using a low dose for a short time to control the symptoms. There are other options if you need long-term pain relief or a stronger pain reliever.
Antidepressants: These drugs are most often used to treat depression, but they are also often prescribed for diabetic nerve pain, as they can affect the chemicals in your brain that influence how you feel pain. Doctors may recommend tricyclic antidepressants, such as amitriptyline (Elavil), imipramine (Tofranil), and desipramine (Norpramin). These drugs can cause unpleasant side effects such as dry mouth, fatigue, and sweating. You may be advised not to take tricyclic antidepressants if you have a history of heart problems. Newer serotonin-norepinephrine reuptake inhibitors (SNRIs) such as venlafaxine (Effexor) and duloxetine (Cymbalta) are an alternative to tricyclics, and they tend to have fewer side effects.
Anti-seizure medication: Drugs used to prevent seizures in epilepsy patients, such as pregabalin (Lyrica), gabapentin (Gabarone, Neurontin), phenytoin (Dilantin), and carbamazepine (Carbatrol, Tegretol), may also help with nerve pain. Pregabalin can also help you sleep better. Side effects of these drugs include drowsiness, swelling, and dizziness.
Opioid pain medication: For stronger pain relief, there are potent drugs such as oxycodone (OxyContin) and opioid-like drugs such as tramadol (Conzip, Ultram). These drugs tend to be a last resort for dealing with pain. You may turn to these medications if other treatments do not work.
Although they can help reduce pain, these drugs are not meant to be taken long term because of the risk of side effects and potential addiction. Be very careful when taking opioid drugs and consult your doctor.
Topical pain relief: There are also products that you can rub on or attach to your skin in painful areas. Capsaicin cream (Arthricare, Zostrix) can help block pain signals using an ingredient found in chili peppers; capsaicin products can cause skin irritation in some people. The lidocaine patch delivers a local anesthetic through a patch placed on the skin. Keep in mind that this treatment can sometimes cause mild skin irritation.
Alternative medicine: Several alternative therapies have been studied for diabetic nerve pain, although they have not been proven. Alternative treatments include supplements such as alpha lipoic acid and acetyl-L-carnitine, biofeedback, meditation, acupuncture, and hypnosis.
How can I manage my diabetes so this complication does not get worse? Diabetic nerve damage can cause pain, but it can also affect your ability to feel pain. That is why it is important to keep your feet healthy. Try the following techniques to take better care of your feet: Check your feet every day for cuts, swelling, and other problems; you may not notice these problems until the foot is badly infected, and an untreated infection can lead to serious complications and even amputation. Wash your feet daily with warm water and dry them afterward, then apply lotion to keep them moist, but do not apply lotion between your toes. Wear comfortable, flexible shoes that fit your feet and give them room to move. Break in new shoes slowly so they do not hurt your feet, and ask your doctor about getting special shoes if ordinary shoes do not fit. Always cover your feet with shoes, sandals, or thick socks to protect them and prevent injuries.
Is there any way to prevent diabetic neuropathy? The best way to avoid nerve pain is to keep your blood sugar under control to prevent nerve damage early on. Follow your doctor’s advice on diet, exercise, and care.
Ma'at, 05-19-2016, 06:19 AM: There's a lot of confusion about ancient African spirituality. First, you need to learn "the language of the gods" or "god's language" to even put everything in proper context. Be like Ashra Kwesi (YouTube him), read it off the walls for yourselves. Learn from Rkhty Amen, she is a master saba and an excellent educator. SUPPORT her by buying her books and taking her online class. I have several of her books. IMO Conversational Phrases was the best one for people starting out: http://meduneter.com/rkhty-amen-e-books/ Second, the focus of this thread is Kemetic spirituality (Kemetic science), which means you need the original stories. Not christianity, not islam, not judaism. We're going back tens of thousands of years. Those concepts didn't even exist yet. Read this book to get the original gods, stories and concepts. The plagiarism will be evident: http://www.amazon.com/gp/aw/d/0195170245/ref=mp_s_a_1_1... Third, the book most people incorrectly know as the Book of the Dead (the Book of Coming Forth by Day) is a must-read for a more direct view of the conduit with which other religions were created. Don't buy European copy-and-paste versions of earlier European mistranslated versions. Get this one, it's accurate. The author did her own translations and she's pro melanin: http://www.amazon.com/gp/aw/d/0943412145/ref=mp_s_a_1_6... Fourth, the goddess Ma'at (balance, cosmic order) and isfet (chaos, disorder): there's waaaay too much to explain here so I'll put a video. Besides, the sister in the video does a better job than I would have: https://www.youtube.com/watch?v=VwDGIk4p7So Fifth, there are more than a few people who are, understandably, too conditioned to ever give up or question the authenticity of christianity. This video is for you: https://www.youtube.com/watch?v=Dz-94Tiy660&app=desktop So is this one... https://www.youtube.com/watch?v=OJAb...ature=youtu.be Maat represents truth, balance, justice, and righteousness that a society brings about itself without waiting on some make-believe character to come out of the sky to save them. So, on to the negative confessions. You know the watered-down version as the ten commandments. Now you need to understand who Heru is. I think this book will provide a much clearer understanding of who Heru/Horus originally was. I like how the author uses christianity and islam as a bridge to teach people the truth based on what they currently have been taught: http://www.amazon.com/Heru.../dp/1434812529/ref=sr_1_3... #knowthyancestors #knowyourhistory At least ask what the kidnapped ancestors had as spiritual systems before they were kidnapped, enslaved and forced to convert to christianity...
Africans were not christians when they encountered the white men. If they were, why did Europeans have to send so many missionaries?? You don't convert people who are already following your religion! You don't have to do this to them: http://atlantablackstar.com/…/9-deva...tions-white-…/ This is an awesome guideline, brother. I would like to make a correction on the third section. It says "The author did her own translations and she's pro melanin." The author is Maulana Karenga. Great write-up nonetheless... Second, the focus of this thread is Kemetic spirituality (Kemetic science), which means you need the original stories. Not christianity, not islam, not judaism. We're going back tens of thousands of years. Those concepts didn't even exist yet. Thank you. My head spins on how we've been duped into "believing" and not indoctrinated with KNOWING!
Welcome, Bienvenue... This forum is intended for a brief presentation of our members. Seems I have actually found a caring, helpful site, as I have read some posts and seen the compassion. I am looking for help with a few personal things & I know I'll need to locate the proper links here. I hope to make many friends & in turn, when I can, help anyone I can along my way. (Lonely
Are you looking for help with a physical pain problem, or are you a clinician looking for help treating patients? If it's the former, then you will probably be directed to someone in your locality who may be able to provide professional services; if it's the latter, then you're likely to meet many other "lonely" clinicians here who share your plight.
__________________
John Ware, PT, Fellow of the American Academy of Orthopedic Manual Physical Therapists. "Nothing can bring a man peace but the triumph of principles." -R.W. Emerson. “If names be not correct, language is not in accordance with the truth of things. If language be not in accordance with the truth of things, affairs cannot be carried on to success.” -The Analects of Confucius, Book 13, Verse 3
Hi warped, thanks for introducing yourself. This is a site primarily for human primate social groomers, not necessarily for groomees. Depending on what kind of pain you have, you may be better off seeking a forum of like minds/experiencers. In other words, this isn't a place, necessarily, for therapy itself, virtual or otherwise: it's more a place where we ramble on and on about who we are and reflect on what we do as therapists. You're welcome to read here, though, if you want. I don't know of any participants here in your particular part of the country, but there might be. It's very difficult - and probably unethical - to provide any kind of useful or meaningful clinical intervention for pain over the internet. I encourage you to search your region for at least a therapist who is a member of APTA, preferably with the OCS (Orthopedic Certified Specialist) credential if your problem is primarily musculoskeletal; if you have a chronic neurodegenerative disease, such as MS, then look for someone with the NCS credential.
__________________
John Ware, PT
Russian Foreign Minister Sergei Lavrov on Thursday blasted the West for meddling in Ukraine, saying that Europe's relations with Russia were facing the "moment of truth" over the crisis. "You could say that the relationship between Russia and the European Union has reached a kind of moment of truth," Lavrov wrote in a lengthy article published in the Kommersant daily. "You get the impression that our Western partners follow a reflex reaction based on the simplistic 'us against them' principle and do not really think about the long-term impact of what they are doing," Lavrov wrote. "It was an unpleasant surprise to discover that in the minds of EU and US officials, the 'free' choice of the Ukrainian people has already been made and means only a 'European future'," Russia's foreign policy chief wrote. The ex-Soviet nation of 46 million people has been in chaos since November when President Viktor Yanukovych ditched a historic EU trade and political pact in favour of closer ties with Moscow, stunning pro-EU parts of the population and sparking violent protests. Since then unrest has snowballed into a titanic tussle for Ukraine's future between Russia and the West, as demonstrations continue and spread to other parts of the country. Kiev's iconic Independence Square, which has drawn hundreds of thousands of protesters, now resembles a war zone, with protesters in army fatigues and bullet-proof vests patrolling an area full of tents and burning log fires. Lavrov blasted the protests, saying that attempts by "several thousand protesters" to pressure the government through force were unacceptable and warned that the demonstrations were increasingly being hijacked by extremist nationalist groups. "Attempts to decide for the citizens of Ukraine what the future of their state should be and even who should be in their government appear doomed," Lavrov argued. "Similar attempts at 'social engineering' have inevitably ended very badly," he wrote pointing to Western interventions in Iraq, Afghanistan and Libya. German Foreign Minister Frank-Walter Steinmeier was due to arrive in Moscow on Thursday for a two-day visit that was expected to include discussions with Lavrov over the political crisis in Ukraine.
South Dakota gas prices rise nearly a nickel over past week
News Staff - May 14, 2018
UNDATED - Average retail gasoline prices in South Dakota have risen 4.6 cents per gallon in the past week, averaging $2.68/g yesterday, according to GasBuddy's daily survey of 628 gas outlets in South Dakota. This compares with the national average, which has increased 5.7 cents per gallon in the last week to $2.86/g, according to gasoline price website GasBuddy.com. Including the change in gas prices in South Dakota during the past week, prices yesterday were 31.7 cents per gallon higher than the same day one year ago and are 10.8 cents per gallon higher than a month ago. The national average has increased 15.4 cents per gallon during the last month and stands 53.8 cents per gallon higher than this day one year ago.
According to GasBuddy historical data, gasoline prices on May 14 in South Dakota have ranged widely over the last five years: $2.36/g in 2017, $2.15/g in 2016, $2.52/g in 2015, $3.50/g in 2014 and $3.63/g in 2013.
Areas near South Dakota and their current gas price climate: Sioux Falls - $2.77/g, up 8.1 cents per gallon from last week's $2.69/g. North Dakota - $2.77/g, up 8.6 cents per gallon from last week's $2.69/g. Nebraska - $2.76/g, up 3.9 cents per gallon from last week's $2.72/g.
"Gas prices saw among the larger weekly increases since Hurricane Harvey in the last week as oil prices continued to surge, leading to sharply higher prices at the pump, putting the U.S. in peril of striking the $3/gallon level for the first time since 2014," said Patrick DeHaan, head of petroleum analysis for GasBuddy. "Some of the factors at play in the rising prices: President Trump's U.S. withdrawal from the nuclear deal with Iran, and oil supplies that have continued to drop as U.S. exports surpass Venezuela - a surprising feat given Venezuela has the largest proven oil reserves in the world. In addition, as money continues to flow into commodities as bets for higher oil prices rise, there's a strong chance of seeing crude oil prices continue to rally in the weeks ahead, with odds of hitting $3/gallon nationally now better than 65%, just in time for the summer driving season."
1. Field of the Invention
The present invention relates generally to incorporation of thermal energy storage materials into porous materials. More particularly, this invention relates to incorporation of solid phase-change materials into porous materials such as gypsum wallboard and cellulosic materials such as ceiling tiles and wood. Even more particularly, this invention relates to improved methods and techniques for incorporating phase-change materials and other chemicals into porous construction materials.
2. Description of the Prior Art
Energy is commonly stored in heated bricks, rock beds, concrete, water tanks, and the like. Such thermal energy storage methods require leakproof containers and/or extensive space and mechanical support for the massive amounts of storage materials. In such materials, the amount of energy stored is proportional to the temperature rise and to the mass of the storage material, and is generally limited to about 1 calorie per gram per degree Celsius (1 BTU per pound per degree Fahrenheit). In contrast, phase-change materials store much larger amounts of thermal energy over a small temperature change by virtue of reversible physical/chemical/structural changes such as melting. For example, certain hydrated inorganic salts used for thermal energy storage absorb as much as 96 BTU per pound at the melting temperature. There are disadvantages to the use of solid/liquid phase-change materials. They must be reliably maintained in a durable container and their melting-crystallization change must be fully reversible. In the past, many solid/liquid phase-change materials have leaked and/or have lost storage capacity because of irreversible changes over periods of time. In addition, the conduction of heat into and out of solid/liquid phase-change materials is commonly limited by the poor thermal properties of the liquid phase of the material and/or its interface with the container used to hold the phase-change material.
A series of organic polyols which are related compounds with tetrahedral molecular structures has been known to be suitable for thermal energy storage. These polyols include pentaerythritol (C₅H₁₂O₄), pentaglycerine (C₅H₁₂O₃), neopentyl glycol (C₅H₁₂O₂), neopentyl alcohol (C₅H₁₂O) and neopentane (C₅H₁₂). Certain of these polyols can be alloyed together to provide reversible solid-solid mesocrystalline phase transformations of high enthalpy and adjustable temperatures of transition. These polyols are referred to as phase-change materials (PCMs), which reversibly absorb large amounts of thermal energy during solid-state transformations at temperatures well below their melting temperatures. These transformation temperatures may be adjusted over a wide range by selecting various compositions of solid-solution mixtures of the polyols. A large number of phase-change materials were evaluated by NASA in the 1960's as "thermal capacitors" to passively buffer the temperature swings experienced by earth orbiting satellites. See Hale et al., Phase Change Materials Handbook, NASA Report B72-10464 (Aug. 1972). Among the hundreds of phase-change materials evaluated by NASA were a few materials which exhibited solid-to-solid transformations with large enthalpies. Though these materials were not used for space applications, a decade later they became of interest to scientists searching for better phase-change materials for thermal energy storage.
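To put the sensible-versus-latent comparison above in concrete terms, here is a minimal sketch in the same units the text uses (BTU, lb, °F). The 20 °F operating swing and the water-like specific heat of 1 BTU/(lb·°F) are illustrative assumptions; only the 96 BTU/lb latent figure comes from the text.

```python
# Sensible vs latent thermal storage per pound of material.
# Sensible storage: Q = c * dT; a phase-change material adds its transition
# enthalpy on top of whatever sensible heat it also stores.

SPECIFIC_HEAT = 1.0   # BTU/(lb*degF), water-like material (illustrative assumption)
LATENT_HEAT = 96.0    # BTU/lb, hydrated-salt value quoted in the text
TEMP_SWING = 20.0     # degF, assumed operating temperature swing

sensible_only = SPECIFIC_HEAT * TEMP_SWING
with_phase_change = sensible_only + LATENT_HEAT

print(f"sensible only: {sensible_only:.0f} BTU/lb")
print(f"with phase change: {with_phase_change:.0f} BTU/lb "
      f"({with_phase_change / sensible_only:.1f}x more per pound)")
# -> 20 BTU/lb vs 116 BTU/lb, i.e. roughly a six-fold gain over this small swing
```

The gain shrinks as the allowed temperature swing grows, which is why phase-change storage is most attractive where the usable temperature range is narrow, as in building interiors.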
Solid-state phase-change materials have the advantages of less stringent container requirements and greater design flexibility. In general, the thermal conductivity of a phase-change thermal storage material is an important parameter, as well as its transition temperature. To a certain extent, the thermal conductivity of phase-change materials is adjustable by introducing additives to form composite materials. For example, the heat transport in paraffin phase-change materials can be adjusted by introducing metal matrices, such as aluminum honeycomb or expanded aluminum mesh, into the phase-change material container. See: deJong, A.G., Improvement of Heat Transport in Paraffins for Latent Heat Storage Systems, in Thermal Storage of Solar Energy (C. den Ouden, ed.) pp. 123-1344 (1981); and Benson et al., Solid State Phase Change Materials for Thermal Energy Storage in Passive Solar Heated Buildings, Proceedings of the Tenth Energy Technology Conference, Washington, D.C., pp. 712-720 (Feb. 28-Mar. 2, 1983). Other literature discusses a class of hydrocarbon compounds for use in thermal energy storage components for passive solar heated buildings, with particular reference to polyhydric alcohols such as pentaerythritol, trimethylol ethane, neopentyl glycol, and closely related materials. This work also discusses solid-state phase-change materials which provide compact thermal energy storage with reduced concern for the containment of the phase-change material. Another work, Christensen, Advanced Phase Change Storage for Passive Solar Heating: Analysis of Materials and Configurations, in Proceedings of the ASES Passive 83 Conference, Glorieta, N.Mex. (Sep. 7-9, 1983), discusses the performance of phase-change materials for thermal storage in passive solar heating systems, including factors other than material properties that affect storage performance and optimization. A related work, Benson et al., Materials Research for Passive Systems-Solid State Phase Change Materials and Polymer Photodegradation, in Proceedings of the Passive and Hybrid Solar Energy Update, Washington, D.C., pp. 228-235 (Sept. 15-17, 1982), discusses solid-state phase-change materials being evaluated for use in passive solar thermal energy storage systems, with particular emphasis on pentaerythritol, pentaglycerine and neopentyl glycol. Another work, Benson, Organic Polyols: Solid State Phase Change Materials for Thermal Energy Storage, in Opportunities in Thermal Storage R and D, EPRI Special Report EM-3159-SR, pp. 19-1 to 19-10 (July 1983), discusses a homologous series of organic polyols based on pentaerythritol, including pentaglycerine and neopentyl glycol, demonstrating potential for thermal energy storage at temperatures from below 25°C to 188°C. In U.S. Pat. No. 4,572,864, incorporated herein by reference, there are described certain techniques for increasing the thermal storage capacity of various solid materials. The techniques involved placement of certain polyhydric alcohols into or in contact with solid materials such as metals, carbon, plastic, cellulose material, fibrous material, concrete, porous rock, gypsum, siliceous materials, etc.
The techniques described in such patent include melting the phase-change alcohols and then adding the solid materials thereto or dipping the solid material into the molten phase-change material, dissolving the phase-change alcohols and impregnating solid porous materials with the solution and then drying, and pouring molten phase-change material into cavities in solid materials. The practice of working with vats of molten phase-change compounds or materials presents safety hazards and pollution problems. The molten phase-change materials have high vapor pressures and are flammable. Also, the escape of the vapors and their condensation in the air and onto any unheated surfaces can produce potential inhalation and dust explosion problems in the plant. Of course, much time and energy is required to heat a vat of phase-change material above its melting point and hold it there for the duration of a work day. Another disadvantage of the use of a vat of molten phase-change material is that separate vats are required for each different composition of phase-change material to be used on various porous materials. Although addition of particles or pellets of solid (unmelted) phase-change material into the mix of raw ingredients used to make construction materials has been tried, this technique is very restrictive. The phase-change material can easily interfere with the processing of the construction material. Polyalcohols, for example, are water soluble and interfere with the hydration (setting) of concrete or gypsum. Certain other phase-change materials are also water soluble and would be expected to interfere with the processing of these construction materials. Some construction materials, such as wood products, are not readily adapted to the incorporation of particles or pellets of an additive. The previous techniques require a dedicated production process. As a result, it is difficult or cumbersome to make changes to the process or to tailor the properties of the final impregnated material. There has not heretofore been provided an efficient and safe technique for impregnating phase-change materials into porous materials (such as construction materials) having the advantages of the techniques of the present invention.
Synopsis by Hal Erickson Tensions of a mostly racial nature erupt between two African-American staffers at the ER, the mild-mannered Michael Gallant (Sharif Atkins) and the outspoken Gregory Pratt (Mekhi Phifer). Pratt foments the hostility when he interferes in Gallant's treatment of a suicidal soldier. But when a hypochondriac (Diane Delano) is refused treatment by Dr. Kayson (Sam Anderson) for what seems to be a genuine ailment, Pratt holds his tongue -- with fatal consequences for the patient. Now it is Gallant's turn to unleash his anger at Pratt, a confrontation with long-ranging ramifications. Elsewhere, a distracted Weaver (Laura Innes) makes a disastrous error while demonstrating flu shots on a TV news program, and Carter (Noah Wyle) again confronts Abby (Maura Tierney) about her alcohol problems.
# Translation of Odoo Server. # This file contains the translation of the following modules: # * website_mail_channel # # Translators: # Martin Trigaux, 2019 # Hans Henrik Gabelgaard <[email protected]>, 2019 # jonas jensen <[email protected]>, 2019 # Morten Schou <[email protected]>, 2019 # JonathanStein <[email protected]>, 2019 # Sanne Kristensen <[email protected]>, 2019 # lhmflexerp <[email protected]>, 2019 # Mads Søndergaard, 2020 # msgid "" msgstr "" "Project-Id-Version: Odoo Server saas~12.4\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2019-08-12 11:32+0000\n" "PO-Revision-Date: 2019-08-26 09:16+0000\n" "Last-Translator: Mads Søndergaard, 2020\n" "Language-Team: Danish (https://www.transifex.com/odoo/teams/41243/da/)\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: \n" "Language: da\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "- <i class=\"fa fa-calendar\" role=\"img\" aria-label=\"Date\" title=\"Date\"/>" msgstr "- <i class=\"fa fa-calendar\" role=\"img\" aria-label=\"Dato\" title=\"Dato\"/> " #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "" "- <i class=\"fa fa-paperclip\" role=\"img\" aria-label=\"Attachments\" " "title=\"Attachments\"/>" msgstr "" "- <i class=\"fa fa-paperclip\" role=\"img\" aria-label=\"Vedhæftelser\" " "title=\"Vedhæftelser\"/> " #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message msgid "" "<i class=\"fa fa-arrow-left\" role=\"img\" aria-label=\"Previous message\" " "title=\"Previous message\"/>" msgstr "" "<i class=\"fa fa-arrow-left\" role=\"img\" aria-label=\"Forrige besked\" " "title=\"Forrige besked\"/> " #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message msgid "" "<i class=\"fa fa-arrow-right\" role=\"img\" aria-label=\"Next message\" " "title=\"Next message\"/>" msgstr "" "<i class=\"fa fa-arrow-right\" role=\"img\" aria-label=\"Næste besked\" " "title=\"Næste besked\"/> " #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message msgid "" "<i class=\"fa fa-chevron-down\" role=\"img\" aria-label=\"Show attachments\"" " title=\"Show attachments\"/>" msgstr "" "<i class=\"fa fa-chevron-down\" role=\"img\" aria-label=\"Vis vedhæftelser\"" " title=\"Vis vedhæftelser\"/> " #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "" "<i class=\"fa fa-chevron-down\" role=\"img\" aria-label=\"Show replies\" " "title=\"Show replies\"/>" msgstr "" "<i class=\"fa fa-chevron-down\" role=\"img\" aria-label=\"Vis svar\" " "title=\"Vis svar\"/> " #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message msgid "" "<i class=\"fa fa-chevron-right\" role=\"img\" aria-label=\"Hide " "attachments\" title=\"Hide attachments\"/>" msgstr "" "<i class=\"fa fa-chevron-right\" role=\"img\" aria-label=\"Skjul " "vedhæftelser\" title=\"Skjul vedhæftelser\"/> " #. 
module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "" "<i class=\"fa fa-chevron-right\" role=\"img\" aria-label=\"Hide replies\" " "title=\"Hide replies\"/>" msgstr "" "<i class=\"fa fa-chevron-right\" role=\"img\" aria-label=\"Skjul svar\" " "title=\"Skjul svar\"/> " #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_messages #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels msgid "<i class=\"fa fa-envelope-o\" role=\"img\" aria-label=\"Alias\" title=\"Alias\"/>" msgstr "<i class=\"fa fa-envelope-o\" role=\"img\" aria-label=\"Alias\" title=\"Alias\"/>" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels #: model_terms:ir.ui.view,arch_db:website_mail_channel.subscribe msgid "<i class=\"fa fa-envelope-o\"/> send mail" msgstr "<i class=\"fa fa-envelope-o\"/> send mail" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels #: model_terms:ir.ui.view,arch_db:website_mail_channel.subscribe msgid "<i class=\"fa fa-file-o\"/> archives" msgstr "<i class=\"fa fa-file-o\"/> arkiver" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels msgid "" "<i class=\"fa fa-fw fa-user\" role=\"img\" aria-label=\"Recipients\" " "title=\"Recipients\"/>" msgstr "" "<i class=\"fa fa-fw fa-user\" role=\"img\" aria-label=\"Modtagere\" " "title=\"Modtagere\"/> " #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels #: model_terms:ir.ui.view,arch_db:website_mail_channel.subscribe msgid "<i class=\"fa fa-times\"/> unsubscribe" msgstr "<i class=\"fa fa-times\"/> afmeld" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.subscribe msgid "<span class=\"oe_snippet_thumbnail_title\">Discussion Group</span>" msgstr "<span class=\"oe_snippet_thumbnail_title\">Diskussionsgruppe</span>" #. 
module: website_mail_channel #: model:mail.template,body_html:website_mail_channel.mail_template_list_subscribe msgid "" "<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" style=\"padding-top: 16px; background-color: #F1F1F1; font-family:Verdana, Arial,sans-serif; color: #454748; width: 100%; border-collapse:separate;\"><tr><td align=\"center\">\n" "<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"padding: 16px; background-color: white; color: #454748; border-collapse:separate;\">\n" "<tbody>\n" " <!-- HEADER -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"middle\">\n" " <span style=\"font-size: 10px;\">Your Channel</span><br/>\n" " <span style=\"font-size: 20px; font-weight: bold;\">${object.name}</span>\n" " </td><td valign=\"middle\" align=\"right\">\n" " <img src=\"/logo.png?company=${user.company_id.id}\" style=\"padding: 0px; margin: 0px; height: auto; width: 80px;\" alt=\"${user.company_id.name}\"/>\n" " </td></tr>\n" " <tr><td colspan=\"2\" style=\"text-align:center;\">\n" " <hr width=\"100%\" style=\"background-color:rgb(204,204,204);border:medium none;clear:both;display:block;font-size:0px;min-height:1px;line-height:0; margin:16px 0px 16px 0px;\"/>\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" " <!-- CONTENT -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"top\" style=\"font-size: 13px;\">\n" " <div style=\"margin: 0px; padding: 0px;\">\n" " Hello,<br/><br/>\n" " You have requested to be subscribed to the mailing list <strong>${object.name}</strong>.\n" " <br/><br/>\n" " To confirm, please visit the following link: <strong><a href=\"${ctx['token_url']}\">${ctx['token_url']}</a></strong>\n" " <br/><br/>\n" " If this was a mistake or you did not requested this action, please ignore this message.\n" " % if user.signature\n" " <br/>\n" " ${user.signature | safe}\n" " % endif\n" " </div>\n" " </td></tr>\n" " <tr><td style=\"text-align:center;\">\n" " <hr width=\"100%\" style=\"background-color:rgb(204,204,204);border:medium none;clear:both;display:block;font-size:0px;min-height:1px;line-height:0; margin: 16px 0px 16px 0px;\"/>\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" " <!-- FOOTER -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; font-size: 11px; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"middle\" align=\"left\">\n" " ${user.company_id.name}\n" " </td></tr>\n" " <tr><td valign=\"middle\" align=\"left\" style=\"opacity: 0.7;\">\n" " % if user.company_id.phone\n" " ${user.company_id.phone} |\n" " %endif\n" " % if user.company_id.email\n" " <a href=\"'mailto:%s' % ${user.company_id.email}\" style=\"text-decoration:none; color: #454748;\">${user.company_id.email}</a> |\n" " % endif\n" " % if user.company_id.website\n" " <a href=\"'%s' % ${user.company_id.website}\" style=\"text-decoration:none; color: #454748;\">${user.company_id.website}\n" " </a>\n" " % endif\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" 
"</tbody>\n" "</table>\n" "</td></tr>\n" "<!-- POWERED BY -->\n" "<tr><td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: #F1F1F1; color: #454748; padding: 8px; border-collapse:separate;\">\n" " <tr><td style=\"text-align: center; font-size: 13px;\">\n" " Powered by <a target=\"_blank\" href=\"https://www.odoo.com?utm_source=db&amp;utm_medium=mail\" style=\"color: #875A7B;\">Odoo</a>\n" " </td></tr>\n" " </table>\n" "</td></tr>\n" "</table>\n" " " msgstr "" "<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" style=\"padding-top: 16px; background-color: #F1F1F1; font-family:Verdana, Arial,sans-serif; color: #454748; width: 100%; border-collapse:separate;\"><tr><td align=\"center\">\n" "<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"padding: 16px; background-color: white; color: #454748; border-collapse:separate;\">\n" "<tbody>\n" " <!-- HEADER -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"middle\">\n" " <span style=\"font-size: 10px;\">Din kanal</span><br/>\n" " <span style=\"font-size: 20px; font-weight: bold;\">${object.name}</span>\n" " </td><td valign=\"middle\" align=\"right\">\n" " <img src=\"/logo.png?company=${user.company_id.id}\" style=\"padding: 0px; margin: 0px; height: auto; width: 80px;\" alt=\"${user.company_id.name}\"/>\n" " </td></tr>\n" " <tr><td colspan=\"2\" style=\"text-align:center;\">\n" " <hr width=\"100%\" style=\"background-color:rgb(204,204,204);border:medium none;clear:both;display:block;font-size:0px;min-height:1px;line-height:0; margin:16px 0px 16px 0px;\"/>\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" " <!-- CONTENT -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"top\" style=\"font-size: 13px;\">\n" " <div style=\"margin: 0px; padding: 0px;\">\n" " Hej,<br/><br/>\n" " Du har ønsket at blive tilmeldt vores adresseliste<strong>${object.name}</strong>.\n" " <br/><br/>\n" " For at bekræfte dette, bedes du venligst klikke på følgende link:<strong><a href=\"${ctx['token_url']}\">${ctx['token_url']}</a></strong>\n" " <br/><br/>\n" " Hvis dette var en fejl, eller du ikke har ønsket at blive tilmeldt vores adresseliste, skal du se bort fra denne meddelelse.\n" " % if user.signature\n" " <br/>\n" " ${user.signature | safe}\n" " % endif\n" " </div>\n" " </td></tr>\n" " <tr><td style=\"text-align:center;\">\n" " <hr width=\"100%\" style=\"background-color:rgb(204,204,204);border:medium none;clear:both;display:block;font-size:0px;min-height:1px;line-height:0; margin: 16px 0px 16px 0px;\"/>\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" " <!-- FOOTER -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; font-size: 11px; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"middle\" align=\"left\">\n" " ${user.company_id.name}\n" " </td></tr>\n" " <tr><td valign=\"middle\" align=\"left\" style=\"opacity: 
0.7;\">\n" " % if user.company_id.phone\n" " ${user.company_id.phone} |\n" " %endif\n" " % if user.company_id.email\n" " <a href=\"'mailto:%s' % ${user.company_id.email}\" style=\"text-decoration:none; color: #454748;\">${user.company_id.email}</a> |\n" " % endif\n" " % if user.company_id.website\n" " <a href=\"'%s' % ${user.company_id.website}\" style=\"text-decoration:none; color: #454748;\">${user.company_id.website}\n" " </a>\n" " % endif\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" "</tbody>\n" "</table>\n" "</td></tr>\n" "<!-- POWERED BY -->\n" "<tr><td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: #F1F1F1; color: #454748; padding: 8px; border-collapse:separate;\">\n" " <tr><td style=\"text-align: center; font-size: 13px;\">\n" " Powered by <a target=\"_blank\" href=\"https://www.odoo.com?utm_source=db&amp;utm_medium=mail\" style=\"color: #875A7B;\">Odoo</a>\n" " </td></tr>\n" " </table>\n" "</td></tr>\n" "</table>\n" " " #. module: website_mail_channel #: model:mail.template,body_html:website_mail_channel.mail_template_list_unsubscribe msgid "" "<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" style=\"padding-top: 16px; background-color: #F1F1F1; font-family:Verdana, Arial,sans-serif; color: #454748; width: 100%; border-collapse:separate;\"><tr><td align=\"center\">\n" "<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"padding: 16px; background-color: white; color: #454748; border-collapse:separate;\">\n" "<tbody>\n" " <!-- HEADER -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"middle\">\n" " <span style=\"font-size: 10px;\">Your Channel</span><br/>\n" " <span style=\"font-size: 20px; font-weight: bold;\">${object.name}</span>\n" " </td><td valign=\"middle\" align=\"right\">\n" " <img src=\"/logo.png?company=${user.company_id.id}\" style=\"padding: 0px; margin: 0px; height: auto; width: 80px;\" alt=\"${user.company_id.name}\"/>\n" " </td></tr>\n" " <tr><td colspan=\"2\" style=\"text-align:center;\">\n" " <hr width=\"100%\" style=\"background-color:rgb(204,204,204);border:medium none;clear:both;display:block;font-size:0px;min-height:1px;line-height:0; margin:16px 0px 16px 0px;\"/>\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" " <!-- CONTENT -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"top\" style=\"font-size: 13px;\">\n" " <div style=\"margin: 0px; padding: 0px;\">\n" " Hello,<br/><br/>\n" " You have requested to be unsubscribed to the mailing list <strong>${object.name}</strong>.\n" " <br/><br/>\n" " To confirm, please visit the following link: <strong><a href=\"${ctx['token_url']}\">${ctx['token_url']}</a></strong>.\n" " <br/><br/>\n" " If this was a mistake or you did not requested this action, please ignore this message.\n" " % if user.signature:\n" " <br/>\n" " ${user.signature | safe}\n" " % endif\n" " </div>\n" " </td></tr>\n" " <tr><td style=\"text-align:center;\">\n" " <hr width=\"100%\" style=\"background-color:rgb(204,204,204);border:medium 
none;clear:both;display:block;font-size:0px;min-height:1px;line-height:0; margin: 16px 0px 16px 0px;\"/>\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" " <!-- FOOTER -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; font-size: 11px; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"middle\" align=\"left\">\n" " ${user.company_id.name}\n" " </td></tr>\n" " <tr><td valign=\"middle\" align=\"left\" style=\"opacity: 0.7;\">\n" " % if user.company_id.phone\n" " ${user.company_id.phone} |\n" " %endif\n" " % if user.company_id.email\n" " <a href=\"'mailto:%s' % ${user.company_id.email}\" style=\"text-decoration:none; color: #454748;\">${user.company_id.email}</a> |\n" " % endif\n" " % if user.company_id.website\n" " <a href=\"'%s' % ${user.company_id.website}\" style=\"text-decoration:none; color: #454748;\">${user.company_id.website}\n" " </a>\n" " % endif\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" "</tbody>\n" "</table>\n" "</td></tr>\n" "<!-- POWERED BY -->\n" "<tr><td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: #F1F1F1; color: #454748; padding: 8px; border-collapse:separate;\">\n" " <tr><td style=\"text-align: center; font-size: 13px;\">\n" " Powered by <a target=\"_blank\" href=\"https://www.odoo.com?utm_source=db&amp;utm_medium=mail\" style=\"color: #875A7B;\">Odoo</a>\n" " </td></tr>\n" " </table>\n" "</td></tr>\n" "</table>\n" " " msgstr "" "<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" style=\"padding-top: 16px; background-color: #F1F1F1; font-family:Verdana, Arial,sans-serif; color: #454748; width: 100%; border-collapse:separate;\"><tr><td align=\"center\">\n" "<table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"padding: 16px; background-color: white; color: #454748; border-collapse:separate;\">\n" "<tbody>\n" " <!-- HEADER -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"middle\">\n" " <span style=\"font-size: 10px;\">Din kanal</span><br/>\n" " <span style=\"font-size: 20px; font-weight: bold;\">${object.name}</span>\n" " </td><td valign=\"middle\" align=\"right\">\n" " <img src=\"/logo.png?company=${user.company_id.id}\" style=\"padding: 0px; margin: 0px; height: auto; width: 80px;\" alt=\"${user.company_id.name}\"/>\n" " </td></tr>\n" " <tr><td colspan=\"2\" style=\"text-align:center;\">\n" " <hr width=\"100%\" style=\"background-color:rgb(204,204,204);border:medium none;clear:both;display:block;font-size:0px;min-height:1px;line-height:0; margin:16px 0px 16px 0px;\"/>\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" " <!-- CONTENT -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"top\" style=\"font-size: 13px;\">\n" " <div style=\"margin: 0px; padding: 0px;\">\n" " Hej,<br/><br/>\n" " Du har bedt om at blive afmeldt vores adresseliste. 
<strong>${object.name}</strong>.\n" " <br/><br/>\n" " For at bekræfte dette, bedes du venligst klikke på følgende link:<strong><a href=\"${ctx['token_url']}\">${ctx['token_url']}</a></strong>.\n" " <br/><br/>\n" " Hvis dette var en fejl, eller du ikke har ønsket at blive afmeldt vores adresseliste, skal du se bort fra denne meddelelse.\n" " % if user.signature:\n" " <br/>\n" " ${user.signature | safe}\n" " % endif\n" " </div>\n" " </td></tr>\n" " <tr><td style=\"text-align:center;\">\n" " <hr width=\"100%\" style=\"background-color:rgb(204,204,204);border:medium none;clear:both;display:block;font-size:0px;min-height:1px;line-height:0; margin: 16px 0px 16px 0px;\"/>\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" " <!-- FOOTER -->\n" " <tr>\n" " <td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: white; font-size: 11px; padding: 0px 8px 0px 8px; border-collapse:separate;\">\n" " <tr><td valign=\"middle\" align=\"left\">\n" " ${user.company_id.name}\n" " </td></tr>\n" " <tr><td valign=\"middle\" align=\"left\" style=\"opacity: 0.7;\">\n" " % if user.company_id.phone\n" " ${user.company_id.phone} |\n" " %endif\n" " % if user.company_id.email\n" " <a href=\"'mailto:%s' % ${user.company_id.email}\" style=\"text-decoration:none; color: #454748;\">${user.company_id.email}</a> |\n" " % endif\n" " % if user.company_id.website\n" " <a href=\"'%s' % ${user.company_id.website}\" style=\"text-decoration:none; color: #454748;\">${user.company_id.website}\n" " </a>\n" " % endif\n" " </td></tr>\n" " </table>\n" " </td>\n" " </tr>\n" "</tbody>\n" "</table>\n" "</td></tr>\n" "<!-- POWERED BY -->\n" "<tr><td align=\"center\" style=\"min-width: 590px;\">\n" " <table border=\"0\" cellpadding=\"0\" cellspacing=\"0\" width=\"590\" style=\"min-width: 590px; background-color: #F1F1F1; color: #454748; padding: 8px; border-collapse:separate;\">\n" " <tr><td style=\"text-align: center; font-size: 13px;\">\n" " Powered by <a target=\"_blank\" href=\"https://www.odoo.com?utm_source=db&amp;utm_medium=mail\" style=\"color: #875A7B;\">Odoo</a>\n" " </td></tr>\n" " </table>\n" "</td></tr>\n" "</table>\n" " " #. module: website_mail_channel #. openerp-web #: code:addons/website_mail_channel/static/src/js/website_mail_channel.editor.js:15 #, python-format msgid "Add a Subscribe Button" msgstr "Tilføj tilmeld knap" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels msgid "Alone we can do so little, together we can do so much" msgstr "Alene kan vi gøre så lidt, sammen kan vi gøre så meget" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_messages msgid "Archives" msgstr "Arkiverede" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "Avatar" msgstr "Avatar" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message msgid "Browse archives" msgstr "Gennemse arkiver" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_messages msgid "By date" msgstr "Efter dato" #. 
module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_messages msgid "By thread" msgstr "Via denne tråd" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.subscribe msgid "Change Discussion List" msgstr "skift diskussionslisten" #. module: website_mail_channel #: model:mail.template,subject:website_mail_channel.mail_template_list_subscribe msgid "Confirm subscription to ${object.name}" msgstr "Bekræft tilmelding til ${object.name}" #. module: website_mail_channel #: model:mail.template,subject:website_mail_channel.mail_template_list_unsubscribe msgid "Confirm unsubscription to ${object.name}" msgstr "Bekræft afmelding på ${object.name}" #. module: website_mail_channel #: model:ir.model,name:website_mail_channel.model_mail_channel msgid "Discussion Channel" msgstr "Diskussionskanaler" #. module: website_mail_channel #. openerp-web #: code:addons/website_mail_channel/static/src/js/website_mail_channel.editor.js:16 #, python-format msgid "Discussion List" msgstr "Diskussionsliste" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message msgid "Follow-Ups" msgstr "Opfølgninger" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels msgid "Group" msgstr "Gruppe" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.invalid_token_subscription msgid "Invalid or expired confirmation link." msgstr "Ugyldigt eller udløbet bekræftelseslink." #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_messages #: model:website.menu,name:website_mail_channel.menu_mailing_list msgid "Mailing Lists" msgstr "Adresselister" #. module: website_mail_channel #: code:addons/website_mail_channel/models/mail_mail.py:20 #, python-format msgid "Mailing-List" msgstr "Adresseliste" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels msgid "" "Need to unsubscribe? It's right here! <span class=\"fa fa-2x fa-arrow-down " "float-right\" role=\"img\" aria-label=\"\" title=\"Read this !\"/>" msgstr "" "Har du brug for at afmelde dig? Det er lige her!<span class=\"fa fa-2x fa-" "arrow-down float-right\" role=\"img\" aria-label=\"\" title=\"Read this " "!\"/>" #. module: website_mail_channel #: model:ir.model,name:website_mail_channel.model_mail_mail msgid "Outgoing Mails" msgstr "Udgående mails" #. module: website_mail_channel #: code:addons/website_mail_channel/models/mail_mail.py:21 #, python-format msgid "Post to" msgstr "Send til" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message msgid "Reference" msgstr "Reference" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels msgid "Stay in touch with our Community" msgstr "Hold kontakten med vores Community" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels #: model_terms:ir.ui.view,arch_db:website_mail_channel.subscribe msgid "Subscribe" msgstr "Tilmeld" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.not_subscribed msgid "The address" msgstr "Adressen" #. 
module: website_mail_channel #: code:addons/website_mail_channel/controllers/main.py:245 #, python-format msgid "" "The address %s is already unsubscribed or was never subscribed to any " "mailing list" msgstr "" "Adressen %s er allerede afmeldt eller har aldrig været tilmeldt nogen " "adresseliste" #. module: website_mail_channel #: code:addons/website_mail_channel/models/mail_mail.py:22 #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels #, python-format msgid "Unsubscribe" msgstr "Afmeld" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.confirmation_subscription msgid "You have been correctly" msgstr "Du har været korrekt" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.subscribe msgid "a confirmation email has been sent." msgstr "en bekræftelsesemail er blevet sendt." #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message msgid "attachments" msgstr "Vedhæftninger" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "by" msgstr "af" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.not_subscribed msgid "" "is already\n" " unsubscribed or was never subscribed to the mailing\n" " list, you may want to check that the address was\n" " correct." msgstr "" "er allerede\n" " afmeldt eller var aldrig tilmeldt adresselisten\n" " du bør muligvis tjekke at adressen var\n" " korrekt." #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_message #: model_terms:ir.ui.view,arch_db:website_mail_channel.group_messages msgid "mailing list archives" msgstr "adresselistearkiver" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels msgid "" "members<br/>\n" " <i class=\"fa fa-fw fa-envelope-o\" role=\"img\" aria-label=\"Traffic\" title=\"Traffic\"/>" msgstr "" "medlemmer<br/>\n" " <i class=\"fa fa-fw fa-envelope-o\" role=\"img\" aria-label=\"Traffic\" title=\"Traffic\"/>" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels msgid "messages / month" msgstr "beskeder/ pr. måned" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "more replies" msgstr "flere svar" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "replies" msgstr "svar" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.messages_short msgid "show" msgstr "vis" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.confirmation_subscription msgid "subscribed" msgstr "tilmeldt" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.confirmation_subscription msgid "to the mailing list." msgstr "adresselisten" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.confirmation_subscription msgid "unsubscribed" msgstr "afmeldt" #. module: website_mail_channel #: model_terms:ir.ui.view,arch_db:website_mail_channel.mail_channels #: model_terms:ir.ui.view,arch_db:website_mail_channel.subscribe msgid "your email..." msgstr "din e-mail..."
Q: Collecting data from NSURLConnection

I have the worst internet connection at the moment, so sorry if this has been asked before. I have an NSURLConnection for getting some JSON data. Until now it worked perfectly fine to use the delegate method didReceiveData:(NSData*)data to save the received data. I am downloading data from at least seven different pages at the same time. Today, after updating one of the JSON pages to contain more data, the NSData object seemed corrupt. I have recently been told that this delegate does not return the whole data, which would corrupt my information. Is there another delegate, like didFinish, only one that also returns the full, complete object? Or do I have to do this myself, like merging two NSData objects? Sorry for the stupidity; grammatical errors are dedicated to iPhone auto-correct.

A: You must never, ever rely on didReceiveData: returning the full data, because it will break one day. You have to collect your chunks of data in an NSMutableData:

    NSMutableData *d = [[NSMutableData alloc] init];

    - (void)connection:(NSURLConnection *)c didReceiveData:(NSData *)data {
        [d appendData:data];
    }

    - (void)connectionDidFinishLoading:(NSURLConnection *)conn {
        // 'd' now contains the entire data
    }

If that is inconvenient for you, you can avoid NSURLConnection and use a background thread to grab the data in one piece using:

    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"http://web.service/response.json"]];
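For completeness, this is a minimal sketch of the full delegate set in context; the FeedLoader class name, the ivars and the JSON-parsing step are illustrative assumptions rather than part of the original answer:

```objc
#import <Foundation/Foundation.h>

@interface FeedLoader : NSObject <NSURLConnectionDataDelegate>
- (void)start;
@end

@implementation FeedLoader {
    NSURLConnection *_connection;
    NSMutableData   *_receivedData; // accumulates every chunk the delegate receives
}

- (void)start {
    NSURL *url = [NSURL URLWithString:@"http://web.service/response.json"];
    _receivedData = [[NSMutableData alloc] init];
    _connection = [[NSURLConnection alloc] initWithRequest:[NSURLRequest requestWithURL:url]
                                                  delegate:self];
}

- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
    // A new response (e.g. after a redirect) means a fresh body, so reset the buffer.
    [_receivedData setLength:0];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [_receivedData appendData:data]; // may be called many times per download
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    // Only here is the data guaranteed to be complete.
    NSError *error = nil;
    id json = [NSJSONSerialization JSONObjectWithData:_receivedData options:0 error:&error];
    NSLog(@"parsed JSON: %@ (error: %@)", json, error);
}

- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
    _receivedData = nil; // discard the partial download
    NSLog(@"download failed: %@", error);
}

@end
```

With one loader instance per page, the seven parallel downloads each keep their own buffer, which avoids the kind of corruption described in the question.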
Marvel's A.K.A. Jessica Jones Netflix series will be the second of four Marvel TV shows to debut on the streaming service. This latest series is currently in production, but a premiere date and plot details have yet to be released. An Amazon listing for the official Marvel's A.K.A. Jessica Jones Season 1 companion book reveals not only the official synopsis, but gives us a hint as to when it may be released. Check out the plot description below, then read on for more details. "Ever since her short-lived stint as a Super Hero ended in tragedy, Jessica Jones has been rebuilding her personal life and career as a hot-tempered, sardonic, badass private detective in Hell's Kitchen, New York City. Plagued by self-loathing, and a wicked case of PTSD, Jessica battles demons from within and without, using her extraordinary abilities as an unlikely champion for those in need... especially if they're willing to cut her a check." It's worth noting that the synopsis on Netflix's actual site is much, much shorter, which you can check out below. "Working as a private investigator in New York's Hell's Kitchen, a troubled ex-superhero's past comes back to haunt her in the live action series Marvel's A.K.A. Jessica Jones." The hardcover book is set for release on November 17, so it seems likely that the series will debut around the same time, although that has yet to be confirmed. The synopsis doesn't mention any of the confirmed supporting cast members, such as David Tennant (Kilgrave), Mike Colter (Luke Cage), Rachael Taylor (Patricia "Trish" Walker) and Carrie-Anne Moss, whose role has not yet been revealed. Marvel's A.K.A. Jessica Jones has been shooting since February, and if it has the same six-month shooting schedule as Marvel's Daredevil, which shot from July to December last year, then production may wrap on Marvel's A.K.A. Jessica Jones in July. It's unclear when we may get to see the first trailer for Marvel's A.K.A. Jessica Jones, but if production is truly winding down, we may get to see it sooner rather than later. Marvel's A.K.A. Jessica Jones will be followed up by Marvel's Luke Cage and Marvel's Iron Fist, which will likely debut in 2016, before all four heroes come together to form The Defenders in a new mini-series. What do you think of the Marvel's A.K.A. Jessica Jones synopsis?
Q: registerNib:forCellReuseIdentifier: with a custom UITableViewCell and Storyboards

I'm migrating from customizing my UITableViewCells in tableView:cellForRowAtIndexPath: to using a custom UITableViewCell subclass. Here's how I did it: First, I created an empty XIB, dragged a UITableViewCell into it and put a UILabel on top. I created a class (a subclass of UITableViewCell) and, in Interface Builder's properties editor, set the cell's class to MyCell. Then, in my TableViewController, I put the following:

    - (void)viewDidLoad {
        [super viewDidLoad];
        // load custom cell
        UINib *cellNIB = [UINib nibWithNibName:@"MyCell" bundle:nil];
        if (cellNIB) {
            [self.tableView registerNib:cellNIB forCellReuseIdentifier:@"MyCell"];
        } else {
            NSLog(@"failed to load nib");
        }
    }

After that I wiped out all the custom code from tableView:cellForRowAtIndexPath: and left only the default lines:

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        static NSString *CellIdentifier = @"MyCell";
        MyCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
        return cell;
    }

When I ran this, I expected to see a bunch of cells with a single label in each cell (the very label that I dropped in the middle while creating the XIB). Instead I see just plain white empty cells, and adding or removing components in the XIB layout doesn't help. I spent a day trying different options, like setting the reuseIdentifier in Interface Builder for the custom cell XIB and loading the view in tableView:cellForRowAtIndexPath:, but none of them helped.

A: ...but it turned out that the only thing I had missed was clearing the reuseIdentifier of the prototype cell in my Storyboard for this TableViewController. It seems that the Storyboard initializes its views/components later than viewDidLoad is called, and instead of taking my nice custom cell, Xcode sets the cell view used for reuse to the plain white cell that is the default for newly created TableViewControllers. So again: go to your table view's prototype cell properties and remove the reuseIdentifier you set before ;) I spent so much time on this that I thought it might help someone if I shared this experience here.
Hey Doc: I tested Poz in '94, had PCP pneumonia in '95 and shingles in '99, and at this point am TOTALLY RESISTANT to ANY treatment regimen. Am I worried? NO! I made peace with myself in '95 and do not fear the next phase in my non-corporeal life. I know my life is helping with experimental treatments and I'm profoundly grateful to all the people involved in finding the 'cure' for HIV. How come I haven't been deathly ill since '95? Although my current t-cells are 10 and my v/l is 196,000, I DON'T CARE! I just continue on my current regimen and live my life without fretting daily over my predicament. I fully trust the experts. As morbid as it may sound, HIV SAVED MY LIFE! I was messed up and suicidal before HIV, but that changed and I sought the necessary mental and physical venues to keep me alive. WELL HERE I AM, continually amazing my doc that I am NOT deathly ill... I have a "mind over matter" attitude, take each day I live and cherish it as if it is my last, and follow my Deist belief in God to survive. My purpose (fate) on this planet was to be there at the right time for my friends and family and to be available for experimentation to help find the cure for HIV. My HIV attitude is still, "Oh well!" (I do live a "protected" and informed life.) LIVE, LOVE, LAUGH. Thank you all for your diligence!

Response from Dr. Frascino

Hello, Thank you for taking the time to write in and share your story and personal insights. "Living with AIDS" is certainly a far more "positive" attitude than dying from AIDS. BRAVO. By the way, despite your assertion of being resistant to all treatments, it's worth noting that new and very novel therapies have recently been approved, such as a CCR5 inhibitor, an integrase inhibitor, a new NNRTI and a new PI. Be sure to discuss these potential new options with your HIV specialist. In addition, as you mention, there are also ongoing clinical trials looking at additional new and novel approaches to kick HIV's butt! Thanks again for writing in and personally demonstrating that optimism is indeed a powerful therapy when confronting any challenge.
Q: Where is the actual data stored by add_post_meta?

I am pretty new to the WordPress API (loving it, btw). I have a small project: adding a like button to the end of every post. Pressing it makes the user like the post and the button changes to dislike; pressing it again makes the user dislike the post. Without knowledge of the WordPress API, I planned to create a table for my plugin via wpdb in which I would store the post IDs and user IDs. The plugin would know, by querying that table, whether the user has already liked the post. But on the internet I found a lot of examples of like buttons that do not create a custom table and use update_post_meta instead, and if I understood correctly, update_post_meta does not alter any table in the database or insert new rows. I say this because I tried some of the plugins that use update_post_meta and, after liking a post, my wp_posts and wp_postmeta tables do not change at all. My question is: where exactly is post meta stored? Where does WordPress store the new custom field 'like_count' if I do this:

    update_post_meta($post_id, 'like_count', 1);

A: Yes, we can use the existing WordPress tables for storing these values. Post meta fields are stored in the {$wpdb->prefix}postmeta table (the name depends on the table prefix; wp_postmeta by default). If the meta key "like_count" is already present in the table for that post ID, then update_post_meta() will update it; otherwise the function will insert a new row with this key. For more reference you can check here: https://codex.wordpress.org/Function_Reference/update_post_meta
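To make the mechanics concrete, here is a minimal sketch of a like toggle stored entirely in post meta; the function name myplugin_toggle_like and the meta keys _liked_by and _like_count are illustrative assumptions, not something the answer or WordPress prescribes:

```php
<?php
// Toggle a like by $user_id on $post_id using only post meta.
// WordPress serializes the array of user IDs into a single row of wp_postmeta.
function myplugin_toggle_like( $post_id, $user_id ) {
    $liked_by = get_post_meta( $post_id, '_liked_by', true );
    if ( ! is_array( $liked_by ) ) {
        $liked_by = array();
    }

    $user_id = (int) $user_id;
    if ( in_array( $user_id, $liked_by, true ) ) {
        // Already liked: remove the like again.
        $liked_by = array_values( array_diff( $liked_by, array( $user_id ) ) );
    } else {
        $liked_by[] = $user_id;
    }

    // update_post_meta() updates the existing wp_postmeta row for this key,
    // or inserts a new row if the key does not yet exist for this post.
    update_post_meta( $post_id, '_liked_by', $liked_by );
    update_post_meta( $post_id, '_like_count', count( $liked_by ) );

    return count( $liked_by );
}
```

Checking whether the current user has already liked a post then becomes a get_post_meta( $post_id, '_liked_by', true ) lookup instead of a query against a custom table.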
Introduction {#Sec1}
============

Acetaminophen (APAP) is one of the most commonly used analgesics/antipyretics worldwide and is readily available as an over-the-counter drug. Apart from its beneficial effects as a pharmaceutical, APAP is also the most common cause of acute liver toxicity in Europe and the USA (Lee [@CR42]). Cytochrome P450 enzymes in the liver metabolize APAP to the oxidative metabolite *N*-acetyl-*p*-benzoquinone imine (NAPQI), which is thought to cause the toxic effects of APAP through protein adduct formation, leading to oxidative stress and finally liver damage (Dahlin et al. [@CR14]). The molecular mechanism behind further progression of APAP toxicity has still not been fully elucidated; the involvement of multiple mechanisms of toxicity, such as inflammatory responses and oxidative stress, has been suggested (Jaeschke et al. [@CR33]).

The main body of our knowledge on the toxicological, or more specifically hepatotoxic, mechanisms of compounds, including APAP, is based on data collected from studies using animal models (Hinson et al. [@CR30]). However, criticism of the usability and applicability of animal data to the human situation has grown over recent years (Greek and Menache [@CR25]). Therefore, there is a growing need for human in vitro models, such as hepatic cell lines, liver slices, or primary hepatocyte cultures, to facilitate human-based research.

Large interindividual differences in response to xenobiotic exposure between humans have been documented (Court et al. [@CR11]). In the domains of toxicology/pharmacology, many attempts are being undertaken to gain a better understanding of the factors that are causative of this interindividual variation. Environmental factors as well as genetic factors have been proposed to contribute to the variation in drug responses between human individuals. Over the last decade, with the rise of the whole genome omics techniques, it has become feasible to perform a more complete/in-depth analysis of the genomic components contributing to interindividual variation in the human population (Berg [@CR3]).

Also, the metabolism of APAP is known to show large interindividual variability (Court et al. [@CR11]). Genetic factors including many biotransformation-related genes, such as UDP-glucuronosyltransferases and cytochrome P450 enzymes, have been suggested to be causative of the variation in APAP-induced adverse effects observed between individuals (Court et al. [@CR11]; Fisher et al. [@CR22]; Polasek et al. [@CR49]; Yasar et al. [@CR68]). While some individuals seem to be able to endure APAP doses considerably exceeding the recommended maximal daily dose, others are at risk of APAP-induced liver toxicity at doses much closer to the recommended dose window (Sabate et al. [@CR57]). The relatively high frequency of unintentional overdosing has recently led the FDA to adjust its recommendations on safe APAP dosage by lowering the maximal therapeutic daily dose and decreasing the single dose units of APAP (Turkoski [@CR64]). However, interindividual differences do not occur only at supra-therapeutic doses; (sub-)therapeutic doses of APAP have also been shown to cause interindividual variation in APAP metabolite levels, as well as in mRNA and miRNA expression levels (Jetten et al. [@CR34]). In the same study, regulation of biological processes known to be related to the liver toxicity response after APAP overdosing could be detected at lower, supposedly non-toxic, doses.
In this study, we aim to investigate the interindividual differences in response to a non-toxic to toxic APAP dose range, using an in vitro cell model consisting of primary human hepatocytes from several donors. The interindividual differences in APAP metabolite formation and gene expression responses were considered and compared in an attempt to pinpoint the factors that could be causative of interindividual variation in APAP metabolism.

Materials and methods {#Sec2}
=====================

Cell culture and treatment {#Sec3}
--------------------------

### Primary human hepatocytes {#Sec4}

Cryopreserved primary human hepatocytes (PHH) of five individuals (see Supplementary Table 1 for donor demographics) were purchased from Life Technologies (Gibco). Cells were cultured in 12-well plates in a collagen sandwich, according to the supplier's protocol (Invitrogen [@CR32]). The following culture media were used: cryopreserved hepatocyte recovery medium (CHRM, Gibco) for thawing; William's medium E (WME) + Glutamax (Gibco) supplemented with 10 % FCS (Gibco), 0.02 % penicillin/streptomycin (Gibco), and 0.1 U/ml insulin (Invitrogen) for seeding/attachment; and WME + Glutamax supplemented with 0.02 % Pen/Strep, 0.1 U/ml insulin, and 0.02 mg/ml hydrocortisone (Sigma-Aldrich) for culturing/exposure. After thawing, viability of the cells was checked by a trypan blue (CAS no. 72-57-1, Sigma-Aldrich) exclusion test as instructed in the supplier's protocol (Invitrogen [@CR32]). All viability scores were in accordance with those listed by the supplier. Hepatocytes were exposed for 24 h to 0, 0.2, 2, or 10 mM APAP (CAS no. 103-90-2, Sigma-Aldrich) dissolved in culture medium. The doses, representing the no observed effect level (NOEL), the lowest observed effect level (LOEL = non-toxic), and a toxic dose, respectively, were chosen based on the available literature (Bannwarth et al. [@CR2]; Borin and Ayres [@CR6]; Critchley et al. [@CR12]; Douglas et al. [@CR18]; Kamali [@CR35]; Kienhuis et al. [@CR40]; Portolés et al. [@CR50]; Rygnestad et al. [@CR56]; Tan and Graudins [@CR61]; Yin et al. [@CR69]).

Transcriptomic sample preparation {#Sec5}
---------------------------------

### Total RNA isolation {#Sec6}

QIAzol (0.5 ml, QIAGEN) was used to isolate total RNA from all samples according to the manufacturer's protocol. RNA purification was performed with the miRNeasy Mini Kit (Qiagen) as instructed by the manufacturer. Next, the integrity of the RNA was checked with the Bioanalyzer 2100 (Agilent).

### cDNA preparation/hybridization {#Sec7}

From an input of 250 ng RNA, cDNA targets were prepared using the Affymetrix protocol. The procedures recommended by the manufacturer were applied to hybridize the samples to Affymetrix GeneChip Human Genome U133A plus 2 arrays. GeneChips were washed and stained after hybridization with a fluidics station (Affymetrix) and scanned with a GeneArray scanner (Affymetrix). The samples from donor 1 exposed to 10 mM APAP did not pass quality control and were therefore excluded from further analyses.

Transcriptomic data analysis {#Sec8}
----------------------------

The CEL files retrieved in the previous step were subjected to an overall quality control using arrayanalysis.org, and all arrays were of high quality (Eijssen et al. [@CR20]). Subsequently, data were RMA-normalized and re-annotated using BrainArray's EntrezGene customCDF_V15.1 (Dai et al. [@CR16]; Lim et al. [@CR43]). Probes with a low signal-to-noise ratio (average expression <6) were excluded from further analyses as a data-cleanup step.
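Purely as an illustration of the cleanup criterion described above (the authors do not publish code for it here), the filtering step could be sketched as follows; the file name and matrix layout are assumptions:

```python
import pandas as pd

# RMA-normalized, re-annotated expression matrix: rows are Entrez gene IDs,
# columns are samples (donor x dose), values are log2 intensities.
expr = pd.read_csv("expression_rma_entrez.csv", index_col=0)  # hypothetical file

# Data-cleanup step: drop genes whose average expression across all samples
# is below 6 (low signal-to-noise ratio).
expr_clean = expr[expr.mean(axis=1) >= 6]

print(f"{len(expr)} genes before filtering, {len(expr_clean)} after")
```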
Metabolomic sample preparation {#Sec9}
------------------------------

Culture medium from the cells was collected after 24 h and stored at −80 °C until extraction. To extract the metabolites, 6 ml of ice-cold acetone was added to 1.5 ml of medium in a 10-ml glass tube. The solution was vortexed for 30 s, kept on ice for 12 min, and then centrifuged for 15 min at 2800 rpm at 4 °C. The supernatant was transferred into a 10-ml glass tube and dried under nitrogen. To concentrate the semi-polar metabolites contained in the medium, SPE C18 columns (C18, 500 mg, 3 ml, Bond-Elut, VARIAN) were used. The SPE C18 column was conditioned by running it with methanol including 0.5 % of formic acid (1 ml twice) and MilliQ including 0.5 % of formic acid (1 ml twice). Once the dried pellet was re-suspended in 1 ml of MilliQ (with 0.5 % formic acid), it was applied to the column. The column was then washed twice with 1 ml of MilliQ, after which the components of interest were eluted with 1 ml of methanol and dried under nitrogen. For U-HPLC-Orbitrap analysis, the dried polar fraction was re-suspended in 400 μl of MilliQ with 0.1 % formic acid.

### U-HPLC-Orbitrap MS analysis {#Sec10}

Experimental setups and procedures as described previously were used, with some slight modifications as defined below (Lommen [@CR44]; Lommen et al. [@CR45]; Ruiz-Aracama et al. [@CR55]). The gradient was similar to the one used in Jetten et al. ([@CR34]), with small modifications. The initial eluent composition, 100 % A, was changed to 85 % A and 15 % B in 15 min. Afterward, the proportion of B was increased to 30 % in 10 min and subsequently increased in 3 min to 90 %, remaining at this composition for 5 min prior to the next injection. A capillary temperature of 250 °C was used, with sheath and auxiliary gas flows of 19 and 7 arbitrary units, respectively.

Metabolomic data analysis {#Sec11}
-------------------------

Visual inspection of the three technical replicates of each sample showed a high degree of reproducibility. All MS data were preprocessed and aligned using the in-house developed program metAlign (Lommen [@CR44]). A targeted search for the metabolites of APAP previously described in Jetten et al. ([@CR34]) was performed. For targeted analyses, Search LCMS, an add-on tool for metAlign, was used (Lommen [@CR44]). Briefly, a list of masses of interest was composed based on our previous in vivo study with human volunteers (Jetten et al. [@CR34]) and some further data available from mice (Chen et al. [@CR8]). This list was loaded into Search LCMS, which returned the amplitudes of the masses of interest. U-HPLC-Orbitrap MS data were preprocessed as described in previous papers (Lommen [@CR44]; Lommen et al. [@CR45]) to obtain ultra-precise (sub-ppm) mass data (calibration using internal masses and PEG200, PEG300, PEG600 as external masses). Metabolites were considered to be present when retention times were analogous to those of earlier experiments and average accurate masses deviated by less than ±3 ppm; nearly all average masses of metabolites were within 1 ppm. For some metabolites not found previously, including hydroxy-APAP, methoxy-APAP, and 3,3′-biacetaminophen, retention times were related to those derived from the literature where possible (Chen et al. [@CR8]; Jetten et al. [@CR34]). Metabolite levels for methoxy-APAP-glucuronide-1/2 and hydroxy-APAP-glutathione in samples of donor 1 exposed to 10 mM APAP could unfortunately not be determined due to technical issues.
### Metabolite visualization {#Sec12}

To create a metabolic map based on the available literature, a pathway visualization tool, PathVisio, was used (Chen et al. [@CR9]; Daykin et al. [@CR17]; van Iersel et al. [@CR65]). LC--MS data were visualized for each donor and per dose (corrected for control levels, 0 mM APAP). Log-transformation of the data resulted in a range with a minimum of 0 and a maximum of 5.

Data integration and visualization; interindividual variation {#Sec13}
--------------------------------------------------------------

Expression data (log2-scaled intensities) of all genes passing the selection described under 'transcriptomic data analysis' and the levels of all identified metabolites were plotted against APAP dose per donor (*X*-axis: dose 0, 0.2, 2, and 10 mM APAP; *Y*-axis: log-scaled gene expression/metabolite levels; line: donor, *n* = 5) using R 2.15.3 (R Core Team [@CR52]). For clarification purposes, a representative plot is provided in Supplementary Figure 1. To estimate the correlation in APAP dose response between donors, the data points from one donor were compared to the data points of all other donors using a Pearson-based correlation analysis, which resulted in the following comparisons: D1--D2, D1--D3, D1--D4, D1--D5, D2--D3, D2--D4, D2--D5, D3--D4, D3--D5, and D4--D5. Then, the absolute correlation coefficients of each donor were summed to generate an arbitrary correlation score per donor for each gene/metabolite (Score D1, Score D2, Score D3, Score D4, and Score D5; see Supplementary Figure 1). This 'score' represents the similarity of a particular donor to all other donors in expression response for a particular gene/metabolite following APAP exposure; the donor with the lowest score is the most aberrant from all other donors (i.e., it showed the least correlation with the other donors). In order to select the most variable genes between donors, the standard deviations (SD) of the donor scores per gene/metabolite were calculated and ranked.
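The scoring scheme can be summarized in a short sketch. This is a schematic re-implementation for clarity (the analysis itself was carried out in R), and the data structures shown are assumptions:

```python
import numpy as np
from itertools import combinations

def donor_scores(profiles):
    """profiles: dict mapping donor -> expression values over the doses 0, 0.2, 2 and 10 mM."""
    scores = {donor: 0.0 for donor in profiles}
    for a, b in combinations(profiles, 2):
        r = np.corrcoef(profiles[a], profiles[b])[0, 1]  # Pearson correlation of dose responses
        scores[a] += abs(r)                              # summed absolute correlations per donor
        scores[b] += abs(r)
    return scores

def rank_by_variability(profiles_per_gene):
    """Rank genes by the SD of their donor scores; the most variable genes come first."""
    sd = {gene: np.std(list(donor_scores(p).values()))
          for gene, p in profiles_per_gene.items()}
    return sorted(sd, key=sd.get, reverse=True)

# Hypothetical input: one dose-response profile per donor for each gene.
example = {"GENE_A": {f"D{i}": np.random.rand(4) for i in range(1, 6)}}
print(rank_by_variability(example))
```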
The top 1 % ranked genes (score *SD* \> 2.52) were selected for further analysis, since these exhibit the most interdonor variability (Table [1](#Tab1){ref-type="table"}).Table 1List of the top 1 % most variable genes based on Pearson correlation analysisEntrezGeneIDGene nameFunctional description according to GeneCards92ACVR2AKinase receptor*513ATP5DSubunit of mitochondrial ATP synthase*595CCND1Cyclin family*617BCS1LComplex III of the mitochondrial respiratory chain*988CDC5LCell cycle regulator important for G2/M transition1545CYP1B1Cytochrome P450 superfamily1611DAPMediator of programmed cell death2669GEMGTP-binding proteins, receptor-mediated signal transduction2766GMPRNADPH-dependent reductive deamination of GMP to IMP3276PRMT1Methyltransferase4302MLLT6Myeloid/lymphoid or mixed-lineage leukemia4615MYD88Myeloid differentiation primary response5201PFDN1Member of the prefolding beta subunit family5287PIK3C2BPI3-kinases play roles in signaling pathways involved in cell proliferation, oncogenic transformation, cell survival, cell migration, and intracellular protein trafficking5300PIN1Regulation of cell growth, genotoxic, and other stress responses, the immune response, induction and maintenance of pluripotency, germ cell development, neuronal differentiation, and survival5523PPP2R3ANegative control of cell growth and division5550PREPMaturation and degradation of peptide hormones and neuropeptides5584PRKCIProtective role against apoptotic stimuli, is involved in NF-kappa-B activation, cell survival, differentiation, and polarity and contributes to the regulation of microtubule dynamics in the early secretory pathway5696PSMB8Apoptosis, may be involved in the inflammatory response pathway5699PSMB10Involved in antigen processing to generate class I-binding peptides5796PTPRKCell growth, differentiation, mitotic cycle, and oncogenic transformation6612SUMO3Posttranslationally modify numerous cellular proteins and affect their metabolism and function, such as nuclear transport, transcriptional regulation, apoptosis, and protein stability6942TCF20Stimulates the activity of various transcriptional activators such as JUN, SP1, PAX6, and ETS1, suggesting a function as a co-activator7264TSTA3Cell--cell interactions, including cell--cell recognition; in cell--matrix interactions; in detoxification processes7572ZNF24Transcription repressor activity7965AIMP2Functions as a pro-apoptotic factor8270LAGE3???*8310ACOX3Desaturation of 2-methyl branched fatty acids in peroxisomes*8985PLOD3Hydroxylation of lysyl residues in collagen-like peptides9343EFTUD2A component of the spliceosome complex which processes precursor mRNAs to produce mature mRNAs*9361LONP1Mediates the selective degradation of misfolded, unassembled, or oxidatively damaged polypeptides in the mitochondrial matrix assembly of inner membrane protein complexes and participates in the regulation of mitochondrial gene expression and maintenance of the integrity of the mitochondrial genome*9470EIF4E2EIF4E2 gene promoter protein synthesis and facilitates ribosome binding by inducing the unwinding of the mRNA secondary structures10093ARPC4Regulation of actin polymerization and together with an activating nucleation-promoting factor (NPF) mediates the formation of branched actin networks10189THOC4Molecular chaperone. 
It is thought to regulate dimerization, DNA binding, and transcriptional activity of basic region-leucine zipper (bZIP) proteins10313RTN3Involved in membrane trafficking in the early secretory pathway10422UBAC1Required for poly-ubiquitination and proteasome-mediated degradation of CDKN1B during G1 phase of the cell cycle10598AHSA1May affect a step in the endoplasmic reticulum to Golgi trafficking10807SDCCAG3May be involved in modulation of TNF response10899JTBRequired for normal cytokinesis during mitosis. Plays a role in the regulation of cell proliferation11068CYB561D2Acting as an ubiquitin-conjugating enzyme, involved in the regulation of exit from mitosis, cell cycle, protein, ubiquitin-dependent proteolysis, electron transport11131CAPN11Remodeling of cytoskeletal attachments to the plasma membrane during cell fusion and cell motility, proteolytic modification of molecules in signal transduction pathways, degradation of enzymes controlling progression through the cell cycle, regulation of gene expression, substrate degradation in some apoptotic pathways, and an involvement in long-term potentiation11142PKIGPKA inhibitors; protein kinase A has several functions in the cell, including regulation of glycogen, sugar, and lipid metabolism11252PACSIN2Involved in linking the actin cytoskeleton with vesicle formation by regulating tubulin polymerization*11332ACOT7Catalyze the hydrolysis of acyl-CoAs to the free fatty acid and coenzyme A (CoASH), providing the potential to regulate intracellular levels of acyl-CoAs, free fatty acids, and CoASH*23243ANKRD28Involved in the recognition of phosphoprotein substrates23325KIAA1033Plays a key role in the fission of tubules that serve as transport intermediates during endosome sorting23558WBP2Involved in mediating protein--protein interactions through the binding of polyproline ligands26100WIPI2Probable early component of the autophagy machinery being involved in formation of preautophagosomal structures and their maturation into mature phagosomes26505CNNM3Probable metal transporter*26520TIMM9Mediate the import and insertion of hydrophobic membrane proteins into the mitochondrial inner membrane*27075TSPAN13Mediate signal transduction events that play a role in the regulation of cell development, activation, growth, and motility29105C16orf80???29927SEC61A1Plays a crucial role in the insertion of secretory and membrane polypeptides into the ER50640PNPLA8Phospholipases which catalyze the cleavage of fatty acids from membrane phospholipids51094ADIPOR1Regulates fatty acid catabolism and glucose levels51491NOP16Involved in ribosome biogenesis51504TRMT112Participates in both methylation of protein and tRNA species51523CXXC5Required for DNA damage-induced ATM phosphorylation, p53 activation, and cell cycle arrest. Involved in myelopoiesis*51706CYB5R1Involved in desaturation and elongation of fatty acids, cholesterol biosynthesis, drug metabolism, and, in erythrocyte, methemoglobin reduction*54187NANSFunctions in the biosynthetic pathways of sialic acids54606DDX56Implicated in a number of cellular processes involving alteration of RNA secondary structure such as translation initiation, nuclear and mitochondrial splicing, and ribosome and spliceosome assembly. May play a role in later stages of the processing of the preribosomal particles, leading to mature 60S ribosomal subunits. Has intrinsic ATPase activity54941RNF125E3 ubiquitin-protein ligase that acts as a positive regulator of T cell activation55062WIPI1May play a role in autophagy55111PLEKHJ1??? 
phospholipid binding55238SLC38A7Mediates sodium-dependent transport of amino acids55315SLC29A3Plays a role in cellular uptake of nucleosides, nucleobases, and their related analogs55647RAB20Plays a role in the maturation and acidification of phagosomes that engulf pathogens55700MAP7D1Mitotic spindle protein and member of the MAP7 (microtubule-associated protein 7) family of proteins55743CHFRFunctions in the antephase checkpoint by actively delaying passage into mitosis in response to microtubule poisons55898UNC45APlays a role in cell proliferation and myoblast fusion, binds progesterone receptor and HSP90, and acts as a regulator of the progesterone receptor chaperoning pathway56005C19orf10???*56267CCBL2Encodes an aminotransferase that transaminates kynurenine to form kynurenic acid*56910STARD7???57409MIF4GDFunctions in replication-dependent translation of histone mRNAs64754SMYD3Histone methyltransferase64787EPS8L2Is thought to link growth factor stimulation to actin organization, generating functional redundancy in the pathways that regulate actin cytoskeletal remodeling*64949MRPS26Mammalian mitochondrial ribosomal proteins are encoded by nuclear genes and help in protein synthesis within the mitochondrion*66036MTMR9Thought to have a role in the control of cell proliferation79056PRRG4??? Calcium ion binding80227PAAF1Involved in regulation of association of proteasome components80775TMEM177???89870TRIM15???91663MYADM??? Regulates the connection between the plasma membrane and the cortical cytoskeleton and so can control the endothelial inflammatory response*114971PTPMT1Is an essential intermediate in the biosynthetic pathway of cardiolipin, a mitochondrial-specific phospholipid regulating the membrane integrity and activities of the organelle*124583CANT1Functions as a calcium-dependent nucleotidase 127687C1orf122???135932TMEM139???140465MYL6BRegulatory light chain of myosin140606SELMMay function as a thiol-disulfide-oxidoreductase that participates in disulfide bond formation147007TMEM199???151613TTC14??? RNA binding155066ATP6V0E2Play an important role in processes such as receptor-mediated endocytosis, protein degradation, and coupled transport196383RILPL2Involved in cell shape and neuronal morphogenesis, positively regulating the establishment and maintenance of dendritic spines252839TMEM9May be involved in intracellular transport253461ZBTB38May be involved in the differentiation and/or survival of late postmitotic neurons375757C9orf119Required for double-strand break repair via homologous recombination389203C4orf52???100128750LOC100128750???100505687LOC100505687???The description of the functionality of the gene has been taken from GeneCards; genes involved in mitochondrial processes according to the MITOP2 database are in italic To enable biological interpretation, an overrepresentation analysis was performed on this set of variable genes, using the overrepresentation module of ConsensusPathDB (Kamburov et al. [@CR36]). A background list consisting of all genes passing the selection as described under 'transcriptomic data analysis' was used in this analysis. Subsequently, the correlation score matrices (see top left corner supplementary Figure 1) created for each of the top 1 % highly variable genes and all metabolites were used to find gene expression profiles that match metabolite profiles on an individual level. To do so, the interdonor correlations for all highly variable genes were correlated with the interdonor correlations from all metabolites (Pearson). 
A cutoff of >0.7 was used to define genes whose differences in expression levels mimicked the differences in metabolite levels between donors. The results of this analysis are summarized in Supplementary Table 3. To define genes related to mitochondrial processes, the top 1 % variable genes were compared to the mitochondrial reference gene set from MITOP2. This is a database which provides a list of human mitochondrial proteins linked to their gene names, found through computational prediction of signal sequences, but it also includes results from proteome mapping, mutant screening, expression profiling, protein--protein interaction, and cellular sub-localization studies (Elstner et al. [@CR21]). The MiMI plugin for Cytoscape was used to generate a network for all mitochondrial-related genes based on the nearest neighbor analysis (Fig. [3](#Fig3){ref-type="fig"}; Gao et al. [@CR24]). Only the nearest neighbors shared by more than one of the mitochondrial-related genes were taken into consideration.

Results {#Sec14}
=======

Transcriptomics {#Sec15}
---------------

Just over 10,000 genes were screened for interindividual variation in their responses toward APAP exposure by correlating their expression over dose. Standard deviations of these correlation scores showed a normal distribution. To ensure that only the most variable genes were used for further analyses, a short list was created of the top 1 % most variable genes (see Table [1](#Tab1){ref-type="table"}, *n* = 99, SD > 2.52). To define the functionality of the variable genes, an overrepresentation analysis was performed, i.e., a network consisting of the biological pathways containing the genes with the highest variability between donors in response to APAP exposure was defined (see Fig. [1](#Fig1){ref-type="fig"}).

Fig. 1 Network of the top 1 % most variable genes between donors after APAP exposure. The network was created based on a gene set overrepresentation analysis (ConsensusPathDB). Each node represents a biological pathway, the size of the node represents the number of genes included in the pathway (*bigger diameter* = larger # of genes), the *color* of the node represents its significance (*darker gray* = lower *p* value), and the *thickness* of the edge represents the amount of overlap between the connected nodes (*thicker line* = higher # of overlapping genes).

This overrepresentation network could be broken down into several parts; one large component appeared to be constructed of:

1. A large cluster with toll-like receptors (TLR), c-Jun N-terminal kinases (JNK), nuclear factor (NF)-κB, interleukin (IL)-1, p38, and cyclin D1-related pathways (encircled with a striped line).
2. A smaller cluster around the p75 neurotrophin receptor (p75 NTR) (encircled with a dotted line).
3. Two more components, involved in 'Wnt-signaling pathway and pluripotency' and 'BTG (B cell translocation gene) family proteins and cell cycle regulation.'

Additional parts consisted of components separated from the large cluster on 'leukotriene metabolism,' 'biosynthesis of unsaturated fatty acids,' 'amino sugar and nucleotide sugar metabolism,' and 'retinoic acid-inducible gene 1 (RIG-I) and melanoma differentiation-associated protein 5 (MDA5)-mediated induction of interferon (IFN)-alpha--beta.' A further network based on the mitochondrial-related genes from the top 1 % highly variable genes was created using nearest neighbor analysis in Cytoscape (Fig. [2](#Fig2){ref-type="fig"}).
This figure shows that transcription factors are the main binding element in the response of the mitochondrial-related genes showing high interindividual variation in gene expression response after APAP exposure.

Fig. 2 Network of highly variable mitochondrial-related genes. Nearest neighbor analysis was performed on all mitochondrial-related genes from the top 1 % highly variable gene list. Only the nearest neighbors shared by more than one of the variable genes were taken into account. *Square* nodes represent input genes, and *round* nodes represent the shared nearest neighbors.

Metabolomics {#Sec16}
------------

A broad spectrum of metabolites was measured in the medium, as shown in Supplementary Table 3 and Figure 3. In general, the variation between individuals was lower with respect to metabolite levels when compared to the variation in gene expression levels. To define how the variability between donors in gene expression is related to the variability in metabolite levels in these same donors, a Pearson-based correlation analysis between the top 1 % variable genes and all metabolites was performed (cutoff *R*^2^ > 0.7). Out of the 99 most variable genes, 91 could be linked to the variation in metabolites on an individual level, meaning that these 91 genes can at least partially explain the interindividual variation observed in the metabolites. In particular, hydroxy-APAP, methoxy-APAP, and the tentatively identified metabolite C~8~H~13~O~5~N-APAP-glucuronide showed strong correlations with genes on an individual level (*n* = 36, 36, and 51 correlating genes, respectively). Interestingly, C~8~H~13~O~5~N-APAP-glucuronide has previously been reported by Jetten et al. ([@CR34]) as a novel APAP metabolite, which could be detected in the in vivo human situation after low-dose APAP exposure. This metabolite could thus be confirmed in the current study in an in vitro human system consisting of primary human hepatocytes. Furthermore, a mass tentatively assigned to 3,3′-biacetaminophen (not detected previously) was also found. 3,3′-Biacetaminophen has been suggested to result from NAPQI reacting with APAP and is considered a reactive oxygen species (ROS) product (Chen et al. [@CR8]).

Discussion {#Sec17}
==========

The aim of this study is to evaluate the interindividual differences in gene expression changes and APAP metabolite formation in primary human hepatocytes of several donors (*n* = 5) exposed to a non-toxic to toxic APAP dose range. Interindividual variation in gene expression is a very common phenomenon; therefore, we have focused on the gene expression changes that are most different between individuals in response to APAP exposure. To do so, we created a short list consisting of the top 1 % most different genes based on correlation analysis (*n* = 99, see Table [1](#Tab1){ref-type="table"}). Expression levels of many genes/metabolites, including, but not limited to, cytochrome P450 enzymes, glucuronosyltransferases, sulfotransferases, and glutathione S-transferases, have been shown to influence the biotransformation processes of APAP (Zhao and Pickering [@CR71]). However, studies in general link baseline expression levels of these genes to APAP metabolism parameters, while in the current study we focus on response parameters after APAP exposure in order to explain interindividual variability.
To define the biological functionality of the genes with the highest variability between individuals (top 1 % list), a network of the pathways found by gene set overrepresentation analysis on this list was created (Fig. [1](#Fig1){ref-type="fig"}). This network shows a large cluster with TLRs, JNK, NF-κB, IL-1, p38, and cyclin D1-related pathways (encircled with a striped line). The TLR, JNK, and NF-κB pathways are all key regulators in the production of cytokines associated with inflammatory responses and the early stages of the development of hepatocarcinogenesis (Maeda [@CR46]). Furthermore, all of the above-mentioned pathways have been associated with the process of liver regeneration (Iimuro and Fujimoto [@CR31]). Hepatocytes rarely undergo proliferation in the liver under normal circumstances. However, proliferation can be triggered in response to loss of liver mass, for instance induced by liver resection, but also by toxin-induced hepatocyte trauma, as is the case with APAP. In both in vitro and in vivo studies, APAP has been shown to induce a persistent activation of JNK, contributing to hepatocellular necrosis (Gunawan et al. [@CR26]). Cross talk between JNK and NF-κB has been proposed as a mechanism through which JNK mediates cell death. Matsumura et al. ([@CR47]) showed that inhibition of NF-κB in the liver by APAP leads to amplification of JNK and as such shifts the balance from cell survival toward cell death. IL-1 stimulates both the JNK and NF-κB pathways, which increases cell signaling and interferes with the cell cycle (Matsuzawa et al. [@CR48]; Sanz-Garcia et al. [@CR58]). TLR plays a role in the expression of cytokines and hepatomitogens; in response to TLR activation, p38 is triggered, leading to cytokine- and stress-induced apoptosis (Matsuzawa et al. [@CR48]; Sanz-Garcia et al. [@CR58]). Finally, it has been indicated that NF-κB activation operates on cell growth through cyclin D1 expression (Guttridge et al. [@CR27]; Maeda [@CR46]). In acetaminophen-induced liver injury, sustained JNK activation, through NF-κB, is essential in inducing apoptosis (Hanawa et al. [@CR29]). Furthermore, it has been suggested that NAPQI-induced damage on its own is not enough to cause hepatocyte death after APAP dosing and that activation of signaling pathways involving JNK is necessary to lead to cell death (Hanawa et al. [@CR29]; Kaplowitz [@CR37]). The actual downstream targets of JNK that are involved in APAP-induced liver injury are still largely unknown; however, a role for mitochondrial proteins has recently been suggested (Zhou et al. [@CR72]).

Interestingly, 10 % of the genes from the top 1 % highly variable gene list consist of mitochondrial-related genes (see Table [1](#Tab1){ref-type="table"}; genes in italics). The majority of these genes are involved in metabolism-related processes [CCBL2 (Yu et al. [@CR70]), PTPMT1 (Shen et al. [@CR59]), ACOX3 (Cui et al. [@CR13]), ACOT7 (Fujita et al. [@CR23]), and CYB5R1 (Chae et al. [@CR7])], while others are part of the respiratory chain complexes in the mitochondria [ATP5D (Sotgia et al. [@CR60]) and BCS1L (Kotarsky et al. [@CR41])] or are involved in more structurally related processes such as protein synthesis [MRPS26 (Sotgia et al. [@CR60])] and mitochondrial matrix or membrane maintenance [LONP1 (Tian et al. [@CR62]) and TIMM9 (Sotgia et al. [@CR60]), respectively].
All but one (MRPS26) of the above-mentioned genes has been related to the toxic/necrotic effects of APAP in a study comparing the toxicity response to APAP in rats/mice with the response to APAP's far less toxic isomer N-acetyl-m-aminophenol \[AMAP; (Beyer et al. [@CR4])\]. In addition, ATP5D, MRPS26, LONP1, ACOT7, and TIMM9 were also shown to be regulated in HepG2 cells exposed to a toxic (10 mM) APAP dose for 12--72 h (unpublished results). Drug-induced liver injury has often been linked to regulation of the mitochondrial stress responses, which include the formation of ROS products, alterations in lipid metabolism, electron transport, cofactor metabolism, and the activation of pathways important in determining cell survival or death (Beyer et al. [@CR4]; Han et al. [@CR28]). The nearest neighbor analyses on all mitochondrial-related genes from the top 1 % highly variable gene list show that mainly transcription factors seem to be the binding element in the response of the mitochondrial-related genes (Fig. [2](#Fig2){ref-type="fig"}). The involvement of transcription factors has been suggested in the drug-induced stress response of mitochondria in hepatocytes (Han et al. [@CR28]). This indicates that interindividual variation exists in the response of mitochondrial-related genes, possibly explaining part of the differences between humans in mitochondrial-related APAP-induced toxicity responses and consequential liver damage levels. Furthermore, in the gene set overrepresentation network, another smaller network around p75NTR is present (Fig. [1](#Fig1){ref-type="fig"}, encircled with dotted line). p75NTR is a cell membrane receptor protein that has been associated with tumor and metastasis suppression (Khwaja et al. [@CR39]), similar to the cluster described above. Non-steroidal anti-inflammatory drugs (NSAIDs) are used to reduce inflammation and also act as analgesics by the inhibition of cyclooxygenase-2 (COX-2). However, high concentrations of some NSAIDs are able to reduce proliferation and induce apoptosis in cancer cells. Several molecular mechanisms have been proposed as possible mediators in the anticancer effects of NSAIDs, including p75NTR. Although APAP is not considered to be a real NSAID, due to its limited anti-inflammatory effects, APAP does affect similar pathways and works as an analgesic through COX-2 inhibition, which might explain why similar effects have been suggested for APAP (Bonnefont et al. [@CR5]). Two other components in the gene set overrepresentation network that are connected to the components described above are the 'Wnt-signaling pathway and pluripotency' and 'BTG family proteins and cell cycle regulation' (see Fig. [1](#Fig1){ref-type="fig"}). Both pathways can be related to APAP-induced toxicity-related effects. The stimulation of the Wnt pathway has been suggested to be beneficial after APAP-induced liver failure by stimulating liver regeneration (Apte et al. [@CR1]). This corresponds with the previously mentioned regulation of liver regeneration by the genes involved in the cluster around TLRs, JNK, NF-κB, IL-1, p38, and cyclin D1-related pathways. The BTG gene family has been associated with APAP hepatotoxicity before (Beyer et al. [@CR4]) and has a function in DNA-strand break repair (Choi et al. [@CR10]) and in the regulation of reactive oxygen species generation in the mitochondria (Lim et al. [@CR43]). Both APAP and NAPQI are known to covalently bind to DNA and cause DNA damage (Dybing et al. [@CR19]; Rannug et al.
[@CR51]), which possibly explains why this pathway is triggered by APAP exposure. As already described above, mitochondrial stress is an intricate part of the cellular response to APAP-induced oxidative stress (Hanawa et al. [@CR29]), which might also explain the response of the BTG-related pathway. These findings agree with the previously mentioned interindividual variation in mitochondrial genes. In addition, several other pathways not connected to the main cluster of pathways are included in the network of the gene set overrepresentation analysis. These are 'leukotriene metabolism' together with 'biosynthesis of unsaturated fatty acids,' 'amino sugar and nucleotide sugar metabolism' and 'RIG-I-MDA5-mediated induction of IFN-alpha--beta.' The first set of pathways (leukotrienes and fatty acids) probably represents the variation in the normal, therapeutic mechanism of APAP. In humans, unsaturated fatty acids are bioactivated through enzymatic oxygenation to, among others, prostaglandins and leukotrienes, which contribute to fever, pain, inflammation, and cancer development (Ricciotti and FitzGerald [@CR54]). APAP interferes with these processes, thereby suppressing these symptoms. Concerning the sugar metabolism pathway, carbohydrate homeostasis is essential for normal liver function. It is well known that during APAP-induced liver failure, these processes are severely affected (Record et al. [@CR53]). Finally, RIG-I-MDA5-mediated induction of IFN-alpha--beta is a process that has been linked to several liver pathologies like hepatitis A/B/C and hepatic steatosis (Kawai et al. [@CR38]; Toyoda et al. [@CR63]; Wei et al. [@CR66]). Although no apparent link with APAP is available in the literature, it seems that this process is somehow linked to an APAP-induced stress response. To determine how the variation in the above-mentioned genes can explain the interindividual variation in metabolism levels, a correlation analysis was performed between the top 1 % most variable genes and all metabolites. Glucuronidation and sulfation are the two major processes in APAP metabolism, resulting in APAP-glucuronide and APAP-sulfate, respectively, which are non-toxic APAP metabolites (see Fig. [3](#Fig3){ref-type="fig"}; Chen et al. [@CR8]). In addition, another less abundant route of APAP metabolism utilizes hydroxylation/methoxylation of APAP. Hydroxy-APAP and methoxy-APAP are oxidative metabolites formed during this route of APAP metabolism, and both these metabolites have been associated with the hepatotoxic effects of APAP (Chen et al. [@CR8]; Dahlin et al. [@CR15]; Wilson et al. [@CR67]). The variation in both these metabolites could be explained by a large proportion of the genes from the top 1 % most variable genes (*n* = 36 for both metabolites). It thus seems that the largest interindividual variation in gene expression responses after APAP exposure can be linked to the formation of toxic APAP metabolites. Interestingly, hydroxy/methoxy-derived metabolites could also be detected in vivo in humans after low-dose APAP exposure (Jetten et al. [@CR34]). The fact that these metabolites and their derivatives are detectable both in vivo and in vitro, even at APAP doses within the therapeutic range, and that their variation in levels between individuals can be linked to the variation in gene expression of genes related to toxicity-related effects of APAP exposure, indicates a role as potential key elements in the molecular mechanism behind APAP toxicity.
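To make the correlation screen referred to above concrete, the sketch below shows one way such a gene-metabolite analysis could be set up (the *R*~2~ \> 0.7 cutoff is the one stated in the Metabolomics section; the file names, table layout, and per-donor response scores are illustrative assumptions rather than the authors' actual pipeline):

```python
# Rough sketch of the gene-metabolite correlation screen (R^2 > 0.7), assuming
# per-donor response scores in two tables indexed by donor. File names, column
# layout and score definition are illustrative, not the authors' pipeline.
import pandas as pd
from scipy.stats import pearsonr

gene_scores = pd.read_csv("top1pct_gene_scores.csv", index_col="donor")     # donors x genes
metab_scores = pd.read_csv("metabolite_scores.csv", index_col="donor")      # donors x metabolites

R2_CUTOFF = 0.7
links = []
for gene in gene_scores.columns:
    for metab in metab_scores.columns:
        r, _ = pearsonr(gene_scores[gene], metab_scores[metab])
        if r ** 2 > R2_CUTOFF:
            links.append({"gene": gene, "metabolite": metab, "r2": round(r ** 2, 3)})

links = pd.DataFrame(links, columns=["gene", "metabolite", "r2"])
# e.g. which genes track a single metabolite across donors:
print(links[links["metabolite"] == "hydroxy-APAP"].sort_values("r2", ascending=False))
```

Pairs passing the cutoff correspond to the gene-metabolite links discussed above, e.g. the 36 genes whose response tracks hydroxy-APAP across donors.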
Additionally, another metabolite, C~8~H~13~O~5~N-APAP-glucuronide, also shows correlation with a large set of the top 1 % variable genes (*n* = 51). This metabolite was described as new in the low-dose in vivo APAP exposure study of Jetten et al. ([@CR34]). Since this metabolite is relatively unknown, further studies on its exact route of metabolism and toxic potency could lead to further insight into the molecular mechanism behind APAP toxicity for the same reasons as explained for hydroxy/methoxy-APAP above.

Fig. 3 Schematic visualization of the APAP metabolic pathway. The log-transformed metabolite levels for each donor at each dose, corrected for 0 mM, are visualized. *Gray boxes* not measured/detected. Increase in a metabolite is pictured from *green* (no increase, equals a numerical value of 0 on a log scale) to *yellow*, *orange*, and *red* (high increase, maximum value = 5 on a log scale). Figure adapted from Jetten et al. ([@CR34]) (color figure online)

In summary, the biological processes in which the genes with the highest variability in expression between individuals after APAP exposure are involved can be linked to APAP-toxicity-related processes like liver regeneration, inflammatory responses, and mitochondrial stress responses. Also, processes related to hepatocarcinogenesis, cell cycle, and drug efficacy show large interindividual variation after APAP exposure. In addition, most of the genes with high variability between individuals after APAP exposure can be linked to variability in levels of metabolites (hydroxy/methoxy-APAP and C~8~H~13~O~5~N-APAP-glucuronide). Possibly, these findings could help explain the differences seen in susceptibility to APAP toxicity in the in vivo situation. Furthermore, they might give an indication for where the factors causing variability in susceptibility toward APAP-induced liver failure could be found.

Electronic supplementary material {#Sec18}
=================================

Below is the link to the electronic supplementary material.

Supplementary Figure 1 **Representative correlation plot**. X-axis: APAP dose range, Y-axis: Log2 gene/metabolite expression level, lines represent responses over dose per donor, arrows represent performed Pearson correlations, Corr. Score table (top left) shows the sum of absolute correlation score per donor (= absolute sum of value of arrows per donor). Supplementary material 1 (PDF 202 kb)

Supplementary Table 1 **Donor demographics**. Supplementary material 2 (PDF 177 kb)

Supplementary Table 2 **Identified masses derived from UPLC-TOF/MS after 24-h exposure to APAP.** For each of the detected metabolites its full name, abbreviation, mass, retention, and composite molecular groups are shown. Supplementary material 3 (PDF 201 kb)

Supplementary Table 3 **Correlation between interindividual variation in gene expression levels and metabolite expression levels.** Pearson correlation analysis between the donor-specific correlation scores of log-transformed expression values of the top 1% highly variable genes and all metabolites. Correlation coefficients \> 0.70 are in bold. Supplementary material 4 (PDF 300 kb)

This work was supported by the Dutch Ministry of Public Health, Welfare and Sports (VWS) as a part of the Assuring Safety without Animal Testing (ASAT) initiative.

Conflict of interest {#d30e2040}
====================

The authors declare that they have no conflict of interest.
Q: Azure Logic Apps HTTP Unauthorized

I'm trying to call a SuccessFactors API to update some data from a Logic App, but I keep running into an "Unauthorized" error. How can I get some more details about this error? I can't see the input/output for this action, so it's a bit difficult to debug.

Kind regards,
Tim

A: I ended up mimicking the call in an online REST test tool. That gave me the error I was looking for. SuccessFactors has user-level settings that only allow logins from certain IPs. Once I added the Logic App's outbound IPs, it worked.
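For anyone hitting the same wall: replaying the request outside Logic Apps usually surfaces the response body that the connector hides. A minimal sketch using Python's requests is below; the host, entity and credential format are placeholders (SuccessFactors endpoints and auth setups vary per instance), so substitute whatever your Logic App action actually sends:

```python
# Sketch: replay the call outside Logic Apps to see the full 401 response.
# The host, entity set and credential format below are placeholders -- use
# whatever URL, method, headers and auth your Logic App action actually sends.
import requests

BASE_URL = "https://apiXX.successfactors.example/odata/v2"   # placeholder host
AUTH = ("username@companyId", "password")                     # placeholder Basic credentials

resp = requests.get(                       # use the same verb as the failing action
    f"{BASE_URL}/User('someUserId')",      # placeholder entity
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=30,
)

print(resp.status_code)
print(resp.headers.get("WWW-Authenticate"))   # often hints at the auth scheme expected
print(resp.text)                              # the body usually says *why* it was rejected
```

If the body or the WWW-Authenticate header points to a login or IP restriction, whitelisting the Logic App's outbound IP addresses in SuccessFactors, as described in the answer above, should clear the 401.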
Bob Schieffer: Good evening, I’m Bob Schieffer of CBS News… [ audience cheers ] and welcome to the third and final Presidential debate of the 2008 election. I’ll be your moderator tonight, for what we hope will be a lively and substantive discussion… [ checks his notes ] between the candidates: Senator Barack Obama of Illinois and Senator John McCain of Arizona. Gentlemen, let’s begin. Obviously, with another 700-point plunge in the Dow today, this economy is in trouble. Each of you have plans to address the problem, but tell us why yours is better than your opponent’s. We’ll start with Senator McCain. Sen. John McCain: Bob, let me begin by saying, a few days ago, Senator Obama was out in Ohio, and he had an encounter with a man named Joe, who’s a plumber. We’ll call him “Joe the Plumber”. Now, Joe wants to buy the business where he’s worked for many years. And he looked at Senator Obama’s tax plan, and saw that he was going to pay much higher taxes. Which would leave him unable to employ people, and achieve the American dream. So my question is, why would you want to do that to Joe the Plumber? What did Joe the Plumber ever do to you, that you want to raise his taxes? Of all the people to go after in this way, why single out Joe the Plumber? Sen. Barack Obama: First of all, look: uh — I don’t recall meeting the individual you’re referring to. But let me say this: nearly all small businesses earn less than $250,000 a year. And if Joe’s business falls into that category, he should know that, under my plan, uh, his taxes will not go up. Not one cent. Sen. John McCain: Senator, I don’t think most people believe that. I know Joe the Plumber doesn’t. Because he told me so. And frankly, I trust Joe the Plumber a lot more than I trust your plan. Because Joe the Plumber is a straight shooter, and one of the finest people I’ve ever known. And I’ll tell you something else: He’s got a lot of good ideas on how to fix this economy. And, as President, I’ll be relying on his advice and expertise. Bob Schieffer: Let’s turn to a related topic. Over the last several years, we’ve seen budget deficits increase dramatically, with some experts saying this year’s could reach nearly a trillion dollars. What will either of you do to bring government spending under control? Senator Obama? Sen. Barack Obama: Uh, look — uh, obviously, Bob, all government programs need to be examined to see if they’re necessary, or if they’re working, or if they could do the job more efficiently. But we’ve got to cut these programs carefully, with a — a — a scalpel, not a hatchet. Sen. John McCain: [ grinning boradly ] The fact is, Senator, only one of us has a record of fighting wasteful government spending, and it’s me. As President, I would go after the bloated budgets with a GIANT hatchet, and THEN use a scalpel. Or I might take the advice of my friend, Joe the Plumber, and use a plunger. Sen. John McCain: Senator, Joe the Plumber lives in a cigar box, under my bed, with our friend Simon. Bob Schieffer: So… Joe the Plumber would be very tiny, then. Sen. John McCain: Joe stands about three-and-a-half inches tall. Except when he’s upset. Then he can become as big as a house! He’s my best friend. Bob Schieffer: [ uncomfortable ] Alright, let’s turn to a new topic… Sen. John McCain: Bob, could I just add, that Simon is invisible? Bob Schieffer: Of course. [ a beat ] Gentlemen, over the last few weeks, the tone of this campaign has become increasingly nasty. 
Senator Obama, in describing your opponent, your campaign has used words like “erratic”, “out of touch”, “lying”, “losing his bearings”, “senile”, “dementia”, “nursing home”, “decrepit”, and “at death’s door”. Senator McCain, your ads have featured terms such as “disrespectful”, “dangerous”, “foreign”, “sleeper agent”, and “uncircumcised”. Are you both comfortable with this level of discourse? Sen. Barack Obama: Uh, look, Bob: uh, obviously, in any campaign, harsh things are going to be said. And certainly, both of our campaigns have now and then crossed the line. But, I have to say; I am troubled by some of the things said about me at my opponent’s rallies. Things like “traitor”, “kill him”, and “off with his head”. Uhhh — and, unfortunately, Senator McCain has yet to condemn these comments. Sen. John McCain: Bob, as to the “off with his head” comment, that was shouted at a rally we held at a Renaissance Fair. The gentleman had too much mead and he was removed by security. Sen. Barack Obama: Uhhhh — at that same event, I was also denounced as a “sorcerer”. Sen. John McCain: At any rally of nearly ninety — uh, seventy-five people, you’re going to get a couple of crackers. We all know that. But, just a few moments ago, my opponent slandered my very best friend in the world, Joe the Plumber, by calling him “imaginary”. Would the Senator like to apologize to Joe for that remark? Bob Schieffer: Alright, we have time for one more question. Let’s talk about the people each of you would bring into government, and their qualifications. Specifically, your running mates. Senator Obama? Sen. Barack Obama: For nearly 35 years, uh, Joe Biden has established a reputation for honesty, uh, compassion, and a mastery of the issues affecting this nation. Uhh, I can’t think of anyone more qualified to assume the Presidency, should anything happen to me. Bob Schieffer: Senator McCain. Sen. John McCain: Bob, I’ve known Senator Biden for nearly 25 years. I think he’s a good man, but let me say something here: he has never been particularly nice to Joe the Plumber. I think Joe the Plumber resents that. In fact, I KNOW he does. But as to my own running mate, Governor Palin, I couldn’t be more proud of her. Now, on the question of people I’d bring into government, let me say here tonight, that, as President, I will be the first to add a cabinet-level Department of Plumbing. And you know how I’m going to tap for that post? Bob Schieffer: Joe the Plumber? Sen. John McCain: Bingo! Joe the Plumber. You’re damn straight. Sen. Barack Obama: Uhh, what about your mutual friend Simon, who also lives in the cigar box under your box? Sen. John McCain: Senator, Simon cannot serve in the Cabinet, because Simon… is a unicorn. And I think you know that. Bob Schieffer: [ shakes his head ] And that concludes tonight’s third and final Presidential debate. From all of us here at Hofstra University, goodnight and Live from New York, it’s Saturday Night”! Author: Don Roy King Don Roy King is directing his fourteenth season of Saturday Night Live. That work has earned him nine Emmys and thirteen nominations. Additionally, he has been nominated for thirteen DGA Awards and won in 2014, 2016, 2017, 2018 and 2019. Mr. King is also the creative director of Broadway Worldwide which brings theatrical events to theaters. The company has produced Smokey Joe’s Café; Putting It Together with Carol Burnett; Jekyll & Hyde; and Memphis, all directed by Mr. King. He completed the screen capture of Broadway's Romeo & Juliet in 2013. 
Amazon is currently engaged in a feud with Google over voice-enabled products and access to video services, but that’s hardly Amazon’s only silly battle. The company kept its Prime Video service away from >> With Netflix doubling down on original content, the streaming giant continues to amass subscribers at an impressive rate. Just last quarter, for instance, Netflix added 5.3 million new subscribers, bringing its cumulative total >>
Pentagon spokesman Navy Capt. Jeff Davis said that the U.S. role in the raid was strictly in an advisory capacity and that U.S. forces did not accompany the Somali troops to the objective. Davis would not say what that objective was, though he did specify that the aircraft used in the raid were American.

In 2013, U.S. Navy SEALs attempted to capture an al-Shabab leader during an amphibious raid on the coastal town of Barawe. The assault quickly went wrong when the American commando force was discovered after coming ashore. In the ensuing gun battle, the SEALs left behind some equipment but were pulled out safely.

The latest U.S. raid comes just days after a series of airstrikes hit an al-Shabab training camp approximately 120 miles north of Mogadishu. The strikes, carried out in waves by both drones and fixed-wing aircraft, killed more than 150 al-Shabab fighters — an unprecedented number of casualties for the U.S.-led air campaign against the group.

U.S. officials said the training camp was targeted because the fighters posed an “imminent threat” to U.S. and African Union forces in the region. The United States has a small contingent of advisers in the war-torn country in a bid to help train, advise and assist their African Union counterparts. The small detachment of approximately 50 U.S. advisers was sent to Somalia in 2013. Their arrival marked the first U.S. troop deployment to the country since 1993, when U.S. Army Rangers and Delta Force members left the country following the failure of Operation Gothic Serpent, known to many as the “Black Hawk Down” incident.

In the past month, al-Shabab carried out a failed suicide attack on an Emirati airliner and claimed responsibility for ambushing and killing more than 100 Kenyan troops stationed in Somalia.

The terror group has waxed and waned as a regional threat in recent years, as a series of U.S. strikes in 2014 targeted al-Shabab’s leadership following the deadly Kenyan Westgate mall siege in 2013. The siege on the popular upscale mall in Kenya’s capital of Nairobi killed 67 people and wounded more than 100. Al-Shabab has also experienced varying degrees of infighting after the Islamic State pushed the group to break ties with al-Qaeda and pledge allegiance to ISIS leader Abu Bakr al-Baghdadi. Despite some defections, al-Shabab remains tightly aligned with the terrorist group responsible for carrying out the Sept. 11, 2001, attacks.
Tell Nursing Homes to Stop Stealing Seniors’ Rights

Fine print buried in nursing home admissions contracts is depriving nursing home residents and their families of their constitutional rights. During the incredibly stressful nursing home admissions process, many nursing home corporations push residents and their families into signing away their right to go to court — even in instances when residents suffer severe neglect, serious injuries, death or sexual and physical abuse. Nursing homes should focus on helping elderly individuals live in dignity and thrive with the support of caring professionals and not on protecting themselves when they harm residents with poor care.

Sign the petition: Forced arbitration clauses in nursing home contracts must be banned in order to restore the rights of residents and their families.

Forced arbitration clauses are unfair terms that block seniors from accessing the court system. Instead, they are pushed into private, biased arbitration forums where decisions are often made by the industry’s handpicked arbitration firms, where there are no appeals, no accountability and no transparency. The Centers for Medicare and Medicaid Services (CMS), the federal agency that administers Medicare and Medicaid, is currently considering ending forced arbitration for nursing homes that receive public funding (nearly all). CMS can require nursing homes that want to receive taxpayer funds to stop forcing residents to sign away their rights. It can restore residents’ rights and choice by permitting arbitration only after disputes arise, making arbitration truly voluntary. The nursing home industry should no longer be allowed to force victims into a rigged system because they signed a document during the stress and confusion of admission to their facilities.

Claims against nursing homes can be heartbreaking. The frail elderly living there rely upon caretakers hired by nursing home corporations for their medical, physical and other daily needs. Neglect and abuse cause tremendous suffering in the final years of life. If anything, the rights of today’s seniors in nursing homes should be strengthened, not weakened. Let’s send a strong message to CMS and urge the agency to prioritize the rights of seniors over corporate interests. Add your name right now. Tell CMS: America’s seniors should not have to sacrifice their rights to protect corporate profits.
Hopefully these new 2000 rail cars can be modified to transport solar and wind energy to bring these Energies of the Future to locations where they are currently unavailable. Even better if the locomotives are corn powered. Or, just grow the corn on the trains using the solar energy they are transporting and we'll have perpetual energy trains. China took our solar tech, mass produced panels at a loss to flood the market, and drove the innovators out of business. Now we're stuck with 5-year old tech because China is the only game in town, and they're not innovating. If China competed fairly, the innovators would have found ways to get panel production costs below what they currently are. The price of panels would be even lower than the Chinese price, which is losing them money. It also doesn't help that while the government subsidizes solar power companies to an extent, it's a paltry level of support compared to the oil company subsidies. It's unlikely that solar by itself will be able to replace carbon fuels anytime soon, but it could easily provide a much greater percentage of power than it does if it got a little support. I could have sworn that US companies got more than $0 in drilling rights in Iraq, but you are suggesting otherwise. Would you prefer that the solar power companies and everyone else get the same tax reduction as well? You do realize that no US oil corporations were awarded any of the top tier contracts in Iraq as the result of the war? Oh, I'm sure there will be some US sub-contractors who make some money by providing engineering and operational services. And how does the CIA overthrow a country? Do they have some secret million man army to do it? Do they say "do what I say or we will nuke you?" During the cold war and even today the smaller nations always play one greater power off another to make sure they get the best The Expensing of Exploration and Development Costs Credit allows investors in oil or gas exploration and development to “expense” (to deduct from their corporate or individual income tax) intangible drilling costs (IDCs). IDCs include wages, the costs of using machinery for grading and drilling and the cost of unsalvageable materials in constructing wells. These costs are “intangible” in comparison to costs for salvageable expenditures (such as pipes or casings) or costs related to a Again, none of the above is a subsidy. Handing half a billion dollars to a company to make solar panels, or ethanol, or windmills, or anything else that cannot possibly pay for itself, is a subsidy. Taking less of someone's money at the point of a gun (i.e., taxation) is not a subsidy. Again, none of the above is a subsidy. Handing half a billion dollars to a company to make solar panels, or ethanol, or windmills, or anything else that cannot possibly pay for itself, is a subsidy. Taking less of someone's money at the point of a gun (i.e., taxation) is not a subsidy. You don't understand.
We need convenient scapegoats and enemies to set up the whole "proletariat vs bourgeois" class-warfare meme used by Socialists and Communists to such great effect in Europe and Asia, to generate enough popular anger and envy to enable a societal destabilization, eventually leading, hopefully, to an overthrow of Western Capitalism. Making poor people poorer while demonizing the wealthy helps further this Marxist narrative while making poor people more desperate, and therefor more easily p Ah, thank you, Comrade, for explaining it to me. Now I love Big Brother. An extra ration of beets and vodka for you, for "educating" me! And now, I must leave to attend the Occupy Wyatt Oil protest. That idiot sister of our glorious friend, James Taggart, is getting ready to kill half the population of Denver by running a train down that damned dangerous rail line of hers... "It also doesn't help that while the government subsidizes solar power companies to an extent; it's a paltry level of support compared to the oil company subsidies." Seriously? We subsidise the research into solar panels, we subsidise the production of solar panels, we subsidise the purchase of solar panels, and we subsidise the price utility companies pay for the electricity the panels produce. We also subsidise the training of the workers that install and maintain the solar panels. You forgot the lack of a carbon tax or cap and trade system for co2 emissions. That's a massive subsidy of today's oil companies by future generations who will be paying to re-do the economy as a whole in a world of greatly warmed climate, shifted arable zones, an acidified ocean, and enviro-wars. unless the levels of taxation are assymetrical; i.e. unless there are more categories of deductions and greater levels of deductions for one industrial sector compared to other sectors of the economy. Then it would be a subsidy. I think if you study the details there is a a subsidy by this definition for the fossil fuel industry. You are right. Cap and trade is easy to render impotent by changing definitions of things to the point of meaninglessness. It's full of potential accounting tricks that would ensure that no real progress was made on the only number that matters in this topic: the rate of change of CO2 concentration in the atmosphere. A hefty carbon tax is much simpler, stands a chance if implemented at being effective in reducing fossil-fuel use, and thus is predictably politically impossible. There is no 'solar tech'. China mass-produces standard panels for the most part, and those plants produce large quantities of toxic byproducts in doing so which they pretty much just dump into the environment. The solar industry in China also seriously over-produced their panels and wound up with piles of them as other countries (aka Europe, Germany in particular) pulled back subsidies for various reasons. Solar panel companies in the US simply cannot compete against that, not unless you want to create env But maybe, if we borrow ever more money from China, we can figure out some way to make simple, labor-intensive products like solar panels as cheaply as China - the first steps will be to lower worker wages and forget about those silly environmental concerns... In order to beat China at their own game we need to become China. How come all the oil and gas companies keep expanding like this and all the solar companies keep going bankrupt? For one, Koch Bros., and others like them, pretty much own the Republican party in the USA. 
It comes in handy when you want to make sure emerging technology doesn't threaten your empire. Secondly, too many people just don't care and more would rather eat up lies and witch-hunt drama because it's easier than figuring out the facts. This keeps seats in the senate full of corporate sponsored asshats who in turn pass votes to allow things like alternative energy solutions to be derailed before posing a threat to the oil empire. dropped the price of natural gas that even coal plants ramp down. Solar was barely approaching the old price point of power generation and then fracking hit. Combined with the nuclear scare and countries exploring alternatives the money landed on wind power because its currently a better option than solar. Fracking has been used since the 1950's. The only thing that really has increased the use of fracking is that oil prices have gone up enough to support spending several million dollars per well to complete the job. Now that the shale gas people have done such an excellent job dropping wells, there is a relative glut and the price goes down. Enjoy it while you can - won't last terribly long. Big problem with shale (either oil or gas) is that the depletion rates are quite high - you pump out a well in years "How come all the oil and gas companies keep expanding like this and all the solar companies keep going bankrupt?" Because solar panels are not cost-effective, not yet anyway, and the massive government subsidies are being poured into "production facilities" not basic research. It reminds me of the bee in "The Bee Movie" that kept slamming into the glass [youtu.be] because he didn't understand the concept of glass "Maybe this time," "Maybe this time," "this time," this time..." Because the oil and gas companies already have distribution and logistic systems in place. Plus solar panels can be described as risky when used to power critical systems. Home solar power use would require homeowners to spend quite a bit of money to convert. Just like alternative fuels for automobiles would require car owners to either purchase new cars capable of using bio-filters and natural gas as a power source. I honestly beleive there will come a time when oil use will decline but it will most likel If you want cheap solar panels, you'll have to smuggle them from China. Those idiots in the federal government can't decide which political constituencies to pander to, so they've slapped ridiculous tariffs on solar cells made in china. How come all the oil and gas companies keep expanding like this and all the solar companies keep going bankrupt? Wasn't it supposed to be the other way around? Damned hippies lied to me again. Maybe you should try listening to something beyond fox "news" since the solar industry has been growing, and not that many of the companies in the industry have gone belly-up. That said, the hydrocarbon industry is spending a lot of money to get politicians to continue giving them subsidies while seeking to prevent renewable energy developer from succeeding... this is of course OIL being trucked not water or electricity. basically whether a given "wire" is buried or not is a function with many variables. 
1. who bribed who (over the years)
2. who owns which bit of land
3. ground makeup
4. sea level offset
5. presence of Historical Significance
6. presence of Possible Endangered Species
7. phase of the moon
8. color of the Primary Local Admin's desk blotter
9. amount of money needed to do the job
10 - N. (various other unnamed but Vitally Important Things)

Heroin addiction is far less a problem than the legal consequences creating a vastly profitable black market. Government REACTION to heroin creates far worse problems than would a supply of cheap, clean smack.

Is there an inherent limit to the amount of (low-entropy) energy (disequilibria) that should be directed by humans to human ends? You would seem to have history on your side if you were to say that we have used the plentiful fossil-fuel energy of the last century or so very unwisely, but does that inherently mean that we can not ever be responsible "fire users"? The problem comes with the definition of over-consumption. What is that definition? True, it is easy to show that we a

This strikes me as a bit odd and inefficient. Given the volume being produced, wouldn't a pipeline make more sense? It'd be safer and cheaper in the long run. Of course, given the troubles the Keystone XL pipeline is having, maybe it's more economic to truck it than to try and get through all the red tape for a pipeline.

I agree a pipeline would be more efficient in the long run if the supply keeps flowing. However, given how much the environmental movement hates pipelines, fracking, and oil in general, they have created a dis-economy where business people have to make the rational decision to use an inefficient solution because the red tape is less cumbersome. Now, if we had regulators that were not ideologically against the industry they were trying to regulate or a product of regulatory capture by a few large players maybe w

You know, we have the technology to build pipelines that won't leak. We use them in chip fabs, where they use materials that can slaughter everyone for miles downwind. It would increase the cost by more than a factor of two, but the environmental cost of leaks is nothing to sneer at.

The current state of affairs has managed to keep gas prices down in the US because the crude price is depressed. With the pipeline, gas prices will go up because the price of WTI will rise to the same level as Brent. So I say don't build the pipeline. All it'll do is increase the profits of the oil producers, at the cost of the consumers.

$20 per BBL is WAY off for rail. Heck, my dad pays about that per tote (15BBL) for local delivery via truck and he's a small player. I'd have to ask him what he pays per rail car but I can guarantee you it's a lot less than $20/BBL.

Taxpayers: Sure, you can have your pipeline... if you want to pay for it yourself. Oil Companies: Fuck that, without your money paying for it, that shit's too expensive! We'll just buy more trucks to run on existing infrastructure. People: Look, gas is down to $2.75 a gallon!

Taxpayers: Sure, you can have your pipeline... if you want to pay for it yourself. Oil Companies: Fuck that, without your money paying for it, that shit's too expensive! We'll just buy more trucks to run on existing infrastructure. People: Look, gas is down to $2.75 a gallon! FTFY.

Oddly, I can find no real evidence that the oil companies want anything from the taxpayers other than to get the permits approved to allow them to build the pipeline.

Taxpayers: Sure, you can have your pipeline... if you want to pay for it yourself.
Oil Companies: Fuck that, without your money paying for it, that shit's too expensive! We'll just buy more trucks to run on existing infrastructure. People: Look, gas is down to $2.75 a gallon! FTFY.

Oddly, I can find no real evidence that the oil companies want anything from the taxpayers other than to get the permits approved to allow them to build the pipeline.

I'd be interested in seeing a good analysis of exactly WHY something like the Keystone XL pipeline (or the OP's huge number of railcars) is necessary for shipping crude to the Gulf Coast. I realize that 80% of the US's refineries are on the Gulf, but, given a couple of things:
- The tar sands are *relatively* clumped together in Alberta
- After a re-alignment of oil sources, the vast majority of tar sands oil will be used domestically (Canada and USA)
- building a refinery is expensive, but we need the extra capacity anyway
- refining close to the tar sands extraction site reduces the total requirements for transport of the final products (i.e. the oil source to refinery to recipient); that is, not only do you reduce the total volume of end product being produced (as refining 1 gallon of crude produces under 1 gallon of end-products), but you can ship end products essentially directly from the tar sands to end-users. Given that the distance from the Gulf to the end-users is no shorter than from Alberta to the end-users, this saves a whole lot of transportation costs for the crude oil.

What's the cost differential between building the Keystone XL vs a large refinery (or a couple) up in Alberta? Something similar goes for the various Shale gas extractions - I would think that it would be far better to build power generation (since that's where 90% of the gas is going to go) right near the gas fields, and then spend money on an upgraded Power Grid, rather than try to ship the gas around to existing power stations. Basically, I think we're falling into the trap where we just assume that transportation is less expensive than co-location of end use. I'd far rather pay for another refinery and gas power stations (added capacity) AND a better power grid, than cough up the same amount for just another couple of pipelines (which, frankly, all they add is environmental disaster potential).

Even with the pipeline, refining close to the point of extraction really makes sense for tar sands. The stuff is heavy and nasty, and the "dilbit" or diluted bitumen that has to be made out of it so it can flow is much, much worse than normal crude. It's more corrosive to the pipe and more noxious and toxic when spilled.

The raw output from the tar sands is way too heavy to be refinable; it still needs to be diluted with light natural gas liquids first... the same stuff they use to dilute it for pipeline transport, minus a few refinement stages (btw, the US exports some of the lighter NGLs coming out of the shales to Canada for this purpose). But it makes no sense to refine the output up there far away from consumers. What are they going to do, truck the gasoline down several thousand miles? They'd go out of business try

Just depends on which is easier to transport - oil or electricity. One would think that pushing electrons would be more efficient and cost effective than hauling hydrogen-carbon chains across the continent, but that isn't necessarily true. The other part of the equation is that it's hella expensive to build a refinery from scratch.
AFAIK, there have not been any completely new refineries built in the 'developed' world for decades - there is simply too much opposition for it. The only new refineries are in China, Indonesia, Saudi Arabia and Venezuela and similar places where you can push through large developments easier. I'd far rather pay for another refinery and gas power stations (added capacity) AND a better power grid, than cough up the same amount for just another couple of pipelines (which, frankly, all they add is environmental disaster potential). - you know what's funny, years of comments here, explaining that the real infrastructure cannot be built artificially by government, it has to come from the actual market need, and then a company wants to put down a freaking pipe and everybody is yelling about it. A pipe is infrastructure, it's PRODUCTIVE infrastructure, unlike bridges and overpasses to nowhere. Environmentally speaking, the companies that are building the pipeline should have to buy or rent the land they are using from private owners, not "Yelling about it" is a part of people deciding on what to build. The GP just gave his opinion on what he wants to build. - once the GP decides to actually invest some money, make a bet on his investment that he'll make a profit and put his money where his mouth is (because according to you that's what he wants to do) then he can simply do it. My point is that once somebody wants to make an investment, the government shouldn't be in a position to stop it. It just seems odd. This is more a business article than anything else, and there is nothing new and cool about buying rail cars. Our domestic pipeline infrastructure has been on a building spree for a decade. If any of you are investors, that's been the basis for the Oil&Gas MLP buildout that has been maturing at a very fast clip since the mid-2000's, continued right through the crash, and continues to mature at a modestly fast clip today and probably for another 10 years at least before the core-buildout slows down. Generally speaking transport for OIL and NGLs (Natural Gas Liquids) can start out in tankers and rail cars but ultimately cost efficiency requires a pipeline to be built. And you have no choice for natural gas... its pipeline or nothing pretty much since compression to CNG or LNG levels is way too expensive (and way too dangerous) for domestic transport. But it takes several years to build a long pipeline, costs billions of dollars, and requires both shippers and receivers to enter into long term 10-year+ contracts with guaranteed volume flow or investors wouldn't finance the pipeline in the first place. Because no actual revenue flows until the pipeline is complete. There are a dozen major producing areas but in layman's terms the bottleneck is mainly in the North->South direction these days. EastWest has capacity now (though numerous major cities on the east coast still have bottlenecks). Existing pipelines in the north-south direction are essentially maxed out. The Keystone pipeline saga is your typical talking-head/exaggerated/public-unaware crap. Pipelines criss-cross the U.S. already, there are already numerous (but maxed out) pipelines coming down from Canada all the way to the gulk, and Canada is a major trading partner whos major oil and gas reserves are essentially land-locked. Sure, they have some transport to the coasts for export, but they need to be able to drop down through the border into the U.S. 
markets and we also have an export market of our own going northward of light NGLs, which the Canadians use for a multitude of purposes in their oil-sands operations. It's as much a diplomatic issue with our northern neighbor as it is anything else.

I think you are a bit confused over the scale involved, but the main problem is transport cost when it comes to NG. The producers are already underwater with current well-head NG prices for pure NG plays (called dry-gas wells); any sort of transport other than a pipeline would bankrupt them.

I'm not saying it's true. But what if it is? What are the implications? What if these petroleum corporations would put their billions of dollars into researching and developing technology that's just waiting to be used? These people who claim to be witnesses were trusted to the utmost, including some who were trusted with nuclear launch authorization codes. No nuts would be given a job like that.

Close to Gulf oil plays, yes. But it also makes exporting pork-subsidized crude and natural gas to more lucrative foreign markets mere child's play. That's the main reason for Keystone XL transporting corrosive tar sands instead of refined products: the option to export it instead of lowering domestic US prices by even five cents. There's a reason Warren Buffett bought a railroad and GE (makes the engines...)... and a reason the Administration blocked a pipeline for their best-buddy in Omaha who spews the "I need to be taxed more" message for them.
Talkback: Wind

It's been very windy here in Dorset too. Luckily I haven't had anything damaged, just a couple of pots blown over. More high winds are forecast for tonight so I've been outside to check that everything is secured; hopefully tomorrow everything should be where it should be.

Very windy in Bristol too - blew my Xmas wreath on the front door into the garden but no damage done. I'll fix it again when the wind subsides. I'm giving a pre-Xmas lunch to some of my fellow volunteers from the Botanic Gardens tomorrow and they will have to put up with "those onions are from my lovely crop this year" and "I had such a glut of strawberries I had to freeze a load and some are in your trifle".

I agree, Pippa, the produce from your own efforts pays for the extra effort.

Greetings all - it's been very windy in Sussex too - much of this week on the nursery has been spent re-tying trees to their lines, clearing up broken glass from greenhouses and removing fallen/broken trees - it's heartbreaking when such lovely plants get destroyed in bad weather, but all part of it I guess. Happy Marion - please may I have some trifle?! We're waiting for the hard frosts to come and kill off our beautiful exotic plants......I think I'm a bit of an old softie really - they're like my babies!

It's always windy here in Lincolnshire! Some of my pots had blown over after the last very windy blast. I find that if I remove the pot feet from all my pots, weigh them down with bricks and also move them to as sheltered a site as possible, although they don't look very elegant it usually does the trick. Parts of my garden have "wind tunnels". I have a clematis in a pot growing through a climbing rose and in the earlier part of the year, after we had a windy spell, the clematis got hopelessly tangled up in the rose and I couldn't untangle it. It did flower well but I shall move it in the early part of next year to a less windy spot. The climbing rose will have to flower on its own.

The latest windy spell seems to be much worse with lots of damage done, especially to polytunnels. The lime trees especially seem susceptible to wind. My drive is strewn with broken branches from the two trees next door. Safer to stay indoors in this kind of weather.
6 Midweek low carb treats that we simply cannot resist There's no need to go without that something sweet, when you've got delicious recipes like these. by: Uhuru Plaatjies | 23 Aug 2017 It's that time of the week again where your diet might be getting old, and your cravings have come back. You need a little pick me up and your alternatives aren't working out. It happens to the best of us... And sometimes the only thing that will satisfy that craving is to actually surrender to the sweet treat! Here are six low carb treats that you won't need to feel guilty about.
Guatemala City Railway Museum

The Guatemala City Railway Museum, officially Museo del Ferrocarril FEGUA, is located in the former main railway station in Guatemala City, Guatemala. The museum has a collection of steam and diesel locomotives, passenger carriages and other rolling stock and items connected with the railway. It also has information about the historic development of the railways in Guatemala.

Gallery

See also
Rail transport in Guatemala

External links
www.museofegua.com - official website of the museum
At Lonely Planet
Youtube video

Category:Museums in Guatemala
Category:Rail transport in Guatemala
Category:Railway museums in Guatemala
Caregiver of the Quarter ​​​A tradition of great caregiving means that we take care of our own as well as we take care you and your family. We reward our local CAREGivers with CAREGiver of the Quarter awards to show our appreciation for all they do. Home Instead, Inc. also recognizes regional CAREGivers of the Year and one special CAREGiver who shows exceptional dedication is the recipient of the national award — The Mary Steibel CAREGiverSM​ of the Year Award. This program inspires CAREGivers to strive for excellence. It creates a sense of pride and rewards their hard work, compassionate care and service to others. We are thrilled to introduce our outstanding award winning CAREGivers:​ Missy Klusmeyer ​Missy has been caregiving with us since March 2017 and immediately impressed us with her compassionate heart, great communication with the office regarding clients’ needs, willingness to go above and beyond in caring for her clients, and eagerness to fill shifts when needed. She has a servant’s heart and has been delivering Meals On Wheels for years before becoming a caregiver. One of Missy’s clients said, “Missy is very smart, a quick learner, pays attention to my husband, a good communicator & alert to changes, caring, prompt and always on time. She’s one of my husband’s best CGs.” Keyana Mapp ​Keyana models the values we look for in a CAREGiver: loyalty, dependability, reliability, good communication; great work ethic and a true compassion for serving others. She is studying to be a lawyer, but definitely has the heart of a caregiver! Her clients love her and a couple of client’s family members have said, “She’s always on time, efficient in all she does, solves problems to make things run smoothly and has become a huge part of our family. Keyana is lovely, fantastic, patient, and knows just what is needed.” Keyana was recognized as our Spring 2017 CAREGiver of the Quarter. Jeanne Boedges Jeanne has been caregiving with Home Instead for a year, but has been a caregiver to seniors for much longer than that. Jeanne is very energetic and has lots of spunk and a go-get-'em attitude! She has a great rapport with her clients and is a great communicator with the office regarding schedule and client needs. She is always dependable, reliable and is good for her word! On top of all that, she is a fabulous cook! We are excited to name Jeanne as our Autumn CAREGiver of the Quarter 2016.​ Jane Messman ​Jane has been caregiving for nearly two decades. She is one of seven siblings, four of whom are caregivers, so it’s definitely a family trait! Jane is loved by not only her clients, but also her clients’ families! She did a storyboard with one of her dementia clients, stepped up to provide hygiene care for a hospice client who was in a facility but not receiving the appropriate personal care by the facility staff, made follow-up calls with a medical equipment company for a client, among other clients that she clicked with immediately. Jane is quick to help us out when we’re in a pinch and is great at communicating with the office regarding changes in her clients and their needs. Jane was recognized as our Winter 2017 CAREGiver of the Quarter! Gina Hetlage ​Gina has been caregiving for over a year. She has previous experience working in Demonstration Sales (think Costco samples!), as well as in office settings. Gina has recently found her niche as a caregiver to seniors, and her heart for providing compassionate care is very obvious, not only to us, but to her clients and their family members as well. 
After one of her clients passed away, the daughter-in-law was so impressed with Gina's compassionate care that she wrote a heartfelt thank-you note to our office for matching Gina up with her father-in-law. Gina is on top of communicating with the office and willingly accepts sub shifts when available. We are grateful for Gina's heart for seniors, as well as her employee work ethics and are happy to recognize her as our Summer CAREGiver of the Quarter 2016. Sheri Althof ​Sheri's background is as an RN. She began working as a CAREGiver for Home Instead in April of 2015 and quickly became a favorite of her clients. She always has a smile on her face and a pep in her step! Sheri's clients always appreciate Sheri's loving care and her willingness to go above and beyond the call of duty. Our office staff always appreciates Sheri's great communication, reliability, dependability and all-around awesome attitude to help out whenever and however she can. With Sheri's background as an RN, she was a natural choice when choosing someone to demonstrate and coach new CAREGivers in providing personal care to clients. It is our honor to name Sheri Althof as our Autumn 2015 CAREGiver of the Quarter. Linda Boelter Linda has been with Home Instead Senior Care since September 2015 and was an instant hit with her clients! She has such a gentle, quiet, warm spirit about her which makes her clients and their families feel very comfortable and genuinely cared for.​ Linda has an excellent record for communicating with our office and on numerous occasions has accepted short-notice shifts to help out in a time of need. The only challenge we have with Linda is that we haven't figured out how to clone her! Congratulations to Linda Boelter for being recognized as our April 2016 CAREGiver of the Quarter! Christine Black Christine has a huge heart and a warm smile for her clients. She is so easy-going and has a calming effect on those for whom she is caring. Christine's deep faith motivates her and definitely has a positive impact on her ability to treat others with love and compassion​. She is willing to step in and help whenever and however she can. Christine began her career as a Home Instead CAREGiver in August 2014 and has proven herself to be steady, reliable and trustworthy. We are excited to name Christine as our Winter 2016 CAREGiver of the Quarter! Nancy Skinner ​Nancy served with her husband for over 26 years as a missionary in Russia and Finland. When they moved back to the States, she told her husband that she wanted to find a job serving others. For this very reason she makes an excellent caregiver to her clients. Nancy has many times picked up extra shifts for us when we have an emergency shift arise. Nancy is quick to communicate with us, and is one of the first to sign up for our monthly continuing education trainings as well. It's an honor to name Nancy as our Summer 2015 Caregiver of the Quarter! Theresa Dobson ​Theresa is always an instant hit with her clients! She's one of those caregivers whose clients "fight" over who gets her more! Theresa is loved not only by her clients, but by their family members as well. Theresa's heart for seniors is as big as they come. She has no problem going the extra mile for her clients and has gone above and beyond with working short-notice or overnight shifts as well. Theresa was our April 2015 Caregiver of the Quarter. 
Carolyn Korb Carolyn's experience with raising a special needs son gives her the compassionate heart necessary in being a CAREGiver to seniors, especially those living with Alzheimer's Disease. She has so much patience with her clients and is willing to do whatever it takes to make their lives easier. She goes above and beyond with each of her clients (and their pets!) and lives the Home Instead mission of enhancing their lives. To Carolyn, they are not just clients, they're like family. Carolyn was chosen as our January 2015 Caregiver of the Quarter. Susie David Susie is a natural when it comes to caregiving! She has such a selfless heart for taking care of others, always putting her clients' needs above her own. Every single one of Susie's clients has loved her so much that some have even become a bit "jealous" when they found out she had other clients! Susie has often worked seven days a week in order to serve each of her clients. She is quick to accept short-notice shifts and has even been willing to drive farther distances to help a client in need. Susie was our Caregiver of the Quarter in October 2014. Bettye Babb Bettye had been retired from the workforce for 5 years when she began working for us as a CAREGiver. She does considerable volunteer work in congregational care through her local church, and is presently coordinator for that ministry. Bettye says, "My work background has been primarily in office type situations, but my heart is in caring for others." Bettye was chosen to be our very first CAREGiver of the Quarter in April 2014.
<?xml version="1.0" encoding="utf-8"?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx"> <edmx:DataServices> <Schema Namespace="Microsoft.Test.OData.Services.ODataWCFService" xmlns="http://docs.oasis-open.org/odata/ns/edm"> <Term Name="IsBoss" Type="Edm.Boolean"/> <ComplexType Name="Address"> <Property Name="Street" Type="Edm.String" Nullable="false"/> <Property Name="City" Type="Edm.String" Nullable="false"/> <Property Name="PostalCode" Type="Edm.String" Nullable="false"/> </ComplexType> <ComplexType Name="HomeAddress" BaseType="Microsoft.Test.OData.Services.ODataWCFService.Address"> <Property Name="FamilyName" Type="Edm.String"/> </ComplexType> <ComplexType Name="CompanyAddress" BaseType="Microsoft.Test.OData.Services.ODataWCFService.Address"> <Property Name="CompanyName" Type="Edm.String" Nullable="false"/> </ComplexType> <EnumType Name="AccessLevel" IsFlags="true"> <Member Name="None" Value="0"/> <Member Name="Read" Value="1"/> <Member Name="Write" Value="2"/> <Member Name="Execute" Value="4"/> <Member Name="ReadWrite" Value="3"/> </EnumType> <EnumType Name="Color"> <Member Name="Red" Value="1"/> <Member Name="Green" Value="2"/> <Member Name="Blue" Value="4"/> </EnumType> <EnumType Name="CompanyCategory"> <Member Name="IT" Value="0"/> <Member Name="Communication" Value="1"/> <Member Name="Electronics" Value="2"/> <Member Name="Others" Value="4"/> </EnumType> <EntityType Name="Person"> <Key> <PropertyRef Name="PersonID"/> </Key> <Property Name="PersonID" Type="Edm.Int32" Nullable="false"/> <Property Name="FirstName" Type="Edm.String" Nullable="false"/> <Property Name="LastName" Type="Edm.String" Nullable="false"/> <Property Name="MiddleName" Type="Edm.String"/> <Property Name="HomeAddress" Type="Microsoft.Test.OData.Services.ODataWCFService.Address"/> <Property Name="Home" Type="Edm.GeographyPoint" SRID="4326"/> <Property Name="Numbers" Type="Collection(Edm.String)" Nullable="false"/> <Property Name="Emails" Type="Collection(Edm.String)"/> <NavigationProperty Name="Parent" Type="Microsoft.Test.OData.Services.ODataWCFService.Person" Nullable="false"/> </EntityType> <EntityType Name="Customer" BaseType="Microsoft.Test.OData.Services.ODataWCFService.Person"> <Property Name="City" Type="Edm.String" Nullable="false"/> <Property Name="Birthday" Type="Edm.DateTimeOffset" Nullable="false"/> <Property Name="TimeBetweenLastTwoOrders" Type="Edm.Duration" Nullable="false"/> <NavigationProperty Name="Orders" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Order)"/> <NavigationProperty Name="Company" Type="Microsoft.Test.OData.Services.ODataWCFService.Company" Nullable="false" Partner="VipCustomer"/> </EntityType> <EntityType Name="Employee" BaseType="Microsoft.Test.OData.Services.ODataWCFService.Person"> <Property 
Name="DateHired" Type="Edm.DateTimeOffset" Nullable="false"/> <Property Name="Office" Type="Edm.GeographyPoint" SRID="4326"/> <NavigationProperty Name="Company" Type="Microsoft.Test.OData.Services.ODataWCFService.Company" Nullable="false" Partner="Employees"/> </EntityType> <EntityType Name="Product"> <Key> <PropertyRef Name="ProductID"/> </Key> <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/> <Property Name="Name" Type="Edm.String" Nullable="false"/> <Property Name="QuantityPerUnit" Type="Edm.String" Nullable="false"/> <Property Name="UnitPrice" Type="Edm.Single" Nullable="false"/> <Property Name="QuantityInStock" Type="Edm.Int32" Nullable="false"/> <Property Name="Discontinued" Type="Edm.Boolean" Nullable="false"/> <Property Name="UserAccess" Type="Microsoft.Test.OData.Services.ODataWCFService.AccessLevel"/> <Property Name="SkinColor" Type="Microsoft.Test.OData.Services.ODataWCFService.Color"/> <Property Name="CoverColors" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Color)" Nullable="false"/> <NavigationProperty Name="Details" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.ProductDetail)"> <ReferentialConstraint Property="ProductID" ReferencedProperty="ProductID"/> </NavigationProperty> </EntityType> <EntityType Name="ProductDetail"> <Key> <PropertyRef Name="ProductID"/> <PropertyRef Name="ProductDetailID"/> </Key> <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/> <Property Name="ProductDetailID" Type="Edm.Int32" Nullable="false"/> <Property Name="ProductName" Type="Edm.String" Nullable="false"/> <Property Name="Description" Type="Edm.String" Nullable="false"/> <NavigationProperty Name="RelatedProduct" Type="Microsoft.Test.OData.Services.ODataWCFService.Product"/> <NavigationProperty Name="Reviews" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.ProductReview)"> <ReferentialConstraint Property="ProductID" ReferencedProperty="ProductID"/> <ReferentialConstraint Property="ProductDetailID" ReferencedProperty="ProductDetailID"/> </NavigationProperty> </EntityType> <EntityType Name="ProductReview"> <Key> <PropertyRef Name="ProductID"/> <PropertyRef Name="ProductDetailID"/> <PropertyRef Name="ReviewTitle"/> <PropertyRef Name="RevisionID"/> </Key> <Property Name="ProductID" Type="Edm.Int32" Nullable="false"/> <Property Name="ProductDetailID" Type="Edm.Int32" Nullable="false"/> <Property Name="ReviewTitle" Type="Edm.String" Nullable="false"/> <Property Name="RevisionID" Type="Edm.Int32" Nullable="false"/> <Property Name="Comment" Type="Edm.String" Nullable="false"/> <Property Name="Author" Type="Edm.String" Nullable="false"/> </EntityType> <EntityType Name="Order"> <Key> <PropertyRef Name="OrderID"/> </Key> <Property Name="OrderID" Type="Edm.Int32" Nullable="false"/> <Property Name="OrderDate" Type="Edm.DateTimeOffset" Nullable="false"/> <Property Name="ShelfLife" Type="Edm.Duration"/> <Property Name="OrderShelfLifes" Type="Collection(Edm.Duration)"/> <NavigationProperty Name="LoggedInEmployee" Type="Microsoft.Test.OData.Services.ODataWCFService.Employee" Nullable="false"/> <NavigationProperty Name="CustomerForOrder" Type="Microsoft.Test.OData.Services.ODataWCFService.Customer" Nullable="false"/> <NavigationProperty Name="OrderDetails" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.OrderDetail)"/> </EntityType> <EntityType Name="OrderDetail"> <Key> <PropertyRef Name="OrderID"/> <PropertyRef Name="ProductID"/> </Key> <Property Name="OrderID" Type="Edm.Int32" Nullable="false"/> <Property Name="ProductID" 
Type="Edm.Int32" Nullable="false"/> <Property Name="OrderPlaced" Type="Edm.DateTimeOffset" Nullable="false"/> <Property Name="Quantity" Type="Edm.Int32" Nullable="false"/> <Property Name="UnitPrice" Type="Edm.Single" Nullable="false"/> <NavigationProperty Name="ProductOrdered" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Product)"/> <NavigationProperty Name="AssociatedOrder" Type="Microsoft.Test.OData.Services.ODataWCFService.Order" Nullable="false"/> </EntityType> <EntityType Name="Department"> <Key> <PropertyRef Name="DepartmentID"/> </Key> <Property Name="DepartmentID" Type="Edm.Int32" Nullable="false"/> <Property Name="Name" Type="Edm.String" Nullable="false"/> <Property Name="DepartmentNO" Type="Edm.String"/> <NavigationProperty Name="Company" Type="Microsoft.Test.OData.Services.ODataWCFService.Company" Nullable="false" Partner="Departments"/> </EntityType> <EntityType Name="Company" OpenType="true"> <Key> <PropertyRef Name="CompanyID"/> </Key> <Property Name="CompanyID" Type="Edm.Int32" Nullable="false"/> <Property Name="CompanyCategory" Type="Microsoft.Test.OData.Services.ODataWCFService.CompanyCategory"/> <Property Name="Revenue" Type="Edm.Int64" Nullable="false"/> <Property Name="Name" Type="Edm.String"/> <Property Name="Address" Type="Microsoft.Test.OData.Services.ODataWCFService.Address"/> <NavigationProperty Name="Employees" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Employee)" Partner="Company"/> <NavigationProperty Name="VipCustomer" Type="Microsoft.Test.OData.Services.ODataWCFService.Customer" Nullable="false" Partner="Company"/> <NavigationProperty Name="Departments" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Department)" Partner="Company"/> <NavigationProperty Name="CoreDepartment" Type="Microsoft.Test.OData.Services.ODataWCFService.Department" Nullable="false"/> </EntityType> <EntityType Name="PublicCompany" BaseType="Microsoft.Test.OData.Services.ODataWCFService.Company" OpenType="true"> <Property Name="StockExchange" Type="Edm.String"/> <NavigationProperty Name="Assets" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Asset)" ContainsTarget="true"/> <NavigationProperty Name="Club" Type="Microsoft.Test.OData.Services.ODataWCFService.Club" Nullable="false" ContainsTarget="true"/> <NavigationProperty Name="LabourUnion" Type="Microsoft.Test.OData.Services.ODataWCFService.LabourUnion" Nullable="false"/> </EntityType> <EntityType Name="Asset"> <Key> <PropertyRef Name="AssetID"/> </Key> <Property Name="AssetID" Type="Edm.Int32" Nullable="false"/> <Property Name="Name" Type="Edm.String"/> <Property Name="Number" Type="Edm.Int32" Nullable="false"/> </EntityType> <EntityType Name="Club"> <Key> <PropertyRef Name="ClubID"/> </Key> <Property Name="ClubID" Type="Edm.Int32" Nullable="false"/> <Property Name="Name" Type="Edm.String"/> </EntityType> <EntityType Name="LabourUnion"> <Key> <PropertyRef Name="LabourUnionID"/> </Key> <Property Name="LabourUnionID" Type="Edm.Int32" Nullable="false"/> <Property Name="Name" Type="Edm.String"/> </EntityType> <Action Name="AddAccessRight" IsBound="true"> <Parameter Name="product" Type="Microsoft.Test.OData.Services.ODataWCFService.Product" Nullable="false"/> <Parameter Name="accessRight" Type="Microsoft.Test.OData.Services.ODataWCFService.AccessLevel"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.AccessLevel"/> </Action> <Action Name="IncreaseRevenue" IsBound="true"> <Parameter Name="p" Type="Microsoft.Test.OData.Services.ODataWCFService.Company" 
Nullable="false"/> <Parameter Name="IncreaseValue" Type="Edm.Int64"/> <ReturnType Type="Edm.Int64" Nullable="false"/> </Action> <Action Name="ResetAddress" IsBound="true" EntitySetPath="person"> <Parameter Name="person" Type="Microsoft.Test.OData.Services.ODataWCFService.Person" Nullable="false"/> <Parameter Name="addresses" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Address)" Nullable="false"/> <Parameter Name="index" Type="Edm.Int32" Nullable="false"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.Person" Nullable="false"/> </Action> <Action Name="Discount" IsBound="true" EntitySetPath="products"> <Parameter Name="products" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Product)" Nullable="false"/> <Parameter Name="percentage" Type="Edm.Int32" Nullable="false"/> <ReturnType Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Product)" Nullable="false"/> </Action> <Action Name="Discount"> <Parameter Name="percentage" Type="Edm.Int32" Nullable="false"/> </Action> <Action Name="ResetBossEmail"> <Parameter Name="emails" Type="Collection(Edm.String)" Nullable="false"/> <ReturnType Type="Collection(Edm.String)" Nullable="false"/> </Action> <Action Name="ResetBossAddress"> <Parameter Name="address" Type="Microsoft.Test.OData.Services.ODataWCFService.Address" Nullable="false"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.Address" Nullable="false"/> </Action> <Action Name="ResetDataSource"/> <Function Name="GetEmployeesCount" IsBound="true"> <Parameter Name="p" Type="Microsoft.Test.OData.Services.ODataWCFService.Company" Nullable="false"/> <ReturnType Type="Edm.Int32" Nullable="false"/> </Function> <Function Name="GetProductDetails" IsBound="true" EntitySetPath="product/Details" IsComposable="true"> <Parameter Name="product" Type="Microsoft.Test.OData.Services.ODataWCFService.Product" Nullable="false"/> <Parameter Name="count" Type="Edm.Int32"/> <ReturnType Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.ProductDetail)" Nullable="false"/> </Function> <Function Name="GetRelatedProduct" IsBound="true" EntitySetPath="productDetail/RelatedProduct" IsComposable="true"> <Parameter Name="productDetail" Type="Microsoft.Test.OData.Services.ODataWCFService.ProductDetail" Nullable="false"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.Product" Nullable="false"/> </Function> <Function Name="GetDefaultColor" IsComposable="true"> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.Color"/> </Function> <Function Name="GetPerson" IsComposable="true"> <Parameter Name="address" Type="Microsoft.Test.OData.Services.ODataWCFService.Address" Nullable="false"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.Person" Nullable="false"/> </Function> <Function Name="GetPerson2" IsComposable="true"> <Parameter Name="city" Type="Edm.String" Nullable="false"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.Person" Nullable="false"/> </Function> <Function Name="GetAllProducts" IsComposable="true"> <ReturnType Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Product)" Nullable="false"/> </Function> <Function Name="GetBossEmails"> <Parameter Name="start" Type="Edm.Int32" Nullable="false"/> <Parameter Name="count" Type="Edm.Int32" Nullable="false"/> <ReturnType Type="Collection(Edm.String)" Nullable="false"/> </Function> <Function Name="GetProductsByAccessLevel"> <Parameter Name="accessLevel" 
Type="Microsoft.Test.OData.Services.ODataWCFService.AccessLevel" Nullable="false"/> <ReturnType Type="Collection(Edm.String)" Nullable="false"/> </Function> <Function Name="GetActualAmount" IsBound="true"> <Parameter Name="giftcard" Type="Microsoft.Test.OData.Services.ODataWCFService.GiftCard" Nullable="false"/> <Parameter Name="bonusRate" Type="Edm.Double"/> <ReturnType Type="Edm.Double" Nullable="false"/> </Function> <Function Name="GetDefaultPI" IsBound="true" EntitySetPath="account/MyPaymentInstruments"> <Parameter Name="account" Type="Microsoft.Test.OData.Services.ODataWCFService.Account" Nullable="false"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.PaymentInstrument"/> </Function> <Action Name="RefreshDefaultPI" IsBound="true" EntitySetPath="account/MyPaymentInstruments"> <Parameter Name="account" Type="Microsoft.Test.OData.Services.ODataWCFService.Account" Nullable="false"/> <Parameter Name="newDate" Type="Edm.DateTimeOffset"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.PaymentInstrument"/> </Action> <Function Name="GetHomeAddress" IsBound="true" IsComposable="true"> <Parameter Name="person" Type="Microsoft.Test.OData.Services.ODataWCFService.Person" Nullable="false"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.HomeAddress" Nullable="false"/> </Function> <Function Name="GetAccountInfo" IsBound="true" IsComposable="true"> <Parameter Name="account" Type="Microsoft.Test.OData.Services.ODataWCFService.Account" Nullable="false"/> <ReturnType Type="Microsoft.Test.OData.Services.ODataWCFService.AccountInfo" Nullable="false"/> </Function> <ComplexType Name="AccountInfo" OpenType="true"> <Property Name="FirstName" Type="Edm.String" Nullable="false"/> <Property Name="LastName" Type="Edm.String" Nullable="false"/> </ComplexType> <EntityType Name="Account"> <Key> <PropertyRef Name="AccountID"/> </Key> <Property Name="AccountID" Type="Edm.Int32" Nullable="false"/> <Property Name="Country" Type="Edm.String" Nullable="false"/> <Property Name="AccountInfo" Type="Microsoft.Test.OData.Services.ODataWCFService.AccountInfo"/> <NavigationProperty Name="MyGiftCard" Type="Microsoft.Test.OData.Services.ODataWCFService.GiftCard" ContainsTarget="true"/> <NavigationProperty Name="MyPaymentInstruments" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.PaymentInstrument)" ContainsTarget="true"/> <NavigationProperty Name="ActiveSubscriptions" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Subscription)" ContainsTarget="true"/> <NavigationProperty Name="AvailableSubscriptionTemplatess" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Subscription)"/> </EntityType> <EntityType Name="GiftCard"> <Key> <PropertyRef Name="GiftCardID"/> </Key> <Property Name="GiftCardID" Type="Edm.Int32" Nullable="false"/> <Property Name="GiftCardNO" Type="Edm.String" Nullable="false"/> <Property Name="Amount" Type="Edm.Double" Nullable="false"/> <Property Name="ExperationDate" Type="Edm.DateTimeOffset" Nullable="false"/> <Property Name="OwnerName" Type="Edm.String"/> </EntityType> <EntityType Name="PaymentInstrument"> <Key> <PropertyRef Name="PaymentInstrumentID"/> </Key> <Property Name="PaymentInstrumentID" Type="Edm.Int32" Nullable="false"/> <Property Name="FriendlyName" Type="Edm.String" Nullable="false"/> <Property Name="CreatedDate" Type="Edm.DateTimeOffset" Nullable="false"/> <NavigationProperty Name="TheStoredPI" Type="Microsoft.Test.OData.Services.ODataWCFService.StoredPI" Nullable="false"/> <NavigationProperty 
Name="BillingStatements" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.Statement)" ContainsTarget="true"/> <NavigationProperty Name="BackupStoredPI" Type="Microsoft.Test.OData.Services.ODataWCFService.StoredPI" Nullable="false"/> </EntityType> <EntityType Name="CreditCardPI" BaseType="Microsoft.Test.OData.Services.ODataWCFService.PaymentInstrument"> <Property Name="CardNumber" Type="Edm.String" Nullable="false"/> <Property Name="CVV" Type="Edm.String" Nullable="false"/> <Property Name="HolderName" Type="Edm.String" Nullable="false"/> <Property Name="Balance" Type="Edm.Double" Nullable="false"/> <Property Name="ExperationDate" Type="Edm.DateTimeOffset" Nullable="false"/> <NavigationProperty Name="CreditRecords" Type="Collection(Microsoft.Test.OData.Services.ODataWCFService.CreditRecord)" ContainsTarget="true"/> </EntityType> <EntityType Name="StoredPI"> <Key> <PropertyRef Name="StoredPIID"/> </Key> <Property Name="StoredPIID" Type="Edm.Int32" Nullable="false"/> <Property Name="PIName" Type="Edm.String" Nullable="false"/> <Property Name="PIType" Type="Edm.String" Nullable="false"/> <Property Name="CreatedDate" Type="Edm.DateTimeOffset" Nullable="false"/> </EntityType> <EntityType Name="Statement"> <Key> <PropertyRef Name="StatementID"/> </Key> <Property Name="StatementID" Type="Edm.Int32" Nullable="false"/> <Property Name="TransactionType" Type="Edm.String" Nullable="false"/> <Property Name="TransactionDescription" Type="Edm.String" Nullable="false"/> <Property Name="Amount" Type="Edm.Double" Nullable="false"/> </EntityType> <EntityType Name="CreditRecord"> <Key> <PropertyRef Name="CreditRecordID"/> </Key> <Property Name="CreditRecordID" Type="Edm.Int32" Nullable="false"/> <Property Name="IsGood" Type="Edm.Boolean" Nullable="false"/> <Property Name="Reason" Type="Edm.String" Nullable="false"/> <Property Name="CreatedDate" Type="Edm.DateTimeOffset" Nullable="false"/> </EntityType> <EntityType Name="Subscription"> <Key> <PropertyRef Name="SubscriptionID"/> </Key> <Property Name="SubscriptionID" Type="Edm.Int32" Nullable="false"/> <Property Name="TemplateGuid" Type="Edm.String" Nullable="false"/> <Property Name="Title" Type="Edm.String" Nullable="false"/> <Property Name="Category" Type="Edm.String" Nullable="false"/> <Property Name="CreatedDate" Type="Edm.DateTimeOffset" Nullable="false"/> </EntityType> <EntityContainer Name="InMemoryEntities"> <EntitySet Name="People" EntityType="Microsoft.Test.OData.Services.ODataWCFService.Person"> <NavigationPropertyBinding Path="Parent" Target="People"/> </EntitySet> <Singleton Name="Boss" Type="Microsoft.Test.OData.Services.ODataWCFService.Person"> <NavigationPropertyBinding Path="Parent" Target="People"/> </Singleton> <EntitySet Name="Customers" EntityType="Microsoft.Test.OData.Services.ODataWCFService.Customer"> <NavigationPropertyBinding Path="Orders" Target="Orders"/> <NavigationPropertyBinding Path="Parent" Target="People"/> </EntitySet> <Singleton Name="VipCustomer" Type="Microsoft.Test.OData.Services.ODataWCFService.Customer"> <NavigationPropertyBinding Path="Orders" Target="Orders"/> <NavigationPropertyBinding Path="Parent" Target="People"/> <NavigationPropertyBinding Path="Company" Target="Company"/> </Singleton> <EntitySet Name="Employees" EntityType="Microsoft.Test.OData.Services.ODataWCFService.Employee"> <NavigationPropertyBinding Path="Parent" Target="People"/> <NavigationPropertyBinding Path="Company" Target="Company"/> </EntitySet> <EntitySet Name="Products" 
EntityType="Microsoft.Test.OData.Services.ODataWCFService.Product"> <NavigationPropertyBinding Path="Details" Target="ProductDetails"/> </EntitySet> <EntitySet Name="ProductDetails" EntityType="Microsoft.Test.OData.Services.ODataWCFService.ProductDetail"> <NavigationPropertyBinding Path="RelatedProduct" Target="Products"/> <NavigationPropertyBinding Path="Reviews" Target="ProductReviews"/> </EntitySet> <EntitySet Name="ProductReviews" EntityType="Microsoft.Test.OData.Services.ODataWCFService.ProductReview"/> <EntitySet Name="Orders" EntityType="Microsoft.Test.OData.Services.ODataWCFService.Order"> <NavigationPropertyBinding Path="LoggedInEmployee" Target="Employees"/> <NavigationPropertyBinding Path="CustomerForOrder" Target="Customers"/> <NavigationPropertyBinding Path="OrderDetails" Target="OrderDetails"/> <Annotation Term="Core.ChangeTracking"> <Record> <PropertyValue Property="Supported" Bool="true"/> <PropertyValue Property="FilterableProperties"> <Collection> <PropertyPath>OrderID</PropertyPath> </Collection> </PropertyValue> <PropertyValue Property="ExpandableProperties"> <Collection> <PropertyPath>OrderDetails</PropertyPath> </Collection> </PropertyValue> </Record> </Annotation> </EntitySet> <EntitySet Name="OrderDetails" EntityType="Microsoft.Test.OData.Services.ODataWCFService.OrderDetail"> <NavigationPropertyBinding Path="AssociatedOrder" Target="Orders"/> <NavigationPropertyBinding Path="ProductOrdered" Target="Products"/> </EntitySet> <EntitySet Name="Departments" EntityType="Microsoft.Test.OData.Services.ODataWCFService.Department"> <NavigationPropertyBinding Path="Company" Target="Company"/> </EntitySet> <Singleton Name="Company" Type="Microsoft.Test.OData.Services.ODataWCFService.Company"> <NavigationPropertyBinding Path="Employees" Target="Employees"/> <NavigationPropertyBinding Path="VipCustomer" Target="VipCustomer"/> <NavigationPropertyBinding Path="Departments" Target="Departments"/> <NavigationPropertyBinding Path="CoreDepartment" Target="Departments"/> </Singleton> <Singleton Name="PublicCompany" Type="Microsoft.Test.OData.Services.ODataWCFService.Company"> <NavigationPropertyBinding Path="Microsoft.Test.OData.Services.ODataWCFService.PublicCompany/LabourUnion" Target="LabourUnion"/> </Singleton> <Singleton Name="LabourUnion" Type="Microsoft.Test.OData.Services.ODataWCFService.LabourUnion"/> <ActionImport Name="Discount" Action="Microsoft.Test.OData.Services.ODataWCFService.Discount"/> <ActionImport Name="ResetBossEmail" Action="Microsoft.Test.OData.Services.ODataWCFService.ResetBossEmail"/> <ActionImport Name="ResetBossAddress" Action="Microsoft.Test.OData.Services.ODataWCFService.ResetBossAddress"/> <ActionImport Name="ResetDataSource" Action="Microsoft.Test.OData.Services.ODataWCFService.ResetDataSource"/> <FunctionImport Name="GetDefaultColor" Function="Microsoft.Test.OData.Services.ODataWCFService.GetDefaultColor" IncludeInServiceDocument="true"/> <FunctionImport Name="GetPerson" Function="Microsoft.Test.OData.Services.ODataWCFService.GetPerson" EntitySet="People" IncludeInServiceDocument="true"/> <FunctionImport Name="GetPerson2" Function="Microsoft.Test.OData.Services.ODataWCFService.GetPerson2" EntitySet="People" IncludeInServiceDocument="true"/> <FunctionImport Name="GetAllProducts" Function="Microsoft.Test.OData.Services.ODataWCFService.GetAllProducts" EntitySet="Products" IncludeInServiceDocument="true"/> <FunctionImport Name="GetBossEmails" Function="Microsoft.Test.OData.Services.ODataWCFService.GetBossEmails" IncludeInServiceDocument="true"/> 
<FunctionImport Name="GetProductsByAccessLevel" Function="Microsoft.Test.OData.Services.ODataWCFService.GetProductsByAccessLevel" IncludeInServiceDocument="true"/> <EntitySet Name="Accounts" EntityType="Microsoft.Test.OData.Services.ODataWCFService.Account"> <NavigationPropertyBinding Path="Microsoft.Test.OData.Services.ODataWCFService.PaymentInstrument/TheStoredPI" Target="StoredPIs"/> <NavigationPropertyBinding Path="AvailableSubscriptionTemplatess" Target="SubscriptionTemplates"/> <NavigationPropertyBinding Path="Microsoft.Test.OData.Services.ODataWCFService.PaymentInstrument/BackupStoredPI" Target="DefaultStoredPI"/> </EntitySet> <EntitySet Name="StoredPIs" EntityType="Microsoft.Test.OData.Services.ODataWCFService.StoredPI"/> <EntitySet Name="SubscriptionTemplates" EntityType="Microsoft.Test.OData.Services.ODataWCFService.Subscription"/> <Singleton Name="DefaultStoredPI" Type="Microsoft.Test.OData.Services.ODataWCFService.StoredPI"/> </EntityContainer> </Schema> </edmx:DataServices> </edmx:Edmx>
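A minimal sketch of how the CSDL document above could be inspected programmatically; this is not part of the test service itself, and it assumes the metadata has been saved locally as metadata.xml (an assumed filename):

```python
# Enumerate the entity types and entity sets declared in the CSDL metadata.
import xml.etree.ElementTree as ET

EDM = "{http://docs.oasis-open.org/odata/ns/edm}"   # OData EDM namespace

tree = ET.parse("metadata.xml")                     # assumed local copy of the document above
schema = tree.getroot().find(f".//{EDM}Schema")

# List every entity type together with its structural properties.
for entity in schema.findall(f"{EDM}EntityType"):
    props = [p.get("Name") for p in entity.findall(f"{EDM}Property")]
    print(entity.get("Name"), "->", ", ".join(props))

# List the entity sets exposed by the InMemoryEntities container.
container = schema.find(f"{EDM}EntityContainer")
for entity_set in container.findall(f"{EDM}EntitySet"):
    print("EntitySet:", entity_set.get("Name"), "of type", entity_set.get("EntityType"))
```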
rfc:functional-interfaces PHP RFC: Functional Interfaces Version: 0.1 Date: 2016-04-17 Author: krakjoe Status: Declined First Published at: http://wiki.php.net/rfc/functional-interfaces Introduction A functional interface is an interface which declares only one abstract method, a familiar example is Countable: <?php interface Countable { public function count ( ) ; } ?> Such interfaces are also known as SAM (Single Abstract Method) interfaces. While the language has a few examples of functional or SAM interfaces, the ecosystem has many more. Proposal A closure is able to provide a way to implement a functional interface: <?php interface IFoo { public function method ( ) : int ; } $cb = function ( ) implements IFoo : int { return 42 ; } ; There is enough information in the code above for the engine to reason that $cb should implement IFoo, and obviously be a Closure. The engine generates the appropriate class entry using the closure as the only public method, having easily determined the correct name for that method (there is, and can only be, one possible candidate). This is extremely powerful, because Closures have lexical scope, and so unlike an anonymous class, can access the private properties and other symbols where the Closure is declared. The code below is not good code, it's not the most efficient version of the code that could exist. It serves to show the difference between implementing a functional interface using a closure, and provides a comparison with anonymous classes: Functional Interfaces - Counter Example <?php class Foo { private $bar = [ ] ; public function fill ( $limit = 100 ) { for ( $i = 0 ; $i < $limit ; $i ++ ) { $this -> bar [ ] = mt_rand ( $i , $limit ) ; } } public function getEvenCounter ( ) : Countable { return function ( ) implements Countable { $counter = 0 ; foreach ( $this -> bar as $value ) { if ( $value % 2 === 0 ) $counter ++; } return $counter ; } ; } public function getOddCounter ( ) : Countable { return function ( ) implements Countable { $counter = 0 ; foreach ( $this -> bar as $value ) { if ( $value % 2 !== 0 ) { $counter ++; } } return $counter ; } ; } } $foo = new Foo ( ) ; $even = $foo -> getEvenCounter ( ) ; $odd = $foo -> getOddCounter ( ) ; $it = 0 ; while ( ++ $it < 10 ) { $foo -> fill ( 50 ) ; var_dump ( count ( $even ) , count ( $odd ) ) ; } ?> The same code using anonymous classes: <?php class Foo { private $bar = [ ] ; public function fill ( $limit = 100 ) { for ( $i = 0 ; $i < $limit ; $i ++ ) { $this -> bar [ ] = mt_rand ( $i , $limit ) ; } } public function getEvenCounter ( ) : Countable { return new class ( $this -> bar ) implements Countable { public function __construct ( & $bar ) { $this -> bar =& $bar ; } public function count ( ) { $counter = 0 ; foreach ( $this -> bar as $value ) { if ( $value % 2 === 0 ) $counter ++; } return $counter ; } private $bar ; } ; } public function getOddCounter ( ) : Countable { return new class ( $this -> bar ) implements Countable { public function __construct ( & $bar ) { $this -> bar =& $bar ; } public function count ( ) { $counter = 0 ; foreach ( $this -> bar as $value ) { if ( $value % 2 !== 0 ) { $counter ++; } } return $counter ; } private $bar ; } ; } } $foo = new Foo ( ) ; $it = 0 ; $even = $foo -> getEvenCounter ( ) ; $odd = $foo -> getOddCounter ( ) ; while ( ++ $it < 10 ) { $foo -> fill ( 50 ) ; var_dump ( count ( $even ) , count ( $odd ) ) ; } ?> The anonymous class version: must use referencing, or fetch a new Countable object on each iteration, is extremely verbose must set 
dependencies in the constructor, and has no support for lexical scope. The functional interface version: is sparse; is easier to reason about; does not require the use of references; and supports lexical scope. Functional interface support does not change the definition of an interface, and only reuses the definition of a Closure. Receiving and Invoking Functional Interfaces The implementation of a functional interface is an instance of Closure and of the interface it was declared to implement; it has the behaviour of both. The implementation would have the following formal definition: final class {Interface}\0{closure} extends Closure implements Interface Such that the following is always true: $instance instanceof Interface && $instance instanceof Closure Functional Interfaces - Receiving and Invoking <?php interface ILog { public function log ( string $message , ... $args ) : void ; } class Foo { public function __construct ( ILog $logger ) { $this -> logger = $logger ; } public function thing ( ) { $this -> logger -> log ( "thing" ) ; } } $logger = function ( string $message , ... $args ) implements ILog : void { printf ( " {$message} " , ... $args ) ; } ; $foo = new Foo ( $logger ) ; $foo -> thing ( ) ; $logger ( "next thing" ) ; This means that the receiver (Foo::__construct) can receive, and the consumer (Foo::thing) can invoke, the interface as if it were a normal object, while the creator of $logger, who must know it is a Closure, can still invoke it as a Closure. Both methods of invocation are valid in both receiving and declaring contexts. Error Conditions The following conditions will cause compiler errors: Functional Interfaces - Compiler Error 1 <?php interface IFoo { public function method1 ( ) ; public function method2 ( ) ; } function ( ) implements IFoo { } ; ?> Fatal error: cannot implement non functional interface IFoo in /in/65W6i on line 7 Reason: IFoo cannot be considered a functional interface, because it contains more than one abstract method.
Functional Interfaces - Compiler Error 2 <?php interface IFoo { public function foo ( ) ; } interface IBar extends IFoo { public function bar ( ) ; } function ( ) implements IBar { } ; Fatal error: cannot implement non functional interface IBar in /in/qLbPv on line 9 Reason: Although IBar only declares one abstract method, it extends IFoo and so contains two abstract methods. Functional Interfaces - Compiler Error 3 <?php abstract class Foo { abstract public function bar ( ) ; } function ( ) implements Foo { } ; Fatal error: cannot implement non interface Foo in /in/WT98N on line 6 Reason: Although Foo contains only one abstract method, it is not an interface. Functional Interfaces - Compiler Error 4 <?php new class { public function __construct ( ) { function ( ) implements self { } ; } } ; Fatal error: functional interface cannot implement self in /in/MMuD0 on line 4 Reason: Although self is a valid scope in that context, self, parent, and static can never be interfaces. Functional Interfaces - Compiler Error 5 <?php interface IFoo { public static function method ( ) ; } function ( ) implements IFoo { } ; Fatal error: cannot create non static implementation of static functional interface IFoo in /in/2AiUV on line 6 Reason: The compiler would otherwise raise less specific errors later on. Functional Interfaces - Compiler Error 6 <?php interface IFoo { public function method ( ) ; } static function ( ) implements IFoo { } ; Fatal error: cannot create static implementation of non static functional interface IFoo in /in/o9gIB on line 6 Reason: The compiler would otherwise raise less specific errors later on. Syntax Choices Interface and return type reversed: function (string $arg) use($thing) : int implements Interface It looks as if int is somehow implementing Interface. Interface before arguments: function implements Interface (string $arg) use($thing) : int {} The arguments list looks as if it somehow applies to Interface. Interface after arguments and before use: function (string $arg) implements Interface use($thing) : int {} This looks as if Interface somehow uses $thing. Vote Voting started on May 15th and ended May 29th, 2016. Accept functional interfaces? (2/3+1 majority required) Voters: ajf (ajf), bishop (bishop), bwoebi (bwoebi), colinodell (colinodell), danack (danack), demon (demon), dm (dm), dmitry (dmitry), galvao (galvao), guilhermeblanco (guilhermeblanco), hywan (hywan), jhdxr (jhdxr), kguest (kguest), klaussilveira (klaussilveira), krakjoe (krakjoe), levim (levim), lstrojny (lstrojny), malukenho (malukenho), marcio (marcio), mariano (mariano), mike (mike), nikic (nikic), ocramius (ocramius), pierrick (pierrick), pollita (pollita), santiagolizardo (santiagolizardo), svpernova09 (svpernova09), zeev (zeev), zimt (zimt). Final result: 7 in favour, 22 against. This poll has been closed. Backward Incompatible Changes N/A Proposed PHP Version(s) 7.1 RFC Impact To Existing Extensions The API to create functional interface implementations is exported by Zend, and is part of the already exported Closure API. To Opcache Opcache may need trivial patching. Future Scope When the concept of functional interfaces is implemented, it may be worth discussing the coercion, or explicit cast, of callables. Proposed Voting Choices 2/3 majority required, simple yes/no vote proposed. Patches and Tests 3v4l has been kind enough to provide testing facilities for this patch. References
# Event 8 - DefaultLayoutManager_FindMatchingLayout ###### Version: 0 ## Description None ## Data Dictionary |Standard Name|Field Name|Type|Description|Sample Value| |---|---|---|---|---| |TBD|layoutProviderName|UnicodeString|None|`None`| ## Tags * etw_level_Informational * etw_opcode_Stop * etw_task_DefaultLayoutManager_FindMatchingLayout
The stability of amitriptyline N-oxide and clozapine N-oxide on treated and untreated dry blood spot cards. Procedures for drug monitoring based on Dried Blood Spot (DBS) sampling are gaining acceptance for an increasing number of clinical and preclinical applications, where ease of use, small sample requirement, and improved sample stability have been shown to offer advantages over blood tube sampling. However, to date, the vast majority of this work has described the analysis of well-characterized drugs. Using amitriptyline, clozapine, and their potentially labile N-oxide metabolites as model compounds, we consider the merits of using DBS for discovery pharmacokinetic (PK) studies, where the metabolic fate of test compounds is often unknown. Both N-oxide metabolites reverted to the parent compound under standard drying (2 hr) and extraction conditions. Card type significantly affected the outcome, with 14% and 22% degradation occurring for clozapine N-oxide and amitriptyline N-oxide, respectively, on a brand of untreated DBS cards, compared with 59% and 88% on a brand of treated DBS cards. Enrichment of the parent compound ex vivo leads to overestimation of the circulating blood concentration and inaccurate determination of the PK profile.
Malignant glioma is the most common and most lethal primary brain tumor. Although improvements in imaging technology and the surgical removal of tumors have significantly reduced the mortality of malignant glioma, how to enhance sensitivity to radiation and chemotherapy and reduce the risk of tumor invasion and metastasis remains a major challenge. The LIM domain-containing TRIP6 (Thyroid Hormone Receptor-Interacting Protein 6) is a focal adhesion molecule involved in cell motility and transcriptional control. Through multidomain-mediated protein-protein interactions, TRIP6 binds to several components of focal complexes and promotes ERK activation, Rho signaling and cell migration in a c-Src-dependent manner. In addition, TRIP6 is capable of shuttling to the nucleus to serve as a coactivator of NF-κB, AP-1 and E2F1 in the transcriptional regulation of genes involved in anti-apoptosis and cell growth. In this proposal, we provide novel data showing that inhibition of TRIP6 expression reduces cell migration, enhances chemosensitivity and prolongs the G1 phase of the cell cycle in glioblastoma multiforme cells, suggesting a critical role for TRIP6 in malignant glioma progression. As TRIP6 mRNA and protein levels are elevated in glioblastoma multiforme, and this overexpression correlates with disease progression, TRIP6 may be a novel therapeutic target for malignant glioma treatment. To investigate whether TRIP6 can serve as a molecular marker of GBM progression, and to determine whether targeting TRIP6 can enhance chemosensitivity and reduce the risk of GBM tumor invasion and metastasis, Aim 1 will determine the roles of TRIP6 in chemoresistance, cell cycle progression and cell migration and investigate the underlying molecular mechanisms in malignant glioma cells. Aim 2 will study the biological roles of TRIP6 in GBM tumor proliferation, invasion and metastasis using a xenograft animal model and determine whether inhibition of TRIP6 expression can enhance chemosensitivity in vivo. The understanding gained from this study will help to design more effective therapies for this devastating disease. PUBLIC HEALTH RELEVANCE: The LIM domain-containing TRIP6 is overexpressed in glioblastoma multiforme and plays a critical role in glioma tumor migration, chemoresistance and proliferation. The goal of this project is to understand the underlying molecular mechanisms in cultured glioma cells and in an animal model in order to translate this understanding into more effective therapies for this devastating disease.
The Vagrant open-source project has morphed into a startup, backed by Hashicorp, a new company that will further build out the tool designed to manage the complexities of modern development within a virtual environment.
NAFTA talks with US 'very far' along: Mexico's Guajardo WASHINGTON - US and Mexican negotiators will continue meeting through the weekend to revamp the North American Free Trade Agreement and Canada is set to rejoin the talks as soon as they are called, Mexico's Economy Minister Ildefonso Guajardo said Friday. Mexico's Economy Minister Ildefonso Guajardo (R) and US Trade Representative Robert Lighthizer have been meeting for five weeks on revising NAFTA Guajardo and Mexico's Foreign Minister Luis Videgaray have been shuttling back and forth to Washington for more than a month for meetings with US Trade Representative Robert Lighthizer to try to iron out bilateral issues, such as rules for the auto market, before the end of August. But Guajardo told reporters that Canada's Foreign Minister Chrystia Freeland was ready at any point to proceed with the NAFTA negotiations. "I have confirmation that she would be available the moment we believe we can enter into the trilateral" discussions, he said. Freeland said earlier in the week that she was encouraged by the progress between Washington and Mexico City and would rejoin the talks once the bilateral discussions concluded. However, her office announced Friday she would travel to Europe August 26-30 for visits in Germany, France and Ukraine. The Mexican official declined to go into detail on the topics remaining with the United States but said the agreement could happen at any time. "The idea is that we are staying because we know there are issues to resolve," he said. "And we have to make sure that everybody feels comfortable with this agreement." Earlier Friday, Guajardo said officials were "very far" along in efforts to deal with the US-Mexico issues but added "there are trilateral issues that have to be solved in a trilateral context." A contentious proposal by the United States -- which would require the nearly 25-year-old trade pact be reauthorized every five years -- is one that must include all three partners, Guajardo said. Jesus Seade, an economic advisor to Mexico's incoming president, Andres Manuel Lopez Obrador, has been participating in portions of the NAFTA talks and said the sunset clause "is going out," according to press reports from Mexico City. Guajardo declined to comment on Seade's remarks but said the teams were working together on behalf of Mexico. A senior Canadian official told AFP on Thursday there had been "no indication of flexibility from the US on this issue." The three countries have been negotiating for a year to salvage the trade pact that President Donald Trump says has been a "disaster" for the United States.
[Sex hormone regulation of progesterone and estradiol receptors in the cytosol of human endometrium in normal and pathologic pregnancy]. Hormonal control of progesterone and estradiol receptors was studied in the cytosol of short-term cultures of human endometrium from normal and non-developing pregnancies. Endometrium was cultivated in the presence of estradiol or progesterone for 16 hours. When normal endometrium was cultivated with estradiol, the content of estradiol receptors increased 4-fold in comparison with unstimulated tissue, and that of progesterone receptors 3-fold. When normal endometrium was cultivated with progesterone, the content of estradiol receptors rose 4- to 5-fold, and that of progesterone receptors 3-fold. When pathological endometrium was cultivated with estradiol or progesterone, the number of estradiol receptors was only doubled, and the number of progesterone receptors increased only 1.5-fold, pointing to a diminished sensitivity of pathological endometrium to the regulating action of sex hormones.
Faced with a belief, shared by many, that justice has not been served — and a moment that could seem to exemplify centuries of continuing injustice — Justin Trudeau and his government obviously feel the need to say something. But in the wake of a jury's ruling that Gerald Stanley did not commit a criminal offence in the death of Colten Boushie, the words haven't come easily. On Monday afternoon, for instance, there were loud grumbles in the House of Commons when the prime minister prefaced his response to a question about the case with the proviso that "it would be completely inappropriate to comment on the specifics of this case." "We understand," he said in the next breath, "that there are systemic issues in our criminal justice system that we must address." The grumbles spoke to a suspicion in some quarters that the Liberals already have inappropriately commented on a judicial proceeding — a case that might still be appealed. Prime Minister Justin Trudeau comments after a jury found Gerald Stanley not guilty in the shooting death of Colten Boushie. Wilson-Raybould's promise to 'do better' Trudeau's first response on Friday night was to acknowledge the personal loss and send his "love" to the Boushie family. Justice Minister Jody Wilson-Raybould offered empathy and a vague opinion. "As a country we can and must do better," she wrote. "I am committed to working everyday to ensure justice for all Canadians." "Do better" suggests something or someone has failed. And so, on Saturday morning, Trudeau was asked whether he and the attorney general were questioning the judicial process. "I'm not going to comment on the process that led to this point today," Trudeau said. "But I am going to say that we have come to this point as a country far too many times. Indigenous people across this country are angry, they're heartbroken. And I know Indigenous and non-Indigenous Canadians alike know that we have to do better." People attend a vigil in Halifax for Colten Boushie. (Shaina Luck/CBC) On Monday, Wilson-Raybould pleaded that she had been speaking "about the justice system generally." Maybe she was — and maybe she thought she was staying on the right side of the line. But at the very least, she could be accused of cutting it rather close. A government that questions a particular verdict, or suggests the legal system itself is deficient, can expect to be accused of weakening the reputation and independence of the judicial process itself. But the moment also seems to demand more than a no-comment. So it might be a moment to look back at Barack Obama's response to a not-guilty verdict in the death of Trayvon Martin, the black 17-year-old who was fatally shot in Florida in 2012. Family of Colten Boushie are in Ottawa and met Monday with ministers Jane Philpott and Carolyn Bennett. How Obama did it On the day that a jury acquitted George Zimmerman, the U.S. president released a 10-sentence statement. Five days later, he addressed reporters at the White House, speaking for nearly 20 minutes. In the statement, Obama described Martin's death as a tragedy for his family and for the nation. At the White House, he began his remarks with thoughts for Martin's family. Then he went further, addressing the "context" of that death. "I think it's important to recognize that the African American community is looking at this issue through a set of experiences and a history that doesn't go away," he said.
"The African American community is also knowledgeable that there is a history of racial disparities in the application of our criminal laws." Though the first black president was uniquely qualified to comment on this issue, he was speaking about a reality that every American should have been able to understand. The same could be said of the Indigenous experience in the justice system and Canadians. Trudeau's remarks on the weekend also included a nod to the context — "that we have come to this point as a country far too many times." He went deeper into it in the House on Monday: "When Indigenous adults make up three per cent of our population but 26 per cent of our incarcerated population, there is a problem," the prime minister said. "When Indigenous Canadians are significantly under-represented on juries and in jury selection pools, we have a problem. "We have much we need to do together to fix the system. In the spirit of reconciliation, that is exactly what we are going to be doing." In 2012, Obama asked Americans, "Where do we take this?" He spoke about reducing mistrust in the justice system, repealing unhelpful laws, improving the lives of young African American men. He talked about the need for Americans to work within themselves and their communities to confront racism. That is where the Trudeau government seems to be moving now. After Canadian politicians weigh in on the Colten Boushie verdict, Indigenous leaders call for real change to the justice system. National Chief of the Assembly of First Nations Perry Bellegarde speaks with The Weekly’s Wendy Mesley about what he says is a ‘wrong’ verdict 5:48 Explaining the anger Obama made clear at the outset of his remarks that he was not quibbling with how the judicial system had functioned in a particular case. Trudeau might think about trying to follow suit — even if the racial make-up of the jury in the Stanley verdict is a prominent point of contention. Having set aside the matter of the trial itself, Obama was able to acknowledge and explain the anger that people were feeling, to attempt to channel that frustration toward change. Of course, he wasn't trying to do it in the span of a tweet, or in the 35 seconds afforded to a response in question period. Boushie's family members met with two cabinet ministers in Ottawa on Monday. They will meet with two more ministers on Tuesday and sit down with the prime minister. Such meetings might put pressure on the Liberals to act, both to respond to the current anger and to make good on their stated commitment to reconciliation. But in the days ahead, if the country continues to strain under the weight of the Stanley verdict, if public opinion polarizes, there could be also be new pressure on Trudeau to say even more.
Relationship between patient income level and mitral valve repair utilization. The superiority of mitral valve (MV) repair is well established with respect to long-term survival, preservation of ventricular function, and valve-related complications. The relationship between patient income level and the selection of MV procedure (repair versus replacement) has not been studied. The 2005 to 2007 Nationwide Inpatient Sample database was searched for patients ≥ 30 years old with MV repair or replacement; patients with ischemic and congenital MV disease were excluded. Patients were stratified into quartiles according to income level (quartile 1, lowest; quartile 4, highest). We used univariate and multivariate models to compare patients with respect to baseline characteristics, selection of MV procedure, and hospital mortality. The preoperative profiles of the income quartiles differed significantly, with more risk factors occurring in the lower income quartiles. Unadjusted hospital mortality decreased with increasing income quartile. The percentage of patients receiving MV repair increased with increasing income (35.6%, 39.6%, 48.2%, and 55.8% for quartiles 1, 2, 3, and 4, respectively; P = .0001). Following adjustment for age, race, sex, urban residency, admission status, primary payer, Charlson comorbidity index, and hospital location and teaching status, the income quartiles had similar hospital death rates, whereas the highly significant relationship between valve repair and income level persisted (P = .0008). Significant disparity exists among patients in the different income quartiles with respect to the likelihood of receiving MV repair. MV repair is performed less frequently in patients with lower incomes, even after adjustment for differences in baseline characteristics. The higher unadjusted mortality rate for less affluent patients appears mostly related to their worse preoperative profiles.
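The abstract describes estimating the association between income quartile and receipt of MV repair after adjusting for baseline covariates. The snippet below is a purely illustrative sketch of that kind of logistic-regression adjustment; the column names and the synthetic data are assumptions made here for illustration and are not the Nationwide Inpatient Sample variables or results:

```python
# Illustrative covariate-adjusted comparison of repair rates across income quartiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "repair": rng.integers(0, 2, n),           # 1 = MV repair, 0 = replacement (synthetic)
    "income_quartile": rng.integers(1, 5, n),  # 1 = lowest ... 4 = highest (synthetic)
    "age": rng.normal(65, 10, n),
    "female": rng.integers(0, 2, n),
    "charlson": rng.poisson(1.5, n),           # stand-in for the comorbidity index
})

# Odds of receiving repair by income quartile, adjusted for the other covariates.
model = smf.logit("repair ~ C(income_quartile) + age + female + charlson",
                  data=df).fit(disp=0)
print(model.summary())
```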
--- abstract: 'For the study of Planck-scale modifications of the energy-momentum dispersion relation, which had been previously focused on the implications for ultrarelativistic (ultrafast) particles, we consider the possible role of experiments involving nonrelativistic particles, and particularly atoms. We extend a recent result establishing that measurements of “atom-recoil frequency” can provide insight that is valuable for some theoretical models. And from a broader perspective we analyze the complementarity of the nonrelativistic and the ultrarelativistic regimes in this research area.' author: - Flavio MERCATI - Diego MAZÓN - 'Giovanni AMELINO-CAMELIA' - José Manuel CARMONA - José Luis CORTÉS - Javier INDURÁIN - Claus LÄMMERZAHL - 'Guglielmo M. TINO' title: 'Probing the quantum-gravity realm with slow atoms' --- Introduction ============ Over the last decade there has been growing interest in the possibility to investigate experimentally some candidate effects of quantum gravity. The development of this “quantum-gravity phenomenology" [@gacLRR] of course focuses on rare contexts in which the minute effects induced by ultra-high “Planck scale" $M_P ( \equiv \sqrt{ \hbar c / G} \simeq 1.2 \cdot 10^{28} ~ \text{eV})$ are not completely negligible. Several contexts of this sort have been found particularly in the study of quantum-gravity/quantum-spacetime effects for the propagation of ultrarelativistic/ultrafast particles (see, [*e.g.*]{}, Refs. [@grbgac; @astroSchaefer; @astroBiller; @gacNature1999; @urrutiaPRL; @jaconature; @Gaclaem; @PiranNeutriNat; @hessPRL]), and often specifically for cases in which the ultrarelativistic on-shell condition[^1], $E \simeq p + m^2/(2p)$, is modified by Planck-scale effects. In the recent Ref. [@gacPRL2009] some of us observed that experiments involving cold (slow, nonrelativistic) atoms, and particularly measurements of the atom-recoil frequency, can provide valuable insight on certain types of modifications of the dispersion relation which had been previously considered in quantum-gravity literature. We here extend the scopes of the analysis briefly reported in Ref. [@gacPRL2009], also adopting a style of presentation that allows to comment in more detail the derivation of the result. Concerning the conceptual perspective that guides this recent research proposal, we here expose some previously unnoticed aspects of complementarity between the nonrelativistic and the ultrarelativistic regimes in the study of Planck-scale modifications of the dispersion relation. And we offer several observations on how the insight gained from studies of slow atoms might translate into limits of different strength depending on some details of the overall framework within which the modifications of the dispersion relation are introduced. We also report a preliminary exploration of the relativistic issues involved in these studies, which have been already well appreciated in the ultrarelativistic regime but appear to provide novel challenges when the focus is instead on the nonrelativistic regime. 
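For orientation, the two limiting regimes referred to here both follow from expanding the unmodified special-relativistic relation $E = \sqrt{p^2 + m^2}$: $$E \simeq p + \frac{m^2}{2p} \quad (p \gg m)~, \qquad \qquad E \simeq m + \frac{p^2}{2m} \quad (p \ll m)~,$$ so that the Planck-scale modifications considered in the following can be viewed as corrections to one or the other of these expansions.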
Complementarity of nonrelativistic and ultrarelativistic regimes {#COMPLEMENTARITY} ================================================================ Results in support of the possibility of modifications of the energy/momentum (“dispersion") relation have been reported in studies of several approaches to the quantum-gravity problem, and perhaps most notably in analyses inspired by Loop Quantum Gravity [@urrutiaPRL; @LQGDispRel], and in studies that assumed a “noncommutativity" of spacetime coordinates [@gacmajid; @kowaPLBcosmo; @Orfeupion]. The analyses of these quantum-gravity approaches that provide encouragement for the presence of corrections to the dispersion relation have become increasingly robust over the last decade [@LQGDispRel; @smolinbook; @gacmajid; @kowaPLBcosmo; @Orfeupion], but in the majority of cases they are still unable to establish robustly the functional dependence of the correction on momentum. This has led to the proposal that perhaps on this occasion experiments might take the lead by establishing some experimental facts (at least in the form of constraints on the form of the dispersion relation) that may provide guidance for the ongoing investigations on the theory side. In light of these considerations the majority of phenomenological studies of Planck-scale corrections to the dispersion relation have assumed a general [*ansatz*]{}, $$\begin{aligned} E^2 = p^2 + m^2 + \Delta_{QG}(p,m,M_P)~, \label{generalansatz}\end{aligned}$$ denoting with $E$ the energy of the particle and with $\Delta_{QG}$ a model-dependent function of the Planck mass $M_P$ and of the spatial momentum $p$ and of the mass $m$ of the particle. Different models do give (more or less detailed) guidance on the form of $\Delta_{QG}$, and we shall consider this below, but even at a model-independent level a few characteristics can be assumed with reasonable robustness[^2]. As most authors in the field, we shall here focus our analysis on cases in which the mass $m$ still is the rest-energy and the dispersion relation regains its ordinary special-relativistic form in the limit where the Planck scale is removed (${M_P} \rightarrow \infty$): $$\Delta_{QG}(p,m,M_P) \xrightarrow[p \to 0]{} 0 ~, \qquad \Delta_{QG}(p,m,M_P) \xrightarrow[M_P \to \infty]{} 0~. \label{PropertiesOfDeltaQG}$$ And these are most fruitfully exploited, since the relevant phenomenology clearly can at best hope to gain insight on the leading terms of a small-$M_P^{-1}$ expansion, within a power-series expansion, $$E^2 = p^2 + m^2 + \frac{1}{M_P} \Delta_{QG}^{(1)}(p,m) + \frac{1}{M_P^2} \Delta_{QG}^{(2)}(p,m) + \dots ~, \label{DispRelPrimoOrdineInMp}$$ where the terms in the power series are subject to the condition $\left. \Delta_{QG}^{(1)}(p,m)\right|_{p=0} =0 =\left. \Delta_{QG}^{(2)}(p,m)\right|_{p=0}$. This past decade of vigorous investigations of these modifications of the dispersion relation focused primarily (but not exclusively) on terms linear in $M_P^{-1}$ and reached its most noteworthy results in analyses of observational astrophysics data, which of course concern the “ultrarelativistic" ($p\gg m$) regime of particle kinematics [@grbgac; @astroSchaefer; @astroBiller; @astroKifune; @gacQM100; @jaconature]. 
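It is perhaps worth making explicit the elementary step that connects the expansion (\[DispRelPrimoOrdineInMp\]) to the regime-specific parametrizations used in the following: to first order in $M_P^{-1}$ one simply has $$E \simeq \sqrt{p^2 + m^2} + \frac{\Delta_{QG}^{(1)}(p,m)}{2 M_P \sqrt{p^2 + m^2}}~,$$ so that the same correction function $\Delta_{QG}^{(1)}$ is weighted by $\sim \left(2 M_P \, p\right)^{-1}$ in the ultrarelativistic regime and by $\sim \left(2 M_P \, m\right)^{-1}$ in the nonrelativistic regime.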
For these applications the function $\Delta_{QG}^{(1)}(p,m)$ can of course be usefully parametrized in such a way that the relation between energy and spatial momentum takes the following form: $$E \simeq p + \frac{m^2}{2p} + \frac{1}{2M_P}\left( \eta_1 \, p^2 + \eta_2 \, m \, p + \eta_3 \, m^2 \right) ~, \label{DispRelRelativistica}$$ where, considering the large value of $M_P$, we only included correction terms that are linear in $1/M_P$, and, considering that this formula concerns the ultrarelativistic regime of $p \gg m$, the labels on the parameters $\eta_1 , \eta_2 , \eta_3$ reflect the fact that in that regime $p^2/M_P$ is the leading correction, $m p/M_P$ is next-to-leading, and so on. Evidence that at least some of these $\eta_1$, $\eta_2$, $\eta_3$ parameters have nonzero values is indeed found in studies inspired by the Loop-Quantum-Gravity approach and by the approach based on spacetime noncommutativity, and most importantly some of these studies [@urrutiaPRL; @LQGDispRel; @gacmajid; @kowaPLBcosmo; @Orfeupion] provide encouragement for the presence of the strongest imaginable ultrarelativistic correction, the leading-order term $\eta_1 \, p^2/(2M_P)$. Unfortunately, as usual in quantum-gravity research, even the most optimistic estimates represent a gigantic challenge from the perspective of phenomenology. This is because, if the Planck scale is indeed roughly the characteristic scale of quantum-gravity effects then correspondingly parameters such as $\eta_1$, $\eta_2$, $\eta_3$ should take (positive or negative) values that are within no more than 1 or 2 orders of magnitude of $1$. And this in turn implies that, for example, all effects induced by Eq. (\[DispRelRelativistica\]) could only affect the running of our present particle-physics colliders at the level [@gacLRR] of at best 1 part in $10^{14}$. In recent years certain semi-heuristic renormalization-group arguments (see, [*e.g.*]{}, Refs. [@gacLRR; @wilcGUTEP] and references therein), have encouraged the intuition that the quantum-gravity scale might be plausibly even 3 orders of magnitude smaller than the Planck scale (so that it could coincide [@wilcGUTEP] with the “grand unification scale" which appears to play a role in particle physics). But even assuming for $\eta_1$, $\eta_2$, $\eta_3$ values plausibly as “high" as $10^3$ is not enough help at traditional high-energy particle-collider experiments. It was therefore rather exciting for many quantum-gravity researchers when it started to emerge that some observations in astrophysics could be sensitive to manifestations of the parameter $\eta_1$ all the way down to $|\eta_1| \sim 1$ and even below [@grbgac; @astroSchaefer; @astroBiller; @astroKifune; @gacQM100; @jaconature], thereby providing for that parameter the ability to explore the full range of values that could be motivated from a quantum-gravity perspective. These studies are presently being conducted at the Fermi Space Telescope [@fermiSCIENCE; @ellisPLB2009; @gacSMOLINprd2009; @fermiNATURE; @gacNATURE2009], and other astrophysics observatories. In the recent Ref. [@gacPRL2009] some of us observed that it would be very valuable to combine to these astrophysics studies of the ultrarelativistic regime of the dispersion relation also a complementary phenomenology program of investigation of the nonrelativistic regime. 
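Before turning to the nonrelativistic regime it is perhaps useful to make these orders of magnitude explicit with a rough numerical estimate. The short sketch below is purely illustrative: the momenta, the propagation time and the values of $\eta_1$ are order-of-magnitude assumptions chosen only to reproduce the figures quoted above, and are not inputs of any of the cited analyses.

```python
# Rough orders of magnitude for the eta_1 correction (illustrative values only; eV units, c = 1)
M_P = 1.2e28  # Planck scale in eV

def fractional_correction(p, eta1=1.0):
    """Fractional shift of the energy induced by the term eta1 * p^2 / (2 M_P)."""
    return eta1 * p / (2.0 * M_P)

# (i) Collider regime: ~10 TeV momenta and a generous |eta1| ~ 100
#     give a fractional effect of roughly 1 part in 10^14.
print(f"collider: {fractional_correction(1.0e13, eta1=100):.1e}")

# (ii) Astrophysics regime: for a ~30 GeV photon the instantaneous fractional effect
#     is tiny, but the associated speed modification, of order eta1 * E / M_P,
#     accumulates over cosmological propagation times (here ~1e17 s, an assumed ~Gpc source),
#     which is what gives time-of-flight studies sensitivity down to |eta1| ~ 1.
E_photon = 3.0e10   # eV
T_prop = 1.0e17     # seconds
print(f"accumulated time delay for eta1 = 1: {E_photon / M_P * T_prop:.2f} s")  # ~0.25 s
```

It is the availability of this accumulation over cosmological propagation times, with no counterpart at particle colliders, that gives the astrophysics studies their reach; the cold-atom strategy discussed in the following Sections relies instead on a different, complementary amplification mechanism.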
And in the regime of $p \ll m$ the 3 largest contributions to $\Delta_{QG}^{(1)}(p,m)$ have behavior[^3] $m^2 p$, $m p^2$ and $p^3$, allowing to cast the relation between energy and spatial momentum in the following form: $$E \simeq m + \frac{p^2}{2m} + \frac{1}{2M_P}\left( \xi_1 m p + \xi_2 p^2 + \xi_3 \frac{p^3}{m} \right) \label{DispRelNonRelativistica}~,$$ where, again, $\xi_1$, $\xi_2$, $\xi_3$ are dimensionless parameters. Evidence that at least some of these dimensionless parameters $\xi_1$, $\xi_2$, $\xi_3$ should be non-zero has been found for example in the much-studied framework introduced in Refs. [@urrutiaPRL; @urrutiaPRD], which was inspired by Loop Quantum Gravity, and produces a term linear in $p$ in the nonrelativistic limit (the effect here parametrized by $\xi_1$). And for the purposes of this Section, which we are devoting to the complementarity of the nonrelativistic and ultrarelativistic regimes of the dispersion relation, it is particularly insightful to consider two of the most studied scenarios that have emerged in the literature on noncommutative-geometry-inspired deformations of Poincaré symmetries. These are the scenarios proposed in Refs. [@gacIJMP2002vD11; @gacdsrPLB2001] and in Ref. [@leedsrPRL], which respectively produce the following proposals for the exact form of the dispersion relation: $$\left(\frac{2 M_P}{\eta}\right)^2 \sinh^2 \left(\frac{\eta E}{2 M_P} \right) = \left(\frac{2 M_P}{\eta}\right)^2 \sinh^2 \left(\frac{\eta m}{2 M_P} \right) + e^{-\eta \frac{E}{M_P}} p^2~, \label{DSRs1}$$ and $$\frac{m^2}{(1- \eta\frac{m}{M_P} )^2} = \frac{E^2-p^2}{(1-\eta\frac{E}{M_P})^2} ~, \label{DSRs2}$$ Both of these proposals have the same description in the nonrelativistic regime $$E \simeq m + \frac{p^2}{2m} - \eta \frac{p^2}{2 M_P} ~,$$ [*i.e.*]{} the type of correction term in the nonrelativistic regime that we are here parameterizing with $\xi_2$. But these proposals have significantly different behavior in the ultrarelativistic regime. From Eq. (\[DSRs1\]) in the ultra-relativistic regime one finds $$E \simeq p + \frac{m^2}{2p} - \eta \frac{p^2}{2M_P} ~,$$ whereas from Eq. (\[DSRs2\]) in the ultra-relativistic regime one finds $$E \simeq p + \frac{m^2}{2p} - \eta \frac{m^2}{M_P} ~.$$ Therefore the example of these two much studied deformed-symmetry proposals is such that by focusing exclusively on the nonrelativistic regime one could not (not at the leading order at least) distinguish between them, but one could discriminate between the two proposals using data on the ultrarelativistic regime. The opposite is of course also possible: different candidate dispersion relations with the same ultrarelativistic limit, but with different leading-order form in the nonrelativistic regime. And in general it would be clearly very valuable to constrain the form of the dispersion relation both using experimental information on the leading nonrelativistic behavior and experimental information on the leading ultrarelativistic behavior. Probing the nonrelativistic regime with cold atoms {#nonrUR} ================================================== Our main objective here is to show that cold-atom experiments can be valuable for the study of Planck-scale effects. We illustrate this point mainly by considering the possibility, already preliminarily characterized in Ref. 
[@gacPRL2009], to use cold-atom studies for the derivation of meaningful bounds on the parameters $\xi_1$ and $\xi_2$, [*i.e.*]{} the leading and next-to-leading terms in (\[DispRelNonRelativistica\]) for the nonrelativistic limit: $$E \simeq m + \frac{p^2}{2m} + \frac{1}{2M_P}\left( \xi_1 m p + \xi_2 p^2 \right) ~. \label{DispRelNonRelativisticaJOC}$$ In this section we work exclusively from a laboratory-frame perspective, as done in Ref. [@gacPRL2009], but, as for most relativistic studies, it is valuable to also perform the analysis in one or more frames that are boosted with respect to the laboratory frame, and we shall discuss this in Sec. \[BOOSTEDFRAME\]. The measurement strategy proposed in Ref. [@gacPRL2009] is applicable to measurements of the “recoil frequency" of atoms with experimental setups involving one or more “two-photon Raman transitions" [@Kasevich91b; @Peters99; @Wicht02]. Let us initially set aside the possibility of Planck-scale effects, and discuss the recoil of an atom in a two-photon Raman transition from the perspective adopted in Ref. [@Wicht02], which provides a convenient starting point for the Planck-scale generalization we shall discuss later. One can impart momentum to an atom through a process involving absorption of a photon of frequency $\nu$ and (stimulated [@Kasevich91b; @Peters99; @Wicht02]) emission, in the opposite direction, of a photon of frequency $\nu'$. The frequency $\nu$ is computed taking into account a resonance frequency $\nu_*$ of the atom and the momentum the atom acquires, recoiling upon absorption of the photon: $ \nu \simeq \nu_* + ( h \nu_* + p)^2/(2 m) - p^2/(2m)$, where $m$ is the mass of the atom ([*e.g.*]{} $m_{Cs} \simeq 124~\text{GeV}$ for Caesium), and $p$ its initial momentum. The emission of the photon of frequency $\nu'$ must be such to de-excite[^4] the atom and impart to it additional momentum: $\nu' + (2 h \nu_* + p)^2/(2 m) \simeq \nu_* + (h \nu_*+p)^2/(2 m)$. Through this analysis one establishes that by measuring $\Delta \nu \equiv \nu - \nu'$, in cases (not uncommon) where $\nu_*$ and $p$ can be accurately determined, one actually measures $h/m$ for the atoms: $$\begin{aligned} \frac{\Delta \nu}{ 2 \nu_* (\nu_* +p/h)} = \frac{h}{m} ~. \label{deltaomeNOEP}\end{aligned}$$ This result has been confirmed experimentally with remarkable accuracy. A powerful way to illustrate this success is provided by comparing the results for atom-recoil measurements of $\Delta \nu/[\nu_* (\nu_* +p/h)]$ and for measurements [@gab08] of $\alpha^2$, the square of the fine structure constant. $\alpha^2$ can be expressed in terms of the mass $m$ of any given particle [@Wicht02] through the Rydberg constant, $R_\infty$, and the mass of the electron, $m_{{e}}$, in the following way [@Wicht02]: $ \alpha^2 = 2 R_\infty \frac{m}{m_{{e}}} \frac{h}{m}$. Therefore according to Eq. (\[deltaomeNOEP\]) one should have $$\frac{\Delta \nu}{ 2 \nu_* (\nu_* +p/h)} = \frac{\alpha^2}{2 R_\infty} \frac{m_e}{m_u} \frac{ m_u}{m} ~, \label{alphaJ2}$$ where $m_u$ is the atomic mass unit and $m$ is the mass of the atoms used in measuring $\Delta \nu/[\nu_* (\nu_* +p/h)]$. The outcomes of atom-recoil measurements, such as the ones with Caesium reported in Ref. [@Wicht02], are consistent with Eq. (\[alphaJ2\]) with the accuracy of a few parts in $10^9$. The fact that Eq. 
(\[deltaomeNOEP\]) has been verified to such a high degree of accuracy proves to be very valuable for our purposes as we find that modifications of the dispersion relation require a modification of Eq. (\[deltaomeNOEP\]). Our derivation can be summarized briefly by observing that the logical steps described above for the derivation of Eq. (\[deltaomeNOEP\]) establish the following relationship $$h \Delta \nu \simeq E(p + h\nu + h\nu') - E(p) \simeq E(2 h\nu_* + p) - E(p) ~, \label{DeltaOmegaGenerico}$$ and therefore Planck-scale modifications of the dispersion relation, parametrized in Eq. (\[DispRelNonRelativistica\]), would affect $\Delta \nu$ through the modification of $E(2 h \nu_* + p) - E(p)$, which compares the energy of the atom when it carries momentum $p$ and when it carries momentum $p+2 h \nu_*$. Since our main objective here is to expose sensitivity to a meaningful range of values of the parameter $\xi_1$, let us focus on the Planck-scale corrections with coefficient $\xi_1$. In this case the relation (\[deltaomeNOEP\]) is replaced by $$\begin{aligned} \Delta \nu \! & \simeq & \! \frac{ 2 \nu_* (h \nu_* +p)}{m} + \xi_1 \frac{m}{M_P} \nu_* ~, \label{DeltaOmegaLeading}\end{aligned}$$ and in turn in place of Eq. (\[alphaJ2\]) one has $$\frac{\Delta \nu}{ 2 \nu_* (\nu_* \! + \! p/h)} \!\! \left[ \! 1 \! - \xi_1 \! \left( \! \frac{ m}{2 M_P} \! \right) \!\! \left( \! \frac{m}{h \nu_* +p} \! \right) \! \right] \!\! = \!\! \frac{\alpha^2}{2 R_\infty} \frac{m_e}{m_u} \frac{ m_u}{m} ~. \label{jocMINUS1}$$ We have arranged the left-hand side of this equation placing emphasis on the fact that our quantum-gravity correction is as usual penalized by the inevitable Planck-scale suppression (the ultrasmall factor $m /M_P$), but in this specific context it also receives a sizable boost by the large hierarchy of energy scales $m/(h \nu_* +p)$, which in typical experiments of the type here of interest can be [@Kasevich91b; @Peters99; @Wicht02] of order $\sim 10^{9}$. Our result (\[jocMINUS1\]) for the case of modification of the dispersion relation by the term with coefficient $\xi_1$ can be straightforwardly generalized to the case of a modified dispersion relation of the form $$E \simeq m + \frac{p^2}{2m} + \frac{\xi_{\beta}}{2} \frac{m^{2-\beta}}{M_P} p^{\beta} \label{DispRelJOSE}$$ which reproduces our terms with parameters $\xi_1$ and $\xi_2$ respectively when $\beta =1$ and $\beta = 2$ (but in principle could be examined even for non-integer values of $\beta$). One then finds $$\frac{\Delta \nu}{2\nu_*(\nu_*+p/h)}\left[1 - \xi_{\beta} \left(\frac{m^{2-\beta}\left[(p+2h\nu_*)^{\beta} - p^{\beta}\right]}{4M_P h\nu_*}\right) \left(\frac{m}{h\nu_*+p}\right)\right] \,=\, \frac{\alpha^2}{2 R_{\infty}} \frac{m_e}{m_u} \frac{m_u}{m}$$ which indeed reproduces (\[jocMINUS1\]) for $\beta = 1$ and gives [@gacPRL2009] $$\frac{\Delta \nu}{ 2 \nu_* (\nu_* \! + \! p/h)} \!\! \left[ 1 \! - \xi_2 \frac{m}{M_P} \right] \!\! = \!\! \frac{\alpha^2}{2 R_\infty} \frac{m_e}{m_u} \frac{ m_u}{m} ~, \label{jocMINUS2}$$ for $\beta = 2$. Limits on different models {#LIMITS} ========================== From a phenomenological perspective the most remarkable observation one can ground on the results reported in the previous Section is that the accuracies achievable in cold-atom studies allow us to probe values of $\xi_1$ that are not distant from $|\xi_1| \sim 1$. 
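The orders of magnitude behind this statement are easily checked with a few lines of arithmetic. In the sketch below the photon energy and the atomic momentum are illustrative values representing the typical scales involved, and not the precise parameters of the experiments of Refs. [@Wicht02; @Gerginov06]:

```python
# Size of the xi_1 term in Eq. (jocMINUS1) for Caesium-like scales (eV units, c = 1; illustrative values)
M_P  = 1.2e28    # Planck scale
m_Cs = 1.24e11   # Caesium rest energy, ~124 GeV
h_nu = 1.5       # optical photon energy scale, ~1.5 eV (assumed)
p    = 1.0e2     # atomic momentum, ~100 eV, i.e. a speed of a few cm/s (assumed)

suppression   = m_Cs / (2.0 * M_P)      # the usual Planck-scale penalty, ~5e-18
amplification = m_Cs / (h_nu + p)       # the hierarchy m / (h nu_* + p), ~1e9

print(f"amplification:                  {amplification:.1e}")
print(f"fractional effect for xi_1 = 1: {suppression * amplification:.1e}")   # ~6e-9
```

Since, as recalled in the previous Section, atom-recoil measurements agree with the electron-anomaly determination of $\alpha^2$ at the level of a few parts in $10^9$, a fractional effect of this size is just within the reach of the data, and this is ultimately why the resulting limits on $\xi_1$ turn out to be of order unity.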
This is rather meaningful since, as stressed in the previous Section, the quantum-gravity intuition for parameters such as $\xi_1$ is that they should be (in models where a nonzero value for them is allowed) within a few orders of magnitude of 1. Besides discussing this point, in this Section we also consider the case of the term with the $\xi_2$ parameter and we comment on the relevance of these analyses from the perspective of a class of phenomenological proposals which is broader than the one here discussed in Section II. The closing remarks of this Section are devoted to observations that may be relevant for attempts to further improve the relevant experimental limits.

Limits on $\xi_1$ and $\xi_2$
-----------------------------

The fact that our analysis provides sensitivity to values of $\xi_1$ of order 1 is easily verified by examining our result for the case of the $\xi_1$ parameter, which we rewrite here for convenience $$\frac{\Delta \nu}{ 2 \nu_* (\nu_* \! + \! p/h)} \!\! \left[ \! 1 \! - \xi_1 \! \left( \! \frac{ m}{2 M_P} \! \right) \!\! \left( \! \frac{m}{h \nu_* +p} \! \right) \! \right] \!\! = \!\! \frac{\alpha^2}{2 R_\infty} \frac{m_e}{m_u} \frac{ m_u}{m} ~, \label{joclong1}$$ and taking into account some known experimental accuracies. Let us focus in particular on the Caesium-atom recoil measurements reported in Ref. [@Wicht02], which were ideally structured for our purposes. Let us first notice that $R_\infty$, ${m_e}/{m_u}$ and ${ m_u}/{m_{Cs}}$ are all known experimentally with accuracies of better than 1 part in $10^9$. When this is exploited in combination with the value of $\alpha$ recently determined from electron-anomaly measurements [@gab08], $\alpha^{-1} = 137.035\,999\,084\,(51)$, the results of Refs. [@Wicht02; @Gerginov06] then allow us to use (\[joclong1\]) to determine that $\xi_1 = - 1.8 \pm 2.1$. This amounts to the bound $-6.0 < \xi_1 < 2.4$, established at the 95% confidence level, and shows that indeed the cold-atom experiments we here considered can probe the form of the dispersion relation (at least in one of the directions of interest) with sensitivity that is meaningful from a Planck-scale perspective. As mentioned in Section \[COMPLEMENTARITY\], among the models that could be here of interest there are some where, by construction, $\xi_1=0$ but $\xi_2 \neq 0$. In such cases it is then of interest to establish bounds on $\xi_2$ derived assuming $\xi_1=0$, for which one can easily adapt the derivation discussed above. These are therefore cases in which our result (\[jocMINUS2\]) is relevant, and one easily then finds that the atom-recoil results for Caesium atoms reported in Refs. [@Wicht02; @Gerginov06] can be used to establish that $- 3.8 \cdot 10^{9} < \xi_2 < 1.5 \cdot 10^{9}$. This bound is still some 6 orders of magnitude above even the most optimistic quantum-gravity estimates. But it is a bound that still carries some significance from the broader perspective of tests of Lorentz symmetry [@gacPRL2009].
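The arithmetic connecting the two intervals just quoted can also be summarized in a short sketch. The value adopted below for $h\nu_* + p$ is again an assumed, illustrative scale (the published intervals of course follow from the actual experimental parameters), so the output should only be read as reproducing the bounds up to rounding:

```python
# From xi_1 = -1.8 +/- 2.1 to the quoted intervals on xi_1 and xi_2 (eV units; illustrative scales)
m_Cs, M_P = 1.24e11, 1.2e28
h_nu_plus_p = 1.0e2                 # assumed typical value of (h nu_* + p)

central, sigma = -1.8, 2.1
xi1_lo, xi1_hi = central - 2 * sigma, central + 2 * sigma        # ~95% confidence interval
print(f"xi_1 in [{xi1_lo:.1f}, {xi1_hi:.1f}]")                   # [-6.0, 2.4]

# Since both bounds derive from the same fractional accuracy on Delta nu / [2 nu_* (nu_* + p/h)],
# the ratio of the xi_2 and xi_1 intervals is essentially the ratio of the two correction
# factors appearing in Eqs. (jocMINUS1) and (jocMINUS2), i.e. m / (2 (h nu_* + p)).
rescale = m_Cs / (2.0 * h_nu_plus_p)
print(f"xi_2 roughly in [{xi1_lo * rescale:.1e}, {xi1_hi * rescale:.1e}]")   # compare with [-3.8e9, 1.5e9]
```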
Relevance for other quantum-gravity-inspired scenarios ------------------------------------------------------ Up to this point we have assumed “universal" effects, [*i.e.*]{} modifications of the dispersion relation that have the same form for all particles, independently of spin and compositeness, and with dependence on the mass of the particles rigidly inspired by the quantum-gravity arguments suggesting correction terms of the form $m^j p^k/M_{p}^l$ ([*i.e.*]{} with a characteristic dependence on momentum and with a momentum-independent coefficient written as a ratio of some power of the mass of the particle versus some power of the Planck-scale). While this universality is indeed assumed in the majority of studies of the fate of Poincaré symmetry at the Planck scale, alternatives have been considered by some authors [@liberatiNONUNIV] and there are good reasons to at least be open to the possibility of nonuniversality. One reason of concern toward universality originates from the fact that clearly modifications of the dispersion relation at the Planck scale are a small effect for microscopic particles (always with energies much below the Planck scale in our experiments), but would be a huge (and unobserved) effect for macroscopic bodies, such as planets and, say, soccer balls. Even the literature that assumes universality is well aware of this issue, and in fact the opening remarks of papers on this subject always specify a restriction to microscopic particles. With our present (so limited) understanding of the quantum-gravity realm we can indeed contemplate for example the possibility that such effect be confined to motions which admit description in terms of coherent quantum systems (by which we simply mean that the focus is on the type of particles whose quantum properties could also be studied in the relevant class of phenomena, unlike the motions of planets and soccer balls). This is clearly (at least at present) a plausible scenario that many authors are studying and for which atoms provide an extraordinary opportunity of investigation of the nonrelativistic limit: because of their relatively large masses atoms have ultrashort (de Broglie) wavelengths even at low speeds and provide relatively large values for terms of the form $m^j p^k/M_{p}^l$. Let us compare for example our study to the popular studies of the ultrarelativistic regime with photons. The best limits on the ultrarelativistic side are obtained [@fermiNATURE] through observations of photons with energies of a few tens of GeV’s. The limit we here obtained in the nonrelativistic regime involves very small speeds ($\ll c$) but for particles, the atoms, with (rest) energies in the $\sim 100~\text{GeV}$ range. While it is therefore rather clear that atoms are excellent probes of scenarios with universality for “quantum-mechanically microscopic particles", their effectiveness can be sharply reduced in models with some forms of nonuniversality. In particular, one could consider the compositeness of particles as a possible source of nonuniversality [@dsrIJMPrev]. 
And this would imply that in the study of processes involving, say, protons and pions one should adopt a “parton picture" with the number of partons acting in the direction of averaging out the effects: if quantum-spacetime effects affect primarily the partons then a particle composed of 3 partons could feel the net result of 3 such fundamental features, with a possible suppression ([*e.g.*]{} by a factor of $\sqrt{3}$) of the effect for the particle with respect to the fundamental effect for partons. These ideas have not gained much attention, probably also because things might change only at the level of factors of order $1$ if one was for example to devise ways to keep track of the different number of partons for nucleons and for pions. But in the case of atoms, that we are now bringing to the forefront of quantum-gravity phenomenology, clearly these concerns cannot be taken lightly: for the description of an atom one might have to consider hundreds of partons (or at least $\sim 100$ nucleons). We therefore expect that our strategy to place limits on $\xi_1$ and $\xi_2$ will be less effective (limits more distant from the Planck scale) in scenarios based on one or another form of “parton model" for the implications of spacetime quantization on quantum-mechanical particles. We do not dwell much on this here at the quantitative level since the literature does not offer us definite models of this sort that we could compare to data. Even assuming that the effect is essentially universal one could consider alternatives to the most common assumption that quantum-gravity corrections have the form $m^j p^k/M_P^l$. In particular, some authors (see, [*e.g.*]{}, Refs. [@HIN; @HOS; @ROV]) have argued that the density of energy (or mass) of a given particle (be it elementary or composite) should govern the magnitude of the effect, rather than simply the mass of the particle. This is another possibility which is also under investigation [@HIN; @HOS; @ROV] as a mechanism for effectively confining the new effects to elementary particles. In the simplest scenarios this proposal might amount to replacing terms such as our $\xi_1 m p/(2M_P)$ with terms of the general form ${\tilde{\xi}}_1 \rho^{1/4} p/(2 M_P)$, but of course the implications of such pictures depend crucially on exactly which density $\rho$ one adopts. For different choices of $\rho$ the limits derived from atom-recoil experiments can be more or less stringent than those derived in studies of lighter particles, such as electrons. Another framework which can be used to illustrate the different weight that cold-atom studies can carry in different scenarios for the deformation of the dispersion relation is the one already studied in Refs. [@LIV-tritium; @LIV-cutoffs], parameterized by a single scale $\lambda$ such that $E^2=m^2+p^2+2\lambda p$. Limits on this form of the dispersion relation have been obtained for neutrinos in Ref. [@LIV-tritium], and for electrons, in Ref. [@LIV-cutoffs]. Taking into account that from $E^2=m^2+p^2+2\lambda p$ it follows that in the nonrelativistic limit $E=m+p^2/(2m)+\lambda p/m$, one easily finds that the parametrization we introduced in Eq. (\[DispRelNonRelativistica\]) and the parametrization of Refs. [@LIV-tritium; @LIV-cutoffs] are related by $\xi_1 m/M_P\equiv 2 \lambda/m$. And in light of this one easily sees that our atom-recoil analysis can also be used to establish the bound $-3.7\cdot 10^{-6}\,\text{eV}<\lambda<1.5\cdot 10^{-6}\,\text{eV}$. 
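The conversion behind this bound is a one-line rescaling of the $\xi_1$ interval, $\lambda = \xi_1\, m^2/(2 M_P)$, which can be checked directly (a sketch with rounded inputs; small differences with respect to the quoted numbers are just rounding):

```python
# Converting the xi_1 interval into a bound on lambda via lambda = xi_1 * m^2 / (2 M_P)  (eV units)
m_Cs, M_P = 1.24e11, 1.2e28
conversion = m_Cs**2 / (2.0 * M_P)        # ~6.4e-7 eV of lambda per unit of xi_1

xi1_lo, xi1_hi = -6.0, 2.4
print(f"lambda in [{xi1_lo * conversion:.1e}, {xi1_hi * conversion:.1e}] eV")
# compare with the quoted interval; the small offset on the lower end is rounding of the inputs
```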
This shows that the cold-atom-based strategy is suitable also for studies of the $\lambda$-parameter picture of Refs. [@LIV-tritium; @LIV-cutoffs]. But, while, as some of us already stressed in Ref. [@gacPRL2009], this atom-based bound on $\lambda$ is more powerful (by roughly 6 orders of magnitude) than bounds previously obtained on $\lambda$ using neutrino data [@LIV-tritium], we should here notice that the best present bound on $\lambda$ is the electron-based bound derived in Ref. [@LIV-cutoffs], which is at the level $|\lambda| \lesssim 10^{-7}~\text{eV}$. We stress that there is no contradiction between the remarks we offered above on the unique opportunities that cold-atom studies provide for setting bounds on the parameter $\xi_1$, and the fact that instead for the $\lambda$ parameter electron studies are competitive with (and actually still slightly more powerful than) atom-based studies: this difference between the strategies for bounding the $\xi_1$ parameter and the $\lambda$ parameter is easily understood in light of the relation $\xi_1 m/M_P \leftrightarrow 2 \lambda/m$ and of the large difference of masses between electrons and (Caesium or Rubidium) atoms. Finally, in closing this Subsection on alternative models, let us mention the possibility of intrinsically non-universal modifications of the dispersion relation, [*i.e.*]{} phenomenological scenarios in which the modifications of the dispersion relation are assumed to be different for different particles without introducing any specific prescription linking these differences to the mass, the spin or other specific properties of the particles. For example, in Ref. [@liberatiNONUNIV], and references therein, the authors introduce a free parameter for each different type of particle. In such cases studies of Caesium and, say, Rubidium atoms could be used to set constraints on parameters that are specialized to those types of atoms. In essence, according to this (certainly legitimate) perspective, we might learn that for Caesium and Rubidium $\xi_1$ is small, but without assuming any implications for the values of $\xi_1$ for other particles. And another noteworthy example is the one of Ref. [@ellisPHOTONonly], and references therein, where it is argued, within a specific scenario for quantum gravity, that the effects of modification of the dispersion relation should be confined to a single type of particle, the photon (in which case of course atoms cannot possibly be of any help).

Strategies for improving the limits {#NEWLIMIT}
-----------------------------------

As a contribution toward the development of experimental setups which in some cases may be optimized for our proposal it is important for us to stress that, while essentially here we structured our analysis in a way that might appear to invite interpretation as “quantum-gravity corrections to $h/m$ measurements", not all improvements in the sensitivity of measurements of $h/m$ will translate into improved bounds on the parameters we here considered. First we should notice that our result for the $\xi_1$-dependent correction to $\Delta \nu /[ 2 \nu_* (\nu_* \! + \! p/h)]$ would not appear as a constant shift of $h/m$, identically applicable to all experimental setups. This is primarily due to the fact that, as shown in Eq. (\[joclong1\]), our quantum-gravity correction factor has the form $ 1 - \xi_1 m^2/[2 M_P(h \nu_* +p)]$, and therefore at the very least should be viewed as a momentum-dependent shift of $h/m$.
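This momentum dependence is easily quantified. With the same illustrative scales used earlier, the sketch below shows how quickly the $\xi_1$ signal is diluted as more momentum is imparted to the atoms:

```python
# Momentum dependence of the xi_1 correction factor m^2 / (2 M_P (h nu_* + p))  (eV units; illustrative values)
m_Cs, M_P, h_nu = 1.24e11, 1.2e28, 1.5

def xi1_factor(p):
    return m_Cs**2 / (2.0 * M_P * (h_nu + p))

for p in (1.0e2, 1.0e3, 1.0e4):    # ~100 eV, ~1 keV, ~10 keV of atomic momentum
    print(f"p = {p:.0e} eV  ->  fractional effect for xi_1 = 1: {xi1_factor(p):.1e}")
# each order of magnitude of additional momentum dilutes the xi_1 signal by an order of magnitude
```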
Different $h/m$ measurements, even when relying on the same atoms (same $m$), are predicted to find different levels of inconsistency with the uncorrected relationship between $h/m$ and $\alpha^2$. This is particularly important because some of the standard techniques [@Wicht02; @udem] used to improve the accuracy of measurement of $h/m$ rely on imparting to the atoms higher overall values of momentum, but since the magnitude of the $\xi_1$-governed effect decreases with the magnitude of momentum, these possible ways to get more accurate determinations of $h/m$ might not actually provide more stringent bounds on $\xi_1$. This is after all one of the reasons why the bound on $\xi_1$ which we discussed here relied on the determinations of $h/m$ reported in Ref. [@Wicht02; @Gerginov06]: a more accurate determination of $h/m$ was actually obtained in the cold-atom (Rubidium) studies reported in Refs. [@biraben06; @biraben08], but those more accurate determinations of $h/m$ relied on much higher values of momentum, thereby producing a bound on $\xi_1$ which is not competitive [@gacPRL2009] with the one obtainable using the $h/m$ determination of Ref. [@Wicht02; @Gerginov06]. The challenge we propose is therefore the one of reaching higher accuracies in the measurement of $h/m$ without increasing significantly the momentum imparted to the atoms. Interestingly these concerns do not apply to our result for the $\xi_2$ parameter. In fact, our result for the $\xi_2$-dependent correction to $\Delta \nu /[ 2 \nu_* (\nu_* \! + \! p/h)]$ would actually appear as a constant shift of $h/m$, a mismatch between $h/m$ results and $\alpha^2$ results of identical magnitude in all experimental setups using the same atoms (same $m$). This is due to the fact that, as shown in Eq. (\[jocMINUS2\]), our quantum-gravity correction factor has the form $[ 1 - \xi_2 m/M_P]$, and therefore can indeed be viewed as a (mass-dependent but) momentum-independent shift of $h/m$. Besides these issues connected with the role played by the momentum of the atoms in our analysis, there are clearly other issues that should be taken into consideration by colleagues possibly contemplating measurements of $h/m$ that could improve the limits on our parameters. One of these clearly deserves mention here, and concerns the setup of $h/m$ measurements as differential measurements. In this respect it is rather significant that our derivation of dependence of the measured $\Delta \nu$ on the Planck-scale effects shows that the sign of the correction term depends on the “histories" (beam-splitting/beam-recombination histories) of the atoms whose interference is eventually measured. Even from this perspective our result is therefore not to be viewed simply as “a shift in $h/m$": often in the relevant cold-atom experiments one achieves a very accurate determination of $h/m$ by comparing (in the sense of a differential measurement) two different values of $\Delta \nu$ obtained by interference of different pairs of beams produced in the beam-splitting/beam-recombination sequence of a given experimental setup. 
We should therefore warn our readers that for some differential measurements the effect measured would be twice as large as the one we here computed (same effect but with opposite sign on the two sides of the differential measurement), but on the other hand it is not hard to arrange[^5] for a differential measurement that is insensitive to the quantum-gravity effects (if the “histories" are such that the correction carries the same sign on the two sides of the differential measurement). Atom velocity, energy-momentum conservation and other relativistic issues {#BOOSTEDFRAME} ========================================================================= We have so far focused on schemes which assume that the only new relevant quantum-gravity-induced law amounts to a modification of the energy-momentum dispersion relation. The main results here derived in Section \[nonrUR\] relied on a strategy of analysis that only requires a specification in the “laboratory frame" of the form of the dispersion relation (which is used to establish, for example, the energy gained by an atom when its spatial momentum is increased) and the law of energy-momentum conservation (which is used to establish, for example, the spatial momentum imparted to an atom upon absorption of a photon of known wavelength). Even within that scheme of analysis one clearly should consider also the possibility of modifications of the law of energy-momentum conservation, especially in light of the fact that certain quantum-gravity scenarios establish (see below) a direct link between modifications of the dispersion relation and some corresponding modifications of the law of energy-momentum conservation. Moreover, the laboratory-frame perspective is of course too narrow for the investigation of the relativistic issues that clearly must be involved in scenarios that introduce modifications of the dispersion relation. Also from this perspective the quantum-gravity literature offers significant motivation for a careful investigation, since modifications of the laws of transformation between reference frames have been very actively studied (see below). And, as we shall here stress, connected to this issue of boost transformations between reference frames one also finds intriguing challenges for what concerns the description of the velocity of particles. In this Section we offer an exploratory discussion of these issues. Even in the quantum-gravity literature on ultrarelativistic modifications of the dispersion relation the study of these issues has proven very challenging, and many unsolved puzzles remain. So we shall not even attempt here to address fully these issues in the novel domain of the nonrelativistic limit, which we are here advocating. But we hope that the observations we report here may provide a valuable starting point for more detailed future studies. Among the “exploratory aspects" of our discussion we should in particular stress that we assume here, as done in most of the related quantum-gravity-inspired literature, that concepts such as energy, spatial momentum and velocity can still be discussed in standard way, so that the novelty of the pictures resides in new laws linking symbols that admit a conventional/traditional physical interpretation. 
Of course, alternative possibilities also deserve investigation: a given quantum-gravity/quantum-spacetime picture might well (when fully understood) provide motivation not only for novel forms of, say, the dispersion relation but also impose upon us a novel description of the entities that appear in the dispersion relation, such as a novel understanding of the energy $E$ that appears in the dispersion relation. But we shall already highlight several challenges for the more conservative scenario (with traditional “interpretation of symbols"), and therefore we postpone to future works the investigation of alternative interpretations. Velocity and boosted-frame analysis ----------------------------------- As a partial remedy to the laboratory-frame limitation of the strategy of analysis discussed in Section \[nonrUR\], we take as our next task the one of deriving the same result using a scheme of derivation involving boosting and the Doppler effect. The role played by transformation laws between different observer-frames motivates part of our interest for this calculation, since investigations of the fate of Poincaré symmetry in models with Planck-scale modifications of the dispersion relation must in general address the issue of whether the symmetries are “deformed", in the sense of the “Doubly Special Relativity" scenario [@gacIJMP2002vD11; @gacdsrPLB2001], or simply “broken". When the symmetry transformations are correspondingly “deformed" the dispersion relation will be exactly the same for all observers [@gacIJMP2002vD11; @gacdsrPLB2001]. In the symmetry-breaking alternative scenario the laws of boosting are unmodified and as a result one typically finds that the chosen form of the dispersion relation only holds for one class of observers (at the very least one must expect [@gacflavtsviuri] observer dependence of the parameters that characterize the modification of the dispersion relation). And another aspect of interest for such analyses originates from the fact that the description of the Doppler effect requires a corresponding description of the velocity of the atoms, and therefore requires a specification of the law that fixes the dependence of speeds on momentum/energy at the Planck scale: this too is a debated issue, with many authors favoring $v(p) = \partial E/\partial p$, but some support in the literature also for some alternatives, the most popular of which is $v=p/E$. As stressed in the opening remarks of this Section, we are just aiming for a first exploratory characterization of these issues and their possible relevance for our atom-recoil studies. Consistently with these scopes we assume that the Doppler effect (boosting) is undeformed and that the dispersion relation is an invariant law. This of course is only one (and a particularly peculiar) example of combination of the possible formulations of the main issues here at stake, but it suffices for exposing the potentially strong implications that the choice of these formulations can have for the analysis. Let us start by reanalyzing the recoil of atoms in terms of a Doppler effect, neglecting initially the possible Planck-scale effects (which we shall reintroduce later in this Section). When an atom absorbs a photon whose frequency is $\nu$ in the laboratory frame, in the rest frame of the atom the photon has frequency $\tilde \nu = \nu ( 1 - v)$, where $v$ is the speed of the atom in the lab frame (and for definiteness we are considering the case of photon velocity parallel to the atom velocity). 
Then in the rest frame, if the absorption of the photon takes the atom to an energy level $h \nu_*$, energy conservation takes the form $$\tilde \nu \simeq \nu_* + \frac{h \nu_*^2}{2m}~, \label{jocflav}$$ which of course can also be equivalently rewritten in terms of the lab-frame frequency of the photon $$\nu \simeq \nu_* (1 + v) + \frac{h \nu_*^2}{2m}~,$$ also neglecting a contribution of order $v \, {h \nu_*^2}/{m}$ which is indeed negligible in the nonrelativistic ($v \ll 1$) regime. This photon absorption also takes the atom from velocity $v$ to velocity $v'$, $$v' \simeq v + h \nu_* / m ~,$$ in the laboratory frame (where we also observed that the gain of momentum of the atom is approximately $h\nu_*$). For the stage of (stimulated) emission of a second photon, whose frequency in the lab frame we denote with $\nu'$, the atom would then be moving at this speed $v'$, and in the rest frame of the atom the frequency of this emitted photon is $\tilde \nu' = \nu' (1+v')$ (also taking into account that if, in the lab frame, the absorbed photon moved in parallel with the atom, the emitted photon must then move in anti-parallel direction). In the case of photon emission, conservation of energy in the rest frame has a different sign with respect to Eq. (\[jocflav\]), [*i.e.*]{} $$\tilde \nu' \simeq \nu_* - \frac{h \nu_*^2}{2m}~,$$ which again one may prefer to re-express in terms of the lab-frame frequency of the photon $$\nu' \simeq \nu_*(1-v') - \frac{h \nu_*^2}{2m}~.$$ So the lab-frame frequency difference between the two photons is $$\Delta \nu = \nu_* (v + v') + \frac{h \nu_*^2}{m} \simeq 2 v \nu_* + \frac{2 h \nu_*^2}{m}~, \label{jocBOOSTIE1}$$ and this (as easily seen upon noticing that in the nonrelativistic limit $v = p /m$) of course perfectly agrees with the corresponding result (\[deltaomeNOEP\]), which we had obtained relying exclusively on lab-frame kinematics. It is easy to verify that redoing this Doppler-effect-based derivation in the presence of our Planck-scale corrections to the dispersion relation (but setting aside, at least for now, possible Planck-scale dependence of the Doppler effect) one ends up replacing (\[jocBOOSTIE1\]) with $$\Delta \nu = \nu _* \left[ v(p) + v(p+ h \nu _* )\right] + \frac{ h \nu _*^2 }{m} + \xi_1 \frac{m}{M_P} \nu_* ~. \label{DeltaNUboostedGENERAL}$$ This is the formula that should reproduce our main result (\[DeltaOmegaLeading\]). Indeed this is the point where one might encounter the necessity of Planck-scale modifications of the boost/Doppler-effect laws and/or of Planck-scale modifications of the law that fixes the dependence of speeds on momentum/energy. Concerning speeds, if one assumes (as done by most authors [@grbgac; @astroBiller; @urrutiaPRL; @PiranNeutriNat; @LQGDispRel]) $v = \partial E/\partial p$, then in our context (nonrelativistic regime, with $\xi_1$ parameter) one finds $v(p) = p/m + \xi_1 m/(2 M_P)$. If instead, as argued by other authors [@newVEL1; @newVEL2; @newVEL3], consistency of the Planck-scale laws requires that $v = p/E$ should be enforced, then in our nonrelativistic context one of course has $v(p) = p/m$.
We find that the desirable agreement between (\[DeltaNUboostedGENERAL\]) and (\[DeltaOmegaLeading\]) is obtained upon assuming $v(p) = p/m$, which indeed allows one to rewrite (\[DeltaNUboostedGENERAL\]) as $$\Delta \nu = \frac{2 \nu_* (p + h \nu_*)}{m} + \xi_1 \frac{m}{M_P} \nu_* ~.$$ If instead one insists on the alternative $v(p) = \partial E/\partial p = p/m + \xi_1 m/(2 M_P)$, then (\[DeltaNUboostedGENERAL\]) takes the form $$\Delta \nu = \frac{2 \nu_* (p + h \nu_*)}{m} + 2\xi_1 \frac{m}{M_P} \nu_* ~,$$ which is sizably different from (\[DeltaOmegaLeading\]). Our observation that the law $v=p/m$ is automatically consistent with a plausible symmetry-deformation perspective is intriguing, but might well be just a quantitative accident. We thought it might still be worth reporting just as a way to illustrate the complexity of the issues that come into play if our cold-atom studies are examined within a symmetry-deformation scenario, issues that we postpone to future studies. The Doppler effect in models with deformed Poincaré symmetries has not been previously studied, and there are several alternative “schools" on how to derive from the energy-momentum dispersion relation a law giving the speed as a function of energy. In the specific case of the correction term we here parametrized with $\xi_1$ it would seem that $v=p/m$ is a natural choice, at least inasmuch as the choice $v(p) = \partial E/\partial p$ appears to be rather pathological/paradoxical since it leads to $v(p) = p/m + \xi_1 m/(2 M_P)$, [*i.e.*]{} a law that assigns nonzero speed to the particle even when the spatial momentum vanishes.

Testing energy-momentum conservation {#EPCONS}
------------------------------------

Up to this point our analysis has focused on tests of the Lorentz sector of Poincaré symmetry. But of course there is also interest in testing the translation sector, and indeed there has been a corresponding effort, particularly over the last decade. The aspect of the translation sector on which these studies have primarily focused is the law of energy-momentum conservation in particle-physics processes, and particularly noteworthy are some results [@gactpPRD; @sethEPCONS] which exposed “Planck-scale sensitivity" for the analysis of certain classes of “ultraviolet" (high-energy) modifications of the law of energy-momentum conservation. Even for these studies one can contemplate the alternative between breaking and deforming Poincaré symmetry, and from this perspective it is rather noteworthy that the scenarios in which one deforms Poincaré symmetry require [@gacIJMP2002vD11; @dsrIJMPrev] a consistency[^6] between the scheme of modification of the dispersion relation and the scheme of modification of the law of energy-momentum conservation. If instead one is willing to break Poincaré symmetry one can of course consider independently (or in combination) both modifications of the dispersion relation and modifications of the law of energy-momentum conservation. In this Section we want to point out that our cold-atom-based strategy also provides opportunities for studies of the form of the law of energy-momentum conservation in the nonrelativistic regime. The observations on cold-atom experiments that some of us reported in Ref. [@gacPRL2009] already inspired the recent analysis of Ref. [@kowamich], which provides preliminary encouragement for the idea of using cold-atom experiments for the study of the form of the law of energy-momentum conservation in the nonrelativistic regime. The scopes of the analysis reported in Ref.
[@kowamich] were rather limited, since it focused on one specific model, which in particular codifies no modifications of the dispersion relation: the only modification allowed in Ref. [@kowamich] appeared in the law of energy-momentum conservation and appeared only at subleading order (in the sense here introduced in Sections \[COMPLEMENTARITY\]-\[nonrUR\]) in the nonrelativistic limit. While maintaining the perspective of a first exploratory investigation of these issues, we shall here contemplate a more general scenario, with modifications of both energy-momentum conservation and dispersion relation, and with correction terms strong enough to appear even at the leading order in the nonrelativistic regime. Besides aiming for greater generality, our interest in this direction is also motivated by the desire of setting up future analysis which might consider in detail the interplay between modifications of the dispersion relation and modifications of energy-momentum conservation, particularly from the perspective of identifying scenarios with deformation (rather than breakdown) of Poincaré symmetries, for which, as mentioned, this interplay is in many instances required [@gacIJMP2002vD11; @dsrIJMPrev]. While we shall not here attempt to formulate a suitable deformed-symmetry scenario, the observations we here report are likely to be relevant for the possible future search of such a formulation. In light of the exploratory nature of our investigation of this point we shall be satisfied illustrating the possible relevance of the interplay between dispersion relation and energy-momentum conservation for the specific case of modified laws of conservation of spatial momentum (ordinary conservation of energy): $$\vec p_1 + \vec p_2 - \frac{\rho_1}{4 M_P} \left ( \frac{E_1^2}{E_2} \vec p_1 + \frac{E_2^2}{E_1} \vec p_2 \right) - \frac{\rho_2}{2 M_P} ( E_1 \vec p_2 + E_2 \vec p_1) = \vec p_3 + \vec p_4 - \frac{\rho_1}{4 M_P} \left ( \frac{E_3^2}{E_4} \vec p_3 + \frac{E_4^2}{E_3} \vec p_4 \right) - \frac{\rho_2}{2 M_P} ( E_3 \vec p_4 + E_4 \vec p_3) ~.$$ We are focusing on the case of two incoming and two outgoing particles (relevant for processes in which a photon is absorbed and one is emitted by an atom), and we characterized the modification in terms of parameters $\rho_1$ and $\rho_2$. 
As announced, we shall keep track of these parameters $\rho_1$ and $\rho_2$ together with the parameters $\xi_1$ and $\xi_2$ that parametrized the modifications of the dispersion relation in the nonrelativistic limit[^7]: $$\left\{ \begin{array}{ll} h \nu = h |\vec k| + \frac{\xi_2}{2 M_P} h^2 |\vec k|^2 & \text{(for photons)} \\\\ E = m + \frac{|\vec p|^2}{2m} +\frac{\xi_1}{2 M_P} m |\vec p|+ \frac{\xi_2}{2 M_P} |\vec p|^2 & \text{(for massive particles)} \end{array} \right.$$ For a two-photon Raman transition our modified law of conservation of spatial momentum has significant implications along the common direction of the laser beams used to excite/de-excite the atoms: $$\begin{aligned} &&h |\vec k| + |\vec p| - \frac{\rho_1}{4 M_P} \left ( \frac{h^2 \nu^2}{m} h |\vec k| + \frac{m^2}{h \nu} |\vec p| \right) -\frac{\rho_2}{2 M_P} ( h \nu |\vec p| + E h |\vec k|) = \nonumber \\ &&- h |\vec k'| + |\vec p'| - \frac{\rho_1}{4 M_P} \left ( - \frac{h^2 \nu'^2}{m} h |\vec k'| + \frac{m^2}{h \nu'} |\vec p'| \right) - \frac{\rho_2}{2 M_P} ( h \nu' |\vec p'| - E' h |\vec k'|) ~. \label{jocnewlaw}\end{aligned}$$ In Section \[nonrUR\] we used ordinary momentum conservation, $h |\vec k| + |\vec p| = - h |\vec k'| + |\vec p'|$, but if instead one adopts (\[jocnewlaw\]) the following result is then straightforwardly obtained: $$\frac{\Delta \nu}{2 \nu_*(\nu_* + p/h)} \simeq \frac{h}{m} + \frac{1}{M_P}\left[ m (\xi_1 - \rho_1) + (2 \xi_2-\rho_2) p + 2 (\xi_2 - \rho_2) h \nu_* \right] \frac{h \nu_*}{2 \nu_*(h\nu_* + p)} ~.$$ While this is, as stressed, only an exploratory investigation of the role that could be played by modifications of energy-momentum conservation (in particular there is clearly a strong influence of the specific [*ansatz*]{} we adopted for the modified law of conservation of energy-momentum) it is still noteworthy that the parameter $\rho_1$ enters the final result at the same order as the parameter $\xi_1$ and similarly the parameter $\rho_2$ enters the final result at the same order as the parameter $\xi_2$. In particular, this implies that even at the type of leading nonrelativistic order we here mainly focused on (the order where $\xi_1$ appears) the possibility of modifications of the law of energy-momentum conservation may well be relevant, with nonnegligible effects even in cases where $\xi_1 = 0$ but $\rho_1 \neq 0$.

Closing remarks
===============

We have here used the noteworthy example of atom-recoil measurements to explore whether it is possible to set up a phenomenology for the nonrelativistic limit of the energy-momentum dispersion relation that adopts the same spirit as a popular research program focusing instead on the corresponding ultrarelativistic regime. It appears that this is indeed possible and that on the one hand there is a strong complementarity of insight to be gained by combining studies of the nonrelativistic regime and of the ultrarelativistic regime, and on the other hand the conceptual issues that must be handled (particularly the relativistic issues associated with the possibility of breaking or deforming Poincaré symmetry) are closely analogous. We therefore argue that by adding the nonrelativistic limit to the relevant phenomenology agenda we could improve our ability to constrain certain scenarios, and we could also gain a powerful tool from the conceptual side, exploiting the possibility to view the same conceptual challenges within regimes that are otherwise very different.
In light of the remarkable pace of improvement of cold-atom experiments over the last 20 years, we expect that the sensitivities here established for the parameters $\xi_1$ and $\xi_2$ (and $\lambda$; and $\rho_1$,$\rho_2$) might be significantly improved upon in the near future. This will most likely translate into more stringent bounds, but, particularly considering the values of $\xi_1$ being probed, should also be viewed as a (slim but valuable) chance for a striking discovery. We therefore feel that our analysis should motivate experimentalists to taylor some of their plans in this direction (also using the remarks we offered in Subsection \[NEWLIMIT\]) and should motivate theorists toward a vigorous effort aimed at overcoming the technical difficulties on the quantum-gravity-theory side that presently obstruct the derivation of more detailed quantitative predictions. Acknowledgments {#acknowledgments .unnumbered} =============== G. A.-C. is supported in part by grant RFP2-08-02 from The Foundational Questions Institute (fqxi.org). C. L. is supported in part by the German Research Foundation and the Centre for Quantum Engineering and Space-Time Research QUEST. D. M., J. M. C., J. L. C. and J. I. are supported by CICYT (grants FPA2006-02315 and FPA2009-09638) and DGIID-DGA (grant 2009-E24/2). J. I. acknowledges a FPU grant and D. M. a FPI grant from MICINN. [50]{} G. Amelino-Camelia, gr-qc/9910089, Lect. Notes Phys.  **541**, 1 (2000); [*Quantum Gravity Phenomenology*]{}, arXiv:0806.0339. G. Amelino-Camelia *et al.*, astro-ph/9712103, Nature **393**, 763 (1998). B.E. Schaefer, Phys. Rev Lett. **82**, 4964 (1999). S.D. Biller *et al.*, Phys. Rev. Lett. **83**, 2108 (1999). G. Amelino-Camelia, gr-qc/9808029, Nature [**398**]{}, 216 (1999). J. Alfaro, H. A. Morales-Tecotl and L. F. Urrutia, Phys. Rev. Lett. **84**, 2318 (2000). T. Jacobson, S. Liberati and D. Mattingly, astro-ph/0212190, Nature [**424**]{}, 1019 (2003). G. Amelino-Camelia and C. Lämmerzahl, gr-qc/0306019, Class. Quant. Grav. **21**, 899 (2004). U. Jacob and T. Piran, hep-ph/0607145, Nature Phys. [**3**]{}, 87 (2007). F. Aharonian et al. \[HESS Collaboration\], Phys. Rev. Lett. [**101**]{}, 170402 (2008). G. Amelino-Camelia, C. Laemmerzahl, F. Mercati and G. M. Tino, arXiv:0911.1020 \[gr-qc\], Phys. Rev. Lett.  [**103**]{}, 171302 (2009). R. Gambini and J. Pullin, Phys. Rev. **D59**, 124021 (1999). G. Amelino-Camelia and S. Majid, hep-th/9907110, Int. J. Mod. Phys. **A15**, 4301 (2000). J. Kowalski-Glikman, astro-ph/0006250, Phys. Lett. **B499**, 1 (2001). O. Bertolami and L. Guisado, hep-th/0306176, JHEP [**0312**]{}, 013 (2003). L. Smolin, *Three roads to quantum gravity* (Basic Books, 2002). T. Kifune, astro-ph/9904164, Astrophys. J. Lett. **518**, L21 (1999). G. Amelino-Camelia, gr-qc/0012049, Nature [**408**]{}, 661 (2000). S. P. Robinson and F. Wilczek, hep-th/0509050, Phys. Rev. Lett. [**96**]{}, 231601 (2006). A. Abdo *et al.*, Science [**323**]{}, 1688 (2009). J. Ellis, N. E. Mavromatos and D. V. Nanopoulos, arXiv:0901.4052 \[astro-ph\], Phys. Lett.  B [**674**]{}, 83 (2009). G. Amelino-Camelia and L. Smolin, arXiv:0906.3731 \[astro-ph\], Phys. Rev.  D [**80**]{}, 084017 (2009). A. Abdo *et al.*, Nature **462**, 331 (2009). G. Amelino-Camelia, Nature **462**, 291 (2009). J. Alfaro, H.A. Morales-Tecotl, L.F. Urrutia, hep-th/0208192, Phys. Rev. [**D66**]{}, 124006 (2002). G. Amelino-Camelia, gr-qc/0012051, Int. J. Mod. Phys. [**D11**]{}, 35 (2002). G. Amelino-Camelia, hep-th/0012238, Phys. Lett.  
B [**510**]{}, 255 (2001). J. Magueijo and L. Smolin, Phys. Rev. Lett.  [**88**]{}, 190403 (2002). M. Kasevich and S. Chu, Phys. Rev. Lett. **67**, 181 (1991). A. Peters *et al.*, Nature **400**, 849 (1999). A. Wicht *et al.*, Phys. Script. **T102**, 82 (2002). D. Hanneke, S. Fogwell and G. Gabrielse, Phys. Rev. Lett. **100**, 120801 (2008). V. Gerginov *et al.*, Phys. Rev. [**A73**]{}, 032504 (2006). S. Liberati and L. Maccione, Ann. Rev. Nucl. Part. Sci.  [**59**]{}, 245 (2009). G. Amelino-Camelia, gr-qc/0210063, Int. J. Mod. Phys. [**D11**]{}, 1643 (2002). F. Hinterleitner, gr-qc/0409087, Phys. Rev. D [**71**]{}, 025016 (2005). S. Hossenfelder, hep-th/0702016, Phys. Rev. D [**75**]{}, 105005 (2007). C. Rovelli, arXiv:0808.3505 \[gr-qc\]. J.M. Carmona, J.L. Cortés, hep-ph/0007057, Phys. Lett. B [**494**]{}, 75 (2000). J.M. Carmona, J.L. Cortés, hep-th/0012028, Phys. Rev. D [**65**]{}, 025006 (2001). J. Ellis, N.E. Mavromatos, D.V. Nanopoulos, A.S. Sakharov, astro-ph/0309144v2, Nature [**428**]{}, 386 (2004). T. Udem, Nature Phys. **2**, 153 (2006). P. Cladé *et al.*, Phys. Rev. A **74**, 052109 (2006). M. Cadoret *et al.*, Phys. Rev. Lett. **101**, 23080 (2008). F. Biraben, *Proceedings of the XXI International Conference on Atomic Physics*, 56 (World Scientific, 2009). U. Jacob, F. Mercati, G. Amelino-Camelia and T. Piran, arXiv:1004.0575 \[astro-ph\]. P. Kosinski & P. Maslanka, hep-th/0211057, Phys. Rev. D [**68**]{}, 067702 (2003). S. Mignemi, hep-th/0302065, Phys. Lett. A [**316**]{}, 173 (2003). M. Daszkiewicz, K. Imilkowska & J. Kowalski-Glikman, hep-th/0304027, Phys.Lett. A [**323**]{}, 345(2004). G. Amelino-Camelia & T. Piran, astro-ph/0008107, Phys. Rev. D [**64**]{}, 036005 (2001). D. Heyman, F. Hinteleitner & S. Major, gr-qc/0312089, Phys. Rev. D [**69**]{}, 105016 (2004). M. Arzano, J. Kowalski-Glikman, A. Walkus, arXiv:0912.2712 \[hep-th\]. [^1]: We adopt units in which the speed-of-light scale $c$ is set to $1$ (whereas we shall explicitate the role of the Planck constant $h$). [^2]: We should stress however that, while the perspective schematized in our Eqs. (\[PropertiesOfDeltaQG\])-(\[DispRelPrimoOrdineInMp\]) is by far the most studied in the relevant quantum-gravity-inspired literature, in principle more general possibilities may well deserve investigation. For example, one might contemplate non-integer powers of $M_P$ to appear, and this would not be too surprising, especially in light of the rather common expectation that the correct description of quantum gravity might require sizable nonlocality. [^3]: Note that a contribution of form $m^3$ ([*i.e.*]{} momentum-independent) to $\Delta_{QG}^{(1)}(p,m)$ cannot be included in the nonrelativistic regime because of the requirement $\Delta_{QG}^{(1)}(p=0,m) =0$. A contribution to $\Delta_{QG}^{(1)}(p,m)$ of form $m^3$ is instead admissible in the ultrarelativistic regime (since in that regime the requirement $\Delta_{QG}^{(1)}(p=0,m) =0$ of course is not relevant), but we ignored it since $m^3$ is too small with respect to $p^3$, $m p^2$ and $m^2 p$ in the ultrarelativistic regime. [^4]: We only give a schematic and simplified account of the process, which suffices for the scopes of our analysis. A more careful description requires taking into account that, rather than a single ground state, the relevant two-photon Raman transition involve hyperfine-splitted ground states [@Kasevich91b; @Peters99; @Wicht02]. 
And that, rather than tuning the two lasers exactly on some energy differences between levels, some detuning is needed [@Kasevich91b; @Peters99; @Wicht02]. [^5]: The careful reader will for example notice that Ref. [@birabenPROCEEDINGS] provides an example of setup in which our Planck-scale effects would cancel out. [^6]: These consistency requirements for a deformation of Poincaré symmetry are very restrictive but may not suffice to fully specify the form of the law of energy-momentum conservation by insisting on compatibility with a chosen form of the dispersion relation [@gacIJMP2002vD11; @dsrIJMPrev]. [^7]: Note that we have worked consistently throughout this manuscript characterizing the nonrelativistic limit as the long-wavelength regime where $p \ll E$ for massive particles. This terminology was inspired by our focus on atoms and other massive particles, since the label “nonrelativistic" for long-wavelength photons (massless particles) is of course not applicable. However, the reader can easily check that we handled correctly the long-wavelength properties of photons in the relevant frameworks. In particular, in our characterizations of photons the parameter $\xi_1$ is automatically absent (since $\xi_1$ always appears in formulas multiplied by powers of the mass of the particle) and also parameters such as $\eta_1$ are omitted, since it always appears in the combination $\eta_1 p/M_P$ which in the long-wavelength regime is completely negligible.
/* *********************************************************************** * * project: org.matsim.* * * * * *********************************************************************** * * * * copyright : (C) 2008 by the members listed in the COPYING, * * LICENSE and WARRANTY file. * * email : info at matsim dot org * * * * *********************************************************************** * * * * This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * See also COPYING, LICENSE and WARRANTY file * * * * *********************************************************************** */ package org.matsim.contrib.analysis.kai; import java.util.Collection; import java.util.Map; import java.util.Set; import java.util.TreeMap; /** * Data structure that memorizes double values as a function of String keys. Typically, the key is an aggregation type. If there * are also bins, we need something else ({@link org.matsim.contrib.analysis.kai.Databins} for several keys that collect for the same bins). * * @author nagel */ public class DataMap<K> /*implements Map<K,Double>*/{ private Map<K,Double> delegate = new TreeMap<K,Double>() ; // so the stuff comes out sorted. public int size() { return delegate.size(); } public boolean isEmpty() { return delegate.isEmpty(); } public boolean containsKey(Object key) { return delegate.containsKey(key); } public boolean containsDoubleValue(Object value) { return delegate.containsValue(value); } public Double get(Object key) { return delegate.get(key); } // public Double put(K key, Double value) { // return delegate.put(key, value); // } private static int addCnt = 0 ; public Double addValue( K key, Double value ) { if ( addCnt < 10 ) { System.out.println( key.toString() + ": adding " + value ); } Double result; if ( delegate.get(key)==null ) { result = delegate.put(key,value) ; } else { result = delegate.put( key, Double.valueOf( value.doubleValue() + delegate.get(key).doubleValue() ) ) ; } if ( addCnt < 10 ) { addCnt++ ; System.out.println( key.toString() + ": new value " + value ); } return result ; } public Double inc( K key ) { // System.out.println( key.toString() + ": incrementing" ); if ( delegate.get(key)==null ) { return delegate.put(key,1.) ; } else { return delegate.put( key, Double.valueOf( 1. + delegate.get(key).doubleValue() ) ) ; } } public Double remove(Object key) { return delegate.remove(key); } // public void putAll(Map<? extends K, ? extends Double> m) { // delegate.putAll(m); // } public void clear() { delegate.clear(); } public Set<K> keySet() { return delegate.keySet(); } public Collection<Double> values() { return delegate.values(); } public Set<java.util.Map.Entry<K, Double>> entrySet() { return delegate.entrySet(); } @Override public boolean equals(Object o) { return delegate.equals(o); } @Override public int hashCode() { return delegate.hashCode(); } }
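A minimal usage sketch may help make the accumulate-by-key behaviour of the class above concrete. The key type, variable names and sample values below are illustrative assumptions, not code taken from MATSim itself:

// Illustrative usage sketch for DataMap (assumes the class above is on the classpath).
import java.util.Map;

import org.matsim.contrib.analysis.kai.DataMap;

public class DataMapExample {
	public static void main(String[] args) {
		DataMap<String> distanceByMode = new DataMap<>();

		// addValue accumulates: repeated keys are summed rather than overwritten.
		distanceByMode.addValue("car", 12.5);
		distanceByMode.addValue("car", 3.0);
		distanceByMode.addValue("bike", 4.2);

		// inc adds 1.0 to the entry for the key, creating it if necessary (useful for counts).
		distanceByMode.inc("walkTrips");

		// Keys come out sorted because the delegate map is a TreeMap.
		for (Map.Entry<String, Double> e : distanceByMode.entrySet()) {
			System.out.println(e.getKey() + " -> " + e.getValue());
		}
	}
}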
<?xml version="1.0" encoding="utf-8"?> <!-- Copyright (c) .NET Foundation and contributors. All rights reserved. Licensed under the Microsoft Reciprocal License. See LICENSE.TXT file in the project root for full license information. --> <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <PropertyGroup> <ProjectGuid>{89A33C9C-92F8-4C9B-9AEC-1980C716A304}</ProjectGuid> <OutputType>Exe</OutputType> <RootNamespace>Wix.Samples</RootNamespace> <AssemblyName>runbundle</AssemblyName> </PropertyGroup> <ItemGroup> <Compile Include="Program.cs" /> <Compile Include="AssemblyInfo.cs" /> </ItemGroup> <ItemGroup> <Reference Include="System" /> <ProjectReference Include="..\ManagedBundleRunner\ManagedBundleRunner.csproj"/> </ItemGroup> <Import Project="$([MSBuild]::GetDirectoryNameOfFileAbove($(MSBuildProjectDirectory), wix.proj))\tools\WixBuild.targets" /> </Project>
This invention relates generally to inflatable restraint systems and, more particularly, in one aspect to the type of inflator known as a hybrid inflator and the treatment of gases therein, and in another aspect to the treatment of gas generated by inflatable restraint system inflators.

Many types of inflators have been disclosed in the art for inflating an air bag for use in an inflatable restraint system. One type involves the utilization of a quantity of stored compressed gas which is selectively released to inflate the air bag. Another type derives a gas source from a combustible gas generating material which, upon ignition, generates a quantity of gas sufficient to inflate the air bag. In a third type, the air bag inflating gas results from the combination of a stored compressed gas and the combustion products of a gas generating material. The last-mentioned type is commonly referred to as an augmented gas or hybrid inflator.

Hybrid inflators that have been proposed heretofore have, in general, been subject to certain disadvantages. For example, the burning of the pyrotechnic (gas generating) and initiation materials in such inflators invariably results in the production of particulate material. The use of such a particulate-containing inflator emission to inflate an air bag can in turn result in the particulate material being vented out from the air bag and into the vehicle. Typically, the particulate material is variously sized and includes a large amount of particulate within the respirable range for humans. Thus, the passage of the gas-borne particulate material into the passenger compartment of the vehicle, such as via conventional air bag venting, can result in the undesired respiration of such particulate material by the driver and/or other passengers, which in turn can cause consequent respiratory problems. Also, such particulate can easily become dispersed and airborne so as to appear to be smoke and thereby result in the false impression that there is a fire in or about the vehicle.

It has also been proposed to screen the gaseous emission coming from the pyrotechnic portion of such hybrid inflators. For example, the above-identified U.S. Pat. No. 5,131,680 discloses the inclusion of a circular screen "128" between the body of pyrotechnic material and the orifice through which the pyrotechnically produced emission is passed to the pressurized gas-containing chamber of the hybrid inflator. Also, U.S. Pat. No. 5,016,914 discloses the inclusion of a metal disk having a plurality of suitably sized openings therein. The disk is disclosed as functioning to trap large particles such as may be present in the generated gas.

Such techniques of filtering or screening the gaseous emission of the pyrotechnic section of the hybrid inflator prior to contact with the stored, pressurized gas of the inflator generally suffer from drawbacks such as undesirably slowing or preventing the transfer of heat to the stored gas from the relatively hot generated gas and particulate material. In general, such a transfer of heat to the stored gas is desired in hybrid inflators in order to produce the desired expansion of the gas. Consequently, the slowing or preventing of desired heat transfer can result in a reduction in the performance of the inflator. Also, the screening or filtering of particulate at this location within the inflator can undesirably affect gas flow within the inflator.
For example, such treatment can undesirably restrict the flow of gas out of the pyrotechnic chamber, causing the pressure inside the pyrotechnic chamber to increase and thereby increase the potential for structural failure by the pyrotechnic chamber. The above-identified U.S. Pat. No. 5,016,914 also discloses constraining gas flow to a tortuous path whereby additional quantities of relatively large particles produced by combustion of the gas generating material are separated from the commingled gases as the gases flow toward the inflatable vehicle occupant restraint. As disclosed, various component parts of the vehicle occupant restraint system cooperate to form the described tortuous path. These component parts include the openings in the container which direct the gas into an outer cylindrical diffuser, the container itself which preferably contains gas directing blades positioned therein as well as burst disks to control the flow of the gas generated by ignition of the gas generating material. The patent also discloses that in a preferred embodiment, a coating material, e.g., a silicone grease, is coated onto the inner surface of the container to assist in the fusing of particles thereto rather than allowing the particles to rebound into the nitrogen gas jet stream. Such surface coatings, however, generally suffer in several significant aspects with respect to effectiveness and functioning when compared, for example, to the use of a filter to effect particulate removal. First, as the nature of such fusion or adhesion of particles onto a coating is a surface phenomenon, the effectiveness of such removal is directly related to the amount of available surface area. In practice, such a surface coating provides a relatively limited amount of contact surface area and, further, the effectiveness of such surface treatment typically is decreased as the available surface area is occupied. Also, though such an internal surface coating may be of some use in the fusing of solid particles, such a coating would normally be relatively ineffective in trapping liquid phase particles. Furthermore, the process of condensation of liquid phase particles in an inflator normally involves a transfer of heat to the subject contact surface. In the case of such a surface coated with such a grease, such a transfer of heat could undesirably result in the off-gassing of the coating material, e.g., production of gaseous byproducts of the coating material, which in turn would undesirably contribute to the toxicity of the gases emitted from such an inflator. In addition, the effect of the flow of gases within the inflator can raise concerns about the use of inflators which utilize such coatings. For example, the impingement onto such a coating of the hot combustion gases produced within an inflator would normally tend to displace the coating material, particularly since such coatings tend to become softer at elevated temperatures. Thus, even for the short time periods associated with the operation of such devices neither exclusive nor primary reliance is made by this patent on the use of such a coating to effect particle removal. There is a need and a demand for improvement in hybrid inflators to the end of preventing, minimizing or reducing the passage of particulate material therefrom without undesirably slowing or preventing heat transfer to the stored, compressed gas while facilitating proper bag deployment, in a safe, effective and economical manner. 
The present invention was devised to help fill the gap that has existed in the art in these respects.

In addition, and as described above, inflators, particularly those which house a combustible gas generating material, whether alone or in conjunction with a stored gas as in hybrid inflators, have in the past utilized various grades of fine metal screens to effect emission filtration. Unfortunately, partially as a result of the costs associated with their manufacture, such screen materials can be relatively costly. Also, depending on factors such as the looming and crimping processes employed, individual wires in such metal meshes can experience significant movement relative to adjacent wires and, as a result, detrimentally affect the strength of the resulting wire mesh material. In addition, the edges of wire meshes or screens used in inflator filter assemblies are susceptible to permitting particulate-containing gas generant effluent to pass therethrough and circumvent the main particulate-removing components of the filter assemblies. Such circumvention, also termed "blow-by," can permit undesired and unacceptable amounts of particulates to escape with the inflation gas out of the inflator. Furthermore, the nature of wire meshes or screens prevents the production of a one-piece material which has a first portion without openings, e.g., a border or edge, and a second portion with openings, e.g., a central region.

In general, inflator filters include several components which cooperate to perform various functions or treatments, such as providing for the cooling, flow redirection and filtering (e.g., particulate removal) of or from the contacting stream. Also, one or more filter assembly components can serve to provide structural support for other filter components, such as those that could not otherwise withstand the operating conditions (e.g., temperatures, pressures, and/or flow rates) to which they would be subjected in use.

Thus, there is a need and a demand for improvement in the components and materials used in inflator filter assemblies to reduce cost as well as to improve production, operational and assembly options and capabilities.
I had a big lunch, so something light sounds good. "Neil Mann" <[email protected]> on 11/02/2000 04:27:22 PM Please respond to <[email protected]> To: "C. Kay Mann (E-mail)" <[email protected]> cc: Subject: dependent care I spoke with a lady in St Louis HR who said she paid in advance for her child last year. She said it does not matter when you paid. She suggested we ask St F for a quarterly statement. That's what she used last year and had no problem whatsoever getting reimbursed. I bet we are not the only folks in this situation at St F. I confirmed the fence installation for Wednesday 11/8 at 1:15pm. I will be at an HCA luncheon that morning for 1 hour of CE and should be finished in time to be at the house by then. Any thought on what you would like for dinner? Neil
Friday, September 30, 2011 I went on a field trip today with the three special education classrooms to Maple View Farm in Orange County, but instead of just going for a picnic and ice cream, we took the whole tour. The best moment was on the hayride: touring the cow pasture, the housing area for calves, the milking barn, and the newborn calf pen. The kids all liked this, but one little boy that I sat with, was especially enthused. He had been acting up all day, but during the ride, he was pointing to every animal and vocalizing (he had little speech). I showed him how to sign the two word phrase "love cow"---a big smile came on his impish little face, and then he signed this frequently--every time we passed a group of them. This was a cool way to do speech therapy! This was a great trip---lots of animals, an outdoor picnic area, nice folks, and a cup of homemade ice cream at the end. I appreciate the teachers for setting this up. Thursday, September 29, 2011 When at all possible, I go into classrooms and work with my speech-language kids in that setting. I use the teacher's materials, the teacher's language (sometimes simplified), and work with groups of kids (not all of them 'mine'). The children stay in the mainstream, are learning from the core curriculum, and generally are more successful than those who are pulled out. This week, one of my second grade students hit a roadblock. He didn't grasp the concepts of addition---that is, adding together different numbers to get the same sum (e.g. 3 + 2 is the same as 4 + 1). This concept was presented to the kids as how many people are on a bus if the top deck had so many people and the bottom deck had another number. Kids had to manipulate beads on pipe cleaners---and most of the children understood that the pipe cleaners represented the decks on a bus, and the beads represented people. What the kids typically get for manipulatives (sufficient for most!) My kids didn't get it--typical kids did. After a few days of the teachers and assistant trying to explain the concepts, working with the kids individually, and modeling answers, it hit me that the task and materials were a bit too abstract. I made another set of materials for this second grade teacher--using Boardmaker, Google images, and velcro. She gave me great input----wanted the colors in the manipulative I made to match the ones she was already using. So here is the end result of what I made for my kids.......much more concrete. It's not a work of art, but it's definitely more understandable. I made a blank white area for the children to write the number sentence using erasable markers. These manipulatives will make everything more concrete, and easier to understand. The teacher is excited, and the cool thing is that the other three 2nd grade teachers want the same manipulatives for their struggling kids! I guess I'll be crafting buses and little people next Monday! Boardmaker and Google images are a Godsend! The whole point of this is that, sometimes, modifications need to be made to the materials presented to make the experience a bit more meaningful. Collaboration with this teacher made it happen. She's willing to work hard with the kids, but by providing some supplemental materials, maybe she won't have to work too hard, and the kids will comprehend a bit quicker. I love making classroom materials, so am eager to see what I can make next week! Monday, September 26, 2011 We all have problems--overdue bills, bald tires, oversleeping, bad hair. 
Most of us can gauge our reactions to the seriousness of the problem (except perhaps those of us who experience Road Rage!). Bad hair might merit a scowl, while an overdue bill might merit a frantic trip to the bank. A giant problem such as a bad fall might merit a bit of screaming, calls to 911, and crying. The point is that most of us can gauge our reactions based on the severity of the problem at hand. Most of us can, but with some of the children I work with (many on the autism spectrum), a problem (whether it's a spelling mistake, or a major illness) is always treated as a disaster. It's hard to function that way----always on edge because so many disasters are always happening. With one group of kids I see, all very nice, fun children, I've been using Michelle Garcia Winner's curriculum "Think Social". The book provides step-by-step methods for teaching social-cognitive and -communicative skills to students who have these challenges that affect their school and home life. I started with lesson 1 and am now on the 4th lesson. (I'm trying to follow it as it's written--the author definitely is more knowledgeable about this than I am!) This week's lesson had a great premise--that kids need to learn to gauge reactions, so we made a 'Problem Meter'--zero is no problem while 10 on the meter is a disaster, such as a trip to the hospital. We've had many discussions and first practiced ranking problems on the meter such as 'wrong answer on a math paper', to 'throwing up', to 'someone hitting them'. Today's lesson was in ranking feelings and expressions based on the problem meter. Visuals were from Boardmaker, but any feelings pictures could be used. The teacher has the 'Problem Meter" now in her classroom, and can use it during moments when the child needs help gauging reactions. I'm hoping this will cut down on needless meltdowns and encourage problem-solving behaviors. Saturday, September 24, 2011 Last weekend, we had company. My boy, Ben, and his fiance, Aleah, were over for dinner, along with Aleah's parents. Her mom is a gourmet cook, and brought as a gift, a jar of homemade pesto. Tonight, I made a delicious recipe from one of my favorite recipe websites---GlutenFreeda.com. The main course was Roasted Chicken with Potatoes & Pesto. Click on this link and you'll go directly to the site and the recipe. I guess I'll be having leftovers tomorrow! I still have a half a jar of pesto left---I wonder what other recipes I can try? Ignore the bananas in the background. They just happened to be on the table. Thursday, September 22, 2011 In my speech sessions, I like to read picture books to the kids, and nonfiction science books are great ways to help improve children's language skills and knowledge of the world around them. (Science, as you know, is really not emphasized as much these days in elementary school--it's not on the end of grade tests until 5th grade.) A favorite of mine is the book Actual Size by Steve Jenkins. Children love the illustrations: (from the Library School Journal, "In striking torn-and-cut paper collages, Jenkins depicts 18 animals and insects–or a part of their body–in actual size." Check this out! This is just one illustration---an actual size eyeball of a Giant Squid! I like to hold this picture up next to a kid's head---the eye and the head are the same size. The kids get a kick out of that comparison. 
Group rules that I teach In addition to all of the language and vocabulary concepts presented in this book, I also use it to help teach classroom social skills to my verbal kids who are on the autism spectrum. Typically, for each page, these children want to spew out all the facts they may know about a particular animal or creature. This uninhibited rattling off of minute facts is often disruptive to a group or classroom discussion, so I like to teach the pragmatics of being in a discussion group. That's where a small written organizer and picture cues come in handy. The written organizer is in the form of a book-specific list of the animals, with clearly defined spaces as to when the children can raise their hands and offer on-topic remarks or questions. Later, I try to fade out the 'raise your hand' cues, but initially they are needed. Part of the "Actual Size" organizer. Kids follow along while I read. The kids need role playing, and clear instructions to learn the group discussion rules, but after a few months, they know what 'on target' talking means, and how to raise their hand to speak. We use many science picture books to provide the median to teach these skills. I encourage my teachers to use the same visuals---pictured rules, and book organizers. A book like Actual Size lends itself to this type of lesson since the language is simple, the topics are clearly defined, and it's interesting to the children. I love it! Steve Jenkins has written many books perfect for this type of lesson. Check out his website! Monday, September 19, 2011 What happens when a biology major is also an artist? You end up with the most artistic flashcards ever! At UNC Asheville, Vicki is taking a class where she has to memorize species of salamanders. To help her learn the names, she drew flashcards---if it were me drawing them, they would be stick figures. Take a look at her creations! Sunday, September 18, 2011 If any of you are thinking of mentoring a child, the activities you can do are practically unlimited. This weekend, we took a jaunt to Asheville and Brevard. I've actually seen mountains and waterfalls before, but loved sharing this experience with another person who hadn't. Thursday, September 15, 2011 Actually, this posting is not about ALL the children at my school---just the first grade. The first grade team, in preparation for open house tonight, asked the kids to create their own realistic likenesses using a collage-type of format. Parents coming in then had to try to match their child's name with the piece of art. I loved how the children perceived themselves, the details, the colors, the individual touches! Can you believe the bead work? The bleached hair styling! He looks ready for Wall Street. The Mohawk! The bangs! Love the earrings and hair! This tradition has continued for several years. Thanks, Gretchen! We now are making plans for staff to make their own self portraits and have the students guess! I can't wait! Wednesday, September 14, 2011 I should have posted videos on YouTube of my twins, except it wasn't invented when they were little! I bet they are counting their blessings! In this video of unknown twins, from a special education/therapeutic viewpoint, the twin on the left seems a little more coordinated. In one child's case (my student), he seems to be learning to read and write words that are highly interesting to him---words that appear in elevators, and color words, safety signs, and names of family members. 
He really doesn't have the verbal ability at this moment to tell people that he is interested in reading. Due to his autism, perhaps everyone assumed he wasn't really ready. No one knew about his reading skills until his mom discovered that he was spelling his favorite words on his iPad, and asking her how to spell less familiar words. Wow! Today, I went around the school with this little boy, and he read many words and signs he saw in the hallway. I took pictures and made a special book just for him! I got goosebumps! 'Caution' is what he told me. No problem reading 'stop'! 'Up' was the word He spelled color words on the iPad, and words such as 'cat' and 'dog'. What else does he know? Kids like him challenge my assumptions, and make me rethink everything. Monday, September 12, 2011 Chapel Hill is an affluent community. I live in a house where we have about 2 computers per person, if you count the iPad and MacBook that I carry around from work. We have a scanner, video cameras, and printer. We have printer paper. We have wireless! I don't feel as though my home is atypical for my circle of neighbors and friends. Life is easy for me, and if I had kids still at home, homework would go smoothly. My college twins each have their own laptop, and my grown boys have computers. They stay in touch! With mentoring, I've learned that life, technology-wise, is not always so easy. I can't always email and depend on an answer. There is no wireless, no printer, no dependable computer. Large families compete for their time on one 8 year old half-working laptop; so I was very happy when a generous neighbor (in response to my email) donated two laptops to Blue Ribbon kids--one for my mentee, and a ThinkPad for another mentee. Wow!!! In this day and age, all families need the internet, and these were very nice donations! Saturday, September 10, 2011 I'm seeing this question all over ---where was I on 9/11? I remember that day very clearly. It was a terrible day. At my school, there used to be a room with a television (TV gone now). On September 11, people like me (not a classroom teacher), popped in and watched the air strikes, the commentary, and the twin towers fall---again and again. We didn't tell the kids--the classroom teachers carried on. Friday, September 9, 2011 I made some new best friends today! It seems to be all of the third grade teachers at Ephesus Elementary! Maybe they really do like me, but I suspect it's really because the Public School Foundation blessed me with a grant for 4 iPads. Guess where the iPads are going? Right on! To the third grade teachers to keep track of all of their intervention data. (So what if they also play Tap Tap Revenge occasionally!) Personally, I'm at Level 27! I'm challenging them! Thursday, September 8, 2011 The kids have grown up so fast! One minute they were preschool, the next they were adults. I don't feel that much older, until I look at them, and often think wistfully that I'd like to go back and spend a few days with them again when they were little and cuddly. I was housecleaning, and found a box of used-up disposable cameras and rolls of film. Why they were in a box, I don't know. So, I sent them away to be developed. I got the pictures back today, and it sent me back to the preschool days for the twins, elementary for the boys. Here are a few of my favorites: Ben and our first piano, his first year at playing it. 2nd grade here--now 25 years old. Zach 3rd grade (now 26). Old computer, when the internet was really new. 
My favorite picture---Vicki at about 4 or 5. Now 21. Andorra's first hairdresser appointment, and she still remembers how much she hated that haircut. Twins--at 4; now 21 Ordinary snapshots, but extraordinary in their 'time capsule' effect on me personally. Time flies, so enjoy every minute of it! Wednesday, September 7, 2011 A friend and colleague of mine steered me to a TV show, The Big Bang Theory. I don't watch much television in general, but this particular sitcom seems to have as its star a young man, Sheldon, who fits the image of a brilliant man with Aspergers. Since I work with younger versions of this character, and try to teach some of the same skills this guy is lacking, the humor presented hits home. This is funny! Here is a part of an episode where Sheldon has created a schema for making a friend. See for yourself! I feel that Sheldon actually is a success---he has a great job, has a circle of friends, a nice place to live, and many interests (maybe not shared interests, but interests all the same) He has the ability to reflect on his behavior and attempt to make changes. I hope the same for the kids at my school. The episode here was real for me. I have piles of speech therapy materials which all attempt to teach friendship skills. It's often elusive and difficult to concretely explain. Friendship and conversational skills all require on-the-spot flexibility that this character does not possess. Sheldon and others like him will struggle, but he's on the right track! At least he's aware he needs to learn. When children have autism, it's usually impossible to make them huddle---they simply don't understand or sensory issues prevent them from doing this for 30 to 45 minutes. The next best thing then is to keep them sitting and happy in a safe room---again not always easy. This is where the iPad really came in handy today. When the tornado alert sounded, I bounded to the EC class, iPad in hand, to help the teachers and to keep the kids entertained in a safe way. We actually had a few iPads for about 10 kids. 1. Kids took turn in the safe room playing simple iPad games. 2. YouTube videos on the iPad were played to keep them entertained. 3. With the iPad 2, we took a few silly pictures during our confinement, and shared them. Most of the photos were of the kids (they loved them), but I caught a few adults ;) One of our lovely staff members! She's a gem! 4. With the iPad, the teachers could monitor the weather conditions on the internet and anticipate how long the kids needed to be confined to the small space, since there were no regular announcements coming in otherwise. The iPad alleviated anxiety all around, in many ways, during a stressful situation. To conclude here, most of the children that I work with survived the day unscathed. I'm sure some of them wondered why they sat on the floor for long periods of time in a different room with a lot of adults and other kids, but the teachers worked very hard to keep them happy and calm. The iPad helped a little, but I have to give credit where credit is due. Yeah, Ephesus EC teachers and assistants!!!!! Sunday, September 4, 2011 My speech therapy schedule has started full blast---it's like we never had a vacation. I really can't complain. Across the state, school based speech pathologists struggle with impossible caseload sizes---50 kids, 60 kids? I don't know how you even learn the children's names much less try to help them with their communication skills. 
In Chapel Hill, we have been truly blessed because there is a commitment on the part of the administration to keep workloads manageable across the district. That's one reason I am happy with my job! Book with icons is at the top of the picture. Another reason I like my job is the access to technology---my favorite gadget of course being the iPad. On Friday, in anticipation of fall (I'm desperately wanting cooler temps, and sweater weather!) we read an adapted book together in the EC classroom ("One Bright Fall Morning") and then made a fall leaves craft. I set up a Pictello story with craft pictures, and then adapted the activity somewhat to take into account difficulties with using scissors. screenshot from pictello These next four pictures are a few of the screenshots from the Pictello ipad app. It is simply sequential pictures of the craft, but also has text-to-speech features. If a child touches the picture, the direction is spoken aloud. In addition, the kids really like swiping the screen to get to the next picture. With an iPad 2, it's simple to take the pictures with the iPad and import them into the app when setting this all up. My hubby--I made the craft at home first and needed a model! Communication board; boardmaker icons for leaves So, how did the kids do? We adapted the lesson- precut the spiral, used Boardmaker icons for the leaves, and had a simple communication board handy for each child. They enjoyed the adapted book, and each child was able to match icons, numbers and pictures. At the end, we watched a YouTube video about the seasons. I would say this was successful---the teacher and assistants participated with the kids and this activity reinforced to the adults that the iPad, adapted books, and communication boards can be great tools for any lesson.
Resolvable (Apache FOP 1.1 API)

org.apache.fop.area
Interface Resolvable

All Known Implementing Classes: BookmarkData, DestinationData, LinkResolver, PageViewport, UnresolvedPageNumber

public interface Resolvable

Resolvable Interface. Classes that implement this interface contain idrefs (see Section 5.11 of the spec for the definition of the <idref> datatype) that are resolved when their target IDs are added to the area tree.

Method Summary

java.lang.String[] getIDRefs()
    Get the array of idrefs of this resolvable object.

boolean isResolved()
    Check if this area has been resolved.

void resolveIDRef(java.lang.String id, java.util.List<PageViewport> pages)
    This method allows the Resolvable object to resolve one of its unresolved idrefs with the actual set of PageViewports containing the target ID.

Method Detail

isResolved

boolean isResolved()
    Check if this area has been resolved.
    Returns: true once this area is resolved

getIDRefs

java.lang.String[] getIDRefs()
    Get the array of idrefs of this resolvable object. If this object contains child resolvables that are resolved through it, then it should return the idrefs of the children as well.
    Returns: the id references for resolving this object

resolveIDRef

void resolveIDRef(java.lang.String id, java.util.List<PageViewport> pages)
    This method allows the Resolvable object to resolve one of its unresolved idrefs with the actual set of PageViewports containing the target ID. The Resolvable object initially identifies to the AreaTreeHandler which idrefs it needs resolved. After the idrefs are resolved, the ATH calls this method to allow the Resolvable object to update itself with the PageViewport information.
    Parameters:
        id - an ID matching one of the Resolvable object's unresolved idrefs.
        pages - the list of PageViewports with the given ID

Copyright 1999-2012 The Apache Software Foundation. All Rights Reserved.
I have sold a property at 2 BENNETT PL. Completely renovated and a fabulous location is what you will find in this stunning 1679 sq foot home! The home is just steps to the Braeside ravine and walking distance to numerous schools and the waterpark!!! This home has been completely remodelled, including a fabulous new kitchen with stainless steel appliances, 3 new bathrooms, new windows, A/C, new wiring and much, much more!!! Upstairs you will find the master bedroom, which has a new three-piece ensuite and walk-in closet, two more bedrooms and a full four-piece bathroom. The main floor features two large living areas, another bathroom, main floor laundry and a fourth bedroom or office. The basement is fully finished and perfect for a growing family. There is a double attached garage with a newer door. The treed backyard has a new deck and loads of privacy for relaxing on those warm summer nights! This home has it all and shows like a showhome! Don't miss out on the opportunity to live in this great home in a fabulous location!
Q: Delete button destroys but not redirecting in rails

I have a delete button that deletes the project but does not redirect. I am deleting in the edit view so I am unsure if that is the issue. I did check and it is set to DELETE and not GET. This is a Ruby on Rails app that uses HAML.

Routes:

    projects     GET    /projects(.:format)          projects#index
                 POST   /projects(.:format)          projects#create
    new_project  GET    /projects/new(.:format)      projects#new
    edit_project GET    /projects/:id/edit(.:format) projects#edit
    project      PATCH  /projects/:id(.:format)      projects#update
                 PUT    /projects/:id(.:format)      projects#update
                 DELETE /projects/:id(.:format)      projects#destroy

Haml:

    %div.actions-group-delete
      .right
        - if can? :destroy, @project
          = link_to project_path(@project), method: :delete, remote: true, data: { confirm: 'Are you sure you want to permanently delete this project?' }, class: "btn btn--primary btn--auto btn--short btn--delete", title: "Delete project" do
            %i.icon.icon-trash

Projects Controller:

    def destroy
      @project_id = params[:id]
      project = Project.accessible_by(current_ability).find_by!(id: @project_id)
      authorize! :destroy, @project
      if @project.destroy.update_attributes(id: @project_id)
        flash[:success] = "The Project was successfully deleted."
        redirect_to projects_path
      else
        flash[:error] = "There was an error trying to delete the Project, please try again later."
        redirect_to edit_project_path(@project)
      end
    end

Project Model:

    class Project < ActiveRecord::Base
      belongs_to :user
      has_many :project_items, -> { order("code ASC, name ASC") }, dependent: :destroy
      has_many :project_workers, dependent: :destroy
      has_many :workforces, through: :project_workers
      has_many :worked_hours, through: :project_workers
      has_many :project_equipments, dependent: :destroy
      has_many :equipments, through: :project_equipments
      has_many :equipment_hours, through: :project_equipments
      has_many :collaborators, dependent: :destroy
      has_many :used_items, dependent: :destroy
      has_many :reports, dependent: :destroy
      # has_many :items_used, dependent: :destroy, through: :project_items, source: :used_items
      accepts_nested_attributes_for :project_items, allow_destroy: true
      accepts_nested_attributes_for :project_workers, allow_destroy: true
      accepts_nested_attributes_for :project_equipments, allow_destroy: true
      accepts_nested_attributes_for :collaborators

A: Your link_to is set up with remote: true. This means the link is submitted via an ajax call, so the redirect happens in the context of that call. You need to either remove remote: true or create a delete.js.erb view and return the path to redirect to from your delete action. In the view you can then set window.location to this new path.
import Vue from 'vue';
import App from './App.vue';
import './registerServiceWorker';

Vue.config.productionTip = false;

new Vue({
  render: (h) => h(App),
}).$mount('#app');
Crane Building

Crane Building may refer to the following buildings in the United States:

Crane Company Building (Chicago), Chicago, Illinois, listed on the National Register of Historic Places (NRHP)
Crane Building (Des Moines, Iowa), listed on the NRHP
Crane and Company Old Stone Mill Rag Room, Dalton, Massachusetts, listed on the NRHP
Crane Company Building (North Carolina), Charlotte, North Carolina, listed on the NRHP
Crane Building (Chattanooga, Tennessee), listed on the NRHP in Tennessee
Crane Building (Portland, Oregon)
Crane Co Building of Memphis, Memphis, Tennessee
Struggle With the Death Penalty When my high school government class held a series of debates against another class, I charged in at full steam. The teachers had agreed not to interfere, but they let me know in private that if I wanted to offer some “assistance” to other debate teams from my class, I could. I did just that, helping each team research concepts for their issue (social security reform, abortion, education spending, etc), showing them how to lay out key points in an argument, and sharing ideas for the short video each team was asked to produce to support their view. Our crowning achievement came in the midst of the death penalty debate. At first it went badly; we found that it is almost impossible to make a strong argument for the death penalty as a deterrent. We found that putting a person through the entire death row process was actually more expensive than giving a person life without parole. We found constitutional arguments impossible so long as our government allows some states to use the death penalty and some some to ban it. But then we hit upon an idea that became our crown jewel. By splicing together violent scenes and the closing arguments from the movie A Time to Kill, we created a tale of indescribable evil with vivid imagery and gut-wrenching descriptions. After the video played, the class listened silently as our debate team made a simple but powerful distinction. Some crime is just crime. But some crime is pure evil, and locking up the perpetrator is not enough. *** The recent executions of Troy Davis and Lawrence Russell Brewer are perfect examples of why the death penalty question is so difficult. One might argue that in the case of Davis, the death penalty was wielded as a sort of, “strongest penalty we have,” in a murder case that was full of questions and changes and doubts. Meanwhile, the case of Lawrence Russell Brewer’s hate crime murder was so heinous and disturbing that the penalty was given as a sort of, “least we can do,” in the face of enormous evil. I hope these two cases cause you to think carefully about your stance on the death penalty. I’ll talk about where I stand in a moment, but I confess I do not stand there strongly. This issue is a tough one for Christians because Scripture does not seem to give a clear and certain prescription for this issue in our current context. So first I want to give you a road map for settling your own mind with a series of questions that deserve investigation. First, a pragmatic question. Does the death penalty prevent crime? Given that we cannot change our system of appeals (a good thing, I think), does the death penalty as it works now have a significant reductive influence on crime in those areas where it is legal? Too often people answer this question with thought experiments or hypotheticals. What does the data seem to show? Second, what are some of the various reasons a governing entity might make use of capital punishment? Which of those reasons seem valid and which do not? For instance, let us say for a moment that the data shows the death penalty does not reduce crime. If, “crime prevention,” is the key reason a government uses the death penalty, suddenly that reason is invalid, is it not? At the same time, let us say that, “sending a message that unrepentant evil will not be allowed to survive,” is an important reason. This may still be valid. Understanding why you are doing something is a big key to evaluating whether it is right or wrong, foolish or wise. 
Third, what are some of the things God says about capital punishment? Try as I might, I cannot find anything to suggest he is inherently against it. Jewish law was full of capital punishment specifically commanded by God, and nowhere does he condemn it in the New Testament. And yet, it is also clear that God wants us to carry the gospel to all people, that he does not desire than any should perish, and that we are to love and turn the other cheek to our enemies, banishing evil hatred from our hearts. These qualities seem to compete directly with the emotions that are encouraged in capital punishment cases: vengeance, closure, peace through punishment, hope for pain and even hell as the lot of the criminal. Once you have worked through these questions, I encourage you to take what you understand to be the most God-honoring position possible. But be wise, this one is tricky and wiser people than you and I have disagreed on it for ages. I said I would share my perspective, so here it is. As a Christian, I do not want to see anyone die without hearing the gospel. And having heard it, I want them to hear it again and again until they submit to it. My hope is that men given life without parole will hear and respond to the gospel, and will then use their position within the prison system to reach out to other prisoners. The vast majority of the time, then, I believe incarceration is the healthiest and best method of punishment we have. But I also believe capital punishment is a tool given to earthly authorities to be wielded with wisdom. As Augustine said, “Since the agent of authority is but a sword in the hand [of God], it is in no way contrary to the commandment `Thou shalt not kill’ for the representative of the state’s authority to put criminals to death.” So I am in favor of the death penalty for cases in which the crime displays an extreme rejection of social compacts. In other words, the person’s crime is a loud and clear, “Screw you!” to society and all it stands for. The case of Lawrence Russell Brewer is one good example, in which he beat and dragged a man to death primarily out of hatred for the man’s race. Timothy McVeigh, the Unabomber, and the Nazis involved in the Holocaust might be others. I affirm the right of the state to take a clear stand against evil that rises to special prominence because it willfully rejects the very principles that allow us to live in peace. But I think those moments should be few and far between; in fact, they should be much more rare than they are currently. Your perspective may be different than mine, and that is fine. I cannot claim perfect Scriptural authority on this issue. But I beg you to wrestle with it. As I discovered in high school, capital punishment is an issue fraught with complexity, competing statistics, and high emotion. It is more complex than crime deterrence, and it is tied more closely to our human ethics than most other issues. How we understand the role of punishment and justice in our society says a lot about how we view other things as well. And how we as Christians approach issues like these with both love and justice says a lot about the God we serve. May we all display wisdom in our pursuit. Share This Like this: LikeLoading... — Ben Bartlett Ben Bartlett is a business consultant, living and working in Louisville, Kentucky. He and his wife Samantha have three terrific kids. He loves reading, theology, politics, analysis, Ultimate Frisbee, and hiking. He also loves serving as an elder and teacher in his local church. 
13 Comments In thinking through the death penalty “Biblically”, I am struck by how it seems that the foundation for capital punishment rests in man being made in the image of God. That is, in the case of murder, it seems that the death penalty is given because the murder is an assault on the image of God. If this is the reason for the death penalty for murder, then any other benefit would be secondary. By that, I mean if the death penalty serves as a deterrent then that would be good, but not the foundation for doing it. Capital punishment in the Bible seems to me to be reserved for those sins that are “high handed” against God. That is, in one way or another, God takes capital crimes as a direct assault against His law, which also reveals His character. I believe that capital punishment serves as a reminder that God is sovereign, and that He appoints the governments of the world to uphold His law. Now, this makes it difficult for me to argue (in the best sense) for capital punishment in the secular realm because any argument I make will be secondary reasons by nature, and therefore it weakens my resolve to make them. I wonder if that isn’t why you may be experiencing the same type of dilemma? What do you think? I pretty much agree with Brad. I take two biblical passages as fundamental in establishing capital punishment. First, Genesis 9:5-6 establishes that because man is made in God’s image, killing a human is such a heinous offense that it deserves death. Second, Romans 13 makes it clear that this principle hasn’t been abrogated under the new covenant, and establishes “secular” governmental authorities as the ones tasked with carrying out capital punishment where it is warranted. Civil rulers are given “the sword,” the power of execution, in order to act as God’s agents to bring his temporal wrath on murderers. So, in light of these two texts, I understand capital punishment to be justified on grounds of retribution – that is, the just punishment that an offense deserves – alone, without considering other grounds. If it does have a deterrent effect, well and good, but deterrence isn’t necessary for capital punishment to be just. Now, that’s what I believe as to the theory of capital punishment. But as far as its actual employment in the US, I’ve become deeply concerned about the propriety of our using the death penalty and I’m almost to the point of wanting a moratorium on executions for the forseeable future. The Troy Davis case is just the latest in a long, long line of suspect convictions that, upon further examination, prove to be suspect at best and obviously wrong at worst. The success of groups like The Innocence Project at overturning wrongful convictions (of mostly black males, mostly in the South) using DNA evidence indicates that our justice system is seriously flawed and needs serious reform. When capital punishment is in the picture, the stakes are too high for systemic problems like this to be ignored, and the present likelihood of wrongful executions may be too high for us to continue employing the death penalty at all. Of course, no justice system will ever be perfect; there will always be the possibility of wrongful convictions and executions. But I think that, given where we seem to be as a nation, we should seriously consider abolishing or suspending the death penalty unless and until the justice system is thoroughly reformed. I appreciate your comment, and I thank you for adding the Bible references that I was apparently too lazy to add. 
I’d like to ask you a question that bothers me. The Biblical structure for the death penalty required only the witness of two or more persons. At their testimony, the accused could be condemned. It seems to me that if we discontinue the practice of capital punishment because of our modern racial and class bias, then we would also undermine the integrity of the system that God introduced in the Scripture. In those days, God only required the sworn testimony of a couple of faithful witnesses; now we normally require much more than that, and yet we still believe the system is insufficient. How do we square these things? I think you both hit it right on the head. As I said in the article, I DO affirm the right of secular government to make use of capital punishment. I think God very clearly gives that power to the state. But I think many times we make the mistake of believing the rights of the state and the beliefs of the individual should be in perfect alignment, and I’m not convinced that’s the case. And I think it should be the desire of Christians that even the greatest sinner should have opportunity to hear and respond to the gospel in repentance. So I tend to believe these things: 1. The state has the right to enact capital punishment. 2. That right should be used not as retribution or to satisfy a desire for vengeance, but to make a statement about the intolerability of challenging our common social contracts. 3. It should only be used in situations of extreme clarity. 4. The individual Christian should desire that every opportunity be given to even the worst sinner to repent and be saved. I do NOT think the Christian should be glib or gleeful or overly affirming of the death penalty as an automatic response to crime (consider the case of Christ and the adulterous woman, for example). So Brad, I do agree with you generally, but keep in mind that the death penalty is given in the OT for more crimes than just murder. We live in a very different time and place now, and I think our beliefs about use of capital punishment have to interact with the fact that we have a secular government and a plethora of religious perspectives. That’s why I tend to say the death penalty should be used for extreme violations of social contracts on the basis of the government’s role of protecting the general welfare. Jeff, you raise a great point about the disturbing trends we see in use of capital punishment. That’s why, again, I tend to distinguish between unrepentant people going to war against the very fabric of our society vs. people committing crimes. There are times for the death penalty, yes… but nowhere near as much as we see it being used now, I think. Regarding your follow-up, Brad, I think there are a LOT of areas where we don’t maintain the system God set up for the monotheistic state of Israel in the OT. We simply don’t live in that time anymore, and I don’t think the system of law, especially the technicalities of what crime receives what punishment, exists for us anymore. After all, the OT never speaks of drugs… does that mean they should be legalized? The central realities for the Christian in regard to government and citizenship are these: First, that Christ calls us to a law that is higher than even the OT law. Second, we are to submit to the authorities. And third, that if there is a conflict between the two, we choose to submit to God before the authorities.
With capital punishment, the story of Christ and the adulterous woman clearly displays the fact that as citizens of the state Christians are not REQUIRED to push for enactment of OT law. Christ himself did not push for the death penalty in a situation in which he could have done just that! Instead I think we are called to proclaim the Kingdom of God with love and mercy and grace. That said, I also do not think we can argue that the state does not have the right to protect its definition of citizenship through use of capital punishment, because that right is so clearly given to the state in the OT. That’s how I get to this view… the individual Christian focuses on proclaiming the gospel. The state should rarely and wisely exercise its right to capital punishment. And we should pray for God’s help for those Christians thrust into the difficult situation of acting both as a Christian and as the executer of secular laws (i.e. governors asked to sign off on death penalties). I didn’t mean to imply that we should maintain systems set up for monotheistic state of Israel in the OT. (That’s a mouthful!) I only wanted to point out that God deemed it just to execute persons on the witness of two people. Now we have a multitude of witnesses like DNA, video surveillance and tracking numbers on weapons, ballistics data, and forensics labs. Yet, we still get very nervous with all of this data that we might execute the “wrong guy”. God tells us that it is sufficient to settle a matter upon the testimony of two or three witnesses. This is even in the NT, although not connected with the death penalty except in Heb. 10:28. (See Matt. 18:16; 2 Cor. 13:1; 1 Tim. 5:19). Is it just to establish guilt on the testimony of two or three witnesses? If it is, it seems to me that the current system is working fine because we go far beyond that requirement! Therein lies the problem: in this age we are counting more on technology to solve the problem than human eye witnesses. I’m not saying that this is right or wrong, I’m saying that there is something to be thought about there. That is, if we still convicted men and women of capital crimes based on the witness of two or three people, how would that fundamentally change society and capital punishment itself? I do not think that we have the luxury of saying that ruling on the eyewitness of two or three people is unjust, or that it is even insufficient. Why, do you think, is that not enough evidence to convict today? Why, despite the fact that we have far more checks on a case than the Bible requires, do we feel less certain of justice, perhaps, than we used to when less evidence was required for a guilty verdict? I think we differ somewhat on the grounds of the state’s right/responsibility to wield the death penalty, unless we mean different things by retribution. I do think retribution – the just punishment that the crime deserves – is an appropriate motive, and is sufficient in itself to justify capital punishment without reference to protecting a social contract. The juxtaposition of Genesis 9 and Romans 13 leads me to say that God has given government the task of protecting his image by protecting human life, and this entails punishing those who take life wrongly. It is true that government has a responsibility to promote the general welfare, but in my view that isn’t the only or even the primary motive for capital punishment. I largely agree with your perspective on how individual Christians should view situations where the death penalty is in view. 
We should certainly pray for the repentance and faith of everyone, including murderers on death row. I don’t think it’s inconsistent, though, for a Christian as a citizen of the state tasked with wielding the sword to endorse – humbly, sorrowfully, never gleefully – the execution of a just punishment. At least, I don’t think it’s any bigger of a conflict than those we are thrust into every day by our dual citizenships here and in heaven. Brad, Ben’s right on the whole two witnesses thing. That was a stipulation of OT law given to Israel for specific purposes at that place and point in time. I take the Genesis text to be a fundamental creation ordinance that predates Israel and sets up fundamental principles that endure as long as this world does. So, today, we get our justification for capital punishment from the OT, but not every detail about how it is employed. Brad, the Law also forbids bearing false witness against one’s neighbor. In requiring two witnesses, the Law also expected those witnesses to be faithful to all the terms of the covenant Israel made with God. And the testimony of two witnesses never would have been accepted in OT courts if the two were buddies known to have a grudge against the defendant, had been badgered and cajoled by police and prosecutors into giving testimony that was a lot more certain than they really felt, and the defendant had had a dodgy confession beaten out of him that didn’t match up with any of the facts of the case. In other words, the formal requirement of two witnesses wouldn’t have contravened obvious principles of justice. In the modern state, there’s no expectation of honesty or covenant faithfulness before God. Swearing to tell the whole truth and nothing but is a poor imitation. If we lived in a similar society as OT Israel and could be certain of the honesty and reliability of two witnesses, I might be OK with assigning the death penalty based on their testimony. But that’s an impossibility today. Perhaps your comment came as my second was coming in and you didn’t get my response. I believe it is irrelevant that the two or three witnesses clause was given for theocratic Israel. For one, the NT utilizes the same standard. Secondly, the question is not whether God ordained it, but whether it was just. Do you think it was just of God to have guilt established on the testimony of two or three persons? If it is just, then why do we shrink from that standard now? I think there is a great point to be made here. The reason we shrink from this standard is because we know that people will be mistaken from time to time, and even worse, we know that people will lie. Therefore, the likelihood of an innocent person being executed by mistake or bias is much greater under the OT system. In order to solve this problem, we have pushed the burden of “testimony” more and more away from people and onto forensic science. In doing so, however, we are still dealing with the problem of uncertainty. Why? In the OT days, if you lived next door to a man who was a notorious liar, he was a far greater danger to you then than he is now. Hence, the likelihood of his community putting up with him was far less than it is now. Why? Because a liar will get you killed. An irresponsible man can bring you to ruin. Now, we have made a system that we fall back on that may actually allow liars to proliferate because we are trusting the system and not testimonies to save us from them. Isn’t that odd? And what does this do to the idea of man as judge? 
Back in the day, “Joe Smith” had to take the stand, point at the accused and say, “I saw you do it, mister. And you are going to hang today because I have testified to it.” Scary place? Does this make our personal responsibility for our words go up or down? This article and discussion has made me think about a lot of things, and I’ve probably spammed up this comment section too much by putting my thoughts out there. But I’ll close with this: I think that we live in a world that would generally say that they believe men are basically good. But they don’t really believe that. They don’t believe it because they would flip out if we went back to a “two or three witnesses” system because they know better than to trust people like that. That’s an interesting discussion to get into with someone who thinks that people are basically good, isn’t it? Thanks for the distinction. By, “retribution,” I merely meant it wouldn’t be right for the state to execute simply because we’re angry about something. That’s not to say they cannot, just that it ought to be based in something higher than anger. One area this is especially evident is that of justice between races. I believe there are still serious systemic racism problems in this country, but there was a time when they were far worse. And in that time, a black man raping a white woman would incite far more anger at the governmental level than a white man raping a black woman. As a result, the “retribution” of the government would be far more harsh for the first rather than the second. My point is that the government should not be guided merely by emotions of the majority, but by a principle that is as just and consistent for all as possible. I can appreciate your perspective on my little social contract idea. I confess it is not rooted in any strong Scriptural perspective. It’s just my view that the death penalty has become a national sore point because it seems to be (is?) used disproportionately, and I’m looking for a standard that all citizens can agree on. There seems to me to be some sort of separation between, say, the DC sniper or the Oklahoma City bomber on one hand, and a bipolar man of questionable mental capacity killing his girlfriend in a fit of anger on the other. All of the above are horrific, to be sure, but to me the intentional destruction of the social contract separates them. I’d like to see SOME way of distinguishing rather than the approach we have in place today. But like I say, I don’t stand too strongly on that perspective, it’s more of an idea than anything else.
Significance of Velamentous Cord Insertion for Twin-Twin Transfusion Syndrome. The objective of this study was to evaluate the actual association between velamentous cord insertion (VCI) and twin-twin transfusion syndrome (TTTS) in a cohort reflecting the natural history of monochorionic twin pregnancies. All monochorionic diamniotic twin pregnancies that received prenatal care from <16 weeks of gestation until delivery at our center between 2004 and 2013 were included in this retrospective cohort study. The macroscopically defined cord insertion site was recorded as velamentous, marginal, or central. The effects of VCI on TTTS and on a composite of adverse outcomes, including abortion, death, and neurological morbidities at ≤28 days of age, were evaluated with a multiple logistic regression model. A total of 357 monochorionic diamniotic twin pregnancies were analyzed. VCI in both twins was noted in 2.5% of cases, and VCI in at least one twin was noted in 22.1% of cases. The incidence of TTTS was 8.4%; the incidence of the composite of adverse outcomes in at least one twin was 9.8%. There was no association between VCI and either TTTS or the composite of adverse outcomes. VCI in monochorionic twin pregnancies was not a risk factor for TTTS or severe perinatal morbidities.
Conventional methods for producing a fluorine-containing olefin involving a 1,1-dihydro-2-fluorovinyl group generally comprise the following steps: the hydroxyl group of a 1,1-dihydro-2,2-difluoro alcohol (referred to as a "fluoroalcohol" hereinafter) is substituted by a halogen atom; and then the resulting halide is dehalogenated with zinc.

A method for halogen substitution of a fluoroalcohol has been reported, e.g., in J. Am. Chem. Soc., vol. 75, p. 5978 (1953), in which the fluoroalcohol is tosylated and then reacted with a halide such as sodium iodide. The reaction scheme is indicated below.

##STR1##

In the above formulae, Rf represents a perfluoroalkyl group or a fluoroalkyl group (the same applies in the formulae referred to hereinafter). This method is convenient and useful in laboratory-scale production, but has numerous disadvantages, such as (i) expensive reagents (such as p-toluenesulfonic acid chloride and sodium iodide) are used, (ii) in the reaction of the above formula (2), the reaction must be conducted at a high temperature using a high-boiling-point organic solvent (such as diethylene glycol), and (iii) a large amount of waste solvents having a high boiling point must be disposed of. Therefore, this method is disadvantageous for production on an industrial scale.

Another method for halogen substitution of a fluoroalcohol has been reported, e.g., in U.S. Pat. No. 3,038,947, in which the fluoroalcohol is reacted with a thionyl halide in the presence of an amido compound as a catalyst. If the amido compound is not used, the fluoroalcohol cannot be reacted with the thionyl halide. The reaction scheme is indicated below.

##STR2##

This method is convenient and useful compared to the aforementioned method, but, in view of industrial production, has various disadvantages, such as the use of a toxic substance (such as thionyl chloride) and the generation of a large amount of an acidic gas.

The dehalogenation reaction of the thus obtained halide can be conducted using zinc. The reaction scheme is indicated below.

    RfCF2CH2I + Zn → RfCF=CH2 + ZnIF    (4)
    RfCF2CH2Cl + Zn → RfCF=CH2 + ZnClF    (5)

This reaction is relatively easy, but, in view of industrial production, has various disadvantages, such as (i) the reaction rate is extremely low when a chloride is used, (ii) an organic solvent such as methanol or dimethylformamide is generally used in the reaction, and the waste organic solvent must be disposed of, and (iii) waste zinc halide containing unreacted zinc must be disposed of. As described above, conventional methods for producing a fluorine-containing olefin have numerous disadvantages when they are practiced industrially.
Q: IQueryable.Count() or IEnumerable.Count() versus Single() throwing exception? We have a repository (Entity Framework) which queries for single records - only a single record should exist for any given query. Initially, our queries were SingleOrDefault(). Clearly, multiple results in the query will throw an exception, and none are wrapped by try/catch. Rather than wrapping these queries in a try/catch block, I proposed an extension method as follows:

    public static bool IsEmpty<T>(this IQueryable<T> Query, out int Count)
    {
        Count = Query.Count();
        return Count == 0;
    }

This has advantages in more ways than simply determining if I have an empty query return or single result return. The alternative is to wrap my query in a try/catch. My question is whether the extension method or the expense of catching an exception is preferred. So as not to be subjective, I am specifically referring to the cost of catching and throwing an exception versus the cost of the Count() method. Although the database is expected to only return a single record, my assumption is that the database will contain unexpected records. I don't perceive this to be an exceptional event, therefore I do not perceive the need for throwing an exception. The typical usage of the extension method is as follows:

    var query = Repository.All().Where(/* some criteria */);
    int count;
    if (query.IsEmpty(out count))
    {
        // handle empty return
    }
    else if (count > 1)
    {
        // handle unexpected returns
    }
    return query.Single();

Edit: An important note: we want to be informed of ambiguous results and how many records are returned.

A: Be careful not to execute the query multiple times by accident.

    var query = Repository.All()
                          .Where(/* some criteria */)
                          .Take(2) // magic here
                          .ToList();

    if (query.Count == 0)
    {
        // handle empty return
    }
    else if (query.Count > 1)
    {
        // handle unexpected returns
    }
    return query.Single();

Query the TOP 2 rows to handle all cases.
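To make the trade-off concrete, here is a minimal sketch of the single-round-trip idea the answer describes. TryGetSingle is an illustrative name, not part of the original repository code, and the sketch assumes a LINQ provider (such as Entity Framework) that translates Take(2) into a TOP/LIMIT query:

    using System.Collections.Generic;
    using System.Linq;

    public static class QueryableSingleExtensions
    {
        // Sketch only: distinguishes "none", "exactly one", and "more than one"
        // with a single database round trip by materializing at most two rows,
        // instead of issuing Count() and then Single() against the same IQueryable.
        public static bool TryGetSingle<T>(this IQueryable<T> query, out T value, out int found)
        {
            List<T> firstTwo = query.Take(2).ToList(); // at most two rows cross the wire
            found = firstTwo.Count;                    // 0, 1, or 2 (2 meaning "more than one")
            value = found == 1 ? firstTwo[0] : default(T);
            return found == 1;
        }
    }

At the call site this would replace the IsEmpty/Single pair from the question: Repository.All().Where(...).TryGetSingle(out match, out found) answers both "is it empty?" and "is it ambiguous?" in one query, with found distinguishing the empty case (0) from the ambiguous one (2) without a second trip to the database.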
Annual stocking of lake sturgeon in Lake Michigan scheduled Sep. 26, 2013 Written by the Wisconsin Department of Natural Resources MILWAUKEE — More than 1,100 lake sturgeon will be released into Lake Michigan by Department of Natural Resources fisheries crews who helped raise the fish and by members of the public, who are encouraged to take part in this annual celebration. The event takes place on Saturday, Sept. 28, with registration at 11 a.m. Prior to release, the sturgeon will be blessed by a member of the Oneida Nation. The stocking of the 6-to-8-inch fish, aimed at helping restore a self-sustaining population of lake sturgeon to Lake Michigan, will occur at 12:30 p.m. at the dock located at the north end of Lakeshore State Park. “This is an important step on the long road to restoring lake sturgeon to Lake Michigan,” said Brad Eggold, DNR Southern Lake Michigan fisheries team supervisor. "We hope these fish, and the ones stocked in previous years, will survive and thrive and ultimately help bring this magnificent fish back." The fish were raised in the Milwaukee River through the efforts of the Riveredge Nature Center to operate a streamside rearing facility. A streamside rearing facility is basically a mini-hatchery. Water is drawn from the Milwaukee River, pumped into sand filters and then into an 8-foot by 20-foot trailer. The trailer has four fish raceways capable of holding 1,200 lake sturgeon when full. In the past, stockings were from lake sturgeon raised at the Wild Rose Fish Hatchery. The primary benefit of using a streamside rearing facility for lake sturgeon is that they will be raised on a native water source throughout their entire early life. "This will maximize their ability to imprint to this water source and greatly improve the odds that, at maturity, the sturgeon will return to Lake Michigan to spawn, which is the ultimate goal,” Eggold said. "Without the cooperation of the Riveredge Nature Center staff and its volunteers, we wouldn’t have any fish to stock. They’ve provided the location for the trailer and more importantly, the support for the day-to-day operation of the facility,” he added. Lake sturgeon can grow to 200 pounds and live 100 years. Female sturgeon don’t start spawning until they are 25 to 30 years old, and males start at about age 15. Getting to adulthood will be a challenge for the sturgeon. DNR surveys reveal good habitat for young fish, overwintering and spawning areas, but the lake sturgeon must first survive these initial months, and then subsequent years of eluding predators and finding sufficient food. Recent gill net surveys completed by the DNR caught 10 juvenile lake sturgeon ranging from two to four years old. This indicates that fish raised at the facility and stocked in the Milwaukee River are surviving and using the harbor and nearshore areas as nursery grounds. These are the first sturgeon caught by DNR crews that were raised in the streamside trailer. The sturgeon stocking project is funded through a cooperative effort among agencies and public partners. Wisconsin DNR, the Great Lakes Fishery Trust and the U.S. Fish and Wildlife Service provide the majority of the funding. For more details, check out the Riveredge Nature Center website at http://riveredge.us. Click on the link for “Sturgeon Fest 2013.”
KRAVITCH, Circuit Judge, dissenting: I. The threshold question in this case is whether the State did, in fact, resentence Cave within the 90 day time frame specified by the habeas order so as to avoid the conditional mandate of a life sentence. In denying Cave's petition, the district court found that the state court "timely commenced the re-sentencing proceedings on October 22, 1992," setting a trial date of November 30, 1992, "[u]pon agreement of the parties." It is unclear whether the district court believed that the October 22 scheduling conference was in itself sufficient to comply with the terms of the habeas order or that Cave waived the right to enforce the conditional habeas order by agreeing to a trial date outside the 90 day time limit. On appeal, the parties dispute both when the resentencing time limit expired and when a "new sentencing hearing," within the meaning of the habeas order, was held. The majority bases its affirmance solely on the determination that the 90 day period was extended by agreement of the parties.1

[Footnote 1: Although the majority does not address the calculation of the 90 day time period, the State challenges the district court's finding that the period expired on October 25, 1992. I note in passing that the district court was correct. The district court's habeas order was issued on August 3, 1990. The 90 days were to be counted "from the date of this Order." On August 13, the State filed a timely motion to alter or amend the judgment, pursuant to Federal Rule of Civil Procedure 59, along with a motion to stay the habeas order pending appeal. On September 25, the district court denied the Rule 59 motion but granted the motion to stay pending appeal to this court, apparently stopping the 90 day clock after 53 days had elapsed. The opinion of this court was issued on September 17, 1992. With the 90 day clock again running, on October 22, the state court judge, Judge Walsh, conducted the status conference at which Cave's resentencing was scheduled for November 30. The 90 day period would have expired on October 25, as the district court found. (The district court's order states, "Thus, the State had until October 25, 1992 to comply with this Court's Order regarding Petitioner's re-sentencing.") Challenging this finding of fact, the State offers a novel recounting of days. It asserts that the filing of its Rule 59 motion on the tenth day after issuance of the order should have tolled the 90 day resentencing clock in the same way that the filing of a Rule 59 motion tolls the time allowed for filing an appeal, see Federal Rule of Appellate Procedure 4(a)(4). Accordingly, the State argues, the 90 day time limit would not have expired until some time in December, after Cave's counsel had requested a continuance on November 17. By requesting a continuance before the 90 day period had expired, the argument goes, Cave would have waived the right to enforce the resentencing time limit. (The State also contends that Federal Rule of Civil Procedure 62(a) would operate to toll the running of the 90 day period for ten days after entry of the district court's order. Even if so, however, the additional ten days would make no difference because Cave's counsel's request for a continuance still would have been made after the 90 days had expired.) The premise of the State's argument is dubious. Not only does the State fail to cite a case in support of the proposition that the filing of a petition for rehearing tolls the time period of a conditional habeas order, but it fails to cite binding precedent apparently to the contrary. See Tifford v. Wainwright, 588 F.2d 954, 957 (5th Cir. 1979) (90 day resentencing period specified in conditional habeas order not tolled by state's petition for rehearing). The State has no basis for concluding that the district court was clearly erroneous in finding that the 90 day resentencing time limit had expired on October 25. Consequently, Cave's counsel's request for a continuance on November 17 is irrelevant to the issue of the State's compliance with the habeas order.]

Inasmuch as the district court based its denial of habeas relief on the fact that the scheduling conference was held before the 90 day time limit expired, it ignored the clear language of the original habeas order:

    Respondent the State of Florida is directed to schedule a new sentencing proceeding at which Petitioner may present evidence to a jury on or before 90 days from the date of this Order. Upon failure of the Respondent to hold a new sentencing hearing within said 90 day period without an Order from this Court extending said time for good cause, the sentence of death imposed on the Petitioner will be vacated and the Petitioner sentenced to life imprisonment.

Conceivably, the first sentence, read by itself, could be thought ambiguous as between directing that the act of scheduling occur within 90 days and directing that a sentencing proceeding before a jury commence within 90 days. But the two sentences together leave little room for interpretation: if the State fails to hold a new sentencing hearing--at which Cave may present evidence to a jury--within the designated time period, then Cave is to be sentenced to life imprisonment. Merely scheduling such a hearing is not, on the terms of the habeas order, sufficient.2

[Footnote 2: The presiding state court judge at the scheduling conference described his task as "to set this case for trial within the mandated time period." R.72, Tr. of Oct. 22, 1992 Hr'g at 3. This would seem an odd remark had the scheduling conference itself been understood to discharge this responsibility.]

Apparently accepting that the scheduling conference itself was not sufficient to discharge the State's time-limited obligations under the habeas order, the majority construes what happened at that scheduling conference as an "agreement" to continue resentencing beyond the 90 day period. There are two serious problems with that approach. First, nowhere in the habeas order is there any provision for extensions of the 90 day resentencing time limit by agreement of the parties; to the contrary, the order expressly provides a different mechanism for extending the 90 day period: "an Order from this Court extending said time for good cause."3 The order was a direction from the district court to the State; Cave simply lacked the power unilaterally to forgive the State of its court-imposed obligation.4 Second, assuming that express agreement by Cave to postpone resentencing beyond the 90 day period would suffice to waive the time limit, the transcript of the October 22, 1992, scheduling conference reveals no such agreement.
Instead, it is evident from the transcript that everyone in attendance at the October 22 conference erroneously believed that the tentative date set for the resentencing hearing, November 30, 1992, was within the 90 day period.5 It is true that the attorney from the public defender's office who was present at the conference apparently 3 The State never availed itself of the habeas order's invitation to petition the district court for such a "good cause" extension of the 90 day resentencing period. 4 Insofar as the second district judge interpreted the order drafted by the first district judge to permit extension of the 90 day period by agreement, I doubt this misreading is, as the majority argues, entitled to this court's deference. Although we generally defer to a district judge's reasonable interpretation of his own order, the only rationale for doing so--that the district judge who drafted the order is in the best position to know what he meant to say--disappears when the judge doing the interpreting is not the same person as the judge who did the drafting. In any case, the interpretation imposed on the order by the second district judge was, in my opinion, unreasonable. 5 There is no evidence in the record to suggest that Cave's counsel knew that the 90 day period would expire at the end of October and was withholding this knowledge from the state court or that he was otherwise strategically delaying in the hope that the 90 day period would expire before Cave was resentenced. Cave's counsel was newly appointed and had not even spoken with Cave at the time of the scheduling conference. 4 concurred in the judge's doubt that the public defender's office would be ready for trial on November 30; but it is also true that this attorney did not consent to any date other than November 30 at the conference, let alone acknowledge that the 90 day limit might have to be extended or waived.6 Because, by all indications, everyone at the conference mistakenly believed that November 30, 1992, was within the 90 day period, there is no way that the lawyer representing Cave (who was not himself present) could have knowingly waived the 90 day limit or consented to an extension. Cf. Hamilton v. Watkins, 436 F.2d 1323, 1326 (5th Cir. 1970) ("The accepted classic definition of waiver is ... 'an intentional relinquishment or abandonment of a known right or privilege.'") (quoting Johnson v. Zerbst, 58 S. Ct. 1019, 1023 (1938) (emphasis added). The only question, then, is which party should bear the "cost" of this mutual mistake. I believe it should be the State. The habeas order was directed to the State, not Cave, and the State was in a better position to 6 The majority says that its "conclusion that there was such an agreement derives strong support from the fact that the parties at the October 22 status conference explicitly noted that the 90-day period could be extended by later agreement." I am not sure what the majority means by "explicitly noted," as no one at the scheduling conference actually said anything about what sort of procedure would suffice to extend the resentencing period. While the participants did contemplate putting off the resentencing proceedings until April, there is no way of telling from the transcript whether they believed that their agreement to do so would be sufficient to comply with the habeas order or whether instead the government would have to petition the district court for a "good cause" extension. 
In any case, the attorney from the public defender's office did not agree to any date that he did not believe (albeit mistakenly) was within the 90 day period. 5 ensure compliance by initiating resentencing within the mandated period or requesting a "good cause" extension. The majority argues that Cave's temporary counsel at the sentencing hearing forfeited Cave's "entitlement" to be resentenced within 90 days by analogy to defense counsel's forfeiture of a right by failing to object to its violation at trial. This line of reasoning iterates the error of viewing the habeas order as granting Cave a right or entitlement--which he could subsequently forfeit through his own negligence--instead of directing the State to do something--an obligation that would persist irrespective of the actions of Cave or his counsel. Worse, the majority assumes that the responsibility for ensuring resentencing within the 90 day period falls not on the State but, perversely, on Cave himself. Neither the State nor Cave "objected" at the scheduling hearing to the imminent failure of the judge to order resentencing within the specified period because neither was aware of the miscalculation of time. I do not understand the majority's view that Cave alone should be punished for a failure primarily, if not exclusively, attributable to the State. II. Given that the State failed to hold a rescheduling hearing within the 90 day period, the only question remaining is the enforceability of the district court's habeas order mandating imposition of a life sentence. Issuing such an order is, under 6 some circumstances, within the authority of a habeas court. Consequently, the district court was within its habeas jurisdiction in issuing the order, and the order is not unenforceable per se. Moreover, the further question of whether the conditional bar against resentencing was an appropriate exercise of the district court's discretion on the facts of this case is not properly before this court because the State failed to challenge the form of habeas relief granted by the district court in its previous Eleventh Circuit appeal. I would conclude, therefore, that the habeas order should be enforced as written, imposing on Cave a final sentence of life imprisonment. The federal habeas statute empowers federal courts to grant relief "as law and justice require," 28 U.S.C. § 2243, and expressly contemplates remedies other than release from custody, see 28 U.S.C. § 2244(b) ("release from custody or other remedy on an application for a writ of habeas corpus"). The Supreme Court consistently has emphasized that a federal court is vested "'with the largest power to control and direct the form of judgment to be entered in cases brought up before it on habeas corpus.'" Hilton v. Braunskill, 107 S. Ct. 2113, 2118 (1987) (quoting In re Bonner, 14 S. Ct. 323, 327 (1894)). Most commonly, courts granting habeas relief issue "conditional release" orders, which require the state to release the petitioner from custody or from an unconstitutional sentence unless the petitioner is retried or resentenced within some specified (or a "reasonable") period of time. Ordinarily, if the state fails to retry or resentence the 7 petitioner within the designated period of time, it may still rearrest and retry or resentence the successful habeas petitioner at a later time.7 See Moore v. Zant, 972 F.2d 318, 320 (11th Cir. 1992), cert. denied, 113 S. Ct. 1650 (1993). 
The question presented here, however, is whether a habeas court has the authority to issue a conditional order permanently forbidding reprosecution or resentencing if the state fails to act within a specified time period. (On the facts of this case, this question becomes whether a habeas court can forbid further state capital sentencing hearings once a death sentence has been held unconstitutional and the state has failed to comply with the procedural requirements of the resulting habeas order.) Three out of four circuits to have decided this issue have held that federal courts do have the authority to bar retrial of a habeas petitioner who has successfully challenged his or her conviction. See Capps v. Sullivan, 13 F.3d 350, 352 (10th Cir. 1993); Foster v. Lockhart, 9 F.3d 722, 727 (8th Cir. 1993) ("district court has authority to preclude a state from retrying a successful habeas petitioner when the court deems that remedy appropriate"); Burton v. Johnson, 975 F.2d 690, 693 (10th Cir. 1992), cert. denied, 113 S. Ct. 1879 (1993); Heiter v. Ryan, 951 F.2d 559, 564 (3d Cir. 1995). Only the Fifth Circuit has indicated that a habeas court lacks the power to permanently bar a state from retrying or resentencing a defendant. See Smith v. Lucas, 9 F.3d 359, 365-67 7 Of course, the defendant's Sixth Amendment speedy trial rights may be asserted against retrial in state court and, if that fails, in a subsequent federal habeas petition. 8 (5th Cir. 1993), cert. denied, 115 S. Ct. 98 (1994). But see Smith v. Lucas, 16 F.3d 638, 641 (5th Cir.) (on appeal from the district court's order on remand from the previous Fifth Circuit Smith decision, purporting only to "have some doubt as to whether a federal court has the authority to enter" a habeas order prohibiting the state from subsequently seeking a death sentence) (emphasis added), cert. denied, 115 S. Ct. 151 (1994). Although this circuit has not decided the issue, the most relevant Eleventh Circuit case seems to comport with the majority view that habeas courts have the power to bar retrial or resentencing. In Moore v. Zant, this court interpreted a conditional habeas order not to prohibit the state from subsequent capital resentencing. Explaining the effect of the typical conditional habeas order, the court stated that after a successful habeas petitioner is released from custody "the state may ordinarily still rearrest and reprosecute that person," and that the grant of the writ "does not usually adjudicate the constitutionality of future state acts directed at the petitioner." 972 F.2d at 320 (emphases added). Evidently, then, the court was of the opinion that habeas courts could, under certain circumstances, permanently bar reprosecution or resentencing. I would hold that it is within the broad habeas power of a federal court to issue an order permanently barring the state from retrying or resentencing the petitioner. Indeed, in some cases this may be the only effective form of habeas relief. For 9 example, if the basis for granting habeas relief is a violation of the petitioner's Fifth Amendment Double Jeopardy rights or insufficiency of the evidence, then barring a new trial would be the only way to prevent the state from iterating the constitutional violation. Similarly, a prisoner's Sixth Amendment speedy trial rights would be rendered meaningless if, even after a successful habeas petition asserting these rights, he or she could be tried or sentenced at the will of the state. 
Of course, to recognize that this extreme remedy is authorized is not to condone its routine use; habeas courts must exercise discretion. Other courts to have recognized the authority of habeas courts to impose permanent bars on retrial or resentencing sensibly have limited the circumstances in which this form of relief would be appropriate. See Capps, 13 F.3d at 352-53 (generally should be reserved for cases in which the "constitutional violation ... cannot be remedied by another trial, or other exceptional circumstances exist such that the holding of a new trial would be unjust"); Foster, 9 F.3d at 727 ("suitable only in certain situations, such as when a retrial itself would violate the petitioner's constitutional rights"). We need not now define the circumstances in which such relief would be warranted, however, because the claim that the district court abused its discretion by mandating the conditional imposition of a life sentence is not properly before this court. The State admits that it did not challenge the form of relief specified in the habeas appeal on its previous appeal to the Eleventh Circuit.8

[Footnote 8: The State challenged only the substantive (i.e., Strickland) basis for granting the writ.]

It is not necessary, therefore, for this court to determine whether the district court abused its discretion by mandating the conditional bar to retrial on the facts of this case; the form of relief granted became the law of this case when the State failed to challenge it on the initial appeal. This is precisely the situation confronted by the Tenth Circuit in both Capps and Burton. In each of those cases, the court held that the state had waived any challenge to the habeas remedy of permanent discharge. Capps, 13 F.3d at 353; Burton, 975 F.2d at 693-94. In fact, in Capps the court recognized that "because nothing in the record suggests the constitutional violation was not redressable in a new trial, the district court apparently abused its discretion [by issuing a writ barring retrial]." 13 F.3d at 353. Nevertheless, because the state did not challenge the remedy in its initial appeal of the grant of habeas to the Tenth Circuit, the court held that it was precluded from reviewing the form of habeas relief granted by the district court. Id. I would follow the approach of the Tenth Circuit, finding it dispositive that the district court was acting within the scope of its habeas authority.

III. The State in this case not only failed to resentence Cave in the time allotted but also failed to challenge the valid habeas remedy granted by the district court in the first Eleventh Circuit appeal. As a result, Cave should be sentenced to life imprisonment. I respectfully DISSENT.
Be a Freecycle Santa by Adam C. Engst [I wrote this article almost a year ago, but too late to do much good before the holiday season, so I’ve dusted it off for this year. It’s still entirely accurate and relevant, and I strongly encourage everyone to think about clearing out electronic clutter in this fashion, as I too will be doing once again. -Adam] Several years ago, I raved about how quick and satisfying it was to dispose of old and potentially dodgy electronics via the Freecycle Network, a loose affiliation of mailing-list based groups of people who exchange reusable goods for free (see “Freecycle: Disposing of Good Old Stuff,” 6 August 2007). Every so often since, I resubscribe to the Ithaca Freecycle list whenever I come across something that I’d far rather give away than throw away — a portable chair that didn’t fit either me or Tonya, an old tabletop that was taking up space in the garage, a houseplant that had outgrown our living room, and so on. I was recently bemoaning the fact that we had some elderly iPods and a PlayStation 2 that Tonya had gotten to play Dance Dance Revolution (but stopped using because she didn’t like the music), all of which were perfectly functional, but none of which had been touched in years. They weren’t worth the effort of selling, given the prices for comparable or better items I’d seen on craigslist. Then I had a brainstorm — many people on Freecycle would surely want these items, despite their age, and even better, given the time of year, I could require that they be used only as presents for kids who wouldn’t otherwise receive such a gift. Posting them on Freecycle was a huge success — I immediately received email from numerous people who were interested, and I set up pickups for the people who I felt had the most need and the kids who were most likely to appreciate the gifts. The PlayStation 2 went to the 7-year-old daughter of a single mother working two jobs while undergoing a divorce. The iPod photo went to the teenage daughters of another single mother working two jobs, and the third-generation iPod will be shared by the five children of a woman who couldn’t work because of a medical condition. Perhaps most gratifying was the iPod nano, which a teaching assistant at a local elementary school is giving to a third-grader whose family (a single mother of four kids who is working double shifts at a hotel) can’t make ends meet, to the extent where teachers at the school have been helping with necessities like food, clothing, and required dental care. When the teachers asked the third-grader what he liked to do outside of school, the kid said, “I know what you’re trying to do, but don’t worry about me and just get things for my little brother. I’ll be fine.” I hope he likes the iPod; the teaching assistant is also giving him an iTunes gift card and helping him set the iPod up on a school computer. The only hard part about giving these old electronics away has been hearing from all the people who are similarly deserving. I could have given away a dozen PlayStations and twice as many iPods if only I’d had them. But some of you do have them. So I’d like to encourage everyone out there with old iPods, digital cameras, game consoles, or other unused but functional electronics to don a virtual Santa hat and see if you can brighten some kid’s Christmas this year via Freecycle. The most difficult barrier to clear with Freecycle is simply getting started. Here’s what you have to do. 
For groups hosted on the Freecycle site, posts will appear, along with Sign Up/Log In and Search Posts buttons (once you’re logged in to the Freecycle site, that first button changes to Join This Group). For older groups that are still hosted on Yahoo Groups, there’s a small “Visit the group and see the posts” link, way at the bottom of the screen. For Freecycle-hosted groups, log in and click Join This Group. For Yahoo Groups-hosted groups, follow the link to Yahoo Groups, and click the Join This Group button. You’ll have to log in with your Yahoo ID. Once you’re a member of the group, you can post. The Freecycle site has a Web form for this, which I haven’t used, since the Ithaca group is hosted at Yahoo Groups, but I presume it’s basically the same as sending email to the list submission address. The Subject line of the post must start with the word OFFER and then list what you’re giving away. In the body of the message, be explicit about the item, the condition it’s in, and any other relevant details. I recommend including links to more information or pictures, if that’s easy (I often take a photo with my iPhone, put it in my ~/Dropbox/Public folder, Control-click to copy the public Dropbox link, and paste the link into the email message). At the end of the post, provide details about how you’ll choose from among the people who reply — this is where you should be explicit about wanting the item to be a gift for a deserving child and ask that people provide a little background to help you choose. Be sure to say roughly where you’re located (not your address, just your neighborhood) so people can evaluate how far away you are, and also ask that people tell you where they’re coming from and when they can meet you, so you can take schedule and unnecessary gas consumption into account in choosing a recipient. After you post your message, you’ll start receiving replies. Don’t respond to them at first; it’s better to wait a few hours to make sure you have a representative sample. Then you can pick the most deserving recipient, reply via direct mail to set up a pickup time and place (either your home or office, or a nearby public space), and meet with the recipient. If you post in the morning, you can often give the item away by the evening — it’s seldom a drawn-out process. Finally, once you’ve chosen someone, post another message to the list with the same Subject line, replacing the word OFFER with the word TAKEN. That’s sufficient for alerting the people you didn’t pick, although there’s certainly no harm in replying to them individually as well. The incredible response I got to this rather offhand idea was what prompted me to write this article (I even received a number of extremely kind messages from people who just wanted to thank me for helping kids in this way). I encourage you to follow suit while there’s time before Christmas, and honestly, even if you’re reading this article after the holidays, there’s nothing stopping you from giving away unused items and saying that you want some item to be a belated Christmas present for a child whose holiday was otherwise pretty bleak.
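For readers who want a concrete picture, here is a hypothetical OFFER post that follows these guidelines; the item details, neighborhood, and meeting times are invented for illustration:

    Subject: OFFER: iPod nano, working, with USB cable (Fall Creek)

    Offering an older iPod nano in good working order; the battery still holds a
    charge and a USB cable is included. Photo: [link]. I'd like this to go to a
    family that could use it as a holiday gift for a child who wouldn't otherwise
    get one, so please include a sentence or two of background in your reply.
    I'm in the Fall Creek neighborhood and can meet weekday evenings; let me know
    where you'd be coming from and when you could pick it up.

Once the item has a home, a short follow-up message with the Subject line changed from OFFER to TAKEN closes the loop for everyone else who replied.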
The longitudinal growth of the neuromeres and the resulting brain in the human embryo. The growth of the human brain during the embryonic period was assessed in terms of longitudinal measurements in staged embryos. Precise graphic reconstructions prepared by the onerous point-plotting method were considered to be the most reliable, and 23 were examined in detail. A distinction is necessary between measurements of the brain (cerebral diameters) and those of the skull (osseous diameters), and also between those of the folded brain in situ, studied here, and the later relatively straightened brain. Longitudinal measurements were made of individual neuromeres and their successors in steps (neuromeric lengths). The sum of the neuromeric measurements at any given stage provides the total neuromeric length (TNL) of the folded brain in situ at that stage and it increases in keeping with the greatest length (GL) of the embryo. At stages 16-19, however, the neuromeric length of the brain may exceed the GL. From stage 20 onwards the body length increases more rapidly compared with the length of the brain. The most cephalic neuromere is the telencephalon medium, abbreviated T1 here. The cerebral hemispheres are derived from it, although they are not neuromeres. The hemispheres soon extend rostrally beyond the limit of T1 by an amount that is here designated T2, and that indicates the growth of the telencephalon rostral to the commissural plate, which is the site of the future corpus callosum. Further laterally, the hemispheric length (future fronto-occipital diameter) increases rapidly, as does also the bitemporal (biparietal) diameter. At the end of the embryonic period these diameters are one fourth to one fifth of the head circumference. Additional neuromeric information becomes manifest when the measurements are calculated as percentages of the total length of the brain. The rhombencephalon decreases considerably, diencephalon 2 increases greatly, whereas diencephalon 1 diminishes, and the cerebral hemispheres enlarge massively. In addition, specific neuromeres or subdivisions come to occupy relatively more or relatively less of the total. Three periods were found during which individual neuromeres acquire their maximal or minimal lengths: the maximal absolute lengths were in period 3, whereas the maximal and minimal percentage lengths were in periods 1 and 3. The various neuromeric changes are considered to be related to alterations in functional development. Finally, in furtherance of establishing continuity in prenatal data, comparisons were effected between embryonic and fetal measurements.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rs:  <http://www.w3.org/2001/sw/DataAccess/tests/result-set#> .

[] rdf:type rs:ResultSet ;
   rs:resultVariable "given", "family" ;
   rs:solution [
      rs:binding [ rs:value "Bob" ; rs:variable "given" ] ;
      rs:binding [ rs:value "Smith" ; rs:variable "family" ]
   ] .