NEW YORK (Reuters) - The U.S. dollar edged up on Thursday, reversing its recent decline, after U.S. President Donald Trump backed a strong dollar, and the S&P 500 gave back gains on the comments before eking out a record-high close.

The U.S. dollar .DXY was up 0.17 percent against a basket of major currencies after Trump told CNBC in an interview in Davos, Switzerland, that he wants to see a strong dollar. The comments came a day after U.S. Treasury Secretary Steven Mnuchin, also in Davos, made a major departure from traditional U.S. currency policy, saying “obviously, a weaker dollar is good for us as it relates to trade and opportunities.” Mnuchin’s comments drove further losses in the dollar on Wednesday, handing the currency its biggest daily percentage drop in seven months.

“They want to walk back yesterday’s comments. They were salt in the wound,” said Kathy Lien, managing director at BK Asset Management in New York. But other factors have been driving the extended decline in the dollar, she said. “I think we are due for a rebound. What we are seeing now is some profit-taking.”

The Dow and S&P 500 closed at their highest levels ever, although they relinquished bigger gains after Trump’s comments. The Dow Jones Industrial Average .DJI rose 140.67 points, or 0.54 percent, to end at 26,392.79; the S&P 500 .SPX gained 1.71 points, or 0.06 percent, to 2,839.25; and the Nasdaq Composite .IXIC dropped 3.90 points, or 0.05 percent, to 7,411.16. The pan-European FTSEurofirst 300 index .FTEU3 lost 0.60 percent and MSCI's gauge of stocks across the globe .MIWD00000PUS gained 0.13 percent.

The euro EUR= was down 0.09 percent at $1.2395. Earlier, the euro rose to its highest in three years after the European Central Bank showed little concern about the single currency's hottest run in nearly four years. The ECB kept its ultra-easy monetary policy unchanged.
ECB President Mario Draghi cited the region’s “solid and broad” growth and said inflation was likely to rise in the medium term.

U.S. Treasury debt prices rose, boosted by solid demand for 7-year notes as well as Trump’s remarks on the dollar. U.S. benchmark 10-year notes US10YT=RR last rose 9/32 in price to yield 2.6207 percent, from 2.654 percent late Wednesday.

Oil retreated as the U.S. dollar rebounded from early losses and strengthened, denting support for the latest crude rally. Brent crude LCOc1, the international oil benchmark, settled down 11 cents at $70.42 a barrel. U.S. crude CLc1 futures for March delivery fell 10 cents to settle at $65.51.
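The point and percentage moves quoted for the three indexes are internally consistent. As a quick sanity check (my own, not part of the article), the implied percent change can be recomputed from each index's closing level and point change:

```python
def pct_change(close, points):
    """Percent move implied by a closing level and its point change."""
    prior = close - points          # back out the previous session's close
    return 100 * points / prior

# Index moves reported above: (closing level, point change)
moves = {
    "Dow":     (26392.79, 140.67),
    "S&P 500": (2839.25, 1.71),
    "Nasdaq":  (7411.16, -3.90),
}
for name, (close, pts) in moves.items():
    print(f"{name}: {pct_change(close, pts):+.2f}%")
# Dow: +0.54%, S&P 500: +0.06%, Nasdaq: -0.05% — matching the figures quoted.
```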
Natural Impact Initiative (NII) - Charles D. Owen Middle School

Charles D. Owen Middle is a school actively connecting the roots of our past to a brighter future. Our goal is to maximize our natural and digital resources through community partnerships, creating a school landscape that promotes environmental stewardship, citizen science, and exploratory learning.
[Assessment of treatment with imiquimod in persistent human papillomavirus infection using the polymerase chain reaction method]. Molecular studies have shown that oncogenic genotypes of human papillomavirus (HPV) are the main risk factor for the development of cervical cancer. Subclinical infection causes no symptoms and is diagnosed by colposcopy or histology; latent infection is associated with the presence of HPV DNA, but when no clinical or histological abnormalities are present, only molecular techniques can detect it. The aim was to determine whether complementary treatment with imiquimod, a recent drug with potent antiviral activity both in vitro and in vivo, reduces cervical persistence of HPV. The study included 87 patients with a history of cervical HPV infection and low-grade intraepithelial lesions. Patients were divided into groups treated with cryotherapy, cervical loop electrosurgical excision, or imiquimod, all diagnosed by cervical cytology, colposcopy and polymerase chain reaction (PCR) for HPV. At 3, 6 and 12 months after treatment, PCR, cervical cytology and colposcopy controls were repeated. Of the 87 patients studied, 11% (10) were positive for HPV by cervical cytology, 8% (7) by colposcopy, and 40% (34) by PCR. Persistence decreased to 29% (5 patients) with the combined loop-plus-imiquimod method; however, with imiquimod alone, persistence as determined by PCR remained in 55% (11 patients). Imiquimod thus appears to be beneficial in 45% of patients, in contrast with the efficacy of up to 85% reported for genital and anal warts; in addition, its capacity to eliminate the virus has been shown, so its full effect may only become observable over the long term.

It is evident that viral detection rates are better with the PCR method than with indirect methods such as cervical cytology and colposcopy, which is favorable when the virus serotypes are highly transforming and ablative methods must remain conservative for fertility reasons.
643 F.2d 1069 B. B. on Behalf of A. L. B., Plaintiff-Appellant, v. Richard S. SCHWEIKER, Secretary of Health and Human Services, Defendant-Appellee. No. 79-3539. United States Court of Appeals, Fifth Circuit. Unit B April 27, 1981. Groover & Childs, Denmark Groover, Jr., Macon, Ga., for plaintiff-appellant. S. Elizabeth Conlin, Asst. U. S. Atty., Macon, Ga., for defendant-appellee. Appeal from the United States District Court for the Middle District of Georgia. Before GODBOLD, Chief Judge, and HATCHETT, Circuit Judge, and MARKEY*, Chief Judge. GODBOLD, Chief Judge: 1 Mrs. B appeals from the judgment of the district court affirming the Secretary's denial of surviving child's insurance benefits, 42 U.S.C. § 402(d)(1), to her child, A. B-B v. Califano, 476 F.Supp. 970 (M.D.Ga.1979). 2 Mrs. B's deceased husband, Mr. B, is the wage earner through whom A claims benefits. Although A was born during Mrs. B's marriage to Mr. B, it is undisputed that A is not his child. A was conceived while Mr. B was in military service overseas. Mrs. B concedes that the child is illegitimate. The issue before us is whether A is a stepchild within the meaning of the Social Security Act. We hold that the child is not a stepchild and therefore not entitled to benefits. 3 Mr. and Mrs. B continued to live together as husband and wife after A's birth. Mrs. B testified that her husband raised and supported A and presented A publicly as his child. In divorce papers filed several years later, Mr. B acknowledged A as his child and agreed to support A. (The divorce was abandoned because Mr. and Mrs. B were reconciled.) 4 A may be entitled to benefits if she is the deceased wage earner's child or stepchild, 42 U.S.C. §§ 402(d)(1), 416(e). The district court upheld the ruling of the administrative law judge and the agency's Appeals Council that A is neither the wage earner's child nor his stepchild. B-B v. Califano, 476 F.Supp. 970 (M.D.Ga.1979).
The only question presented to us is whether A is the wage earner's stepchild within the meaning of the statute.1 5 42 U.S.C. § 402(d)(1) provides that every qualifying child of an insured individual shall be entitled to child's insurance benefits. 42 U.S.C. § 416(e) provides that "child" means "the child or legally adopted child of an individual ... (and) a stepchild." 6 An initial question concerns choice of law for a definition of "stepchild." 42 U.S.C. § 416(h)(2)(A) directs us to define "child" with reference to state law governing the devolution of intestate personal property.2 Because stepchildren in Georgia, Mr. B's domicile, do not share in intestate distribution, see Ga.Code.Ann. § 113-903, Georgia intestacy law says little about who is a stepchild, and we have found no Georgia cases considering the problem posed here, and few cases from other jurisdictions. 7 We find nothing in the statutory history of the Social Security Act that illuminates Congress's understanding of the term "stepchild." The applicable regulations offer little more guidance. 20 C.F.R. § 1109(b), in effect when Mrs. B filed A's claim, provided: 8 the term "stepchild" means a claimant who ... is the stepchild of the individual upon whose wages and self-employment income his application is based by reason of a valid marriage of his parent ... or adopting parent with such individual. 9 Both the ALJ and the district court concluded that § 1109 implicitly contemplates that a child may become a stepchild only when its parent marries the purported stepparent after its birth, thus excluding the children of adulterous relationships.3 The district court recognized that the statute and regulation did not forthrightly address the present situation but felt constrained to give "stepchild" its normal meaning. The district court noted several definitions in legal dictionaries defining "stepchild" as one's spouse's child by a prior marriage. 476 F.Supp. at 974 n. 3. 
10 We recognize that legal dictionaries offer general definitions, not universal ones. Cf. Lunceford v. Fegles Construction Co., 185 Minn. 31, 239 N.W. 673 (1931). For instance, the dictionary definitions comprehend only the children of former marriages, whereas many jurisdictions recognize that an illegitimate child may be a stepchild to a person the parent subsequently marries. E. g., U. S. Fire Insurance Co. v. City of Atlanta, 135 Ga.App. 390, 217 S.E.2d 647 (1975) (worker's compensation statute); Lipham v. State, 125 Ga. 52, 53 S.E. 817 (1906) (incest); Pigford Brothers Construction Co. v. Evans, 225 Miss. 411, 83 So.2d 622 (1955) (worker's compensation); Nation v. Esperdy, 239 F.Supp. 531 (S.D.N.Y.1965) (immigration statute provides that one may be a stepchild whether or not born out of wedlock). Some jurisdictions accord "stepchild" a broader meaning in determining entitlement to benefits than the term is given in general parlance, including even adulterine children. Compare McClure v. Hackney, 491 S.W.2d 177 (Tex.Civ.App.1973) (wife is "stepmother" of adulterous husband's illegitimate child within the meaning of state Aid to Families with Dependent Children statute) and Hernandez v. Supreme Forest Woodmen Circle, 80 S.W.2d 346 (Tex.Civ.App.1935) (statute authorizing fraternal insurance corporation to pay death benefits to "stepchildren" permits policy to name as beneficiary the illegitimate child of the insured's adulterous husband) with Smith v. National Tank Co., 350 P.2d 539 (Wyo.1960) (illegitimate child of adulterous wife is not husband's "stepchild" within worker's compensation scheme where the husband did not take the child into his household). See also Matter of Stultz, Interim Dec. 2401 (AG June 30, 1975) (wife can be stepparent of adulterous husband's illegitimate child for immigration purposes where the three lived as a close family unit). 11 We agree, putting dictionaries aside, that A is not Mr. 
B's stepchild as that term is commonly understood. Moreover, a 1966 administrative ruling states that the child of an adulterous relationship is not the stepchild of the parent's spouse, even where the purported stepparent accepts and supports the child. Social Security Ruling 66-11. While the agency's ruling does not bind this court, we accord it great respect and deference where the statute is not clear and the legislative history offers no guidance. See e. g., Seagraves v. Harris, 629 F.2d 385, 390-91 (5th Cir. 1980). Arguments for granting benefits to children in A's position must be addressed to Congress. 12 AFFIRMED. * Chief Judge of the United States Court of Customs and Patent Appeals, sitting by designation 1 Appellant does not suggest that excluding the illegitimate child of an adulterous relationship from the definition of "stepchild" may pose a constitutional equal protection problem. Mrs. B does not appeal the district court's decision that there was no equitable adoption 2 42 U.S.C. § 416(h)(2)(A) provides: In determining whether an applicant is the child or parent of a fully or currently insured individual for purposes of this subchapter, the Secretary shall apply such law as would be applied in determining the devolution of intestate personal property by the courts of the State in which such insured individual is domiciled at the time such applicant files application, or, if such insured individual is dead, by the courts of the State in which he was domiciled at the time of his death, or, if such insured individual is or was not so domiciled in any State, by the courts of the District of Columbia. Applicants who according to such law would have the same status relative to taking intestate personal property as a child or parent shall be deemed such. 3 This regulation has been superseded by 20 C.F.R. § 404.357, which became effective June 15, 1979. 
The new regulation provides: "You may be eligible for benefits as the insured's stepchild if, after your birth, your natural or adopting parent married the insured." The new regulation is thus clearer than the old in requiring the stepparent's marriage to the parent to follow the birth of the stepchild.
I make no effort to hide the fact that I think NieR was one of the most critically under-appreciated games out there. When it was released in 2010, it was met with a lukewarm reception, receiving an aggregate rating of 68%. And I get it. As a game reviewer, the reality is, you have deadlines to meet and embargoes to make. Having to sink 25+ hours into a game you have to finish in less than 22 is a challenging feat at best. Which is an unfortunate situation for a game like NieR; a game whose story is heavily dependent on playing until completion, 60+ hours later. For us fans, a score like that usually means a nail in the coffin for what could be a fantastic franchise. So when NieR: Automata was announced at E3 2015 (going by the working title of NieR New Project), it was a huge surprise. Yet, at the same time, it wasn’t. You can’t develop a game of that magnitude without a team that is passionate about the world, the story and the characters they’ve created. Which, despite the fact that, as Producer Yosuke Saito puts it, NieR was “not considered a great success”, was reason enough to shake off the negative feedback and give the world another chance through NieR: Automata. I had a chance to play through a preview level with Co-Producers Yosuke Saito and Junichi Ehara, and have a chat about this new title and how things would be different this time around. Yosuke-san and Junichi-san started our conversation off by immediately addressing the feedback they received and how it influenced the development of NieR: Automata. “In the previous title we did receive high praise for the storyline, the music and the characters. Especially for the previous title, you could experience the game in full, with its multiple endings, if you played multiple times. We did receive a lot of feedback saying that it was a really touching game. They [players] could really cry when playing the game.
However, we did also receive feedback saying that the action part of the game wasn’t as great, and so we did think that it is somewhere that we need to improve on. We did receive high praise for the storyline, but those that did not play through the entire game until the fourth ending did not think very highly of it. That is how we ended up with the Metacritic score that we do have today.” It was this feedback that prompted the team to join forces with PlatinumGames, a development team with a reputation for engaging action games such as Bayonetta and Metal Gear Rising: Revengeance. Platinum took the lead in developing NieR: Automata’s gameplay to create a battle system that is quick to pick up and keeps you on your toes. It combines sequences of action RPG and bullet hell with a variety of weapons and skills to fit a multitude of gameplay styles. Junichi-san handed me a controller and I watched as NieR: Automata opened on a desolate, post-apocalyptic urban world, reminiscent of the first game. Platinum’s reputation for creating beautiful environments is well earned, and Automata is no exception. Yosuke-san proudly explains, “They have created beautiful environments for us, which are connected in an open world. We were able to create a game that moves at 60fps even during all of the action sequences.” With the sequence at an end, I dove in, head first, ready to battle a series of androids who were very much in favour of destroying me. As seen in many trailers for NieR: Automata, you take control of humanoid android YoRHa No. 2 Model B, or “2B” for short. Even at such an early development stage, the mechanics felt quick and fluid, with a devastating variety of combos to eliminate all who stand in your way. According to Junichi-san, combos are tied to the variety of weapons you collect throughout the game. These combos vary depending on the types of weapons you have equipped, and can be coupled with defensive maneuvers.
Aside from a devastating array of weapons, 2B is accompanied by a flying companion robot that attacks alongside you, with skills that can be tailored and upgraded as you progress. As in the first game, depending on the areas you enter, the camera pulls back to give players a full view of the beautifully rendered landscapes. As enemy encounters intensified, so did the music, layering operatic vocals in a polyphonic experience as powerful as that of the first NieR. The story of NieR: Automata takes place several thousand years after the first game. All life on Earth was forced to flee to the Moon after an alien invasion attacked with an insurmountable army of machine life forms. 2B and the other YoRHa units were created to counteract the invasion and take back the planet. There is no need, however, to play the first NieR to get the full effect, Yosuke-san assures me. He explains what influenced this decision: “I do not consider the previous title to be a huge success, but at the same time, there are a lot of people who really loved that title. Because it was a title that was loved by so many people, I wanted to expand that to more people, more players. If we made this new title a direct sequel where you have to play the previous title, then that opportunity just gets tiny; it just doesn’t reach as many people as we want it to reach out to. That’s why we decided to make the main story of the game something that you don’t have to play the previous title to enjoy.” According to Yosuke-san and Junichi-san, feedback was a big factor influencing many decisions in the game’s development. “Was it difficult to see some of the user feedback you received after NieR’s initial release?” I asked. “So, it wasn’t too difficult to accept, just because I had already braced myself for it,” Yosuke-san explained.
“What I felt bad about was that there are so many people who wanted to really see the end of the story; they wanted to really experience the entirety of the game, but they couldn’t, because the action sequences were too difficult to clear. They couldn’t move forward past a certain stage. So, when I was creating this game [NieR: Automata], I had in mind that this could never happen again, so I made sure that people would be able to clear the game.” “In prior interviews, you mention that, theoretically, players can clear it in 25 hours. Was that an influencing factor as well?” I asked. “Yeah, most definitely, that is one of the reasons why we made it so that you’ll be able to clear it in 25 hours. And because of that, reaching the complete ending is not as difficult, either. We do have multiple endings in [NieR: Automata], but what you need to do in order to reach that complete ending will not be as difficult as in the previous game,” Yosuke-san explains. Junichi-san also mentioned while I played that players would have the option to choose a difficulty level at the start, which would set different stats and parameters for enemies, so as not to alienate players looking for more of a challenge. This may sound like the team is catering only to review scores, but after playing through the preview, I can honestly say that if they can keep up the trajectory I was shown, nothing could be further from the truth. There was enough of a return to the things that made NieR so special in the first place that it feels like the team felt so deeply about the material that they took NieR: Automata as an excuse to improve every aspect of the game. A lot of time was spent developing the story itself and the interaction between the main characters, with efforts concentrated on rich themes, in tune with the style of the first game. “The characters that appear in this game are androids – they’re mechanical life forms,” Yosuke-san describes.
“At first glance, when you hear that, you would imagine characters or beings with no emotion. But…there’s going to be a lot of interaction between the characters, like 2B and 9S…they do have some kind of dialogue between them. We see that there’s some kind of emotion, and so the image that you have of androids and mechanical life forms may change as you play the game. You would notice that they do have some kind of emotion.” “So, I can’t really dive into too much about it, because that would reveal the storyline, or be a kind of spoiler,” he continues. “But while there is that theme of agaku, I also think that there’s also a theme of love in this game, which you would normally not associate with robots. You will see that there is a certain type of love between the androids and the other robots themselves as well.” Fans of NieR will also get to see a few familiar faces in Devola and Popola, who, as Yosuke-san hints, “will appear in this game as well, in some format. There will be that kind of a connection that you might be able to look for – like an Easter egg in the game.” He goes on to say that NieR: Automata will be an opportunity for the fan favorites to “try to accomplish what they couldn’t do, just to get rid of that regret that they had in the previous game.” With a release date set for March 7, 2017, fans and newcomers alike will have to wait a little bit longer to see the fruits of the team’s efforts. NieR: Automata will be available for PC through Steam and on PS4, with plans to optimise the game for PS4 Pro. In the meantime, Yosuke-san informed me of plans for a consumer demo: “We are hoping to release the demo as soon as possible, so we can bring this [NieR: Automata] to everyone. That will also include the boss battles. Please try playing that when it comes out too.”
Frans Mandos

Franciscus Hubertus Wilhelmus Mandos Toonzn (4 April 1910 - May 1977, The Netherlands)

Frans Mandos studied Arts in Tilburg. Until 1944 he often worked together with his brother Kees from their atelier in their parents' house in Tilburg. He illustrated, and sometimes wrote, about 11 stories for Uitgeverij Helmond in the mid-1930s. Among these illustrated stories, which can also be seen as comics, were 'Het Raadsel van den Knotwilg', 'De Speeldoos van Langelot', 'Aapje', 'Joop en Toop, de Tweelingen', 'De Toverstaf van Fatsrevot', 'De Firma Bultje en Aapje', 'Pepranoet en zijn Helper', 'Roosmarijntje en het Toverboek', 'Hoe Schors Leerde Toveren' and 'De Bewoners van het Vlashuisje'. Mandos was also an illustrator for Brabantia Nostra magazine, and a well-known glass painter, mural artist and fine artist.
Terms & Conditions

This booking system and any information appearing on this page relating to the availability of any accommodation is provided by third parties and not by VisitScotland. It is intended to provide real time availability information relating to accommodation which is also provided by third parties. You may use this booking system to place direct bookings with third party accommodation providers. Any booking you make will not be placed with VisitScotland and we will have no liability to you in respect of any booking. If you proceed to make a booking you will leave our Website and visit a website owned and operated by a third party. VisitScotland does not have any control over the content or availability of any external website. This booking system and any information appearing on this page is provided for your information and convenience only and is not intended to be an endorsement by VisitScotland of the content of such linked websites, the quality of any accommodation listed, or of the services of any third party.

No 6b Cathedral Street

Enchanting and elegant ground floor apartment for two in the heart of Dunkeld's delightful medieval centre, surrounded by the Cathedral, the River Tay and the National Trust for Scotland's 'Little Houses'. No 6B Cathedral Street lies within one of the most photographed streets in Scotland. It is living history, with antiques and generous creature comforts. A stroll away are 7 places to eat; tempting shops; music ranging from the gentle fiddle playing of a neighbour, the internationally renowned folk singer Dougie Maclean, to guest classical musicians within the Cathedral; art galleries; and a year-round Beatrix Potter Exhibition. Above the front door is the last local marriage lintel, of 1737. Windows also look onto Water Wynd, used by medieval monks to haul up their boats. The apartment is owned by author Ann Lindsay, whose 19th-century family photos and book jacket covers decorate the walls.
No 6B Cathedral Street in Dunkeld has seen 1000 years of history pass by. St. Columba's bones rested at Dunkeld, his followers having established a monastery; the effigy of a legendary warrior, the dastardly Wolf of Badenoch, is within the Cathedral; Robbie Burns visited the famous fiddler Neil Gow in 1787; Queen Victoria was 'much pleased' to visit many times; J.M. Barrie's 'Little Minister' was filmed there; and Alexander Mackenzie, first Canadian Prime Minister, spent his childhood here. No 6B was a pre-Reformation church house for the parish of Kinloch, which lies along the picturesque route of five lochs, one of which is the osprey haven at the Loch of the Lowes.

* seven eating places are within walking distance, ranging from the 4-star Hilton Dunkeld House Hotel to brasseries, an acclaimed Indian restaurant and a good fish & chip shop
* also a minute's walk away are shops which include a bakery; Menzies, a licensed grocery founded in the 18th century and now a deli with a wine section; an outstanding flower, fruit & vegetable shop; a newsagent; Kettles, a honey pot of a hardware shop with baskets galore; the famous Dunkeld Smoked Salmon shop; boutiques; a hairdresser and beautician; antiques; a deerskin shop; and the best of Scottish crafts
* within half an hour's drive are castles such as Blair Castle, Glamis Castle and Scone Palace, with gardens galore, from the magnificence of the Hercules Garden at Blair Castle to the formality of Drummond Castle Gardens, backdrop for the film of Rob Roy

You will now be directed to our partner's site to complete your booking.
st value in -1/4, k, o? o Let i = -0.0047 + 1.0047. What is the third biggest value in -1, -3, i? -3 Let g be (-1 + 1/(-3))*111/1406. Let u(l) = l**2 - 7*l + 5. Let s be u(5). Which is the third biggest value? (a) 0.2 (b) g (c) s c Let t = 17/30 - -1/10. Let m = -262 + 280.3. Let w = m + -18. Which is the second biggest value? (a) -2 (b) t (c) w c Let r = -154 + 635/4. Which is the biggest value? (a) -2 (b) 15 (c) r (d) -2/3 b Let u be (-9)/(-27) + 41/3. Let x be ((-28)/u)/((-4)/(-26)). Let b be 2 + (x/4 - -1). What is the third smallest value in 2/11, b, -0.4? 2/11 Let y(o) = -5*o**2 + 5*o - 3. Let n be y(3). Let p = n + 63/2. Let d be -2*((-5)/6 - -1). Which is the second biggest value? (a) p (b) -4/3 (c) d b Let b = 427 + -426.7. What is the third smallest value in 2/7, -0.05, 1, b? b Let t = -775 + 777. Let v = -570/11 - -52. Which is the fourth biggest value? (a) t (b) -4 (c) -1 (d) v b Let g be (-255)/(-1020) + (-18)/8 - (-6)/1. Let q be (-1 + 6/2)/5. Which is the fourth biggest value? (a) 5 (b) q (c) g (d) -1 d Let r be ((-2)/(-3))/((-56)/(-12)). Let g = 0.57 + -0.3. Let f = g - -0.23. Which is the third biggest value? (a) f (b) r (c) 0 c Suppose -22*b + 12*b = -11*b. What is the smallest value in b, 0.2, 4, 1.4? b Let k = 149/3 + -751/15. Let w = 2 - 4. Let u = 0.03 - -0.07. What is the third biggest value in u, w, k? w Let m = -0.5 - -23.7. Let v = 23 - m. What is the smallest value in v, 1/3, 0.1? v Let n = 189.2 + -189.3. Which is the third biggest value? (a) 1.1 (b) 1.3 (c) n c Let r(g) = -4*g - 54. Let y be r(-13). Let v = -0.9 - 0.1. Let t = v + -2. Which is the second smallest value? (a) t (b) y (c) 1 b Let b = -3471/4 + 867. What is the third biggest value in b, 2/7, -9, -2? -2 Let k = -1169 + 1164. Which is the third smallest value? (a) -39 (b) 4 (c) 1 (d) k c Let n = -25/164 - 4/41. What is the third biggest value in 135, n, -2? -2 Let i = -0.04 - 0.1. Let k = 11.5 + -10.36. Let m = i + k. What is the third smallest value in m, -2, -0.2? 
m Let y = -6.58 + -0.52. Let u = -7 - y. Let x = 0.8 - 1. Which is the biggest value? (a) 2/3 (b) x (c) u a Let d(v) = -v**2 - 6*v - 2. Let h be d(-3). Suppose -h*g = -2*g + 100. Let z be g/14*-1 - 1. What is the second smallest value in 2, z, -0.3? z Let l = -715/7 + 103. Which is the third biggest value? (a) -158 (b) 0 (c) l a Let a = -79.896 + -0.104. Let h = a - -76. Let r be 2/(-9) - 88/(-234). What is the third smallest value in -10, h, r? r Let z(f) = -f**3 + 8*f**2 - 8*f - 1. Let d be z(5). Suppose t = 36 + d. Let o be (-40)/t*7/3. Which is the smallest value? (a) 0.5 (b) o (c) -2/5 b Let o = -0.378 + 0.278. Which is the third smallest value? (a) o (b) 1 (c) 1/3 b Let s(x) = -x**3 + 3*x**2 - 1. Let u be s(3). Let y = 2435 + -2437. Suppose 4*n - 12 = -2*d, 0*n - 4*d - 12 = -n. What is the second smallest value in y, u, n? u Let n be -1*(355 - 2/(-1)). Let o = n - -2137/6. Suppose 3*v - 8*v = 20. Which is the biggest value? (a) -1 (b) o (c) v b Suppose -5*q + 21 = -99. Let b = 35 - q. Suppose 3*x + 3*l = 6*l + 9, -2*x + l = -b. What is the biggest value in 2/13, x, -2? x Let v = -13 + 17. Let g = v - 4.3. Which is the smallest value? (a) 2/7 (b) g (c) -1 c Let z(f) = f**3 - 2*f**2 - 2*f + 3. Let c be z(2). Suppose -31*l - 63 = -125. What is the second smallest value in -8, c, l? c Let r = 1447 + -11561/8. Which is the third biggest value? (a) -0.02 (b) r (c) 1 a Let g = -4.0394 - -0.0394. Let n = 0.1 + 0. Let y = 0.31 - -0.19. Which is the biggest value? (a) n (b) y (c) g b Let p be ((-8)/3)/(2/12). Let j(n) be the first derivative of n**2/2 + 18*n - 242. Let d be j(p). What is the second smallest value in 0.4, -0.2, d? 0.4 Let p(d) = -d**2 + 10*d - 6. Let g be p(6). Suppose -3*i = 3*i - g. Suppose -15 = -i*l - 0*l. Which is the third biggest value? (a) -5 (b) l (c) -4 a Let a = 1405.5 - 1400. What is the third smallest value in 1, a, -9? a Let c = 635/3 - 212. What is the second biggest value in c, 33, 1? 1 Let i = -20.48 + 23.48. Let v = -1/13 + -10/39. 
Let y be 6/(-10)*2/3. What is the third biggest value in i, y, v? y Let g be (12/9 - 2)/3. Suppose -4*d = d. Let s be -2 - 416/(-68) - 4. What is the second smallest value in d, s, g? d Let k = -117.2 - -117. What is the third smallest value in k, 1, 48? 48 Let y = 24.0385 - 0.3285. Let o = -0.71 + y. What is the smallest value in 1, o, -5? -5 Let u = -97.4 - -97. Let v be 14 - 2/(4/(-2)). Suppose -v + 5 = -2*d. Which is the third smallest value? (a) d (b) u (c) -5 a Let u be -1 + (-2 - -20) + -2. Let g = 8.8 + -8.9. What is the smallest value in u, 0, g? g Let s = -0.211 + 7.211. Let r = -0.35 - -0.05. Which is the biggest value? (a) r (b) -2 (c) s c Let p = -0.1411 - 0.0589. What is the second smallest value in -4, -64/3, p? -4 Let t be 640/(-1750) + (-4)/(-50). What is the second biggest value in -2/15, t, 36? -2/15 Let v = 57 - 35.6. Let p = v + -24. Let k = -0.4 + p. What is the smallest value in k, -4/3, 0? k Let y = 206 + -198. Let r be -3 + 4 - 11/13. Which is the smallest value? (a) 0.2 (b) r (c) y b Let n be (-16)/10 + 36/60. Let u(d) = 47*d**3 - 1. Let p be u(n). Let h be p/143 - 8/(-44). Which is the smallest value? (a) -0.04 (b) h (c) 2 b Let m = 162.3 + -162.21. Which is the biggest value? (a) -1/4 (b) -3/5 (c) 0.2 (d) m c Let o = -222 + 227. Which is the fourth smallest value? (a) -2/5 (b) -1 (c) -0.1 (d) o d Let v = -575.4 - -575. Which is the third biggest value? (a) v (b) -3/7 (c) 4 (d) -0.9 b Let f = -13529.5 - -13530. Let m be -2 - ((-24)/10 + 0). Which is the third smallest value? (a) -27 (b) f (c) m b Let p = 0 + 0. Let o = p - -2. Let s be 2*(700/77 - 9). Which is the smallest value? (a) -1 (b) s (c) o a Let g be (6/(-4))/(-1 + 13). What is the second biggest value in 2, -5, -0.3, g? g Let y be 100/18 - 6*(4 - 3). Which is the second smallest value? (a) -1/9 (b) -1 (c) -2/5 (d) y d Let k = 7/85 + -109/85. Which is the smallest value? (a) k (b) -5 (c) -0.05 b Let m = 6.84 - 4.84. What is the biggest value in 1, -13, m? 
m Suppose -143 = 4*s - 5*z, 0*z + 4*z = -5*s - 230. Let w be ((-1)/s)/((-2)/8). What is the biggest value in w, -2, -3? w Let s be ((-3)/(-2))/((-96)/128). Which is the third biggest value? (a) -4 (b) 2/7 (c) 5 (d) s d Let l = -27 - 16. Let s = 45 + l. Which is the third smallest value? (a) 4 (b) -3/4 (c) s a Let y = 5.1 - 11. Let j = y + 6. What is the third smallest value in 0.4, -0.5, j? 0.4 Let t = 1.7 - 5.7. Which is the biggest value? (a) -0.1 (b) t (c) -1/37 c Let o = 741 - 8153/11. Which is the third smallest value? (a) o (b) -0.051 (c) 0 c Let q = 0.05 + 0.25. Let c = -12.7 + 8.7. What is the third smallest value in q, 22, c? 22 Let p = 170 + -167. Which is the biggest value? (a) -0.45 (b) p (c) -4 (d) 5 d Let a = 33 - -7. Let d = a - 282/7. Let c = 1.35 - 1.55. What is the third biggest value in -3, c, d? -3 Let p = 64 + -64.4. Let w = -0.2 - -0.12. Let m = w + 0.48. Which is the second biggest value? (a) 3/4 (b) m (c) p b Let d = 560 + -559. What is the third smallest value in 1/3, -9/5, d, -2? 1/3 Let d = -0.084 + -2.996. Let z = d + 2.7. Let v = 0.08 + z. Which is the third smallest value? (a) 5 (b) v (c) 0.2 a Let s be 21/66 - 2 - 36/(-198). Let z = 1/5 + 0. What is the third biggest value in z, s, 5? s Let i = -0.017 + 0.487. Let q = 1.5 - 1.47. Let f = i + q. Which is the third biggest value? (a) 3/5 (b) f (c) 2/11 c Let i = 53.3 - 48.3. What is the second biggest value in -2/7, 2/9, i, 1? 1 Suppose 0 = -5*j + 57 + 23. Let y(p) = 3*p - 6. Let w be y(-6). Let i = j + w. Which is the biggest value? (a) -4 (b) -3 (c) i b Let s = -443.8 + 444. Which is the second biggest value? (a) -1 (b) s (c) -3/13 (d) 2/23 d Let r(d) = 50*d + 297. Let f be r(-6). Which is the second biggest value? (a) f (b) -49 (c) 0.1 a Let z be (-15 - -14)*4/14. Let k = 3 - 2.6. Which is the second smallest value? (a) k (b) z (c) 0 c Let v = 15 - 15.5. Let g be (1/6)/((-11)/(-66)). Suppose u - 3 = g. Which is the biggest value? (a) v (b) 5 (c) u b Let g be (-1)/(21/(-2) - -1). 
Let h = -0.36 + 0.46. What is the third biggest value in h, -3, g? -3 Let x be 12/78*(-4)/((-40)/(
Florida's voter purge can go forward -- but they need a new list. A US district judge ruled Wednesday that Florida's efforts to remove ineligible voters from the rolls were in line with federal law. The Department of Justice demanded the state stop the voter purge earlier this month because the purge was happening too close to the August 14 primary election. State officials asked local election supervisors to check out the citizenship status of more than 2,600 voters. While more than 100 non-U.S. citizens have been removed, supervisors have also discovered that more than 500 people on the list were U.S. citizens. Beyond those 2,600 names, the state has also drawn up a list of 180,000 voters. A spokesman for Gov. Rick Scott said the state will not distribute that list unless the state first can check the names against a federal immigration database. Most counties in Florida have stopped removing voters due to differing opinions over whether it is legal. Governor Rick Scott issued the following statement regarding the ruling: “The court made a common-sense decision consistent with what I’ve been saying all along: that irreparable harm will result if non-citizens are allowed to vote. Today’s ruling puts the burden on the federal government to provide Florida with access to the Department of Homeland Security’s citizenship database. We know from just a small sample that an alarming number of non-citizens are on the voter rolls and many of them have illegally voted in past elections. The federal government has the power to prevent such irreparable harm from continuing, and Florida once again implores them to grant access to the SAVE database.” ——————————————— The Associated Press contributed to this report.
Q: How to change the background in the table? How to change the background color in the table? Template RWD Magento. A: Add .toolbar { background: <your color>; } to style.css inside the skin folder (note that CSS color values are not quoted, and the semicolon goes inside the braces). To change the inner font color, use .sort-by option { color: red !important; }
Q: Django allauth: empty 'signup_url, login_url and logout_url' Using django 1.11.4 and package django-allauth==0.33.0 Login works fine The default login template 'login.html' contains a link to a signup page: <p>{% blocktrans %}If you have not created an account yet, then please <a href="{{ signup_url }}">sign up</a> first.{% endblocktrans %}</p> and that works fine, but on any page other than /accounts/* it's just empty base.html: <div class="nav-wrapper"> <a href="/" class="brand-logo">Logo</a> <ul id="nav-mobile" class="right hide-on-med-and-down"> {% if user.is_authenticated %} <li> Welcome: {% user_display user %}</li> <li><a href="{{ logout_url }}">logout</a></li> {% else %} <li><a href="{{ login_url }}">sign in</a></li> <li><a href="{{ signup_url }}">sign up</a></li> {% endif %} <li></li> </ul> </div> I use the base.html on /accounts/* as well as on the index. On /accounts/* it works fine, but on the index {{ logout_url }} etc. are empty. Settings extract: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates')], 'APP_DIRS': True, 'OPTIONS': { 'debug': DEBUG, 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] Same problem answer didn't help A: Your logout link appears after {% if user.is_authenticated %}. This means that if the user is not authenticated, your logout link and the others that fall into the "if statement" will not show. If you want them to show, delete {% if user.is_authenticated %}
Q: Why decltype works here but not auto? I have the code as below: template <typename T, typename sepT = char> void print2d(const T &data, sepT sep = ',') { for(auto i = std::begin(data); i < std::end(data); ++i) { decltype(*i) tmp = *i; for(auto j = std::begin(tmp); j < std::end(tmp); ++j) { std::cout << *j << sep; } std::cout << std::endl; } } int main(){ std::vector<std::vector<int> > v = {{11}, {2,3}, {33,44,55}}; print2d(v); int arr[2][2] = {{1,2},{3,4}}; print2d(arr); return 0; } If I change the decltype to auto, it won't compile and complain (partial error): 2d_iterator.cpp: In instantiation of ‘void print2d(const T&, sepT) [with T = int [2][2]; sepT = char]’: 2d_iterator.cpp:21:21: required from here 2d_iterator.cpp:9:36: error: no matching function for call to ‘begin(const int*&)’ 2d_iterator.cpp:9:36: note: candidates are: In file included from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/string:53:0, from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/bits/locale_classes.h:42, from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/bits/ios_base.h:43, from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/ios:43, from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/ostream:40, from /usr/lib/gcc/x86_64-redhat-linux/4.7.2/../../../../include/c++/4.7.2/iterator:64, Why is this happening? A: The answer summed-up in one comment: decltype yields int(&)[2], whilst plain auto forces a pointer conversion (same rules as template argument deduction). Just use auto&. - Xeo @Xeo's comment-answer basically says that because auto involves the same rules as template argument type deduction, auto deduces a pointer (int*) type out of the source's array type (of i, specifically int(&)[2]). 
There is something great in your code: it actually demonstrates how template type deduction behaves when the parameter is a reference and how the reference affects how the type is being deduced. template <typename T, typename sepT = char> void print2d(const T &data, sepT sep = ',') { ... } ... int arr[2][2] = {{1,2},{3,4}}; print2d(arr); You can see that data is of type const T&, a reference to a const T. Now, it is being passed with arr, whose type is int[2][2], which is an array of two arrays of two ints (whoo!). Now comes template argument type deduction. In this situation, it rules that with data being a reference, T should be deduced as the original type of the argument, which is int[2][2]. Then the qualifiers in the parameter's declared type are applied, and with data declared as const T&, the const and & qualifiers are applied and so data's type is const int (&) [2][2]. template <typename T, typename sepT = char> void print2d(const T &data, sepT sep = ',') { static_assert(std::is_same<T, int[2][2]>::value, "Fail"); static_assert(std::is_same<decltype(data), const int(&)[2][2]>::value, "Fail"); } ... int arr[2][2] = {{1,2},{3,4}}; print2d(arr); LIVE CODE However, if data had been a non-reference, template argument type deduction rules that if the argument's type is an array type (e.g. int[2][2]), the array type shall "decay" to its corresponding pointer type, thus making int[2][2] into int(*)[2] (plus const if the parameter is const) (fix courtesy of @Xeo). Great! I just explained the part that is entirely not what caused the error. (And I just explained a great deal of template magic)... ... Nevermind about that. Now to the error. But before we go, keep this in mind: auto == template argument type deduction + std::initializer_list deduction for brace init-lists // <-- This std::initializer_list thingy is not relevant to your problem, // and is only included to prevent any outbreak of pedantry. 
Now, your code: for(auto i = std::begin(data); i < std::end(data); ++i) { decltype(*i) tmp = *i; for(auto j = std::begin(tmp); j < std::end(tmp); ++j) { std::cout << *j << sep; } std::cout << std::endl; } Some prerequisites before the battle: decltype(data) == const int (&) [2][2] decltype(i) == const int (*) [2] (see std::begin), which is a pointer to an int[2]. Now when you do decltype(*i) tmp = *i;, decltype(*i) would return const int(&)[2], a reference to an int[2] (remember the word dereference). Thus, it is also tmp's type. You preserved the original type by using decltype(*i). However, when you do auto tmp = *i; Guess what decltype(tmp) is: const int*! Why? Because of all the blabbery-blablablah above, and some template magic. So, why the error with a pointer? Because std::begin expects an array (or container) type, not the pointer the array decays to. Thus, auto j = std::begin(tmp) would cause an error when tmp is a pointer. How to solve (also tl;dr)? Keep as-is. Use decltype. Guess what. Make your autoed variable a reference! auto& tmp = *i; LIVE CODE or const auto& tmp = *i; if you don't intend to modify the contents of tmp. (Greatness by Jon Purdy) Moral of the story: A great comment saves a man a thousand words. UPDATE: added const to the types given by decltype(i) and decltype(*i), as std::begin(data) would return a const pointer due to data also being const (fix by litb, thanks)
Q: R: grep drops all columns even when nothing matches Trying to remove columns from a large data frame. Using grep and it works fine when there actually are matching columns. But when there are zero matching columns it drops all the columns. s <- s[, -grep("^Test", colnames(s))] To confirm that there are no columns that match Test > y <- grep("^Test", colnames(s)) > y integer(0) What exactly is going on here? A: You need to use grepl and ! instead. df2 <- data.frame(ID =c(1,2,3), T = c("words", "stuff","things")) df2[,!grepl("^Test", colnames(df2))] ID T 1 1 words 2 2 stuff 3 3 things grep() returns integer(0) when there isn't a match, and -integer(0) is still integer(0), so you end up subsetting with an empty index, which selects zero columns. -TRUE == -1 whereas !TRUE == FALSE Using !grepl() returns the full logical vector (TRUE TRUE) for each column header, allowing you to correctly subset when no columns meet the condition. In other words, for colnames(df)[i], grepl(..., colnames(df))[i] returns TRUE where your pattern is matched; then using ! you invert it to keep the values that don't match and remove the ones that do.
Q: MediaPlayer control I have a WPF Caliburn.Micro application, and I want to use System.Windows.Media.MediaPlayer to play audio. I know how to start playing, but how can I know when the playing is done, so I can for example disable the Pause button, etc.? My code: var audio = Tpv.GetAudio(tpv.TpvId); var file = Path.GetTempFileName().Replace(".tmp", ".wma"); File.WriteAllBytes(file, audio); var player = new MediaPlayer(); player.Open(new Uri(file, UriKind.Absolute)); player.Play(); Thanks. A: You can subscribe to the MediaEnded event on the MediaPlayer. http://msdn.microsoft.com/en-us/library/system.windows.media.mediaplayer.mediaended If you want more control over the playback, like pausing and seeking, then use a MediaTimeline and a Storyboard. http://msdn.microsoft.com/en-us/library/system.windows.media.mediatimeline.source.aspx WPF: Implementing a MediaPlayer Audio / Video Seeker
The overall objective of this core (Core A: Pre-clinical Trials and Pathology) is to provide the required experimental animals and support services needed to facilitate the AIDS vaccine development studies proposed in Projects 1-4 in this application. This will include provision of 2-3 year old retrovirus-free (SIV, STLV-1, SRV) rhesus macaques from the Yerkes macaque breeding colonies; immunization of the animals with DNA or protein immunogens as detailed in Projects 1 and 2; intravenous viral challenges of selected immunized animals; daily monitoring of the experimental animals; periodic physical examinations, blood collections and lymph node biopsies from the experimental animals to assess the animals' clinical and physical condition and to provide specimens for laboratory evaluations (Projects 1-4); performance of CBCs and flow cytometry evaluations to determine lymphocyte subsets; RT-QC-PCR determination to evaluate viral load in the plasma of immunized challenged animals; viral cultures of PBMCs of immunized, challenged animals to determine if the vaccines were effective in preventing infection; performance of in situ hybridization and/or immunohistochemistry studies of specimens collected by biopsy or at necropsy of immunized, challenged animals; and the performance of complete gross and histologic evaluation of all experimental animals that die or that are sacrificed during the course of the study. Provision of these resources and support services will facilitate the development and testing of AIDS vaccines as described in Projects 1-4 of this application. The virological, immunologic and clinico-pathologic evaluations proposed will allow an assessment of vaccine efficacy with respect to either the prevention of infection (sterilizing immunity) or modification of post-challenge virus load or viral set point.
Newt's Immodest Proposal December 15, 2011 I’m so looking forward to this GOP primary with Newt Gingrich as the front runner. He’s been around for so long, has said so many incredibly idiotic things and continues to do so, that for a political blogger it’s the mother lode for material. In 1994, during the early days of the public debate on welfare reform, Speaker of the House Newt Gingrich ignited a media firestorm by suggesting that orphanages are better for poor children than life with a mother on Aid to Families with Dependent Children (AFDC). Responding to blistering criticism, he first defended the proposal by invoking the idyllic orphanage life of the 1938 film “Boys Town,” finally retreating, at least rhetorically, from the entire controversy. Orphanages became just another blip on the nation’s radar screen, or so it seemed. In fact, the plan to revive orphanages is embedded in the Personal Responsibility Act, the Republican plan for welfare reform, and is a major piece of the Republican Contract With America. The Republicans’ pledge promised to balance the budget, protect defense spending, and cut taxes, targeting programs for the poor–cash assistance, food, housing, medical, and child care–as the big areas for major budget savings. He explained that what “liberals” really believed was “put your baby in a dumpster, that’s Okay.” He claimed that 800 babies were thrown in dumpsters in Washington DC every year, which was, needless to say, absurd. The Boys Town thing came in response to Hillary Clinton criticizing GOP calls for removal of poor children from their parents. He tartly responded: “I’d ask her to go to Blockbuster and rent the Mickey Rooney movie about Boys Town. I don’t understand liberals who live in enclaves of safety who say, ‘Oh, this would be a terrible thing.'” Apparently Newtie believes everything he sees in the movies. It explains a lot. 
(I wonder if the revelations about the Irish orphanage abuses have altered his opinion on the wonderful advantages of orphanages? Maybe someone should ask him.) His comment provoked a firestorm back in 1994 and he slithered back a bit on his stand, as he usually does when he says something outrageous like this. But he never really changes his mind. Look what he said just last week: “It is tragic what we do in the poorest neighborhoods, entrapping children in, first of all, child laws, which are truly stupid,” said the former House speaker, according to CNN. “Most of these schools ought to get rid of the unionized janitors, have one master janitor and pay local students to take care of the school. The kids would actually do work, they would have cash, they would have pride in the schools, they’d begin the process of rising.” “You’re going to see from me extraordinarily radical proposals to fundamentally change the culture of poverty in America,” he added. Generally, the Fair Labor Standards Act allows minors over 14 to work in most jobs, with several exceptions for minors under that age. Hours are limited for minors under the age of 16. Some states have higher age standards. By the way, his “extraordinarily radical proposals to fundamentally change the culture of poverty” are the same as they ever were: The Republicans’ pledge promised to balance the budget, protect defense spending, and cut taxes, targeting programs for the poor–cash assistance, food, housing, medical, and child care–as the big areas for major budget savings. Add to that orphanages and his bold new proposal to get rid of child labor laws and you have a patented “radical” Gingrich proposal. I’m sure it will be quite popular with the GOP base. This could be his moment. 
Pilot-scale comparison of microfiltration/reverse osmosis and ozone/biological activated carbon with UV/hydrogen peroxide or UV/free chlorine AOP treatment for controlling disinfection byproducts during wastewater reuse. Ozone and biological activated carbon (O3/BAC) is being considered as an alternative advanced treatment process to microfiltration and reverse osmosis (MF/RO) for the potable reuse of municipal wastewater. Similarly, the UV/free chlorine (UV/HOCl) advanced oxidation process (AOP) is being considered as an alternative to the UV/hydrogen peroxide (UV/H2O2) AOP. This study compared the performance of these alternative treatment processes for controlling N-nitrosamines and chloramine-reactive N-nitrosamine and halogenated disinfection byproduct (DBP) precursors during parallel, pilot-scale treatment of tertiary municipal wastewater effluent. O3/BAC outperformed MF/RO for controlling N-nitrosodimethylamine (NDMA), while MF/RO was more effective for controlling N-nitrosomorpholine (NMOR) and chloramine-reactive NDMA precursors. The UV/H2O2 and UV/HOCl AOPs were equally effective for controlling N-nitrosamines in O3/BAC effluent, but UV/HOCl was less effective for controlling NDMA in MF/RO effluent, likely due to the promotion of dichloramine under these conditions. MF/RO was more effective than O3/BAC for controlling chloramine-reactive halogenated DBP precursors on both a mass and toxicity-weighted basis. UV/H2O2 AOP treatment was more effective at controlling the toxicity-weighted chloramine-reactive DBP precursors for most halogenated DBP classes by preferentially degrading the more toxic brominated species. However, the total toxicity-weighted DBP precursor concentrations were similar for treatment by either AOP because UV/H2O2 AOP treatment promoted the formation of iodoacetic acid, which exhibits a very high toxic potency. 
The combined O3/BAC/MF/RO train was the most effective for controlling N-nitrosamines and the total toxicity-weighted DBP precursor concentrations with or without treatment by either AOP.
Q: Why does Hibernate update UpdateTimestampsCache with each SQL query I have enabled second level and query caches in my app Looks like when I invoke the following code String sql = "update SOME_TABLE set SOME_FIELD=somevalue"; SQLQuery query = getSession().createSQLQuery(sql); query.executeUpdate(); hibernate updates UpdateTimestampsCache for ALL tables. Why does it do this? I have about 1000 tables and many sql queries in my app. I don't need these updates because I don't update cached tables via sql. It causes huge network traffic and slowness of the application. Is there a way to tell hibernate to NOT do any updates when running sql queries? A: I found a solution! Hibernate can't tell which tables a native SQL statement touches, so by default it conservatively invalidates the timestamp cache for every table. You can use the addSynchronizedEntityClass() method to tell it exactly which query spaces are affected String sql = "update SOME_TABLE set SOME_FIELD=somevalue"; SQLQuery query = getSession().createSQLQuery(sql); query.addSynchronizedEntityClass(SomeTable.class); query.executeUpdate(); In this case it will invalidate just the cache for SOME_TABLE
To view the multimedia assets associated with this release, please click: http://www.multivu.com/mnr/61630-dhx-kids-dhx-junior-dhx-retro-paid-youtube-family-entertainment-channels "There is an insatiable appetite for kid's content in the digital universe across the globe and DHX Media is positioned with our extensive library of evergreen favorites to satisfy that demand," said Michael Hirsh, Executive Chairman, DHX Media. "Millennials changed the way entertainment is consumed and now, parents in this generation, who grew up with 'Inspector Gadget' and 'Super Mario,' can introduce their kids to these family friendly favorites anytime, anywhere, online and on-the-go," he concluded. DHX Media CEO Michael Donovan commented, "We are the largest independent library of programming for families in the world. These channels represent another way in which the internet is enabling us to monetize our 8,500 half hour library of leading children's programming. Partnering with YouTube on a revenue-sharing basis for these new digital channels is an efficient way to provide children and parents with great entertainment content, when and where they want it," he concluded. DHX Media's digital channel launch strengthens the company's position as a leading content provider of popular family entertainment on multiple platforms. The DHX Kids channel will feature live action and animated series such as "Horseland;" "Mudpit," which will be first run in the U.S.; "Sabrina," the animated series; and "Sherlock Holmes." The DHX Junior channel will offer content from "The Busy World of Richard Scarry;" "The Doodlebops," and "Madeline." The DHX Retro channel is a platform for nostalgic childhood favorites including "Heathcliff;" "Inspector Gadget;" "Sonic Underground" (Sonic the Hedgehog); and "Super Mario." DHX Media will add additional content to these channels regularly and will showcase a selection of first-run programming through their digital channels in all territories.
Pages Sunday, August 15, 2010 stashbusting august 15 Good week. Nothing came in, only fabric went out. Binding for the cherisch nature quilt. Pic will come later today.
Fabric in this week: 0 meter
Fabric out this week: 0.45 meter
Fabric in 2010: 102.8 meter
Fabric out 2010: 131 meter
Fabric busted in 2010: 28.20 meter
Poképreneur Hoodie Are you an entrepreneur that is cashing in on the Pokémon Go craze? Already achieving Unicorn status? Then you gotta catch this hoodie. This American Apparel hoodie is made out of California fleece which, opposed to typical synthetic fleece, is made of 100% extra soft ring-spun combed cotton. It’s pre-washed to minimize shrinkage, and is breathable yet extra thick for warmth.
The First Purge Starring the UK’s Joivan Wade is a brilliant scary rollercoaster ride of fun 80% #OutOf100 Joivan Wade holds his own in his second major feature lead, The First Purge. I watched The Purge (2013), didn’t watch The Purge Anarchy (2014); think I distractedly watched The Purge Election Year (2016). I only raised an eyebrow for the 4th installment of the thriller/horror series The First Purge when I heard that British Blacktor Joivan Wade had landed a starring role. Not that I didn’t enjoy The Purge, the concept of people having one night to physically purge out all their angst, anger, and evil thoughts once a year, with no legal repercussions is a great one. But repeating the concept over and over again, I presumed would be a bad idea. How many different ways can we be shown the negative fall out of legalising crime? But they decided to bring another installment, and perhaps wisely this time around, The First Purge is the prequel to the Purge franchise. It tells the story of how it all came to be. The premise – To push the crime rate below one percent for the rest of the year, the New Founding Fathers of America test a sociological theory that vents aggression for one night in one isolated community. But when the violence of oppressors meets the rage of the others, the contagion will explode from the trial-city borders and spread across the nation. We have Joivan Wade (The Weekend (2016); Youngers (2013 – 14)) as Isaiah younger brother to Nya (Lex Scott Davis, Superfly, 2018) who is pissed off at the world and his situation, living in a project block on Staten Island; he and his sister struggling to make ends meet. Isaiah needs an outlet. Enter Dr. Updale (Marisa Tomei) and her radical social experiment. Updale believes that if people are allowed the freedom to vent their innermost aggressions for a set amount of time, the general crime rate will be reduced for the rest of the year. 
The government, aka the New Founding Fathers of America (NFFA), decide to give Updale’s experiment the greenlight and announce that for 12 hours on the 4th July (American Independence Day) inhabitants of Staten Island will be allowed to commit any crime they like, including murder, with no repercussion. Thus we have the pilot run of The First Purge. Focusing on one of New York’s most impoverished boroughs, the NFFA want to monitor how the people of Staten Island react to this new version of ‘freedom’, specifically the poorer residents who they entice to participate with irresistible offers. Those who don’t want to Purge have the option to leave town to the safety of neighbouring boroughs and states. Or stick it out and hope for the best. The thinking is that if people react the way the NFFA hope, then the Purge will be rolled out across the Nation. Joivan Wade and Lex Scott Davis – The First Purge Other players in the mix include local drug dealer/kingpin Dmitri (Y’lan Noel, Insecure, 2016 -) who, as with others who disapprove of the NFFA’s scheme, is suspicious of what The First Purge means for the community he ‘runs’. Let’s not, for now, dwell on the negative impact his own drug dealing actions have on his precious community. But here we are. Dmitri’s moral spidey senses have been triggered. Equally not impressed with the NFFA is Nya, who is at the forefront of the local protests. Also concerned with the potential effect the Purge will have on the locals, Nya makes moves to ensure she and her brother Isaiah are safe as the dreaded event draws ever closer. What’s great about The First Purge is that it’s every bit as jumpy, scary and horrific as it should be. As someone who was sceptical and sure they’d not ‘get me’, they did. 
Series creator James DeMonaco, who wrote and directed all the previous Purge installments, wrote this one, with Gerard McMurray, the African American director of the pretty good hazing film Burning Sands, jumping into the director’s chair for The First Purge. It is interesting that the director is African American, because most definitely, The First Purge is one for the culture. Because I’m a spoiler-phobe, there’s not much I will say, because the reveals are great. What impressed me is that this is more than an unnecessarily violent film. There’s a point to it all. A political/social theme runs strong throughout and there are some real pump your fist in the air scenes when those you want to win… succeed. There are a few melodramatic scenes, some truly unbelievable. Typical of a film like this. But dramatics are needed because what fuels the fear is that we’re currently living in a political era none of us could have foreseen or would have believed pre-Trump/Brexit/shit-show. So whilst annual legalising of crime seems far-fetched, this film further embeds the seeds of possibility. Noel’s turn as Dmitri, well. Those of you who are #TeamDaniel in Insecure will definitely be #TeamDmitri – thoroughly impressed, pressed and thirsty when you see him in his element once the Purge starts… purging. There’s mumbling that Marvel needs to reboot the Blade franchise. Though Wesley Snipes said in a tweet last year “When anyone asks “Who could be the next Blade?” 💥👋🏿 you already know there’s only one Blade fam! #Blade”. Sure, up until this moment I would have agreed. But I think we have a strong candidate in Noel. He is Blade for the millennials. Sorry Uncle Wesley. Make it happen Marvel. For Wade this is a brilliant launchpad for his already successful career. Coming from being one-third of the brilliant Mandem on the Wall nee Wall of Comedy outfit, then going on to branch out solo in Doctor Who and EastEnders. 
Aside from his reunion with his comedy brothers Dee Kaate and Percelle Ascott in their first feature, The Weekend, Wade holds his own as Isaiah. He’s believable as a Staten Islander, and though you want to knock him on his forehead for some of the decisions he makes, his wide-eyed innocence shines through enough for you to empathise with his character. So, yep, America has claimed another one of ours. (Wake up UK, for the love of Hollywood. Sheesh). The First Purge is a lot of fun. Great characters, thought-provoking premise. More than just a violent romp. It’ll entertain you whilst triggering your socio-political emotions. Gather your pals, even the bougie ones who think films like this are beneath them. They will be pleasantly surprised. I’m going to watch the rest of the franchise. Out of 100: 80% #BritishBlacktor Joivan Wade shines in his debut Hollywood performance.
In early 2015, the Centers for Medicare & Medicaid Services (CMS) initiated reimbursement for low-dose computed tomography (LDCT) screening for lung cancer in individuals aged 55 to 77 years with a 30 pack-year or greater smoking history.1 A unique feature of the CMS approval was a requirement for a separate shared decision-making (SDM) session before the LDCT.1 This visit had several required components, including use of a decision aid and counseling on tobacco abstinence. We used Medicare data from January 1, 2015, through December 31, 2016, to determine the percentage of enrollees who received an LDCT and had a visit for SDM. Methods We developed separate cohorts for 2015 (4 192 802 persons) and 2016 (4 138 559 persons) of Medicare beneficiaries aged 55 to 77 years with complete Medicare Parts A and B coverage and no health maintenance organization (HMO) enrollment from a 20% national sample. Using Current Procedural Terminology codes, we determined the number of enrollees with Evaluation and Management charges for an LDCT SDM visit (Current Procedural Terminology code G0296) and receipt of LDCT (Current Procedural Terminology code G0297 or S8032). We assessed age, sex, race/ethnicity, Medicaid eligibility, region, comorbidities, and education (at the zip code level). The monthly percentage of patients undergoing LDCT with an associated SDM visit was graphed and assessed by joinpoint analysis using joinpoint software downloaded from the National Cancer Institute. Statistical significance for the joinpoint analysis was set at P < .0001. We estimated the odds of patients with LDCT engaging in SDM before the screening and the relative risk of undergoing LDCT after SDM by using logistic regression. These statistical calculations were performed with SAS software version 9.4 (SAS Institute Inc). The University of Texas Medical Branch institutional review board approved this study, which used deidentified data.
Results Of the 19 021 enrollees in the 20% sample who underwent LDCT in 2016, 1719 (9.0%) had a separate SDM visit on the day of LDCT or in the previous 3 months. After an initial increase, the monthly percentage of enrollees undergoing LDCT who had participated in SDM plateaued at approximately 10% (Figure). Characteristics associated with lower odds of SDM before LDCT included black race vs white race (odds ratio [OR], 0.76; 95% CI, 0.59-0.97), female sex (OR, 0.88; 95% CI, 0.79-0.98), and higher education (for highest vs lowest quartile of education: OR, 0.81; 95% CI, 0.68-0.96); there was also wide regional variation (Table). Of the 2154 enrollees who underwent SDM from January through October 2016, 1300 (60.8%) underwent LDCT in the following 3 months. In a multivariable analysis, black race (risk ratio, 0.81; 95% CI, 0.66-0.97) and female sex (risk ratio, 0.93; 95% CI, 0.86-0.99) were associated with significantly lower LDCT use after SDM. Discussion Although patient characteristics such as sex, race/ethnicity, and geographical region are associated with receipt of SDM before LDCT and with receipt of LDCT after SDM, the most important finding is the remarkably low uptake of SDM visits after the CMS mandate. Several factors may contribute to this finding, including the recentness of the mandate, lack of training in SDM, and competing priorities for clinicians. In addition, SDM may have occurred as part of another medical encounter. The CMS has previously issued other requirements on reimbursement for screening tests that also appear to be ignored without affecting reimbursement, for example, on minimum intervals between routine screening colonoscopies in average-risk patients.2 Early reports suggest that less than 5% of eligible Americans are receiving LDCT.3 The 60.8% rate of LDCT after SDM suggests that a substantial proportion of enrollees are deciding against LDCT after SDM. Our study has some limitations. 
The results from enrollees aged 65 to 77 years with fee-for-service Parts A and B Medicare may not be generalizable to those in Medicare HMOs. In addition, the results from enrollees aged 55 to 64 years represent those with disability or end-stage renal disease. Shared decision-making has rapidly evolved from an abstract concept to mandated implementation. However, the clinical community has not adopted the CMS mandate for an SDM visit before LDCT screening.3 Inability or unwillingness to engage in SDM may contribute to the low overall use of LDCT screening and less awareness of its implications among eligible patients.4-6 Article Information Accepted for Publication: September 22, 2018. Corresponding Author: James S. Goodwin, MD, University of Texas Medical Branch, 301 University Blvd, Galveston, TX 77555-0177 ([email protected]). Published Online: January 14, 2019. doi:10.1001/jamainternmed.2018.6405 Author Contributions: Dr Zhou had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Concept and design: Goodwin, Nishi. Acquisition, analysis, or interpretation of data: Nishi, Zhou, Kuo. Drafting of the manuscript: Goodwin, Nishi. Critical revision of the manuscript for important intellectual content: Nishi, Zhou, Kuo. Statistical analysis: Nishi, Zhou. Administrative, technical, or material support: Nishi, Kuo. Conflict of Interest Disclosures: Dr Nishi reported being a consultant for Veran Medical Technologies; providing clinical user feedback on instrumentation, procedure workflow, system, and software applications; representing the company on panel discussions and workshops at industry meetings; and providing input for design of potential clinical studies and potential products and therapies for future interventional pulmonary products. No other disclosures were reported.
Funding/Support: This study was supported by Cancer Prevention and Treatment Institute of Texas grant RP160674 and by the National Institutes of Health grants K05 CA134923, P30 AG024832 and UL1TR001439. Role of the Funder/Sponsor: The funders/sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
So far, a wide variety of vapor phase reaction methods aimed at surface smoothing and similar processing of electronic devices have been developed and put into practical use. For example, the method of smoothing a substrate surface shown in Patent Reference 1 irradiates a substrate surface at a low angle with ions of monomer atoms or molecules of Ar (argon) gas and the like, and smooths it by sputtering. Moreover, in recent years, solid surface smoothing methods using a gas cluster ion beam have gained attention for causing little surface damage and producing very small surface roughness. For example, Patent Reference 2 discloses a method of reducing surface roughness by irradiating a gas cluster ion beam onto a solid surface. In this method, the gas cluster ions irradiated onto the object being processed are broken down by collisions with it, on which occasion many-body collisions arise between the constituent atoms or molecules of the cluster and those of the object being processed, and movement in a direction horizontal to the surface of the object becomes noticeable, as a result of which cutting is performed in a transverse direction with respect to the surface. This is a phenomenon called “lateral sputtering”. By further movement of particles in a lateral direction on the surface of the object being processed, the apices of the surface are planed off, the result being that atomic-scale, ultra-accurate polishing is obtained. In addition, the energy carried by the gas cluster ion beam differs from that of conventional ion etching in that, the energy being lower, no damage is inflicted on the surface of the object being processed, making possible the required ultra-accurate polishing.
This means that the solid surface smoothing method based on a gas cluster ion beam exhibits the advantage of causing less damage to the processed surface than the ion etching method shown in the aforementioned Patent Reference 1. For smoothing based on a gas cluster ion beam, it is generally recognized as desirable for the cluster ion beam to be irradiated from a nearly perpendicular direction with respect to the surface being processed. This is to make maximum use of the effect of “surface smoothing based on lateral sputtering” described previously. However, the aforementioned Patent Reference 2 describes that, in case the surface being processed is a curved surface or the like, the irradiation may be in an oblique direction in response to that surface condition, but there is no mention of the effect in the case of irradiation in an oblique direction. Consequently, in Patent Reference 2, it comes about that the most efficient method for the smoothing of a solid surface is one where the beam is irradiated from a nearly perpendicular direction with respect to that surface. Moreover, concerning the smoothing of a solid surface using a gas cluster ion beam, there is also an example in Patent Reference 3. There is no description in Patent Reference 3 either of the relationship between the angle formed between the gas cluster ion beam and the solid surface and the smoothing of the surface, so if one considers, from the disclosed description, that the “lateral sputtering” effect is used, one may conclude that data for perpendicular irradiation are shown, in the same way as in the previously indicated Patent Reference 2. In addition, there is also an account pertaining to the smoothing of a solid surface by gas cluster ion beam irradiation in Non-Patent Reference 1. Toyoda et al.
irradiated Ar cluster ions onto surfaces of materials such as Cu, SiC, and GaN and showed a reduction in surface roughness. In this case as well, the surfaces in the work presented were irradiated by a gas cluster ion beam from a nearly perpendicular direction. Moreover, Non-Patent Reference 2 describes the changes in the roughness of a solid surface when a gas cluster ion beam is irradiated at various irradiation angles with respect to the solid surface. If the case of perpendicular irradiation with respect to the solid surface is taken to be 90° and irradiation parallel to the surface is taken to be 0°, it is shown that the sputtering rate, which is the speed at which the surface is etched, is greatest for perpendicular irradiation, and the etching rate decreases as the irradiation angle decreases. Regarding the relationship between surface roughness and irradiation angle, tests were performed at irradiation angles of 90°, 75°, 60°, 45°, and 30°, and it was shown that the surface roughness increases as the irradiation angle decreases. No experimental investigation was carried out for irradiation angles below 30°, presumably because it was judged pointless to do so, since surface roughness increases as the irradiation angle is decreased. In addition, the majority of electronic devices, such as integrated circuits and the optical devices used in optical communications, have concavo-convex patterns prepared by microshaping in solid surfaces or thin film material surfaces, but there is no account of using a gas cluster ion beam for the smoothing of the side wall surfaces of the concave or convex portions of those patterns.
This is because it was believed that it is difficult to irradiate a gas cluster ion beam nearly perpendicularly to the side wall surfaces of concave portions or convex portions or that the smoothing of side wall surfaces is not possible with the lateral sputtering mechanism. As mentioned above, since, in the case of smoothing a solid surface by using a gas cluster ion beam, the surface roughness is the smallest when the irradiation angle of the gas cluster ion beam with respect to the solid surface is chosen to be 90°, and the surface roughness increases as the irradiation angle is decreased, it is not an exaggeration to say that no consideration has been given to cases other than making the irradiation angle nearly perpendicular. Patent Reference 1: Japanese Patent Application Laid Open No. 1995-58089. Patent Reference 2: Japanese Patent Application Laid Open No. 1996-120470. Patent Reference 3: Japanese Patent Application Laid Open No. 1996-293483. Non-Patent Reference 1: Japanese Journal of Applied Physics, Vol. 41 (2002), pp. 4287-4290. Non-Patent Reference 2: Materials Science and Engineering, R34 (2001), pp. 231-295.
4. Invest in a great pair of boots. “They’re actually my motorcycle boots. I don’t ride a motorcycle anymore because I almost got killed,” he said. After 45 years of riding, Macomber got into a horrific accident. “I’m titanium on my right side, from the hip down … I’m done,” he said. “My wife doesn’t deserve another call like that.” When he woke up in the hospital, “heavily drugged” with anesthesia, he was touched to find friends and family watching over him. “One of the first things people came and asked me was, ‘Will you still be able to do Santa this year?’ ” He said goodbye to his totaled Harley and kept the boots. 5. You don’t need a ‘personal brand’ when you have passion. Brands and labels are for toys, not people. A good kid can be naughty, and a naughty one can learn to be nice. (And some people can be both. “I’m gonna tell you right now,” Santa quipped to a few grown-ups on the plaza the night of the El Cajon parade, “naughty girls get Jaguars.”) So Macomber may make a great Santa Claus, but here’s a little secret: He also makes amazing port. “For Thanksgiving Day, I made chocolate-orange port,” he said. He’s also made a cabernet franc ice wine, with grapes picked in Niagara at minus 13 degrees and shipped to his home in El Cajon. Might as well be the North Pole, it came out that yummy, he said. A Harley-riding, port-making Santa? Who’d have thought? Some people say Macomber lacks credentials because he only has a mustache. (Yes, the beard is fake.) “They had a convention here last year. For bearded Santas only. I said, ‘No, no, I don’t have one.’ They said, ‘You can’t come.’ ” Guess what? They had their meeting; he didn’t give it a second thought — and now he’s still Santa. The take-away? You can be guided by conventional labels. “I’m Santa, so I have to grow a beard,” or “I’m a mom, so I have to bake 36 cupcakes because that’s what moms do.” Or you could make your own label. 6. Remember to thank those who enrich your life. 
Who knows, you might be just as successful or awesome without them. But you probably wouldn’t be as happy. When he’s not Santa, Macomber has a blast with his wife, Jane. They contribute to charity events themed around civic pride, rock music, dancing, wine. He got married in a kilt. A few weeks ago, he took his wife and her friends to a tea party at Jamul Haven, for “sandwiches and desserts, fresh and handmade there,” he said. “We had just a delightful time.” So what’s on Santa’s wish list this Christmas? “The end of the fiscal cliff,” he proclaimed. But he’ll settle for good health and more time with his loved ones. “Just another great year. Happiness with my wife, family. All my friends. A repeat of last year.”
Description & Features Your little dreamer will have lots of fun discovering all of the enchanted places in Elsa's Frozen Castle with Elsa! Flip the floor to reveal a bed, or flip the curtains to reveal Elsa's favorite friends, Anna and Olaf, on the wall. Pushing the seat of the throne will make crystals rise from the top! Frozen fans can also switch up Elsa's look! She comes with 2 movie-inspired styles -- her iconic snow queen gown and her coronation outfit -- that feature a total of 2 bodices, 2 skirts, 2 peplums, and 2 capes. Your little dreamer will have so much fun decorating her Frozen character with mix-and-match Snap-ins pieces and outfits (sold separately). Copyright Disney. Hasbro and all related terms are trademarks of Hasbro.
Ohio Bureau of Motor Vehicles locations may soon accept credit and debit cards for the first time, under a provision in the state's transportation budget. (Associated Press file) COLUMBUS, Ohio -- Ohio residents may soon be able to use a credit or debit card to pay for driver's licenses or other fees at their local Bureau of Motor Vehicles office. Ohio's two-year transportation budget bill, passed by the Ohio House earlier this week, would allow the state's 192 BMV locations to accept plastic by July of 2016, as long as cardholders pay an additional service fee. Currently, the BMV allows customers to use credit or debit cards to pay for transactions done online or over the phone. But at BMV locations, only checks and cash are accepted. That's because until recently, Visa and MasterCard effectively prohibited public officials from passing on the cost of credit-card fees to customers in face-to-face transactions. But those policies were changed under a 2013 court settlement. The change would be an added convenience for motorists, said Theresa Podguski, director of legislative affairs for AAA East Central, which covers Northeast Ohio. Podguski noted that motorists who don't want to be charged a credit-card service fee would still be able to pay by cash or check. Don Petit, the Ohio BMV's registrar, said customers have complained for years about not being able to use credit or debit cards. "One of the long-running jokes is 'Where's the one place on Earth you can't use your credit card? At the BMV,'" Petit said. The transportation budget, House Bill 53, must now pass the Ohio Senate before heading to Gov. John Kasich for his signature.
[Satiation and satiety in the regulation of energy intake]. The study of the factors that regulate high energy food intake is especially relevant nowadays due to the high prevalence of overweight and obesity. Food intake regulation can be divided into two basic processes, namely satiation and satiety. Satiation is the process that determines the moment at which feeding stops; it regulates the amount of food ingested during a single meal. Satiety is the interval between meals; it regulates the time elapsed between two meals. The longer the interval, the lower the energy intake. Each of these processes is regulated by different factors, which are reviewed here.
LIBRARY NTLANMAN.dll
EXPORTS
    NPAddConnection @17
    NPAddConnection3 @38
    NPCancelConnection @18
    NPCloseEnum @35
    NPFormatNetworkName @36
    NPGetCaps @13
    NPGetConnection3 @54
    NPGetConnectionPerformance @49
    NPGetResourceInformation @52
    NPGetResourceParent @41
    NPGetUser @16
    DllMain
    I_SystemFocusDialog
    QueryAppInstanceVersion
    RegisterAppInstance
    RegisterAppInstanceVersion
    ResetAllAppInstanceVersions
    SetAppInstanceCsvFlags
class ApplicationController < ActionController::Base
  before_filter :auth
  protect_from_forgery with: :exception

  def current_user
    if session[:github_user_id]
      User.where(:github_id => session[:github_user_id]).first
    end
  end

  private

  def auth
    unless session[:github_user_id]
      session[:return_to] = request.path
      redirect_to "/auth/github"
    end
  end
end
Ellis Barker Wine Cooler As many of you know, I LOVE Ellis Barker silver. Offered is a fabulous silver plated wine cooler with incredible hand engraved initials, "NROD", in a beautiful script. This piece is very heavy. It measures 10" tall and the top is 8" wide. There is a beautiful gadroon pedestal foot. There is a rope and bead border at the top. Embellished handles and body. The base has the EB menorah symbol for 1912 and the word England on the opposite side. The interior is very clean. This is not only for wine or champagne; it would be beautiful with a flower arrangement in it, or have an orchid sit in it. Love, love, love this piece!!!! Item ID:1493
Solar Thermal
Harvesting and storing the sun's heat energy
This is what often springs to mind when people mention solar panels. While solar photovoltaic (PV) panels use energy from the sun to generate electricity, solar thermal panels use it to heat water for your home. A solar thermal system works alongside conventional water heaters. Solar water heating systems use solar panels, called collectors, fitted to your roof. These collect heat from the sun and use it to warm water, which is stored in a domestic hot water cylinder or thermal store. There are two types of solar water heating panels: evacuated tubes and flat plate collectors. Flat plate collectors can be fixed on the roof tiles or integrated into the roof. Both systems are highly efficient and will produce up to 100% of your domestic hot water needs during the summer months. During winter, a boiler or immersion heater can be used as a backup to supplement the solar water heating system when conditions do not allow it to reach the desired temperature (normally 60ºC). Larger solar panels can also provide energy to heat your home as well – though usually only in the summer months, when home heating is unnecessary.
Media caption Some of the biggest global offshore banking centres are in Asia Tax has never been so sexy. Chances are by now you know all the gory details - allegations in the Panama Papers that the super-rich and politically connected, and even some of their relatives, have moved hundreds of thousands of dollars from their own countries into offshore accounts in Panama, Hong Kong and Singapore, amongst other places. A lot of the international spotlight has been centred on the practice of offshore banking. Some of the biggest global offshore banking centres can be found in Asia - Singapore, Macao, Dubai and Hong Kong, for example, are amongst the top spots for the global super-rich looking to open an offshore account. The practice in itself isn't illegal, but Asian capitals have been under pressure to share more information about who account holders are, and where the money comes from. So will the Panama Papers force more governments to become more transparent about tax? Image copyright Getty Images Image caption There is often a thin line between tax avoidance and tax evasion Unlikely, says Andy Xie, an independent economist based in China and Hong Kong. "In Asia it's about how to hide your wealth that often hasn't been legitimately acquired," says Mr Xie. "Political power and ill-gotten wealth go hand in hand here. "How are you going to convince people to close these doors?" Evasion v. avoidance Now let's be clear - setting up an offshore account or an offshore company is perfectly legal. But here's where it gets complicated. There is a difference between tax evasion and tax avoidance. And the devil is in the details. Tax evasion, according to Paul Lau, tax partner with professional services firm PricewaterhouseCoopers (PwC), is when "someone has income to report and then doesn't report it."
Image copyright Getty Images Image caption Singapore is one of a number of major financial centres in Asia So if you have income in that offshore account, that you haven't declared to tax authorities back in your home country, and you are required to report that income to them - then that could be illegal. But tax avoidance is something a bit more "nebulous", as Mr Lau puts it. "Tax avoidance is taking advantage of certain tax provisions in a way that is not within the intent of the provision, to avoid paying tax." So that means - if you've found a perfectly legal way to avoid paying taxes because of a provision in the tax system - well, then depending on the country, you may not be doing anything illegal at all. Lots of hedges and provisos here, but that's sort of the point. "The world is dotted with states and territories that make a speciality of providing services whose purpose is to facilitate ways to hide assets," says anti-corruption advocacy group Transparency International. Image copyright Getty Images Image caption The so-called Panama Papers were leaked from law firm Mossack Fonseca Activists say it is time for these countries to reform the secret world of finance they operate and become more transparent. "The enablers - the accountants, the lawyers, the business formation people - they're all involved," says Transparency International's Casey Kelso. "They are all getting a great deal of money as a percentage of these profits from these transactions." 'Serious view' But reforming these offshore banking centres won't be easy. This sort of business attracts billions of dollars for offshore banking centres every year, and it's not just from individuals. Massive profit-making corporations often set up shop in these centres to pay less tax as well. Google, Apple, Microsoft, BHP Billiton and Rio Tinto - they're all household names - and all have admitted to being under audit by Australian tax authorities for using Singapore as a marketing and service hub. 
They report hundreds of millions of dollars of income in Singapore, but pay lower tax on their money there than they would back in Australia, because of Singapore's lower tax rates. The companies say they're not doing anything wrong, because Singapore is an important hub for them. But Australia says if money was earned from business done in Australia, tax should be paid there. Image copyright Getty Images Image caption People have long sent money overseas to try to limit the taxes they have to pay Both Singapore and Hong Kong have said they take a serious view of tax evasion and support international efforts to tackle cross-border transgressions. The government here has been quick to point out its efforts to clamp down on any illegal activities. "Singapore takes a serious view on tax evasion and will not tolerate its business and financial centre being used to facilitate tax related crimes," the Monetary Authority of Singapore said in a statement. Singapore's Ministry of Finance added: "We are reviewing the information being reported in connection with the so-called Panama Papers and are doing the necessary checks. "If there is evidence of wrongdoing by any individual or entity in Singapore, we will not hesitate to take firm action." In fact, many Asian countries have committed to exchange more tax information by 2018 as part of the Automatic Exchange of Information initiative set up by the OECD. Singapore, Japan, Hong Kong and Australia have all signed up. So if you're an Australian and you open a bank account in Singapore, by 2018 in theory, your government could know about it. But critics say there's no incentive for countries who depend on offshore banking to do this. In fact, their business depends on keeping things secret. "The livelihoods of these offshore financial centres depend on giving their clients confidentiality," says Mr Xie. "Otherwise why would people hide their money there?" In the end, it's all about who goes first. 
Countries want a level playing field, because if one offshore banking centre starts opening itself to greater scrutiny, there's a very good chance their wealthy customers will flee, running to the next most secret place to park their cash. And as we all know, where there's demand, there will always be a ready supply.
His diversity made him a crossover star and a ’90s R&B icon. Muslims form a large part of the world population and have some of the loveliest names. It means ‘exalted or highest social standing’. [Read: Beautiful Baby Girl Names] Aasma is a wonderful name with an interesting meaning – ‘excellent’. If you are mainly looking for a religious name, try Amtullah. Anan is a fresh, adorable, and cute Muslim girl name, meaning ‘cloud’. If you like names that have a vintage aura, then you’re likely to adore this name. The Muslim name Falak means ‘sky full of light and beauty’. Fareeda, meaning ‘gem or precious pearl’, is a one-of-a-kind Muslim baby girl name. With a name such as Fariah, meaning ‘friend or companion’, maybe your daughter can become someone’s best friend forever. It is a unique and modern Muslim girl name, perfect for modern parents. Muslims believe that they are called by their names on the Judgement Day, and so baby names have to be meaningful, pleasant, and good. MomJunction has compiled a list of many such meaningful Muslim baby girl names for your little princess. Aa’eedah is a lilting baby girl name, meaning ‘returning or reward’. Aaleyah is a nice twist on the now common name Aaliyah. It was increasingly popular in the 60s, and reigns today as an old classic. This shimmery Muslim name, used widely in the Arab community, is the feminine form of Amir and means ‘exalted’. No. 1 singles: “Fortunate” (1999, R&B and Urban A/C) Edd said: Man, forget Cool James, ladies LOVED Maxwell in the ’90s. No. 1 singles: “You Make Me Wanna…” (1997, R&B); “Nice & Slow” (1998, R&B) Edd said: Before you Usher fanatics cram my inbox, chill.
No. 1 singles: “Tell Me What You Want Me To Do” (1991, R&B); “Alone With You” (1992, R&B); “Can We Talk” (1993, R&B) Edd said: Tevin practically owned the first half of the decade, effortlessly cranking out hits and serving as R&B’s teen heartthrob. Ursher dominated R&B in the 2000s, but he was just getting his feet wet in the ’90s. He lost a lot of steam as the decade closed out, though. ____________________________________________________________________________________________ Albums: For The Cool In You (1993, 3x platinum), The Day (1996, 2x platinum), Christmas With Babyface (1998) No. 1 singles: “Love Makes Things Happen” (1990, R&B) Edd said: When it came to his own material, Face was all about quality, not quantity in the ’90s. ____________________________________________________________________________________________ Albums: I’ll Give All My Love To You (1990, 2x platinum); Keep it Comin’ (1991, platinum); Get Up On It (1994, platinum); Keith Sweat (1996, 4x platinum); Still In The Game (1998, platinum) No. Hadiya has a sentimental and touching meaning for a newborn: ‘gift’.
mkdir public – this will be my webroot folder, and the destination (public/assets/css, public/assets/js) for the output files Node.js will generate for us.
mkdir client – this is where my source files will be.
npm init – creates a package.json file for our project.
npm i <module_name> -D – this is how I install the node modules I need. The -D option will record the module in our package.json as a dev dependency.
Once all the modules I need are installed, I open up my package.json file and write the scripts I will be using to monitor and compile my CSS and JS files. The gist is to output two files, bundle.js and bundle.css, where all dependencies and custom code are concatenated into one file each and compressed.
build-css – compiles all my scss files and compresses them into one bundle.css file. I also included a folder where I put my other scss snippet files.
watch-css – monitors all .scss files specified in the watch-css command, then runs build-css once changes are saved on any of the files.
build-assets – builds my js and css files.
watch-assets – monitors my scss and js files and compiles them once changes have been made.
Once I have this set up, I simply run npm run watch-assets while I am developing, to automatically compile all my assets while I code. It’s a great time saver and cleans up my code a whole lot, especially since I only have to include one .js file and one .css file in my html files for all the dependencies and plugins I use.
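For reference, a minimal sketch of what such a scripts section in package.json could look like. The specific tools (node-sass, clean-css-cli's cleancss, onchange, browserify, uglify-js) and file paths here are assumptions for illustration only; the post doesn't name the modules it actually installed:

```json
{
  "scripts": {
    "build-css": "node-sass client/scss/main.scss | cleancss -o public/assets/css/bundle.css",
    "watch-css": "onchange \"client/scss/**/*.scss\" -- npm run build-css",
    "build-js": "browserify client/js/main.js | uglifyjs -c -m -o public/assets/js/bundle.js",
    "watch-js": "onchange \"client/js/**/*.js\" -- npm run build-js",
    "build-assets": "npm run build-css && npm run build-js",
    "watch-assets": "npm run watch-css & npm run watch-js"
  }
}
```

With a layout like this, npm run build-assets produces both bundles once, while npm run watch-assets keeps the watchers running during development (the single & runs them in parallel on Unix-like shells; a helper such as npm-run-all would be the portable alternative).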
"use strict";

const t = require("../../");
const stringifyValidator = require("../utils/stringifyValidator");
const toFunctionName = require("../utils/toFunctionName");

const NODE_PREFIX = "BabelNode";

let code = `// NOTE: This file is autogenerated. Do not modify.
// See packages/babel-types/scripts/generators/flow.js for script used.

declare class ${NODE_PREFIX}Comment {
  value: string;
  start: number;
  end: number;
  loc: ${NODE_PREFIX}SourceLocation;
}

declare class ${NODE_PREFIX}CommentBlock extends ${NODE_PREFIX}Comment {
  type: "CommentBlock";
}

declare class ${NODE_PREFIX}CommentLine extends ${NODE_PREFIX}Comment {
  type: "CommentLine";
}

declare class ${NODE_PREFIX}SourceLocation {
  start: {
    line: number;
    column: number;
  };

  end: {
    line: number;
    column: number;
  };
}

declare class ${NODE_PREFIX} {
  leadingComments?: Array<${NODE_PREFIX}Comment>;
  innerComments?: Array<${NODE_PREFIX}Comment>;
  trailingComments?: Array<${NODE_PREFIX}Comment>;
  start: ?number;
  end: ?number;
  loc: ?${NODE_PREFIX}SourceLocation;
}\n\n`;

const lines = [];

for (const type in t.NODE_FIELDS) {
  const fields = t.NODE_FIELDS[type];

  const struct = ['type: "' + type + '";'];
  const args = [];

  Object.keys(t.NODE_FIELDS[type])
    .sort((fieldA, fieldB) => {
      const indexA = t.BUILDER_KEYS[type].indexOf(fieldA);
      const indexB = t.BUILDER_KEYS[type].indexOf(fieldB);
      if (indexA === indexB) return fieldA < fieldB ? -1 : 1;
      if (indexA === -1) return 1;
      if (indexB === -1) return -1;
      return indexA - indexB;
    })
    .forEach(fieldName => {
      const field = fields[fieldName];

      let suffix = "";
      if (field.optional || field.default != null) suffix += "?";

      let typeAnnotation = "any";

      const validate = field.validate;
      if (validate) {
        typeAnnotation = stringifyValidator(validate, NODE_PREFIX);
      }

      if (typeAnnotation) {
        suffix += ": " + typeAnnotation;
      }

      args.push(t.toBindingIdentifierName(fieldName) + suffix);

      if (t.isValidIdentifier(fieldName)) {
        struct.push(fieldName + suffix + ";");
      }
    });

  code += `declare class ${NODE_PREFIX}${type} extends ${NODE_PREFIX} {
  ${struct.join("\n  ").trim()}
}\n\n`;

  // Flow chokes on super() and import() :/
  if (type !== "Super" && type !== "Import") {
    lines.push(
      `declare function ${toFunctionName(type)}(${args.join(
        ", "
      )}): ${NODE_PREFIX}${type};`
    );
  }
}

for (let i = 0; i < t.TYPES.length; i++) {
  let decl = `declare function is${
    t.TYPES[i]
  }(node: ?Object, opts?: ?Object): boolean`;

  if (t.NODE_FIELDS[t.TYPES[i]]) {
    decl += ` %checks (node instanceof ${NODE_PREFIX}${t.TYPES[i]})`;
  }

  lines.push(decl);
}

lines.push(
  `declare function validate(n: BabelNode, key: string, value: mixed): void;`,
  `declare function clone<T>(n: T): T;`,
  `declare function cloneDeep<T>(n: T): T;`,
  `declare function removeProperties<T>(n: T, opts: ?{}): void;`,
  `declare function removePropertiesDeep<T>(n: T, opts: ?{}): T;`,
  `declare type TraversalAncestors = Array<{
    node: BabelNode,
    key: string,
    index?: number,
  }>;

  declare type TraversalHandler<T> = (BabelNode, TraversalAncestors, T) => void;

  declare type TraversalHandlers<T> = {
    enter?: TraversalHandler<T>,
    exit?: TraversalHandler<T>,
  };`.replace(/(^|\n) {2}/g, "$1"),
  // eslint-disable-next-line
  `declare function traverse<T>(n: BabelNode, TraversalHandler<T> | TraversalHandlers<T>, state?: T): void;`
);

for (const type in t.FLIPPED_ALIAS_KEYS) {
  const types = t.FLIPPED_ALIAS_KEYS[type];
  code += `type ${NODE_PREFIX}${type} = ${types
    .map(type => `${NODE_PREFIX}${type}`)
    .join(" | ")};\n`;
}

code += `\ndeclare module "@babel/types" {
  ${lines
    .join("\n")
    .replace(/\n/g, "\n  ")
    .trim()}
}\n`;

process.stdout.write(code);
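The field-ordering comparator in the generator (builder-key order first, then everything else alphabetically after it) can be illustrated with a small standalone sketch. The BUILDER_KEYS and field names below are made-up examples, not Babel's actual data:

```javascript
// Hypothetical data: three fields have builder positions, one ("extra") does not.
const BUILDER_KEYS = ["object", "property", "computed"];
const fields = ["computed", "extra", "object", "property"];

// Same rule as the generator: fields found in BUILDER_KEYS keep their
// builder order; fields not found there (indexOf === -1) sort after them.
const sorted = fields.slice().sort((fieldA, fieldB) => {
  const indexA = BUILDER_KEYS.indexOf(fieldA);
  const indexB = BUILDER_KEYS.indexOf(fieldB);
  if (indexA === indexB) return fieldA < fieldB ? -1 : 1;
  if (indexA === -1) return 1;
  if (indexB === -1) return -1;
  return indexA - indexB;
});

console.log(sorted); // ["object", "property", "computed", "extra"]
```

This keeps the generated builder-function signatures in the same argument order as t.BUILDER_KEYS while still giving the leftover fields a stable position.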
Space Tourism: An 'Adventure Sport' In the Making

ARLINGTON, Virginia – To help advance the evolution of passenger-carrying space vehicles, the U.S. government is shaping an experimental-class permit to help designers trial-run such craft. Furthermore, policy makers are exploring ways to streamline environmental regulations that can be costly and retard the progress of fledgling public space travel operators. That's the word from Patricia Grace Smith, Associate Administrator for Commercial Space Transportation within the Department of Transportation's Federal Aviation Administration (FAA). Smith's office is responsible for licensing, regulating and promoting the U.S. commercial space transportation industry – including the evolving public space travel sector. Read the whole story at Space.com.
//---------------------------------------------------------------------------
// Form grabber data filtering module
//
// IMPORTANT: This module is linked into the project only if it is invoked
// from the Modules.h module
//---------------------------------------------------------------------------
#ifndef FgrFiltersH
#define FgrFiltersH
//---------------------------------------------------------------------------
#include <windows.h>

#define FGRFILTER_PARAM_SIZE_URLS 3000
#define FGRFILTER_PARAM_SIZE_DATAMASK 3000

#define FGRFILTER_PARAM_NAME_URLS "FGR_URL_FILTERS\0"
#define FGRFILTER_PARAM_NAME_DATAMASK "FGR_PARAMS_FILTERS\0"

#ifndef DEBUGCONFIG
#define FGRFILTER_PARAM_ENCRYPTED_URLS true
#define FGRFILTER_PARAM_ENCRYPTED_DATAMASK true
#else
#define FGRFILTER_PARAM_ENCRYPTED_URLS false
#define FGRFILTER_PARAM_ENCRYPTED_DATAMASK false
#endif

//-------------------------------------------------
// FiltratePostData - filters POST data and returns
// true if it needs to be sent to the server
//-------------------------------------------------
bool FiltratePostData(const char* URL, const char* Data);

//-------------------------------------------------
// FiltrateFormGrabberURL - returns true if the
// URL is supported by the form grabber
//-------------------------------------------------
//bool FiltrateFormGrabberURL(PCHAR URL);

//-------------------------------------------------
// FiltrateFormGrabberData - returns true if the
// data passed filtering
//-------------------------------------------------
//bool FiltrateFormGrabberData(const char* Data);
//---------------------------------------------------------------------------
#endif
How to be wrong (for beginners) Being wrong is quite easy, though it requires some effort—at first. This short introduction may help you to overcome some of the initial hurdles. Let's start with the easiest part: just say something. Look around you, watch things and people move and talk. Say something about the things and people you see around you. Next comes the tricky part. Say something while someone else stands close to you — in other words, so that they can hear you saying it. They may suddenly look at you and frown, but that's just right! I lied a bit about the tricky part, sorry. Nevertheless — it was a rather empowering experience to finish the supposedly difficult assignment so effortlessly, wasn't it? Well, you're there; the learning process has already started. You said something. Someone listened. If you're lucky enough, someone disagrees with your very valid observation. (If you're stuck at this step, try expressing an opinion. However, try as much as you can to avoid that rather dangerous path.) Now comes the tricky part. Look at your fellow. Their worried eyes, wrinkled forehead, raised eyebrows. You may discover a resemblance between that face and your own — the face you're well used to seeing in the morning. Just the toothbrush is missing, and that's all right. They're human as much as you are. And that human disagrees with your perfectly well-suited description of what's happening. What's wrong with them? There has to be something wrong with them! And now, back to you. Imagine looking at yourself, your tense forehead, eyebrows lifted and eyes already nervous. Yes, we left your body standing there and we're looking at it in this suddenly transcendental experiment. Why? Look at your face again. That's exactly what your companion sees, looking at you. That's exactly what they see, thinking about what's wrong with you! Doesn't that remind you of something? If this was too Michael Bay for you, I'm sorry about it.
The camera stops rotating around those two people looking at each other and you can safely return to your body. Yet, what has been seen cannot be unseen: two people each think there's something wrong with the other one. What are the odds that one of them is right every time? The statistics might speak harshly, but the odds are quite low. With a little help from the magical science of statistics, try to imagine that in some cases you're the one who is wrong. It's possible. I might have to repeat that in italics: imagine that you're the one who is wrong. Stretch your imagination. A little bit of pain is to be expected. You're at the point where it's only you. Then again, you're too far in the forest to come back without a catch. You can win this. Close your eyes and feel the force! Embrace the structure of the tesseract! Imagine you can be wrong. Made it? This is how it's done. After some discussion with that other person, you might arrive at the conclusion that in fact you had been wrong. Wow! Such an obviously implausible proposition just a few minutes ago. Your imagination just opened a whole new world of possibilities. In any future case, just try to imagine you're the one who's wrong. Sometimes you'll even find that you were right. What a feeling! At the same time, you will avoid the deadlock of the doubly asked question: "What's wrong with that idiot?" It's really your imagination that opened a magical door in the seemingly seamless wall of a deadlock situation.
Q: Server-driven Ajax framework similar to jQuery Mobile Is there any other server-driven Ajax framework similar to jQuery Mobile (one that can fetch a complete page without a refresh and update the UI), for a normal Ajax web application instead of a mobile web application? Thanks. A: There's nothing at all stopping you using jQuery Mobile for a 'desktop' web application rather than a mobile web application. You may need to tweak the CSS slightly.
460 N.E.2d 168 (1984)
STATE of Indiana, Plaintiff-Appellant, v. David G. MOUNTS, Defendant-Appellee.
No. 1-883A244.
Court of Appeals of Indiana, First District.
February 27, 1984. Rehearing Dismissed March 30, 1984.
Linley E. Pearson, Atty. Gen. of Ind., Michael Gene Worden, Deputy Atty. Gen., Indianapolis, for plaintiff-appellant. George C. Barnett, Sr., Barnett & Barnett, Evansville, for defendant-appellee.
NEAL, Presiding Judge.
STATEMENT OF THE CASE
The State of Indiana appeals a decision of the Vanderburgh Circuit Court dismissing *169 the information charging David G. Mounts with arson. We reverse.
STATEMENT OF THE FACTS
The record reflects that a Vanderburgh County grand jury, after an investigation of an alleged arson in which Mounts was a target defendant, returned a "no-bill" on September 1, 1982. Thereafter, on November 30, 1982, the prosecuting attorney commenced an arson prosecution on the same facts by information. The trial court dismissed the information based upon the above agreed facts. The sole issue on appeal is whether the prosecuting attorney may commence a criminal prosecution by information after a grand jury, considering the same facts, returns a "no-bill".
DISCUSSION AND DECISION
The grand jury exists at the sufferance of the legislature. Ind. Const. art. VII, sec. 17. All criminal prosecutions may be charged by either indictment or information. IND. CODE 35-34-1-1(a). The format for the commencement of criminal proceedings, established in the Constitution of 1851, has continued little changed until the present date. Beginning with Lankford v. State, (1896) 144 Ind. 428, 43 N.E. 444, the Indiana Supreme Court has held that the fact that the grand jury failed to indict does not preclude the prosecuting attorney from thereafter commencing a prosecution by information. This rule was thereafter followed in Hall v. State, (1912) 178 Ind. 448, 99 N.E. 732; and State v. Roberts, (1906) 166 Ind. 585, 77 N.E. 1093.
Indiana's position accords with the majority rule as stated in 42 C.J.S. Indictment and Information Sec. 72 (1944) as follows: "In the absence of constitutional or statutory provisions to the contrary, the acts of the grand jury with respect to the findings of an indictment, are not binding on the prosecuting attorney with respect to his filing an information, and an information may be filed, although the grand jury has investigated the case and refused or failed to find an indictment." The same rule is recited in Annot. 120 A.L.R. 713 (1939). Mounts supports the court's decision with State v. Boswell, (1885) 104 Ind. 541, 4 N.E. 675, a case which was specifically overruled on this very point by Lankford v. State, supra, because "... it interpolates into the statute a condition inconsistent with its plain provisions". Lankford, supra, at 445. He further cites cases from other jurisdictions supporting the minority rule. Mounts devotes a large amount of his argument discussing the historic function of the grand jury as guardian of the people from depredations of oppressive prosecuting attorneys. The reader is referred to State v. Roberts, supra, which considered those arguments and the waning eminence of the grand jury system. See also King v. State, (1957) 236 Ind. 268, 139 N.E.2d 547. Finally, Mounts attempts to distinguish Lankford, supra, Roberts, supra, and Hall from Boswell, supra, and the instant case. In the first three cases, a difference existed in that the grand jury was either discharged without returning an indictment or it returned a different indictment. Conversely, in Boswell and the present case, the grand jury actually returned a "no-bill". We see no material difference in the former situation and one in which a "no-bill" was returned. A grand jury is an inquisitional and not a judicial body, King v. State, supra, and its acts are not res judicata. State v. Boswell, supra. 
No good purpose would be served by exploring the philosophical arguments or in making an analysis of the authorities supporting the minority position. Suffice it to say that the court in Lankford, supra, Hall, supra, and Roberts, supra, was explicit. Though old, these cases have not been overruled or modified by the Supreme Court or the legislature, even though a number of minor modifications in the statutes permitting prosecution by indictment or information have occurred. It is a rule of statutory construction that when a statute *170 has been construed by a court of last resort and is later re-enacted in substantially the same terms, the legislature may be deemed to have intended the same construction. State v. Dively, (1982) Ind. App., 431 N.E.2d 540 (trans. denied). That rule is applicable here. If the legislature had intended that the failure of a grand jury to indict precluded further prosecution, it would have said so. For the above reasons, this cause is reversed and the trial court is ordered to reinstate the information. Judgment reversed. ROBERTSON and RATLIFF, JJ., concur.
IN THE COURT OF CRIMINAL APPEALS OF TEXAS
NO. AP-75,169
EX PARTE CHARLES EDWARD WRIGLEY, Applicant
ON APPLICATION FOR WRIT OF HABEAS CORPUS FROM THE 52ND JUDICIAL DISTRICT COURT OF CORYELL COUNTY
Keller, P.J., delivered the opinion of the Court in which MEYERS, PRICE, JOHNSON, KEASLER, HERVEY, HOLCOMB, and COCHRAN, JJ., joined. WOMACK, J., concurred in the result.
O P I N I O N
This case presents the novel issue of whether an original sentence is completed and a stacked sentence begins to run at the time the defendant makes parole on the original offense, if his parole is revoked before the trial court sentences the defendant for the stacked offense. (1) We hold that it does not.
I. FACTS
Applicant was sentenced to a twenty-year term in the Institutional Division of the Texas Department of Criminal Justice (TDCJ) for possession of a controlled substance, committed in 1991. On August 11, 1992, while in the custody of TDCJ, applicant assaulted a prisoner. On November 12, 1993, while awaiting trial on the aggravated assault, applicant was paroled on the possession offense. While he was on parole, he was arrested on June 22, 1994, for possession of a controlled substance and pleaded guilty to the offense, for which he was sentenced to twenty-five years in TDCJ. His parole on the first possession offense was subsequently revoked. On May 23, 1996, applicant pleaded guilty to the aggravated assault offense pursuant to a plea bargain for a term of seven years confinement in TDCJ. The trial court ordered the seven-year sentence to run consecutive to the initial twenty-year sentence applicant had received for the first possession offense. The judgment indicates applicant was to receive pre-sentence time credit from November 12, 1993, on the seven-year sentence.
Applicant alleges that the seven-year sentence was to run from November 12, 1993, the same date on which he was paroled for the first possession offense, so that he would have discharged the seven-year sentence on November 12, 2000. By applicant's calculation, he was entitled to mandatory release on November 12, 2000, on the twenty-year sentence and to mandatory release on July 18, 2004, on the twenty-five year sentence. Applicant contends that he has been detained past his mandatory release date because TDCJ improperly stacked the seven-year sentence on the twenty-five year sentence in violation of the plea agreement. The State agrees that applicant's seven-year sentence is to run consecutive only to the twenty-year sentence for the first possession offense. In its response, the State claims that TDCJ's records correctly reflect the trial court's order to stack the seven-year sentence on the twenty-year sentence. Thus, TDCJ did not improperly cumulate the seven-year and twenty-five year sentences, and applicant's plea agreement has not been violated. We begin by analyzing the applicable law. II. ANALYSIS A. Article 42.08(b) (2) Applicant contends that a stacked sentence begins to run on the date an inmate makes parole on the original offense, even if his parole is revoked before the court sentences him for the stacked offense. If applicant is correct, his seven-year stacked sentence began to run on November 12, 1993, the date he made parole on the original possession offense. The trial court ordered applicant's seven-year sentence for aggravated assault, committed while in TDCJ, to run consecutive to his twenty-year sentence for the original possession offense. 
The court was required to stack these sentences under Article 42.08(b), which provides: If a defendant is sentenced for an offense committed while the defendant was an inmate in the institutional division of the Texas Department of Criminal Justice and the defendant has not completed the sentence he was serving at the time of the offense, the judge shall order the sentence for the subsequent offense to commence immediately on completion of the sentence for the original offense. In Ex Parte Kuester, this Court held that "completion of the sentence" in Article 42.08(b) has the same meaning as "ceases to operate" in Article 42.08(a). (3) Government Code § 508.150(b) (4) defines "ceases to operate" for purposes of Article 42.08 as the date on which the original sentence is served out in actual calendar time or the date on which a parole panel approves the inmate for parole release. (5) Serving a sentence out in "actual calendar time" means serving the sentence in full, day-for-day, until discharge. (6) So the date a sentence is completed is the date it is served out in full, day-for-day, until discharge, or the date the defendant makes parole on the original offense. Moreover, Article 42.08(b) requires a stacked sentence for the offense the defendant committed in TDCJ while serving his sentence for the original offense, if he has not completed the original sentence at the time of sentencing for the stacked offense. The phrase, "has not completed," is in the present perfect tense, the tense used to describe actions that began in the past and continue in the present. (7) The statute requires the sentencing court to stack the subsequent offense, committed in TDCJ, if the defendant did not complete the original sentence in the past and has not completed it in the present. The present time for purposes of Article 42.08(b) is the time of sentencing for the stacked offense. 
The defendant has not completed the original sentence under Article 42.08(b), if at the time of sentencing for the stacked offense, the defendant has not served the original sentence in full, day-for-day, until discharge, or he has not made parole on the original offense. And, the defendant has not "made parole" on the original offense, if his parole is revoked prior to being sentenced for the stacked offense, because his original sentence is still in operation, as he is serving his remaining sentence. (8) We therefore hold that, under Article 42.08(b), a stacked sentence does not begin to run on the date the defendant makes parole on the original offense if his parole is revoked before the trial court sentences the defendant for the stacked offense. Consequently, applicant's seven-year, stacked sentence for the aggravated assault did not begin to run on November 12, 1993, the date he made parole on the original possession offense, because his parole was revoked before the trial court sentenced him for the stacked offense. B. Voluntariness of the Plea Applicant alleges that he agreed to a seven-year sentence for the aggravated assault offense, to begin to run on November 12, 1993, and thus it violates his plea agreement for the seven-year sentence to begin to run after November 12, 1993. Yet nothing in the record supports his allegation that the seven-year sentence must run from November 12, 1993, as part of the plea agreement. The judgment for the aggravated assault reflects that the plea bargain was confinement for seven years in TDCJ. The court also ordered pre-sentence time credit from November 12, 1993, on the seven-year sentence. But pre-sentence time credit has nothing to do with the stacking order. A person who is arrested will get credit for time he spends in jail prior to bonding out, (9) but that does not mean that the date of his arrest is the date his sentence begins. 
Here, since the plea agreement does not include a provision relating to stacking or specify that his sentence begins to run on a specific date, applicant has failed to establish that his plea was involuntary. We reject applicant's involuntary plea claim. C. Cumulation of the Sentences Applicant also alleges that TDCJ violated his plea agreement by improperly cumulating applicant's seven-year and twenty-five year sentences, and, as a result, he is being held past his mandatory supervision date. TDCJ's records, however, reflect that TDCJ properly cumulated the twenty-year and seven-year sentences, per the trial court's stacking order. We therefore reject applicant's claim that TDCJ violated his plea agreement. Relief is denied. KELLER, Presiding Judge Delivered: November 16, 2005 En Banc Publish 1. For purposes of this opinion, "stacked offense" is the offense committed while the defendant was an inmate in TDCJ, serving his sentence for the original offense. See Tex. Code Crim. Proc. Art. 42.08(b). Under Article 42.08(b), the judge must stack the sentence for the subsequent offense, committed while in TDCJ, on the sentence for the original offense. See id. 2. Unless otherwise indicated, all references to Articles refer to the Code of Criminal Procedure. 3. 21 S.W.3d 264, 271 (Tex. Crim. App. 2000). 4. Tex. Gov't Code § 508.150(b). This provision reads, "For the purposes of Article 42.08, Code of Criminal Procedure, the judgment and sentence of an inmate sentenced for a felony, other than the last sentence in a series of consecutive sentences, cease to operate: (1) when the actual calendar time served by the inmate equals the sentence imposed by the court; or (2) on the date a parole panel designates as the date the inmate would have been eligible for release on parole if the inmate had been sentenced to serve a single sentence." Id. This current definition of "ceases to operate" is essentially the same as it was at the time of applicant's offense. 
See id., formerly Article 42.18, § 8(d)(2). 5. See Kuester, 21 S.W.3d at 271. 6. Id. 7. See Bryan A. Garner, A Dictionary of Modern American Usage 645 (Oxford University Press 1998). 8. See Tex. Gov't Code § 508.283(b) ("If the parole... of a person described by § 508.149(a) is revoked, the person may be required to serve the remaining portion of the sentence on which the person was released."). 9. See Tex. Code Crim. Proc. Art. 42.03, § 2(a).
Q: TypeError: this[(("_" + (intermediate value)) + "_listener")] is not a function

I'm dynamically building an object that will have a sender on it:

var coins = {};
['USD','EUR'].forEach((product_id) => {
  coins[`_${product_id}`] = {};
  coins[`_${product_id}_listener`] = (val) => {
    log.info(process.pid, 'terminal send', product_id, [`_${product_id}`]);
    terminal.send({
      action: product_id,
      data: this[`_${product_id}`],
      timestamp: new Date,
    });
  };
  Object.defineProperty(coins, product_id, {
    set: (val) => {
      this[`_${product_id}`] = val;
      this[`_${product_id}_listener`](val);
    },
    get: (val) => {
      return this[`_${product_id}`];
    },
  });
});

Unfortunately, when I set a coin to something ...

coins['USD'] = {a: 4};

I get an error:

TypeError: this[(("_" + (intermediate value)) + "_listener")] is not a function
    at Object.set (repl:15:32)
    at repl:1:18
    at sigintHandlersWrap (vm.js:22:35)
    at sigintHandlersWrap (vm.js:73:12)
    at ContextifyScript.Script.runInThisContext (vm.js:21:12)
    at REPLServer.defaultEval (repl.js:340:29)
    at bound (domain.js:280:14)
    at REPLServer.runBound [as eval] (domain.js:293:12)
    at REPLServer.<anonymous> (repl.js:538:10)
    at emitOne (events.js:101:20)

This is obviously referring to the second line of the setter, but I'm not sure why. If I comment that out, gets and sets work fine. My listener just doesn't fire ... obviously. The goal here is to get the listener to fire.

Follow up

I've also noticed, with that line commented out, that the object doesn't look right (imo):

> coins['USD'] = {a: 4};
{ a: 4 }
> coins
{ _USD: {}, _USD_listener: [Function], _EUR: {}, _EUR_listener: [Function] }
> coins['_USD']
{}
> coins['USD']
{ a: 4 }

Since my setter is supposedly setting _USD, why doesn't _USD look set when I print out coins?

A: The problem is that arrow functions, unlike normal functions, do not allow the context (this) to be changed. Docs:
An arrow function expression has a shorter syntax than a function expression and does not bind its own this, arguments, super, or new.target. These function expressions are best suited for non-method functions.

So when you create an arrow function, the context is captured once, for all calls. You could either switch to normal functions or create the arrow functions using coins as the context, for example by passing coins as the context (thisArg) to the .forEach call and replacing the iteratee with a normal function to allow a dynamic context.

var coins = {};
['USD','EUR'].forEach(function(product_id) {
  this[`_${product_id}`] = {};
  this[`_${product_id}_listener`] = (val) => {
    console.log(`${product_id} - ${val}`)
  };
  Object.defineProperty(coins, product_id, {
    set: (val) => {
      this[`_${product_id}`] = val;
      this[`_${product_id}_listener`](val);
    },
    get: (val) => {
      return this[`_${product_id}`];
    },
  });
}, coins)

coins.USD = 10
coins.EUR = 100
console.log(`USD ${coins.USD}, EUR ${coins.EUR}`)

Or better, use normal functions, because you do have "methods":

var coins = {};
['USD','EUR'].forEach(product_id => {
  coins[`_${product_id}`] = {};
  coins[`_${product_id}_listener`] = function(val) {
    console.log(`${product_id} - ${val}`)
  };
  Object.defineProperty(coins, product_id, {
    set: function(val) {
      this[`_${product_id}`] = val;
      this[`_${product_id}_listener`](val);
    },
    get: function() {
      return this[`_${product_id}`];
    },
  });
})

coins.USD = 10
coins.EUR = 100
console.log(`USD ${coins.USD}, EUR ${coins.EUR}`)
Yes my friend, I am a space farmer. I grow zero-gravity crystals, harvest helium-3 from the moon's surface, and extract drinking water from asteroids, all for the benefit of mankind. I am a space farmer.

Ariane 5 Abort

Hair-raising is one way to describe the Ariane 5 abort, which occurred after the main engine had fired for several seconds. Like the space shuttle, the Ariane 5 rocket's core cryogenic main engine ignites seconds before the twin solid-fueled boosters, giving computers a chance to gauge the vehicle's health before firing the strap-on motors, which can't be turned off. Actually, there's one way to turn them off: the command destruct (that is, the safety self-destruct) package. But you get the idea. Obviously far better than a failure, this anomaly will blow the Arianespace launch schedule out of the water. Beyond just the anomaly resolution (expect it to be blamed on a glitch; just a prediction), all sorts of space hardware would have gone to internal (battery) power, ordnance may have been expended on the umbilical pulls, etc.
Galcon Fusion – Review

I rarely write game reviews because you can already find such reviews for the Windows ports on other gaming websites. In fact, the only review I wrote was of the game Mystic Mine, at the request of the Koonsolo developer Koen Witters. I was late to write news about the Galcon Fusion release, so I've decided to write a review…

When Galcon Fusion was released for GNU/Linux I thought it was another of those simple games that I would play for 10 minutes and lose interest in very quickly – but I was so wrong… This is one of the most addictive games I have ever played! After 10 minutes of playing I just had to buy it (and it's very cheap, just $8.99 this week; original price $9.99).

I tested the game on Ubuntu 9.10 64-bit and heard no sound at all – apparently the developer used the proprietary sound library irrKlang, which had a bug on 64-bit GNU/Linux systems (the library searches the wrong directories for ALSA). Fortunately I found a GNU/Linux "guru" nicknamed "samus_aran" on the Linux IRC channel on the Freenode server and got him addicted to the game in no time. He just had to fix the problem and even supplied us with a fix he made (it required PHP 5), which worked great. I uploaded the fixed file to a file-sharing server for everyone to use and the problem was solved (see the thread about the sound problem + fixes). An official update was later released fixing the GNU/Linux 64-bit sound problem.

When you start the game you get a very simple tutorial which sums up all the basics of gameplay. At the beginning of each game you start with one planet (or up to three in multiplayer) which generates spaceships at a specific rate (each planet has a "production" number; larger planets build spaceships faster). You have to conquer all enemy planets by taking neutral planets and building a fleet using many strategies (which come into use in a 3-player game, especially in multiplayer).
The first few game difficulties are very easy to beat, but when you reach the Admiral difficulty you have no chance unless you use a specific strategy (which seems to work very well in 1v1 games). The game has no campaign, but it has endless random maps and several play modes:

Billiards: the planets are always on the move, making you reconsider your tactics every second.
Stealth: your spaceships become invisible to the enemy and the enemy's spaceships become invisible to you, making you guess where he will attack next.
Crash: a small map in which all 3 players are close to each other (in single-player mode), making it instant action – no time or need to conquer the small neutral planets; plus, you are able to crash enemy spaceships in mid-air.
Assassin: each player is assigned a target which he needs to eliminate.
Beast: you start alone and after about 2- seconds all neutral planets become hostile and attack you – you need to conquer them all.
Vacuum: "conquer them all" within a specific number of minutes.
3-Way: you now have an additional opponent to worry about; let the best "man" win.

But the real jewel is the multiplayer game. Besides the single-player modes, you also have a new Teams mode in which you and your team need to conquer the other team(s), which is a lot of fun: you can send your spaceships to back up your team's planets, arrange attacks with your team and use different tactics for the "common good". Still, the most popular mode in multiplayer is the "Classic" mode, which is conquer all enemies, every man for himself. But it's not as stupid as it might sound: on many occasions one player becomes too powerful and you still need to help your other enemies conquer the big one, so that you will have a chance to win. Pure strategy, tactics and plots – pure fun!

After a few games you are able to see your online stats and rank (even single-player stats and rank) and maybe even one day reach the top 10.
Apart from the sound issue, the game had no bugs that I encountered, and it ran very smoothly on my machine. The only complaint I've seen on the forums is a request for shortcut keys so you could select the % of spaceships faster – but I didn't see a huge need for this feature. [youtube=http://www.youtube.com/watch?v=jEuxuspM6e0&hl=en_US&fs=1&] This game is a lot of fun and I suggest you at least try it – you will get addicted in no time.
GUADALAJARA, Mexico – They were slow to hop on the bandwagon, but fans of Chivas Guadalajara are now fully onboard ahead of one of the biggest matches in the fabled Mexican team's long and rich history. Chivas enters the second leg of the CONCACAF Champions League final against Toronto FC on Wednesday with a distinct advantage. A 2-1 win at BMO Field in last week's opener of this aggregate series means the Mexican outfit only needs a draw in the return match. Even a 1-0 loss would be enough for the Mexicans to be crowned the kings of the continent. Formed in 1906, Chivas is one of the biggest and most successful teams in Mexico. Alongside bitter rivals Club America from the nation's capital, Chivas has won a record 12 league titles, its latest coming in 2017, which ended an 11-year drought. But the Guadalajara team has fallen on hard times since then. Chivas currently sits in second-last place (out of 18 teams) in Mexico's Liga MX standings, and was recently eliminated from playoff contention for a second consecutive campaign. While rivals Club America, Tigres and Tijuana bowed out earlier in the Champions League, Chivas made it to the final, thus ensuring Mexican representation in every championship game since the inaugural tournament held in 2008-09. But it was as though fans didn't take notice of Chivas' impressive form in the Champions League while the team was simultaneously fighting for its life on the domestic front. Now that Chivas' Liga MX playoff dreams have ended, supporters are firmly focused on Wednesday's second leg, as their beloved team attempts to win its first Champions League crown. "There's a definite buzz in the city. This Champions League run has come as a little bit of a surprise for Chivas fans," said Tom Marshall, a Guadalajara-based correspondent for ESPN.com. "When you look at the four Mexican teams in this competition, Chivas would have been third or fourth in the rankings.
With Chivas not doing well in Liga MX, it was all doom and gloom in Guadalajara before beating the Red Bulls [in the Champions League semifinals]. But after scraping by New York and getting that win in Toronto last week, it woke everybody up." Club America won the Champions League in 2015 and 2016. It also won the old CONCACAF Champions' Cup five times. Chivas' lone international title came in 1962, when it won the first Champions' Cup. Regarded as Mexican soccer's most popular team, Chivas' lack of international honours has long been a sore spot, especially in light of Club America's success. "Chivas boasts it has 40 million fans, so the club is a massive institution. But they haven't won a CONCACAF title since the 1960s. For a team the size of Chivas, that's not good enough. There's a real longing to win that first trophy," Marshall explained. He later added: "The other element of this is that Toronto FC knocked out Club America in the semifinals, and that's Chivas' biggest rival. Chivas are looking at this as going one better than America if they can finish the job against Toronto on Wednesday. That's very important from a Chivas perspective." Winning the Champions League goes beyond bragging rights, beyond the ability for either Chivas or TFC to proclaim itself the best team in North America. The winner of this competition also automatically qualifies for the FIFA Club World Cup, an annual tournament featuring the six continental club champions, including the winners of this year's UEFA Champions League. The 2018 FIFA Club World Cup is scheduled for Dec. 12-22 in the United Arab Emirates. Getting to the FIFA Club World Cup, and conceivably playing against a team the calibre of Real Madrid or Bayern Munich, would be a source of great pride for any Liga MX side. Marshall argues that pride would burn brightest within Chivas because of its unique philosophy.
While Club America and other Liga MX teams splash big money on foreign stars, Chivas is the only team in Mexico to exclusively field Mexican players. "This fan base is absolutely desperate to win [the Champions League]. If Chivas can get to the FIFA Club World Cup, the fact they'll do it entirely with an all-Mexico squad would be a source of great pride. It adds an extra element for Chivas fans that this club, which only plays with Mexicans, would be representing the entire CONCACAF region on the world stage; it would mean a lot. It would mean more than, say, if Club America got there," Marshall offered. For Chivas, winning the Champions League and qualifying for the FIFA Club World Cup would also reaffirm the validity of their approach when it comes to player recruitment. "Teams such as Tigres and Club America have basically spent more money than Chivas. When you only play Mexican players, it's much more difficult. The other clubs take advantage of this by trying to buy the best Mexican stars, forcing Chivas to overpay for players. Chivas doesn't have the option to go to South America and pick up a cheaper player who offers the same characteristics," Marshall explained. "They also rely heavily on their youth system. Against Toronto in the first leg, six of the starting 11 came out of their academy. The fact they only play with Mexicans is a big barrier, which is why winning the Champions League would be incredibly special for Chivas."
It may just be political posturing, but the willingness of the University of California regents to even float the idea of a big tuition increase at a time like this is galling even by the standards of, well, politics. ------------ FOR THE RECORD: A previous version of Ted Rall’s cartoon incorrectly said UC regents voted themselves pay raises of up to 20%. The regents had agreed to such pay raises for some UC chancellors. ------------ The UC system has been devastated by years of drastic budget cutbacks. UC President Janet Napolitano announced yet another round of austerity measures in January. Asking students and their families to pay more for less is bad enough, but it’s unconscionable at a time when top university executives are lining their pockets at those families’ expense. As Los Angeles Times columnist George Skelton writes: “Two months ago, the UC regents gave pay hikes of up to 20% to the leaders of the Santa Barbara, Santa Cruz, Merced and Riverside campuses and awarded the new Irvine chancellor 24% more than his predecessor. We’re talking salaries ranging from $383,000 to $485,000, plus perks. And it goes deeper than chancellor. Last year, for example, UC Davis hired a PR person at a $260,000 annual salary.” Whaaaaaa—? As they say in the PR business, those are some lousy optics. UC students are ticked off — as they should be. Behind the regents’ cavalier attitude toward the students is the knowledge that they enjoy immunity from the laws of free-market capitalism, under which service providers suffer downward pressure on costs from competition and their customers’ disposable income. Unlike coffee shops and cartoonists and probably you, colleges and universities raise tuition and fees much faster than the inflation rate, year after year — and their customers keep on paying. Why? They sell a product — diplomas — most need to succeed economically and they offer easy credit in the form of student loans. 
Remember how loose credit created millions of new homeowners before 2008? All those buyers drove up prices — which, in turn, led to the rupture of the housing bubble and the crash. A lot of people believe that we’re in the middle of a big college tuition bubble, with banks lending tens of thousands of dollars to young adults who will never be able to find jobs when they graduate that pay enough to repay them. That includes President Obama. “We can’t just keep on subsidizing skyrocketing tuition,” he said in 2012. If the higher education racket ever unravels, it’ll be a boon to the 21-year-olds of the future — but a serious bummer to Obama’s former secretary of Homeland Security and her ridiculously well-paid “executives.” Follow Ted Rall on Twitter @tedrall
The airway in day surgery. Securing an adequate airway and ventilation is of utmost importance in day-case anesthesia/sedation, as elsewhere in anesthesia. Since its introduction, the LMA has made a major contribution to the development of safe and efficacious airway management in elective day-case anesthesia. It has also become increasingly popular for laparoscopy in selected patients. The use of an LMA should always be based on individual patient assessment, and a rescue plan for an inadequate airway and/or ventilation should be in place. Intubation is still the technique of choice in emergency situations.
Q: Merge data in SQL Server 2016

I have a really generic question about SQL Server. I have a stored procedure that takes a @reportID parameter as input and returns the history of that report. As I have about 3000 reports and need to process the history of each report, I created a temporary table to insert the returned data into, but without the report ID it is useless.

OPEN cursor_reportStats
FETCH NEXT FROM cursor_reportStats INTO @ReportID
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO @temp
    EXEC dbo.GetHistory @ReportID
    FETCH NEXT FROM cursor_reportStats INTO @ReportID
END

So what I need is to attach the @ReportID to each line returned by GetHistory. Many thanks.

A: Add a second table variable, @temptable, which has a ReportID column plus the rest of the data. Your @temp table will be a buffer table whose contents are deleted on each iteration. At the end of each iteration you insert the current @ReportID value together with the data from the buffer table into the second @temptable. So you will have something like this:

OPEN cursor_reportStats
FETCH NEXT FROM cursor_reportStats INTO @ReportID
WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM @temp;
    INSERT INTO @temp
    EXEC dbo.GetHistory @ReportID
    INSERT INTO @temptable
    SELECT @ReportID, * FROM @temp
    FETCH NEXT FROM cursor_reportStats INTO @ReportID
END
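For context, a fuller, self-contained sketch of this buffer-table pattern, with the surrounding declarations filled in; the cursor's source query and the history columns (HistoryDate, Status) are illustrative assumptions, since the question does not show GetHistory's result shape:

```sql
-- Sketch only: column names and the Reports source table are assumptions,
-- not taken from the question.
DECLARE @ReportID int;
DECLARE @temp TABLE (HistoryDate datetime, Status varchar(50));               -- buffer
DECLARE @temptable TABLE (ReportID int, HistoryDate datetime, Status varchar(50));

DECLARE cursor_reportStats CURSOR FOR
    SELECT ReportID FROM dbo.Reports;

OPEN cursor_reportStats;
FETCH NEXT FROM cursor_reportStats INTO @ReportID;
WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM @temp;                                -- clear the buffer each pass
    INSERT INTO @temp EXEC dbo.GetHistory @ReportID;  -- capture this report's history
    INSERT INTO @temptable
        SELECT @ReportID, * FROM @temp;               -- attach the id to every row
    FETCH NEXT FROM cursor_reportStats INTO @ReportID;
END
CLOSE cursor_reportStats;
DEALLOCATE cursor_reportStats;
```

The CLOSE/DEALLOCATE at the end releases the cursor's resources, which the snippets in the question omit.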
46 Mich. App. 291 (1973) 207 N.W.2d 924 SMITH v. DEPARTMENT OF STREET RAILWAYS, CITY OF DETROIT Docket No. 12623. Michigan Court of Appeals. Decided April 23, 1973. *292 William B. Cope, for plaintiff. William Dietrich and Michael F. Peters, for defendant on appeal. Before: V.J. BRENNAN, P.J., and HOLBROOK and VAN VALKENBURG,[*] JJ. HOLBROOK, J. Plaintiff Johnnie Louise Smith suffered injuries to her tailbone February 6, 1967, when she slid off her seat and landed on the floor on defendant's bus as the bus rounded a Detroit street corner. She sued defendant common carrier for alleged damages of $30,000. At the close of plaintiff's evidence defendant made a motion for a directed verdict[1] which was denied. The case then proceeded through defendant's evidence with the trial judge finally deciding for the defendant. The main issue on appeal is the correctness of the trial judge's findings of fact. For clarity's sake we quote the judge's complete opinion: "The Court: In order to render a verdict in this case, I have had to make a finding of fact and a conclusion of law. "I find it is a fact that on the sixth day of February, 1967, that the plaintiff, Johnnie L. Smith, boarded a *293 DSR bus at Ferry Park and Fourteenth Street and that there were several other people on this bus and that they had just crossed Grand River Avenue in the area of Columbia and where a short distance beyond the crossing at a point where the road is straight and level, the bus gave a lurch and the plaintiff fell on the floor. 
"I further find that prior to and up to and including this time, there was no evidence of any improper driving upon the part of the bus driver; that it was snowing and the pavement was covered with snow and that he was operating the bus at a reasonable and proper speed and, as a matter of fact, not in excess of ten miles per hour; that the basis for the lurch can only be a conjecture on the part of this court because there is no factual situation appearing from the evidence to give any reason for the bus having lurched. "All the evidence tends to show to this court that the bus driver was negotiating the terrain as a reasonable, careful person would negotiate the terrain under similar circumstances. "I find that there is no negligence upon the part of the bus driver and there are no circumstances from which an inference of negligence could be obtained. "I further find that the defendant is entitled to a verdict of no cause of action and I enter a verdict for the defendant." (Emphasis supplied.) While recognizing our duty to affirm the trial judge's findings of fact unless they are clearly erroneous, we are forced to conclude that clear error does exist here, and therefore must reverse. GCR 1963, 517.1. To begin, if there were "no circumstances from which an inference of negligence could be obtained" as the trial judge determined below, then he improperly denied defendant's motion for a "directed verdict" at the close of plaintiff's evidence. However, had he in fact directed a verdict for the defendant we would still need to reverse here, since we disagree with the trial judge's ruling that there were "no circumstances from which an inference *294 of negligence could be obtained". A brief review of the evidence substantiates our conclusion. Plaintiff testified that on the snowy morning of February 6, 1967, she got on one of the defendant's buses to go to work. She sat on the left-hand seat on the right-hand side directly behind the rear door of the fully loaded bus.
She was not in a position to see the condition of the street where the bus was being driven. As the bus turned the corner and crossed Grand River Avenue, plaintiff testified she saw the driver turning his steering wheel very fast around the curve, and she felt the bus lurch, "like hit something like — it was something like a jar", throwing her out of her seat to the floor of the bus. The passenger next to her also slid slightly off his seat. She was helped by three persons back into her seat, and remained on the bus until the end of the line in order to get the driver's bus number. She continued on by cab to her job as a domestic servant, but left early in the afternoon to go to the hospital for treatment of her injuries allegedly suffered in the accident. The bus driver, Ernest Bell, testified he had been a bus driver for approximately five weeks at the time of the accident, and at that time he worked different routes every day. He remembered someone coming up to him at the end of the line, and telling him what had happened. He gave inconsistent testimony about whether he remembered the accident, at one point saying he remembered nothing, and at another saying to his knowledge there was no "lurch", and that he made the turn at less than ten miles an hour. To us plaintiff's testimony certainly provides the basis from which negligence might be inferred, especially since the only other evidence offered was the inconsistent, confused, and unenlightening testimony of the bus driver. *295 Plaintiff relied on an assertion of the doctrine of res ipsa loquitur[2] to reach the same conclusion that we have. While the doctrine of res ipsa loquitur has apparently not been formally adopted in Michigan, a reasonable facsimile of it has on occasion been used by some Michigan courts to decide an issue of negligence. Rose v McMahon, 10 Mich App 104 (1968); Rohdy v James Decker Munson Hospital, 17 Mich App 561 (1969). Cf. 
the analysis of the doctrine's applicability in Michigan in Gadde v Michigan Consolidated Gas Co, 377 Mich 117 (1966); Mitcham v Detroit, 355 Mich 182 (1959). Unfortunately, courts don't always agree on the precise elements of the doctrine, or whether all elements need be present at once, or whether a showing of the existence of res ipsa loquitur raises a mere inference or a full presumption of negligence. We agree with Dean Prosser's suggestion that res ipsa loquitur "has been the source of so much trouble to the courts that the use of the phrase itself has become a definite obstacle to any clear thought, and it might better be discarded entirely". Prosser, Torts (3d Ed), § 39, p 217. The Court in Gadde at 125 saw through the pitfalls of the doctrine of res ipsa loquitur to reaffirm Michigan's adherence to the "traditional concepts of the law of negligence and of evidence", and to settle the controversy existing in that case by looking to circumstantial evidence and what might be inferred or presumed from a proper foundation of facts. We follow precisely the same course here in *296 deciding that the evidence of an unexplained lurch and resulting fall on the bus, coupled with uncontroverted testimony showing icy weather conditions and the bus driver turning the steering wheel fast around the corner, and plaintiff's observation that it felt like the bus hit something is sufficient to provide a basis from which an inference of negligence might have been drawn, contrary to the finding of the trial judge. Since we reverse and remand for a new trial we have no need to reach other issues raised by plaintiff on appeal. All concurred. NOTES [*] Former circuit judge, sitting on the Court of Appeals by assignment pursuant to Const 1963, art 6, § 23 as amended in 1968. [1] Since this was a nonjury action, the motion for a directed verdict should have properly been called a motion to dismiss. Dauer v Zabel, 19 Mich App 198 (1969); GCR 1963, 504.2. 
The difference between the two motions is that on the latter motion a trial judge may weigh the evidence before him, unlike a motion for a directed verdict, where the trial judge's only task is to determine if there is no issue of fact raised by plaintiff's evidence for the jury to decide. Cullins v Magic Mortgage, Inc, 23 Mich App 251 (1970); Serijanian v Associated Material & Supply Co, 7 Mich App 275 (1967). [2] The four conditions for the substantive application of the concept of res ipsa loquitur are (1) the event must be of a kind which ordinarily does not occur in the absence of someone's negligence, (2) the event must have been caused by an agency or instrumentality within the exclusive control of the defendant, (3) the event must not have been due to any voluntary action or contribution on the part of the plaintiff, and (4) evidence of the true explanation of the event must be more readily accessible to the defendant than to the plaintiff. Rohdy v James Decker Munson Hospital, 17 Mich App 561 (1969).
ALBUQUERQUE, N.M. — Police have determined the shooting death of a teen, deemed suspicious at first, to be accidental, according to a police spokesman. Officer Simon Drobik said 17-year-old Matthew Graham accidentally shot himself on Monday night at a home in the 5000 block of Azuelo NW. Police were called out to the shooting around 7 p.m. after several friends found Graham dead in a northwest Albuquerque home.
---
abstract: 'In this paper we consider horseshoes containing an orbit of homoclinic tangency accumulated by periodic points. We prove a version of the Invariant Manifolds Theorem, construct finite Markov partitions and use them to prove the existence and uniqueness of equilibrium states associated to Hölder continuous potentials.'
author:
- 'Renaud Leplaideur[^1] and Isabel Rios[^2] [^3]'
bibliography:
- 'mabiblio.bib'
title: 'Invariant manifolds and equilibrium states for non-uniformly hyperbolic horseshoes'
---

Introduction and statement of results
=====================================

intro2.tex

Horseshoes with internal tangencies {#sec-horse}
===================================

horse.tex

Geometric properties of the map $f$ {#sec-techn-lem}
===================================

lem-tek.tex

Kergodic charts
===============

kergodic.tex

Invariant manifolds and some of their properties {#foliations}
================================================

foliations.tex

Markov partitions and equilibrium states {#sec-thermo}
========================================

thermo2.tex

[^1]: Département de mathématiques, UMR 6205, Université de Bretagne Occidentale, 29285 Brest Cedex, France

[^2]: Instituto de Matemática, Universidade Federal Fluminense, Rua Mário Santos Braga s/n, Niterói, RJ 24.020-140, Brasil

[^3]: This work was partially supported by CNRS-CNPq, UBO, PRONEX-Dynamical Systems, FAPERJ and PROPP-UFF.
1. Field of the Invention The present invention relates to a dipole antenna, and in particular relates to a dipole antenna with reduced dimensions. 2. Description of the Related Art FIG. 1a shows a conventional dipole antenna 1, comprising a first arm 10, a second arm 20, a signal line 31 and a ground line 32. The signal line 31 is electrically connected to the first arm 10. The ground line 32 is electrically connected to the second arm 20. The dipole antenna 1 transmits a wireless signal. The wireless signal has a wave length λ. Conventionally, the lengths of the first arm 10 and the second arm 20 are λ/4. Thus, decreasing the dimensions of the conventional dipole antenna 1 is difficult. Also, with reference to FIG. 1b, conventional dipole antennas 1 have a housing 40, and the housing 40 covers the first arm 10, the second arm 20, the signal line 31 and the ground line 32. Thus, when the conventional dipole antenna 1 is disposed on a top edge of a portable computer (for example, a notebook computer), the appearance of the portable computer is influenced. Meanwhile, when the conventional dipole antenna 1 is disposed on a side edge of the portable computer, signal transmission thereof is deteriorated. Specifically, the circuit board of the portable computer interferes with electrical fields of the dipole antenna 1.
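The conventional λ/4 arm sizing described above can be made concrete with a quick calculation. This is a minimal sketch; the 2.4 GHz operating frequency is an assumed example value, not one given in the patent text:

```python
# Illustrative arithmetic for the lambda/4 arm length described above.
# The 2.4 GHz operating frequency is an arbitrary example value,
# not taken from the patent text.
c = 299_792_458            # speed of light, m/s
f = 2.4e9                  # example operating frequency, Hz
wavelength = c / f         # lambda, in meters
arm_length = wavelength / 4
print(round(arm_length * 100, 2))  # each arm, in cm -> 3.12
```

At 2.4 GHz each λ/4 arm comes out to roughly 3 cm, which illustrates why shrinking a conventional dipole much below that is difficult.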
Q: c++0x_warning.h:31:2: error: I was trying to make a file and got this error. I am a newbie. Can anyone help me here?

/usr/include/c++/4.6/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options.

How do I enable it with -std=c++0x? I used this in my makefile:

#CXX_VERSION_FLAG = -std=c++0x

but it did not work. Thanks, Addy

A: Just pass these flags (a.k.a. options) to the compiler. Instead of running gcc ..., run gcc -std=c++0x ... (or -std=c++11 for newer compilers). Note that the leading # in your makefile line comments the variable out, so it is never defined, let alone passed to the compiler.
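As a minimal sketch of where the flag has to end up, assuming GNU Make and g++ (the variable and file names here are illustrative, not from the question):

```makefile
# Illustrative Makefile fragment: the flag must be uncommented and
# actually used in the compile command for it to take effect.
CXX      = g++
CXXFLAGS = -std=c++0x -Wall     # use -std=c++11 on newer compilers

main: main.cpp
	$(CXX) $(CXXFLAGS) main.cpp -o main
```

Defining a variable is not enough on its own; it only matters once `$(CXXFLAGS)` appears in the recipe that invokes the compiler (or via make's built-in implicit rules, which consult `CXXFLAGS` automatically).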
Precise Shooter update - 2015-10-07

King County Elections

The deadline for online voter registration has passed - if you did not register, you can still register to vote in person. Please remember - Sandy Brown, one of the principal authors of I-594, is running for a City Council seat in District 5 (North Seattle). Let's show him that a willingness to arbitrarily restrict human rights for fake security (by now it is quite obvious that I-594 had no effect on violence) is not a value we want to have on the City Council.

Ammunition

We have worked very hard to bring some of the best prices on ammunition to Seattle-area shooters. We have a large amount of 9mm Luger ammunition starting at only $9.99 per box of 50 rounds - which compares favorably with the best prices you can find on the Internet, especially when you account for shipping. We are now excited to add budget 308/7.62x51 ammunition to the mix. We have Magtech (CBC) ammunition for only $29.95 per box of 50 rounds - which works out to just under $12 for a 20rd box equivalent. We also have the new Turkish ZQI ammunition for only $11.99 a box. Both are reloadable, Boxer-primed ammunition. Another fantastic deal this week - we have a limited number of full cases (40 boxes) of FN 5.7x28 ammunition for only $800 a case ($20/box of 50 rounds). This ammunition typically sells for around $30. It is sold by the case only, and only 3 cases are available. Please call or email to reserve yours. Throughout the year we have successfully maintained a good stock of 22LR ammunition without raising prices by limiting purchases to 100rd. We have a good supply of CCI ammunition for $3.99/box of 50, and more. We now have 2 types of 22lr ammunition where we do not limit the purchase: CCI Quiet, at $3.99/box of 50, and Armscor High Velocity 36gr at $44.95/brick of 500rd.
Armscor ammunition is pricey, but our price is still much better than the Internet's (Lucky Gunner: $70 plus shipping; Cheaper Than Dirt: $49.95 plus shipping; Bulk Ammo: $64.95; etc.). One case available. We also have a few 300rd holiday packs of CCI standard velocity ammunition. We will only sell it with a 22lr rifle or pistol - give the gift of shooting this holiday season! You can see our full stock of ammunition here.

New guns in stock

We had a few interesting arrivals this week. First, an Armscor 1911-A1 XT22 pistol with rail. Only one is available (we got it at a distributor sale, so the price is roughly $50 below the Internet - and that's before shipping and transfer). Price: (currently unavailable)

We've got a few CZ 75 Shadow Tac II guns. These are spectacularly accurate (within 1" @ 25yd) pistols from CZ Custom. We have only 2 as of this writing. They are super rare - the last time I was able to get a Shadow was in 2012. Price: $1349.00

Also in CZ land, 2 2075 RAMI BDs arrived yesterday.

We got an excellent deal at a distributor sale event on 22lr AR-15 uppers. These are dedicated uppers, so the barrels have the right twist. Price: (currently unavailable)

A new and still rare rifle from Ruger - an American in 300 Blackout. Price: (currently unavailable)

Last but not least, take a look at this high-capacity tactical bolt action rifle :-) - we have a few Mossberg MVP Varmints in stock. They take AR-15 magazines, and they have awesome reviews on the 'net.
A critique of social capital. This article critiques the concepts of communitarianism and social capital as used in the United States and in Europe. For the United States, the author focuses on Robert Putnam's understanding of both concepts, showing that the apolitical analysis of the Progressive Era, of the progressive developments in Northern Italy, and of the situation of labor unions in the United States is not only insufficient but wrong. The critique also includes the difference between U.S. communitarianism and its European versions, Christian democracy and New Labour, and the limitations of both approaches. The uses and misuses of these concepts in the political debate are discussed.
UNITED BROTHERHOOD OF CARPENTERS & JOINERS OF AMERICA, Carpenters District Council of Denver and Vicinity, and United Brotherhood of Carpenters & Joiners of America, AFL-CIO, Colorado State Council of Carpenters, its Affiliated District Councils and Affiliated Local Unions, and Leslie Prickett, Adolph Lavalle and James McFarland, Appellants, v. HENSEL PHELPS CONSTRUCTION COMPANY, a Colorado corporation, Appellee. Before BREITENSTEIN and SETH, Circuit Judges, and KERR, District Judge. SETH, Circuit Judge. 1 The appellee, Hensel Phelps Construction Company, brought this action under § 301 of the Labor Management Relations Act of 1947, 29 U.S.C.A. § 185, for damages for breach of collective bargaining agreements. The defendant-appellants are labor organizations and individuals representing a single union which was a party to the agreements. The action was tried to the court, and judgment was rendered against the appellant union in the amount of $8,000.00. The union took this appeal. 2 The disagreement arose between the parties over the question whether certain work done by appellee's carpenter employees on an elevated ramp for automobiles to reach the entrance to the Denver airport was to be paid at the rate for building work or highway work. The issue raised the question as to which of two collective bargaining agreements would be applicable to this type of work, both agreements being between the same parties. The amount of the judgment appealed from represents the difference between the two wage scales. 3 The trial court concluded that the union had breached its highway collective bargaining agreement with appellee Phelps by causing a work stoppage without first complying with the disputes procedure of such agreement.
4 The facts, about which there is no dispute, may be summarized as follows: 5 Appellee Phelps had been awarded a contract by the City and County of Denver, Colorado, to construct an air terminal building and an elevated drive at Stapleton Airfield in Denver.1 Phelps was a member of the Associated Building Contractors of Colorado, Inc., hereinafter 'ABC.' ABC, representing its members, had negotiated a master collective bargaining contract with the appellant union, which contract is captioned 'Building Construction Agreement Carpenters,' and to which we will refer as the 'building contract.' Article I, section 4, of the building contract describes in detail the carpenter work within the coverage of the contract. However, Article I, section 2(d), of the building contract states that work covered by the 'Housing Agreement and The Heavy and Highway Agreement' shall not be considered similar to work within the coverage of the building contract for purposes of automatically granting a lower wage scale to any employer under the building contract when another employer has secured a lower wage scale than that provided by the contract. 6 Section 2(d) of the building contract further states that a Heavy and Highway Agreement 'shall be available to any member of the Employer (any member of ABC), who desires to engage in such work, for signature with the Union.' 7 The Heavy and Highway Agreement, which we will refer to as the 'highway contract,' is the second of two collective bargaining contracts involved in this appeal. The building contract describes work within its jurisdiction in terms of the particular job the employee might perform, e.g., 'making and setting of concrete forms,' 'fitting and hanging of all doors,' 'making and installing of all acoustic properties.' 
The highway contract describes work within its jurisdiction by the nature of the construction project, e.g., 'all work performed in the construction of streets and highways, airports, utilities, levee work,' etc.2 The important fact giving rise to the dispute and leading to this appeal is that the wage scale for carpenters provided in the highway contract is less than the wage scale provided in the building contract. 8 Although there is no dispute that a substantial part of the entire construction project was within the jurisdiction of the building contract (the terminal buildings), classification of the elevated drive leading to the building entrance as highway work or building work immediately became a source of disagreement between the parties. There is conflicting evidence relating to the understanding of the parties as work commenced; however, for the first four or five weekly pay periods Phelps paid employees working on the elevated drive the lower wage scale provided in the highway contract. After a number of informal discussions, the dispute between Phelps and the union representatives concerning the classification of the elevated drive and applicable wage scales reached a critical point in the week of July 24, 1964. 9 On July 23 Phelps mailed to the union a Heavy and Highway Agreement for union signature, as provided in section 2(d) of the building contract. The union received the agreement on July 24, but did not sign it. On July 24 a formal meeting was convened between the union representatives and representatives of ABC, the contractors' association, pursuant to Article VIII, section 1, of the building contract, which provides: 'The said committees are charged with the responsibility of reaching a settlement by mediation, conciliation or arbitration as the circumstances require; the decision so reached shall be put in writing and shall be binding on all parties to the controversy.' 
The foregoing excerpt is the extent of the procedure for resolving disputes under the building contract. A vote taken at the meeting resulted in a deadlock. The union considered the building wage scale applicable to the elevated drive, and the contractors considered the highway wage scale applicable. 10 After the meeting was adjourned on July 24, the union advised Phelps that unless Phelps agreed to pay building wages on the elevated drive, the union would inform its members that they were receiving substandard wages. It was understood that such advice would result in a walkout or work stoppage. Phelps would not agree to pay building wages, but did offer to place the amount represented by the difference between highway and building wages in escrow pending a final determination of the issue. The union declined this offer, and advised its members that Phelps was paying substandard wages, and the carpenters walked out. Work was resumed in two days, after Phelps agreed to pay building wages, but reserved all rights under the building contract pending a final determination of the issue.3 11 Phelps thereafter brought this suit against the union for breach of contract, seeking recovery of the difference between highway and building wages paid, and other damages. 12 The trial court found that the building contract meeting of July 24, which resulted in a deadlock, had exhausted the dispute procedure set forth in such contract. The trial court found that the ramp construction was highway work, and that the highway contract was operative. 
It also found that the dispute procedure set forth in the highway contract was different from that established in the building contract, and that the union had not complied with its contract dispute procedure before causing a work stoppage.4 13 From the foregoing findings, the trial court concluded that the parties were bound by the provisions of both the building and highway contracts, and that the dispute concerned the application of the highway contract to the elevated drive. Thus the union was required to comply with the dispute procedure of both the highway contract and the building contract. The court concluded that the union had failed to comply with the dispute procedure of the highway contract; that the highway contract was breached by the work stoppage, and Phelps was entitled to recover damages in the amount of the building wage scale paid for highway work. 14 The appeal is under the provisions of 301 of the Labor Management Relations Act, 29 U.S.C.A. 185, and federal substantive law applies. Republic Steel Corp. v. Maddox, 379 U.S. 650, 85 S.Ct. 614, 13 L.Ed.2d 580; John Wiley & Sons, Inc. v. Livingston, 376 U.S. 543, 84 S.Ct. 909, 11 L.Ed.2d 898. Although we have been referred to no case concerned with issues quite like those presented in the case at bar, the Supreme Court has established a broad policy for judicial interpretation of collective bargaining contracts. The Court has stated: 'We think special heed should be given to the context in which collective bargaining agreements are negotiated and the purpose which they are intended to serve.' United Steelworkers of America v. American Manufacturing Co., 363 U.S. 564, 80 S.Ct. 1343, 4 L.Ed.2d 1403. 'The collective agreement covers the whole employment relationship. It calls into being a new common law * * *.' United Steelworkers of America v. Warrior & Gulf Navigation Co., 363 U.S. 574, 80 S.Ct. 1347, 4 L.Ed.2d 1409. 
The Court has also held that in the context of collective bargaining contracts, preoccupation with the doctrines of ordinary contract law may thwart realization of congressional policy. Cf. United Steelworkers of America v. American Manufacturing Co., supra. 15 These policies are of course difficult to apply to a particular factual situation such as we have before us. We can examine, however, the purpose the agreements were intended to serve, the fact that they are intended to cover as great a portion of the parties' relationships as possible, and that all the doctrines of contract law may not be applicable. 16 This dispute originated during the course of work under a particular agreement, the building contract, and all negotiations and procedures were initially taken pursuant to it. The employees' pay was initially made at a different scale, but with no indication that a different agreement would be invoked in its entirety. The dispute was treated by the parties as one over an applicable pay scale under a single contract, and the court should treat it in the same way. There was no disagreement as to the hourly rate if the nature of the work was decided. Thus again it was treated by all concerned as a dispute over which of two wage scales should apply. 17 We hold that the judgment of the trial court must be affirmed, but we cannot agree with the trial court's conclusion of law that the parties were bound to comply with the dispute procedures of both the building contract and the highway contract. 18 As indicated above, the disagreement between the parties centered about one issue. Was the elevated drive building work, or was it highway work? The contents of the contracts in question cannot be considered separately from the unusual nature of the dispute in the case at bar. Unlike the facts of most cases to which we have been referred, the dispute here is fundamental, for it questions which of two contracts should apply to the elevated drive. 
Until the underlying question of fact was determined, that is, classification of the elevated drive as building or highway work, the parties could not know which wage scale to use. Although the trial court concluded that the dispute concerned application of the highway contract to the elevated drive, the dispute was concerned equally with application of the building contract to the elevated drive. 19 Review of the record satisfies us that the parties regarded the dispute as one arising under the building contract. It appears that work on the airport project was commenced under the building contract, though the parties never agreed on the classification of the elevated drive. A substantial part of the airport project was within the jurisdiction of the building contract, and the building contract, in Article I, section 2(d), provides that a highway contract, for signature with the union, shall be available to any contractor desiring to engage in highway work. The meeting of July 24 was convened pursuant to the dispute procedure established in the building contract, and Phelps sought to invoke the highway contract in the manner provided in the building contract. 20 When Phelps sought to put into effect the highway contract under section 2(d) of the building contract, the dispute had long before focused on the classification of the elevated drive. It could not invoke the highway contract for building work, and thus the 'jurisdictional' question still remained as before. Phelps' action in executing the highway contract was no more than a further assertion of its position in the dispute. Neither 'party' to the highway contract could be the sole judge of whether the project was within the jurisdiction of the highway contract. Such an interpretation would be inconsistent with the efforts of the union and the contractors to establish comprehensive definitions of building work and highway work in the contracts. 
21 In the case at bar, the union did not agree that the elevated drive was highway work. The basic issue was unchanged by the action of Phelps. The dispute was thus in fact still proceeding under the building contract. The union could not under such circumstances breach the highway contract by causing a work stoppage. The highway contract thus never came into existence, and the union cannot be required to exhaust its dispute procedure. Cf. United Steelworkers of America v. Warrior & Gulf Navigation Co., 363 U.S. 574, 80 S.Ct. 1347; United Mine Workers of America, Dist. 22 v. Roncco, 314 F.2d 186 (10th Cir.). 22 The dispute procedure established in the building contract is rudimentary. Beyond the requirement of a meeting between the parties, the dispute provisions in the building contract provide for no further procedure for a binding resolution of the dispute, short of a strike, a lockout, or a law suit. While the dispute procedure does charge the parties to reach a settlement by 'mediation, conciliation or arbitration as the circumstances require,' no machinery is provided by which any of the foregoing alternatives might be implemented to achieve the 'binding decision' envisioned by the dispute procedure. The parties do not assert that any procedure, beyond the meeting, was required by the language of the contract, and it appears that the language of the contract is inadequate to compel the parties to resort to additional extrajudicial procedures. Cf. United Steelworkers of America (AFL-CIO), etc. v. New Park Mining Co., 273 F.2d 352 (10th Cir.). 23 A failure to agree that disputes shall be resolved by binding arbitration permits the parties to resort to other remedies such as work stoppages, lockouts, or the courts. Compliance with the dispute procedure of the building contract was effected by the meeting of July 24, which resulted in a deadlock. 
The contract does not contain a 'no strike' clause, and, as we have seen, it does not provide for binding and compulsory arbitration. Thus after the contractual dispute procedure proved ineffective to resolve the dispute, the parties were free to pursue their other remedies. The device selected by the union was a work stoppage. Phelps later sought its remedy in the United States District Court. With different facts the forums selected could be reversed, Phelps imposing a lockout and the union bringing suit. 24 The union argues that the merits of the dispute concerning classification of the elevated drive were 'matters which the parties left to mutual confidence and to their joint committees to work out, if possible, when problems should arise'; and therefore, the trial court erred by finding that the elevated drive was 'highway construction within the meaning and intent' of the highway contract. We cannot accept this analysis of the trial court's limited role in adjudicating disputes arising under a collective bargaining contract. Federal policy, as revealed by 301 of the Labor Management Relations Act, undoubtedly favors arbitration as the method for resolving disputes arising under collective bargaining contracts. United Steelworkers of America v. Warrior & Gulf Navigation Co., supra; Local 174 Teamsters, Chauffeurs, etc. v. Lucas Flour Co., 369 U.S. 95, 82 S.Ct. 571, 7 L.Ed.2d 593. In the case at bar however the contract does not provide for binding arbitration, and resort to the court was proper. 25 The contracts in question reveal a joint effort of the union and the contractors to define and classify construction projects as building or highway work for purposes of determining the appropriate wage scale. Classification of a particular construction project is not beyond the scope of the contracts in question, nor is such classification beyond the contractual intent of the parties. 
The dispute here concerned interpretation and construction of the contracts, and the trial court properly adjudicated the dispute on its merits. When the dispute is one arising within the provisions of the contract, it is the function of the courts, under 301, to adjudicate the dispute, absent provisions in the contract for binding arbitration. See Line Drivers Local No. 961, etc. v. W. J. Digby, Inc., 341 F.2d 1016 (10th Cir.); United Steelworkers of America (AFL-CIO), etc. v. New Park Mining Co., 273 F.2d 352 (10th Cir.); cf. Atkinson v. Sinclair Refining Co., 370 U.S. 238, 82 S.Ct. 1318, 8 L.Ed.2d 462; Smith v. Evening News Ass'n, 371 U.S. 195, 83 S.Ct. 267, 9 L.Ed.2d 246; Brown v. Sterling Aluminum Products Corp., 365 F.2d 651 (8th Cir.). See also Textile Workers Union of America v. Lincoln Mills, 353 U.S. 448, 77 S.Ct. 912, 1 L.Ed.2d 972. 26 Although the record reveals conflicting evidence relating to classification of the elevated drive, we are satisfied that there was substantial evidence to support the trial court's finding that the elevated drive was highway construction. Rule 52, Fed.R.Civ.Proc.; J. A. Tobin Construction Co. v. United States, 343 F.2d 422 (10th Cir.); State Farm Mutual Automobile Ins. Co. v. Lehman, 334 F.2d 437 (10th Cir.). The union caused Phelps to pay building wages for highway construction which was a breach of the building contract, and the union is liable to Phelps for damages. 27 The record discloses that Phelps claimed $9,327.00 in actual damages. The trial court awarded judgment for $8,000.00, finding that Phelps had paid to carpenters working on the elevated drive 'not less than $8,000.00 in excess of that which the plaintiff (Phelps) would have been obligated to pay' if the highway contract had been applicable. 
We are satisfied that there was substantial evidence upon which the trial court could find that a sum not less than $8,000.00 would compensate Phelps for excessive wages paid, and we cannot say that such finding is clearly erroneous. 28 The judgment is affirmed. 29 On Petition for Rehearing. 30 The appellants' petition for rehearing is denied. Our opinion is modified as follows: The judgment of the District Court is affirmed with respect to appellant, United Brotherhood of Carpenters & Joiners of America, Carpenters District Council of Denver and Vicinity, and the judgment of the District Court is reversed with respect to appellant, United Brotherhood of Carpenters & Joiners of America, AFL-CIO, Colorado State Council of Carpenters Its Affiliated District Councils and Affiliated Local Unions. The District Court's amended judgment dismissed the action against the individual defendants, Leslie Prickett, Adolph LaValle, and James McFarland. These individuals were erroneously listed as appellants in this court. The highway contract defines building construction as 'building structures, including modifications thereto or additions or repairs thereto, intended for use as shelter, protection, comfort or convenience * * * No structures such as * * * bridges * * *, forming a part of a highway, which are required under the provisions of a highway construction contract, shall be regarded as constituting building work.' The highway contract sets forth its dispute procedure in some detail, but representatives for the union and the contractors, for purposes of a meeting to resolve a dispute, are different than those designated by the building contract. Article VII of the highway contract provides: 'If the Joint Committee is unable or unwilling to render a decision because of deadlock vote or otherwise, thereafter the Employer and the Union shall be free to pursue whatever other legal rights and remedies they may have.' 
The highway contract expressly prohibits work stoppage by either side until the joint committee has, or has not, reached a decision, but neither party is bound to abide the decision of the joint committee.
Tag Archive: Asbury Park Sunday Yo yo! So hopefully some of you had a little read of my blog post on Asbury Park and the Jersey Shore. I basically put together everything I did during my day there, including bars, shops and of course, places to eat. It also includes details of my visit to the Born to Run house,… Monday As far as boardwalks go, this is a pretty good one. In fact, in my experience of boardwalks, I’d say it’s the best. This is the boardwalk in Asbury Park, New Jersey. Bruce Springsteen describes it as his “adopted hometown”, the town that has inspired lyric after lyric – songs laced with stories of the…
A rotary hammer, such as a hammer drill, is designed to impart axial percussive vibrations along with rotation to a tool, such as a drill bit, held at the front end of the hammer body, so as to perform chipping and drilling operations. The construction of such rotary hammer is disclosed, e.g., in U.S. Pat. No. 4,280,359, wherein a reciprocable piston-like drive member is installed in a cylinder which guides a vibrating mechanism disposed inside the hammer body, said drive member being adapted to be driven by an electric motor through a motion conversion transmission mechanism which converts rotary motion into axial reciprocating motion, the reciprocating motion of said drive member imparting axial percussive vibrations to a tool, such as a drill bit, held at the front end of the hammer body through a striker axially movably installed in the cylinder, and concurrently with this impartation, the rotation of the electric motor is reduced and imparted to a tool holding member which concomitantly rotatably holds the tool, whereby percussive vibrations and rotation are imparted to the tool. In the conventional rotary hammer as described above, the cylinder provided with the piston-like drive member and striker, and the motion conversion transmission mechanism and electric motor which form a drive section for driving said piston-like drive member are received by a frame forming a shell barrel, while a bracket section for rotatably supporting the tool holding member at the front end of the hammer body and a bracket section for holding a bearing which supports one end of the rotor of the electric motor are integrally formed and fixed on said frame. 
As a result, in assembling this rotary hammer, the tool holding member and electric motor must be built into the bracket which supports them before said bracket can be fixed to the frame and, moreover, after the electric motor and tool holding member have thus been built in, the bracket which holds the cylinder with the piston-like drive member and the motion conversion transmission device is fixed to the frame, a fact which, coupled with the substantial complexity of the internal construction, makes the assembly operation very troublesome. Further, the disassembly operation which becomes necessary, e.g., when a machine trouble occurs is never easy, as it must be performed in the order reverse to that for assembly operation. Particularly, the more rugged the shell of the hammer body is made so as to have sufficient shock resistance to endure a long period of use, the more difficult the assembly operation becomes. In this type of rotary hammer, which requires operation performance tests, e.g., on the electric motor during assembly operation, if the hammer body is of unitary construction as described above, an operation performance test, e.g., on the electric motor must be conducted with not only the electric motor but also the bit or other tool holding section built into the shell barrel; thus, such a test is very troublesome. Further, in this type of rotary hammer, it often occurs that the internal mechanism breaks down or that the hammer fails to operate owing to the wear of parts such as the sealing rings for the piston-like drive member. With the hammer body construction difficult to disassemble as described above, repair of damage or replacement of parts cannot be easily made in the field, and the difficulty of disassembly often makes it necessary to carry the rotary hammer to the factory, during which time another rotary hammer has to be used. 
For this reason, in another example of prior art, the shell serving as a frame for holding various parts is bisected along the axis of the vibrating mechanism to make it possible to open the shell to opposite sides. With this arrangement, although the assembly operation is easy, in disassembly all the parts are exposed and unnecessary parts are also disassembled. Moreover, since the split shell halves on opposite sides must be clamped together, as by screws, a number of fastening parts such as screws are required and the construction must be such that the fastening parts will not become loose under heavy shocks, a fact which makes the disassembly operation more difficult. Further, the aforesaid split construction renders the parts liable to loosen, lowers the accuracy of assembly and fails to provide sufficient reliability in shock resistance; therefore, it is not preferable in practice. Thus, it is less easy than expected to provide a rotary hammer which is easy to assemble and disassemble and which has high quality and high reliability. The fact is that rotary hammers are manufactured on the assumption, accepted as unavoidable, that repair and replacement of parts take much time and labor. Accordingly, I proposed a rotary hammer to solve the aforesaid problems (Japanese Patent Application No. 108602/1984). The present invention is an improved version of the same.
Vestfold Privatbaner Vestfold Privatbaner was a private railway company which operated two railways in Vestfold, Norway, the Holmestrand–Vittingfoss Line (HVB) and the Tønsberg–Eidsfoss Line (TEB). The company was created in 1934 as a merger between the two former operating companies of each of the two lines, but Vestfold Privatbaner closed operations already on 1 June 1938. History Prior to the merger, the Norwegian State Railways carried out a detailed survey of the business aspects of the two lines. In the evaluation of TEB, NSB was worried about the amount of traffic and stated that while traffic on other lines was rising, TEB was experiencing a decrease in traffic, which was characterized as "very poor", even by Norwegian standards. The importance of the line was also questioned, as it ran parallel to the Vestfold Line and no station south of Hillestad was ever more than from a station on the main line. NSB also argued that the line's utility was severely reduced because the line had never been extended to Vestfossen Station on the Sørlandet Line. NSB further criticized that trains running on the northern segment did not run the shorter distance to Holmestrand. NSB therefore rejected taking over the line and instead proposed running a bus service from Tønsberg to Auli twice per day. NSB was more positive to the upper section, and proposed that the segment from Hoff to Eidsfoss remain, as most of the freight could be transferred to Holmestrand. This rearrangement would, according to the estimates, break even. However, local politicians were not interested in closing either of the lines. However, to rationalize operations, they merged the two railway companies to create Vestfold Privatbaner on 23 August 1934. It was given a board consisting of five members, one appointed by the Ministry of Labour, one each by the municipal councils of Tønsberg and Holmestrand and one additional member from each town, elected by the annual meeting. 
This was later changed to ten members, one from each of the ten municipalities which the line ran through. As Vittingfoss Bruk was owned by Tønsberg Municipality, they had moved all transport from the pulp mill to Tønsberg, despite the longer ride and higher costs this incurred. Tønsberg Municipal Council therefore demanded that the head office of Vestfold Privatbaner be placed in Tønsberg. They also rejected any proposals to close the Tønsberg–Hillestad segment. Aldermen in Holmestrand disagreed and demanded that the head office be located in Holmestrand, and thus Holmestrand Municipality decided not to buy shares in the new railway company. This resulted in the capital being limited to NOK 50,000, but secured Tønsberg Municipality control over the company. They therefore started planning to close the segment from Hillestad to Holmestrand. This was met with resistance in Holmestrand, as an estimated 44 people would lose their jobs and Viking Melk would possibly have to close down. The financial difficulties at Hvittingfoss Bruk caused stress for the railway company. The detour via Tønsberg increased transport costs, and the director attempted to use trucks to freight pulp to Holmestrand instead of using the train. The factory shut down production several times, leaving the railway without its main customer. The railway company's director thus in 1936 started the process of closing the segment from Hillestad to Holmestrand, and from 1936 only irregular trains ran on the segment. A youth fair resulted in several charter trains being run on 13 June 1937. The final revenue train was a series of half-completed freight cars which were being built by Eidsfoss Verk. Because of the uncertain future of the line, they decided to transfer production to Sundland in Drammen and the unfinished cars were sent via HVB. The last revenue train to Tønsberg ran on 31 May 1938, which, owing to a lack of proper track maintenance, derailed at Hoff Station. 
There was little domestic need for narrow-gauge rolling stock in Norway at the time. NSB was in the process of gauge converting all its narrow-gauge railways to standard gauge, and had a surplus of narrow-gauge rolling stock. The only other remaining private narrow-gauge railway in the country was the Lillesand–Flaksvand Line, although Vestfold Privatbaner's stock was not suited for the line. Attempts to sell the locomotives to Swedish narrow-gauge railways were also fruitless. The only interest was a German railway in current-day Poland which bought locomotive no. four. Three locomotives, Tønsberg, Vittingfoss and Holmestrand, were sold to Norcem Langøya. Eidsfoss was, along with the Eidsfoss Station, sold to Eidsfoss Verk, and remained there until it was scrapped in 1957. By then the railway line at Eidsfoss was still in working order. Two freight cars were sold to NSB. Stations were typically sold to the owners which had sold the land to the railway, while others were bought by the respective municipalities. However, not until 1954 was the last property sold. Norsk Privatebane Historisk Selskap was established in 1967 with the intention of establishing a heritage railway. It first attempted to establish itself at Kopstad Station, but instead settled for Kleppen Station. It was at the time intact with a full inventory, including such items as a complete storage of unused tickets. Several of the railway carriages were identified, most of them used as cabins. One person offered to donate two carriages, with original interior and coloring, but after a building permit was rejected he instead chose to burn them down. A representative traveled to Sweden, where he was able to purchase narrow-gauge rolling stock. Clearing of the line at Hillestad started in May 1968 and an attempt was made to transport the station building at Kleppen up to Hillestad. However, the truck carrying the building had an accident and the building was smashed. The heritage enthusiasm died out. 
Rolling stock Vestfold Privatbaner operated seven steam locomotives and one railcar. Four locomotives were inherited from HVB and the rest from TEB. At the time services closed the company had 95 freight cars, of which 60 were in "good shape". References Bibliography Category:Defunct railway companies of Norway Category:Rail transport in Vestfold Category:Transport companies of Vestfold Category:1934 establishments in Norway Category:1938 disestablishments in Norway Category:Railway companies established in 1934 Category:Railway companies disestablished in 1938 Category:Companies based in Tønsberg
Q: Pandas how to fillna in place on a column? After running: df[['column']].fillna(value=myValue, inplace=True) or: df['column'].fillna(value=myValue, inplace=True) or: # Throws warning "A value is trying to be set on a copy of a slice..." df.fillna({'column': myValue}, inplace=True) or: df[['column']] = df[['column']].fillna({'column': myValue}) or: df['column'] = df['column'].fillna({'column': myValue}) My df['column'] still contains nan (!) list(df['column'].unique()) returns ['a', 'b', 'c', 'd', nan] and sum(pd.isnull(df['column'])) returns 1,000+. I've tried several variations but this problem persists. How do you fillna in place on a column in pandas? A: Ed Chum's comment correctly points out the difference between the methods you proposed. Here is an example I used to show how it works. import pandas as pd import numpy as np d = {'col1': [1, 2, 3, 4], 'col2': [3, 4, np.nan, np.nan]} df = pd.DataFrame(data=d) df col1 col2 0 1 3.0 1 2 4.0 2 3 NaN 3 4 NaN df['col2'].fillna(value=6, inplace=True) col1 col2 0 1 3.0 1 2 4.0 2 3 6.0 3 4 6.0 Having posted this, I think it'd be most valuable to see what your myValue variable's value is and what your dataframe looks like. I discard Aditya's hypothesis: if the nan were a string, it would appear between quotation marks, and it doesn't. Hope this helps!
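To make the failure modes in the question concrete, here is a minimal self-contained sketch (the column name and fill value are invented for illustration). As I understand pandas' indexing semantics: df[['column']] returns a new one-column DataFrame, so inplace=True mutates that temporary copy rather than df; and passing a dict to Series.fillna maps keys to index labels, not column names, so {'column': myValue} fills nothing on a Series.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"column": ["a", "b", "c", "d", np.nan]})

# Does NOT update df: df[['column']] builds a new one-column DataFrame,
# so inplace=True fills only that temporary copy.
df[["column"]].fillna(value="e", inplace=True)
assert df["column"].isna().sum() == 1

# Does NOT update df either: on a Series, the dict keys are treated as
# index labels, and no row of df is labelled 'column'.
df["column"] = df["column"].fillna({"column": "e"})
assert df["column"].isna().sum() == 1

# Works: fill the Series and assign it back.  (A dict keyed by column
# name does work, but only on DataFrame.fillna, not Series.fillna.)
df["column"] = df["column"].fillna("e")
```

Assigning the filled Series back with `df["column"] = ...` sidesteps the copy-vs-view question entirely, which is why it is generally preferred over inplace=True on a selection.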
Q: parsing xml using python / elementree The xml I need to search specifies but does not use a namespace: <WRMHEADER xmlns="http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader" version="4.0.0.0"> <DATA> <PROTECTINFO> <KEYLEN>16</KEYLEN> <ALGID>AESCTR</ALGID> </PROTECTINFO> <LA_URL>http://192.168.8.33/license/rightsmanager.asmx</LA_URL> <LUI_URL>http://192.168.8.33/license/rightsmanager.asmx</LUI_URL> <DS_ID></DS_ID> <KID></KID> <CHECKSUM></CHECKSUM> </DATA> </WRMHEADER> I'd like to read the values for various fields, e.g. data/protectinfo/keylen etc. root = ET.fromstring(sMyXml) keylen = root.findall('./DATA/PROTECTINFO/KEYLEN') print root print keylen This code prints the following: <Element {http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader}WRMHEADER at 0x7f2a7c35be60> [] root.find and root.findall return None or [] for this query. I've been unable to specify a default namespace, is there a solution to querying these values? thanks A: Create a namespace dict: x = """<WRMHEADER xmlns="http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader" version="4.0.0.0"> <DATA> <PROTECTINFO> <KEYLEN>16</KEYLEN> <ALGID>AESCTR</ALGID> </PROTECTINFO> <LA_URL>http://192.168.8.33/license/rightsmanager.asmx</LA_URL> <LUI_URL>http://192.168.8.33/license/rightsmanager.asmx</LUI_URL> <DS_ID></DS_ID> <KID></KID> <CHECKSUM></CHECKSUM> </DATA> </WRMHEADER>""" from xml.etree import ElementTree as ET root = ET.fromstring(x) ns = {"wrm":"http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader"} keylen = root.findall('wrm:DATA', ns) print root print keylen Now you should get something like: <Element '{http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader}WRMHEADER' at 0x7fd0a30d45d0> [<Element '{http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader}DATA' at 0x7fd0a30d4610>] To get /DATA/PROTECTINFO/KEYLEN: In [17]: root = ET.fromstring(x) In [18]: ns = {"wrm":"http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader"} In [19]: 
root.find('wrm:DATA/wrm:PROTECTINFO/wrm:KEYLEN', ns).text Out[19]: '16'
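Pulling the accepted approach together into one runnable sketch (Python 3 syntax; the prefix wrm is an arbitrary name we bind to the document's default namespace):

```python
from xml.etree import ElementTree as ET

xml = """<WRMHEADER xmlns="http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader" version="4.0.0.0">
  <DATA>
    <PROTECTINFO>
      <KEYLEN>16</KEYLEN>
      <ALGID>AESCTR</ALGID>
    </PROTECTINFO>
    <LA_URL>http://192.168.8.33/license/rightsmanager.asmx</LA_URL>
  </DATA>
</WRMHEADER>"""

root = ET.fromstring(xml)

# Elements in a document with a default namespace are stored as
# '{uri}tag', so every step of the find() path needs the mapped prefix.
ns = {"wrm": "http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader"}

keylen = root.find("wrm:DATA/wrm:PROTECTINFO/wrm:KEYLEN", ns).text
algid = root.find("wrm:DATA/wrm:PROTECTINFO/wrm:ALGID", ns).text
print(keylen, algid)
```

An equivalent alternative is to embed the URI directly in the path, e.g. root.find("{http://schemas.microsoft.com/DRM/2007/03/PlayReadyHeader}DATA/..."), which avoids the namespace dict at the cost of much longer expressions.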
Oestrogen-induced expression of a novel liver-specific aspartic proteinase in Danio rerio (zebrafish). Aspartic proteinases are a group of endoproteolytic proteinases active at acidic pH and characterized by the presence of two aspartyl residues in the active site. They include related paralogous proteins such as cathepsin D, cathepsin E and pepsin. Although extensively investigated in mammals, aspartic proteinases have been less studied in other vertebrates. In a previous work, we cloned and sequenced a DNA complementary to RNA encoding an enzyme present in zebrafish liver. The sequence proved to be homologous to a novel form of aspartic proteinase first described by us in Antarctic fish. In zebrafish, the gene encoding this enzyme is expressed only in the female liver, in contrast with cathepsin D, which is expressed in all the tissues examined independently of sex. For this reason we have termed the new enzyme liver-specific aspartic proteinase (LAP). Northern blot analyses indicate that LAP gene expression is under hormonal control. Indeed, in oestrogen-treated male fish, cathepsin D expression was not enhanced in the various tissues examined, but the LAP gene product appeared exclusively in the liver. Our results provide evidence for oestrogen-induced expression of the LAP gene in liver. We postulate that the sexually dimorphic expression of the LAP gene may be related to the reproductive process.
Designed with the connoisseur in mind, Arc introduces the Mineral Wine Glasses. Made from tempered glass, these fine wine glasses feature Sheer Rim technology for a better tasting experience. The ultra-fine rim gives a more luxurious feel and is ideal for wine tasting. A durable, dishwasher-safe construction makes this glass more resistant to the wear of everyday use.
Tevez inspires City fight back

London - Carlos Tevez marked his Manchester City return by setting up Samir Nasri's late winner in a vital 2-1 home victory over Chelsea on Wednesday that kept the pressure on leaders Manchester United in the title race. The Argentine, an outcast after falling out with the club in September, came off the bench with City trailing 1-0 to Gary Cahill's deflected shot, and his moment of class in the 85th minute lifted City to within a point of their local rivals. Sergio Aguero had levelled from the penalty spot for City, who have won all 15 home league matches this season. Tottenham Hotspur drew 1-1 at home with Stoke City in their first game since the FA Cup tie against Bolton Wanderers was abandoned on Saturday following Fabrice Muamba's cardiac arrest. Rafael van der Vaart scored a late equaliser for Spurs to end a run of three league defeats, but they fell below north London rivals Arsenal, who won 1-0 at Everton to climb to third. Thomas Vermaelen's first-half goal secured rejuvenated Arsenal's sixth successive league win, which took them into the top three for the first time this season. United have 70 points from 29 games, with City on 69. Arsenal have 55 points to Tottenham's 54, and Chelsea, who host Tottenham on Saturday, are out of the Champions League places on 49. Liverpool suffered a late collapse in west London, throwing away a 2-0 lead to concede three late goals and lose 3-2 at struggling Queens Park Rangers, who moved out of the relegation zone above Muamba's Bolton. City boss Roberto Mancini, who said Tevez would never play for the club again when the striker refused to warm up in a Champions League match in Munich six months ago, called the Argentine off the bench in the 65th minute with the home side desperate for a way back into the game. This time Tevez needed no persuasion to join the fray and was given a rousing welcome by City's fans.
After 78 minutes Michael Essien was adjudged to have handled a shot from Pablo Zabaleta, and Aguero stayed cool to beat Petr Cech from the penalty spot. Seven minutes later Tevez showed why he used to be such a hero at the club, playing a cute reverse pass in a crowded area to Nasri, who tucked away City's winner. "We deserved to win the game. We won because we had the desire to win. This was more than three points, this game." Chelsea's caretaker manager Roberto Di Matteo, who gave a rare start to Fernando Torres after his two goals in the FA Cup against Leicester City, looked on course for a fifth consecutive win since stepping in for Andre Villas-Boas after Cahill's 60th-minute shot deflected in off Yaya Toure. He said the penalty that got City back on level terms was the turning point. "We were defending well until that point," he said. "It was a bit harsh. It was a handball but I don't think he could have disappeared because he was very close to the ball as well." Tottenham fans sang the name of Muamba at White Hart Lane, scene of the dramatic events on Saturday when the Bolton player's life was saved by medics and doctors on the pitch after he collapsed in the first half of the FA Cup tie. Muamba is recovering in the intensive care department of a London hospital, and Spurs manager Harry Redknapp had been hoping his side would get back to winning ways against Stoke after three defeats in a row. Cameron Jerome's tap-in gave visiting Stoke a surprise lead in the 75th minute before Van der Vaart, one of the players clearly distressed by Muamba's collapse, headed his side's late equaliser from Gareth Bale's cross. "It was a game I thought we would win and we're disappointed to only take a point, but it might come in handy at the end of the season," said Redknapp, whose side now face a scrap for a top-four finish. Goals from Shaun Derry, Djibril Cisse and Jamie Mackie in the final 13 minutes at Loftus Road gave QPR a remarkable victory over Liverpool.
"We've come back from a situation where we looked dead and buried," said QPR boss Mark Hughes. "I thought the crowd were fantastic as they never lost faith in us. Once we got a little bit of momentum they drove us over the line." 24.com publishes all comments posted on articles provided that they adhere to our Comments Policy. Should you wish to report a comment for editorial review, please do so by clicking the 'Report Comment' button to the right of each comment.
Q: MVC pattern and RoR, or where should that code be placed?

My task is to take some data from users. The next part is to make recurring requests to a third-party site's API and to process the responses from it. I don't know where this part should be placed: model, controller, or module? The final part of the app will send statuses to users' emails.

A: Processing user input from an HTTP request is usually done in the controller. Send a request to the Rails server including the user input; the request will be routed to the appropriate controller action. In the controller action, form an HTTP request to the external API and include the user input in the request, using something like RestClient. Finally, you will send an email to the user and include the request statuses by calling the deliver! method on a mailer class. Example:

    class UsersController < ApplicationController
      def controller_action
        @user_input = params[:query]

        # Build the external API request URI.
        # Using www.icd10api.com as an example.
        url = Addressable::URI.new(
          scheme: "http",
          host: "www.icd10api.com",
          query_values: {code: @user_input, r: "json", desc: "long"})

        # Perform the external request and parse the response.
        resp = JSON.parse(RestClient.get(url.to_s))

        # Finally, deliver the email.
        UserMailer.statuses_email(resp).deliver!

        # Return a status code.
        render status: 200
      end
    end

You can always refactor your code into a module, but I only do this if it's used in 3+ locations. If you're using this as more than a demo app, I would refer to the link in Andrew CP Kelley's comment: Where do API calls go in a ruby on rails MVC framework project?

References: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller

You also might want to look into concerns if you're using Rails 4+: How to use concerns in Rails 4
Distributed Adaptive Fuzzy Event-Triggered Containment Control of Nonlinear Strict-Feedback Systems. In this paper, the adaptive fuzzy event-triggered containment control problem is addressed for uncertain nonlinear strict-feedback systems guided by multiple leaders. A novel distributed adaptive fuzzy event-triggered containment controller is designed using only the information of each individual follower and its neighbors. Moreover, a distributed event-trigger condition with an adjustable threshold is developed simultaneously. The designed containment control law is updated in an aperiodic manner, only when the event-triggered errors exceed tolerable thresholds. It is proved that uniformly ultimately bounded containment control can be achieved and that no Zeno behavior is exhibited under the proposed control scheme. Simulation studies are presented to illustrate the effectiveness of the theoretical results and the advantages of the proposed event-triggered containment control.
96 F.2d 816 (1938) INDIANAPOLIS GLOVE CO. v. UNITED STATES. No. 6421. Circuit Court of Appeals, Seventh Circuit. April 4, 1938. James W. Morris, Asst. Atty. Gen., Sewall Key and Leon Cooper, Sp. Assts. to Atty. Gen., and Val Nolan, U. S. Atty., and B. Howard Caughran, Asst. U. S. Atty., both of Indianapolis, Ind., for the United States. Louise Foster, Sp. Asst. Atty. Gen., for the United States. Paul Y. Davis, Kurt F. Pantzer, Ernest R. Baltzell, and William G. Sparks, all of Indianapolis, Ind., for appellee. Before EVANS and MAJOR, Circuit Judges and LINDLEY, District Judge. *817 MAJOR, Circuit Judge. This is an appeal from a judgment of the District Court in an action to recover income taxes in the amount of $10,681.44, with interest thereon, alleged to have been overpaid by appellee for the calendar year 1929; a claim for refund having been disallowed by the Commissioner of Internal Revenue. The applicable statute and regulation is section 23(a) of the Revenue Act of 1928, 45 Stat. 799, 26 U.S.C.A. § 23 and note, and Article 128 of Regulations 74: Section 23(a): "In computing net income there shall be allowed as deductions: "(a) Expenses. All the ordinary and necessary expenses paid or incurred during the taxable year in carrying on any trade or business, including a reasonable allowance for salaries or other compensation for personal services actually rendered." Article 128: "Bonuses to employees. — Bonuses to employees will constitute allowable deductions from gross income when such payments are made in good faith and as additional compensation for the services actually rendered by the employees, provided such payments, when added to the stipulated salaries, do not exceed a reasonable compensation for the services rendered. It is immaterial whether such bonuses are paid in cash or in kind or partly in cash and partly in kind. 
Donations made to employees and others, which do not have in them the element of compensation or are in excess of reasonable compensation for services, are not deductible from gross income." The facts were largely stipulated and are substantially as follows: Appellee is and was an Indiana corporation engaged in the business of manufacturing and selling gloves and similar products. In June, 1925, at a special meeting of stockholders, it was authorized to offer 850 shares of its common stock, having a total par value of $85,000, to 15 designated employees at par, and to accept therefor their respective noninterest-bearing demand notes aggregating $85,000; the said shares of stock to be held as collateral security for the payment of the notes. Stock dividends were to be applied upon the payment of the notes and it was agreed that the notes should be canceled and surrendered to the makers when the stock dividends equaled the face value of such notes. The stock was issued in the names of the employees as authorized, and the latter delivered to appellee their demand promissory notes in respective amounts exactly equal to the par value of the shares issued to each. The notes so received were carried on appellee's books of accounts in the notes receivable account as an asset, and the shares of stock issued as aforesaid were entered and carried in appellee's books of account in the capital stock account. The certificates representing the 850 shares of stock so issued were retained by appellee as collateral security for the payment of said notes. The market value of the stock when issued was $159.50 per share. During the years 1925 to 1929, inclusive, appellee earned $108.48 net for each share of its common stock outstanding during those years, after the payment of dividends on its preferred stock. 
During this period, cash dividends amounting to $37 per share were paid by appellee on its outstanding stock, including that issued in the names of the employees referred to, and this amounted to $31,450 paid to such employees. None of these cash dividends were applied on the said promissory notes; in fact, no payment whatever was made upon such notes by the makers thereof. On August 22, 1929, the common stock of appellee had a book value of approximately $200 per share, and on that date, appellee, by appropriate action, changed the character of its common stock from $100 par value shares to no par value shares and authorized the issuance of 4 shares of the latter for each share of the former. Thus, the 850 shares of par value stock issued to the 15 individual employees were converted into 3,400 shares of no par value stock; the latter having a market value of $57.12. By action of appellee's board of directors, August 24, 1929, new notes of the employees were substituted, aggregating $85,000, which were interest bearing, and the cash, as well as stock dividends, were to be applied on the payment of such notes. The old notes were returned to the makers and at the same time half of the 3,400 shares of common stock was delivered to such employees, and the stock thus delivered affords the basis for the present controversy. The remaining one-half of said stock, or 1,700 shares, was retained and continued to be held as collateral for the substituted notes. The certificates delivered to the *818 employees in August, 1929, were not reported as additional compensation to those individuals in the income tax return filed by appellee for that year, nor were they entered upon appellee's books or records as compensation paid or owing to said employees. On March 8, 1932, appellee, by appropriate action, relieved the makers of the substituted notes of all liability to the company, ordered the notes surrendered, and repurchased the 1,700 shares of stock held as collateral. 
January 6, 1932, appellee filed a claim for a refund of 1929 income taxes in the sum of $10,681.44, based upon the contention that the fair market value of the 1,700 shares of no par value stock, the certificates for which were delivered to the employees on August 24, 1929, represents additional compensation for services rendered by such individuals. Appellant has set forth in considerable detail the salaries and compensation received by each of such employees for each of the years from 1925 to 1929, inclusive, with a view of showing that even though the 1,700 shares of stock thus delivered in 1929 be considered as additional compensation, it was not reasonable as such within the terms of the pertinent statute and regulation. In view of the conclusion reached, we do not regard it as important in this opinion to set forth such figures. The essential questions in controversy: First, are the 1,700 shares of common stock of appellee issued to its employees in 1925 and retained by it as collateral security for the notes executed by the employees in favor of appellee to be treated as additional compensation for services rendered, or were they sold to such employees; second, if such shares of stock be treated as additional compensation, is it such for the year 1925, when issued, or 1929, when delivered; and, third, if treated as additional compensation to employees, is the amount reasonable when added to other compensation paid to such employees? The court below made special findings of fact which are binding upon this court if there be substantial evidence fairly tending to establish the facts as found. United States v. Jefferson Electric Co., 291 U.S. 386, 407, 54 S.Ct. 443, 450, 78 L. Ed. 859; Law v. United States, 266 U.S. 494, 496, 45 S.Ct. 175, 176, 69 L.Ed. 401. 
Pertinent extracts from such findings are found in the footnote.[1] The court permitted certain officers of appellee, over the objection of appellant, to testify as to certain matters pertaining to the agreement between appellee and its employees at the time of the related transaction of June 23, 1925. It is claimed the effect of this testimony was to *819 vary the terms of the written instruments executed by the parties at that time. An investigation of the record convinces us that some of the findings as made by the court find support only by giving weight to this parol testimony. Undoubtedly, the general rule is that parties to a written contract are precluded from showing, by parol evidence, that the contract was something different from that contained in the instrument itself. From an investigation of the authorities, however, we conclude such rule cannot be invoked by a third person not a party to the written instrument for whose benefit and protection the rule has been established. In White v. Woods, 183 Ind. 500, 109 N.E. 761, is an exhaustive analysis of the rule and authorities. The court on page 504 of 183 Ind., 109 N.E. 761, 763, said: "It is unquestionably true that the rule does not operate to exclude parol evidence, otherwise admissible, in a controversy between strangers, or one of the parties and strangers, who are not representatives or privies of a party, and have no connection with the instrument, where they (the strangers) are not seeking to enforce it as effective for their own benefit, or the like." Thus, the parol evidence rule sought to be invoked by appellant not being applicable, the court properly admitted the oral testimony, and, when such is considered, we find substantial evidence to support the findings of the court with reference to the agreement between appellee and its employees in 1925. 
We also find substantial evidence to support the court's finding that the additional compensation, if such it be, represented by the 1,700 shares of stock delivered to the employees in 1929, was reasonable to the extent and in the amount as found by the court. Accepting the trial court's findings of fact as we must, we are called upon to determine whether the law has been properly applied. Such determination revolves largely around the question as to whether the transaction between appellee and its employees in 1925 constituted an absolute sale of the stock or whether it was merely an arrangement between the parties by which the employees were to receive an additional compensation for services rendered by them in succeeding years when and if certain things occurred. The solution of this question is not an easy one and a plausible argument may be made upon both sides. Appellant contends that the transaction was consummated and completed when the shares of stock were issued and the notes accepted, and that it contains all the elements of a sale. It treats the notes the same as a money consideration and insists the transaction is no different from what it would have been had appellee delivered the stock to the employees simultaneously with the delivery of the notes. The fact, of course, that the certificates were issued in the name of the employees and notes accepted for the agreed value of the stock tends to support this theory. Such position, however, is weakened, if not refuted, by the findings of fact as made by the District Court. We think, in solving the question, weight must be given, not only to such findings, but to the circumstances surrounding the transaction, as well as the conduct and acts of the parties bearing upon their intentions as to what was the true agreement. In other words, what did the parties have in their minds at that time? 
Notwithstanding that the employees gave their demand notes, valid upon their face, yet they were told that such notes would not be collected and that they were to be paid entirely from stock dividends when such dividends were sufficient for such purpose. The fact that the notes were noninterest bearing is consistent with this theory and the further undisputed fact that for five years appellee made no effort to, and judged by its actions, never had any intention of collecting said notes in whole or in part, together with the fact that the employees made no payment, is to us a strong circumstance that said notes were given in accordance with an agreement that they were not to be paid. It is of some consequence, we think, that the 1,700 shares of stock, delivered to the employees in 1929, were paid for solely by appellee and without any cost to the employees. In other words, unless it be the services rendered by the employees, the stock received by them in 1929 was without consideration on their part. While the subsequent acts and conduct of the parties are not determinative of the agreement made in 1925, yet it seems to us such acts and conduct must be considered in ascertaining the real contract entered into between appellee and its employees. It is said that appellee could have, at any time, brought suit and recovered on said notes. To have so done would have been in violation of its promise, and the *820 fact that it retained such notes in its possession for five years without doing so is a rather strong circumstance in support of the findings of the District Court that there was an agreement to the contrary. In derogation of the contention that the arrangement between appellee and its employees was for the purpose of providing additional compensation, it is said there was no provision which bound the latter to continue in the service of appellee. 
The fact is, however, that the employees did remain in appellee's employment, and even though there was no express provision, what other conclusion can be reached than that it was implied that they should do so? What could have been the purpose of appellee in issuing the shares of stock to these employees, taking their notes without interest, coupled with an agreement that the makers of the notes would never be called upon for payment, if it did not contemplate the continued services of such employees? We are unable to find any satisfactory answer to such query, except such arrangement was made as an inducement to the employees to give their best services so that the business might prosper, in which prosperity the employee was to share in the form of additional compensation. The hopes entertained by the parties in 1925 were realized to such an extent that in 1929 the shares of stock had substantially doubled in value. In the reorganization of that year, the 850 shares of stock were canceled and 3,400 shares issued in their place. Half of this stock was sufficient to secure the substituted notes, the other 1,700 shares were merely a reflection of the prosperity enjoyed by appellee since 1925, which, in conformity with the understanding of the parties, the employees were permitted to share by receiving the shares of stock in question. It seems apparent it must have been as an additional compensation which appellee was entitled to treat as a deduction for the year 1929. We are unable to adopt appellant's theory that, if appellee is entitled to a deduction for additional compensation, it was for the year 1925, rather than 1929. At that time the employees received nothing except to become participants in an arrangement by which they might benefit in the future. Nor is it essential that the services be rendered during the year for which the deduction is claimed. As was said in Lucas v. Ox Fibre Brush Co., 281 U.S. 115, 116, 119, 50 S. Ct. 273, 274, 74 L.Ed. 
733: "The statute does not require that the services should be actually rendered during the taxable year, but that the payments therefor shall be proper expenses paid or incurred during the taxable year." Appellant relies strongly upon the opinion of this court in Gardner-Denver Company v. Commissioner, 7 Cir., 75 F.2d 38. In many respects, the situation there presented is similar to the one here. We find, however, this clear distinction, in that, there the employees gave notes in an amount equal to the market value of the stock which were payable at all events with interest from date. There was a written agreement that the employees were to pay so much per month until the indebtedness represented by the notes was extinguished. The opinion of the Board of Tax Appeals in that case, 27 B.T.A. 1171, goes into the facts more fully than does this court, and it is there plainly shown that the full amount of the notes given was to be paid by the employees either by deducting certain amounts from their wages, or otherwise. That was an essential, if not controlling, element, from which the court concluded the involved agreement constituted a sale of the stock. We do not regard that case as controlling here, where the makers of the notes paid nothing and, in fact, were relieved of all obligation to pay. Both sides rely upon Alger-Sullivan Lumber Company v. Commissioner of Internal Revenue, 5 Cir., 57 F.2d 3. In that case the taxpayer entered into an agreement with its employees whereby its stock was to be held in its treasury for the employees at $100 per share. The same was to be paid for out of dividends arising from the stock until the purchase price was discharged, when the employees would be entitled to receive the stock. In 1921, the dividends were sufficient to pay the purchase price of the stock, when it was issued to the employees. 
In discussing the question involved, the court on page 5 of 57 F.2d said: "It is clear that the contracts in question lacked essential elements of a sale. There was no agreement to buy. There was no price in money to be paid by the employee, and he gave nothing for the stock except his honest and faithful services, which he was obliged to continue until the dividends *821 credited equalled the par value of the stock. If there had been no dividends, he might never have become the owner of the stock set aside to him. The other provisions of the contract are immaterial, as conditions did not arise to bring them into operation. It is evident that petitioner intended in good faith to give bonuses in stock to valued employees as additional compensation, and there was no intention to make an outright sale of stock to them." This language, as will be noted, is very pertinent to the instant situation. The same court in Moore v. McGrawl, 5 Cir., 63 F.2d 593, again had occasion to consider a similar question. In the latter case the employee gave his note payable on demand, but the note provided that while there was expressed an unconditional obligation to pay, nevertheless, the real agreement was that the note should be paid only out of dividends accruing upon the stock. The latter was issued in the name of the employee, indorsed by him, and redelivered to the company. It will be noted that this situation is strikingly similar to that here presented. On page 594 of 63 F.2d it is said: "In a case involving a similar contract, we held that, as there was no obligation on the part of the employee to pay for the stock, the agreement was lacking in mutuality, and consequently that there was no sale or transfer of title to the stock. Alger-Sullivan Lumber Co. v. Commissioner of Internal Revenue (C.C.A.) 57 F.2d 3. There is nothing to differentiate this from that case, and we see no reason to change our opinion." Appellee stresses the case of Hudson Motor Car Company v. 
United States, 3 F.Supp. 834, 843, an opinion by the United States Court of Claims, while appellant insists the facts in that case are at such variance with those here that it cannot be regarded as an authority. However that may be, the language of the court strikes us as being appropriate. On page 846 of 3 F.Supp. it is said: "When we come to consider the entire contract and what transpired with respect thereto, the conclusion seems inescapable that whatever Hills received in the form of the stock was compensation for services rendered. Any other conclusion would be tantamount to saying that plaintiff gave the stock to him, since it is clear that no payments were made for the stock other than the rendering of service. It is, of course, true that dividends were allowed to accumulate on the stock until such accumulation equaled what, in one sense, might be termed the purchase price, and from this defendant seems to contend in effect that Hills was constructively receiving dividends, which were being applied in satisfaction of the purchase price, but how did Hills acquire the right to receive the dividends which might be so applied? It would seem a most unusual situation for an individual to be able to acquire stock under a contract by which the stock would be paid for from the dividends without any other obligation resting on the individual to make payment. Any one would be willing to make acquisitions under such circumstances, since there would be everything to gain and nothing to lose. On the other hand, it is difficult to suppose a case where the owner of valuable dividend-paying stock would be willing to `sell' it under such term with no other consideration." Many other cases are cited as sustaining the positions of the respective parties, an analysis of which would unduly prolong this opinion. Naturally, the results reached by the courts in such cases must depend to a considerable extent upon the particular situation presented. 
A study of the cases convinces us that courts generally have undertaken to give effect to the real intention of the parties to such agreements. The substance of the arrangement, rather than the form, is the matter for ascertainment. We are of the opinion that there was no intention on the part of either appellee or its employees that the transaction of June, 1925, should constitute a sale of the shares of stock. On the contrary, we think it was their intention and understanding to provide a plan by which the employees were to receive additional compensation for services rendered. The acts and conduct of the parties at the time the agreement was entered into, as well as subsequently, are consistent with the latter theory — inconsistent with the former. Thus concluding, we find no reversible error in the judgment of the court below. The same is affirmed. EVANS, Circuit Judge (dissenting). For two reasons I find myself unable to accept the conclusion of the majority of the court. (1) Plaintiff failed to bring its claim for deduction within section 23(a), Revenue *822 Act 1928, 26 U.S.C.A. § 23 and note, or article 128 of Regulations 74. (2) If the payment of a bonus is established it was deductible in 1925 and not in 1929, as claimed. (1) In reaching conclusion No. 1, I assume (a) that the burden is upon the taxpayer to prove facts upon which the deduction depends and (b) all the essential facts were stipulated and were covered by the court's thorough and specific findings. These covered the detailed transactions which are briefly set forth in the statement appearing in the majority opinion. They raise a question of law. The precise question may be stated thus: Do the facts as stated show the payment of a reasonable allowance for salaries or other compensation for personal services actually rendered by the fifteen employees of plaintiff? 
In answering this question it must be, and is, assumed that "bonuses" are deductible when they are reasonable and are paid in good faith and as additional compensation for services actually rendered by the employees. The execution of notes by the fifteen employees and the pledging of the stock which the notes purchased, to secure the payment of their notes, conclusively negatives the existence of a gift of such stock as a bonus to said employees. The transaction was doubtless motivated by a worthy desire, namely, to distribute stock among faithful employees. This motive was not, however, determinative of the grant of a bonus, the essential feature of which is a gift or reward for services rendered. The instant transaction was like unto one where the owner of a farm conveys it to another and takes back a note and mortgage. Here the employees gave their notes for the stock which was issued to them and then assigned the stock as collateral to secure their notes. The rate of interest or the absence of any interest provision in the notes has no bearing on the nature of the transaction. Positive, unequivocal written documents evidencing an absolute liability such as those before us speak for themselves and establish a status which cannot and should not be set aside in order that one of the contracting parties may thereafter avoid or lessen its Federal income tax. (2) Equally persuasive is the second reason. If bonuses were found to have been given, they were given in 1925 and not in 1929. The transaction which is the basis of the asserted payment of a bonus or additional compensation occurred in 1925. There was a modification of the 1925 agreement in 1929 but the modification did not affect the facts which alone determined the existence or non-existence of a bonus grant in 1925. Solid substantiation of the conclusion that the stock was sold in 1925 appears in the fact that the employees thereafter received the dividends on said stock. 
In truth, the modification of the agreement in 1929 which accompanied the stock dividend of that year called for new notes which drew interest to replace the old notes which did not draw interest. If we appraise the transactions as plaintiff contends, we have the execution of notes by employees but with the verbal understanding that they were not enforceable and were void as between the parties. In 1929, however, the employees gave new notes which were enforceable and drew interest. Such a modification was not a gift, — a bonus to the employees. The 1929 transaction was favorable to the employer and it could make no valid deduction for it in its income tax return. If plaintiff's position be upheld, then a taxpayer may choose the year when he will claim his bonus deductions. That is to say, a solemn, unequivocal written agreement made by employer and employee for the sale of stock, the delivery of notes therefor, and the pledging of the stock to secure the purchase money notes, may, notwithstanding the employer pays and the employee receives cash dividends for several years, be nullified by an oral agreement of the parties to the effect that the written documents were unenforceable and void, but may become binding and valid if and when the company has an abnormal income and a correspondingly large Federal income tax which it is desirous of reducing by deductions of a so-called bonus. In other words, a bonus agreement may be a nudum pactum during a lean year but a deductible bonus in a prosperous year. The plan has the merit of ingenuity, and it has more than mere ingenuity if it succeeds in permitting the taxpayer to reduce its income in a year when its profits are very large. NOTES [1] "At a special meeting of the stockholders of the plaintiff corporation held on June 23, 1925, as recorded in the minutes of the said meeting, the following occurred: "Messrs. 
Zwick and Elsey each made short talks expressing their appreciation for the loyalty manifested by the employees, advising that arrangements had been made to distribute eighty-five thousand dollars of common stock to some of the men holding more responsible positions, taking their notes therefor and holding the company script as collateral until such time as additional stock dividends can be paid from surplus. The company then proposes to surrender the notes to employees instead of giving them the equivalent amount of additional stock. By this method the notes are to be eventually paid. * * * "The fifteen employees above mentioned were present at the meeting of plaintiff's stockholders held June 23d, 1925. The terms of the stockholders' resolution adopted at such meeting were explained to them, and that their ultimate receipt of the stock which they were to receive pursuant to such resolution depended upon the making, during the period, of sufficient earnings to justify the payment of a 100% stock dividend. It was also stated to them that they would not be required to make payment of the notes in any other manner. * * * "Plaintiff's purpose in entering into the transaction shown in the last finding was to pay to the employees named additional compensation for their services in such manner that its ultimate payment would depend upon the success of the Company and of the efforts of these employees in promoting the company's business. * * * "The 1700 shares of stock distributed in 1929 to the fifteen employees named in the resolution of June 23d, 1925, constituted reasonable additional compensation for services actually rendered to plaintiff by the employees receiving the same during the years 1925 to 1929, inclusive, and the value of such 1700 shares of stock, namely $97,104, was an expense paid by plaintiff in the year 1929 in the carrying on of plaintiff's business."
The Three Billy-Goats Gruff
The Three Billy Goats Gruff is a famous Norwegian folk-tale that will charm any child. A mean and hungry troll lives under a bridge. He's hungry for a meal and would love to snatch and eat any goat attempting to cross his bridge. How can the three goats get across safely? They must be clever! A wonderful children's story to read out loud in a classroom or before bedtime.
Once upon a time there were three billy goats, who were to go up to the hillside to make themselves fat, and the name of all three was "Gruff." On the way up was a bridge over a cascading stream they had to cross; and under the bridge lived a great ugly troll, with eyes as big as saucers, and a nose as long as a poker.
So first of all came the youngest Billy Goat Gruff to cross the bridge. "Trip, trap, trip, trap!" went the bridge. "Who's that tripping over my bridge?" roared the troll. "Oh, it is only I, the tiniest Billy Goat Gruff, and I'm going up to the hillside to make myself fat," said the billy goat, with such a small voice. "Now, I'm coming to gobble you up," said the troll. "Oh, no! pray don't take me. I'm too little, that I am," said the billy goat. "Wait a bit till the second Billy Goat Gruff comes. He's much bigger." "Well, be off with you," said the troll.
A little while after came the second Billy Goat Gruff to cross the bridge. Trip, trap, trip, trap, trip, trap! went the bridge. "Who's that tripping over my bridge?" roared the troll. "Oh, it's the second Billy Goat Gruff, and I'm going up to the hillside to make myself fat," said the billy goat, who hadn't such a small voice. "Now, I'm coming to gobble you up," said the troll. "Oh, no! don't take me. Wait a bit till the big Billy Goat Gruff comes. He's much bigger." "Very well, be off with you," said the troll.
But just then up came the big Billy Goat Gruff. Trip, trap, trip, trap, trip, trap! went the bridge, for the billy goat was so heavy that the bridge creaked and groaned under him. "Who's that tramping over my bridge?" roared the troll. "It's I! The big Billy Goat Gruff," said the billy goat, who had an ugly hoarse voice of his own. "Now I'm coming to gobble you up," roared the troll. Well, come along!
I've got two spears, And I'll poke your eyeballs out at your ears; I've got besides two curling-stones, And I'll crush you to bits, body and bones. That was what the big billy goat said. And then he flew at the troll, and poked his eyes out with his horns, and crushed him to bits, body and bones, and tossed him out into the cascade, and after that he went up to the hillside. There the billy goats got so fat they were scarcely able to walk home again. And if the fat hasn't fallen off them, why, they're still fat; and so, snip, snap, snout, this tale's told out.
For the average American who will never see it, the new US Embassy in Baghdad may be little more than the Big Dig of the Tigris. Like the infamous Boston highway project, the embassy is a mammoth development that is overbudget, overdue, and casts a whiff of corruption. For many Iraqis, though, the sand-and-ochre-colored compound peering out across the city from a reedy stretch of riverfront within the fortified Green Zone is an unsettling symbol both of what they have become in the five years since the fall of Saddam Hussein, and of what they have yet to achieve. "It is a symbol of occupation for the Iraqi people, that is all," says Anouar, a Baghdad graduate student who thought it was risk enough to give her first name. "We see the size of this embassy and we think we will be part of the American plan for our country and our region for many, many years." The 104-acre, 21-building enclave – the largest US Embassy in the world, similar in size to Vatican City in Rome – is often described as a "castle" by Iraqis, but more in the sense of the forbidden and dominating than of the alluring and liberating. "We all know this big yellow castle, but its main purpose, it seems, is the security of the Americans who will live there," says Sarah, a university sophomore who also declined to give her last name for reasons of personal safety. "I heard that no one else can ever reach it." Among the Iraqi elites who have suffered so much in the chaos of the post-Hussein period – the professors, doctors, architects, and artists – the impact of the new American giant is often expressed more symbolically but sometimes using the same terms. Castles in the sand "Saddam had his big castles; they symbolized his power and were places to be feared, and now we have the castle of the power that toppled him," says Abdul Jabbar Ahmed, a vice dean for political sciences at Baghdad University. 
"If I am the ambassador of the USA here I would say, 'Build something smaller that doesn't stand out so much, it's too important that we avoid these negative impressions.' " Yet while the new embassy may be the largest in the world, it is not in its design and presence unlike others the US has built around the world in a burst of overseas construction since the bombings of US missions in the 1980s and '90s. Efforts to provide the 12,000 American diplomats working overseas a secure environment were redoubled following the 9/11 attacks. Designed according to what are called the "Inman standards" – the results of a 1985 commission on secure embassy construction headed by former National Security Agency head Bobby Inman – recent embassies have been built as fortified compounds away from population centers and surrounded by high walls. In the case of larger embassies in the most dangerous environments, as in Baghdad, secure housing is included, along with some of the amenities of home – restaurants, gyms, pools, cinemas, shopping – that can give the compound the air of an enclave. The US government cleared the new Baghdad Embassy for occupancy last week, with the embassy's 700 employees and up to 250 military personnel expected to move in over the month of May, according to Ambassador Ryan Crocker. $1 billion a year to operate The $740 million compound – expected to cost more than $1 billion a year to operate – was originally expected to cost $600 million to build and was to open in September 2007. Design changes and faulty construction caused repeated delays. Congress learned last fall of problems with the site's electrical system, and early this year reports surfaced of significant problems with the fire-fighting systems. Nevertheless, embassy personnel have been anxious for the complex, with more than 600 blast-resistant apartments, to open and give them some refuge from the mortar fire that has increasingly targeted the Green Zone this year. 
Last month, a mortar slammed into one of the unfortified trailers where personnel now sleep, killing an American civilian contractor. At least two US soldiers have died from rocket fire on the Green Zone since then. But even the embassy's opening may not be assuaging diplomats' concerns about assignments in Iraq. Last week, the State Department warned that it may start ordering employees to serve at the embassy next year if more volunteers do not come forward for the 300 posts expected to open. The State Department announcement follows a similar warning last fall of a shortfall of volunteers for about 50 Iraq positions. Candidates were eventually found without any compulsory assignments for 2008, but the prospect of ordered assignments to a war zone caused tensions at the department. Such challenges to the full manning of the new embassy have yet to reach Iraqi ears. Still, some Iraqis who condemn the imagery of the imposing new compound say they are even more critical of what, in an indirect way, it also tells Iraqis about their own leadership. "What does it say to Iraqis that we cannot walk along a beautiful part of the river in our own land because of this big American place?" says Qasim Sabti, an Iraqi artist and Baghdad gallery owner. "But it shows us something else about our own government," he adds. "At least the Americans could build this thing, but we Iraqis have no new buildings or streets, everything is destroyed – but still the corruption is so great that the money goes into pockets before it can build something new." Other Iraqis say the embassy highlights the long-term interests the US has in both Iraq and the region. "If it is so big, it is a reflection of the size of the designs they have for Iraq and the Middle East," says Maimoon al-Khaldi, an actor and professor at Baghdad's Fine Arts Academy. "It is a sign of their energy agenda and of their security agenda in this region," he adds. 
"This building faces the Iraqis, yes, but also the Iranians they have declared to be their enemies." Mr. Jabbar says the Americans "surely have a right and duty to protect their delegation here." But he says he still wouldn't have built something so large. "That is too much of a symbol," he says. "It sends a message to the Iraqis that says, 'Be careful, we removed Saddam Hussein and we can remove what has come after him anytime we want.'"
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.

using System;
using System.Runtime.InteropServices;

internal static partial class Interop
{
    internal static partial class Gdi32
    {
        /// <remarks>
        /// Use <see cref="DeleteDC(HDC)"/> when finished with the returned DC.
        /// Calling with ("DISPLAY", null, null, IntPtr.Zero) will retrieve a DC for the entire desktop.
        /// </remarks>
        [DllImport(Libraries.Gdi32, SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern HDC CreateDC(string lpszDriver, string? lpszDeviceName, string? lpszOutput, IntPtr devMode);
    }
}
Pakistan take on New Zealand in 2nd ODI tomorrow
Pakistan have won the toss and will bat first in the second ODI against the Black Caps in Nelson. After their 61-run win in the first ODI, via the Duckworth-Lewis method, the Black Caps can extend their current streak to nine with victory in game two in Nelson on Tuesday. Fakhar sustained a contusion to his right thigh while fielding during the first ODI played in Wellington on January 6. Pakistan, the visiting team, struggled to read the conditions. In their allotted 50 overs, New Zealand posted a challenging total. Pakistan, too, shouldn't be making wholesale changes to their team. Williamson - dropped by Sarfraz Ahmed on 26 - scored at nearly a run a ball, but resisted the temptation to cut loose and felt that paid off. It was Williamson's 10th ODI ton, his 27th international century, and came off 117 balls. In the chase, Pakistan made just 166 runs in 30.1 overs. Over the past three years they have been beaten 2-0 in two separate series and lost a home series to New Zealand in the United Arab Emirates 3-2. An 83-run stand ended when Hasan Ali finally removed Munro three balls into the 13th over. Munro, the top-ranked batsman in Twenty20 internationals, hit 58 from 35 balls to give the hosts a rapid beginning in conditions in which scoring generally required careful application. The Kiwis are one up in the series after rain helped them win the first game by 61 runs. But Williamson and Henry Nicholls steadied the ship with a 90-run stand for the fifth wicket before Nicholls was dismissed on 50 by Hasan Ali with just two overs remaining. Pakistan will have to adjust to New Zealand conditions, with overnight rain forecast and strong southerly winds during the day, which could pose problems for top-order batsmen Fakhar Zaman, Azhar Ali and Babar Azam.
Fakhar Zaman (82*) played a nearly unassisted knock, with partners unable to stay with him in conditions that were testing for players coming off a run of domestic cricket on flat wickets.
Q: How do I figure out what object my session is trying to serialize? I recently upgraded to Spring Security 4.2.3.RELEASE. I'm also using spymemcached v 2.8.4. I'm running into this situation where for some reason Spring is trying to serialize service implementation classes. I can't figure out where this is coming from. The line of my code that the exception refers to is Set<Session> userSessions = (Set<Session>) memcachedClient.get(userId); ... memcachedClient.set(userId, sessionTimeoutInSeconds.intValue(), userSessions); // dies here with the mysterious error (the "java.io.NotSerializableException: org.mainco.subco.ecom.service.ContractServiceImpl" is buried within) ... 09:06:47,771 ERROR [io.undertow.request] (default task-58) UT005023: Exception handling request to /myproject/registration/save: java.lang.IllegalArgumentException: Non-serializable object at net.spy.memcached.transcoders.BaseSerializingTranscoder.serialize(BaseSerializingTranscoder.java:110) at net.spy.memcached.transcoders.SerializingTranscoder.encode(SerializingTranscoder.java:162) at net.spy.memcached.MemcachedClient.asyncStore(MemcachedClient.java:282) at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:733) at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:126) at org.mainco.subco.session.service.MemcachedSessionService.associateUser(MemcachedSessionService.java:365) at org.mainco.subco.session.service.MemcachedSessionService.setSessionSecurityContext(MemcachedSessionService.java:288) at org.mainco.subco.core.security.SubcoSecurityContextRepository.saveContext(subcoSecurityContextRepository.java:116) at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:114) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214) at 
org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85) at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62) at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131) at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64) at 
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72) at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284) at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263) at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81) at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174) at io.undertow.server.Connectors.executeRootHandler(Connectors.java:198) at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:784) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.NotSerializableException: org.mainco.subco.ecom.service.ContractServiceImpl at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509) at 
java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
How do I figure out which object memcached is trying to serialize, and thus how it ends up referencing a service class? The point of this question is not how to make my service class serializable, but why my session is trying to serialize it in the first place.
A: Spring doesn't try to serialize the implementation class on its own. When you do

memcachedClient.set(userId, sessionTimeoutInSeconds.intValue(), userSessions);

the client expects a Serializable value, and userSessions isn't guaranteed to be one. The Set interface doesn't extend Serializable, and neither does the static type you get back from (Set) memcachedClient.get(userId); what matters is the concrete class of the set (HashSet and TreeSet are serializable) and of every object it references. Check whether memcachedClient.get(userId) can be cast to a serializable implementation such as HashSet or TreeSet. At worst you can try the cast, though you might get a ClassCastException:

memcachedClient.set(userId, sessionTimeoutInSeconds.intValue(), (Serializable) userSessions);

You can also iterate the set and run a simple check on each element:

int count = 0;
for (Session session : userSessions) {
    boolean isSerializable = checkIfSerializable(session);
    count = isSerializable ? count + 1 : count;
}

private static boolean checkIfSerializable(Object value) {
    try {
        ByteArrayOutputStream bf = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bf);
        oos.writeObject(value);
        oos.close();
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bf.toByteArray()));
        Object o = ois.readObject();
        return true;
    } catch (Exception e) {
        System.out.println("----->>>> Not exactly Serializable : " + value);
    }
    return false;
}

A: So what about org.mainco.subco.ecom.service.ContractServiceImpl? Is it a Spring bean? My guess is it has scope="session" and is therefore bound to your session as one of its attributes. If you want to serialize your session, all of its attributes have to be serializable.
A: Why not enumerate the sessions and their attributes and Java-serialize them to a dummy output stream until one fails? When one fails, print the attribute name and the value's class. I suppose Session is org.apache.catalina.Session. In your code add:

for (Session x : userSessions)
    checkSerializable(x.getSession());

with checkSerializable:

private static void checkSerializable(HttpSession s) {
    Enumeration e = s.getAttributeNames();
    while (e.hasMoreElements()) {
        String name = (String) e.nextElement();
        Object value = s.getAttribute(name);
        try {
            ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream());
            out.writeObject(value);
        } catch (NotSerializableException ex) {
            ex.printStackTrace();
            System.out.println(name + " is not serializable, class is: " + value.getClass().getName());
        } catch (IOException e2) {
            throw new RuntimeException("Unexpected", e2);
        }
    }
}
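For illustration, here is a minimal, self-contained sketch of the pattern the answers describe. All class names here are hypothetical stand-ins, not from the original project: a serializable session attribute that holds a reference to a non-serializable service will sink the whole write with NotSerializableException, and marking that reference transient is one common way out. The round-trip test is the same idea as the checkIfSerializable() helper above.

```java
import java.io.*;

// Hypothetical stand-in for a non-serializable service implementation.
class ContractService { }

// Hypothetical session attribute: serializable itself, but it references the service.
class CartBean implements Serializable {
    // transient keeps the service reference out of serialization, so the
    // session attribute can be written without NotSerializableException.
    // (After deserialization the field is null and must be re-injected.)
    transient ContractService service = new ContractService();
    String userId = "u1";
}

public class SerializationCheck {
    // Returns true if the object survives a Java serialization write.
    public static boolean isSerializable(Object value) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(value);
            }
            return true;
        } catch (IOException e) {
            // NotSerializableException is an IOException subclass.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isSerializable(new ContractService())); // false
        System.out.println(isSerializable(new CartBean()));        // true
    }
}
```

If a transient field is not an option, the usual alternatives are making the referenced class serializable or looking the service up again after deserialization instead of storing it in the session.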
Q: How to script bulk nslookups
I have a list of several million domain names and I want to see if they are available or not. I tried pywhois first but am getting rate limited. As I don't need an authoritative answer, I thought I would just use nslookup. I am having trouble scripting this, though. Basically, what I want to do is: if the domain is registered, echo it. What I'm getting is grep: find”: No such file or directory. I think it's something easy and I've just been looking at this for too long...

#!/bin/bash
START_TIME=$SECONDS
for DOMAIN in `cat ./domains.txt`; do
  if ! nslookup $DOMAIN | grep -v “can’t find”; then
    echo $DOMAIN
  fi
done
echo ELAPSED_TIME=$(($SECONDS - $START_TIME))

A: First, the immediate error: the grep pattern uses curly “smart quotes” rather than plain ASCII quotes, so the shell doesn't treat them as quoting and grep sees find” as a filename. Use straight quotes. Beyond that, if you have millions to check, you may like to use GNU Parallel to get the job done faster, like this if you want to do, say, 32 lookups in parallel:

parallel -j 32 nslookup < domains.txt | grep "^Name"

If you want to fiddle with the output of nslookup, the easiest way is probably to declare a little function called lkup(), tell GNU Parallel about it and then use that, like this:

#!/bin/bash
lkup() {
  # Echo the domain only when nslookup's output contains no "can't find" line,
  # i.e. the name resolved. grep -q suppresses the output; we only want its status.
  if ! nslookup "$1" | grep -q "can't find"; then
    echo "$1"
  fi
}

# Make lkup() function visible to GNU Parallel
export -f lkup

# Check the domains in parallel
parallel -j 32 lkup < domains.txt

If the order of the lookups is important to you, you can add the -k flag to parallel to keep the order.
Snoops is top drawer, good lad, helped me a few times, I think he's just really fucking busy and it's no coincidence, the less important United becomes in your life because it's not giving you any happiness, the more likely you are to come on here, do you think if we were playing lovely football and... United "authentic" home jersey 18/19 :- Adult, £106 United "cheaper home replica jersey" Adult £63 FC United Authentic Adult £36 It's actually quite retro, I might get one just to start arguments with anyone who's pro Jose over here and because it doesn't make you look like a fan boy full of 101 col... Fuck off Dozer, Martial WAS the first Mbappe and he came and scored 17 goals as a kid ffs in a team with £500 mill less spent on it! Under a supposedly shite manager! And last season despite your loathing of him, he still, for all the time he got, had Best goals to minutes ratio, assists to minutes ... Willian will be 30? The Young players are gonna start going fuck this and go be brilliant elsewhere, if we are not careful, we going to end up with Jose fucking up, youngsters have fucked off, and we've a team of 30 odd year olds We don't need to sign anyone, you can change the formation, the players, even some tactics but you're NEVER NEVER EVER going to change safety first, players no confidence, sideways, hoof, nick a goal, win a league years ago (and don't bring up his Chelsea second time round, he took over a squad whic... He's a bitter angry, youth ruining, fan base destroying cunt, he can fuck off. Just fuck off now. That squad is more than capable of playing lovely football. A fit, invigorated Sanchez, Perriera finally getting a go in the Prem, Pogba if you take the breaks off, best gk in the world, young players w...
Filed 9/29/15 P. v. Bond CA1/2
NOT TO BE PUBLISHED IN OFFICIAL REPORTS
California Rules of Court, rule 8.1115(a), prohibits courts and parties from citing or relying on opinions not certified for publication or ordered published, except as specified by rule 8.1115(b). This opinion has not been certified for publication or ordered published for purposes of rule 8.1115.
IN THE COURT OF APPEAL OF THE STATE OF CALIFORNIA
FIRST APPELLATE DISTRICT
DIVISION TWO
THE PEOPLE, Plaintiff and Respondent, v. WILLIAM ANTHONY BOND, Defendant and Appellant.
A143773 (Lake County Super. Ct. No. CR935117)
Appellant William Anthony Bond was, on June 19, 2014, charged by information with transportation of methamphetamine, possession of methamphetamine for sale, and possession of methamphetamine. (Health & Saf. Code, §§ 11379, subd. (a), 11378, 11377, subd. (a).)1 The information also alleged appellant had served two prior prison terms. On July 8, 2014, appellant entered a plea of not guilty. On September 24, 2014, after appellant’s motion to suppress all evidence obtained during a traffic stop had been denied, appellant pled no contest to transporting methamphetamine in violation of section 11379, subdivision (a), and admitted one prior prison term within the meaning of Penal Code section 667.5, subdivision (b). The remaining counts and enhancements were dismissed. On December 6, 2014, the court sentenced appellant to the upper term of four years, plus a one-year enhancement for the prior prison term, for a total sentence of five years. The clerk’s minute order dated December 9, 2014, states that appellant was directed to comply with the narcotic offender registration requirement authorized by section 11590, although no such requirement was orally imposed at the sentencing hearing.
1 Unless otherwise indicated, all statutory references are to the Health and Safety Code.
The minute order also erroneously stated that appellant pled guilty to violation of subdivision (a) of section 11397, rather than section 11379, which describes the offense to which he actually pled guilty. Additionally, the clerk’s minutes attributed the total 516 days of presentence credits awarded appellant only to custody and conduct credits, erroneously failing to attribute 129 days of the total 516 days to work credit. Timely notice of this appeal was filed on December 12, 2014. Appellant, who does not challenge the validity of his plea, raises two related issues pertaining only to the sentence imposed on the basis of the plea: namely, that (1) the narcotic offender registration requirement must be stricken because it is not applicable to violation of section 11379, subdivision (a), even apart from the fact the trial court did not orally impose that requirement at sentencing, and (2) the court’s minute order must be corrected to indicate appellant was convicted of violation of section 11379, subdivision (a), and accurately set forth the calculation of presentence credits. The Attorney General agrees with both contentions, as we do, and we shall therefore affirm the judgment with the necessary modifications. Because the issues presented are unrelated to the facts of the case—which were elicited only in connection with the motion to suppress, because there was no preliminary hearing or trial—it is unnecessary to describe them at length. It suffices to note that when the court asked for a factual basis for the plea, the prosecutor stated, and appellant agreed, to the following representation: “On March 27, 2014, law enforcement deputy sheriff Aaron Clark conducted a traffic stop of . . . the vehicle the defendant was driving, on Highway 29. That was in Lake County. The defendant had approximately 20 grams of methamphetamine on his person. 
The methamphetamine being transported was for purposes of sale and was in a usable amount of methamphetamine.”
DISCUSSION
The first paragraph of subdivision (a) of section 11590 requires persons convicted of specified violations of the Health and Safety Code to “register with the chief of police of the city in which he or she resides or the sheriff of the county in which he or she resides in an unincorporated area.” The second paragraph provides that “For persons convicted of an offense defined in Section 11379 or 11379.5, this subdivision shall not apply if the conviction was for transporting, offering to transport, or attempting to transport a controlled substance.” The odd factor in this case is that, as the parties agree and the record demonstrates, appellant was convicted of “transportation, offering to transport, or attempting to transport a controlled substance” in violation of section 11379, subdivision (a), which statute defines “transport” as “transport for sale” (id. subd. (c), italics added), though the charge of possession of a controlled substance for sale, in violation of section 11378, was dismissed as part of the plea bargain. Moreover, as earlier noted, in response to the trial court’s request for a representation as to the factual basis for appellant’s plea, the district attorney represented, among other things, that “[t]he methamphetamine being transported was for purposes of sale . . . .” The anomaly is that appellant was convicted of the offense of transporting methamphetamine for sale, for which registration is not required, but the charge of possessing methamphetamine for sale, for which registration would be required, was dismissed. Appellant maintains that “ ‘[b]ecause registration is an onerous burden that may result in a separate misdemeanor offense for noncompliance, a registration requirement may not be imposed upon persons not specifically described in the [registration] statute.’ (People v.
Martinez (2004) 116 Cal.App.4th 753, 760.)” Thus where, like here, the trial court imposes a narcotics registration for an offense specifically exempted from registration, it has imposed an unauthorized sentence that must be stricken on appeal. (See People v. Brun (1989) 212 Cal.App.3d 951, 954 [“Even though defendant accepted the [registration as a] condition of probation, he [could] challenge it in [the Court of Appeal] on the ground imposition exceed[ed] the statutory authority of the trial court”].) The Attorney General agrees the registration requirement was statutorily unauthorized. As she states, because appellant was convicted of transporting methamphetamine, “[t]he narcotics offender registration requirement of . . . section 11590 does not, by its terms, apply to him. The narcotics offender registration requirement should, therefore, be stricken from the clerk’s minute order.” (See In re Luisa Z. (2000) 78 Cal.App.4th 978; People v. Brun, supra, 212 Cal.App.3d at pp. 953-955.) The parties are correct that imposition on appellant of the registration requirement specified by section 11590 is unauthorized by that statute. We also agree with the parties that where, as here, the record of oral pronouncement of a sentence conflicts with the clerk’s minutes, the oral pronouncement controls. (People v. Farrell (2002) 28 Cal.4th 381, 384, fn. 2.) Any such conflict is presumed to be a clerical error in the clerk’s transcript, and a court possesses the authority to correct clerical errors at any time. (Ibid.; People v. Mitchell (2001) 26 Cal.4th 181, 185-187.) Because the registration requirement of section 11590 does not apply to appellant and was never orally imposed on him by the court, the statement in the December 9, 2014 clerk’s minutes that it was imposed constitutes clerical error. The minutes must be corrected by striking the statement that appellant was ordered to comply with the narcotic offender registration requirement pursuant to section 11590. 
Appellant contends the clerk’s minutes also contain two additional clerical errors that must also be corrected. First, appellant pled guilty to violation of section 11379, subdivision (a), not section 11397, subdivision (a), as stated in the clerk’s minutes. Second, the court awarded appellant 258 days of credit for time served pursuant to Penal Code section 2900.5, as well as 129 days of work credits pursuant to Penal Code section 4019, subdivision (c), and 129 days of conduct credits pursuant to Penal Code section 4019, subdivision (c), for a total of 516 days. While the minute order correctly lists the total number of credits as 516 days, it indicates that they consist only of credits for time served and conduct credits, erroneously omitting the fact that the total also includes 129 days of work credits. The Attorney General agrees with appellant, as we do, that the foregoing errors are also clerical, and the minute order should also be corrected by stating (1) that appellant was convicted of violation of Health and Safety Code section 11379, subdivision (a), and (2) that appellant was awarded 258 days of credit for time served pursuant to Penal Code section 2900.5, 129 days of work credit pursuant to Penal Code section 4019, subdivision (c), and an additional 129 days of conduct credit also pursuant to Penal Code section 4019, subdivision (c).

DISPOSITION

The trial court is ordered to amend the clerk’s minutes of its December 9, 2014 sentencing hearing so that it conforms to the directives of this opinion. In addition, the court is ordered to prepare an amended abstract of judgment stating that appellant is entitled to 258 days credit for time served pursuant to Penal Code section 2900.5, plus 129 days of work credit pursuant to Penal Code section 4019, subdivision (c), plus 129 days of conduct credit also pursuant to Penal Code section 4019, subdivision (c). The amended abstract of judgment shall be sent to the Department of Corrections and Rehabilitation. 
In all other respects the judgment is affirmed.

_________________________
Kline, P.J.

We concur:

_________________________
Richman, J.

_________________________
Miller, J.
Q: Setting to only show applications of current workspace in launcher?

Is it possible to have the opened applications of the current workspace in the launcher, but not the ones from other workspaces?

A: For the Ubuntu Dock shipped with Ubuntu 17.10 and later (with GNOME)

The other answers are pretty old, so I think it is worth adding an up-to-date answer. This is possible now, and honestly not too hard (with Ubuntu 17.10 and its GNOME shell). Just use dconf-editor:

    sudo apt install dconf-editor

Navigate to org > gnome > shell > extensions > dash-to-dock and check isolate-workspaces.

A: How to make applications untraceable on (other) workspaces

Using xdotool's windowunmap, it is possible to hide a window completely. Neither the window nor its application appears any more in the launcher, and the window is not even listed any more in the output of wmctrl. Theoretically, this could be connected to the "workspace-engine" that was used in this and this answer. That would have been the most elegant solution. However, the process of hiding only the windows on other workspaces and automatically raising the ones on the current workspace is too demanding to run as an ongoing background script (for now), and not unlikely "to catch a cold" as well. Since windows are lost for good in case of errors, I therefore decided not to offer the procedure as an automatic (background) process. Whether this answer is useful for you depends on the situation, and on the reason why you'd like to hide icons of applications running on other workspaces; the decision is yours.

The solution; what it is and how it works in practice

A script, available under a shortcut key, seemingly making all windows (and thus applications) on the current workspace disappear completely. 
That means the application's icon in the Unity launcher shows no activity of the application:

Three running applications:

After pressing the shortcut key:

Pressing the shortcut key combination again, the windows and their applications will re-appear. Since the key combination will only hide the windows and applications from the current workspace, you can subsequently switch to another workspace without a sign of what is (hidden) on the current workspace. Also unhiding is done (only) on the current workspace, so in short, the process of hiding and unhiding is done completely independently per workspace.

The script

    #!/usr/bin/env python3
    import subprocess
    import os

    datadir = os.environ["HOME"]+"/.config/maptoggle"
    if not os.path.exists(datadir):
        os.makedirs(datadir)
    workspace_data = datadir+"/wspacedata_"

    def get_wlist(res):
        # list normal windows whose position lies inside the current viewport
        try:
            wlist = [l.split() for l in subprocess.check_output(
                ["wmctrl", "-lG"]).decode("utf-8").splitlines()]
            return [w for w in wlist if all([
                0 < int(w[2]) < res[0],
                0 < int(w[3]) < res[1],
                "_NET_WM_WINDOW_TYPE_NORMAL" in subprocess.check_output(
                    ["xprop", "-id", w[0]]).decode("utf-8"),
            ])]
        except subprocess.CalledProcessError:
            pass

    def get_res():
        # get resolution
        xr = subprocess.check_output(["xrandr"]).decode("utf-8").split()
        pos = xr.index("current")
        return [int(xr[pos+1]), int(xr[pos+3].replace(",", ""))]

    def current(res):
        # get the current viewport
        vp_data = subprocess.check_output(["wmctrl", "-d"]).decode("utf-8").split()
        dt = [int(n) for n in vp_data[3].split("x")]
        cols = int(dt[0]/res[0])
        curr_vpdata = [int(n) for n in vp_data[5].split(",")]
        curr_col = int(curr_vpdata[0]/res[0])+1
        curr_row = int(curr_vpdata[1]/res[1])
        return str(curr_col+curr_row*cols)

    res = get_res()
    f = workspace_data+current(res)
    try:
        wlist = eval(open(f).read().strip())
        for w in wlist:
            subprocess.Popen(["xdotool", "windowmap", w[0]])
        os.remove(f)
    except FileNotFoundError:
        current_windows = get_wlist(res)
        open(f, "wt").write(str(current_windows))
        for w in current_windows:
            subprocess.Popen(["xdotool", "windowunmap", w[0]])

How to use

The script needs both wmctrl and xdotool:

    sudo apt-get install wmctrl xdotool

Copy the script into an empty file and save it as toggle_visibility.py

Test-run the script: in a terminal window, run the command:

    python3 /path/to/toggle_visibility.py

Now open a new terminal window (since the first one seemingly disappeared from the face of the earth) and run the same command again. All windows should re-appear.

NB: make sure you do not have "valuable" windows open while testing.

If all works fine, add the command to a shortcut key combination: choose System Settings > "Keyboard" > "Shortcuts" > "Custom Shortcuts". Click the "+" and add the command:

    python3 /path/to/toggle_visibility.py

Explanation

As said, the script uses xdotool's windowunmap to (completely) hide windows and the applications they belong to. The script:

- reads what the current workspace is
- reads the windows which exist on the current workspace (only)
- writes the window list to a file, named after the current workspace
- hides the windows

On the next run, the script:

- checks if the file corresponding to the current workspace exists
- if so, reads the window list and un-hides the windows

thus toggling visibility of windows and applications on the current workspace.

A: Unfortunately it's impossible. Unity always shows all applications from everywhere and there is no way to change this. There is a bug report - https://bugs.launchpad.net/ayatana-design/+bug/683170 But it seems the developers aren't going to do anything. Probably if you mark at the top of the page that this bug affects you, it will help the developers understand the importance of such an option.
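As a footnote to the xdotool-based answer above: the whole trick rests on just two xdotool subcommands, windowunmap to hide a window and windowmap to bring it back. The sketch below only builds the command lines (the helper names are mine, not part of the original script); actually running them requires a live X session and a window id, e.g. from wmctrl -l.

```python
import subprocess

def xdotool_cmd(window_id, hide):
    """Command line that unmaps (hides) or maps (restores) one window."""
    return ["xdotool", "windowunmap" if hide else "windowmap", window_id]

def toggle(window_id, hide):
    """Fire the command; only meaningful on a running X session."""
    subprocess.Popen(xdotool_cmd(window_id, hide))

if __name__ == "__main__":
    # Example (requires X and a real window id from `wmctrl -l`):
    # toggle("0x04a00007", hide=True)
    pass
```

Because an unmapped window is invisible to wmctrl, the id must be saved before hiding, exactly as the full script above does with its per-workspace file.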
1. Field of the Invention The present invention is related to computer systems in which processor clock frequencies are adaptively adjusted in response to dynamic measurements of operating conditions, and in particular to a computer system in which power supply voltage domains are adjusted to cause an adaptive change in performance of the processors in the corresponding voltage domains. 2. Description of Related Art In recent computer systems, processor cores provide adaptive adjustment of their performance, e.g., by adjusting processor clock frequency, so that higher operating frequencies can be achieved, under most operating conditions and with most production processors, than could be otherwise specified. A specified maximum operating frequency for a given power supply voltage, and similarly a specified minimum power supply voltage for a given operating frequency, are necessarily conservative due to variable operating ranges of temperature and voltage and also ranges of manufacturing process variation for the particular device, i.e., the processor integrated circuit (IC). Workload differences also contribute to the need to provide operating margins for fail-safe operation, as the local voltage and temperatures at particular processor cores and particular locations within each processor core can vary depending on the particular program code being executed, and particular data or other input being processed. However, with an adaptive adjustment scheme, the effects of process, temperature and voltage can be taken into account, permitting much less conservative operation than would be possible in a fixed clocking scheme. One technique for adaptive adjustment of processor core clock frequency uses periodic measurements of propagation delay of one or more circuits that synthesize a critical signal path in the processor core. 
The critical path is a signal path that is determinative of the maximum operating frequency of the processor core under the instant operating conditions, i.e., the critical path is the signal path that will cause operating failure should the processor clock frequency be increased beyond an absolute maximum frequency for the instant operating conditions. The critical path may change under differing operating conditions, e.g., with temperature changes or with power supply voltage changes or with workload changes. Therefore, the critical path monitoring circuits (CPMs) as described above generally include some flexibility in the simulation/synthesis of the critical path delay, as well as computational ability to combine the results of simpler delay components to yield a result for a more complex, and typically longer, critical path. Other techniques include using ring oscillators to determine the effects of environmental factors and process on circuit delay. Once the critical path delay is known for the present temperature and power supply voltage, the processor clock frequency can be increased to take advantage of any available headroom. In one implementation, multiple CPMs distributed around the processor IC die provide information to a clock generator within the processor IC that uses a digital phase-lock loop (DPLL) to generate the processor clock. The combined information allows the clock generator to adaptively adjust the processor clock to the instant operating conditions of the processor IC, an adjustment that also reflects the processor IC's own characteristics due to process variation. 
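The headroom computation described above can be sketched in a few lines. This is a hypothetical illustration only: the function names, the 5% guard-band value, and the min-of-CPMs combination rule are assumptions made for the sketch, not details of any actual CPM or DPLL implementation.

```python
def max_safe_frequency(critical_path_delay_ns, guard_band=0.05):
    """Highest clock frequency (GHz) whose period still covers the
    measured critical-path delay plus a safety guard band (assumed 5%)."""
    if critical_path_delay_ns <= 0:
        raise ValueError("delay must be positive")
    min_period_ns = critical_path_delay_ns * (1.0 + guard_band)
    return 1.0 / min_period_ns  # period in ns -> frequency in GHz

def combine_cpm_results(delays_ns):
    """Several CPMs are distributed over the die; the slowest synthesized
    path (longest delay) limits the whole chip, so the chip-wide limit is
    the minimum of the per-CPM frequency limits."""
    return min(max_safe_frequency(d) for d in delays_ns)
```

With a 0.5 ns critical path and the 5% guard band, the sketch allows roughly 1.9 GHz; adding a second, faster CPM reading does not raise that limit, since the slowest path governs.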
Other techniques that may be used for processor frequency adjustment under dynamic operating conditions may use extrinsic environmental information to set the processor clock frequency, e.g., the temperature and power supply voltage within or without the processor IC die, to estimate the maximum processor frequency, rather than the more direct approach of measuring delay of a synthesized critical path. While the extrinsic measurements do not typically account for process variation, a significant performance advantage can still be realized by compensating for temperature and voltage variation, especially for processor ICs in which manufacturing process variation has a relatively minor impact on clock frequency. Further, other throttling mechanisms, such as adjusting the instruction dispatch, fetch or decode rates of the processor cores can be used to adjust the effective processor clock frequency, and thereby adapt the operating performance/power level of a processor in conformity with environmental measurements. Once a system is implemented using adaptively-clocked processors, such as those described above, the individual frequencies of the processor cores will necessarily vary within the system and will be distributed according to their local power supply voltage, temperatures, process characteristics of the individual processors, and workloads being executed, to achieve the maximum performance available while maintaining some safety margin. Such operation is not necessarily desirable. For example, in distributed computing applications that serve multiple computing resource customers, such as virtual machines hosting web servers or other cloud computing applications, the frequency of the processor clock or other measure of performance of one or more cores assigned to particular virtual machines may be specified as an absolute minimum, and falling below the specified performance level cannot be permitted. 
Exceeding the specified performance by too great a margin is also undesirable, as such operation typically wastes power. Further, in some applications, accounting of processor usage may be tied to the processor clock frequency or other performance level metric, which could cause a higher charge for a processor operating at a frequency exceeding a specified operating frequency for a customer's requirements. Therefore, it would be desirable to provide a control method and system that controls processor performance in a system that has one or more processors individually clocked by an environmentally-adaptive clocking scheme.
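The control objective described above (keep a core at or above a contracted minimum performance level without wastefully overshooting it) can be illustrated with a toy controller acting on the core's power supply voltage domain. Everything here is an assumption made for the sketch: the function name, the slack band, and the fixed 10 mV step are illustrative, not values from the disclosed system.

```python
def next_voltage_step(measured_ghz, min_ghz, slack=0.05, step_mv=10):
    """One iteration of a toy band controller for a voltage domain.

    Raising the domain voltage lets an adaptively-clocked core run faster;
    lowering it saves power at the cost of clock frequency.
    """
    if measured_ghz < min_ghz:
        return +step_mv   # below the contracted minimum: raise voltage
    if measured_ghz > min_ghz * (1.0 + slack):
        return -step_mv   # overshooting wastes power: lower voltage
    return 0              # within the allowed band: hold steady
```

The slack band provides simple hysteresis, so the controller does not oscillate between raising and lowering the voltage on every iteration.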
National Immigration Law Center executive director Marielena Hincapie discussed illegal immigration Saturday on MSNBC’s “AM Joy.” Hincapie said Attorney General Jeff Sessions, who earlier this week vowed the Trump administration would be tougher on United States-Mexican border security than previous administrations, has a “very clear white supremacist agenda.” “We are deeply troubled, Joy,” Hincapie stated. “This is a situation in this country where a man who has very clear white supremacist agenda, a nativist agenda is now in authority. He has the power now to use Department of Justice to prosecute and criminalize immigrants. The fact that we are seeing that federal prosecutors one of their top priorities is to detain and deport immigrants is really critical.” Follow Trent Baker on Twitter @MagnifiTrent
(function() {var implementors = {}; implementors["edn"] = [{text:"impl <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/error/trait.Error.html\" title=\"trait std::error::Error\">Error</a> for <a class=\"struct\" href=\"edn/parse/struct.ParseError.html\" title=\"struct edn::parse::ParseError\">ParseError</a>",synthetic:false,types:["edn::parse::ParseError"]},]; implementors["mentat_parser_utils"] = [{text:"impl <a class=\"trait\" href=\"https://doc.rust-lang.org/nightly/std/error/trait.Error.html\" title=\"trait std::error::Error\">Error</a> for <a class=\"struct\" href=\"mentat_parser_utils/struct.ValueParseError.html\" title=\"struct mentat_parser_utils::ValueParseError\">ValueParseError</a>",synthetic:false,types:["mentat_parser_utils::ValueParseError"]},]; if (window.register_implementors) { window.register_implementors(implementors); } else { window.pending_implementors = implementors; } })()
getting back to the article, the author is just presenting an argument and how he has to use a special process in his photography. In school, we were taught how to work around film's limitations with all sorts of skin-tones. It's nothing new and those with experience know what to do. This thread is pointless. 05-02-2014, 04:02 AM BMbikerider Basically it is a lack of fundamental understanding of the photographic process and what it can and cannot do. Exacerbated to the 'Nth' degree, when the photographer and the subsequent processing are crap or even damn crap! It is simply a case of a bad workman blaming his tools and then trying to make out it is someone else's fault - or - more simply, pass me that compensation claim! 05-02-2014, 04:46 AM lxdude Quote: Originally Posted by markaudacity If you think the agenda "make people aware of discrimination" is unacceptable, you are part of the problem. It helps if the examples to back up the agenda are rooted in truth and rationality. The writer did not make people aware of discrimination by using fallacious examples to bolster a fatuous assertion. The writer is so caught up in proving her assertion that her examples become almost laughable to informed persons. She mentions the Polaroid ID2 camera, citing its use as a "tool for racial segregation and enforcement during the apartheid era." She then says it has a flash boost button to put 42% more light on subjects, and says it would "result in a deliberate darkening of dark-skinned subjects." So, throwing more light makes a subject darker? Huh? Does this person really have a clue about what she's talking about? I mean, she's making some very strong assertions, yet seems to not understand something very basic. I certainly would not see Olan Mills as an example of the best in portrait photography, yet she uses their work as an example of deep racism along with examples from minilabs, fer gawd's sakes. Hell, I never thought Olan Mills made anybody really look good. 
For most of negative color film's existence, a big problem for ordinary users was getting even decent color from cheap processors. Instead of just looking at those, maybe she could have learned something by looking at the pictures made on transparency film by Steve McCurry or Alex Webb, to name a couple of white guys who know how to photograph dark skin. Maybe then she could see that it was not the film that was the problem. She mentions the Shirley Card. Apparently she has never heard of a Macbeth Color Checker, or the extent to which real professionals doing product shoots, catalog shoots, etc., would go to ensure accurate color reproduction, because it was necessary in their work. The film they used did not somehow refuse to render brown tones as well as it did others. To me, her article is little more than a rant, an immature one at that. It does not hold up under serious scrutiny, though it will play well with others who know no better. She came up with a conclusion, and then attempts to use anything she can to support the conclusion. This sort of thing diminishes how terrible things were for non-whites for so long in this country. The racism was not subtle; it took no stretch such as exists in the article to know it was there. And people still living today suffered from it, relegated to second class or third or fourth class status. Many families have stories of family members who were attacked or killed for their skin color. I just hate to see that real suffering diminished by people seeing racism in everything, as if there could be no other explanation. The thing is, there is real racism around still; why do people go around digging it up where it doesn't exist? 05-02-2014, 05:14 AM lxdude Quote: Originally Posted by CatLABS It's even worse if it's dismissed as some technical failing that is natural to the system. 
Read: face recognition systems are designed to only recognize Caucasian faces, it's not a far stretch to see how this applies to every other piece of technology. Gee, what to make of the fact that the camera that thought an Asian woman blinked when she didn't was designed and made by a Japanese company? Damn that racist Nikon! I have a Fuji digital point and shoot that renders colors well, except for red. Deep reds come out a bright red, and bright reds look fluorescent. Did Fuji have a reason for doing that, or is it just something they needed to improve? 05-02-2014, 05:50 AM Regular Rod Rather than blame the tools for inadequate results it would be better to blame the lazy workmen (and women) that misused them! It was perfectly possible to expose colour film to render beautiful results showing a black person's skin in the same frame as a perfectly rendered version of a white person's skin well before the dates she claimed for the "brown furniture" changes. Just because so many (including herself by her own account) failed to achieve similar results is not the fault of the film stock or the lighting equipment or the cameras. It was their fault that they didn't use these correctly. A lack of skill is the culprit, not a lack of consideration. RR 05-02-2014, 06:54 AM RalphLambrecht what do you call Zone V skin? 05-02-2014, 07:15 AM RalphLambrecht That's why I shoot B&W. I like the tones better. 05-02-2014, 07:53 AM ParkerSmithPhoto Quote: Originally Posted by jovo @ Parker Smith....Just looked at your website...gorgeous work! The encounter with Emmit Gowin on your blog is a terrific story. @Jovo Thanks for checking out the site and for your kind words. I really appreciate it. 05-02-2014, 07:58 AM ParkerSmithPhoto Quote: Originally Posted by Tom1956 Another edit: the trick to shooting dark-skinned black people is to blast their faces out with full frontal lighting. The work turns out perfect. 
When I photograph black families in my studio I always use an Octalight or a medium soft box in the butterfly lighting setup, plus a couple of gentle edge lights. You really do need to get light coming from right above the lens axis. Then again, I light a lot of white people this way, cause it just looks terrific! 05-02-2014, 08:11 AM AgX Quote: Originally Posted by Tom1956 The trick to shooting dark-skinned black people is to blast their faces out with full frontal lighting. The work turns out perfect. But pure frontal lighting is not the typical portrait lighting. So from an allegedly biased film one comes to biased lighting.
Nowadays, people are interested in “being green”—doing things in the most eco-friendly, sustainable manner. This new awareness has come from pollution that has happened over the past 150 years from industrialization. Species have died off from dumps and poisonings. Animals have gotten trapped and killed in non-biodegradable items left over from human beings. Resources have become poisonous for people to consume. On top of it, there is a lot of

I am not sure whether they’re doing this to try and attract younger people in, or because humanity as a whole is accepting different ways of living and doing things, which means changing a religion’s tradition to something more temporary. But churches in my area are becoming more mainstream and hipper. Whether you believe in a God, no God, or a lot of gods, one thing is true, there was
The Pentagon can't confirm what happened to $45 billion spent in Afghanistan before 2010

U.S. Army cavalrymen from 1st Platoon, Bulldog Troop, 1st Squadron, 91st Cavalry Regiment, walk by some of the qalat buildings of Charkh District while on their way into the village of Paspajak, Logar province, Afghanistan, June 20. (Flickr/US Army)

Hundreds of millions of dollars are missing in action in Afghanistan, and auditors are blaming the Pentagon's flawed accounting practices for the problem. A new report from the office of John Sopko, the Special Inspector General for Afghanistan Reconstruction (SIGAR), revealed that there's virtually no way to know what happened to a large chunk of money the Defense Department spent in Afghanistan before 2010. The auditors said DOD handed over data only for $21 billion of the total $66 billion it spent rebuilding the war-torn country. But unlike most cases of missing money in Afghanistan (of which there are plenty), the auditors don't blame this on corruption or waste — but rather on accounting issues. The Commander's Emergency Response Program, for example, is set up in such a way that it's extremely difficult to monitor all of the money spent on the program's projects. Under that program, commanders may spend money to respond to emergencies like floods and fires. Any expense below $500,000 isn't treated as a traditional defense contract and doesn't have to be recorded in the same way. The Pentagon only had data for about 57 percent of the total $795 million spent by that program between the years 2002 and 2013. The report blamed the Pentagon's earlier (and since discontinued) process for tracking contracts. Today, when DOD awards a contract, it enters the contract into the Federal Procurement Database along with the specific pool of money that will be used to pay the contract. 
This dry-firing range for the Afghan National Police literally disintegrated, but only after costing US taxpayers nearly a half-million dollars. (SIGAR)

Before 2010, however, the Pentagon wasn't required to identify the pool of money the contracts were being paid from when it came to foreign military equipment and arming the Afghan National Security Forces. No wonder those transactions were and are nearly impossible to track. Out of the total amount DOD has spent in Afghanistan, more than $57 billion has gone to the Afghan Forces - but the Pentagon can only account for about $17 billion. Unlike in most of SIGAR's reports, Sopko did not include any recommendations in this audit. "SIGAR is presenting this data here to inform Congress and the U.S. taxpayer how their reconstruction dollars are being spent in Afghanistan," Sopko said. This is only the latest example of missing taxpayer money in Afghanistan. SIGAR routinely cranks out eyebrow-raising reports flagging serious waste, fraud or abuse that has plagued the 13-year reconstruction effort. Early this year, the watchdogs released a scathing report revealing the Pentagon had no way to verify whether the annual $300 million going to the Afghan Police Force was ending up in the right hands. In another example, SIGAR noted U.S. agencies couldn't identify all the various projects, programs and initiatives supporting Afghan women. They don't know how much they've spent on the individual efforts, which together have cost at least $64 million. Sopko has intensified his scrutiny of the U.S.'s rebuilding mission in Afghanistan—raising questions about whether we're any closer to achieving a stable and sustainable country hundreds of billions of dollars (and tens of thousands of lives) later. Overall, the U.S. has poured more than $104 billion into Afghanistan since 2002.
"use strict";
/*!
 * @author electricessence / https://github.com/electricessence/
 * Licensing: MIT https://github.com/electricessence/TypeScript.NET/blob/master/LICENSE.md
 */
Object.defineProperty(exports, "__esModule", { value: true });
var Types_1 = require("../Types");
var InvalidOperationException_1 = require("../Exceptions/InvalidOperationException");
var EMPTY = '', TRUE = 'true', FALSE = 'false';
function toString(value, defaultForUnknown) {
    var v = value;
    switch (typeof v) {
        case Types_1.Type.STRING:
            return v;
        case Types_1.Type.BOOLEAN:
            return v ? TRUE : FALSE;
        case Types_1.Type.NUMBER:
            return EMPTY + v;
        default:
            if (v == null)
                return v;
            if (isSerializable(v))
                return v.serialize();
            else if (defaultForUnknown)
                return defaultForUnknown;
            var ex = new InvalidOperationException_1.InvalidOperationException('Attempting to serialize unidentifiable type.');
            ex.data['value'] = v;
            throw ex;
    }
}
exports.toString = toString;
function isSerializable(instance) {
    return Types_1.Type.hasMemberOfType(instance, 'serialize', Types_1.Type.FUNCTION);
}
exports.isSerializable = isSerializable;
function toPrimitive(value, caseInsensitive, unknownHandler) {
    if (value) {
        if (caseInsensitive)
            value = value.toLowerCase();
        switch (value) {
            case 'null':
                return null;
            case Types_1.Type.UNDEFINED:
                return void (0);
            case TRUE:
                return true;
            case FALSE:
                return false;
            default:
                var cleaned = value.replace(/^\s+|,|\s+$/g, EMPTY);
                if (cleaned) {
                    if (/^\d+$/g.test(cleaned)) {
                        var int = parseInt(cleaned);
                        if (!isNaN(int))
                            return int;
                    }
                    else {
                        var number = parseFloat(value);
                        if (!isNaN(number))
                            return number;
                    }
                }
                // Handle Dates... Possibly JSON?
                // Instead of throwing we allow for handling...
                if (unknownHandler)
                    value = unknownHandler(value);
                break;
        }
    }
    return value;
}
exports.toPrimitive = toPrimitive;
/*
 * Copyright (c) 2017-2018 THL A29 Limited, a Tencent company. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.tencentcloudapi.tci.v20190318.models;

import com.tencentcloudapi.common.AbstractModel;
import com.google.gson.annotations.SerializedName;
import com.google.gson.annotations.Expose;
import java.util.HashMap;

public class CreateFaceResponse extends AbstractModel{

    /**
    * Face operation result information.
    */
    @SerializedName("FaceInfoSet")
    @Expose
    private FaceInfo [] FaceInfoSet;

    /**
    * Unique request ID, returned with every request. Provide this RequestId when troubleshooting a request.
    */
    @SerializedName("RequestId")
    @Expose
    private String RequestId;

    /**
     * Get face operation result information.
     * @return FaceInfoSet Face operation result information.
     */
    public FaceInfo [] getFaceInfoSet() {
        return this.FaceInfoSet;
    }

    /**
     * Set face operation result information.
     * @param FaceInfoSet Face operation result information.
     */
    public void setFaceInfoSet(FaceInfo [] FaceInfoSet) {
        this.FaceInfoSet = FaceInfoSet;
    }

    /**
     * Get the unique request ID, returned with every request. Provide this RequestId when troubleshooting a request.
     * @return RequestId Unique request ID.
     */
    public String getRequestId() {
        return this.RequestId;
    }

    /**
     * Set the unique request ID.
     * @param RequestId Unique request ID.
     */
    public void setRequestId(String RequestId) {
        this.RequestId = RequestId;
    }

    /**
     * Internal implementation, normal users should not use it.
     */
    public void toMap(HashMap<String, String> map, String prefix) {
        this.setParamArrayObj(map, prefix + "FaceInfoSet.", this.FaceInfoSet);
        this.setParamSimple(map, prefix + "RequestId", this.RequestId);
    }
}
A novel method for isolation of Campylobacter spp. from environmental samples, involving sample processing and a blood- and antibiotic-free medium. The aim was to develop a method involving sample processing and a blood- and antibiotic-free medium for the isolation and enumeration of Campylobacter spp. from environmental samples. The sample processing (preT) was standardized to minimize the population of competing bacteria. A blood- and antibiotic-free differential medium, Kapadnis-Baseri medium (KB medium), was formulated and tested for isolation of Campylobacter spp. in comparison with CAT medium. The preT-KB method was evaluated against the conventional viable count method and the conventional most probable number (C. MPN) method for enumeration of Campylobacter from environmental samples. The results indicated that sample processing significantly reduced the population of competing bacteria. The KB medium selected for Gram-negative bacteria and differentiated Campylobacter from lactose-fermenting competing bacteria. The population of Campylobacter detected by the preT-KB method was similar to that obtained by the conventional viable count method, whereas the population determined by the preT-KB method was higher than that obtained by the C. MPN method. In addition, the preT-KB method detected antibiotic-sensitive campylobacters. The preT minimizes the population of competing bacteria, and the KB medium selects for Gram-negative bacteria and differentiates Campylobacter from them; Campylobacter can therefore be isolated from environmental samples without using antibiotics. The preT-KB method is simple and facilitates both the isolation of antibiotic-sensitive campylobacters and the enumeration of Campylobacter in environmental samples. The new method will therefore be useful for isolation and enumeration of Campylobacter from water, food and sewage samples, and it also detects antibiotic-sensitive campylobacters, which are not detected by conventional viable count and MPN methods.
High-protein and high-carbohydrate breakfasts differentially change the transcriptome of human blood cells. Application of transcriptomics technology in human nutrition intervention studies would allow for genome-wide screening of the effects of specific diets or nutrients and result in biomarker profiles. The aim was to evaluate the potential of gene expression profiling in blood cells collected in a human intervention study that investigated the effect of a high-carbohydrate (HC) or a high-protein (HP) breakfast on satiety. Blood samples were taken from 8 healthy men before and 2 h after consumption of an HP or an HC breakfast. Both breakfasts contained acetaminophen for measuring the gastric emptying rate. Analysis of the transcriptome data focused on the effects of the HP or HC breakfast and of acetaminophen on blood leukocyte gene expression profiles. Breakfast consumption resulted in differentially expressed genes, 317 for the HC breakfast and 919 for the HP breakfast. Immune response and signal transduction, specifically T cell receptor signaling and nuclear transcription factor kappaB signaling, were the overrepresented functional groups in the set of 141 genes that were differentially expressed in response to both breakfasts. Consumption of the HC breakfast resulted in differential expression of glycogen metabolism genes, and consumption of the HP breakfast resulted in differential expression of genes involved in protein biosynthesis. Gene expression changes in blood leukocytes corresponded with and may be related to the difference in macronutrient content of the breakfast, meal consumption as such, and acetaminophen exposure. This study illustrates the potential of gene expression profiling in blood to study the effects of dietary exposure in human intervention studies.
Q: Using preprocess node variables in page.tpl.php

I've set up a couple of variables in my template_preprocess_node(). How can I access them within a page template? I can access the $node variable, but it doesn't seem to have been preprocessed. How can I force preprocessing?

A: You can't, directly: variables added in template_preprocess_node() are passed only to the node template (node.tpl.php), not to the page template. The $node object seen by page.tpl.php is set separately, in template_preprocess_page() (http://api.drupal.org/api/drupal/includes--theme.inc/function/template_preprocess_page/6). To get your variables into page.tpl.php, implement that preprocess hook in your theme (e.g. yourtheme_preprocess_page(&$variables)) and add them there.
Voice onset time

In phonetics, voice onset time (VOT) is a feature of the production of stop consonants. It is defined as the length of time that passes between the release of a stop consonant and the onset of voicing, the vibration of the vocal folds, or, according to other authors, periodicity. Some authors allow negative values to mark voicing that begins during the period of articulatory closure for the consonant and continues in the release, for those unaspirated voiced stops in which there is no voicing present at the instant of articulatory closure.

History

The concept of voice onset time can be traced back as far as the 19th century, when Adjarian (1899: 119) studied the Armenian stops and characterized them by "the relation that exists between two moments: the one when the consonant bursts when the air is released out of the mouth, or explosion, and the one when the larynx starts vibrating". However, the concept became "popular" only in the 1960s, in a context described by Lin & Wang (2011: 514): "At that time, there was an ongoing debate about which phonetic attribute would allow voiced and voiceless stops to be effectively distinguished. For instance, voicing, aspiration, and articulatory force were some of the attributes being studied regularly. In English, 'voicing' can successfully separate voiced from voiceless stops at word-medial positions, but this is not always true for word-initial stops. Strictly speaking, word-initial voiced stops are only partially voiced, and sometimes are even voiceless." The concept of VOT finally acquired its name in the famous study of Lisker & Abramson (1964).

Analytic problems

A number of problems have arisen in defining VOT in some languages, and some researchers have called for reconsidering whether this speech synthesis parameter should be used in place of articulatory or aerodynamic model parameters, which do not have these problems and which have a stronger explanatory significance.
As in the discussion below, any explication of VOT variations will invariably lead back to such aerodynamic and articulatory concepts, and there is no reason presented why VOT adds to an analysis, other than that, as an acoustic parameter, it may sometimes be easier to measure than an aerodynamic parameter (pressure or airflow) or an articulatory parameter (closure interval or the duration, extent and timing of a vocal fold abductory gesture).

Types

Three major phonation types of stops can be analyzed in terms of their voice onset time. Simple unaspirated voiceless stops, sometimes called "tenuis" stops, have a voice onset time at or near zero, meaning that the voicing of a following sonorant (such as a vowel) begins at or near to when the stop is released. (An offset of 15 ms or less for some stops, and 30 ms or less for others, is inaudible and counts as tenuis.) Aspirated stops followed by a sonorant have a voice onset time greater than this amount, called a positive VOT. The length of the VOT in such cases is a practical measure of aspiration: the longer the VOT, the stronger the aspiration. In Navajo, for example, which is strongly aspirated, the aspiration (and therefore the VOT) lasts twice as long as it does in English: 160 ms vs. 80 ms for one aspirated stop, and 45 ms for another. Some languages have weaker aspiration than English. For velar stops, tenuis typically has a VOT of 20–30 ms, weakly aspirated [k] some 50–60 ms, moderately aspirated stops average 80–90 ms, and anything much over 100 ms would be considered strong aspiration. (Another phonation, breathy voice, is commonly called voiced aspiration; in order for the VOT measure to apply to it, VOT needs to be understood as the onset of modal voicing. Of course, an aspirated consonant will not always be followed by a voiced sound, in which case VOT cannot be used to measure it.) Voiced stops have a voice onset time noticeably less than zero, a "negative VOT", meaning the vocal cords start vibrating before the stop is released.
With a "fully voiced stop", the VOT coincides with the onset of the stop; with a "partially voiced stop", such as English in initial position, voicing begins sometime during the closure (occlusion) of the consonant. Because neither aspiration nor voicing is absolute, with intermediate degrees of both, the relative terms fortis and lenis are often used to describe a binary opposition between a series of consonants with higher (more positive) VOT, defined as fortis, and a second series with lower (more negative) VOT, defined as lenis. Of course, being relative, what fortis and lenis mean in one language will not in general correspond to what they mean in another. Voicing contrast applies to all types of consonants, but aspiration is generally only a feature of stops and affricates. Transcription Aspiration may be transcribed , long (strong) aspiration . Voicing is most commonly indicated by the choice of consonant letter. For one way of transcribing pre-voicing and other timing variants, see extensions to the IPA#Diacritics. Other systems include that of Laver (1994), who distinguishes fully devoiced and from initial partial devoicing of the onset of a syllable by and from final partial devoicing of the coda of a syllable by . Examples in languages References Taehong Cho and Peter Ladefoged, "Variations and universals in VOT: Evidence from 18 languages". Journal of Phonetics vol. 27. 207-229. 1999. Angelika Braun, "VOT im 19. Jahrhundert oder "Die Wiederkehr des Gleichen"". Phonetica vol. 40. 323-327. 1983. External links Buy a pie for the spy A description of the mechanism of voiced, tenuis (voiceless unaspirated), and (voiceless) aspirated stops in relation to voice onset time Category:Phonetics Category:Human voice
Q: Static QString initialization fails

Sorry for my creepy English. In Qt 4.8.4 I'm trying to create a singleton class with a static QString field. It cannot be const, because this field will change value at runtime. The field is initialized from a constant QString declared in the same file. Unfortunately this doesn't work: on the initialization line

LogWay = QString("file.txt");

the program terminates unexpectedly with the error

The inferior stopped because it received a signal from the Operating System.
Signal name: SIGSEGV
Signal meaning: Segmentation fault

in the file "qatomic_i386.h". What am I doing wrong? This code sample works with int, bool or double variables, but not with QString. Why? I also tried plain assignment without the QString constructor, LogWay = "...";, but I get the same error. Thanks for helping.

Full code of the class — this is the .h file:

#include <QSettings>

const double _ACCURACY_ = 0.0001;
static const QString _LOG_WAY_ = "GrafPrinterLog.txt";

class GlobalSet
{
private:
    static QSettings *_Settings;
    GlobalSet() {}
    GlobalSet(const GlobalSet&) {}
    GlobalSet &operator=(const GlobalSet&);
    static GlobalSet *GS;
public:
    static double IntToCut;
    static double ReplaceSize;
    static double Accuracy;
    static int MaxPointCount;
    static bool NeedProper;
    static QString LogWay;
    ~GlobalSet();
    static GlobalSet *Instance()
    {
        if (GS == NULL) {
            GS = new GlobalSet();
            GS->firstSetUp();
        }
        return GS;
    }
    void firstSetUp()
    {
        Accuracy = _ACCURACY_;
        LogWay = QString("file.txt"); // fail is here!
        NeedProper = false;
        _Settings = new QSettings("options.ini", QSettings::IniFormat);
    }
};

and this is the .cpp file:

#include "globalset.h"

GlobalSet *GlobalSet::GS = NULL;
QSettings *GlobalSet::_Settings = NULL;
double GlobalSet::Accuracy = _ACCURACY_;
QString GlobalSet::LogWay = _LOG_WAY_;

A: This is most likely the "static initialization order fiasco" (see here): you are initializing one static QString from another static QString. The order in which statics in different translation units are constructed is unspecified, so GlobalSet::LogWay can be assigned to before its QString constructor has run (for example if Instance() is reached during the construction of some other static object), which crashes inside QString's atomic reference counting — hence the fault in qatomic_i386.h. Plain int, bool and double have no constructor, which is why they are unaffected. The usual fix is the construct-on-first-use idiom: wrap each static object in a function that holds it as a function-local static.
---
abstract: 'We study the duration and variability of late time X-ray flares following gamma-ray bursts (GRBs) observed by the narrow field X-ray telescope (XRT) aboard the [*Swift*]{} spacecraft. These flares are thought to be indicative of late time activity by the central engine that powers the GRB and to be produced by means similar to those that produce the prompt emission. We use a non-parametric procedure to study the overall temporal properties of the flares and a structure function analysis to look for an evolution of the fundamental variability timescale between the prompt and late time emission. We find, in 28 individual X-ray flares in 18 separate GRBs, a strong correlation between the flare duration and the time of peak flux since the GRB trigger. We also find a qualitative trend of decreasing variability as a function of time since trigger, with a characteristic minimum variability timescale $\Delta t/t=0.1$ for most flares. The correlation between pulse width and time is consistent with the effects of internal shocks at ever-increasing collision radii but could also arise from delayed activity by the central source. Contemporaneous detections of high energy emission by GLAST could test between these two scenarios, as any late time X-ray emission would undergo inverse Compton scattering as it passes through the external shock. The profile of this high energy component should depend on the distance between the emitting region and the external shock.'
author:
- 'Daniel Kocevski, Nathaniel Butler, Joshua S. Bloom'
title: 'Pulse Width Evolution of Late Time X-ray Flares in GRBs'
---

Introduction {#sec:Introduction}
============

One of the most unanticipated results to come from the [*Swift*]{} spacecraft [@Gehrels04] is the wide variety of X-ray behaviors observed in the early afterglows of gamma-ray bursts (GRBs).
As of January of 2007, [*Swift*]{} had detected 206 GRBs and had observed a subset of $>90$% of those events with the spacecraft’s narrow field X-ray telescope or XRT [@Burrows05a]. Of these events, $>90$% show temporal properties that deviate from the simple post cooling break powerlaw decline that had been seen at late times ($\gtrsim 3 \times 10^{4}$ seconds) by previous spacecraft [e.g., @Frontera00; @Gendre06]. Afterglows with simple powerlaw declines that extend from a few $\sim 10^{2}$ seconds to several days after a burst are seen, for example GRB 061007 [@Mundell06], but they constitute a small minority of the afterglows observed by the XRT. Instead, most afterglows show sharp drops in the observed flux immediately following the gamma-ray emission [@Barthelmy05a], lasting anywhere from $\sim 10^{2}$ to $\sim 10^{3}$ seconds post trigger. This is followed by a flattening of the light curve that can last hundreds of seconds [@Granot06] before eventually transitioning to the late time powerlaw decay previously observed by other spacecraft. Most surprisingly, interspersed among these various components of the prompt afterglow emission have been the detections of major re-brightening episodes, with emission flaring in some cases several hundred times above the declining afterglow emission [@Burrows05b]. In rare cases, these flares have actually surpassed the luminosity of the original GRB [@Burrows07]. Numerous papers have been published discussing a variety of mechanisms that could produce the late time flaring [@Zhang06; @Liang06; @Falcone06; @Mundell06; @Perna06; @Proga06; @Lazzati07; @Lee07; @Lyutikov06; @Fan05]. Most of these mechanisms place tight constraints on the timescales on which their emission can be produced [@Ioka05]. The simplest explanation would be that the forward shock powering the afterglow runs into ambient density fluctuations as it moves into the surrounding medium [@Wang00].
This external shock interpretation has difficulties explaining the degree of variability that is clearly seen in many of these flares [e.g., @Burrows05b and below]. Simple kinematic arguments show that fluctuations due to turbulence of the interstellar medium or variable winds from the progenitor are expected to produce broad and smooth rise and decay profiles, with $\Delta t / t \sim 1$ [@Ioka05]. Here $t$ is the time since the gamma-ray trigger and $\Delta t$ is the variability timescale. Shocks internal to the relativistic outflow [@Rees94; @Narayan92], similar to the shocks believed to produce the prompt gamma-ray emission, do not suffer from these same constraints and could in theory produce variability on much shorter timescales. In the internal shock scenario the rise time of an individual pulse is governed by the time it takes for the reverse shock to propagate back through the shell. The decay time is largely set by the relativistic kinematics, or curvature effects, in which the arrival of off-axis emission from a relativistically expanding shell is delayed and affected by a varying Doppler boost. Another clue that the flares are produced in a region distinct from the external shock is that the temporal decay of the afterglow emission appears largely unaffected by the presence of flaring. The temporal index of the afterglow after the flaring activity is typically consistent with the pre-flare decay index. Although most bright flaring occurs within one hour of the GRB, flares have been observed during each of the light curve phases described above. If, for example, the flare represented the onset of forward or reverse shock emission from a slow shell catching up and colliding with the external shock, then these flares would be expected to occur only before the flat energy injection phase.
Furthermore, [@Burrows07] points out flaring in one example of a possible “naked burst” [@Kumar00], an event which decays rapidly in time and therefore exhibits no evidence of external shock emission. This supports the argument that whatever is powering the afterglow is most likely not creating the X-ray flares, leaving internal shocks or direct central engine activity as likely methods for their production. Further evidence that late time X-ray flares might be associated with internal shocks comes from their spectral characteristics. First, most of the flares are much harder than the underlying afterglow emission and, as reported by @Burrows07, the spectral characteristics of the afterglow emission appear unaffected by the flaring activity, possibly indicating two distinct emitting regions. Second, spectral fitting by @Butler07 has shown that many flares can be well fit by the Band model [@Band93] that so effectively describes the prompt emission, which is largely believed to be the result of internal shock collisions. Furthermore, detailed time resolved spectral fitting of bright flares by @Butler07 has shown that the spectral break energy $E_{pk}$ of the Band model, which represents the energy at which most of the photons are emitted, evolves to lower energy during the flare in a way that is very similar to what is seen in the prompt emission [@Norris86]. The evolution also follows the hardness-intensity correlation [@Golenetski83], a well known relationship observed in the prompt emission that can be attributed to the relativistic effects that produce the decay profile of individual pulses [@Kocevski03]. If the energy released by this activity is converted to radiation through late time internal shocks, then the question remains as to the characteristic radius at which these internal shocks occur, as well as the delay in their ejection.
Either the central engine is still functioning and emitting shells at very late times, or the final few shells of the original outflow, which were emitted along with the shells that created the prompt emission, catch up with each other only after a long delay due to a small relative difference in their bulk Lorentz factor $\Gamma$. The first scenario could essentially produce shell collisions at any radius, as the delayed arrival of the flares would, in this case, primarily reflect the time that the engine was dormant [@Kobayashi97]. The second scenario predicts that the late time flares should occur at a radius that is significantly larger than the radius at which the prompt emission was created, with their delayed arrival being a result of the shells’ time of flight before colliding. This second scenario leads to a very specific and testable prediction, namely that the width of individual pulses of emission should become broader and less variable when originating from shells of increasing collision radii $R_{c}$. @Ramirez-Ruiz00 tested for this pulse width evolution in the light curve profiles of BATSE events and found no evidence for any such effect. They concluded that the prompt emission observed by BATSE must have been produced over a small range of $R_{c}$ from the central engine and that no significant deceleration of $\Gamma$ could have occurred over the duration of the observed activity. The goal of this paper is to extend the gamma-ray pulse width analysis to the late time flaring X-ray emission following GRBs. The public catalog of [*Swift*]{} XRT flares (see also Chincarini et al. 2007) represents the first dataset to test the internal vs. external shock scenario for this flaring activity.
Whereas previous studies were limited to prompt emission occurring less than 100 seconds after trigger, the late time X-ray flares give us the opportunity to test for pulse width and variability evolution out to, in some cases, 1000 seconds after the trigger of the GRB, where this effect may be more pronounced. We provide a simple derivation of the expected pulse width evolution in both the small $\Delta\Gamma$ and delayed engine activity scenarios in $\S 2$, followed by a discussion of our data reduction techniques in $\S 3$ and results in $\S 4$. We find evidence for pulse width evolution in 28 flares as well as a qualitative trend of decreasing variability as a function of the flare’s time of peak flux. We discuss the implications of our observations in $\S 5$. This work expands upon and formalizes our previous reports [@2006AAS20922703K; @ButlerGLAST07] of the discovery of pulse width evolution.

Pulse Width Evolution {#sec:PulseWidthEvolution}
=====================

The standard fireball model postulates the release of a large amount of energy by a central engine into a concentrated volume [@Cavallo78], which causes the resulting outflow to expand and quickly become relativistic [@Paczynski86]. In the internal shock scenario [@Rees94], this outflow is assumed to be variable, consisting of multiple shells of differing bulk Lorentz factors $\Gamma$. These shells propagate and expand adiabatically until a faster shell collides with a slower one, causing the shells to coalesce and convert a significant fraction of their kinetic energy into radiation, most probably through optically thin synchrotron radiation. The resulting pulse profile that is observed is a convolution of two distinct timescales. The rise time of the pulse is largely due to the time it takes for the reverse shock that is induced by the collision to cross the width of the faster shell.
The decay time, on the other hand, is governed mainly by angular and kinematic effects where off-axis emission is delayed and affected by a varying Doppler boost due to the curvature of the relativistic shell (see Figure 1 in @Kocevski03). As a result, the decay time can be, and in most cases is, much longer than the rise time, leading to an asymmetric pulse profile. The combination of these two timescales (the shell crossing time and the angular time) naturally explains the so-called “fast rise exponential decay” or FRED pulses that are so ubiquitous in prompt GRB emission.[^1] If we examine these two timescales in more detail, we can see that the rise time is primarily a thickness effect and can be expressed as $\Delta t_{rise} = \delta R / c(\beta_{2}-\beta_{rs})$, where $\delta R$ and $\beta_{2}$ are the thickness and velocity of the second shell that is catching up to the first and $\beta_{rs}$ is the velocity of the reverse shock. If both the slow and fast shells have Lorentz factors of roughly the same order $\sim \Gamma$, then the resulting rise time is of order $\sim \delta R / c$. Because the merging shells are traveling forward at a velocity very close to the speed of light ($\Gamma \gg 1$), the resulting coalesced shell keeps up with the photons that it emits. Therefore, any emission activity over a fixed duration will appear to an outside observer to be compressed in time by a factor of $1/2\Gamma_{m}^{2}$, where $\Gamma_{m}$ is the resulting Lorentz factor of the merged shell. The observed rise time can therefore be written as $$\label{eq:rise} \Delta t_{r} \approx \frac{\delta R}{2c\Gamma_{m}^{2}}$$ So given a sufficiently large $\Gamma_{m}$, internal shocks can essentially produce variability along the line of sight on arbitrarily short timescales. Angular (or curvature) effects have the opposite effect, causing a broadening of the overall emission profile that can quickly come to dominate the observed pulse shape.
The decay timescale is essentially the difference in light-travel time between photons emitted along the line of sight and photons emitted at an angle $\theta$ along a shell of radius $R$. This can be stated as $$\label{eq:decay} \Delta t_{d} = \frac{R(1-\cos\Delta\theta)}{c} \approx \frac{R(\Delta\theta)^{2}}{2c} \approx \frac{R}{2c\Gamma^{2}}$$ where the last step assumes that the shell is moving with sufficient velocity such that the solid angle accessible to the observer is limited by relativistic beaming and thus given by $\Delta\theta \sim 1/\Gamma$. Therefore, comparing Equation \[eq:rise\] and Equation \[eq:decay\], we can see that curvature effects become important whenever the radius of the shell exceeds the shell thickness, which is true for all but the earliest moments of the shell’s expansion. The significance of Equation \[eq:decay\] is that angular effects should scale linearly with the radius of the emitting shell and therefore pulse durations should become broader as shell collisions occur further from the central engine. If the flares are the result of multiple shells that have been ejected almost instantaneously (or at least within a timescale that is small compared to the overall GRB duration) but collide at very late times due to a small dispersion in Lorentz factors, then one would expect these late collisions to occur at greater radii. In this scenario, we can replace the radius of the shell in Equation \[eq:decay\] with the time $t$ since the ejection of the first shell by noting that the observed radius of a spherical shell expanding with $v \sim c$ can be approximated as $R \approx ct\Gamma^{2}$, where the extra factor of $\Gamma^{2}$ is due to relativistic corrections, leading to $$\label{eq:decay_t} \Delta t_{d} \approx \frac{t}{2}$$ Therefore, the late shock scenario would predict a linear correlation between a shell’s time of flight and the resulting pulse duration, independent of the Lorentz factor of the shell.
This relationship between the pulse duration and the time since the ejection of the internal shocks has been noted before. @Fenimore96 found, through a much more detailed derivation, that a pulse’s [*FWHM*]{} should scale roughly as $0.26T_{0}$ to $0.19T_{0}$ as the low energy powerlaw index $\alpha$ varies from 1 to 2. Similarly, @Ioka05 derive that the variability of flares that result from refreshed shocks should be limited by $\Delta t \geqslant t_{p}/4$. In each case, flares occurring at larger radii are expected to produce broader pulse durations. This relationship between $\Delta t_{d}$ and $t$ is modified if there is an intrinsic delay $\Delta t_{engine}$ in the ejection of the subsequent shells by the central engine. If we imagine two shells emitted at time zero and time $\Delta t_{\rm engine}$, then provided the Lorentz factor of the second shell $\Gamma_2$ exceeds $\Gamma_1$, the Lorentz factor of the first shell, the shells will collide at time $$t_c = { \Gamma_1^2 \Delta t_{\rm engine} \over \Gamma_2^2-\Gamma_1^2}$$ If the shells have equal mass, which corresponds to the maximal efficiency for conversion of kinetic energy into radiation, energy and momentum conservation lead to a merged shell with Lorentz factor $\Gamma_m = \sqrt{\Gamma_1 \Gamma_2}$. The timescale over which the shell emits will be governed by the longest timescale of $\Delta t_a$, $\Delta t_r$, or $\Delta t_c$, the angular, the radial, or the cooling timescale, respectively. The angular and radial timescales can both be given as: $$\Delta t_a \approx \Delta t_r \approx { R \over 2c \Gamma_m^2 } = { R \over 2c \Gamma_1 \Gamma_2}.$$ The time at which a flare is observed will be $t_c$, and the observed duration will be $\Delta t = t_c \Gamma_1/\Gamma_2 \approx t_c/2$ for an efficient collision with $\Gamma_2=2\Gamma_1$.
For this $\Gamma_2/\Gamma_1$ ratio, the flare duration $\Delta t$ is related to the duration at the central engine by $\Delta t = \Delta t_{\rm engine} /6 \sim \Delta t_{\rm engine}$. Therefore, if there is any appreciable delay in the ejection of relativistic material from the central engine, the resulting pulse shape will not necessarily reflect the shell radius, but rather the intrinsic delay between the ejection of the two shells. Any correlation between pulse shape and time of peak flux must then be attributed to the activity of the central engine.

Data $\&$ Analysis {#sec:Data}
==================

We select a subsample of 28 bright ($\gtrsim 10$ cts/s) flares that are fully time-sampled (i.e., no gaps in their light curves) in 18 separate GRB afterglows observed by XRT. The Burst Alert Telescope (BAT) and XRT data were downloaded from the [*Swift*]{} Archive[^2] and processed with version 0.10.3 of the [xrtpipeline]{} reduction script and other tools from the HEAsoft 6.0.6[^3] software release. We employ the latest (2006-12-19) calibration files available to us at the time of writing. The reduction from cleaned event lists output by the [xrtpipeline]{} code and from the HEAsoft BAT software to science-ready light curves and spectra is described in detail in @Butler07. The bright XRT flare data are taken overwhelmingly in windowed-timing (WT) mode, which mandates special attention to bad detector columns. As the spacecraft moves, a significant and time varying fraction of the source flux can be lost if source counts fall on the bad columns. To account for this [see, @Butler07], we calculate exposure maps for the WT mode on a frame-by-frame basis. We accumulate 0.3-10.0 keV counts in each light curve bin until a fixed signal-to-noise ($S/N$) of 3 is achieved. All of the resulting light curves to which we apply our analysis are publicly available[^4]. A composite light curve plot showing all 18 GRBs in our data set is shown in Figure 1.
Flare Duration Measures
-----------------------

The first step in our analysis consists of measuring the global flare duration timescales. Because we have found no one functional form (e.g., Gaussians) to adequately fit the X-ray flare time profiles (Figure 1), we employ non-parametric duration estimators. We consider the flare $T_{90}$ duration as the time required to accumulate between 5% and 95% of the flare counts. We also define a rise time as the time between 5% accumulation and the time of count rate peak. Errors on these quantities are determined from the non-parametric bootstrap (i.e., by recalculating the quantities for data simulated using the measured data and errors). A bias affecting these duration measures (and probably all duration measures) is the unknown background under the flares. As discussed above, studies have shown that a powerlaw decaying background likely does exist. However, it cannot cleanly be measured in many of our events, and not at all for events which suffer from data gaps. In an effort to avoid such biases in our duration measurements, we have restricted our analysis to flares that are typically 2$-$3 orders of magnitude above background.

Flare Variability Measures
--------------------------

The flare duration and the component rise and decay times are gross measures of variability. In addition to this information, we attempt to measure the finer timescale fluctuations in the light curves, which may prove important for inferring the size and nature of the flare’s emitting region. Several methods for measuring signal power versus timescale have been applied to astronomical inquiries, and in GRB research in particular. Several authors (e.g., Belli 1992; Giblin, Kouveliotou, & van Paradijs 1998; Beloborodov, Stern, & Svensson 2000) employ the Fourier power spectral density (PSD) to study time variations in GRB light curves.
The autocorrelation function (ACF), which is simply the Fourier transform of this PSD, has been used to demonstrate a narrowing of GRB pulses with increasing energy band (e.g., Fenimore et al. 1995). Below, we utilize the [*first order structure function*]{}, which is directly proportional to the ACF and has a rich heritage in the study of quasar time histories (e.g., Simonetti et al. 1985; Hughes, Aller, & Aller 1992).

Because the light curves of flaring sources are by definition non-stationary signals (i.e., signals whose frequency content changes with time) which exhibit sharp discontinuities, Fourier transforms do a particularly poor job of accurately measuring their power on both short and long timescales. Furthermore, they offer no ability to distinguish the temporal variations of specific spectral components (i.e., the time at which a characteristic frequency changes in a light curve). They are also somewhat more prone than $ACF$ methods to aliasing effects due to irregularly time-sampled data.

Instead of constructing a PSD using superpositions of sines and cosines, we can perform the equivalent analysis by constructing a scaleogram through the use of a discrete Haar wavelet transform. The Haar wavelet is the simplest possible wavelet, consisting of a step function, and has been previously exploited to “denoise” GRB light curves (e.g., Kolaczyk & Dixon 2000) and to infer milli-second variability during the first seconds of bright BATSE GRBs [@Walker00]. As described in more detail in the Appendix, we calculate the structure function from Haar wavelet coefficients as: $$\sigma^2_{X,\Delta t} = \Delta t/t \sum_{i=0}^{t/2\Delta t-1} (\bar X_{2i+1,\Delta t} - \bar X_{2i,\Delta t})^2,$$ where $X_i$ is the natural logarithm of the observed XRT count rate in bin $i$ at time $t$, and $\Delta t$ is the timescale (or time “lag”) between successive bins.
The bar over the $X_i$ denotes an averaging with respect to shorter timescales, which is accomplished by the discrete wavelet transform (see, e.g., Press et al. 1992). If this averaging were not performed, $\sigma^2_{X,\Delta t}$ would be equal to the structure function $SF = < (X_{i+\Delta t}-X_i)^2 > = 1 - ACF$. Instead, we have an estimator for $SF$, which ends up being far easier to interpret, as we discuss in the Appendix.

Results {#sec:Results}
=======

Pulse Broadening in an Individual Event
---------------------------------------

A composite BAT and XRT light curve for GRB 060714 is shown in Figure 2. The red solid line represents a multiply-broken powerlaw fit to the light curve. The inflection points in the fit allow us to measure the boundaries and durations of the individual pulses within the signal. Each pulse is delineated by the short-dotted lines, with the corresponding pulse duration labeled below the light curve. A general trend can be seen in which the pulse durations become broader as the burst progresses, with the shortest activity occurring early in the event.

The bottom panel plots the minimum timescale $\Delta t$ for which $\sigma_{X,\Delta t}$ is at least $3\sigma$ above the floor expected from Poissonian fluctuations. Consistent with the trend seen in pulse duration, this minimum variability timescale, which is calculated without fitting the data, increases roughly as a powerlaw as the burst moves from early gamma-ray emission to late X-ray emission. To show that this increase in the variability timescale is not simply due to an increase in the data binning as the burst fades, we have plotted the time binning as a short dotted line in the bottom panel. For most of the event ($t\lesssim$ 200 seconds), the timescale below which little or no significant power exists is at least an order of magnitude higher than the resolution of the light curve allowed by the binning of the signal.
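The minimum variability timescale plotted in the bottom panel can be computed without any model fitting, by scanning the scaleogram for the first $3\sigma$ excess over the Poisson floor. A sketch with made-up numbers (in practice the floor and errors come from the counting statistics):

```python
def min_variability_timescale(scales, sigma_x, sigma_err, poisson_floor):
    """Smallest timescale at which the measured variability power exceeds
    the Poisson noise floor at >= 3-sigma significance."""
    for dt, s, e in zip(scales, sigma_x, sigma_err):
        if s - poisson_floor > 3.0 * e:
            return dt
    return None  # no significant variability on any sampled timescale

# toy scaleogram: power grows with timescale above a flat noise floor
scales = [1, 2, 4, 8, 16, 32]                 # dyadic timescales, seconds
sigma_x = [0.34, 0.36, 0.45, 0.70, 1.10, 1.60]
sigma_err = [0.02] * 6
dt_min = min_variability_timescale(scales, sigma_x, sigma_err,
                                   poisson_floor=0.33)  # -> 4
```

Tracking `dt_min` in sliding time windows across the burst reproduces the kind of powerlaw increase seen in Figure 2 without ever fitting pulse shapes.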
Overall, the bottom panel shows that the very fast time variability associated with the prompt GRB emission dies out at late times. Although the general broadening of the pulse durations seen in Figure 2 can be detected within the separate BAT and XRT light curves, the comparison of the pulse durations can only be qualitative when considering a light curve that spans both detectors. This is because GRBs are typically wider at lower energies [@Fenimore96], a direct result of the evolution of their spectral break energy $E_{pk}$ to lower energies. We showed in @Butler07 that the X-ray flares typically have $E_{pk}$ in the X-ray band, while the earlier GRB emission has $E_{pk}$ in the gamma-ray band. Therefore, pulses are expected to be intrinsically broader in the 0.3$-$10.0 keV bandpass of the XRT than in the higher 10$-$100 keV bandpass observed by BAT.

Pulse Broadening in the Sample Taken as a Whole
-----------------------------------------------

To eliminate the pulse broadening between separate energy bands, we limit our quantitative comparison of pulse durations (both within a single GRB and across our entire sample) to measurements made using only the XRT data on each event. This comparison is shown in Figure 3, where we plot pulse duration versus time of peak flux for our entire sample of XRT observed GRBs. The flares associated with each GRB are represented by the same color and symbol, with several GRBs exhibiting multiple flares throughout their early afterglow. As a whole, the sample shows a clear correlation between the flare duration and the time of peak flux since the GRB trigger. The resulting correlation strength is Kendall’s $\tau_{K} = 0.7$, with a significance of $10^{-7}$. The slope is consistent with linear, implying $\Delta t \propto t_{p}$, which cancels out the effects of cosmological redshift.
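A rank correlation of this kind can be computed directly from the $(t_p, \Delta t)$ pairs. The sketch below uses synthetic values with $\Delta t \propto t_p$ scatter (not the measured sample) and a hand-rolled Kendall $\tau$:

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's rank correlation: (concordant - discordant) pairs
    divided by the total number of pairs."""
    conc = disc = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(x)
    return (conc - disc) / (n * (n - 1) / 2.0)

# synthetic sample of 28 flares: durations scattered around 0.3 * t_peak
rng = np.random.default_rng(1)
t_peak = rng.uniform(100.0, 1000.0, size=28)
duration = 0.3 * t_peak * rng.lognormal(0.0, 0.3, size=28)
tau = kendall_tau(t_peak, duration)
```

Because Kendall's $\tau$ depends only on pair orderings, it is insensitive to the logarithmic axes of Figure 3 and to any monotonic recalibration of the durations.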
We find no significant correlation between duration and redshift ($\tau_K=0.2$, signif.$=0.2$), further ruling out cosmological time dilation as the source of this correlation. Roughly half of the events with multiple flares (those plotted in color in Figure 3) show a trend toward increasing duration with observation time. The other half show an anticorrelation. The pulse durations, times of peak flux, and rise times for all the flares in our sample can be found in Table 1.

Haar Structure Function View
----------------------------

Figure 4 shows $\sigma_{X,\Delta t}$ versus $\Delta t$ and $\sigma_{X,\Delta t}$ versus $\Delta t/t$ for the ensemble of flares under study[^5]. In this scaleogram plot, we show only $3\sigma$ excesses over the power associated with Poisson fluctuations and report lower values as $3\sigma$ upper limits. An X-ray flare is an emission episode uncorrelated in time with the afterglow flux prior to and after the flare. During the flare, and on timescales short relative to the flare duration, the flux will be highly correlated in time and there will be a linear rise in $\sigma_{X,\Delta t}$. This can be observed to arbitrarily short timescales if the fading powerlaw tail of a flare is measured with very high $S/N$. On the other hand, as we describe in more detail in the Appendix, correlated behavior in the light curve flattens the structure function, and this provides a direct measure of the flare timescales.

Consistent with the pulse duration correlation seen in Figure 3, the scaleogram plots show a range of important flaring timescales $dt=30-300$s, which becomes much tighter in units of $dt/t=0.1-0.5$. We observe a minimum characteristic timescale $dt/t=0.1$. The fractional flux variation levels at the minimum timescale are large ($\sigma_{X,\Delta t} \gtrsim 80$%), suggesting that the variations correspond to gross features in the light curve.
Consistent with this interpretation, we observe the flare rise times to have $\Delta t_{\rm rise}/t=0.1$ on average (Figure \[fig:time\_plot\_rise\]), and it is likely the sharp flare rises which produce the shortest timescales reflected in the structure function turnover. From the linear $\sigma_{X,\Delta t}$, we can rule out significant flickering on timescales shorter than $dt/t=0.1$ (or $dt=30$s) at fractional flux levels as small as $\gtrsim 3$% (Figure 3). We discuss the flare noise properties as a function of timescale in more detail in @Butler07a.

For observation times in the 100 to 1000 second range, $dt/t=0.1$ implies emission radii $R_c\approx 10^{15}$ cm$-$$10^{16}$ cm, for a bulk Lorentz factor $\Gamma=100$ (Equation \[eq:rise\]). The observable emission is restricted to an angle $\approx 1/\Gamma$, implying an effective emitting region of size $\delta R\approx R_c/\Gamma \approx 10^{14}$ cm$-$$10^{15}$ cm, compared to the typical external shock values of $10^{16}$ cm in the first hour or so [@Piran99].

Discussion {#sec:Discussion}
==========

The results from the temporal analysis outlined above provide substantial evidence that both the pulse duration and pulse variability of late time X-ray flares evolve with time. Both the pulse duration and variability timescales appear to have a narrow intrinsic range in $\Delta t/t = 0.3\pm 0.2$, consistent with a narrow range found independently by @Burrows07 for $\Delta t_{\rm rise}$ and by @Chincarini07 for $T_{90}$. GRB 060714 provides the best example of this behavior in an individual event. Several other individual GRBs display a similar increasing pulse duration trend among their associated flares, although several bursts do not (e.g., GRB 060210). For the bursts with multiple flares, only half show increasing flare durations. Each burst event typically shows only one, sometimes two (and three in one case), separate flares.
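The emission-radius estimate quoted above can be checked numerically, assuming the standard angular timescale relation $\Delta t \approx (1+z)\,R_c/(2\Gamma^2 c)$; the Lorentz factor and the illustrative redshift $z=2$ are assumptions here, not fits to any burst:

```python
C_CM_S = 3.0e10   # speed of light in cm/s
GAMMA = 100.0     # assumed bulk Lorentz factor (illustrative)
Z = 2.0           # illustrative redshift; not fit to any burst

def shock_radius(dt_obs, gamma=GAMMA, z=Z):
    """Emission radius from the angular timescale relation
    dt_obs ~ (1 + z) * R / (2 * Gamma^2 * c), solved for R (in cm)."""
    return 2.0 * gamma**2 * C_CM_S * dt_obs / (1.0 + z)

# dt/t ~ 0.1 at observation times of 100 s and 1000 s
for t_obs in (100.0, 1000.0):
    r_c = shock_radius(0.1 * t_obs)
    print(f"t = {t_obs:6.0f} s : R_c = {r_c:.1e} cm, "
          f"dR = R_c / Gamma = {r_c / GAMMA:.1e} cm")
```

Under these assumptions the two epochs give $R_c$ of order $10^{15}$ and $10^{16}$ cm, matching the range quoted above.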
These multiple flares within individual GRBs are only weakly separated in logarithmic time, and hence probe a small range of $R_{c}$ or $t_{\rm engine}$, which may not allow for a clean measurement of time evolution in individual events. The linear relationship between $\Delta t$ and $t_{p}$ is consistent with the pulse width evolution that is expected from the angular effects of late internal shocks at large radii, as outlined in $\S$\[sec:PulseWidthEvolution\].

Because we do not see a significant alteration of the afterglow light curve after the occurrence of an X-ray flare, the standard refreshed shock model, in which the trailing shells catch up to the leading shell only after the leading shell decelerates due to an external medium, is disfavored. Although such a scenario is expected to produce a correlation between the pulse width and time of peak flux of order $\Delta t \geqslant t_{p}/4$ [@Ioka05], the trailing shells should have the effect of increasing the overall afterglow energy and thus have a discernible effect on the afterglow light curve, which is not seen. Therefore, the internal shocks producing the flares would have to be occurring behind the leading shock that has begun powering the afterglow, with their late occurrence, in this scenario, being due to a small relative Lorentz factor between the two inner shells.

The primary difficulty with this interpretation is the high flux ratio between the prompt and late emission, given the relatively small $\Delta\Gamma$ needed to explain the late collision time. As shown in detail by @Krimm07, the efficiency $\epsilon$ of an internal shock in converting a system’s kinetic energy into radiation scales roughly as $\epsilon \sim \Delta\Gamma^{2}$, so the observed flux drops quickly as the contrast between the Lorentz factors of the shocks decreases.
This poses a problem for the flares observed by Swift, as many exhibit peak fluxes that are significant fractions of, and in some cases comparable to, their associated prompt emission. The small $\Delta\Gamma$ scenario would require an extremely large amount of kinetic energy to remain in the system after the release of the prompt emission, given the low efficiency of the late collisions. These late and highly energetic shocks would, after producing the flaring activity, eventually collide with the external shocks and affect the observed afterglow light curve, something that is not seen in all events with flares.

Alternatively, if the late nature of the X-ray flares is due to a significant delay in the ejection of late shells by the central engine, then the necessity of a small $\Delta \Gamma$ is eliminated, alleviating this efficiency constraint. As described in $\S$\[sec:PulseWidthEvolution\], the arrival time $t_{c}$ and pulse width $\Delta t$ would then directly reflect the activity of the central engine. Therefore, in this scenario, the correlation between $t_{c}$ and $\Delta t$ would require an explanation intrinsic to the powering and/or reactivation of the central engine at late times.

Several authors have suggested mechanisms by which the central engine could be active at late times, most involving late-time fallback material or a long-lived accretion disk around a central black hole. A model proposed by @King05 suggests that the late-time activity could be attributed to the fragmentation and accretion of a collapsed stellar core, resulting in a sporadic release of energy rather than the classic view of a single cataclysmic event. Similarly, @Perna06 have proposed a viscous disk model in which the late-time activity is due to re-energization by material that falls in from a range of initial radii toward the accreting black hole.
In this scenario, the correlation between $t_{c}$ and $\Delta t$ would be due to the range of radii from which the accreting material was falling. Material at large radii, if continuously distributed throughout its orbit, would take longer to fall back onto the central black hole and would do so over a longer duration, due to its larger orbital circumference.

These models are not without their own share of difficulties. The simple fragmentation models [@King05] are inconsistent with the implications of the spectral evolution seen in many flares [@Krimm07]. Similarly, the viscous disk model requires a continuous distribution of material at discrete orbits to account for the episodic nature of the flares, as well as an extremely long-lived, and hence low-viscosity, accretion disk to explain flares at 1000 seconds after the original collapse.

It cannot be completely ruled out that the observed time evolution is due to spectral evolution or the superposition of multiple flares. Consider GRB 060124 [@Butler07; @Romano06], in which the flares may in fact be the prompt emission, because the faint BAT trigger may be a precursor. At high energies, the first XRT “flare” resolves into 2$-$3 shorter timescale BAT flares, which are blurred together in the XRT. We note that a shift of time origin for 060124 from $t\sim 0$ s to $t\sim 300$ s, corresponding to a shift in origin from the precursor to the flare start, does not lead to a violation of the $\Delta t$ and $t_{p}$ correlation. If, however, we used the BAT flare durations, the correlation could be violated. This indicates that spectral considerations are important, and that we are likely measuring in the XRT (in some cases) a pulse superposition. The duration, which increases in time, still appears to measure the duration of major emission activity; however, it is not clear that these are individual pulses. We know that the spectra of late time flares are evolving strongly [@Butler07] during the flares.
However, we observe only a weak correlation between peak time and hardness, indicating that there is a diversity of flare spectra at each epoch. Another important concern involves the powerlaw background onto which most of these flares are superimposed. Although we have not attempted to subtract the background from the events in our sample (because the backgrounds are not well defined), this should not dominate the observed correlation. We have selected the brightest flares for analysis, which have peak fluxes orders of magnitude greater than the underlying background flux. The correlation is also strong for measures of duration like $T_{50}$ or the @Reichart01 $T_{45}$, which are largely insensitive to pulse tails. Finally, we note that the flare rise time also strongly correlates with the peak time $t_{p}$, as shown in Figure 5.

Barring any of these selection and/or analysis effects, and assuming that the pulse width evolution is real, one possible test to distinguish between late internal shocks with small contrasts $\Delta\Gamma$ and direct central engine activity may come from contemporaneous high energy emission during the X-ray flares. If the internal shocks creating the flares are occurring behind the external shock, then one would expect the X-ray photons to be boosted to higher energies by a factor of $\Gamma_{FS}^{2}$ through inverse Compton (IC) scattering as they pass through the external shock [@Rybicki79]. The soft X-ray 10 keV photons associated with the X-ray flares could easily be boosted into the 1$-$100 MeV range, depending on the Lorentz factor of the external shock. The temporal profile of this high energy component should depend heavily on the distance behind the external shock at which this emission originated [@Wang06], as the duration of the IC component will reflect the geometry of the external shock, roughly $R/2\Gamma^{2}c$.
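The boost factor and the IC-component duration above can be illustrated with a quick numerical sketch (the external-shock Lorentz factor and radius are illustrative assumptions):

```python
C_CM_S = 3.0e10  # speed of light, cm/s

def ic_boosted_energy_kev(e_seed_kev, gamma_fs):
    """Single-scattering inverse-Compton boost of a seed photon crossing
    the external shock: E_IC ~ Gamma_FS^2 * E_seed."""
    return gamma_fs**2 * e_seed_kev

def ic_duration_s(radius_cm, gamma_fs):
    """Angular-smearing duration of the IC component, ~ R / (2 Gamma^2 c)."""
    return radius_cm / (2.0 * gamma_fs**2 * C_CM_S)

# a 10 keV flare photon and an external shock with Gamma_FS ~ 100 (assumed)
e_ic_kev = ic_boosted_energy_kev(10.0, 100.0)   # 1e5 keV = 100 MeV
dt_ic = ic_duration_s(1.0e17, 100.0)            # roughly 167 s for R = 10^17 cm
```

The boosted energy lands squarely in the 1$-$100 MeV range quoted above, and the IC duration scales linearly with the external shock radius, which is what makes the duration ratio a usable diagnostic.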
The ratio between the flare duration and the IC component’s duration should approach 1:1 as the radius of the internal shock producing the flare approaches the external shock radius. Internal shocks that result from delayed central engine activity do not necessarily have to be at large radii to produce the longer observed durations. Therefore, larger IC component to flare duration ratios are expected for flares produced by small radii collisions. Even if these late-time collisions at small radii have intrinsically longer durations, as suggested by late central engine activity models, the additional light travel time from the origin of the late time flares to the external shock as it expands may make this change in duration ratios measurable. Such a test for contemporaneous high energy emission will be well suited to the upcoming GLAST mission, which will be sensitive to photons up to $>$ 300 GeV.

Acknowledgments {#sec:acknowledgments}
===============

D.K. acknowledges financial support through the NSF Astronomy $\&$ Astrophysics Postdoctoral Fellowship under award AST-0502502. N.B. gratefully acknowledges support from a Townes Fellowship at U. C. Berkeley Space Sciences Laboratory and partial support from J. Bloom and A. Filippenko. J. S. B. and his group are partially supported by a DOE SciDAC Program through the collaborative agreement DE-FC02-06ER41438. We also thank Phil Chang, Edison Liang, and Demos Kazanas for their thoughtful discussions.

Appendix
========

We describe here the mathematical representation of a Haar wavelet and its use in the construction of a scaleogram closely related to the $ACF$ and first order structure function $SF$.
Given $T$ successive data bins $X_i$, we define the Haar wavelet coefficients $h_{i,1}$ on scale $\Delta t=1$ as $$\label{eq:hi1} h_{i,1} = X_{2i+1}-X_{2i}, \quad i=0,...,T/2-1.$$ At the same time, we can calculate the signal smoothed over a 2 bin scale $\Delta t=2$: $$\bar X_{i,2} = {1 \over 2}(X_{2i+1}+X_{2i}), \quad i=0,...,T/2-1.$$ By successively differencing and smoothing the signal on dyadic scales $\Delta t=1,2,4,$ etc., we build up the discrete Haar transform (see also Press et al. 1992): $$\label{eq:hidt} h_{i,\Delta t} = \bar X_{2i+1,\Delta t}- \bar X_{2i,\Delta t}, \quad i=0,...,T/2\Delta t-1.$$ If the $X_i$ are uncorrelated with equal variance, then the $h_{i,\Delta t}$ will be approximately linearly independent.

We form a Haar scaleogram by averaging the $h_{i,\Delta t}$ at each scale $\Delta t$: $$\sigma^2_{X,\Delta t} = \Delta t/t \sum_{i=0}^{t/2\Delta t-1} h_{i,\Delta t}^2 = \Delta t/t \sum_{i=0}^{t/2\Delta t-1} (\bar X_{2i+1,\Delta t} - \bar X_{2i,\Delta t})^2.$$ In practice, we calculate this average as an average weighted by the data measurement uncertainties, $w_i=1/\sigma_{D,i}^2$. This quantity, also known as the Allan (1966) variance, is closely related to the structure function $SF = < (X_{i+\Delta t}-X_i)^2 >$, where $<...>$ denotes an average over the data. Unlike $\sigma^2_{X,\Delta t}$, the quantity $SF$ is calculated without averaging the data on scale $\Delta t$ before differencing on that scale. This leads to a scaleogram with correlations (even for uncorrelated input data) between nearby data bins. The uncorrelated scaleogram $\sigma^2_{X,\Delta t}$ is therefore easier to fit and interpret, while both scaleograms have similar shapes for a wide variety of noise models.
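The dyadic difference-and-smooth recursion above translates directly into code. The sketch below implements the unweighted version of the scaleogram (the actual calculation additionally weights by the measurement uncertainties $w_i$):

```python
import numpy as np

def haar_scaleogram(x):
    """Haar scaleogram: at each dyadic scale dt, the (dt/t)-weighted sum of
    squared differences of the signal smoothed on that scale (unweighted
    version of sigma^2_{X, dt} above)."""
    x = np.asarray(x, dtype=float)
    t = len(x)
    out = {}
    smoothed = x
    dt = 1
    while len(smoothed) >= 2:
        pairs = smoothed[: 2 * (len(smoothed) // 2)].reshape(-1, 2)
        h = pairs[:, 1] - pairs[:, 0]        # Haar coefficients on scale dt
        out[dt] = (dt / t) * np.sum(h ** 2)  # sigma^2_{X, dt}
        smoothed = pairs.mean(axis=1)        # smooth onto scale 2 * dt
        dt *= 2
    return out

# sanity checks: a smooth ramp gives sigma^2 rising as dt^2 (sigma ~ dt),
# while white noise concentrates its power at the shortest scales
sg_ramp = haar_scaleogram(np.linspace(0.0, 1.0, 1024))
sg_noise = haar_scaleogram(np.random.default_rng(2).normal(0.0, 1.0, 1024))
```

Each pass halves the array, so the full transform costs $O(T)$ operations and yields one statistically independent power estimate per dyadic scale.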
Flare Ensemble Haar Structure Function
--------------------------------------

Because the Haar wavelets encode signal scale information as a function of time, it is possible to calculate $\sigma^2_{X,\Delta t}$ for arbitrary time sections of a light curve (e.g., Figure 2) or for the full light curve. To make useful scaleogram plots for multiple GRB flares (e.g., Figure 4), we place the time series data end-to-end and perform the Haar transform as though the data were binned on an even time grid. Transform coefficients formed by differencing data from separate events are discarded. By saving the actual time since GRB trigger $t$ and time bin width $\Delta t$ for each wavelet coefficient, we can then rebin the coefficients in time on a dyadic grid starting with the minimum bin size. In this fashion, it is possible to plot statistically independent $\sigma_{X,\Delta t}$ points versus the physically meaningful $\Delta t$ or $\Delta t/t$.

For $X_i$ in Equations \[eq:hi1\]$-$\[eq:hidt\], we use the natural logarithm of the XRT count rate. Because the counts have been binned to a fixed $S/N$ ratio, the error in $X_i$ is approximately constant ($\sigma_D \approx 1/3$). The natural logarithm is also useful because powerlaw flux variations lead to a “zero-flaring” scaleogram with $\sigma_{X,\Delta t} \propto \Delta t$, as can be seen from a Taylor expansion of the flux in time. Also, because we are working with the logarithm of the count rate, $\sigma_{X,\Delta t}$ can be interpreted as a root-mean-square (RMS) fractional variation in the flux $F$ (i.e., $\delta X \approx \delta F/F$).

Structure Function Interpretation
---------------------------------

Following the discussion in Hughes, Aller, & Aller (1992): on short timescales, the scaleogram $\sigma_{X,\Delta t}$ asymptotes to $\sigma_D$, where $\sigma_D$ is the data measurement uncertainty. Because we know $\sigma_D$, we can subtract this flattening out.
(This is typically not possible for $SF$ due to the introduction of correlations in the data.) From the Cauchy-Schwarz inequality[^6], the scaleogram increases with increasing time lag. It eventually saturates to a characteristic signal level $\sigma_{\rm signal}$ at time $\lesssim T_{90}$, once we begin to run out of correlated variations in the signal.

On intermediate timescales, the slope of $\sigma_{X,\Delta t}$ depends on the shape of the light curve and on the noise spectrum of possible low-level or unresolved flares. If the light curve is correlated on these timescales, which is to say smooth on these timescales, $\sigma_{X,\Delta t}$ will increase as $\Delta t$. If, however, the light curve is dominated by the sum of slowly decaying responses to low level flares, a characteristic “flicker noise” spectrum ($PSD(f)\propto 1/f$) may result, and $\sigma_{X,\Delta t} \propto \Delta t^0$. Hence, we can test for flaring as a function of timescale by measuring powerlaw $\sigma_{X,\Delta t}$ slopes less than unity. The fading powerlaw tail of a flare measured with infinite $S/N$ would produce a statistically significant $\sigma_{X,\Delta t}$ for arbitrarily small $\Delta t$. These timescales, where $\sigma_{X,\Delta t} \propto \Delta t$, are therefore uninteresting. However, the beginning of a $\sigma_{X,\Delta t} \propto \Delta t^0$ phase yields a physically meaningful timescale for the flaring. The breadth of this phase indicates the range of $\Delta t$ present in the light curve.

References
==========

Allan, D. W. 1966, Statistics of Atomic Frequency Standards, Proc. IEEE, 54, 221

Band, D., et al. 1993, ApJ, 413, 281

Barthelmy, S. D., et al. 2005a, ApJ, 635, L133

Barthelmy, S. D., et al. 2005b, Space Sci. Rev., 120, 143

Belli, B. M. 1992, ApJ, 393, 266

Beloborodov, A. M., Stern, B. E., $\&$ Svensson, R. 2000, ApJ, 535, 158

Burrows, D. N., et al. 2005a, Space Sci. Rev., 120, 165

Burrows, D. N., et al. 2005b, Science, 309, 1833

Burrows, D. N., et al.
2007, Submitted to Philosophical Transactions (astro-ph/0701046)

Unpublished, presented at the First GLAST Symposium, 7 Feb 2007, Palo Alto, CA

Butler, N., $\&$ Kocevski, D. 2007, Submitted (astro-ph/0612564)

Butler, N., Kocevski, D., $\&$ Bloom, J. S., in prep.

Cavallo, G., $\&$ Rees, M. J. 1978, MNRAS, 183, 359

Chincarini, G., et al. 2007, Submitted to ApJ (astro-ph/0702371)

Dado, S., Dar, A., $\&$ De Rújula, A. 2006, ApJ, 646, L21

Falcone, A. D., et al. 2006, ApJ, 641, 1010

Fan, Y. Z., $\&$ Wei, D. M. 2005, MNRAS, 364, L42

Fenimore, E., et al. 1995, ApJ, 448, L101

Fenimore, E., Madras, C., $\&$ Nayakshin, S. 1996, ApJ, 473, 998

Frontera, F., et al. 2000, ApJS, 127, 59

Gehrels, N., et al. 2004, ApJ, 611, 1005

Gendre, B. 2006, A$\&$A, 455, 803

Giblin, T. W., Kouveliotou, C., $\&$ van Paradijs, J. 1998, AIP Conf. Proc., 428, 241 (4th Huntsville Symposium)

Granot, J., Königl, A., $\&$ Piran, T. 2006, MNRAS, 370, 1946

Golenetskii, S. V., et al. 1983, Nature, 306, 451

Hughes, P. A., Aller, H. D., $\&$ Aller, M. F. 1992, ApJ, 396, 469

King, A., O’Brien, P. T., Goad, M. R., Osborne, J., Olsson, E., $\&$ Page, K. 2005, ApJ, 630, 113

Ioka, K., Kobayashi, S., $\&$ Zhang, B. 2005, ApJ, 631, 429

Kobayashi, S., Piran, T., $\&$ Sari, R. 1997, ApJ, 490, 92

Kocevski, D., Ryde, F., $\&$ Liang, E. P. 2003, ApJ, 596, 389

Kocevski, D., Butler, N., $\&$ Bloom, J. S. 2006, American Astronomical Society Meeting Abstracts, 209, 22703

Kolaczyk, E. D., $\&$ Dixon, D. D. 2000, ApJ, 534, 490

Krimm, H. A., Granot, J., Marshall, F., Perri, M., Barthelmy, S. D., Burrows, D. N., Gehrels, N., Mészáros, P., $\&$ Morris, D. 2007, submitted to ApJ (astro-ph/0702603)

Kumar, P., $\&$ Panaitescu, A. 2000, ApJ, 541, L51

Lazzati, D., $\&$ Perna, R. 2007, MNRAS, 375, L46

Lee, W., $\&$ Ramirez-Ruiz, E. 2007, New J. Phys., 9, 17 (astro-ph/0701874)

Liang, E. W., et al. 2006, ApJ, 646, 351

Lyutikov, M. 2006, MNRAS, 369, L5

Mundell, C. G., et al. 2006, Accepted to ApJ (astro-ph/0610660)

Narayan, R., Paczyński, B., $\&$ Piran, T.
1992, ApJ, 395, L83

Norris, J. P., et al. 1986, ApJ, 301, 213

Paczyński, B. 1986, ApJ, 308, L43

Panaitescu, A. 2005, MNRAS, 363, 1409

Perna, R., Armitage, P., $\&$ Zhang, B. 2006, ApJ, 636, L29

Piran, T. 1999, Phys. Rep., 314, 575

Press, W. H., et al. 1992, Numerical Recipes in C (2nd ed.; Cambridge: Cambridge Univ. Press)

Proga, D., $\&$ Zhang, B. 2006, MNRAS, 370, L61

Ramirez-Ruiz, E., $\&$ Fenimore, E. 2000, ApJ, 539, 712

Rees, M. J., $\&$ Mészáros, P. 1994, ApJ, 430, L93

Reichart, D., et al. 2001, ApJ, 552, 57

Romano, P., et al. 2006, A$\&$A, 456, 917

Rybicki, G., $\&$ Lightman, A. 1979, Radiative Processes in Astrophysics (New York: Wiley)

Scargle, J. D. 1998, ApJ, 504, 405

Simonetti, J. H., Cordes, J. M., $\&$ Heeschen, D. S. 1985, ApJ, 296, 46

Walker, K. C., Schaefer, B. E., $\&$ Fenimore, E. E. 2000, ApJ, 537, 264

Wang, X., $\&$ Loeb, A. 2000, ApJ, 535, 788

Wang, X., Zhuo, L., $\&$ Mészáros, P. 2006, ApJ, 641, 89

Zhang, B., et al. 2006, ApJ, 642, 354

Figure Captions
===============

[**Fig. 1.**]{} - The XRT count rate (cts/s) plotted vs. time since trigger for all 28 flares in 18 separate GRBs. The light curves in this plot are rebinned to $S/N=10$. A qualitative trend between pulse width and time of peak flux can be seen by inspection.

[**Fig. 2.**]{} - [*Top Panel.*]{} A composite BAT and XRT light curve for GRB 060714 showing an increasing pulse duration as a function of time. [*Bottom Panel.*]{} The minimum variability timescale in the composite light curve (with power that is at least 3$\sigma$ above that expected from Poissonian fluctuations). The minimum variability timescale of the light curve increases with time, roughly as a powerlaw, $\Delta T_{min} \propto T^{1.9 \pm 0.6}$.

[**Fig. 3.**]{} - The pulse duration $T_{90}$ versus time of peak flux $T_{p}$ for our entire sample of XRT observed flares. Multiple flares from individual GRBs are displayed with a unique color-symbol combination, whereas GRBs with only one flare are represented by a black diamond.
A strong trend ($\tau_{K} = 0.7$) between pulse width and the time since trigger, as measured in the observer frame, is clear from the data. Only half of the GRBs with multiple flares display a similar increasing pulse duration trend between their associated flares. We conclude that the observed pulse width evolution only becomes apparent when examining durations that cover a broad temporal range.

[**Fig. 4.**]{} - Haar wavelet scaleogram $\sigma_{X,\Delta t}$ versus timescale $\Delta t$ (Panel A) and $\Delta t/t$ (Panel B) for the ensemble of flares under study. The expected level for Poisson noise has been subtracted out. Because $\sigma_{X,\Delta t}$ is calculated from the natural logarithm of the XRT count rate, it can be interpreted as a measure of RMS fractional flux variation versus timescale. The scaleograms reach maximum and turn over on timescales $\Delta t \approx 30-300$s and $\Delta t/t \approx 0.1-0.5$, indicating that the flaring occurs on these characteristic timescales. Significant ($>3\sigma$) variability is observed on timescales $\Delta t\gtrsim 3$s and $\Delta t/t \gtrsim 0.01$; however, $\sigma_{X,\Delta t} \propto \Delta t$ (dotted red curves) indicates that this variation is due to flaring on intrinsically longer timescales.

[**Fig. 5.**]{} - The flare rise time $T_{r}$ plotted vs. the time of peak flux $T_{p}$. As in Figure 3, multiple flares from individual GRBs are displayed with a unique color-symbol combination, whereas GRBs with only one flare are represented by a black diamond. An increasing trend similar to that seen between $T_{90}$ and $T_{p}$ is evident in the data. The observed rise times are largely insensitive to the effects of background subtraction.
Figures {#figures .unnumbered} ======= \[fig:all\_flares\] \[fig:060714\_dt\] \[fig:time\_plot\] \[fig:multi\_haar\] [![image](f4.ps){width="4.5in"}]{} ![image](f5.ps){width="4.5in"} \[fig:time\_plot\_rise\] [rrrrrrrrr]{} \[Table:sample\] \ 050502B & 400.0 – 1200.0 & 358.1 $\pm$ 3.2 & 784.6 $\pm$ 23.7 & 203.5 $\pm$ 23.7\ 050607 & 250.0 – 600.0 & 165.5 $\pm$ 18.0 & 312.6 $\pm$ 3.2 & 28.8 $\pm$ 9.6\ 050713A & 95.0 – 150.0 & 39.3 $\pm$ 0.5 & 122.8 $\pm$ 5.3 & 20.7 $\pm$ 5.3\ 060111A & 200.0 – 500.0 & 191.3 $\pm$ 2.5 & 279.6 $\pm$ 15.4 & 43.4 $\pm$ 15.4\ 060312 & 100.0 – 200.0 & 56.0 $\pm$ 2.5 & 113.2 $\pm$ 1.7 & 11.4 $\pm$ 1.7\ 060526 & 230.0 – 450.0 & 128.9 $\pm$ 0.9 & 251.4 $\pm$ 2.8 & 4.8 $\pm$ 2.8\ 060604 & 120.0 – 200.0 & 63.9 $\pm$ 0.5 & 136.8 $\pm$ 3.4 & 11.0 $\pm$ 3.5\ 060904B & 140.0 – 300.0 & 105.4 $\pm$ 0.8 & 182.3 $\pm$ 6.6 & 30.5 $\pm$ 6.7\ 060929 & 470.0 – 800.0 & 201.5 $\pm$ 3.0 & 513.0 $\pm$ 19.9 & 22.4 $\pm$ 20.0\ 050730 & 300.0 – 600.0 & 245.6 $\pm$ 2.4 & 434.4 $\pm$ 16.3 & 105.1 $\pm$ 16.5\ 050730 & 600.0 – 800.0 & 165.4 $\pm$ 1.7 & 672.2 $\pm$ 22.7 & 54.1 $\pm$ 22.8\ 051117A & 800.0 – 1250.0 & 365.0 $\pm$ 1.7 & 997.1 $\pm$ 50.0 & 158.9 $\pm$ 50.1\ 051117A & 1250.0 – 1725.0 & 389.3 $\pm$ 1.4 & 1328.3 $\pm$ 33.8 & 41.5 $\pm$ 33.8\ 060124 & 300.0 – 650.0 & 221.7 $\pm$ 1.6 & 563.8 $\pm$ 7.9 & 165.7 $\pm$ 7.7\ 060124 & 650.0 – 900.0 & 158.4 $\pm$ 1.4 & 694.9 $\pm$ 6.0 & 28.5 $\pm$ 6.0\ 060204B & 100.0 – 270.0 & 103.3 $\pm$ 6.4 & 118.6 $\pm$ 3.5 & 13.2 $\pm$ 3.6\ 060204B & 270.0 – 450.0 & 91.9 $\pm$ 3.9 & 332.3 $\pm$ 9.8 & 33.4 $\pm$ 10.1\ 060210 & 165.0 – 300.0 & 98.5 $\pm$ 1.1 & 207.6 $\pm$ 6.5 & 35.4 $\pm$ 6.4\ 060210 & 350.0 – 450.0 & 79.2 $\pm$ 0.9 & 369.9 $\pm$ 4.4 & 10.4 $\pm$ 4.4\ 060418 & 83.0 – 110.0 & 22.7 $\pm$ 0.4 & 87.8 $\pm$ 1.2 & 3.4 $\pm$ 1.2\ 060418 & 122.0 – 200.0 & 58.0 $\pm$ 0.6 & 130.3 $\pm$ 0.8 & 4.8 $\pm$ 0.8\ 060607A & 93.0 – 130.0 & 31.1 $\pm$ 0.3 & 99.0 $\pm$ 3.2 & 3.7 $\pm$ 3.2\ 060607A & 220.0 – 400.0 & 138.8 $\pm$ 1.9 & 
265.3 $\pm$ 11.6 & 35.5 $\pm$ 11.7\ 060714 & 100.0 – 125.0 & 15.9 $\pm$ 0.2 & 114.1 $\pm$ 0.6 & 5.8 $\pm$ 0.6\ 060714 & 125.0 – 160.0 & 27.9 $\pm$ 0.3 & 132.5 $\pm$ 3.5 & 5.2 $\pm$ 3.5\ 060714 & 160.0 – 230.0 & 48.3 $\pm$ 1.2 & 178.4 $\pm$ 1.7 & 14.9 $\pm$ 1.6\ 060904A & 250.0 – 600.0 & 219.6 $\pm$ 12.8 & 288.9 $\pm$ 19.5 & 21.0 $\pm$ 19.6\ 060904A & 600.0 – 1000.0 & 314.2 $\pm$ 6.9 & 678.5 $\pm$ 7.7 & 33.3 $\pm$ 7.7\ [^1]: Here we assume that the intrinsic cooling time $\Delta t_{c}$ of the shell is insignificant compared to the duration of the shell-crossing $\Delta t_{r}$ and angular $\Delta t_{d}$ timescales, because of the magnetic field strength required to produce the gamma-ray emission. [^2]: ftp://legacy.gsfc.nasa.gov/swift/data [^3]: http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/ [^4]: http://astro.berkeley.edu/$\sim$nat/swift [^5]: We reserve a more detailed study of the Haar structure functions of individual flare events and [*Swift*]{} GRBs for a separate paper [@Butler07a]. [^6]: Recall that the Cauchy-Schwarz inequality states that $| \langle x,y \rangle |^{2} \leqslant \langle x,x \rangle \cdot \langle y,y \rangle$, and that the two sides are equal only if $x$ and $y$ are linearly dependent.
Friday, September 26, 2008 Politics: Contrasting Strategies on the Right TORONTO, ONTARIO - Globe and Mail columnist John Duffy has made the case that the first North American election is taking place in the United States and Canada. Duffy believes that rural voters (represented by the Republicans and the Conservatives) are being pitted against urban voters (represented by the Democrats and the other parties in Canada). While challenges to that thesis could take several forms, perhaps the most interesting divide may be that between the tactics used by the Republican Party of John McCain and the Conservative Party of Stephen Harper. In Canada, Stephen Harper has positioned his Conservative Party as the beacon of stability and competent economic stewardship. He has called the other parties "too risky for Canada," especially in rough economic times, and has tried to emphasize that risk. Indeed, polls indicate that Harper is viewed by Canadians as the best leader to handle the economy. The deeper the economic problems in the United States prove to be, and the larger their impact on Canada appears, the more Harper would seem to benefit. In contrast, John McCain has been trying to establish himself as more anti-establishment than the candidate of the party that actually is out of power in the White House. Building on his long-standing reputation as a "maverick," McCain picked as his vice-presidential nominee someone else who could claim populist credentials in Alaska governor Sarah Palin. Since picking Palin, McCain has rarely talked about his advantage in experience over rival Barack Obama, instead emphasizing what his ticket would do to change the nation's capital. He paints himself as a man of action who takes on his own party and forces change in ways that benefit the country, even if that means doing things in an unconventional manner and hurting his own political prospects.
(As a side note, the etymology of the word "maverick" is somewhat amusing; it traditionally means an unbranded ranch animal, named after a Texas rancher who refused to brand his cattle. This was essentially an unethical practice by Samuel Maverick, allowing him to claim ANY unbranded cattle whether they were actually his or not, subverting the branding system to his personal advantage. Does a politician really want to be associated with a word of that etymology?) The "maverick," "country-first" strategy reached a new apex this week when McCain announced he was suspending his campaign to return to Washington to lead the effort to pass economic bail-out legislation, and that furthermore he would not attend the Presidential debate scheduled for this evening. While McCain tried to make a case for demonstrating self-sacrificing leadership through this action, in many ways he made Obama look like the competent statesman who was taking a measured approach and understood that the personal contribution of either candidate to the legislation was likely to be minimal. Obama's statements that "presidents have to do more than one thing at a time" and that "injecting presidential politics into this process may not be helpful" sound like things that Stephen Harper might say. Far from calling the other candidate too risky, McCain is opening the door to appearing too risky himself. A case could be made, in fact, that the personality expressed by McCain in his campaign does not most closely resemble that of Stephen Harper, but rather that of Bloc Québécois leader Gilles Duceppe. Interestingly, it is indeed the Bloc that may effectively compete for rural votes in Quebec, if for no other reason than its nationalist stance. While generally regarded as a left-leaning party, the Bloc currently holds most of the rural seats from Quebec and may retain a number of them considering the current backlash against perceived Conservative insensitivity to Québécois culture.
The rise of the Action démocratique du Québec (ADQ) at the provincial level in Quebec, as well as the election of a New Democratic Party member of parliament in urban Montreal, may indicate that Duffy's rural-urban thesis is coming to that province. However, since there is still competition for rural voters in Quebec and the current divide in tactics between the Republicans and Conservatives exists, the first North American election still probably lies in the future.
Wow is all I gotta say right now... iForce shocks me with another GREAT product... So far this is my favorite protein EVER! I refuse to buy any other from this point on. Big thank you to AllNatural and the iForce family. So I got mine a little later in the mail which I promise its worth the wait! Wife knows I have been lookin forward to this for a awhile and how excited I was. She appreciates iForce as much as I do in a diff way lol seeing how after my last log from you my libido was high hahaha anyways back to Protean!!! Mixability: 9/10.... For me in a shaker bottle it mixed up well only a few tiny clumps. I'm sure in a blender its no prob. But who cares it still tastes amazing as powder! Nutrition: 10/10 exactly what it says it is... A lean protein with full flavor. Low in cals makes a great protein for cut or recomp. Can't wait to try with recipes Taste: 11/10 lmfao UNBELIEVABLE!!! title says it all!!! I spilled a little bit of powder before tryin and mmmm lol mixed it with whole milk. My wife even loved it!!! We both agreed that it tastes like a Hershey Cookies & Cream bar with mint :-D I can't believe its healthy Haha best protein hands down!!! Buy this and I guarantee you will drop all others. Can't wait to try the other flavors I'm hooked!!! This product gets a 10 out of 10 from me. Look at other reviews nothin but amazing. Thank you once again AllNatty and iForce for this opportunity!!! Cant wait to try all of your other products! It has got me sold on iForce Protean. I need to get my review up, but I've been trying to take the time to try different things with it and to see if I'll get tired of the flavor as the tub empties like I normally do with protein.
an initial review as well as subsequent reviews would be much appreciated. Full tubs = plenty of reviews Haha thx bro! Although I think she ordered it so she could have a tub of RVC for herself lol she loves it Lol I made my wife try it with me lol she doesn't even lift or anything but I heard everyone's wives and girls are takin their Protean lol I told my wife ill be takin my single tub everywhere with me and baby it. She will get me more and ya I think its the same reason Haha Yessir! I'm planning on making ice cream with mine coming up. I've already made pancakes and brownies with it. I made a batch this weekend - I used fat free half and half and used the VMS protean - I also added some vanilla extract and some real vanilla beans. No need for sugar or added splenda it was sweet and refreshing as anything. The bad news is it gets frostbitten real quick the good news is it doesn't last very long so it doesn't matter. Lol I made my wife try it with me lol she doesn't even lift or anything but I heard everyone's wives and girls are takin their Protean lol I told my wife ill be takin my single tub everywhere with me and baby it. She will get me more and ya I think its the same reason Haha ^^i'm jelly, need to find myself a woman that will buy/love protean the way i do :P Originally Posted by Airborne42 It's so delicious I want to have my buddies try it but then again... It's Miiiiine Lmao Lol don't blame you for not sharing, it's so damn good. awesome review bro, and glad you love it so much.
And I have to agree with what a lot of people are saying, although I wouldn't have thought I ever would: VMS > RVC ...But it still goes VMS > RVC > All

MAN Sports Online Lead Rep and Sales
http://www.nutraplanet.com/product/man/pure-pf3-free-fermented-leucine-limited-time.html
Like us on Facebook! Follow me on Instagram: Rob_MANSports
Synthesis of the phosphoramidite derivatives of 2'-deoxy-2'-C-alpha-methylcytidine and 2'-deoxy-2'-C-alpha-hydroxymethylcytidine: analogues for chemical dissection of RNA's 2'-hydroxyl group. Oligonucleotides containing 2'-C-alpha-methyl and 2'-C-alpha-hydroxymethyl modifications enable strategies for delineation of the distinctive role fulfilled by the 2'-hydroxyl group in RNA structure and function. Synthetic routes to the phosphoramidite derivatives of 2'-deoxy-2'-C-alpha-methylcytidine (14%, 15 steps) and 2'-deoxy-2'-C-alpha-hydroxymethylcytidine (19%, 10 steps) from methyl 3,5-di-O-(4-chlorobenzyl)-alpha-d-ribofuranoside are developed.
Thanks, I tried forcing the lib path and it solved it... however it then couldn't find the includes path... so I forced that and now it's saying it can't find the package. Hmmm, seems like there is some underlying problem. I have the R2 working with gr-osmosdr but the performance is poor compared to my airspy... I think I need Soapy to get the R2 supported properly. There's an 'uninstall' target in the SoapySDR Makefile, so you can use that to remove it. There isn't one for SoapySDRPlay, but I don't think it does much more than install a lib, which you could manually remove. This was how the install of the latter went for me:
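For reference, a hedged sketch of how those steps usually fit together for a CMake-based build like this. The prefix, module directory, and module filename below are assumptions for illustration (check your own install log for the real paths), not something the post confirms.

```shell
# Illustrative only -- paths and module name are assumptions.
PREFIX=/usr/local

# Point CMake at a non-standard SoapySDR install when building SoapySDRPlay:
#   cmake -DCMAKE_INSTALL_PREFIX="$PREFIX" ..

# SoapySDR's own build provides an uninstall target (run in its build dir):
#   sudo make uninstall

# SoapySDRPlay has no uninstall target, but it mainly installs one support
# module, which can be removed by hand:
MODULE="$PREFIX/lib/SoapySDR/modules/libsdrPlaySupport.so"
echo "module to remove: $MODULE"
```

If the build still can't find the package after forcing paths, the usual culprit is a stale copy under a second prefix shadowing the forced one.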
Buying call options - Fidelity Viewpoints

Short one call option and long a second call option with a more distant expiration is an example of a long call calendar spread. Options trading is not suitable for all investors.

Long Call Option Explained (Best Guide w/ Examples

Synthetic Long Put Options Trading Strategy is a synthetic trading strategy, a type of options trading strategy created by combining a short stock position with a long call of the same series. This article briefly explains synthetic options using a live market example along with implementing it using the Python programming language.

How to Make Money Trading Options, Option Examples

Options Trading Strategies | Top 6 Options Strategies you

Long Call Trading Strategy. The long call, or buying call options, is about as simple as an options trading strategy gets, because there is only one transaction involved. It's a fabulous strategy for beginners to get started with and is also commonly used by more experienced traders too.

Call Option Explained | Online Option Trading Guide

Option (finance) - Wikipedia

Long Call Butterfly is a neutral strategy for when very low volatility in the price of the underlying is expected. The strategy is a combination of a bull spread and a bear spread. It involves buying 1 ITM call, selling 2 ATM calls and buying 1 OTM call. The strike prices of all options should be at an equal distance from the current price.

Long Call Vs Covered Call | Options Trading Strategies

Find out what the trading terms long and short mean. See examples of how to profit no matter which way the market moves.

The Balance - The Difference Between Long and Short Trades

What Is the Difference between "Call" and "Put" Options? What is a Trailing Stop Loss in Day Trading? Large Bid and Ask Spreads in Day Trading Explained.
Learn the basics about call options - Fidelity

In this example, imagine you bought (long) 1 $40 July call option and also bought 1 $40 July put option. With the underlying trading at $40, the call costs you $1.14 and the put costs $1.14 also. Now, when you're the option buyer (or going long) you can't lose more than your initial investment.

The Long Call - Options Trading Strategy for Bull Market

7/7/2018 · In this Long Put Vs Short Call options trading comparison, we will be looking at different aspects such as market situation, risk & profit levels, trader expectation and intentions, etc. Hopefully, by the end of this comparison, you should know which strategy works the best for you.

Long Call - TradeStation

7/11/2018 · Thus, with this, we wrap up our comparison of Long Call Vs Covered Call option strategies. As mentioned above, if you are in a bullish market situation and want to make unlimited profits on your trades, then the Long Call is one of the options trading strategies you can opt for.

Options in Long Term Trading Strategies | Options Profits

The Complete Options Trading Course (New 2019) | Udemy

A call option, often simply labeled a "call", is a financial contract between two parties, the buyer and the seller of this type of option. Trading options involves a constant monitoring of the option value, which is affected by the following factors:

How to Trade Options | TD Ameritrade

Synthetic Long Put Options Trading Strategy In Python

Options involve risk and are not suitable for all investors. For more information, please read the Characteristics and Risks of Standardized Options before you begin trading options.
Also, there are specific risks associated with covered call writing, including the risk that the underlying stock could be sold at the exercise price when the

Option Types: Calls & Puts - NASDAQ.com

Long Call

The VIX Strategy Workshop is a collection of discussion pieces designed to assist individuals in learning how options work and in understanding VIX options strategies. These discussions and materials are for educational purposes only and are not intended to provide investment advice.

A long call option can be used as an - Option Trading Tips

#3: Long Put Options Trading Strategy. Long Put is different from Long Call. Here you must understand that buying a Put is the opposite of buying a Call. When you are bullish about the stock / index, you buy a Call. But when you are bearish, you may buy a Put option.

Buy Call Options / Long Call Options - Options Trading in

Understanding the Difference Between a Long and Short

This strategy of trading call options is known as the long call strategy. See our long call strategy article for a more detailed explanation as well as formulae for calculating maximum profit, maximum loss and breakeven points. Selling Call Options: instead of purchasing call options, one can also sell …

Option Alpha - 12 Free Options Trading Courses | #1

Furthermore, the cost-to-carry savings offered by a long call strategy, versus an outright long stock position, diminish over time. Once time value disappears, all that remains is intrinsic value. For in-the-money options, that is the difference between the stock price and the strike price.

Buying LEAP Options | Long Term Options - The Options Playbook

Learn how to trade options with TD Ameritrade options trading educational resources. View articles, videos and available options webinars so you can discover how to trade options.
Discover how to trade options in a speculative market

When the buyer of a long option exercises the contract, the seller of a short option is "assigned", and

Long Options, Long Call, Long Put - great-option-trading

Options: The Basics -- The Motley Fool

#1 Option Trading Mistake: Buying Out-of-the-Money (OTM) Call Options. Buying OTM calls outright is one of the hardest ways to make money consistently in option trading. OTM call options are appealing to new options traders because they are cheap.

Trading Options: Long Combo Trading Strategy

Long Call Strategies | Ally

Options Trading Excel Long Call: If you buy a call option, then the maximum loss would be equal to the premium, but your maximum profit would be unlimited. The break-even price would be equal to the strike price plus the premium.

Long Call Option Strategy | Call Options - The Options

Buying call options, also known as Long Call Options or simply Long Call, is the simplest bullish option strategy ever and is a great starting point for beginner option traders. Buying call options / Long Call Options offers the protection of limited downside loss with the benefit of leveraged gains.

Buy Options | Online Options Trading | E*TRADE

Long Calls - Definition. Investors will typically buy call options when they expect that an underlying's price will increase significantly in the near future, but do not have enough money to buy the actual stock (or if they think that implied volatility will increase before the option expires - more on this later).

Long call calculator: Purchase call options

Trading Options: Long Combo Trading Strategy

What Is the Long Combo Trading Strategy? As an options trader you can consider using the Long Combo strategy if you are bullish about the market, i.e. you are expecting the stock price to go up. It involves selling a Put and buying a Call option. Strategy Characteristics.
Moneyness of the Long Call Options | Everything You Need to Know

The Complete Options Trading Course is designed to turn you into a highly profitable options trader in a short period of time by providing you with the best options trading strategies that actually work in …
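The long-call arithmetic repeated throughout these excerpts (maximum loss capped at the premium, breakeven at strike plus premium, unlimited upside) can be checked with a short Python sketch; the $40 strike and $1.14 premium are the figures from the Fidelity example above.

```python
def long_call_pnl(spot_at_expiry, strike, premium):
    """Per-share profit/loss of a long call held to expiration."""
    intrinsic = max(spot_at_expiry - strike, 0.0)
    return intrinsic - premium

strike, premium = 40.0, 1.14          # Fidelity example figures
breakeven = strike + premium          # strike plus premium

print(long_call_pnl(38.0, strike, premium))       # expires worthless: loss is the premium
print(long_call_pnl(45.0, strike, premium))       # intrinsic value minus premium
print(long_call_pnl(breakeven, strike, premium))  # ~0 at breakeven (up to float rounding)
```

Below the strike the loss is always exactly the premium paid, which is the limited-downside property the long-call excerpts emphasize; above the breakeven, profit grows one-for-one with the underlying.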
555 F.3d 656 (2009) James G. FRANKE, and all others similarly situated, Appellee, v. POLY-AMERICA MEDICAL AND DENTAL BENEFITS PLAN, Appellant. No. 08-1637. United States Court of Appeals, Eighth Circuit. Submitted: November 14, 2008. Filed: February 5, 2009. *657 Stephen P. Lucke, argued, Kari S. Berman, Diane Bratvold, Glen M. Salvo and Stephen P. Lucke, on the brief, Minneapolis, MN, for appellant. Mark M. Nolan, argued, Mark M. Nolan, Jodell M. Galman, on the brief, St. Paul, MN, for appellee. Before WOLLMAN, BEAM, and BENTON, Circuit Judges. WOLLMAN, Circuit Judge. Poly-America Medical and Dental Benefits Plan (the Plan) appeals from the district court's denial of its motion to compel arbitration. We reverse and remand for entry of an order compelling arbitration. I. James G. Franke has been employed by Up-North Plastics Inc., an affiliate of Poly-America, L.P., since 2001. Through this employment, Franke enrolled in the Plan, which is governed by the Employee Retirement Income Security Act (ERISA), 29 U.S.C. § 1001 et seq. During each year of his employment, Franke acknowledged in writing his agreement to arbitrate any claims associated with his enrollment in the Plan. After suffering a myocardial infarction in November 2006, Franke submitted his medical bills to the Plan for payment. Following the denial of the request for payment, Franke appealed to the Plan's Administrator, Chuck Kramer, who upheld the original decision. Kramer informed Franke that he could "file a written request with the Plan Administrator for final and binding arbitration." Franke chose instead to file suit in federal district court. The Plan moved to compel arbitration. In its submissions to the court, the Plan recognized that certain provisions in the arbitration agreement were unlawful under ERISA, but argued that those provisions should be corrected pursuant to the severability clause in the contract and that the arbitration agreement should be enforced. 
The district court disagreed, however, and concluded that "the mere existence of the illegal provision, in this instance, unduly inhibits or hampers the processing of appeals and therefore renders the arbitration requirement in the Plan unenforceable." II. We review de novo the district court's denial of the Plan's motion to compel. *658 EEOC v. Woodmen of the World Life Ins. Society, 479 F.3d 561, 565 (8th Cir. 2007). The Federal Arbitration Act (FAA) requires that "[a] written provision in any... contract evidencing a transaction involving commerce to settle by arbitration a controversy thereafter arising out of such contract ... shall be valid, irrevocable, and enforceable, save upon such grounds as exist at law or in equity for the revocation of any contract." 9 U.S.C. § 2. The FAA intended "to reverse the longstanding judicial hostility to arbitration agreements ... and to place arbitration agreements upon the same footing as other contracts." Gilmer v. Interstate/Johnson Lane Corp., 500 U.S. 20, 24, 111 S.Ct. 1647, 114 L.Ed.2d 26 (1991). "In light of this federal policy, arbitration agreements are to be enforced unless a party can show that it will not be able to vindicate its rights in the arbitral forum." Faber v. Menard, Inc., 367 F.3d 1048, 1052 (8th Cir.2004). This strong federal policy applies equally to claims grounded in statutory rights, and we have found no "compelling basis to treat agreements to arbitrate ERISA claims differently." Arnulfo P. Sulit, Inc. v. Dean Witter Reynolds, Inc., 847 F.2d 475, 479 (8th Cir.1988). When reviewing the enforcement of an arbitration agreement, we determine only whether there is a valid arbitration agreement and whether the dispute at issue falls within the terms of that agreement. Faber, 367 F.3d at 1052. If there is a valid agreement and the dispute is properly within the terms of the agreement, the agreement must be enforced. Franke concedes that the dispute falls within the agreement. 
He argues, however, that certain provisions contained in the agreement are in violation of ERISA and thus undermine the agreement to arbitrate. As noted by the district court, the two provisions that appear to be unlawful are the agreement's assertion that arbitration is binding and the requirement that arbitration costs be shared. Despite the presence of these two provisions, the agreement before us is distinguishable from the cases on which Franke relies. For example, it does not approach the "sham system unworthy even of the name arbitration" at issue in Hooters of America, Inc. v. Phillips, 173 F.3d 933, 940 (4th Cir.1999). In that case, the agreement was riddled with biased provisions that allowed Hooters, among other things, to choose the arbitrators and unilaterally modify the arbitral rules without notice, presumably even during arbitration. Id. at 938-39. A member of the American Arbitration Association's board of directors testified that the agreement was "without a doubt the most unfair arbitration program I have ever encountered." Id. at 939. Equally misplaced is Franke's reliance on Graham Oil Co. v. ARCO Products Co., 43 F.3d 1244 (9th Cir.1994), for the proposition that the mere presence of unlawful provisions can invalidate an arbitration agreement, given that we have refused to follow its holding. See Larry's United Super, Inc. v. Werries, 253 F.3d 1083, 1086 (8th Cir.2001). Rather, the case before us is more akin to the facts in Woodmen, Faber, and Gannon v. Circuit City Stores, Inc., 262 F.3d 677 (8th Cir.2001). In each of those cases the severability clause found in the arbitration agreements "specifically state[d] the intent of the parties in the event a provision within the agreement is found invalid," i.e., that arbitration proceed once any invalid terms have been severed. Gannon, 262 F.3d at 680. The same is true here. 
Although Franke and the district court voice the concern that Plan participants "cannot be expected to know" that the provisions at issue are unlawful, that is not a sufficient basis upon which to deny the motion to compel. "We will not *659 extend [our limited] review to the consideration of public policy advantages or disadvantages resulting from the enforcement of the agreement." Faber, 367 F.3d at 1052 (citing Gannon, 262 F.3d at 682) (internal quotations omitted). The judgment is reversed, and the case is remanded to the district court for entry of an order compelling arbitration under the Plan as modified pursuant to the stipulated-to amendment.
Primary porcine endothelial cells express membrane-bound B7-2 (CD86) and a soluble factor that co-stimulate cyclosporin A-resistant and CD28-dependent human T cell proliferation. Increasing evidence suggests that endothelial cells can directly activate syngeneic, allogeneic and xenogeneic T cells. In this study we demonstrate that unstimulated, paraformaldehyde-fixed primary porcine aortic endothelial cells (PAEC) and microvascular endothelial cells (PMVEC) can provide co-stimulation for human T cell IL-2 secretion and proliferation. EC-mediated co-stimulation has both cyclosporin A (CsA)-sensitive and CsA-resistant components. The CsA-resistant component is completely suppressed by blocking with either anti-CD28 F(ab) fragments or CTLA-4-Ig. Northern blot analysis of unstimulated PAEC and PMVEC with porcine-specific probes reveals constitutive expression of B7-2 mRNA, while B7-1 message is not detected. hCTLA-4-Ig and anti-B7-2 mAb immunoprecipitate a single 79 kDa PMVEC surface protein. Surprisingly, PMVEC-conditioned media also has soluble co-stimulatory activity that is blocked by anti-CD28 F(ab) fragments or anti-B7-2 mAb. These findings demonstrate that primary unstimulated porcine EC can co-stimulate CsA-resistant human T cell proliferation through binding of membrane-bound, constitutively expressed EC B7-2 (CD86) to human T cell CD28, providing one of the first demonstrations of functional B7-2 on cells outside the immune system. In addition, PMVEC secrete or shed a soluble factor that mediates CD28-dependent human T cell proliferation, demonstrating the existence of soluble mediators of CD28 activation.
<?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="ProjectModuleManager"> <modules> <module fileurl="file://$PROJECT_DIR$/flutter_lottie.iml" filepath="$PROJECT_DIR$/flutter_lottie.iml" /> <module fileurl="file://$PROJECT_DIR$/android/flutter_lottie_android.iml" filepath="$PROJECT_DIR$/android/flutter_lottie_android.iml" /> <module fileurl="file://$PROJECT_DIR$/example/android/flutter_lottie_example_android.iml" filepath="$PROJECT_DIR$/example/android/flutter_lottie_example_android.iml" /> </modules> </component> </project>
Dallas billionaire Harold Simmons is photographed in his North Dallas office. DALLAS — Dallas billionaire and heavyweight GOP political donor Harold Simmons, who has given tens of millions of dollars to Republican candidates, including Texas Gov. Rick Perry and former presidential candidate Mitt Romney, has died. He was 82. Simmons, born to two school teachers in East Texas, became one of the richest men in the country, with interests ranging from energy to chemicals. Simmons’ spokesman Chuck McDonald said Simmons died Saturday in Dallas. McDonald said he did not know the cause of death. Perry on Sunday called Simmons “a true Texas giant, rising from humble beginnings and seizing the limitless opportunity for success we so deeply cherish in our great state.” “His legacy of hard work and giving ... will live for generations,” Perry said in a statement. Simmons’ wife, Annette Simmons, told The Dallas Morning News her husband died at Baylor University Medical Center at Dallas. She said he’d been in Baylor’s intensive care unit for the last eight days, the newspaper reported. She did not give the cause of death. Attorney General Greg Abbott noted in a statement that Simmons “shared his success with the state he dearly loved, giving generously to make advancements in healthcare and to improve higher education.” Simmons has given tens of millions to Texas organizations, including charities, medical groups, education groups and civic organizations. Daniel K. Podolsky, president of UT Southwestern Medical Center, said his donations to that institution alone approached $200 million. He ranked No. 40 on Forbes’ list of the 400 wealthiest Americans, with a net worth of $10 billion as of the fall, according to Forbes. “Harold Simmons was one of my best friends, and it’s never easy to say goodbye to close friends,” Texas oil tycoon T. Boone Pickens said in a statement. “Harold accomplished so much in his life.
He was a passionate person — passionate about his family, his business, philanthropy and politics. ... We should all leave such a rich legacy behind.” According to a biography on his namesake foundation’s website, Simmons earned his bachelor’s and master’s degrees from the University of Texas. He decided at the age of 29 to buy a small Dallas drugstore, according to his biography. He went on to buy Williams Drug Co. in 1966 and 30 more drug stores the next year, followed by an $18 million buyout of Ward’s Drugstores in 1969. He sold his stores in 1973 for $50 million in Eckerd stock. He then started a career as an investor, buying major positions in publicly traded companies. In 2008, Simmons bankrolled ads linking then-presidential candidate Barack Obama to William Ayers, a Vietnam-era militant who helped found the violent Weather Underground. Simmons was also a key backer of the Swift Boat Veterans’ attacks on Democratic presidential candidate John Kerry in 2004. Simmons also called Obama “the most dangerous American alive” in an interview with the Wall Street Journal last year. According to The Dallas Morning News, his foundation has also recently donated $600,000 to Resource Center, a group that serves the city’s lesbian, gay, bisexual and transgender community. Other donations have included $5 million to the campaign to build the AT&T Performing Arts Center in Dallas.