A podcast all about Japanese cartoons and comics as discussed by three self-proclaimed experts in the world of anime and manga! Plus anime news / reviews, coverage of classic anime, hentai / yaoi, and much, much more. Updated every week. We hope. Our special guest this week is Zac “Answerman” Bertschy from Anime News Network! We spend the episode giving our con report of Otakon 2006. Clarissa is woefully absent from this recording, but she’ll be back next time. If you’re lucky.

Introduction (0:00 – 3:45)
I guess Jeff Nimoy sounds like Wolfwood as his regular speaking voice. Daryl quickly introduces this episode, since he thought there was a pre-recorded intro but there wasn’t. He actually had acute bronchitis at the time this segment was recorded, so that’s why his voice sounds kind of off. Thanks to your generous support, we have successfully met our donations goal, so we’re getting some brand new audio equipment! For no other reason than that Mur Lafferty and Rob Walch told us to, we’ve ordered one Alesis MultiMix 8 USB mixer and two M-Audio MobilePre USB preamps. We’ll order the third preamp after JACON matches the up to $250 that they so generously offered, but it’s not like we’re going to be battering on their door saying “hey, where’s our free money?!”, especially when you consider that we don’t actually KNOW how to use any of the equipment yet! Alternatively, we could get another $40 in donations and that’d cover the cost of the final preamp right now, but you guys have already done way more than enough, WAY faster than any of us would have imagined. The Otakon 2006 report constitutes the rest of the show. Here are time markers denoting when we start to talk about certain subjects. Does Zac’s voice sound similar to Alan Chaess to anyone else?
4:45 – What Gerald and Zac expect out of Otakon each year, which ultimately kicks off an East Coast vs West Coast anime fandom feud

8:05 – Getting to the convention and the hassles involved therein…for just Daryl: he referenced The Way of the Gun casually, a movie nobody else likes but him

16:02 – Gerald talks about going to dinner with other podcasters Thursday night, an event Daryl mostly missed because he was one step away from reliving Judgment Night, but Daryl was there just long enough to make most everyone else there despise him, resulting in this rather severe joke. Several people have tried similar initiatives over the years, and all it does is remind Daryl of how depressing it is that even Mike Jittlov is more lovable than he is. Worry not, he follows the credo that Toki spoke unto almighty Kenshiro: “turn your sorrow into anger, and move on…”

17:45 – Press access, how unnecessarily difficult it is to obtain at Otakon, and how Otakon’s the only anime con where we still had to pay to get in even as press, since let’s face it, we’re not really press that MATTERS…not quite yet, anyway!

23:40 – Badge selection, why it sucked, and why Daryl and Gerald are out of touch with reality

Promo: The Greatest Movie EVER! Podcast (27:20 – 28:07)
Not only did Paul donate the largest single amount to us out of everyone who sent in donations (Daryl’s good friend Jason donated just as much, and between the two of them that pretty much covered most everything), he has a podcast where he has VOWED to talk about Highlander 2…WITH HIS MOM. After having already reviewed American Ninja with his mom as well as Godzilla vs the Smog Monster, the next logical step is for the two of them to review Zardoz together.
28:08 – Signs at conventions and the thankful lack thereof at Otakon

30:58 – The con guide, con mini-guide/map/schedule, and how useful/accurate these actually were

37:00 – Staffers at Otakon, the recent trend of anime cons having trouble getting their volunteers/staffers to actually SHOW UP for their shifts, and our dealer’s room experience

46:43 – Panel stuff: the toy collecting panel that was contradictory in its message, the Gainax/Ghibli/Studio 4C panels that we were hoping would have been more, and the awesome sentai panel as run by the guy who hosts the Rangercast podcast. We’ve been told that the people from TV-Nihon have declared this guy anathema because he had the BRAZEN NERVE to show Mighty Morphin Power Rangers at a sentai panel, and according to them “POWER RANGERS ISN’T SENTAI, RAAAAGH!” as if to suggest that they must have all gotten into that terrifying fandom by watching Spectreman on TV right as it came out or something. As a result, they plan to do all they can to ensure he doesn’t get the sentai panel next year. If this is indeed true, then that is absolute, total crap. Also, the reason so many people think Gerald has a weird accent is because he pronounces “schedule” as “shed-ule” unless he deliberately thinks not to.

53:00 – Cosplay stuff at Otakon and how most of it wasn’t anime-related

Promo: Fast Karate for the Gentleman (54:05 – 55:03)
Now that Daryl has admitted that he steals his jokes from this podcast, more people are listening to it and are able to realize that he was telling the truth for once. It is only fitting that this promo be played right after one more such joke was stolen.

57:03 – The Older Otaku panel that was run by people about our age…hey, wait a second!
1:00:18 – The Madhouse panel and the Mecha Anime Trivia panel which Gerald won – find out how Gerald hates the concept of giving things away that he doesn’t like or need to begin with, Zac reiterates his love for Anime Expo, and Daryl uses HIGHLY OFFENSIVE anti-Semitic remarks and Mel Gibson’s recent antics for comedy, thus earning this podcast the wrath of anti-defamation groups everywhere!

1:06:20 – Zac tells us about what REAL press do at anime conventions (sobriety optional)

1:07:43 – The state of industry panels at anime conventions

1:09:58 – Panels that Otakon didn’t expect quite so many people to attend: the Kingdom Hearts panel, the 4chan panel which Gerald and Zac didn’t like, and the Oldschool Anime panel which nobody liked

1:15:56 – Zac offers up an idea that is GUARANTEED to make someone FILTHY RICH if they actually implement it

1:18:00 – The podcasting panel as hosted by GeekNights and the Anime News Network panel which Zac was on; Zac plugs some of ANN’s future (probably current by now) contests and promotions

1:27:30 – What appears to be the official Otakon response to the various con gripes, and our final thoughts

Closing (1:36:12 – 1:37:55)
The hissing that Daryl was talking about probably disappears once he sets the sampling rate to 22050. Next week, Clarissa will be back as we answer listener emails with Dave Merrill, former Anime Weekend Atlanta con chair and Anime Hell grand poobah, as featured in the documentary Otaku Unite! which is now available on DVD. Dave is officially Smarter Than Us and is someone that Daryl has always aspired to be like, but comes up short time and time again.

I wore my costume to a non-cosplay panel I ran this year. Didn’t think too much of it other than I spent a lot of time on it, so dammit I’m gonna wear it. At least I was talking about parody dubs, so it’s not like I needed people to take me seriously.

I must know, Gerald, what Gundam SEED model kit was so much more important than the happiness of your dear fans.
Because simply put, you have no excuse, unless it was the 1/12 Person-sized Original RX-78 Gundam. I can understand why Daryl and Gerald might think that the badges were co-sponsored by the anime studios. It is free press, after all. Otakon is practically ONE gigantic commercial made more effective because it is opt-in. Admitting it sucks would make you stupid for paying for it in the first place. But since Otakon is small, it stands to reason they don’t have enough muscle to talk with the big wigs. A true Otakon would have badges from all anime genres. But the people profiting from anime aren’t fans. They are businessmen. Just look at the reporters that AWO compete with. If AWO, with their puny resources, could find, engage, and create newsworthy stories, then there must be something seriously wrong with the “real news” who are rarely knowledgeable about the subject of their reports. AWO is a bit too idealistic sometimes. That’s how you can tell that they do love the industry. Cynical yet wide-eyed naivete. This post is a mass of contradictions! The badges reflect the manga/anime distribution numbers. AWO & company is not the market for them. I would love badges of Yamato Takeru, Future Boy Conan, SD Gundam, Shurato, GaoGaiGar, the chick from Starship Yamato, and the ninja girl from Wataru. Fuck Hellsing and Speed Grapher. It’s safer to do cons the corporate way. Or you can do things like Penny Arcade and be GREAT. I missed PA expo this year but it looks like it was great. Sigh. If you are going to make money off anime fans, employ real fans! They aren’t all uncreative, unintelligent lowlifes. Just look at AWO. 2 out of 3 ain’t bad. Your observation of the lack of staff was not wrong at all. It was in fact right on the mark. I had dinner with an Otakon staffer on Saturday night, and he told me how, for the first time in years, they had a SERIOUS shortage of people on staff helping to just run things in general, and he told me how it was a bit of a pain to many involved.
They did have more of those security guys in the pseudo-cop uniforms, but all they really did was yell at people in a very annoying way if they were anywhere near the escalators… I must know, Gerald, what Gundam SEED model kit was so much more important than the happiness of your dear fans. Because simply put, you have no excuse, unless it was the 1/12 Person-sized Original RX-78 Gundam. Well, this is the model I won on the right. On the left is just something I was very happy to find at Otakon, and it wasn’t too expensive either. And if you’re talking about my fans, you’re deluding yourself since I have none; if you mean for the podcast in general, well, it was a general lapse of judgement. BUT WAIT, what’s Daryl doing with that Patlabor 1 mega box that he said he was going to give away, or that Tenchi Universe box set that he won from a show that he hates? Oh HO, NAY, I say that we are one and the same and are only thinking of ourselves! That’s strange that they’d give you all that trouble for press. I was a mere panelist and was allowed to go to the Special Needs booth with no line, and only had to provide photo ID. This might have had something to do with our intentional attempt to travel in a big group of people wearing TEAM 4CHAN shirts. (No, we have no idea where our users come from either.) And I promise that my panel on anonymous Japanese internet forums was almost kind of about anime.

> We’ve been told that the people from TV-Nihon have declared this guy anathema because he had the BRAZEN NERVE to show Mighty Morphin Power Rangers at a sentai panel

On one of those websites where you can download things for free, TV-Nihon posts always have the comment field filled with insults of everyone on the planet not them. I once suggested the site filter out their comments, but I don’t think the owner liked my idea. Zac seemed much more upbeat than he did posting on SA, didn’t he? Must be the time delay. Huh…Gold Frame Amatsu…I guess that’s a pretty good mobile suit.
But still, it’s not MG level at least, even if it is 1/100 scale. And you see, I was at one point your fan. Until the Search for the Truth for JACON came, and you were playing Dead or Alive 4. That just tore me up inside. Yeah, so where the hell did the Patlabor box set that I attempted to ask for but failed to get, go to anyways? That’s what I want to know. You guys need to cut down on the con coverage. It’s really provincial and just plain doesn’t matter much. Yeah, we really should spread these out some. For what it’s worth, “con season” is pretty much over, so the only con report left that’ll take up a full show is Anime Weekend Atlanta. Which I really should start working on panels for about now. BUT WAIT, what’s Daryl doing with that Patlabor 1 mega box that he said he was going to give away, or that Tenchi Universe box set that he won from a show that he hates? Oh HO, NAY, I say that we are one and the same and are only thinking of ourselves! The Patlabor box is packed and ready to ship out to Jeff Tatarek. I just have to go to the post office and actually send it, which is tricky because my work hours overlap with the post office’s hours and I always oversleep on Saturdays. The Tenchi box set is in my “to watch” pile right next to the Black Jack, Gatchaman, Dirty Pair, Tetsujin 28, and soon to be all of Ghost in the Shell SAC. I don’t think Zac actually has an SA forums account himself, though he was certainly aware of the site. Besides, people have to look like their avatars in real life. TheSwami almost certainly looks like Jason Schwartzman, much like I have a giant pulsating toe and Gerald is…uh…what is Gerald again… A fat, mostly bald infant baby head with a question mark and “STUPID NEWBIE” written on it. Yeah, that’s the ticket.
I think that site needs a standard avatar of a lollipop with “Sucker” written on it for having paid $10 to register for an account on a forum and then another $5.00 to change your avatar. Man, to pay to post on a forum, and another $10 to search; those are the suckers. Thankfully my account is free and I never use it. It’s a waste of time and money to sell that on eBay. You should just build the thing, buy a gun, and YouTube the execution of the Gundam. What’s even worse, now that I think about it: I was informed right after I picked the model that it wasn’t actually from any Gundam TV show and was from a manga instead. So not only did I get an eh Gundam, but it’s like a side-story/fanfiction Gundam to boot! For what it’s worth, I can’t stand SEED and I think that design is pretty awesome, if way overdone. The wings should go, and the hooks should go where the wings are. Also, what is that Wolverine beam claw bullshit? The rest is pretty swank, though. It’s a very interesting side mobile suit, though. But I don’t think they could put any of the Gundams in the side stories into SEED. They’re all too powerful, so it would be like adding another three or four Strike Freedoms. Hm, I have to say I was slightly annoyed by your review of the toy panel. I’m biased, as I was one of the panelists. But, that aside, you didn’t even get your facts straight on the panel itself. It was a toy collecting panel, yes. However, you took a lot of our comments out of order. We said yes, collect toys. I myself said DO NOT collect for the monetary value, but for the sheer enjoyment of collecting. We explained why our toys are stored as they are: because of the lack of display space available to us at the moment. It was also not just anime toys we were referencing. We NEVER said “how to buy and store your toys so they’ll be worth something as collector’s items”.
We did cover how to store toys for those that do wish to save them for collecting, since that was one element of what our panel was about. The rest of the panel focused on the joy of collecting and what strange things you can find out there. The storage part was only about 5 minutes’ worth of panel time. Nothing was mentioned about profiting from toy collecting. Get your facts straight next time. It can’t be that bad a panel if we’ve been there for 3 years and people keep coming back to see us. Maybe it is. Who knows. But like I said, get your facts straight before you do a review of something you didn’t even fully attend. We had a good amount of positive feedback after the panel ended from those that did bother to stay for the whole thing. I do admit our panel wasn’t as polished this year as it was in years past, and that is purely our fault. Had you chosen to critique that, I would have been OK with your review.
Q: Creating a multi-level comments system in ASP.NET MVC

I'm currently working on a blog written in ASP.NET MVC. I have a table that stores comments with the following model:

```csharp
public partial class Comment
{
    public int CommentId { get; set; }
    // Renamed from "Comment" in the original post: in C# a member
    // cannot have the same name as its enclosing type.
    public string Content { get; set; }
    public System.DateTime CommentDate { get; set; }
    public int AuthorId { get; set; }
    public int PostId { get; set; }
    public int CategoryId { get; set; }

    public virtual Category Category { get; set; }
    public virtual Post Post { get; set; }
    public virtual UserProfile UserProfile { get; set; }
}
```

Currently the comment is still a one-level comment, i.e. a user cannot reply to a comment. How can I implement a multi-level comment system in such a way that a comment can have a reply, that reply can have another reply, and so on?

A: Add a ParentCommentId column which refers back to the Comments table. After that, for each reply:

If it is a direct reply to the post, leave the ParentCommentId column empty (NULL).
If it is a reply to a previously posted comment, put that comment's id in this column. In this case you can leave the PostId column empty or not; it depends on your preference.
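Once a parent-comment column is in place, rendering threads means regrouping the flat rows into a tree. Here is a minimal sketch of that regrouping in Python (illustrative only; the field names `id`, `parent_id`, and `text` are placeholders, not the poster's actual schema):

```python
def build_tree(comments):
    """Group flat comment rows into a nested tree via parent_id."""
    # Index every row by id, giving each node an empty replies list.
    by_id = {c["id"]: {**c, "replies": []} for c in comments}
    roots = []
    for node in by_id.values():
        parent_id = node["parent_id"]
        if parent_id is None:
            # Direct reply to the post: a top-level comment.
            roots.append(node)
        else:
            # Reply to another comment: attach under its parent.
            by_id[parent_id]["replies"].append(node)
    return roots

rows = [
    {"id": 1, "parent_id": None, "text": "first"},
    {"id": 2, "parent_id": 1, "text": "reply to first"},
    {"id": 3, "parent_id": 2, "text": "reply to the reply"},
]
tree = build_tree(rows)
```

Because every node is indexed before any is attached, this works in a single pass regardless of row order, which is the usual way to materialize an adjacency-list table like the one the answer describes.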
Aldo - Bartolello-R Suede Oxford Shoes Navy suede size: Synonymous with everyday luxury, the Canadian brand has been designing with a steadfast commitment to quality since its humble beginnings. Reworking a classic Oxford silhouette with a modern suede build, the Bartolello shoe showcases the label's dedication to detail with its exquisite brogue and wingtip perforation – ideal for desk-to-dinner dressing.
Could “personal intelligence” be the future for workplace success? The concept of emotional intelligence continues to gain further ground in the management sphere, as many begin to recognise the value of employees with this trait. And while it looks set to hold high regard in the workplace for years to come, a similar new theory – that of “personal intelligence” – may yet add more weight to the argument that a holistic approach to candidate recruitment and selection is needed. Coined by the University of New Hampshire's renowned Professor John Mayer, who was one of the leading proponents of the theory of emotional intelligence, personal intelligence refers to the ability to “understand our own personality and the personalities of the people around us”. Those who can use this form of “broader intelligence”, Professor Mayer says, are likely to possess superior interpersonal skills – a key requirement of the modern workplace. “People who are high in personal intelligence are able to anticipate their own desires and actions, predict the behaviour of others, motivate themselves over the long term, and make better life decisions,” he explains. One of the most important skills those with high personal intelligence possess is the ability to read others' non-verbal cues and make quick, efficient decisions based on them. Additionally, these people can draw from the feedback from others to gain a better perception of themselves. “We draw initial guesses about personalities based on how people dress and present themselves, and we adjust how we interact with them accordingly. We run through scenarios in our heads, trying to anticipate how others will react, in order to choose the best course in dealing with a boss, a coworker, or a partner,” he says. Ultimately, those with greater personal intelligence make decisions in all aspects of their life based on what will be the best fit for their personality.
Whether it's emotional or personal intelligence, employees with high levels of these forms of “broader intelligence” have plenty to offer the right employers. Making personality assessments a part of your recruitment procedures can help you find those who possess these important traits.
The Evry public prosecutor's office opened an investigation Wednesday morning into the circumstances of a young woman's suicide in Egly, in the Essonne, on Tuesday afternoon, France Info has learned. She threw herself under a train on the RER C line. According to investigators, the 19-year-old victim took her own life live on camera, in a video streamed on the Periscope app in front of 35 to 40 connected viewers. The young woman lived in Egly, apparently alone; her parents live in the same department. A few hours before the events, she began to stage, in a way, the preparation of her act, announcing to her Periscope followers that she had to do something live. In reality, she was announcing her coming suicide to users of the live video-streaming app. In the videos she mentions a romantic disappointment, and in one sequence, which has since been taken down, she takes her own life. Users recognized the Egly station, but it was already too late. The phone seized: at the scene, the suicide was also recorded by the station's CCTV cameras. After the firefighters arrived, police seized the young woman's phone. The Evry prosecutor's office and the Egly territorial gendarmerie brigade are now handling the case. They will try to establish the exact circumstances of the tragedy. According to initial information released by the judicial authorities, the young woman reportedly sent a text message to someone close to her a few minutes before her suicide. France Info has so far tried, without success, to reach Twitter France, which owns the Periscope app.
Apparent treatment-resistant hypertension among elderly Korean hypertensives: an insight from the HIT registry. The aim of this study was to determine the clinical characteristics of patients with resistant hypertension (RH) and its predictors among elderly Korean hypertensives. This prospective, multi-center, observational study evaluated 2439 elderly hypertensive patients between December 2008 and November 2011, who visited secondary hypertension clinics for high blood pressure (BP). Patients were categorized as resistant if their BP was ≥140/90 mm Hg and if they reported using antihypertensive medications from three different drug classes, including a diuretic, or drugs from ≥4 antihypertensive drug classes, regardless of BP. Characteristics of patients with RH were compared with those of patients who were controlled with one or two antihypertensive medications after 6 months of antihypertensive treatment. In comparison with 837 patients with non-RH, 404 patients with RH were more likely to be aware of their high-BP status before enrollment and to have a high baseline systolic BP ≥160 mm Hg, microalbuminuria, high body mass index (BMI) ≥24 kg m⁻² and diabetes mellitus (DM). In drug-naive patients, awareness of hypertension at baseline was the only independent predictor for RH. In elderly Korean hypertensives, BMI (≥24 kg m⁻²), baseline systolic BP (≥160 mm Hg), microalbuminuria, DM and awareness of hypertension showed an association with RH.
Three default taxonomies are used here: lingua, tags, and categories. By convention, taxonomies that are typically single-valued use the singular form (for example lingua, because a document can be written in only one language), while taxonomies that can have multiple values usually use the plural form. Taxonomies allow you to partition or divide your web site into groups of related documents. For example, you can identify all English documents with the condition lingua="en". Likewise, you can identify all documents that are tagged with Qgoda. By combining taxonomies you can define arbitrary subsets of a site's documents: you can, for example, identify all English documents tagged with Qgoda and Plug-Ins by AND-ing the two above conditions.
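As a purely hypothetical illustration (the surrounding text names only the taxonomy keys, not the exact front-matter syntax), a document that participates in all three taxonomies might declare something like:

```yaml
# Hypothetical front matter; key names follow the convention above.
lingua: en                  # single-valued: one language per document
tags: [Qgoda, Plug-Ins]     # multi-valued, hence the plural key
categories: [Documentation]
```

A document with this front matter would match both the lingua="en" condition and a tag condition on Qgoda, so it would fall into the combined (AND-ed) subset described above.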
Wings play big role in golden victory

TURIN, Italy (AP) - The three crowns on Sweden's hockey sweaters are said to represent three great kings. Try convincing any fan they don't stand for hockey royalty: Forsberg, Sundin and Lidstrom. Sweden's three biggest stars came through in its biggest game ever, with Nicklas Lidstrom scoring the game-winning goal 10 seconds into the third period on assists by Mats Sundin and Peter Forsberg as the Swedes beat rival Finland 3-2 Sunday for the Olympic men's hockey gold medal. Three stars and three goals combined to make for one huge celebration in Sweden, which once again established its on-ice superiority over its smaller neighbor. Finland had been unbeaten in seven Olympic games in Turin, playing near-perfect hockey, but again couldn't beat the team it wants to beat most. On Saturday, Finn general manager Jari Kurri joked it was like a little brother-big brother matchup. The only question was whether Finland could shake its little brother role. Again, it couldn't. The game-winner came so quickly in the third, Finn goalie Antero Niittymaki almost didn't react. Forsberg, playing despite a severe groin injury that kept him out of the Philadelphia Flyers' last eight games, grabbed the puck off the faceoff and fed ahead to Sundin, whose perfect-as-can-be drop pass to the blue line was one-timed by Lidstrom past Niittymaki. Finland pressed and pressed for the tying goal after that, and nearly got it on a chance by Olli Jokinen with 20 seconds remaining. But Swedish goalie Henrik Lundqvist made a series of big saves in outplaying fellow NHL rookie Niittymaki, who had shut out three of his previous five opponents. After they won, the Swedes mobbed each other behind the Finn goal, Sundin and Forsberg grabbed Swedish flags and carried them around the ice, and Forsberg joyously tossed both gloves into the crowd. Several Swedish players cried during the medals ceremony.
Sweden's second gold medal in four Olympics - it also won on Forsberg's dramatic shootout goal against Canada in 1994 - more than made up for its dreadful loss to Belarus in the 2002 quarterfinals. One of the biggest upset losses in Olympic history eventually led to former coach Hardy Nilsson's firing and the hiring of coach Bengt-Ake Gustafsson, a former NHL defenseman. Unlike Finland, which outscored its opponents 27-5 while winning its first seven games, Sweden was far from perfect in Turin - losing to Russia 5-0 and also to Slovakia, when Gustafsson caused a major stir by suggesting his team might choose to lose to set up a more favorable quarterfinals game against Switzerland. Sweden also won in what likely was the final Olympics appearance for its major stars, including Forsberg, Sundin and Lidstrom, so it was only fitting they all played a role in the decisive goal. Three Swedes played on both Olympic gold medal teams - Forsberg, Kenny Jonsson and Jorgen Jonsson. Finland and Sweden have met in three world championship finals, the last eight years ago, but this was the first time the Nordic neighbors had played each other for an Olympic gold medal. That each was trying to win against its biggest rival only increased the pressure in a game that was expected to attract record TV audiences in each country. This smaller-than-small country matchup probably didn't do much for NBC's Nielsen ratings - but the Nilsson ratings in Sweden no doubt were off the charts. Sweden has been more dominant on the world stage than Finland, winning seven world titles to the Finns' one, and is 2-1 in world championship finals against its next-door neighbor - the country that, until 1809, was under Swedish rule. Even today, hundreds of thousands of Finns speak Swedish.
Finland, as it has consistently throughout the tournament, scored the opening goal - this time, on Kimmo Timonen's slap shot from the blue line that flew through traffic in front of the net and deflected off Lundqvist's skate and into the net. But the Finns, mistake-free until Sunday, lost some of that defensive discipline while taking four consecutive penalties during one stretch of the second period. Sweden took advantage by scoring twice, with both goals by Detroit Red Wings players: Henrik Zetterberg slightly less than five minutes into the period and Niklas Kronwall eight minutes later. Kronwall joined the team before Friday's semifinals to replace the injured Mattias Ohlund. Finland tied it at the 15-minute mark when Jussi Jokinen threaded a beautiful backhand pass through the crease and between defenseman Lidstrom's legs directly onto Ville Peltonen's stick.
Any idea when the KMP9 magazines are going to hit the market? I've only seen them at Airsoft Atlanta, but they were posted as out of stock, and this is a brand new gun. That, and I don't even see them in the Pro Shop. I can't figure out whether I should buy 2, 5, or 8; 2 will fit into my shoulder holster, but 5 will fit onto a tac belt, and 8 will fit even better... Plus, the more I have, the more I get to roll people before I have to go hide and reload O_o And while I'm at it, the manual says to put "a little bit" of silicone oil on the various valves of the mag... but how much is "a little"? I can't really help but wonder... AirsoftGI was selling spare KMP9 mags for $43.00 each. The KWA Pro Shop has them for $59.95 each. Why the gigantic price difference? Only reason I ask is GI doesn't look like they're going to have them back any time soon.
Q: Gradle building takes forever after upgrading to Java 8

I tried to enable Java 8 features in Android Studio as suggested on https://android.com:

```groovy
defaultConfig {
    ...
    jackOptions {
        enabled true
    }
}
compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}
```

After that I added compile 'net.sourceforge.streamsupport:streamsupport:1.5.1' and was able to use lambdas. Since I've done that, the Gradle build takes forever (I killed the process after 20 minutes to try other solutions). My hardware is not great, but this is still not an acceptable build time (and the build never finished). I also tried to remove these changes, but then I face related compilation errors. I could pull the previous app version from git, but I would rather solve these issues so I can use Java 8 features. Did anyone face this problem and manage to solve it? Thanks.

A: The Jack toolchain is now deprecated, and Java 8 features are available "natively" as of Android Studio 3.0. Rather than trying to get your Gradle builds to go faster with Jack, you should upgrade Android Studio.
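Concretely, the answer's suggestion amounts to deleting the jackOptions block and keeping only compileOptions once you are on Android Studio 3.0 and the matching Android Gradle plugin. A sketch of the resulting module-level build.gradle fragment (assuming the rest of the asker's android block stays as-is):

```groovy
android {
    // No jackOptions block: Jack is deprecated. With Android Gradle
    // plugin 3.0+, Java 8 language features are supported by the
    // default toolchain, so compileOptions alone is enough.
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}
```

The streamsupport dependency can stay; it is an ordinary library and is unrelated to the Jack slowdown.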
Determination of plasma heparin by micellar electrokinetic capillary chromatography. A micellar electrokinetic capillary chromatography (MECC) method is reported for the separation of heparin and for the possibility of direct determination of free heparin in plasma. The conditions for MECC were: pH 8.5, 25 mM sodium dodecyl sulfate (SDS), 25 mM borate buffer, with a 30 cm × 50 μm i.d. fused-silica capillary. The sample was detected with a UV detector at 270 nm with heparin as external standard. The recovery rate was 95.6–98.7%. The method was linear in the range 80–7000 U l⁻¹. The within-run and between-run relative standard deviations were lower than 3.1% and 4.5%, respectively. It is suggested that this MECC method may be used to determine blood samples containing high levels of heparin.
Pete Dougherty | Packers News

As intriguing as John Schneider to the Green Bay Packers sounds, it’s a long shot, if that. The Seattle Seahawks have denied Packers President/CEO Mark Murphy permission to talk to Schneider about the team's opening at general manager. Based on the wording of the NFL’s anti-tampering policy, it looks like they have that right.

That means the only avenue for the Packers to get Schneider is if the Seahawks would part with him for compensation. Money won’t do the trick. It would take a draft pick, very likely a first- or second-rounder. The Packers can't afford to give up so valuable a resource, which their next GM is going to badly need for a quick build-up of the roster.

I’m also not convinced coach Pete Carroll would let Schneider go anyway. Maybe. But the Seahawks are built on the Carroll-Schneider partnership, and Carroll very well might be unwilling to end it. There’s a reason the Seahawks made sure to remove his out clause for the Packers when they extended Schneider’s contract in July 2016 at a reported $4 million a year.

“Pete and (Schneider) have a superior relationship,” said an NFL source who knows both well. “I don’t ever see that being broken away, but you never know.”

One big question is whether Murphy’s pursuit of Schneider, Oakland’s Reggie McKenzie (turned down the interview), Minnesota’s George Paton (interview denied by the Vikings while they’re still in the playoffs) and Baltimore’s Eric DeCosta (no reports that he accepted the interview) is part of a real search, or a formality with Murphy knowing all along whom he’s going to hire.
I think back to March 2016, when the Packers promoted Eliot Wolf to director-football operations and Brian Gutekunst to director of player personnel. At the time, Murphy said he and Ted Thompson had a succession plan at GM. I and some people who follow the Packers closely, including Bob Harlan, the Packers’ chairman emeritus, interpreted Wolf’s new title as Murphy’s signal to the rest of the NFL that Wolf would be the Packers’ next GM.

Then in February 2017, Murphy changed his answer and said he had “a plan in place for the process to find (Thompson’s) successor.” That sounded like he’d hire a search firm to help with the process, as he’d done to fill other positions in the organization, and as the team did to find him. Murphy did just that when he hired Jed Hughes of Korn Ferry as a consultant on this search.

But it might be only Murphy’s public answer that changed in ’17, not his actual plan for succession. Though Thompson has worked closely with all the Packers’ in-house GM candidates, sources say he has worked most closely with Russ Ball, who still appears to be the front-runner for the job. So as far back as 2016, Murphy and Thompson might have been planning for Ball to be the next GM. We’ll know when Murphy announces his choice.

I can see why coach Mike McCarthy would have concerns about Ball as GM, as has been widely reported in the last couple of days. One NFL source told me that Thompson and Ball have been in lockstep on the team’s approach to acquiring players, which is the most draft-oriented approach in the NFL. Without saying it outright, McCarthy in his season-ending news conference made clear he didn’t think Thompson had done enough in the last few years to upgrade the Packers’ roster. At one point McCarthy asked rhetorically if the Packers were doing enough to win the Super Bowl. “That question … needs to be answered throughout football operations,” he said.
As Thompson’s salary-cap adviser, Ball was involved directly in all the Packers’ re-signings and general lack of interest in free agency. What we won’t know unless and until he sits in the GM seat is whether he would maintain that approach while in charge. Murphy will know, though, from his interview with Ball, which presumably took place sometime this weekend.

Murphy’s search has had a chaotic feel to it late this week, in part because his choice could create a ripple throughout the team’s scouting department. Three of the candidates are in-house, which raises all sorts of questions about what happens with the others if one is chosen for the job. Having a longtime coach in place adds another layer to the intrigue.

What jumped out at me from McCarthy’s news conference was his bold stance on the GM-coach fit being a two-way street. He clearly was trying to influence Murphy and almost surely in private has had a chance to make his preferences known. But in the end, the new GM will be McCarthy’s boss. Murphy was explicit on that at his own news conference. Would McCarthy really try to get out of the final two years of a contract that pays about $8 million a year, and away from a team that has Aaron Rodgers at quarterback? I seriously doubt it.

Still, the ripple could be real. If Murphy’s choice in fact is Ball, what happens with Wolf and Gutekunst? As Thompson’s No. 2 in personnel, at least by title, Wolf would feel spurned. You can bet John Dorsey wants him in Cleveland if the Packers would let Wolf out of his contract. Gutekunst reportedly is interviewing for the Houston Texans' GM job Sunday. He could end up going there.

Ball probably could persuade at least one of the two to stay as his top personnel man. That job would have a lot of influence, because Ball’s background is mostly in the cap and administration, not personnel. And it would carry a big raise. But could he retain both? That could be tougher.
#include <bits/stdc++.h>
using namespace std;

typedef long double ld;

// Edge in the graph: destination node and the success probability of the link.
// The comparison is inverted relative to cost-based Dijkstra: the priority
// queue should pop the *largest* probability first.
struct edge {
    int to;
    ld p;
    edge() {}
    edge(int a, ld b) : to(a), p(b) {}
    bool operator < (const edge &o) const { return p < o.p; }
};

// Modified Dijkstra that maximizes the product of edge probabilities from
// node 0. d[v] is the best path probability found so far; pi[v] records the
// predecessor of v on that path for later reconstruction.
void dijkstra(vector<vector<edge> > &g, vector<int> &pi) {
    int n = g.size();
    pi.assign(n, -1);
    vector<ld> d(n, 0);
    priority_queue<edge> q;
    q.push(edge(0, 1.0));
    d[0] = 1.0;
    while (!q.empty()) {
        int cur = q.top().to;
        ld dis = q.top().p;
        q.pop();
        if (dis < d[cur]) continue;  // stale queue entry, skip
        for (int i = 0; i < (int)g[cur].size(); ++i) {
            int to = g[cur][i].to;
            ld w = g[cur][i].p;
            if (d[cur] * w > d[to]) {
                d[to] = d[cur] * w;
                pi[to] = cur;
                q.push(edge(to, d[to]));
            }
        }
    }
}

void solve() {
    int n, m, s, k;
    cin >> n >> m >> s >> k;
    int u, v, p;
    vector<vector<edge> > g(n);
    vector<vector<ld> > prob(n, vector<ld>(n, 0));
    for (int i = 0; i < m; ++i) {
        cin >> u >> v >> p;
        g[u].push_back(edge(v, p / 100.0));
        g[v].push_back(edge(u, p / 100.0));
        prob[u][v] = prob[v][u] = p / 100.0;
    }
    vector<int> pi;
    dijkstra(g, pi);
    // Multiply the probabilities along the reconstructed path 0 -> n-1.
    int node = n - 1;
    ld ans = 1;
    while (pi[node] != -1) {
        ans *= prob[node][pi[node]];
        node = pi[node];
    }
    // Expected cost: each attempt costs s * 2 * k, and on average 1 / ans
    // attempts are needed (geometric distribution with success probability ans).
    double expCost = (1.0 / ans) * (s * 2 * k);
    printf("%.10lf\n", expCost + 1e-9);
}

int main() {
    ios_base::sync_with_stdio(false); cin.tie(NULL);
    int t;
    cin >> t;
    for (int i = 0; i < t; ++i) {
        printf("Case %d: ", i + 1);
        solve();
    }
    return 0;
}
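The final expected-cost line follows from the geometric distribution. As a sketch of the reasoning the formula appears to encode (interpreting s · 2 · k as the cost of a single attempt, an assumption inferred from the code rather than stated in it): if one full traversal of the chosen path succeeds with probability p, the product of its link probabilities, then

```latex
E[\text{attempts}] \;=\; \sum_{i=1}^{\infty} i\,(1-p)^{i-1}\,p \;=\; \frac{1}{p},
\qquad
E[\text{cost}] \;=\; \frac{2sk}{p}.
```

This is why the code maximizes the product of probabilities: the path with the largest p minimizes the expected number of attempts and hence the expected cost.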
Kenton High School

Kenton High School may refer to:

Kenton School, Newcastle upon Tyne, England
Kenton High School (Kenton, Ohio), U.S.

See also

Simon Kenton High School, Independence, Kentucky, U.S.
The Indian Supreme Court will hold a hearing in July in an effort to decide on the growing number of crypto-related petitions filed against the country’s central bank. The Supreme Court has barred all other courts from accepting petitions in the wake of the filing of five petitions against the Reserve Bank of India’s (RBI) move to bar banks from dealing with cryptocurrency companies. The RBI published a circular to that effect in early April, saying at the time that the entities it regulates “shall not deal with or provide services to any individual or business entities dealing with or settling [cryptocurrencies].” The hearing will be held on July 20, according to an Economic Times report. One of the petitions – filed by a startup called Kali Digital Ecosystems, which planned to launch its crypto exchange, CoinRecoil – has been transferred to the Supreme Court. Two other petitions transferred to the Supreme Court were originally filed in the Delhi High Court and the Calcutta High Court, the Times further reported. Anirudh Rastogi, a managing partner at the law firm that filed the petitions, told the publication: “One of the key arguments made out in the petition was that the circular was not preceded by any stakeholder consultation, which is what the latest order gets to.” In the wake of the RBI move, a group of exchanges indicated that they, too, were moving to seek some kind of an appeal against the central bank circular. The goal, as expressed at the time, was to obtain a hearing before the Supreme Court in order to challenge the RBI policy.
United States Court of Appeals for the Federal Circuit
______________________
NEUROREPAIR, INC., Plaintiff-Appellant, v. THE NATH LAW GROUP AND ROBERT P. COGAN, Defendants-Appellees, AND DOES 1-20, Defendants.
______________________
2013-1073
______________________
Appeal from the United States District Court for the Southern District of California in No. 09-CV-0986, Judge John A. Houston.
______________________
Decided: January 15, 2015
______________________
MATTHEW KLIPSTEIN, of Denver, Colorado, argued for plaintiff-appellant.
GREGOR A. HENSRUDE, Klinedinst PC, of San Diego, California, argued for defendants-appellees. With him on the brief were HEATHER L. ROSING and SAMUEL B. STROHBEHN.
______________________
Before WALLACH, CHEN, and HUGHES, Circuit Judges.
WALLACH, Circuit Judge.

The question before this court is whether a California state court malpractice case involving patent law representation was properly removed to a federal court. Under the principles of Gunn v. Minton, 133 S. Ct. 1059 (2013), it was not.

Plaintiff-appellant NeuroRepair, Inc. (“NeuroRepair”) appeals from a final judgment of the United States District Court for the Southern District of California granting partial summary judgment in favor of defendants-appellees The Nath Law Group and Robert P. Cogan (collectively, “Defendants”) on July 12, 2011, as well as the district court’s orders (1) denying NeuroRepair’s motion for reconsideration on August 19, 2011, (2) granting Defendants’ motion in limine with respect to lost licensing opportunity of March 12, 2012, (3) entering judgment on September 26, 2012, in favor of Defendants, and (4) denying NeuroRepair’s motion for reconsideration on July 1, 2013, and all related post-judgment costs. Based on Gunn v. Minton, this court vacates and remands the district court’s judgments with instructions to remand the case to California state court.
This court “[has] jurisdiction to decide whether the district court had jurisdiction under [28 U.S.C.] § 1338.” C.R. Bard, Inc. v. Schwartz, 716 F.2d 874, 878 (Fed. Cir. 1983); see also Scherbatskoy v. Halliburton Co., 125 F.3d 288, 291 (5th Cir. 1997) (finding the “right to determine if a district court has jurisdiction under [§] 1338” is a power that “concurrently exists with [the Federal Circuit and] the regional circuits”); Shaw v. Gwatney, 795 F.2d 1351, 1353 n.2 (8th Cir. 1986) (A federal appellate court carries out “traditional and inherent functions [such] as determining its own jurisdiction and supervising the exercise of jurisdiction by the district courts below it.”); cf. Maddox v. Merit Sys. Prot. Bd., 759 F.2d 9, 10 (Fed. Cir. 1985) (“If the MSPB does not have jurisdiction, then neither do we, except to the extent that we always have the inherent power to determine our own jurisdiction and that of the board.”).

BACKGROUND

In December 2005, NeuroRepair retained Robert Cogan, an attorney with The Nath Law Group, to assist in the prosecution of certain patent applications. Over time, NeuroRepair became increasingly dissatisfied with what it viewed as slow progress and excessive legal fees, and in August 2007 NeuroRepair requested that Mr. Cogan transfer the relevant files to another law firm, Welsh & Katz, to continue prosecution before the United States Patent and Trademark Office (“USPTO”). In September 2007, Defendants filed a request to withdraw from representation of NeuroRepair before the USPTO, but continued to assist NeuroRepair with other matters.

NeuroRepair filed suit against Defendants in the San Diego Superior Court on March 20, 2009, alleging professional negligence, breach of fiduciary duty, breach of written contract, breach of oral contract, breach of implied covenant of good faith and fair dealing, negligent misrepresentation, and false promise.
Defendants removed the case to federal district court on May 7, 2009, on the ground that it was “a civil action relating to patents.” J.A. 55. After the district court entered judgment in Defendants’ favor on September 26, 2012, NeuroRepair timely filed this appeal challenging the district court’s subject matter jurisdiction. The principal issue this court must address is whether jurisdiction in the district court was proper in light of the Supreme Court’s recent pronouncement in Gunn v. Minton.

DISCUSSION

I. Standard of review

“We review issues of jurisdiction de novo.” Prasco, LLC v. Medicis Pharm. Corp., 537 F.3d 1329, 1335 (Fed. Cir. 2008). Under 28 U.S.C. § 1441(a) (2012), a defendant may remove to federal district court “any civil action brought in a State court of which the district courts of the United States have original jurisdiction.” As this court stated in Jim Arnold Corp. v. Hydrotech Systems, Inc.: The question we must answer . . . is whether federal subject-matter jurisdiction would exist over this case had it originally been filed in federal court. If the answer is yes, then removal was proper, and the matter is before us on the merits; if the answer is no, then removal was improper and federal courts are without jurisdiction to determine the cause. 109 F.3d 1567, 1571 (Fed. Cir. 1997).

II. Subject matter jurisdiction

At issue in this case is whether the district court would have had original jurisdiction under 28 U.S.C. § 1338, which gives federal district courts original jurisdiction over “any civil action arising under any Act of Congress relating to patents.” 28 U.S.C. § 1338(a). [Footnote 1: There does not appear to be a basis for jurisdiction under 28 U.S.C. § 1332 (diversity of citizenship). “Where . . . appellants do not claim diversity of citizenship, there must be federal question jurisdiction.” Semiconductor Energy Lab. Co. v. Nagata, 706 F.3d 1365, 1369 (Fed. Cir. 2013); ExcelStor Tech., Inc. v. Papst Licensing GmbH & Co. KG, 541 F.3d 1373, 1375 (Fed. Cir. 2008). No claim of diversity was made here.] [Footnote 2: The second sentence of § 1338(a) was amended by the Leahy–Smith America Invents Act, Pub. L. No. 112-29, § 19(a), 125 Stat. 284, 331 (2011) (“AIA”). NeuroRepair commenced this action before these amendments took effect on September 16, 2011, so this court applies the pre-AIA version of the statute. AIA § 19(e), 125 Stat. at 333; see also Wawrzynski v. H.J. Heinz Co., 728 F.3d 1374, 1378 (Fed. Cir. 2013) (actions commenced before September 16, 2011, are not subject to the AIA amendments).] In Christianson v. Colt Industries Operating Corp., the Supreme Court held a claim may “aris[e] under” the patent laws even where patent law did not create the cause of action, provided the “well-pleaded complaint establishes . . . that the plaintiff’s right to relief necessarily depends on resolution of a substantial question of federal patent law.” 486 U.S. 800, 808–09 (1988).

In its recent decision in Gunn v. Minton, the Court made clear that state law legal malpractice claims will “rarely, if ever, arise under federal patent law,” even if they require resolution of a substantive question of federal patent law. 133 S. Ct. at 1065. The Court reasoned that while such claims “may necessarily raise disputed questions of patent law,” those questions are “not substantial in the relevant sense.” Id. at 1065, 1066. The Court emphasized that “[b]ecause of the backward-looking nature of a legal malpractice claim, the question is posed in a merely hypothetical sense” and that “[n]o matter how the state courts resolve that hypothetical ‘case within a case,’ it will not change the real-world result of the prior federal patent litigation.” Id. at 1066–67. In view of the absence of a question that was “significant to the federal system as a whole” and the “‘especially great’” state interest in regulating lawyers, the Court concluded that Congress had not intended to bar state courts from deciding state legal malpractice claims simply because they may involve an underlying hypothetical patent issue. See id. at 1066, 1068 (quoting Goldfarb v. Va. State Bar, 421 U.S. 773, 792 (1975)).

The Court in Gunn explained that its earlier decision in Grable & Sons Metal Products, Inc. v. Darue Engineering & Manufacturing, 545 U.S. 308 (2005), is properly viewed as setting forth a four-part test to determine when federal jurisdiction over a state law claim will lie. Gunn, 133 S. Ct. at 1065. Under this test, a cause of action created by state law may nevertheless “arise under” federal patent law within the meaning of 28 U.S.C. § 1338(a) if it involves a patent law issue that is “(1) necessarily raised, (2) actually disputed, (3) substantial, and (4) capable of resolution in federal court without disrupting the federal-state balance approved by Congress.” Id. Although the events in the present matter transpired prior to the decision in Gunn, the Supreme Court’s interpretation of federal civil law “must be given full retroactive effect in all cases still open on direct review and as to all events, regardless of whether such events predate or postdate [the Supreme Court’s] announcement of the rule.” Harper v. Va. Dep’t of Taxation, 509 U.S. 86, 97 (1993).

A. NeuroRepair’s suit would not “necessarily raise” issues of patent law

NeuroRepair’s suit fails Gunn’s jurisdictional test. An issue of patent law is “necessarily raised” if “a well-pleaded complaint establishes either that federal patent law creates the cause of action or that the plaintiff's right to relief necessarily depends on resolution of a substantial question of federal patent law, in that patent law is a necessary element of one of the well-pleaded claims.” Christianson, 486 U.S.
at 809; see also Grable, 545 U.S. at 315 (finding a federal issue to be an “essential element” of the cause of action); Gunn, 133 S. Ct. at 1065 (noting the plaintiff’s required showing in order to prevail “will necessarily require application of patent law to the facts of [his] case”). NeuroRepair’s claims of professional negligence, breach of fiduciary duty, breach of written contract, breach of oral contract, breach of implied covenant of good faith and fair dealing, negligent misrepresentation, and false promise are each created by state, not federal, law. See J.A. 62–68. Therefore, a patent law issue will be necessarily raised only if it is a necessary element of one of the well-pleaded claims.

NeuroRepair’s state law claims, as presented in its complaint of March 20, 2009, include a number of references to patent issues. For example, its First Cause of Action for professional negligence asserts Defendants breached their duty of care “by, among other things, failing to communicate with Plaintiff . . . ; failing to competently and effectively pursue the Patent Applications; . . . [and] failing to accurately record and bill time.” J.A. 63. However, because NeuroRepair’s complaint sets forth multiple bases in support of its allegation of professional negligence, a court could find NeuroRepair is entitled to relief based on this allegation without ever reaching a patent law issue. See Immunocept, LLC v. Fulbright & Jaworski, LLP, 504 F.3d 1281, 1285 (Fed. Cir. 2007) (“Because it is the sole basis of negligence, the claim drafting error is a necessary element of the malpractice cause of action.”). Therefore, it would not “necessarily require the application of patent law to the facts of [this] case” for NeuroRepair “to prevail on [its] legal malpractice claim.” Gunn, 133 S. Ct. at 1065; see also Christianson, 486 U.S.
at 812 (“Since there are reasons completely unrelated to the provisions and purposes of federal patent law why petitioners may or may not be entitled to the relief [they] see[k] . . . , the claim does not ‘arise under’ federal patent law.”) (internal quotation marks and citation omitted); Dixon v. Coburg Dairy, Inc., 369 F.3d 811, 816 (4th Cir. 2004) (en banc) (“A plaintiff’s right to relief for a given claim necessarily depends on a question of federal law only when every legal theory supporting the claim requires the resolution of a federal issue.”). Similarly, NeuroRepair could prevail on its remaining six causes of action under alternate bases that do not necessarily implicate an issue of substantive patent law.

B. At least one patent law issue is actually disputed

Although a court would not necessarily be required to reach the patent law issues that underlie the causes of action alleged by NeuroRepair, at least one patent law issue is actually disputed by the parties. NeuroRepair claims Defendants’ wrongdoing hindered its ability to timely obtain patents of the same scope it would have obtained but for Defendants’ delay and mishandling. Defendants counter that the patent did not issue sooner because the claims as initially presented were not patentable and that Defendants had not narrowed the claims because “NeuroRepair had expressly ordered [Defendants] not to.” Appellees’ Br. 26. Whether the patent could have issued earlier and with broader claims is thus actually disputed by the parties.

C. The patent issue in NeuroRepair’s suit is not “substantial”

Even if the disposition of this matter necessarily required the resolution of patent law issues, those issues would not be of sufficient importance “to the federal system as a whole,” as required under the third part of the Gunn test. 133 S. Ct. at 1066, 1068.
“[I]t is not enough that the federal issue be significant to the particular parties in the immediate suit; that will always be true when the state claim ‘necessarily raise[s]’ a disputed federal issue . . . .” Id. at 1066.

The Supreme Court has described three nonexclusive factors that may help to inform the substantiality inquiry, none of which is necessarily controlling. See MDS (Can.) Inc. v. Rad Source Techs., Inc., 720 F.3d 833, 842 (11th Cir. 2013); see also Mikulski v. Centerior Energy Corp., 501 F.3d 555, 570 (6th Cir. 2007). First, a substantial federal issue is more likely to be present if a “pure issue of [federal] law” is “dispositive of the case.” Empire Healthchoice Assurance, Inc. v. McVeigh, 547 U.S. 677, 700 (2006). Second, a substantial federal issue is more likely to be present if the court’s resolution of the issue will control “numerous other cases.” Id. Third, a substantial federal issue is more likely to be present if “[t]he Government . . . has a direct interest in the availability of a federal forum to vindicate its own administrative action.” Grable, 545 U.S. at 315.

i. No pure issue of federal law is dispositive

NeuroRepair asserts Defendants’ wrongdoing caused harm by, among other things, hindering its ability both to pursue the patent applications in a timely and effective manner and to obtain patents of the same scope it would have obtained but for Defendants’ delay and mishandling. Although resolution of these assertions could involve the application of substantive patent law principles, it is not clear from the record that any particular substantive patent law issue or issues would need to be resolved. Both claim scope and timing of issuance are likely to depend primarily on the particular facts and circumstances of the prior art, timely responses to office actions, etc., rather than on the interpretation of federal law.
This is therefore unlike cases where a distinct issue of federal law was dispositive of the case. See, e.g., Gunn, 133 S. Ct. at 1065, 1066 (finding the viability of an experimental-use argument to be actually disputed and central to resolution of the case, but concluding this issue was not substantial in the relevant sense); Grable, 545 U.S. at 311 (noting the underlying dispute centered on whether 26 U.S.C. § 6335 required personal service rather than service by mail); Jang v. Bos. Scientific Corp., 767 F.3d 1334, 1336 (Fed. Cir. 2014) (noting “Jang’s right to relief . . . depends on . . . whether the stents sold by [petitioners] would have infringed [Jang’s patents]”). Instead, the present matter involves a question of federal law, at most, as only one of several elements needed to prevail. See Empire HealthChoice, 547 U.S. at 701 (“[I]t takes more than a federal element to open the ‘arising under’ door.”) (internal quotation marks and citation omitted); see also Mikulski, 501 F.3d at 571 (Even if the federal issue is resolved in their favor, “plaintiffs must still prove the remaining elements of fraudulent misrepresentation (such as intent) or breach of contract (such as the existence of a contract).”).

In addition, NeuroRepair’s assertions with respect to patent scope and timing do not constitute the totality and perhaps not even the most significant part of the state law causes of action included in its complaint. These causes of action also include assertions of failure to communicate, overbilling, failure to accurately record time billed, failure to deliver work product, and misrepresentation of Cogan’s expertise in neuroscience.
Additional factual issues are raised in the parties’ briefs, including whether Cogan represented himself as a partner of The Nath Law Group, whether he was in fact a partner, whether Cogan deliberately overbilled NeuroRepair, whether The Nath Law Group “deliberately concealed from NeuroRepair the firm’s internal investigation of Cogan,” Appellant’s Br. 14, when NeuroRepair became aware of the basis for its suit, and when NeuroRepair became aware of Cogan’s qualifications, Appellees’ Br. 40–43. These and other factual issues related to NeuroRepair’s claims of Defendants’ professional conduct and alleged actions or inactions make clear this case does not present a “pure issue of law” that is “dispositive of the case.”

ii. The court’s decision is unlikely to control numerous other cases

In arguing the resolution of the present matter will affect “subsequent litigation,” id. at 26, Appellees suggest that if a state court adjudicates this case, “a third-party infringer could conceivably be found liable for infringing a patent that its own state court previously found to be unpatentable,” id. at 27–28. This argument is unpersuasive. If a federal court finds a defendant liable for infringing a valid patent notwithstanding a prior state court determination of invalidity, it is self-evident the state court decision did not “control” the later federal court case.

Moreover, to the extent a state court must address issues of substantive patent law, the court is likely to focus on whether the invention was patentable as initially claimed, as reflected in the assertions of Appellees themselves. See id. at 26 (arguing “the claims as initially presented were not patentable”) (emphasis added). Any determination of validity of claims that ultimately did not issue constitutes a hypothetical matter that would not affect the scope of any live patent. See Byrne v. Wood, Herron & Evans, LLP, 676 F.3d 1024, 1032 n.4 (Fed. Cir.
2012) (O’Malley, J., dissenting from the denial of the petition for rehearing en banc) (stating, in the context of a patent prosecution malpractice claim, “the patent issue in any malpractice action will involve only an academic inquiry into what likely would have happened absent the attorney negligence, and the answer will affect only the result of the state law claim, not the rights or scope of any live patent”). If the state court action would neither affect the scope of any live patent nor require resolution of a novel issue of patent law, it is unclear how it could control numerous other cases or impact the federal system as a whole.

iii. The government does not have a direct interest in the availability of a federal forum to vindicate its own administrative action

“[Q]uestions of [federal] jurisdiction over state-law claims require careful judgments about the nature of the federal interest at stake.” Grable, 545 U.S. at 317 (internal quotation marks and citation omitted). Grable involved a dispute over title to real property, a quintessential state law matter. See Or. ex rel. State Land Bd. v. Corvallis Sand & Gravel Co., 429 U.S. 363, 378 (1977) (“This Court has consistently held that state law governs issues relating to . . . real property, unless some other principle of federal law requires a different result.”). The central issue, however, was whether the Internal Revenue Service (“IRS”), in seizing Grable’s property to satisfy a delinquent tax debt and later selling the property to the defendant, had failed to notify Grable “in the exact manner required by [26 U.S.C.] § 6335(a).” Grable, 545 U.S. at 311. Resolution of the dispute required a determination of whether § 6335(a) required personal service or allowed service to be made by certified mail, id., a determination that would directly impact IRS practices.
In finding federal jurisdiction proper, the Court noted the government’s “strong interest in the prompt and certain collection of delinquent taxes,” and the importance of ensuring the IRS could “satisfy its claims from the property of delinquents.” Id. at 315 (internal quotation marks omitted). Given these considerations, the government had “a direct interest in the availability of a federal forum to vindicate its own administrative action.” Id.

The federal interest asserted to be at stake in the present matter is far more nebulous than in Grable. Appellees assert state court jurisdiction “would be a recipe for inconsistency,” Appellees’ Br. 28, and “[i]f state courts start ruling on issues of this nature, subsequent patent prosecutions and litigation arising out of those patents will be difficult, to say the least,” id. at 26. These vague assertions, which do not contain citations to authority, do not convincingly establish the USPTO or any other government agency has a “direct interest” in the outcome of this dispute, which is between private parties and relates to alleged legal malpractice and other state law claims. Grable, 545 U.S. at 315.

D. If cases such as NeuroRepair’s were heard in federal court, it would disrupt the federal-state balance

Finally, to the extent federal interests are implicated by NeuroRepair’s state law claims, they do not outweigh the “especially great” interests of the state in regulating that state’s lawyers. See Gunn, 133 S. Ct. at 1068. Since Gunn, courts considering alleged violations of a variety of state laws have declined to find federal question jurisdiction notwithstanding the presence of an underlying issue of patent law. See, e.g., Forrester Envtl. Servs. Inc. v. Wheelabrator Techs., Inc., 715 F.3d 1329 (Fed. Cir. 2013) (tortious interference with a contractual relationship); MDS (Can.), 720 F.3d at 842 (breach of contract); Mirowski Family Ventures, LLC v. Bos.
Scientific Corp., 958 F. Supp. 2d 1009 (S.D. Ind. 2013) (breach of patent license agreement); Airwatch LLC v. Good Tech. Corp., No. 1:13-cv-2870-WSD, 2014 WL 1651964 (N.D. Ga. Apr. 24, 2014) (defamation); Bonnafant v. Chico’s FAS, Inc., No. 2:13-cv-893-FtM-29CM, 2014 WL 1664554 (M.D. Fla. Apr. 25, 2014) (state whistleblower legislation). In sum, federal jurisdiction is lacking here under Gunn because no federal issue is necessarily raised, because any federal issues raised are not substantial in the relevant sense, and because the resolution by federal courts of attorney malpractice claims that do not raise substantial issues of federal law would usurp the important role of state courts in regulating the practice of law within their boundaries, disrupting the federal-state balance approved by Congress.

III. Defendants have not effectively distinguished Gunn

Defendants seek to distinguish Gunn on the basis that it involved alleged malpractice within the patent litigation context while the present matter involves alleged malpractice within the patent prosecution context. Gunn made no such distinction. See 133 S. Ct. at 1066–67 (“Because of the backward-looking nature of a legal malpractice claim, the question is posed in a merely hypothetical sense.”) (emphasis added); id. at 1065 (“[S]tate legal malpractice claims based on underlying patent matters will rarely, if ever, arise under federal patent law . . . .”) (emphasis added). Accepting Defendants’ invitation to carve out a broad exception for patent prosecution malpractice would conflict with the Supreme Court’s description of such exceptions as comprising a “slim category.” Id. at 1065; see also Empire HealthChoice, 547 U.S. at 699 (describing exceptions to this rule as a “special and small category”). The number of patent-related malpractice cases considered by the Federal Circuit demonstrates that such cases have not been rare.
See, e.g., Byrne, 676 F.3d at 1037 (O’Malley, J., dissenting). Defendants further attempt to distinguish Gunn by arguing that NeuroRepair’s patents were undergoing prosecution at the time of the litigation, and so any court decision with respect to the malpractice claim could have a real-world result and would not be backward-looking. However, as already explained, the outcome of this dispute is not likely to control numerous other cases. See supra Part II.C.ii. In addition, the Gunn Court considered and rejected the argument that “state courts’ answers to hypothetical patent questions can sometimes have real-world,” forward-looking effects, such as where a state court’s interpretation of claim scope impacts a USPTO examiner’s later consideration of a continuation application related to the earlier-litigated patent. 133 S. Ct. at 1067. In rejecting this argument, the Court expressed doubt that an examiner would be bound by a state court’s interpretation, and found in any event such effects would be “‘fact-bound and situation-specific’” and any forward-looking results would be limited to the parties and patents that had been before the state court. Id. at 1068 (quoting Empire HealthChoice, 547 U.S. at 701). Similarly, it noted that “federal courts are of course not bound by state court case-within-a-case patent rulings.” Gunn, 133 S. Ct. at 1067. Addressing what would have happened had the alleged bad acts of Defendants not occurred requires a court to engage in precisely the sort of backward-looking, hypothetical analysis contemplated in Gunn. Exercise of federal jurisdiction is therefore improper.

CONCLUSION

For these reasons, this court VACATES AND REMANDS TO THE DISTRICT COURT WITH INSTRUCTIONS TO REMAND THE CASE TO CALIFORNIA STATE COURT
Background
==========

Orthostatic hypotension (OH) is generally defined as a drop in blood pressure on standing, which in a given subject is regarded as abnormal purely on the basis of its magnitude. As such, OH is a clinical sign and may be symptomatic or asymptomatic \[[@B1]\]. Orthostatic blood pressure changes can be measured by different methods. Most commonly, clinicians use the auscultatory or oscillometric method with a sphygmomanometer \[[@B2]\]. As applied to the latter methods, OH is defined by consensus as a sustained reduction of systolic blood pressure (SBP) of at least 20 mmHg or diastolic blood pressure (DBP) of 10 mmHg within 3 minutes of standing \[[@B1]\]. Many clinicians also measure orthostatic hemodynamic changes with non-invasive beat-to-beat finger arterial blood pressure monitors; however, in the latter case the consensus definition of OH may lack clinical relevance \[[@B3],[@B4]\] and there are no internationally agreed cut-offs for the definition of OH. OH is an independent cardiovascular risk factor and may be practically estimated by the systolic reaction only \[[@B5]\]. It is recognised that elevated SBP prior to a standing manoeuvre is directly associated with the magnitude of the SBP drop \[[@B6]\], to the extent that the current consensus definition of OH requires an SBP drop of at least 30 mmHg in patients with supine hypertension \[[@B1]\]. The clinically recognised syndrome of *supine hypertension and orthostatic hypotension* (SH-OH) poses a particular therapeutic dilemma, as treatment of one aspect of the condition may worsen the other \[[@B7]\]. Indeed, in the treatment of combined hypertension and OH in older adults, more questions than answers still remain \[[@B8]\], and little is known about the influences of cardiovascular and neurological medications on this syndrome.
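The consensus thresholds above lend themselves to a simple decision rule. The sketch below is illustrative only: the function name is hypothetical, and it checks a single post-stand reading, whereas the consensus definition requires a *sustained* reduction within 3 minutes of standing.

```python
def meets_consensus_oh(baseline_sbp, baseline_dbp,
                       standing_sbp, standing_dbp,
                       supine_hypertension=False):
    """Apply the consensus OH thresholds to one post-stand reading (mmHg).

    Consensus definition cited in the text: an SBP drop >= 20 mmHg or a
    DBP drop >= 10 mmHg within 3 minutes of standing; the SBP threshold
    rises to 30 mmHg in patients with supine hypertension.
    """
    sbp_threshold = 30 if supine_hypertension else 20
    sbp_drop = baseline_sbp - standing_sbp
    dbp_drop = baseline_dbp - standing_dbp
    return sbp_drop >= sbp_threshold or dbp_drop >= 10
```

Note how the supine-hypertension flag raises only the systolic threshold, reflecting the interaction between baseline SBP and drop magnitude described above.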
Although most patients with OH are asymptomatic or have few non-specific symptoms \[[@B9]\], a marked orthostatic blood pressure drop may cause symptoms of orthostatic intolerance (OI) such as dizziness, light-headedness, and/or loss or near-loss of consciousness \[[@B10],[@B11]\]. These symptoms are attributed to hypoperfusion of the central nervous system during orthostasis \[[@B12]\]. OI symptoms may correlate with the lowest blood pressure point reached (i.e. nadir), with the magnitude of blood pressure drop (i.e. delta), or with the rate of blood pressure recovery \[[@B13],[@B14]\]. However, OI may also be caused by conditions other than blood pressure changes, such as vestibular \[[@B15],[@B16]\] or psychosomatic \[[@B17]\] disorders. Indeed, OI is a heterogeneous syndrome \[[@B18],[@B19]\]. It has been suggested that postural symptoms (i.e. OI) correlate much more strongly with (pre-)syncope and falls than does OH (i.e. the isolated blood pressure drop sign) *per se*\[[@B20],[@B21]\]. In other words, *if* OH triggers OI symptoms, then syncope is more likely. However, in many real-life situations the latter theoretical sequence is interrupted, as marked OH can be asymptomatic \[[@B22],[@B23]\] and not all instances of OI result in syncope \[[@B18],[@B24]\]. In a previous investigation, a *morphological* classification of OH (MOH) was proposed \[[@B25]\] as an approach to the measurement of three known \[[@B26]-[@B28]\] orthostatic hemodynamic patterns using non-invasive beat-to-beat finger arterial blood pressure monitoring. In that study, a gradient of OI was identified across morphological blood pressure patterns: 17.9% in the *small drop/fast over-recovery* (MOH-1), 27.5% in the *medium drop/slow recovery* (MOH-2) and 44.6% in the *large drop/non-recovery* group (MOH-3) (*P* \< 0.001). We also showed a gradient of baseline SBP across MOH groups, suggesting that MOH-3 is, in fact, a syndrome of SH-OH. 
To date, there have been no studies of the SH-OH syndrome using non-invasive beat-to-beat finger arterial blood pressure monitoring. In view of the above, the aims of the present study were: (1) to replicate the MOH patterns in a large, population-based sample such as the first wave of The Irish Longitudinal Study on Ageing (TILDA, <http://www.tcd.ie/tilda>), with special attention to the MOH-3 pattern as a beat-to-beat analogue of SH-OH; (2) to characterise the MOH-3 group with special attention to associations with cardiovascular and neurological medications, concurrent OI, and history of fainting; (3) to identify predictors of a MOH-3 response in the presence of potential confounders; and (4) to assess the effect of MOH-3 on OI, and the effect of MOH-3 *and* OI on fainting history, in the presence of confounders. The latter can be seen as a cross-sectional evaluation of the above-mentioned three pathophysiological steps (OH → OI → fainting). A comprehensive investigation of factors associated with each of those three steps has not been conducted to date but would be helpful in order to gain insights into potentially modifiable factors to prevent OH and OI-related faints, particularly in relation to associations with prescribed medications in the SH-OH syndrome. The identified factors will then be investigated longitudinally in TILDA.

Methods
=======

Setting
-------

The Irish Longitudinal Study on Ageing (TILDA, <http://www.tcd.ie/tilda/>) is a large prospective cohort study of the social, economic, and health circumstances of community-dwelling older people in Ireland. This study is based on the first wave of data, which was collected between October 2009 and July 2011. The sampling frame is the Irish Geodirectory, a listing of all residential addresses in the Republic of Ireland. A clustered sample of addresses was chosen, and household residents aged 50 and older and their spouses/partners (of any age) were eligible to participate.
The household response rate was 62.0%. In the present study, the analytic sample consisted of those aged ≥ 50 from TILDA wave 1. The study design has previously been described in detail \[[@B29],[@B30]\]. There were three parts to data collection: a computer-assisted personal interview that included detailed questions on sociodemographic characteristics, wealth, health, lifestyle, social support and participation, use of health and social care, and attitudes toward aging; a self-completion questionnaire; and a health assessment performed by research nurses. Health assessments were conducted in a health centre or in the homes of participants; however, only the centre-based assessments included detailed measurements and novel technologies such as beat-to-beat finger arterial blood pressure monitors \[[@B31]\].

Ethics and consent
------------------

Ethical approval was obtained from the Trinity College Dublin Research Ethics Committee, and all participants provided written informed consent.

Active stand protocol
---------------------

Subjects underwent a lying-to-standing orthostatic test (active stand) with non-invasive beat-to-beat blood pressure monitoring using digital photoplethysmography (Finometer® MIDI device, Finapres Medical Systems BV, Amsterdam, The Netherlands, <http://www.finapres.com>). An appropriate cuff size was applied to the finger as recommended by the manufacturer \[[@B32]\]. Prior to standing, subjects rested in the supine position for 10 minutes. The active stand protocol included the use of the automatic *physiocal* function (a physiological calibration that calibrates the finger arterial size at which finger cuff air pressure equals finger arterial blood pressure). We aimed for a beat interval between physiocals of 30 beats or higher before the start of the active stand.
Just prior to standing, the *physiocal* was switched off to ensure a continuous recording during the orthostatic blood pressure changes, and it remained switched off until the end of the test. The height correction unit (HCU) of the Finometer® was zeroed and implemented as per the manufacturer's specifications \[[@B32]\], and was used to compensate for hydrostatic pressure changes on standing. After the ten minutes of supine rest, the subjects were asked to stand, unaided, in a timely manner. After standing, systolic blood pressure, diastolic blood pressure, and heart rate were monitored for three minutes. Throughout the recording, subjects stood motionless and in silence with the monitored arm resting extended by the side. Immediately after the test, subjects were asked to report whether they had felt any symptoms of dizziness, light-headedness or unsteadiness (OI: yes or no).

Active stand data pre-processing
--------------------------------

Active stand data analysis required a number of steps: 1) data quality screening and artefact rejection; 2) pre-processing and filtering; and 3) blood pressure waveform feature extraction. Data were imported and processed in Matlab*®* R2011b. Data records where the HCU had not been properly applied (e.g. the sensor fell off during the recording, was zero throughout, contained significant noise, or was inverted due to incorrect placement) were removed from the analysis. Additional checks were carried out, such as ensuring data met the requirement of a minimum length of stand (≥ 90 seconds). A further check examined the total noise in the baseline and stand sections. For this, each record was divided into two sections, demarcated by the times before and after the stand: baseline pre-stand (baseline) and standing activity (stand). Each of the sections - baseline and stand - was scored in terms of artefact presence separately. The total number of beats within the baseline and stand sections was counted.
The proportion of total time containing significant motion artefact, as a fraction of the total signal time, was used to quantify the amount of noise in the signal using a validated automated algorithm \[[@B33],[@B34]\]. Signals with significant artefact were rejected from the analysis as per pre-defined criteria. For the final dataset used in the analysis, beat-to-beat values were averaged according to the 5-second averages method described by van der Velde *et al*. \[[@B35]\], in order to filter any remaining noise. Following this, features were extracted for each of these records.

Finometer® features
-------------------

The following measures were recorded:

● Baseline systolic blood pressure (SBP), diastolic blood pressure (DBP), and heart rate (HR): defined as the mean value in the time interval −60 seconds to −30 seconds prior to standing.

● SBP, DBP and HR at the lowest blood pressure value after standing (i.e. nadir values, which are generally achieved within 15 seconds after standing \[[@B36]\]).

● SBP, DBP and HR at 30 seconds post-stand.

● SBP, DBP and HR at 60 seconds post-stand.

● SBP, DBP and HR at 90 seconds post-stand.

● SBP, DBP and HR at 110 seconds post-stand.

● Delta (∆SBP, ∆DBP and ∆HR): defined as the difference between the respective baselines and nadirs.

● The percentage of SBP recovery (with respect to baseline) by 30 seconds, 60 seconds, and 110 seconds after the stand.

Characterisation variables
--------------------------

● *Demographics*: age, sex.

● *Orthostatic intolerance* (OI): self-reported symptoms of dizziness, light-headedness or unsteadiness during the active stand (yes or no).

● *Ever had a blackout or fainted*: yes or no.

● *Medications*, based on the WHO Anatomical Therapeutic Chemical (ATC) classification system (<http://www.whocc.no/atc_ddd_index/>):

*○ Cardiovascular medications*:

▪ C01A: cardiac glycosides.
▪ C01B: antiarrhythmics, class I and III.
▪ C07A: beta blocking agents.
▪ C03: diuretics.
▪ C09A: ACE inhibitors, plain.
▪ C09C: angiotensin II antagonists, plain.
▪ C08C: selective calcium channel blockers with mainly vascular effects.
▪ C08D: selective calcium channel blockers with direct cardiac effects.
▪ C02C: antiadrenergic agents, peripherally acting.
▪ C01D: vasodilators used in cardiac diseases.
▪ C04A: peripheral vasodilators.

*○ Neurological medications*:

▪ N03A: antiepileptics.
▪ N05A: antipsychotics.
▪ N05B: anxiolytics.
▪ N05C: hypnotics and sedatives.
▪ N06A: antidepressants.

*○ Polypharmacy*: defined as the simultaneous use of 5 or more medications.

● *Comorbidities* (self-reported):

*○ Hypertension*.
*○ Angina*.
*○ Heart attack*.
*○ Heart failure*.
*○ Diabetes*.
*○ Stroke*.
*○ Transient ischaemic attack (TIA)*.
*○ Abnormal heart rhythm*.
*○ Parkinson's disease*.
*○ Three or more chronic diseases*.

● *Disability* (self-reported): any disability from the list of Independent Activities of Daily Living (IADL).

● *Cognition*: Mini-Mental State Examination (MMSE) score.

Statistics
----------

All statistical analyses were performed with SPSS (version 18). Descriptives for dichotomous variables were given as percentages (%). Continuous variables were described as mean with standard deviation (SD). To classify the sample into MOH groups we used, as before \[[@B25]\], an automatic *K-means Cluster Analysis* procedure, which assigns cases to a fixed number of groups (clusters) whose characteristics are not yet known but are based on a set of specified (clustering) variables. We chose three (*k* = 3) as the number of clusters because three orthostatic hemodynamic patterns had previously been described \[[@B26]-[@B28]\]; there was no statistical process to arrive at *k* = 3. The clustering variables (i.e. ∆SBP and % of baseline SBP at 30, 60 and 110 s) were chosen as key *morphological descriptors* of the beat-to-beat orthostatic blood pressure response that we intended to model.
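The clustering step can be illustrated with a minimal Lloyd's-algorithm sketch. This is not the SPSS procedure used in the study: initialisation with the first *k* cases and the two-feature points are illustrative assumptions; only the outline (k = 3, unstandardised morphological descriptors) mirrors the paper's setup.

```python
def kmeans(points, k=3, iters=100):
    """Minimal Lloyd's algorithm over unstandardised feature vectors.

    Illustrative only: k = 3 clusters over morphological descriptors
    (e.g. delta-SBP and % of baseline SBP), entered without z-score
    standardisation. Initialising from the first k points is a
    simplification for the sketch.
    """
    def nearest(p, cents):
        # index of the centroid closest to p (squared Euclidean distance)
        return min(range(len(cents)),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])))

    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        # assign each case to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p, centroids)].append(p)
        # recompute each centroid as the mean of its cluster
        new_centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centroids == centroids:
            break  # converged
        centroids = new_centroids
    return [nearest(p, centroids) for p in points], centroids
```

For example, synthetic (∆SBP, % baseline SBP) pairs loosely echoing a small, a medium and a large drop separate into three stable clusters.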
The influence of outliers on the *K*-means cluster analysis was minimised by the pre-processing and data cleaning steps as outlined above. In the *K*-means analysis, the clustering variables were entered unstandardised. It was decided not to standardise the clustering variables as some have argued that standardisation (*z*-scores specifically) can result in misleading conclusions when true group structure is present \[[@B37]\]. The final cluster membership variable was saved to the dataset.

To compare baseline characteristics between those with and without MOH data, we used the t-test or Mann--Whitney U test (as appropriate) for continuous variables, and the Chi-square test for count data. To test for a linear trend (i.e. gradient) across MOH clusters, we used the Chi-square test for trend for dichotomous variables and Spearman's rank correlation coefficient for continuous variables.

Multivariable analyses were based on binary logistic regression (forward conditional procedure). This stepwise method of variable selection involves entry testing based on the significance of the score statistic, and removal testing based on the probability of a likelihood-ratio statistic based on conditional parameter estimates. The significance level for entry into the model was set at *P* \< 0.05 and for removal was set at *P* \< 0.1. Multicollinearity diagnostics (tolerance and variance inflation factors: VIF) were checked. The multivariable models were repeated in the subsample of those aged ≥70 in order to explore whether the overall findings also applied to those of more advanced age. For the purpose of the discussion, and given the elevated number of characterisation variables used, we focused on the most statistically significant associations (i.e. *P* \< 0.01).

Results
=======

Of the 8,175 participants aged 50 and over in the first wave of TILDA, 5,037 (62%) had a Health Centre Assessment. Amongst the latter, 4,919 (98%) completed the active stand test.
Active stand data were deemed of sufficient quality for analysis in 4,475 participants (91% of active stands). Complete data for defining the MOH groups were available for 4,467 participants. The flowchart of participants is shown in Figure [1](#F1){ref-type="fig"}.

![Flowchart of participants.](1471-2318-13-73-1){#F1}

Table [1](#T1){ref-type="table"} compares characteristics of Health Centre participants with (*N* = 4,467) and without (*N* = 570) MOH data. Those without MOH data were older (mean age 65 *vs*. 62 years, *P* \< 0.001), had higher sitting SBP (mean SBP 137 *vs*. 134 mmHg, *P* \< 0.001), were taking a higher number of medications (mean 3 *vs*. 2, *P* \< 0.001), and had a higher burden of cardiovascular disease (at least 1: 70% *vs*. 62%, *P* \< 0.001) and a higher disability burden (at least 1 IADL disability: 10% *vs*. 4%, *P* \< 0.001).

###### Comparison of Health Centre participants with and without MOH data

|                                       | Has MOH groups (*N* = 4,467) | Missing MOH groups (*N* = 570) | *P*            |
|---------------------------------------|------------------------------|--------------------------------|----------------|
| Age: mean (SD)                        | 61.6 (8.3)                   | 64.5 (9.4)                     | **\<0.001**^a^ |
| Seated^\*^ SBP: mean (SD)             | 134.3 (19.4)                 | 137.4 (20.0)                   | **\<0.001**^a^ |
| Seated^\*^ DBP: mean (SD)             | 82.3 (11.1)                  | 82.5 (11.5)                    | 0.67^a^        |
| Number of medications: mean (SD)      | 2.3 (2.5)                    | 2.8 (2.9)                      | **\<0.001**^b^ |
| Any cardiovascular disease: count (%) | 2,772 (62.1)                 | 403 (70.1)                     | **\<0.001**^c^ |
| Any IADL disability: count (%)        | 172 (3.9)                    | 58 (10.1)                      | **\<0.001**^c^ |

^a^ t-test; ^b^ Mann-Whitney U test; ^c^ Chi-square test. ^\*^ Seated blood pressure was measured with a sphygmomanometer.

In the *K*-means analysis on *N* = 4,467, all clustering variables significantly contributed to the solution (*P* \< 0.001). Table [2](#T2){ref-type="table"} shows the characteristics of the three morphological groups and Figure [2](#F2){ref-type="fig"} shows their hemodynamic profiles.
Of the 4,467 cases assigned to clusters, 1,456 (33%) were assigned to MOH-1 (characterised by a *small drop and overshoot*), 2,230 (50%) to MOH-2 (characterised by a *medium drop and slower but full recovery*), and 781 (18%) to MOH-3 (characterised by a *large drop and incomplete recovery*).

###### MOH clusters characterisation

|                                                                        | MOH-1        | MOH-2        | MOH-3        | *P*             |
|------------------------------------------------------------------------|--------------|--------------|--------------|-----------------|
| *Systolic blood pressure (SBP)*                                        |              |              |              |                 |
| Baseline SBP (mmHg)                                                    | 130.4 (20.7) | 137.2 (21.3) | 144.9 (24.9) | **\<0.001**^Σ^  |
| ∆ SBP (mmHg)                                                           | −24.1 (10.8) | −40.2 (9.9)  | −64.8 (15.2) | **\<0.001**^Σ^  |
| Nadir SBP (mmHg)                                                       | 106.3 (22.0) | 97.0 (22.1)  | 80.1 (24.5)  | **\<0.001**^Σ^  |
| SBP by 30 s (mmHg)                                                     | 143.1 (22.9) | 133.8 (23.0) | 122.9 (28.7) | **\<0.001**^Σ^  |
| SBP by 30 s (% baseline)                                               | 110.0 (8.1)  | 97.5 (6.6)   | 84.5 (11.4)  | **\<0.001**^Σ^  |
| SBP by 60 s (mmHg)                                                     | 142.5 (23.4) | 134.0 (23.2) | 125.8 (28.9) | **\<0.001**^Σ^  |
| SBP by 60 s (% baseline)                                               | 109.5 (8.2)  | 97.6 (6.6)   | 86.6 (11.5)  | **\<0.001**^Σ^  |
| SBP by 90 s (mmHg)                                                     | 142.9 (23.3) | 135.3 (23.0) | 127.3 (30.0) | **\<0.001**^Σ^  |
| SBP by 90 s (% baseline)                                               | 109.9 (8.8)  | 98.7 (7.1)   | 87.7 (13.0)  | **\<0.001**^Σ^  |
| SBP by 110 s (mmHg)                                                    | 143.2 (23.5) | 135.3 (22.8) | 126.5 (30.7) | **\<0.001**^Σ^  |
| SBP by 110 s (% baseline)                                              | 110.1 (9.1)  | 98.7 (7.1)   | 87.1 (13.6)  | **\<0.001**^Σ^  |
| *Diastolic blood pressure (DBP)*                                       |              |              |              |                 |
| Baseline DBP (mmHg)                                                    | 70.7 (10.4)  | 74.0 (10.7)  | 76.4 (13.1)  | **\<0.001**^Σ^  |
| ∆ DBP (mmHg)                                                           | −18.5 (7.8)  | −26.4 (7.5)  | −37.5 (9.7)  | **\<0.001**^Σ^  |
| Nadir DBP (mmHg)                                                       | 52.2 (12.3)  | 47.5 (12.8)  | 38.9 (14.6)  | **\<0.001**^Σ^  |
| DBP by 30 s (mmHg)                                                     | 75.6 (11.6)  | 72.3 (12.2)  | 66.0 (16.0)  | **\<0.001**^Σ^  |
| DBP by 60 s (mmHg)                                                     | 75.9 (11.3)  | 73.1 (11.7)  | 68.3 (15.2)  | **\<0.001**^Σ^  |
| DBP by 90 s (mmHg)                                                     | 75.7 (11.0)  | 73.3 (11.6)  | 68.6 (15.6)  | **\<0.001**^Σ^  |
| DBP by 110 s (mmHg)                                                    | 75.8 (11.3)  | 73.1 (11.5)  | 68.0 (15.7)  | **\<0.001**^Σ^  |
| *Heart rate (HR)*                                                      |              |              |              |                 |
| Baseline HR (bpm)                                                      | 66.4 (10.0)  | 65.3 (10.0)  | 63.1 (10.0)  | **\<0.001**^Σ^  |
| ∆ HR (bpm)                                                             | 20.6 (8.6)   | 19.8 (8.8)   | 18.4 (9.5)   | **\<0.001**^Σ^  |
| Nadir HR (bpm)                                                         | 87.0 (12.6)  | 85.1 (13.1)  | 81.5 (14.2)  | **\<0.001**^Σ^  |
| HR by 30 s (bpm)                                                       | 72.6 (11.4)  | 72.2 (11.8)  | 69.7 (12.2)  | **\<0.001**^Σ^  |
| HR by 60 s (bpm)                                                       | 74.1 (11.4)  | 74.1 (11.5)  | 71.8 (12.2)  | **\<0.001**^Σ^  |
| HR by 90 s (bpm)                                                       | 73.1 (10.9)  | 73.1 (11.2)  | 71.0 (11.9)  | **0.001**^Σ^    |
| HR by 110 s (bpm)                                                      | 73.1 (10.9)  | 73.2 (11.1)  | 71.0 (11.9)  | **0.001**^Σ^    |
| *Orthostatic intolerance and blackouts/faints*                         |              |              |              |                 |
| OI symptoms during active stand (%)                                    | 33.1         | 39.4         | 44.9         | **\<0.001**^χt^ |
| Ever had a blackout or fainted (%)                                     | 17.8         | 20.2         | 21.6         | 0.07^χt^        |
| *Demographics*                                                         |              |              |              |                 |
| Age                                                                    | 61.2 (8.1)   | 61.2 (8.1)   | 63.6 (9.0)   | **\<0.001**^Σ^  |
| Age range                                                              | 50 - 89      | 50 - 90      | 50 - 91      | -               |
| Female gender (%)                                                      | 48.8         | 53.4         | 64.4         | **\<0.001**^χt^ |
| *Medications*                                                          |              |              |              |                 |
| Polypharmacy (5 or more meds) (%)                                      | 16.7         | 16.8         | 21.0         | 0.02^χt^        |
| On cardiac glycosides (e.g. digoxin) (C01A) (%)                        | 0.8          | 0.8          | 0.5          | 0.76^χt^        |
| On antiarrhythmics class I and III (C01B) (%)                          | 0.2          | 0.4          | 0.4          | 0.58^χt^        |
| On beta-blocker (C07A) (%)                                             | 10.4         | 10.4         | 15.7         | **\<0.001**^χt^ |
| On diuretic (C03) (%)                                                  | 6.2          | 5.7          | 6.5          | 0.66^χt^        |
| On ACE-i (C09A) (%)                                                    | 10.1         | 10.9         | 9.7          | 0.58^χt^        |
| On ARA (C09C) (%)                                                      | 7.8          | 6.8          | 7.2          | 0.48^χt^        |
| On calcium channel blocker - with mainly vascular effects (C08C) (%)   | 7.3          | 7.3          | 6.4          | 0.66^χt^        |
| On calcium channel blocker - with direct cardiac effects (C08D) (%)    | 1.0          | 1.3          | 1.5          | 0.44^χt^        |
| On peripherally acting anti-adrenergic (e.g. alpha-blocker) (C02C) (%) | 0.7          | 1.8          | 2.3          | **0.004**^χt^   |
| On cardiac vasodilator (e.g. nitrates) (C01D) (%)                      | 1.0          | 1.2          | 1.7          | 0.35^χt^        |
| On peripheral vasodilator (C04A) (%)                                   | 0.2          | 0.2          | 0.3          | 0.92^χt^        |
| On antiepileptic (N03A) (%)                                            | 1.8          | 2.5          | 2.4          | 0.33^χt^        |
| On antipsychotic (N05A) (%)                                            | 0.8          | 0.9          | 1.8          | 0.07^χt^        |
| On anxiolytics (N05B) (%)                                              | 1.2          | 1.6          | 2.7          | 0.03^χt^        |
| On hypnotics or sedatives (N05C) (%)                                   | 3.4          | 3.3          | 4.4          | 0.36^χt^        |
| On antidepressant (N06A) (%)                                           | 4.0          | 5.7          | 10.2         | **\<0.001**^χt^ |
| *Comorbidities*                                                        |              |              |              |                 |
| Hypertension (%)                                                       | 32.1         | 32.4         | 35.3         | 0.25^χt^        |
| Angina (%)                                                             | 4.3          | 3.9          | 5.9          | 0.07^χt^        |
| Heart attack (%)                                                       | 4.3          | 3.9          | 3.3          | 0.51^χt^        |
| Heart failure (%)                                                      | 0.7          | 0.9          | 0.6          | 0.59^χt^        |
| Diabetes (%)                                                           | 7.4          | 6.1          | 5.9          | 0.21^χt^        |
| Stroke (%)                                                             | 1.2          | 1.1          | 1.3          | 0.89^χt^        |
| TIA (%)                                                                | 1.3          | 1.7          | 1.7          | 0.56^χt^        |
| Abnormal heart rhythm (%)                                              | 6.5          | 7.4          | 7.3          | 0.61^χt^        |
| Parkinson's disease (%)                                                | 0.1          | 0.4          | 0.5          | 0.13^χt^        |
| 3 or more chronic diseases (%)                                         | 23.2         | 23.7         | 26.9         | 0.13^χt^        |
| *Disability*                                                           |              |              |              |                 |
| Any IADL disability (%)                                                | 3.2          | 3.8          | 5.2          | 0.06^χt^        |
| *Cognition*                                                            |              |              |              |                 |
| MMSE score                                                             | 28.6 (2.0)   | 28.7 (1.7)   | 28.6 (1.6)   | 0.18^Σ^         |

^Σ^ Spearman's rank correlation coefficient; ^χt^ Chi-squared test for trend.

![MOH phenotypes (visual description of SBP, DBP and HR behaviour).](1471-2318-13-73-2){#F2}

OI and fainting history (Table [2](#T2){ref-type="table"})
----------------------------------------------------------

Across MOH groups, there was a significant gradient in OI and a non-significant gradient in history of faints/blackouts, in the expected direction (*P* \< 0.001 and *P* = 0.065, respectively).

MOH groups: other gradients (Table [2](#T2){ref-type="table"})
--------------------------------------------------------------

Age-wise, the mean age of participants in MOH-3 was 64 years, while in MOH-1 and MOH-2 it was 61 years. There was an increasing gradient of female sex across MOH clusters (49%, 53% and 64%, respectively). Twenty-one percent of participants in MOH-3 were on polypharmacy, as opposed to 17% of participants in MOH-1 and MOH-2. Sixteen percent of MOH-3 participants were on beta blockers, compared to 10% in MOH-1 and MOH-2.
Across MOH groups, there was an increasing burden of antidepressants (*P* \< 0.001). There was also an increasing burden of peripherally acting antiadrenergic agents (e.g. alpha blockers) (*P* = 0.004) (Table [2](#T2){ref-type="table"}). As regards comorbidity burden, 35% of participants in MOH-3 had a history of hypertension, compared to 32% and 32% of MOH-1 and MOH-2 participants, respectively. Twenty-seven percent of MOH-3 participants had three or more chronic diseases, compared to 23% and 24% of MOH-1 and MOH-2 participants, respectively. There were increasing gradients of history of angina and abnormal heart rhythm across MOH clusters, and a decreasing gradient in diabetes. None of these trends reached statistical significance. There was a non-significant gradient of increasing IADL disability across clusters (*P* = 0.058) (Table [2](#T2){ref-type="table"}).

In the multivariable binary logistic regression model to predict MOH-3 membership (Table [3](#T3){ref-type="table"}), the statistically significant factors were: antidepressants (OR = 1.99, 95% CI: 1.50 -- 2.64, *P* \< 0.001), female sex (OR = 1.73, 95% CI: 1.46 -- 2.04, *P* \< 0.001), beta blockers (OR = 1.60, 95% CI: 1.26 -- 2.04, *P* \< 0.001), and age (OR = 1.03, 95% CI: 1.03 -- 1.04, *P* \< 0.001). In addition, those on peripheral calcium channel blockers were less likely to have a MOH-3 response (OR = 0.68, 95% CI: 0.49 -- 0.94, *P* \< 0.001). In the subsample of those aged 70 or more, antidepressants, sex, beta blockers, age and peripheral calcium channel blockers still had 95% CIs not including 1 (Table [3](#T3){ref-type="table"}).

###### Generalised linear model to predict MOH-3 membership

**Full sample**

| Predictor              | B     | Std. error | *P*     | Odds ratio | 95% Wald CI for OR |
|------------------------|-------|------------|---------|------------|--------------------|
| Female sex             | 0.55  | 0.08       | \<0.001 | 1.73       | 1.46 -- 2.04       |
| C07A (beta blockers)   | 0.47  | 0.12       | \<0.001 | 1.60       | 1.26 -- 2.04       |
| C08C (peripheral CCB)  | −0.38 | 0.16       | \<0.001 | 0.68       | 0.49 -- 0.94       |
| N06A (antidepressants) | 0.69  | 0.14       | \<0.001 | 1.99       | 1.50 -- 2.64       |
| Heart attack           | −0.47 | 0.23       | 0.042   | 0.62       | 0.40 -- 0.98       |
| Age                    | 0.03  | 0.00       | \<0.001 | 1.03       | 1.03 -- 1.04       |

**Subsample ≥70 years old**

| Predictor              | B     | Std. error | *P*   | Odds ratio | 95% Wald CI for OR |
|------------------------|-------|------------|-------|------------|--------------------|
| Female sex             | 0.34  | 0.17       | 0.040 | 1.41       | 1.02 -- 1.96       |
| C07A (beta blockers)   | 0.44  | 0.19       | 0.021 | 1.55       | 1.07 -- 2.25       |
| C08C (peripheral CCB)  | −0.60 | 0.27       | 0.025 | 0.55       | 0.33 -- 0.93       |
| N06A (antidepressants) | 0.70  | 0.30       | 0.018 | 2.01       | 1.13 -- 3.60       |
| Diabetes               | −0.62 | 0.33       | 0.064 | 0.54       | 0.28 -- 1.04       |
| Age                    | 0.05  | 0.02       | 0.009 | 1.05       | 1.01 -- 1.09       |

Dependent variable: MOH type 3 (large drop, under-recovery). Binary logistic response, forward conditional procedure. Predictors entered: female sex, polypharmacy, C01A (cardiac glycosides), C01B (antiarrhythmics), C07A (beta blockers), C03 (diuretics), C09A (ACE-i), C09C (ARA), C08C (peripheral CCB), C08D (cardiac CCB), C02C (alpha blockers), C01D (cardiac vasodilators), C04A (peripheral vasodilators), N03A (antiepileptics), N05A (antipsychotics), N05B (anxiolytics), N05C (hypnotics, sedatives), N06A (antidepressants), hypertension, angina, heart attack, heart failure, diabetes, stroke, TIA, abnormal heart rhythm, Parkinson's disease, three or more chronic diseases, any IADL disability, age.
In the multivariable binary logistic regression model to predict OI during active stand (Table [4](#T4){ref-type="table"}), the statistically significant factors were: hypnotics and sedatives (OR = 1.83, 95% CI: 1.31 -- 2.54, *P* \< 0.001), MOH-3 (OR = 1.47, 95% CI: 1.25 -- 1.73, *P* \< 0.001), and history of heart attack (OR = 1.59, 95% CI: 1.16 -- 2.19, *P* = 0.004). In addition, advancing age (OR = 0.98, 95% CI: 0.98 -- 0.99, *P* \< 0.001) and female sex (OR = 0.84, 95% CI: 0.74 -- 0.96, *P* = 0.008) were associated with less OI during active stand. In the subsample of those aged 70 or more, being on antiepileptics and having 3 or more chronic diseases seemed to be associated with greater OI, while peripheral calcium channel blockers seemed protective (Table [4](#T4){ref-type="table"}). ###### Contribution of MOH-3 towards OI in the presence of potential confounders **Full sample** **B** **Std. error** ***P*** **Odds ratio** **95% wald confidence interval for odds ratio** ----------------------------- ----------- ---------------- --------- ---------------- ------------------------------------------------- ------ MOH-3 0.39 0.08 \<0.001 1.47 1.25 1.73 Female sex −0.17 0.06 0.008 0.84 0.74 0.96 N05C (hypnotics, sedatives) 0.60 0.17 \<0.001 1.83 1.31 2.54 Heart attack 0.47 0.16 0.004 1.59 1.16 2.19 3 or more chronic diseases 0.18 0.08 0.023 1.19 1.02 1.39 Any IADL disability 0.32 0.16 0.048 1.38 1.00 1.90 Age −0.02 0.00 \<0.001 0.98 0.98 0.99 MMSE −0.04 0.02 0.043 0.96 0.93 1.00 **Subsample ≥70 years old** **B** **Std. error** ***P*** **Odds ratio** **95% wald confidence interval for odds ratio** **Lower** **Upper** C08C (peripheral CCB) −0.53 0.22 0.015 0.59 0.39 0.90 N03A (antiepileptics) 0.91 0.44 0.036 2.50 1.06 5.86 3 or more chronic diseases 0.38 0.15 0.011 1.46 1.09 1.95 Dependent variable: phasic OI. Binary logistic response, forward conditional procedure. 
Predictors entered: female sex, polypharmacy, C01A (cardiac glycosides), C01B (antiarrhythmics), C07A (beta blockers), C03 (diuretics), C09A (ACE-i), C09C (ARA), C08C (peripheral CCB), C08D (cardiac CCB), C02C (alpha blockers), C01D (cardiac vasodilators), C04A (peripheral vasodilators), N03A (antiepileptics), N05A (antipsychotics), N05B (anxiolytics), N05C (hypnotics, sedatives), N06A (antidepressants), hypertension, angina, heart attack, heart failure, diabetes, stroke, TIA, abnormal heart rhythm, Parkinson's disease, three or more chronic diseases, any IADL disability, age, MOH-3, MMSE.

In the multivariable binary logistic regression model to predict history of blackouts or faints (Table [5](#T5){ref-type="table"}), statistically significant factors were: antiepileptics (OR = 2.39, 95% CI: 1.57 -- 3.63, *P* \< 0.001), history of abnormal heart rhythm (OR = 1.95, 95% CI: 1.49 -- 2.53, *P* \< 0.001), history of TIA (OR = 1.93, 95% CI: 1.15 -- 3.25, *P* = 0.013), female sex (OR = 1.35, 95% CI: 1.16 -- 1.57, *P* \< 0.001), antidepressants (OR = 1.62, 95% CI: 1.22 -- 2.15, *P* = 0.001), polypharmacy (OR = 1.37, 95% CI: 1.12 -- 1.68, *P* = 0.002), and OI during active stand (OR = 1.27, 95% CI: 1.09 -- 1.48, *P* = 0.003). In the subsample of those aged 70 or more, antiepileptics, history of abnormal heart rhythm, female sex and polypharmacy were directly associated with history of blackouts or faints.

###### Contribution of MOH-3 and OI towards history of blackouts or faints in the presence of potential confounders

**Full sample**

| Factor | B | Std. error | *P* | Odds ratio | 95% Wald CI (lower) | 95% Wald CI (upper) |
|---|---|---|---|---|---|---|
| OI | 0.24 | 0.08 | 0.003 | 1.27 | 1.09 | 1.48 |
| Female sex | 0.30 | 0.08 | \<0.001 | 1.35 | 1.16 | 1.57 |
| Polypharmacy | 0.32 | 0.10 | 0.002 | 1.37 | 1.12 | 1.68 |
| N03A (antiepileptics) | 0.87 | 0.21 | \<0.001 | 2.39 | 1.57 | 3.63 |
| N06A (antidepressants) | 0.48 | 0.14 | 0.001 | 1.62 | 1.22 | 2.15 |
| Transient Ischemic Attack | 0.66 | 0.26 | 0.013 | 1.93 | 1.15 | 3.25 |
| Abnormal heart rhythm | 0.67 | 0.13 | \<0.001 | 1.95 | 1.49 | 2.53 |
| Age | −0.01 | 0.00 | 0.013 | 0.99 | 0.98 | 1.00 |

**Subsample ≥70 years old**

| Factor | B | Std. error | *P* | Odds ratio | 95% Wald CI (lower) | 95% Wald CI (upper) |
|---|---|---|---|---|---|---|
| Female sex | 0.45 | 0.19 | 0.014 | 1.58 | 1.10 | 2.26 |
| Polypharmacy | 0.42 | 0.19 | 0.028 | 1.52 | 1.05 | 2.20 |
| N03A (antiepileptics) | 1.57 | 0.45 | \<0.001 | 4.82 | 2.00 | 11.61 |
| Abnormal heart rhythm | 0.84 | 0.24 | \<0.001 | 2.31 | 1.45 | 3.67 |

Dependent variable: ever had a blackout or fainted. Binary logistic response, forward conditional procedure. Predictors entered: female sex, polypharmacy, C01A (cardiac glycosides), C01B (antiarrhythmics), C07A (beta blockers), C03 (diuretics), C09A (ACE-i), C09C (ARA), C08C (peripheral CCB), C08D (cardiac CCB), C02C (alpha blockers), C01D (cardiac vasodilators), C04A (peripheral vasodilators), N03A (antiepileptics), N05A (antipsychotics), N05B (anxiolytics), N05C (hypnotics, sedatives), N06A (antidepressants), hypertension, angina, heart attack, heart failure, diabetes, stroke, TIA, abnormal heart rhythm, Parkinson's disease, three or more chronic diseases, any IADL disability, age, MOH-3, MMSE, OI.

In all multivariable models, all VIF were less than 2, excluding significant multicollinearity.

Discussion
==========

In the present study, we replicated and characterised the MOH groups and assessed the association of MOH-3 (postulated as a beat-to-beat equivalent of the SH-OH syndrome) with concurrent OI and fainting history.
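The B coefficients and odds ratios in Tables 4 and 5 are related in the standard way for logistic regression: OR = exp(B), with the 95% Wald interval given by exp(B ± 1.96 × SE). The following sketch (in Python, not part of the paper) reproduces the MOH-3 row of Table 4; small discrepancies against the tabulated values arise from the rounding of B and SE:

```python
import math

def or_with_ci(b, se, z=1.96):
    """Turn a logistic regression coefficient and its standard error
    into an odds ratio with a 95% Wald confidence interval."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

# MOH-3 row of Table 4 (full sample): B = 0.39, SE = 0.08
odds, lo, hi = or_with_ci(0.39, 0.08)
print(f"OR = {odds:.2f} (95% CI: {lo:.2f} -- {hi:.2f})")
# reported in Table 4: OR = 1.47 (1.25 -- 1.73), matching up to rounding
```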
The advantage of the present study is that it is based on a large population-based sample, as opposed to the previous study \[[@B25]\], which was based on a small convenience sample. The aim was to identify potentially modifiable factors to prevent OH and OI-related faints, particularly in relation to association with prescribed medications, and inform further longitudinal studies in TILDA. Naturally, in view of the observational and cross-sectional nature of the study, results need to be interpreted with caution and represent 'insights' rather than confirmed signals. Results highlighted the association of MOH-3 with non-modifiable risk factors such as age and sex, even in the subsample of those aged 70 or more. Consistent with previous literature, advancing age is associated with hypertension \[[@B38]\] and greater orthostatic blood pressure drops \[[@B39],[@B40]\]. There are also known differences in postural autonomic modulation between men and women, which might make women less able to compensate for drops in blood pressure in response to positional changes \[[@B41]\]. In terms of potentially modifiable factors, our findings highlight the important influence of certain types of medications (particularly *antidepressants* and *beta blockers*) in contributing to an SH-OH response (even in those aged ≥70), which may potentially lead to clinically adverse consequences (e.g. complaints of OI). In particular, results strengthen the hypothesis on the relationship between OH and antidepressant pharmacotherapy \[[@B42]\], and also between depressive symptoms and impaired orthostatic blood pressure response. It is known that OH and OI are more frequent in depressed older adults \[[@B43]\], and recent studies have found evidence for an association between the degree of orthostatic SBP drop and brain white matter hyperintensities volume in late-life depression \[[@B44],[@B45]\]. 
Our results also highlight the association between beta blockers and an impaired orthostatic blood pressure response. The higher burden of beta blockers in MOH-3 subjects is consistent with their lower baseline heart rate and poorer orthostatic heart rate response (Table [2](#T2){ref-type="table"}). Beta receptors have been implicated in the pathophysiology of OH \[[@B46]\]; indeed, a known primary autonomically mediated mechanism for maintenance of mean arterial pressure and orthostatic tolerance in healthy subjects is beta adrenergic-induced tachycardia \[[@B47]\]. Our results agree with previous observations that the pressor effects of beta blockers on standing blood pressure may be harmful for older patients with OH \[[@B48]\]. An interesting insight is the potential protective effect of peripheral calcium channel blocker (CCB) medications against MOH-3 (also present in the older subgroup), which is supported by the literature. For example, in a previous study, the peripheral CCB Nilvadipine did not aggravate OH in a sample of patients with Alzheimer's disease, despite significant reduction in the SBP of treated patients \[[@B49]\]. In another study in hypertensive patients, the peripheral CCB Cilnidipine showed significant decreases in blood pressure without adverse OH effects \[[@B50]\]. Furthermore, a study comparing the influences of anti-anginal drugs on cardiovascular responsiveness to orthostasis found that a dihydropyridine CCB influenced the latter less than a mononitrate or a beta-blocker \[[@B51]\]. On bivariate analyses, we found an increasing burden of peripherally acting antiadrenergic agents (e.g. alpha blockers) across MOH groups. It is likely that the number of patients on alpha blockers was too small to detect an independent effect in multivariable analyses. 
Yet, the alpha(1)-adrenergic receptor pathway is known to be critical in the recovery from initial OH to prevent cerebral hypoperfusion and ultimately syncope \[[@B52]\], so the clinical relevance of alpha blockers in the SH-OH syndrome might be significant. Our results support previous findings that OI is a wider, more complex syndrome than the mere sign of having an impaired orthostatic hemodynamic response \[[@B19]\]. In the full sample, MOH-3 was an independent predictor of OI, but this was not the case in the older subsample. In terms of non-modifiable risk factors, and in the full sample, OI was less likely to be reported by females and with advancing age. Again, these age and sex effects merit further investigation but, since they are not modifiable, were not the focus of our study. Previous studies have focused on sex-related differences in OI \[[@B53],[@B54]\]. As regards the age effect, a previous study investigating the changing face of orthostatic and neurocardiogenic syncope with age found that symptomatic patients were significantly younger than asymptomatic ones \[[@B55]\], which is in keeping with our results. Although the age and sex effects are non-modifiable, they could be relevant in terms of the postulated sequence of pathophysiological steps (OH → OI → fainting). In short, female and older participants were more likely to have an MOH-3 pattern (Table [3](#T3){ref-type="table"}); however, when OI was the dependent variable and MOH-3 membership a covariate, female and older participants were significantly less likely to have OI (Table [4](#T4){ref-type="table"}). And when MOH-3 and OI were included in a model with blackouts or faints as the dependent variable, being female was once again significantly associated with increased risk (Table [5](#T5){ref-type="table"}). The differential effects of age and sex are difficult to explain in our cross-sectional design.
However, in terms of the age effect, it is plausible that advancing age may lead to an impairment of *both* the orthostatic hemodynamic response (i.e. more MOH-3) *and* the awareness of the latter (i.e. less OI). Indeed, we know that awareness of orthostatic hypotension is influenced by age: in younger subjects it is usually brief but symptomatic, whereas in older individuals the situation is reversed \[[@B56],[@B57]\]. In terms of the differential effects of sex in the pathophysiological sequence OH → OI → fainting, it is difficult to explain why women had more MOH-3 and more history of faints, but less OI. OI is much more common in young women relative to men, children or older women \[[@B58],[@B59]\], and women in our sample were middle-aged and older. Another possibility is the presence of sex differences in the self-report of OI; for example, a previous study showed that symptoms of vertigo, dizziness or unsteadiness may be more related to psychological factors in men \[[@B60]\]. A full understanding of these sex effects requires purpose-designed research. According to our full-sample results, OI is more likely to be reported by more co-morbid and disabled patients with an MOH-3 hemodynamic pattern, in whom it would be prudent to avoid the use of hypnotics and sedatives. Indeed, hypnotics and sedatives may add to the OI effect of MOH-3, and this is consistent with previous observations on the reduced tolerability of benzodiazepines in the elderly \[[@B61]\] and the greater incidence of OI in older subjects taking sedatives and hypnotics \[[@B62]\]. In keeping with the latter, and consistently in meta-analyses and systematic reviews, the use of sedatives and hypnotics, antidepressants, and benzodiazepines has been shown to be significantly associated with falls in older individuals \[[@B63],[@B64]\].
People with a history of faints or blackouts may suffer from conditions not directly related to OH, such as epilepsy or cardiac syncope (hence the association with anti-epileptics and anti-arrhythmics). However, in both the full and older samples, polypharmacy (i.e. being on five or more regular medications) was independently associated with a history of blackouts or faints. We know that age-related physiologic impairments of heart rate, blood pressure, baroreflex sensitivity, and cerebral blood flow, in combination with a higher prevalence of comorbid disorders and concomitant medications (including polypharmacy), account for the increased susceptibility of older persons to syncope \[[@B65]\]. In the full sample, phasic OI was more common in those with a history of faints or blackouts, suggesting that part of the latter syndrome may be hemodynamic in nature \[[@B66]\]. Interestingly, faints or blackouts were not significantly related to MOH-3, which supports previous opinion that postural symptoms (i.e. phasic OI) correlate much more strongly with endpoint clinical events such as (pre-)syncope, blackouts and recurrent unexplained falls than does OH (i.e. the isolated blood pressure drop sign) *per se* \[[@B20],[@B21]\]. Indeed, it would appear that the above-mentioned three pathophysiological steps (OH → OI → fainting) operate as a chain, so the prevention of endpoint clinical events could be tackled both by improving OH hemodynamics and by minimising OI as a *mediator* \[[@B21]\]. We know that fall-prevention interventions should be provided to older people through a structured, multifaceted approach \[[@B67]\]. As in the previous pilot MOH investigation \[[@B25]\], there was an increasing gradient in baseline SBP across clusters, but only MOH-3 had a mean baseline SBP in the hypertension range (i.e. ≥ 140 mmHg). Indeed, MOH-3 resembles the *syndrome of supine hypertension--orthostatic hypotension* (SH-OH), as applied to beat-to-beat orthostatic blood pressure data.
We agree with previous recommendations that, of all patients with SH-OH, those who have OI require the most clinical attention \[[@B7]\]. In patients with SH-OH, the avoidance of medications that may exacerbate OH and OI, and the judicious use of antihypertensive classes that are less likely to aggravate postural blood pressure changes, may be safe and adequate approaches to the treatment of this challenging condition \[[@B68]\]. Interestingly, antihypertensive types that have shown consistent benefits in the treatment of hypertension in the very elderly (e.g. ACE-i and diuretics, as in the HYVET trial \[[@B69],[@B70]\]) were not linked with any of our adverse outcomes (i.e. MOH-3, OI, history of faints). Several other studies (e.g. SYST-EUR, CONVINCE, VALUE) have demonstrated the benefits of treating aged hypertensive patients with cardiovascular medications that were not associated with adverse outcomes (e.g. angiotensin receptor antagonists), or even seemed protective (i.e. peripheral CCB), in our study \[[@B71]\]. A Cochrane systematic review established that treating healthy older persons with hypertension is highly efficacious, and that benefits of treatment with low-dose diuretics or beta-blockers were clear for persons in their 60s to 70s with either diastolic or systolic hypertension \[[@B72]\]. However, this Cochrane review concluded that differential treatment effects based on patient risk factors, pre-existing cardiovascular disease and competing co-morbidities could not be established from the published trial data \[[@B72]\]. Our study sheds light on the latter limitation and supports the overall conclusion that, in treating older and frailer hypertensive patients, the evidence of benefit does not necessarily have to conflict with the evidence of potential harm. A number of limitations in this study must be noted.
Firstly, its observational cross-sectional design precludes the inference of causal relationships and direct extrapolation of associations. As we stated above, results are to be interpreted with caution and represent 'insights' rather than confirmed signals. Secondly, despite the large total sample size, we know that the 38% of participants who did not have a Health Centre assessment were more likely to have lower socio-economic status and higher levels of physical disability, and were weaker (handgrip strength) and slower (walking speed), than Health Centre respondents \[[@B31]\]. In addition, as Table [1](#T1){ref-type="table"} showed, participants who attended the Health Centre assessment but had no MOH data were older, more hypertensive, more medicated, and more comorbid and disabled than those whose active stand data were included. For these reasons, the frailest in the population may have been underrepresented in the analytic sample. Precise information as to what made participants not complete each stage of the participants' flow chart (Figure [1](#F1){ref-type="fig"}) is not available, but frailty-related reasons are very likely. It is known that frailty is associated with missing data in research designs that involve the collection of physical performance measures (i.e. as required in the active stand) \[[@B73]\]. The statistical techniques employed in our analyses also have limitations. Indeed, the *K*-means cluster analysis is exploratory in nature, and the scale and variability of the clustering variables may affect results in unstandardised analyses \[[@B74]\]. However, we replicated here the exact same *K*-means clustering method as previously conducted in a different, smaller convenience sample \[[@B25]\], leading to very similar results in terms of the characterisation of the MOH clusters.
Hence, it is plausible that these three MOH groups exist 'out there' in clinical practice, although naturally we cannot confirm their existence as a true group structure. There is ongoing work in TILDA in this area. As stated above, some of the results from the logistic regression models should be interpreted with caution due to the nature of how the multivariate logistic models were developed, and especially due to the presence of a large number of covariates in the models. In addition, only a few participants were on medications potentially associated with MOH-3, such as peripheral vasodilators or antiarrhythmics, so results for the latter classes may have been underpowered. A limitation of the active stand protocol is that information was lacking on precise dosages, time of ingestion of, and compliance with, the reported medications. Even though polypharmacy was used as a control variable in all models, specific drug interactions towards the outcomes of interest could not be investigated. Limitations of the active stand test itself include its known diurnal variability and relationship with meals \[[@B1]\]. Finally, a limitation of the present study is the impact of *frailty* on orthostatic hemodynamic responses, OI and faints/blackouts. Given that the latter are complex disorders involving multiple physiological systems, the inclusion of frailty may have complemented the inclusion of comorbidity, disability and cognition in the models. For example, older adults without measured hypertension, who are not on an anti-hypertensive medication, appear to have high physiological reserve in general \[[@B75]\]. Unsurprisingly then, many people with OH have many other health deficits as well, which can combine to make the person frail; when frailty is taken into account, the specific influence of OH on risk is greatly attenuated, even becoming statistically non-significant \[[@B76]\].
This is an important area of ongoing work in the longitudinal dimension of TILDA.

Conclusions
===========

In the present study, we replicated our previously proposed *morphological* classification of orthostatic hypotension (MOH, intended for beat-to-beat monitoring) in the first wave of The Irish Longitudinal Study on Ageing, and we found that the clinical associations were similar to those previously reported (e.g. association with OI) \[[@B25]\]. In addition, we proposed the MOH-3 pattern as a beat-to-beat analogue of SH-OH and studied its associations with cardiovascular and neurological medications, concurrent OI, and history of fainting. Our findings offer cross-sectional insights that, if further validated, may inform the development of clinical guidelines for the treatment of SH-OH. Based on the results of the current study, in a typical clinical setting using phasic orthostatic blood pressure measurements, MOH-3 should be recognised by the presence of baseline hypertension (\>140 mmHg), an initial orthostatic blood pressure drop greater than 40 mmHg, and failure to recover 90% of the baseline blood pressure after 2 minutes of standing. If a patient fulfilling those criteria has complaints of OI, then his/her risk of (pre-)syncope, falls and blackouts is higher, and clinicians should avoid (if possible) medications that may exacerbate OH and OI (such as beta blockers, antidepressants, and hypnotics and sedatives), and make judicious use of antihypertensives that are less likely to aggravate postural blood pressure changes. Naturally, this should be done within a wider multifaceted approach.

Competing interests
===================

The authors declare that they have no competing interests.
Authors' contributions
======================

All authors: 1) made substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) were involved in drafting the manuscript or revising it critically for important intellectual content; and 3) gave final approval of the version to be published.

Pre-publication history
=======================

The pre-publication history for this paper can be accessed here: <http://www.biomedcentral.com/1471-2318/13/73/prepub>

Acknowledgements
================

The authors would like to acknowledge the contribution of the participants in the study, members of the TILDA research team, study nurses, and administrators. Funding was gratefully received from the Atlantic Philanthropies, the Irish Government, and Irish Life plc. The sponsors played no part in the design, methods, subject recruitment, data collection, analysis, or preparation of this paper.
WTA Congoleum Classic

The WTA Congoleum Classic is a defunct WTA Tour-affiliated tennis tournament held in 1983. It was played in Palm Springs, California, in the United States on outdoor hard courts.

Results

Singles

Doubles

References

WTA Results Archive

Category:Defunct tennis tournaments in the United States
Category:Hard court tennis tournaments in the United States
Category:WTA Tour
Q: What is "This is Just to Say" about?

In "This is Just to Say" by William Carlos Williams, the speaker appears to deliver an apology for stealing the plums of the person to whom the poem is addressed. I have heard some people analyze this poem as being truly about plums. However, I have seen others analyze this poem as being about murder or sexual assault. What is the poem truly about? Here is the text of the poem:

I have eaten
the plums
that were in
the icebox

and which
you were probably
saving
for breakfast

Forgive me
they were delicious
so sweet
and so cold

A: From here:

Williams's poem allows the reader a wide range of possibilities. He or she is free to decide whether it is "about" temptation, a re-enactment of the fall, or the triumph of the physical over the spiritual. Each reader is left free to construct a poem, and the reader becomes the owner of the resulting poem.

The site also notes that there has never been a consensus on what the poem meant, and that Williams never stated the meaning of the poem. Another site says:

It might be as simple as this: A little poem about eating plums is too delicious to spend that much time thinking about. Over-analyzing removes the joy we receive from reading these words, smiling, and imagining how perfectly ripe those plums must have tasted.

No one can agree on a more complex solution than "it's just about plums."

Conclusion: Either it's up to you, or it's just about plums.
Electronic devices, such as the integrated circuit (IC) 103 shown in FIG. 1, commonly include circuits 106 (labeled "HV circuits") that operate from a relatively high DC supply voltage (for example, 3V). IC 103 also includes circuits 109 (labeled "LV circuits") that operate from a lower DC supply voltage (for example, 1V), i.e., lower than the relatively high DC supply voltage. To accommodate such circuits, IC 103 includes an internal linear regulator 112 to generate the low-voltage supply (1V) from the high-voltage supply (3V, as provided by battery 115). Linear regulator 112 drives the 1V supply rail, including a pin to an external capacitor 118. Often, the actual supply voltage to HV circuits 106 is higher than the level that would support a given performance specification. For example, although HV circuits 106 may only have a minimum operating supply voltage of 2V, they may be supplied by a 3V power source. Assuming that HV circuits 106 consume approximately the same supply current independent of the supply voltage, HV circuits 106 consume about 50% more power than necessary. Similarly, linear regulator 112 consumes approximately two times the power consumed by LV circuits 109. To reduce the excess power consumption in the IC in FIG. 1, IC 103 in FIG. 2 incorporates a switch-mode DC-DC regulator 121 to drop a higher supply voltage down to a level closer to the minimum voltage actually required by the circuitry. For example, an inductor-based switch-mode DC-DC regulator 121 (using inductor 124 in conjunction with capacitor 118A) is used in the arrangement in FIG. 2 to step down the voltage of a 3V battery 115 to the 2V level appropriate for HV circuits 106. A switching DC-DC regulator can provide power transfer efficiencies much higher than that of a typical linear regulator.
Using a linear regulator to drop the battery voltage from 3V to 2V for HV circuits 106 would have relatively little impact on the power consumed from the battery, while switch-mode DC-DC regulator 121 with, say, 90% efficiency, would reduce the battery power drain by approximately 26%. In IC 103 of FIG. 2, switch-mode DC-DC regulator 121 is used to generate the HV supply (2V) used by both HV circuits 106 and linear regulator 112, which generates the LV supply. By reducing the supply voltage to linear regulator 112, switch-mode DC-DC regulator 121 reduces the power loss in linear regulator 112 relative to the arrangement in FIG. 1. Linear regulator 112 in FIG. 2, however, still wastes about the same amount of power as consumed by LV circuits 109 (compared to wasting twice the power consumed by LV circuits 109 in FIG. 1). One way of reducing the power lost in linear regulator 112 is to further reduce its input voltage. However, given that the 2V supply generated by switch-mode DC-DC regulator 121 is limited by the minimum operating voltage of HV circuits 106, switch-mode DC-DC regulator 121 output voltage cannot be further reduced, given the circuit arrangement of FIG. 2. An alternative arrangement, shown in FIG. 3, uses switch-mode DC-DC regulator 121 to power LV circuits 109 directly from battery 115, i.e., keep HV circuits 106 powered directly from external battery 115. In this arrangement, switch-mode DC-DC regulator 121 generates the 1V supply for LV circuits 109, while HV circuits 106 operate directly from 3V battery 115. Although the power consumed by HV circuits 106 does not benefit from using switch-mode DC-DC regulator 121, the power loss of a linear regulator (as shown in FIGS. 1-2) is eliminated and replaced by a smaller power loss in switch-mode DC-DC regulator 121. Depending on the relative power consumption of HV circuit 106 and LV circuits 109 and their operating supply voltages, some ICs might benefit more from the arrangement shown in FIG. 
2, while other ICs might benefit more from the arrangement shown in FIG. 3. For example, if the HV circuits' power consumption is much larger than the LV circuits' power consumption, using switch-mode DC-DC regulator 121 to generate the supply to HV circuits 106 provides a larger benefit, as the power saved in HV circuits 106 would exceed the potential power savings of the arrangement in FIG. 3. Conversely, if the power consumed by LV circuits 109 dominates, the arrangement in FIG. 3 would provide a larger benefit, given that the power saved by eliminating linear regulator 112 would exceed the power loss in HV circuits 106 due to the larger supply voltage provided to HV circuits 106.
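The power figures quoted above can be checked with a little arithmetic. The sketch below (Python, illustrative only; it adopts the text's idealisations of a constant load current, a linear regulator that passes its output current straight through from its input, and a 90%-efficient switcher) reproduces the roughly 26% battery-drain reduction of the FIG. 2 arrangement:

```python
V_BAT, V_HV, EFF = 3.0, 2.0, 0.90  # battery, HV supply, switcher efficiency
i_hv = 1.0                         # HV load current, normalised to 1 A

# FIG. 1: HV circuits run straight from the 3 V battery.
p_fig1 = V_BAT * i_hv              # 3.0 W

# FIG. 2: a 90%-efficient switch-mode regulator steps 3 V down to the
# 2 V the HV circuits actually need; input power = output power / efficiency.
p_fig2 = (V_HV * i_hv) / EFF       # about 2.22 W

saving = 1 - p_fig2 / p_fig1
print(f"battery drain reduced by ~{saving:.0%}")  # ~26%, as stated in the text

# The 3 V supply also makes the HV circuits burn 50% more power than at
# their 2 V minimum: 3.0 / 2.0 - 1 = 0.5.
```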
Claim: Red Bull lost a $13 million lawsuit after promising that its customers would grow wings. FALSE Example: [Collected via e-mail, October 2014] Did someone sue Red Bull and win, receiving $13M because he did not grow wings? Origins: In August 2014, Red Bull agreed to pay more than $13 million to settle a false advertising lawsuit. While the lawsuit made reference to the company’s slogan, “Red Bull Gives You Wings,” plaintiff Benjamin Careathers did not sue Red Bull because he remained wingless after consuming the energy drink. Rather, Careathers sued the company for false advertising, claiming they made false promises about Red Bull’s ability to boost energy. According to the claims made in Careathers’ lawsuit, Red Bull does not provide consumers with any more of an energy boost than drinking a cup of coffee would supply: Even though there is a lack of genuine scientific support for a claim that Red Bull branded energy drinks provide any more benefit to a consumer than a cup of coffee, the Red Bull defendants persistently and pervasively market their product as a superior source of ‘energy’ worthy of a premium price over a cup of coffee or other sources of caffeine. Such deceptive conduct and practices mean that [Red Bull’s] advertising and marketing is not just ‘puffery,’ but is instead deceptive and fraudulent and is therefore actionable. So how did this rumor get started? 
It’s possible that the temptation to pen humorous headlines proved irresistible for many journalists, and after Red Bull agreed to settle the false advertising lawsuit, several publications reporting on the case employed titles that playfully referenced the company’s slogan, such as “Red Bull Will Pay $10 to Customers Disappointed the Drink Didn’t Actually Give Them ‘Wings'” and “Red Bull Does Not Give You Wings: Company Settles $13 Million Lawsuit Over False Advertising Claims.” Although Red Bull agreed to settle the false advertising suit, the company did not admit to any wrongdoing, proclaiming in a statement that: Red Bull settled the lawsuit to avoid the cost and distraction of litigation. However, Red Bull maintains that its marketing and labeling have always been truthful and accurate, and denies any and all wrongdoing or liability. Additional information: Settlement Claim Form Last updated: 8 October 2014 Sources: O’Reilly, Laura. “Red Bull Will Pay $10 to Customers Disappointed the Drink Didn’t Actually Give Them ‘Wings.'” Business Insider. 8 October 2014. Rothman, Max “Red Bull to Pay $13 Million for False Advertising Settlement.”
2° (191 x 291mm). Large woodcut of King Ferdinand the Catholic, with floral sidepieces on title, woodcut historiated and decorative initials opening each part. (Washed, some paper repairs, title reinforced on verso, last leaf partly restored with loss of a few letters.) Modern vellum, lettered on spine in manuscript. Provenance: some early annotations. RARE FIRST EDITION of the official apology for the conquest of Navarre. The Kingdom was invaded in 1512 by Ferdinand the Catholic as part of the second phase of the War of the League of Cambrai. The invasion began in Vitoria, capital of the Basque Country, and continued towards Pamplona. The Navarrese Cortes (Parliament) had to accept annexation to Castile, which agreed to permit Navarrese autonomy and identity. Juan López de Palacios Rubios was most famous for his Requerimiento, a document read to the New World Indians by Conquistadors to recognise the sovereignty of the Spanish monarch. Norton corrected previous identifications of this edition and assigns it to Biel de Basilea at Burgos, active c. 1515-17. No copy has appeared at auction in over 30 years. Palau 141652; BL Spanish, p.53; Haeb. 509; Salva 3721; Norton, Spain, 294.
When is it OK to shoot a child soldier? - EduardoBautista https://www.economist.com/news/americas/21719821-canada-writes-rules-troops-who-face-armed-nine-year-olds-when-it-ok-shoot-child ====== pdpi > By acknowledging their right to defend themselves, Canada’s government may > lessen the trauma of those forced to fight the youngest warriors. I think the closing line sums it up nicely. Killing children is not, and will never be, OK. It's a truly terrible thing to do, and only a truly monstrous person would ever do it without remorse. But war does, sometimes, force you to do terrible things, and you have to live with that for the rest of your life. This seems to me like a fine attempt at, at least, saying "we understand". ~~~ BrandoElFollito These kind of articles are written in the comfort of an office in London, Paris or New York. I am sure that if I was a soldier and another soldier (absolutely no matter the age) was threatening my life, I would shoot without hesitation. And yes ,I have small children who in an alternative reality could be soldiers and yes, I love them very much. ------ M_Grey The tragic reality is that in modern conflict you often engage with people you can't readily identify as an adult or child until after the shooting is over. If someone is trying to kill you, often from a distance with an AK-47 or mortar, you're going to act to save yourself and the people around you. Of course, many people will struggle in the aftermath regardless of how necessary their actions may have been. "Oh I had to kill that kid who could have been my nephew..." can be very cold comfort indeed. In the worst case when someone is trying to kill you with a blade you're going to shoot them, and it's probably going to be pretty traumatic. ------ Overtonwindow Treat child soldiers as a threat until they prove they're not a threat. Letting your guard down just because it's a kid is how many soldiers got killed in Vietnam. 
------ ptaipale The adopted doctrine is well-founded: if child soldiers shoot at you, you shoot them. It's not easy, but it's right. Doing otherwise would not only put you and your own troops in danger; it would also encourage ruthless and immoral warlords to use _more_ child soldiers, since they would have such an advantage. ~~~ michaelrhansen Sadly agreed. And I am a father. However for every child shot, I wish there was a dollar amount we could commit to spending to reach and educate them. ------ NuSkooler I haven't read the article because apparently I've reached my "free limit" or something. ...but the short answer: It's never OK, but perhaps a necessary evil if you're in the me vs them situation. But that should go for adults as well in general. ~~~ rocky1138 The article is mainly about Canada's armed forces' answer to the question and is in line with your opinion. ------ LyndsySimon From a moral standpoint, I see no reason the attacker's age matters. If you're morally justified in using lethal force, you're justified whether they are 8, 18, or 88. ~~~ nindalf As a society we hold people responsible for their actions beyond a certain age. That particular age might vary from country to country, but everyone will agree that a 21 year old can face any consequence of a decision they've made. That's why they are allowed to drink, vote, join the military etc. Everyone will also agree that a 12 year old does not understand the consequences of their actions. Such a child might not know anything other than murdering people but also might not be capable of evaluating their own actions with respect to right vs wrong. This is also why there is an alternate legal system for minors accused of crimes. Now a peacekeeper who shoots an armed child has to live with the knowledge that they probably killed someone who didn't know any better. 
They've been the judge, jury and executioner in a trial that lasted a split second, despite the fact that children get a lot of leeway when it comes to most crimes.
------ pinaceae
When he/she is about to murder your loved ones with a machete.
~~~ cortesoft
It is easy to be trite and say that in the abstract, sitting at your computer. In certain situations, it might be that obvious. Most situations, however, are not so clear-cut. At what point does it shift to 'about' to murder your loved ones? As soon as you see the kid with the machete? What if he is 300 yards away? Do you shoot him? 200 yards? 50? 10? What if he has a gun, but isn't pointing it at you? What if the kid is 3 years old and holding a gun? 5? 2?
~~~ pinaceae
"It is easy to be trite and say that in the abstract, sitting at your computer." Right back at'ya. And a 2-year-old child soldier? The hell, man, running out of strawman arguments?
Q: How to update data without possible duplication of id?

I am updating data from a database in a C# WinForm. When I update, I don't want the update to go through if the same IDnum is already in my data. I am having a problem with my bool method.

    public bool ExistsKey(string keyField, string table, string value, SqlConnection con)
    {
        try
        {
            if (con.State != ConnectionState.Open)
                con.Open();
            using (SqlCommand com = new SqlCommand(
                string.Format("IF EXISTS(SELECT * FROM {0} WHERE {1}='{2}') SELECT 1 ELSE SELECT 0",
                    table, keyField, value), con){
                var result = com.ExecuteScalar();
                return result != null && (int)result == 1;
            }
        }
        catch
        {
            return false;
        }
        finally
        {
            con.Close();
        }
    }

    public void Update()
    {
        if (ExistsKey("idnum", "TableVotersInfo", _idnum.ToString(), sc))
        {
            MessageBox.Show("ID number already exist!");
            FAddVoters._cleardata = "0";
            FAddVoters._checkID = checkID;
        }
        else
        {
            if (sc.State != ConnectionState.Open)
                sc.Open();
            try
            {
                using (cmd = new SqlCommand(@"UPDATE TableVotersInfo
                    SET Education=@ed, idnum=@idnum, FirstName=@firstname, MiddleName=@middlename,
                        LastName=@lastname, SchoolYear=@schoolyear, ControlNum=@controlnum
                    WHERE id=@id
                    SELECT @ed, @idnum, @firstname, @middlename, @lastname, @schoolyear, @controlnum
                    WHERE @id NOT IN (SELECT idNum FROM TableVotersInfo);", sc))
                {
                    cmd.Parameters.AddWithValue("@id", _id);
                    cmd.Parameters.AddWithValue("@ed", _ed);
                    cmd.Parameters.AddWithValue("@idnum", _idnum);
                    cmd.Parameters.AddWithValue("@firstname", _firstname);
                    cmd.Parameters.AddWithValue("@middlename", _middlename);
                    cmd.Parameters.AddWithValue("@lastname", _lastname);
                    cmd.Parameters.AddWithValue("@schoolyear", _schoolyear);
                    cmd.Parameters.AddWithValue("@controlnum", _controlnum);
                    cmd.ExecuteNonQuery(); // <-- this is what you want
                    MessageBox.Show("Data Successfully Updated!");
                    FAddVoters._cleardata = cleardata;
                    FAddVoters._checkID = "0";
                }
            }
            catch (SqlException ex)
            {
                if (ex.Number == 2627) // duplicated primary key
                {
                    MessageBox.Show("ID number already exist!");
                    FAddVoters._cleardata = "0";
                    FAddVoters._checkID = checkID;
                }
                else
                {
                    MessageBox.Show("There was some error while attempting to update!\nTry again later.");
                }
            }
            finally
            {
                sc.Close();
            }
        }
    }

What am I doing wrong here? The problem is in my bool. I get an error on

    var result = com.ExecuteScalar();

that says 'Invalid initializer member declarator.'

My Table (source: akamaihd.net)

A: Well, if you look closely at your code:

    using (SqlCommand com = new SqlCommand(
        string.Format("IF EXISTS(SELECT * FROM {0} WHERE {1}='{2}') SELECT 1 ELSE SELECT 0",
            table, keyField, value), con){

there is a bracket missing after "con)", causing the compiler to think that the rest is part of your SqlCommand construct initializer. If you put in the missing bracket, all will be OK :)

    using (SqlCommand com = new SqlCommand(
        string.Format("IF EXISTS(SELECT * FROM {0} WHERE {1}='{2}') SELECT 1 ELSE SELECT 0",
            table, keyField, value), con))
    {
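Beyond the missing bracket, note that ExistsKey splices `value` directly into the SQL via string.Format, which is an injection risk; binding the value as a parameter avoids that. Here is a minimal sketch of the same duplicate-ID check, written in Python with sqlite3 rather than the thread's C#/SqlCommand (the `exists_key` name, the whitelist, and the demo table are illustrative assumptions, not from the original code):

```python
import sqlite3

def exists_key(conn, table, key_field, value):
    """Return True if any row in `table` has `key_field` equal to `value`."""
    # Table/column names cannot be bound as parameters, so validate them
    # against a known whitelist; the value itself is always bound, which
    # is what prevents SQL injection.
    allowed = {("TableVotersInfo", "idnum")}
    if (table, key_field) not in allowed:
        raise ValueError("unexpected table/column name")
    sql = f"SELECT EXISTS(SELECT 1 FROM {table} WHERE {key_field} = ?)"
    (found,) = conn.execute(sql, (value,)).fetchone()
    return bool(found)

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableVotersInfo (id INTEGER PRIMARY KEY, idnum TEXT)")
conn.execute("INSERT INTO TableVotersInfo (idnum) VALUES ('1001')")
print(exists_key(conn, "TableVotersInfo", "idnum", "1001"))  # True
print(exists_key(conn, "TableVotersInfo", "idnum", "9999"))  # False
```

In C# the equivalent move is to keep the IF EXISTS query but pass the value through com.Parameters.AddWithValue, exactly as the Update command already does for its own parameters.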
Sexy teen Halloween costumes: What's a parent to do?

If it's Halloween, it's time for dressing up and indulging in make-believe. But parents may be taken aback by the sexy costumes marketed to teen girls. Experts discuss what parents can do about it.

Editor's note: This story has been updated from an earlier version to remove an erroneous detail about a scene from the movie "Mean Girls."

If it's Halloween, it's time for kids to dress up and indulge in make-believe. But along with costumes for ghosts, goblins and the latest pop culture icons (Duck Dynasty, anyone?), parents are often taken aback by the sexy outfits marketed to teen and preteen girls. USA TODAY's Michelle Healy talked to gender studies expert Annalisa Castaldo of Widener University in Chester, Pa., and counseling and school psychology professor Sharon Lamb of the University of Massachusetts, Boston, about the popularity of these costumes and how parents can help their daughters develop a healthy and positive attitude about their bodies.

Q: When did the concept of sexy Halloween costumes for teen and tween girls become cool?

Castaldo: Sexy adult costumes have been around for years, but costumes designed for teens and tweens have more recently begun displaying a sexualized edge.

Lamb: The movie Mean Girls certainly helped popularize it. There's a line in the film, repeated by many girls, that Halloween is the one night a year you can (dress like) a slut.... So there is this attitude that (sexy costumes) are the cool costumes.

Q: Isn't this simply about playing pretend and seeking attention?

Lamb: Girls get that this kind of thing is risqué and proves to the world that they're not under their parents' thumb. They understand from an early age that acting sexy and looking sexy gets you attention, and at the same time they understand that looking sexy doesn't necessarily mean you want to have sex.
It just means that you want to look mature, and that means looking sexy, like the Victoria's Secret models and the (women) on prime-time TV who parade around in their underwear. But dressing up one Halloween looking sexy is not going to ruin their lives.

Castaldo: What's most disturbing is that girls have much less choice when they go to the costume store to be seen as anything other than a physical object. The only way they can dress up for Halloween is as something that reveals their body. A boy can be a pirate with baggy pants, an eye patch, a sword and a parrot on his shoulder. The costume matches the character. With the girl, the pirate is wearing a short skirt. As a superhero, she's wearing a short skirt. And my favorite is Cookie Monster with a short skirt. Every costume becomes about the physicality of the body it reveals, not about the characteristics of the character being impersonated.

Q: A 2007 American Psychological Association task force said that the proliferation of sexualized images of girls and young women in advertising, merchandising and media is harmful to girls' self-image and healthy development. So there's a bigger issue here than just the inappropriateness of some costumes?

Lamb: I was on the task force that wrote that statement. We don't want to give adolescent girls a message that it's wrong to be sexual and that there's something wrong with being sexy. If you're a teenager or adult, most people at some point do want to be sexy. But a message we do want to give is that our society (allows for) a very narrow stereotype of who and what is sexy, and very often it's about being sexy in a pornographic way. That's all you're seeing these days on television, movies, video games, music videos. Another important message is that if you over-invest in looking a certain way, whether it's sexy or anything else, you're over-investing in something very superficial when you could be developing your talents and intelligence in multiple other ways.
Castaldo: The bigger issue is that this continues to teach girls every day that what matters most about them is their physical body. Their intellect, their ethics, their character are all secondary to the physical presentation and their need to please the world with how they look. And there's no doubt in the research that this (over-sexualization of girls) is problematic for boys, too. As they hit adolescence they are surrounded by girls presenting themselves as much more adult than they really are. And one of the messages they can pick up is that to be a winner you have to "score." That means learning very inappropriate behavior that has very long-term effects on how they relate to women.

Q: What's your advice for parents?

Lamb: Certainly a parent can just say, 'No, you're not wearing that costume out of the house.' There are some kids who just want that kind of structure from their parents, and there are others who are just going to rebel. And although it's too late to affect this Halloween, I think you want to have some conversations with your daughter about why some kids buy into sexy costumes, and let her know you value her qualities and talents that are based on who she is and not what she looks like.

Castaldo: Learn to sew! Seriously, though, I would love to see parents take back Halloween costumes. I know parents are incredibly over-burdened and creating a costume is just one more thing for them to do. But ideally, costumes should be something that you and your child come up with together. Also, if you have a 10- or 11-year-old girl, there's nothing wrong with shopping on the boys' side of the Halloween aisle if she wants to do something that's not a tiny skirt and fishnet stockings. And when tweens and teens show interest in sexy or skimpy costumes, don't demean that interest or fight over the issue. Aim to maintain a dialogue throughout the year that praises what they attempt and accomplish, not how they look.
Sunday, August 17, 2008

We've picked each of the girls' names for their meaning, and because we like the names, of course. Sophia means "wisdom." Another time I'll tell you about the other two gals' meanings. But we find that they each live up to their meanings in amazing ways. For example...

So, Sophie is 3 years old in this story. Not kidding. (You'll see why I'm not kidding in a moment.) She and Jossie and I are in the car and I'm feeling a bit melancholy about life and not really feeling like a deep conversation. But it didn't really matter what I wanted this moment, and God really yanked on my heart with this one.

Sophie says out of the clear blue, "Mommy, how do we hear God?"

Hmm?? My amazing response: "You should ask your Dad when you get home. He seems to hear God talk to him more than I do."

"Why?" (Of course "why," she's 3!)

My thoughts: I'm such a moron. Did I really just tell her to ask her dad on such a real and pertinent question??

So, I pipe up... "Well, it's not that we can't hear Him. Well... uh, we don't hear Him the way we hear each other... uh, though you can hear specific things from Him. Well, uh... you can hear Him through things like nature, and sunsets, and..."

She interrupts, "I think I know. Is it like, the more we worship Him and love Him and get to know Him, then the more He speaks to us and we can hear Him?"

Me: "Uh, yeah Sweetie, I think you answered your own question."

Then I drove like a happy but dumbfounded lady the rest of the way home. (This picture is from the day of this conversation. We were on our way back from a mother-daughter tea party, and I was pregnant... not just large.)
Q: Example of a nilpotent matrix which is not a nilpotent element of a Lie algebra

Let $\mathfrak{g}$ be a finite dimensional complex Lie algebra. Recall that an element $g \in \mathfrak{g}$ is called a nilpotent element if $\mathrm{ad}\,g: \mathfrak{g} \to \mathfrak{g}$ is a nilpotent endomorphism. Give an example of a linear Lie algebra with an element $g$ such that $g$ is nilpotent as a matrix but $\mathrm{ad}(g)$ is not nilpotent. I am not sure how we can construct this. Please help.

A: Such an example does not exist, because of the following lemma (from Humphreys' book).

Lemma: Let $V$ be a vector space and $x\in \mathfrak{gl}(V)$ be nilpotent, i.e., $x^n=0$ for some $n$. Then $ad(x)\in \mathfrak{gl}(\mathfrak{gl}(V))$ is nilpotent too.

The proof is easy: one considers the linear maps $L\colon \mathfrak{gl}(V)\rightarrow \mathfrak{gl}(V)$ given by $y\mapsto xy$ and $R\colon \mathfrak{gl}(V)\rightarrow \mathfrak{gl}(V)$ given by $y\mapsto yx$, and notes that $L$ and $R$ commute because of $(LR)(y)=xyx=(RL)(y)$. Since $x^n=0$ we have $L^n=R^n=0$, and thus
$$ ad(x)^{2n}=(L-R)^{2n}=\sum_{k=0}^{2n}\binom{2n}{k}L^{2n-k}(-R)^k=0. $$

Remark: The converse statement is not true in general. There are ad-nilpotent elements $x$, i.e., with $ad(x)$ nilpotent, where $x\in \mathfrak{gl}(V)$ is not nilpotent. Take the Lie algebra $\mathfrak{d}_n$ of diagonal matrices. This Lie algebra is abelian, so that $ad(d)=0$ is nilpotent for all $d$, but not all diagonal matrices are nilpotent.

A: To answer the question in the comment, namely: show that every Lie algebra over an algebraically closed field $K$ of characteristic zero, of positive finite dimension, has an ad-nilpotent element.

Let $V$ be such a Lie algebra. Choose any nonzero $x$. If $\mathrm{ad}(x)$ is nilpotent, we are done. Assume that is not the case. Decompose $V=\bigoplus_{t\in K}V_t$ into characteristic subspaces with respect to $\mathrm{ad}(x)$. Then this is a Lie algebra grading in the sense that $[V_t,V_u]\subset V_{t+u}$ for all $t,u\in K$.
If $t\neq 0$ and $y\in V_t$ then $\mathrm{ad}(y)$ is nilpotent: indeed $\mathrm{ad}(y)^n(V_u)\subset V_{u+nt}$, and for some $n$ (actually, for all large enough $n$), $W\cap (W+nt)$ is empty, where $W=\{u:V_u\neq 0\}$ (because $W$ is finite). So $\mathrm{ad}(y)^n=0$. The assumption that $\mathrm{ad}(x)$ is not nilpotent means that $V_t$ is nonzero for some $t\neq 0$. So nonzero elements in this $V_t$, which exist, are nonzero ad-nilpotent elements.
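The lemma is easy to check numerically. A small sketch (not from the original answers) using NumPy: under column-stacking vec(), $\mathrm{vec}(xy-yx)=(I\otimes x - x^{\mathsf T}\otimes I)\,\mathrm{vec}(y)$, so for $2\times 2$ inputs $ad(x)$ becomes an ordinary $4\times 4$ matrix and the bound $ad(x)^{2n}=0$ can be verified directly:

```python
import numpy as np

# x is nilpotent with x^2 = 0, so the lemma predicts ad(x)^4 = 0.
x = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Matrix of ad(x): y -> xy - yx under column-stacking vec().
ad_x = np.kron(np.eye(2), x) - np.kron(x.T, np.eye(2))

assert np.allclose(np.linalg.matrix_power(ad_x, 4), 0)  # ad(x)^{2n} = 0
assert np.abs(np.linalg.matrix_power(ad_x, 2)).sum() > 0  # but ad(x)^2 != 0

# The converse fails, as in the remark: d = diag(1, 2) is not nilpotent,
# yet diagonal matrices commute, so ad(d) vanishes on the algebra they span.
d, d2 = np.diag([1.0, 2.0]), np.diag([3.0, -1.0])
assert np.allclose(d @ d2 - d2 @ d, 0)
assert not np.allclose(np.linalg.matrix_power(d, 10), 0)
```

The choice $x=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ is the standard nilpotent $e\in\mathfrak{sl}_2$; here $ad(x)^3$ already vanishes, comfortably within the $2n=4$ bound from the proof.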
SHERLOCK GNOMES brings the fun home on Blu-ray Combo Pack and DVD June 12 from Paramount Home Media Distribution and Metro Goldwyn Mayer Pictures (MGM). Here is my review:

When Gnomeo (James McAvoy) and Juliet (Emily Blunt) discover their friends and family have gone missing, there's only one gnome to call: the legendary detective Sherlock Gnomes (Johnny Depp). Working together, the mystery takes them beyond the garden walls and across the city on an unforgettable journey to save the day and bring the gnomes home.

Review

The coolest thing about one of the worst animated films in some time (Gnomeo & Juliet) was the incredible Elton John-fuelled soundtrack. Everything else was hot garden soupy trash. I don't know how a sequel was greenlit, and I REALLY don't know how they managed to assemble such a roster of A-list talent, let alone pack the soundtrack with even more wonderful music, but they pulled it off by releasing Sherlock Gnomes.

Johnny Depp joins the shenanigans as Sherlock himself, who is investigating a series of garden gnome kidnappings around the city. Coinciding with this boring story are Gnomeo and Juliet, with a supporting cast of unfunny plaster objects who make even unfunnier jokes, leading to a conclusion you could see coming from a mile away. Hey Sherlock, turn the clue card upside down, you stupid ass. Jeeeezus....

I think the idea is cute, seeing these classic tales of adventure and romance brought to life by garden gnomes and re-enacted for a younger audience, but you have to have a script and it should probably be funny too. Just sayin...

I don't remember any jokes that didn't fall flat on their ceramic asses, but I watched this movie with my kids and they weren't laughing either. Not once. Through the entire movie. The only things that really stood out were the songs. There is but ONE highlight in Sherlock Gnomes, and that comes via Mary J. Blige with her jaw-dropping performance of 'Stronger Than I Ever Was'.
I loved it, and it got my three-year-old up and dancing. Once again, music is what makes the Gnomes franchise worth visiting (kinda). I just don't see anything else of substance here, considering the script was so bad that it made an 80-minute movie feel like two-plus hours of drudging, nonsensical sequences strung together.

If you're a sucker for great music then check out Sherlock Gnomes. Just don't go in expecting much else, even if you're a fan of Sherlock Holmes (the references to classic Sherlock tales were cute, but they weren't enough to save a dreadful slog of a movie).

Special Features

I have to give it up for the special features: they're a way to watch the best part of the movie really quickly, the Mary J. Blige musical number. The interviews were all neat as well, and I was shocked that they got all of those A-listers to sound like they enjoyed working on the movie. That in itself is a stunning accomplishment. You get just under an hour of bonus segments, which is more than solid for a Blu-ray.

Gnome is Where the Heart Is – Go behind-the-scenes with the all-star cast
All Roads Lead to Gnome: London Locations in Sherlock Gnomes
Miss Gnomer: Mary J. Blige and the Music of Sherlock Gnomes
Stronger Than I Ever Was – Enjoy the brand new music video performed by Mary J. Blige
Endpoints communicating through a packet-based network are conventionally under the control of a softswitch during communication. However, if the softswitch fails, the endpoint cannot initiate or be involved in any communication, including communication to emergency services.
Tabreed’s Annual General Assembly Approves 5 Fils per Share Dividend

Shareholders of National Central Cooling Company PJSC (“Tabreed”), the Abu Dhabi-based regional district cooling utility company, yesterday approved a cash dividend of five fils per share at the company’s Annual General Assembly (AGA). The AGA was chaired by Waleed Al Mokarrab Al Muhairi, Tabreed’s Chairman, and attended by Tabreed’s Board of Directors, shareholders, and the company’s senior leadership team. Tabreed’s approved dividend distribution of five fils per share represents a payout ratio of 53% and a yield of 4.6%.

Commenting on Tabreed’s performance in 2014, Al Muhairi said: “As a utility company, Tabreed distinguishes itself by providing sustainable and stable earnings year-on-year, a trend that we maintained in 2014 by returning a strong set of results that build upon the preceding years’ performance.

“2014 is the third consecutive year that we have distributed cash dividends, which underscores the company’s commitment to enhancing shareholder value and its healthy financial position.”

Addressing Tabreed’s shareholders, Jasim Husain Thabet, Chief Executive Officer, added: “Tabreed continued to benefit from its position as the only district cooling provider with regional operations as we made significant connections in Qatar and Saudi Arabia during the past year.”

“We also have a strong presence in our local market of the UAE, where in addition to operating 62 plants and an extensive network that stretches across the whole nation, we continue to partner with leading entities such as Aldar, Meraas Leisure and Entertainment, Roads and Transport Authority, and the UAE Armed Forces.
We are therefore well positioned to capitalize on the expected growth opportunities that will arise in the UAE over the coming years.” Shareholders also approved the Board of Directors’ Report, the Independent Auditors’ Report and the Financial Statements for the year ended 31 December 2014. Robust economic growth and increasing demand for district cooling across the region enabled Tabreed to reach several critical milestones in 2014, including an AED 1.05 billion acquisition of the existing district cooling plant on Al Maryah Island in Abu Dhabi in a consortium with Mubadala Infrastructure Partners. The company also signed a long term concession agreement with Meraas Leisure and Entertainment to provide 45,600 tons of cooling to the new Dubai Parks and Resorts development in Jebel Ali, and has renewed its master services agreement with the UAE Armed Forces in a contract valued at AED 6 billion. Today, Tabreed has 69 district cooling plants across the GCC and provides its services to many of the region’s critical projects including all the developments on Abu Dhabi’s Al Maryah Island, home to Cleveland Clinic and Galleria, in addition to all the developments on Yas Island such as Ferrari World, Yas Marina Circuit and Yas Mall, as well as other national and regional landmarks including Sheikh Zayed Grand Mosque, Dubai Metro, the Pearl – Qatar, and the Jabal Omar Development Project in the Holy City of Mecca.
The current military retirement system has been integral to sustaining the All Volunteer Force (AVF). Mounting federal budget challenges, however, have raised concern that the program may become fiscally unsustainable. While several restructuring proposals have emerged, none has considered the implications of these changes for the broader issue of manning an AVF. Changes to the existing system could create military personnel shortfalls, adversely affect servicemember and retiree wellbeing, and reduce public confidence in the Armed Forces. With the right analytical framework in place, however, a more holistic system restructuring is possible, one that avoids these negative effects while significantly reducing costs. A comprehensive framework is provided, as well as a proposal that stands to benefit both servicemembers in terms of value and the military in terms of overall cost savings.

U.S. light vehicle demand has recovered to near pre-recession levels; in fact, June's seasonally adjusted annual sales rate of almost 16 million vehicles was the highest since 2007. Assembly of light vehicles in the U.S. has also returned to near pre-recession levels. These figures show a remarkable turnaround from just four years ago, when the industry's major domestically owned firms were bankrupt. The U.S. auto industry has fundamental strengths in demand, quality, and innovation. However, other nations are not standing still, leading to a growing U.S. trade deficit in autos. In particular, Mexico accounts for 26% of the U.S. trade deficit in passenger vehicles and parts (33% if heavy trucks are included). Germany accounts for another 16% of our auto trade deficit, despite significantly higher wages than in the U.S. Both of these nations could be partners as well as rivals. In particular, having Mexico and the U.S. each specialize in what it does best could mean more total U.S. employment in the industry, by making the overall North American industry more competitive.
Policy could play a role in avoiding a race to the bottom, and instead promote growth that leads to higher wages and more innovation on both sides of the border.

Four years after the recession officially ended, the economic recovery remains a long way off in the view of many Americans. A new survey by the Pew Research Center, conducted July 17-21 among 1,480 adults, finds that 44% say it will be a long time before the nation's economy recovers. Smaller percentages say either the economy already is recovering (28%) or will recover soon (26%). These opinions are little changed from March. But last October, shortly before the presidential election, fewer Americans (36%) said it would be a long time before the economy recovers. [Note: contains copyrighted material].

While taxation of overseas profits of U.S. multinational corporations has made the headlines lately, U.S. citizens who work overseas also face special rules. Unlike most countries, the United States requires that its citizens pay tax on their worldwide income (with a credit for foreign taxes paid), even when they are residing elsewhere. But the United States also allows its citizens who reside abroad an exclusion for the first $97,600 of their foreign earned income and a special housing allowance. [Note: contains copyrighted material].

The study gives a compelling look at how today's families view higher education, manage higher education costs, and tap a variety of funding sources. This year's study finds that families are adjusting to a new post-recession reality to pay for college. [Note: contains copyrighted material].

The report seeks to increase understanding of the responsibility to protect (R2P), assess how the concept has worked in relevant cases, and identify concrete steps to bolster the will and capacity of U.S. decision makers to respond in a timely manner to threats of genocide, crimes against humanity, and other mass atrocities. [Note: contains copyrighted material].
Portions of all 50 states and the District of Columbia are vulnerable to earthquake hazards, although risks vary greatly across the country and within individual states. Seismic hazards are greatest in the western United States, particularly in California, Washington, Oregon, Alaska, and Hawaii. California has more citizens and infrastructure at risk than any other state because of the state's frequent seismic activity combined with its large population and developed infrastructure. The United States faces the possibility of large economic losses from earthquake-damaged buildings and infrastructure. The Federal Emergency Management Agency has estimated that earthquakes cost the United States, on average, over $5 billion per year.
[Cytokeratin expression in the pre- and postnatal ontogeny of the rat liver]. Expression of cytokeratins nos. 7 and 19 has been examined during pre- and postnatal ontogenesis in rat liver. Cells expressing cytokeratin no. 19 appeared around large incoming vessels in the liver at days 17-18 of gestation. From day 20 of gestation and during the first week of postnatal ontogenesis, cholangiocytes and periportal hepatocytes could be stained with antibodies against cytokeratin no. 19. The expression of cytokeratin no. 7 begins later than cytokeratin no. 19, and it is present only in cholangiocytes, throughout pre- and postnatal ontogenesis. Since pre- and postnatal hepatocytes are capable of expressing cytokeratin no. 19, we discuss possible use of this marker to study cytodifferentiation of epithelial cell lines in rat liver.
The BCCI has revealed its hand ahead of crucial negotiations on cricket's new financial model. The Indian board, unhappy with its projected share of $290 million in the model under consideration, has told ICC Full Members that it wants $570 million - the same revenue it would have received from the ICC under the original Big Three model.

Whether the BCCI gets that much remains to be seen. In earlier negotiations between ICC chairman Shashank Manohar and the BCCI's Committee of Administrators (CoA), Manohar had offered to pay an additional $100 million to the BCCI, taking the board's share to nearly $400 million. Until the ICC meetings began this week in Dubai, that figure was thought to have been close to final. But Amitabh Choudhury, the BCCI secretary and an N Srinivasan loyalist, has not always seen eye to eye with the CoA. And it is Choudhury who is representing the board in Dubai.

The BCCI's approval of the financial model is crucial to a host of governance changes being agreed upon, as the ICC strives to put a new constitution in place. But ESPNcricinfo understands the BCCI has also told the ICC Full Members that it wants to defer those governance changes until after June. Manohar, the key force behind the new constitution, has been in discussions with the BCCI's CoA since February to get the new constitution approved not only in principle but in reality.

A major hurdle is the BCCI's unhappiness with its new share of revenues, down by between $180-190 million to $285-290 million if the ICC generates $2.7 billion in revenue in the 2015-2023 rights cycle. It is understood that Manohar made an offer to the CoA, which has been supervising the BCCI since January 30, to increase the BCCI's share to approximately $400 million. On April 23 Manohar passed the details of the deal to Choudhury.
Choudhury and BCCI treasurer Anirudh Chaudhry, however, have had conversations with the other nine Full Members and said the BCCI wants its share to be $570 million, but that the rest of the Full Members' shares will not be reduced. In the BCCI's counter-offer, given that other shares remain the same while the Indian board gets roughly $280 million more, the extra money will come from removing the shares allotted to the two Associates - Ireland and Afghanistan. In Manohar's model, they were allotted $60 million each over eight years, pending approval of their Full Member status which is up for discussion at the ICC Board meeting on Wednesday and Thursday. The BCCI has proposed that Ireland and Afghanistan be inducted as Full Members from 2019. That leaves another $160 million to be found, which the BCCI believes can be available if the ICC's administrative costs are cut down by $100 million. The issue of the ICC's administrative costs has been a prickly one. In the financial model devised by Manohar's working group, the ICC's administrative costs were increased to $160 million and stayed flexible. The BCCI argued these costs - and the increase - were "arbitrary". Prospects for approval of the other governance changes now appear gloomy as well. The BCCI wants the discussion on governance be deferred to the ICC annual conference in June, and suggested a new working group be formed in the interim on which BCCI has a seat. This new working group, to be formed in June, would devise fresh resolutions on governance. The BCCI's proposal is at odds with the ICC's plan, which wants the constitution to be signed off formally in these April meetings before it is approved at the AGM in June. Though Manohar and the ICC are aware of the BCCI's counter offer, the BCCI's back-room negotiations would have surprised him. During meetings with both the CoA and Choudhury, Manohar said the settlement deal was solely to get the constitution approved this week without needing a vote. 
Manohar had met the CoA the day before he resigned as ICC chairman. While his resignation distracted everyone, he had been close to working out a deal with them on the financial model. The deal was jeopardised by his resignation, but as soon as he was persuaded to return, Manohar re-started negotiations and finalised the deal recently. According to an official privy to those meetings, the eventual sum the BCCI would get was in the $390-400 million range. "There is an approximate number which is a $100 million more than what the ICC has proposed," the official told ESPNcricinfo. "(In February) the ICC offered $289 million. Now he is willing it take up by a 100 more. Then you are home. You are within a striking distance of a deal."

Now it is down to how Choudhury plays his cards at the ICC Board meeting over the next two days. The CoA has shown it is keen on finding a middle ground, but Choudhury and a section of the BCCI insist on sticking to the Big Three model. The CoA met Choudhury before he travelled to Dubai and told him about its discussions with various boards in the past two months, including Cricket Australia, Cricket South Africa, the Bangladesh Cricket Board, Sri Lanka Cricket, Zimbabwe Cricket, the West Indies Cricket Board (WICB) and Imran Khawaja, the Associate representative who sits on the ICC Board and is also part of Manohar's working group. Choudhury is understood to have been receptive, but did not commit to anything. "The deal can be done," the official said. "The danger if you are jingoistic is you will not get what you are getting."
Product Summary Description Give your plants a home they'll love with the charming H. Potter Walden Table Top Terrarium. Clear glass walls and dark grey wrought iron frame add to its simple, classic charm. You'll also love its compact table top design that lets you display your plants anywhere in the house. Terrariums provide a unique opportunity to garden under glass during any season. Plant a rainforest, desert, or woodland arrangement to create your own force of nature. These small greenhouses are warmed by the sun and trap moisture inside to produce a prosperous miniature garden. The distinctive glass windowed container offers traditional styling and functionality that can be used as a protective showcase for any cherished favorites. About H. Potter ProductsOver the past nine years, H. Potter has continually enhanced all aspects of their business to fill the desires of their growing list of satisfied customers. With the entrance of 2006, they were able to offer over 100 impressive designs. Not only are they always striving to bring you products that are new, bold, and unique, but they also work hard to increase the overall quality of the items. They do this by incorporating heavier materials, stainless steel hardware, and dramatically expanding their copper container business. H. Potter artisans design many 100% hand-made pieces to fit effortlessly into your home or garden setting. Customer Q & A Enter your question, and one of our Customer Care experts will respond via email and also post the answer here. Ask a Question Email Address Please Enter Your Valid Email Your Question Please Enter Your Question Greenhouses 6:6:6:6:6:6:6:6:6:6 1:1:1:1:1:1:1:1:1:1 Sponsored Links H. Potter H. Potter Walden Table Top Terrarium 0 Give your plants a home they'll love with the charming H. Potter Walden Table Top Terrarium. Clear glass walls and dark grey wrought iron frame add to its simple, classic charm. 
Why I Resigned From The Mighty

I was in a car, on my way to a work retreat for my new job at The Mighty, a popular mental health and disability blog site. The driver, a woman from Fresno who blogs about her life with a vascular birthmark, explained that she didn’t care for driving in Los Angeles. She turned to another woman who sat in the passenger seat, a blogger who focuses on disability issues from the perspective of a Christian and a parent of disabled children. After talk of traffic was over, they resumed their conversation about the famous bloggers who had reached out to them. I tried to block the sound of their voices from my mind so that I could be fully engaged with the person sitting next to me. I sat in the backseat and was enjoying a nice conversation with another new co-worker who had flown from her home in New Orleans to participate in today’s activities. I’d perked up when she’d told me where she was from. Our conversation had taken off when I explained that I loved the city and that my family has some Creole ancestry. It was a beautiful California day, and I was looking forward to the work retreat at The Mighty. The driver maneuvered through Hollywood until she found parking in a cement garage. My new friend from New Orleans got into her wheelchair, and we all ascended a ramp that ended at a glass door. I could already hear loud music, so I pushed my earplugs deeper into my ear canals. When the door opened I was blasted by the sound. I grabbed my Peltor muffs from the strap of my purse and placed them on my head. The confusion and disorientation I experience in response to exposure to loud or prolonged sound descended. I realized I was chanting, “I’m in hell. I’m in hell. I’m in hell.” My new co-workers seemed to avoid my gaze. We navigated through hallways and corridors, and the music persisted in its relentless assault. When we entered the vast room in which the retreat was to be held, the CEO’s wife noted my earmuffs and steered me aside.
“I’m sorry about the music. We’re trying to get it turned off. You can stay over here in this quieter area,” she offered. Her look of concern seemed sincere, but I wondered how expecting me to withstand such an auditory onslaught was acceptable when it would not have been likewise okay to ask my wheelchair-using co-worker to attend the conference by ascending a flight of stairs. The apprehensions I’d initially had about joining the team returned to my mind. I’d allowed my original cautious disposition to be overtaken by optimism when I had accepted a position of contributing editor with The Mighty, but my hopes were about to be dashed. My caution about working with the company was twofold. First, I’m autistic, and The Mighty remains partnered with Autism Speaks, an organization that was at the forefront of promoting parent-centered propaganda about the perceived horrors associated with autism and autistic people. Autism Speaks persisted in this spirit until its board of directors voted to change its mission statement in September of 2016. But I believe in the possibility of both personal and organizational transformation, so I pushed that concern to the back of my mind. The second reason for my prudence rested in the ways the notion of mental illness is most often discussed on The Mighty. Hundreds of contributors to the mental illness section of the site embrace and support psychiatric drug use. I tried to look beyond that as well, especially when the editor-in-chief informed me that people were welcome to present anti-psychiatry perspectives too, as long as they were worded in a constructive manner. Since I had agreed to edit the contributions to the chronic illness portion of the site, I thought I would be able to keep my personal views on the autism and mental health components of the site at arm’s length.
I thought I could promote empathy and understanding for people living with chronic illnesses by promoting their stories, yet remain disengaged from the problematic elements present in the autism and mental illness categories. I remained in the quieter area until I was told the music had been extinguished. Within minutes of walking from the quiet area into the main conference room, I realized how misplaced my optimism had been. I removed my Peltor muffs and earplugs as the newly hired Chief Revenue Officer launched into her presentation. “If the CEO for Abilify was in the front row right now, he’d be salivating,” she declared. She had just explained her strategic plan for monetizing the site with pharmaceutical advertising. But the plan didn’t end there. The Mighty planned to give drug companies user data that would help focus the pharmaceutical manufacturers’ marketing efforts. After I heard this, I stood and left the room again. I had some soul searching to do. The staff at The Mighty are not greedy ogres at the exclusive beck and call of big pharma. They are real human beings who care deeply about changing perceptions about disability via the sharing of stories written primarily by disabled people and their families. They seek to promote ideas of inclusion and acceptance in the face of pervasive ‘othering’ and discrimination. The Mighty staff wants to grow a platform that helps people find comfort in the perspectives of those who came before them, at times when they find themselves on the receiving end of a life-changing diagnosis or in the throes of undiagnosed illness. The folks at The Mighty are legitimately good people who want to change the conversation about disability. They hope to make the world a better place for those of us who are roadblocked by larger society because of our differences. They want to monetize the operation in large part so that they can be in a position to pay contributors, who they readily acknowledge are the site’s raison d’être.
But despite the merits of The Mighty staff, my caution transformed to distrust when plans for an alliance with pharmaceutical makers were revealed as a core component of creating revenue. As soon as I returned home I submitted my resignation: It is with a feeling of deep disappointment that I offer my resignation. I will be very direct as to why. I was horrified to learn at the retreat that the company plans to monetize the site by pairing with pharmaceutical companies. Had I known in advance that was the chosen strategy to create revenue, I would have declined the position initially, rather than accept the resources you invested in me. Let me offer a brief synopsis of my personal journey through healthcare. After reading it, I think you will better understand my stance: In 2014, after I had been a patient of the mental health and psychiatric treatment communities for over 20 years, I suffered an iatrogenic brain injury. According to a neurologist, the injury was made possible by years of exposure to various psychiatric drugs, but specifically because of years of exposure to Abilify. Immediately following the injury, I lost my ability to read, write, and speak. I presented in the emergency room with symptoms similar to those seen in stroke patients. It has taken the intervening years for me to even partially recover these skills. I still cannot write legibly; I must type in order to make my written communication understood. And as you all saw at the retreat, I can still lose my speech if I am exposed to too much sound, in terms of both volume and duration. I did not experience that type of inability to speak prior to my injury. My previous autism-related challenges with spoken language had been of an entirely different character. The injury damaged my already compromised auditory system as well.
I’ve lived with Auditory Processing Disorder (APD) for my entire life, but it too went unrecognized by the mental health community, despite the fact that my inability to hear in environments with background noise was an enormous factor in many of the life stresses I sought help for. Rather than listen to my reports of these difficulties and try to uncover an underlying cause, psychiatrists threw drugs at me. My audiologist, the renowned Dr. Jack Katz, documented that my APD was profoundly exacerbated by the treatment modalities I underwent on the orders of psychiatrists. Over the lengthy course of all of my interactions with the mental health community, my autism, like my APD, remained overlooked, and was instead characterized as bipolar disorder, borderline personality disorder, major depressive disorder, general anxiety disorder, or PTSD — it seemed like half of the DSM was thrown at me. But partly because I’m outside of the stereotypical autism demographic — white and male — my autism remained unrecognized until after my brain injury exacerbated the most problematic aspects of the condition, such as my hyperacusis, misophonia, auditory processing disorder, and dysgraphia. After undergoing genetic testing, I learned in 2015 I have a CYP2D6 gene mutation that makes me a slow metabolizer of many medications, particularly certain categories of psych medications. A person who metabolizes a drug slowly cannot tolerate the same dosages as normal metabolizers, and is more prone to side effects. I had complained of these side effects to the mental health community for years, but my complaints were dismissed as symptoms of my ‘mental illness’. I also have an as-yet-undiagnosed autoimmune condition that permits the allergic-type reactions I have to drugs and other things my body perceives as toxic to detrimentally affect my brain. I am trying to work with neuroimmunologists to understand the mechanisms behind these events.
Since I stopped taking all psychiatric medications, my mood is phenomenal, despite the obstacles to tasks of daily living and problems with executive function I experience as a result of both my injury and my autism. I credit my meditation practice, the wisdom gained from my journey through the mental health system, and the hysterectomy I had to cure my premenstrual dysphoric disorder (a condition that I have observed seems to be common among autistic women), for my significantly improved mood. I am currently on Social Security Disability, but I have never stopped trying to re-enter the workforce. I hoped this was my chance. When I joined The Mighty, I thought I would be able to compartmentalize my views on psychiatry from the way mental health is discussed on the site. I have intentionally abstained from participating in any conversation on the topic, because I realize my perspective is at odds with the majority of the mental health perspectives presented in the forum. I thought I could peacefully co-exist with the difference of opinion. But I have to draw the line at being associated with a site that plans to actively promote psychiatric drugs and allow for data mining among registered users to this end. I believed in my heart that I could play an important role in promoting the writings of people who live with chronic illness. I hoped I could help expand empathy and understanding. So it is with great sadness that I depart from that role. I’m not only a member of the chronic illness community because of my autoimmune and genetically mediated intolerance to drugs and other substances, I’m a member of the autistic community as well. The autistic community has been on the wrong side of medicalization and medication for far too long. I cannot in good conscience lend my voice or my skills to a platform that associates with drug companies that have caused so much destruction to a community I care very deeply about.
Sincerely, Twilah Hiari

CEO Mike Porath graciously responded to my resignation: Twilah, I’m sorry to hear this, and I’m sorry that the day of the retreat was a challenging one for you too, but I appreciate you giving us the context of your personal experiences in explaining your decision. I don’t know if and how we can be helpful to you down the road, but please don’t hesitate to reach out if we can be. I’m sorry this didn’t work out, but I respect your decision and I speak for all of us when I say we truly wish you the best. I also wish The Mighty the best. It is my deepest hope that they can help people, but it is my greatest fear that they will open doors to more psychiatric injuries. Illness and the Analytical Mind: Twilah Hiari is a recovering patient with a B.A. in Philosophy. She explores how the siloed nature of Western medicine contributes to misdiagnosis, and how clinician biases regarding issues of gender, race, class, education, religion and disability promote a culture that dismisses the credibility of the patient’s perspective. She blogs about her experiences at http://www.athinkingpatient.com.

42 COMMENTS

You have taken a very principled position at great expense to your ability to work in your chosen field. It may not be immediately apparent now, but this kind of activism will cause people to question and reevaluate the role of Biological Psychiatry in every facet of the “mental health” industry. You did the right thing, Twilah. Irregular heart beats, heart attacks, seizures, and the list goes on and on. I’m sure there are many people who think they have no choice in the matter when it comes to psych-drugs, but they certainly do have a choice. Simply caving in to drug company corporate funding pressure is to ignore the high toll that people are paying in this regard, and the point is, people are paying a high toll. Should we turn a blind eye to that heavy price, then we contribute to it.
Rather than increasing the damage, in all good conscience, resignation seems the only responsible course of action you could have taken. I’ve been suspicious of that site for a while. Back in October 2015, The Mighty was promoting the #MedicatedAndMighty hashtag, and I wouldn’t be surprised if the higher-ups were planning to sell out to pharma from the beginning. After all, The Mighty is a private company beholden to its investors, and pharma is where you go if you want to chase the big money. @Bean, Absolutely! I remember #medicatedandmighty since I fell into some Twitter spats opposing it. I remember following links and finding that it appeared to be a small, new private company (wasn’t it originally crowd-funded?). Now, to me, ‘The Mighty’ looks very much like the baby born from a shotgun marriage between BigEntrepreneurship and the regressive left’s obsession with victim-identityhood. Post after post after post of, ‘I AM this disorder…’; ‘I HAVE that disease…’; ‘Poor me…’; and ‘I suffer…’, which is all music to the ears of pharma and prescribing MDs. Darkly funny. ‘Mighty about my victimhood status…and my meds are my badges of honour’ is what I hear. Gee my psychiatrist knows everything. Becuz of the magic meds he put me on I think gooder than ever. My life stinks, I can’t work, I don’t enjoy anything, don’t get along with anyone, and am too tired to pick up or wash my own dishes. And weigh 400 pounds But if it weren’t for my magic cocktail of 15 pills 3 times a day I wud be dead. My psychiatrist told me this and he’s real smart. He knows everything and never lies. Without my meds, he told me, my mood swings would go way out of control. I might go on mad shooting sprees. Without a gun. Cause my mental illness could turn my right index finger into a handgun. Really! And my head would explode from all those chemical chane reacshuns! Then the Law of Entrupy or somethin would take over. The universe wud end. And my dog Rover wud die! 
As a former NAMI member I am still haunted by the vacant eyes and emotionless faces of my fellow “consumers” shuffling in. We were usually treated like second-class members. The first-class were the family members. Occasionally we would whisper about how we felt. Although this was not the Ultimate Heresy: blasphemy against the safety and efficacy of the Blessed “Meds,” there were other forms of heresy. 1. Pointing out the double standard in the way the family members and “consumers” were treated was naughty. 2. Asking why our “education” lasted 6 weeks, while the class for family members lasted 12–ostensibly because of our lack of intelligence–was also dangerous. Finally out of that cult! They accuse us of being Scientologists. While patently false, I doubt the Scientology cult could be much more oppressive than NAMI. Amazing how depressed we all were. Apparently that was due to our inferior brains with chemical imbalances. Our life situations and constant experiences of abuse at the hands of those “helping” us played no role at all! (Sarcasm.) 😛 Yeah, I went to a NAMI national conference once expecting that after years of living behind a mask in my professional job I would be in a place where I could safely self-identify as a patient. Wrong! Once I put the “Consumer” ribbon on my name badge (along with the ribbons related to my NAMI volunteer activities), I found myself being shunted off to the side over and over. Most notably, during a lunch, I seated myself with some of the chapter leaders from my state … and they all left the table, guiding others with “Consumer” ribbons to their spots. I’m not 100% anti-meds, but I’ve lost a lot of years to the “side” effects my doctors thought were reasonable. Very glad to be back in a conservative treatment region where I take 3 pills a day instead of 15. What is dishonest is that no mention of the Big Pharma funding is to be found on the site… or maybe I didn’t look hard enough to find it. 
But sites need to be honest and list where they are getting their money from: To me, “themighty” looks just like more “progressive”, “liberal”, “bleeding-heart-liberal”, pseudo-compassionate, CORPORATE MARKETING. Psychiatry is a pseudoscience, a drug racket, and a means of social control. Look at “the mighty” – it’s just more PhRMA shills, pimping the DRUGS, and the LIES…. The DSM is a catalog of billing codes. ALL of the bogus “diagnoses” in it were INVENTED, not discovered. So-called “mental illnesses” are exactly as real as presents from Santa Claus, but not more real…. I’m really not trying to be too personal here, but you shoulda’ known better, Twilah. “Abilify” is made by Otsuka Pharm., which sells over $1BILLION a year of that DRUG, alone. And, they are already marketing drugs with embedded RFID chips, “to improve compliance”. Think about that. The quack shrinks give you a drug, and the internet can track whether you’ve taken it or not. Please, take a few days rest. Read Orwell’s “1984”, *AND* Huxley’s “Brave New World”…. Yes, that **IS** **WHAT’s** **HAPPENING**…. Great comment Bradford, as are all of the comments here. I did laugh at the point where you said I shoulda known better. Laughed and then had to log in to respond — I do think I knew better on some level, but being on the SSDI rolls will make a person do desperate things for a paycheck. I’m not sure which is worse, fighting off doctors who chased me with drugs that hurt my brain or dealing with the SSDI people. Doctors chased me and government employees run from me. At least that’s what I’m led to conclude based on the multiple emails I have sent to government employees about the Ticket to Work program that just seem to go into a big void, given the lack of response or helpfulness I get. Thank you for posting this article. I had not known about The Mighty, and I am glad that I had not. I am 100% opposed to Autism Speaks.
Like Peter Breggin wrote in Toxic Psychiatry, most of the large autism groups are simply defensive formations of the parents. And I do want to see the problematization of children who supposedly have “Autism”, and what goes on at the Judge Rotenberg Center in Massachusetts, and at the Koegel Center in California, lead to lawsuits, criminal charges, and prosecutions in the international court. And I also oppose the Autism Self Awareness Network, and anything to do with Neuro-Diversity, because they are acting like Autism is real. So they are hence exonerating perpetrators, by accepting something which amounts to original sin. So I congratulate you, Twilah, for extricating yourself from something negative. So I know this gets into areas of continuing sensitivity. But we have another article here, where there are links to a web site with discussion of ‘reparations’. This is the only way there will be progress, by holding perpetrators accountable. Without this, it is still just asking for pity. So here are my responses to this, and most of all I want to call your attention to my challenges about the idea of affixing disability labels to oneself. And I say this based on the understanding that the reason we have the mental health system, and so much attention to disability issues, is because those behind our economy want to advance the bogus sciences of eugenics and social Darwinism. We should be meeting them at the barricades, not doing their jobs for them. Everyone is unique, everyone has limitations, everyone has special needs. We have to take care of people and compensate for them, but we should minimize the need for labeling.
People who believe in Mental Health are ones who support the existence of the Mental Health System and the existence of Mental Illness or Disorders. Most people on MIA blogs fall into such a *category*. The only difference between NAMI and The Mighty vs MIA is in their approach towards *meds*. Sadly, like NAMI or The Mighty, MIA also supports the existence of the Mental Health System. And the anti-psychiatry movement has just too many Mental Health professionals among its ranks to be called *anti*-psychiatry. There is only one way forward: an anti-Mental Health movement. And start with a movement to Occupy NIMH in the first place. Sadly, I agree with this statement. There is no right action being taken against the fraud of the DSM at MIA that allows room for the medical model that is killing and disabling humanity. Considering an anti-NIMH movement: other than exposing it for what it is, ignore it at all costs like one would any other social ill, and concentrate on what must be done to move in the opposite direction. The anti-psychiatry movement, if congruent to its own truth and the massive body of facts already at our disposal, can, should and must challenge the legitimacy of using the DSM to diagnose people, as the junk science it is under Daubert in a court of law, repeatedly if necessary. The downstream approach of fishing dead and disabled victims out of the water is ineffective and unethical in the face of the evidence. Our collective failure to do so, despite the evidence, presents a wishy-washy, mixed message that allows the mainstream to continue to be dismissive of the anti-psych movement at large. The questions are: is there an anti-psych movement? If so, where is it, who are its leaders, and what is it actively doing to stop the fraud and carnage? NAMI et al will always be in search of a buck and whore itself out to pharma et al.
The only way to build a desperately needed, purely anti-psych movement is to connect true supporters and never allow the movement to be compromised by people with mainstream agendas. Deep respect for Twilah Hiari, who could have used the financial support but put her truth front and center, over and above that which would compromise her core beliefs. Judi, Quote: “The questions are: is there an anti-psych movement? If so where is it? Who are its leaders? And what is it actively doing to stop the fraud and carnage?” re: Those are very good and important questions. Oldhead said the other day at MIA (something like this): that the history of the anti-psychiatry movement has been so distorted… that these days it is not possible to find it anymore (the true story). Seems obvious that PHARMA/ APA… both have gained power since 1970. Whitaker explained that part in his books… how it was done. With a plan that took decades, and much $$$, paying psychiatrists, NAMI, articles, dinners, speeches… That part is known, and the profits of Pharma can be found. Very profitable. Now the “dark side” (not that we are evil, but people hate/fear us, the ones with a DSM diagnosis). I don’t pretend to know the history of the anti-psychiatry movement, but will point out a few guesses: a) Some leaders have died (and some did die early); b) some got old (or got discredited); c) some gave up (got tired of getting: no results/ no support/ the heavy burden); d) some got bought/ paid for their silence; e) some found a way to get a little money & status (but helping very few people)… kinda paid fakes (or cases of vanity/ don’t see their own incompetence/ lack of honesty). f) Some leaders are so naive/ passive/ and dumb that they qualify for: USEFUL IDIOTS. g) the anti-psychiatry movement got infiltrated (that weakened it from the inside). h) the anti-psychiatry movement lacked important things: “know-how”, “strategy”, and was too soft to make changes/adapt and never went “all the way”. …………………..
So, PHARMA/ APA, see a)b)c)d)e)f)g)h)… and they laugh the whole way to the bank. The above is not fact… just a tentative explanation. The results are known. Those who know better (oldhead, or others)… can post what failed (and when). Because failed… it did. AntiP, you might consider spending as much time deconstructing psychiatry as you are now spending deconstructing anti-psychiatry. If you did, something positive might conceivably come of it. Organized psychiatry has spent a great deal of time and energy on PR and public image. Some of its leading lights have targeted what they see as the anti-psychiatry movement for many of the social ills the world faces today. The APA has developed, in fact, its own hit squads for dealing with critics. Anti-psychiatry is blamed for the suicide rate, the high numbers of people with psych-labels in the criminal justice system, and for the growing numbers of impoverished homeless people in the world. You name it, anti-psychiatry must be behind it. I just don’t think there is any sense in contributing to this fiasco, which is, no, not the fiasco of anti-psychiatry, but rather the fiasco of psychiatry. Psychiatrists apparently are authoritarian bullies, unscrupulous in seeking out scapegoats for any mess they happen to have stirred up for themselves. Search YouTube for anti-psychiatry, and I’m sure you will find something there that may interest you. At the international level I think there is a legitimate sense in which you can speak of an anti-psychiatry movement. This movement however is at a great remove from the mental health movement, its parasitic followers, and the mental illness religion that guides and defines it. If it is making and unmaking itself at all times, well, as far as I’m concerned, there is hope in that.
May it help to inspire a more worthy resistance to the unrelenting psychiatric intrusions, interventions, and human rights violations that we have been subjected to since they first began segregating and incarcerating people for being different, and breaking out of the dull and dismal norm of constrained existence. Frank, I am new here, have other things to do, so haven’t been able to read at MIA all I wanted (and maybe that is not possible, as time goes… new things come every day that need solutions). … just wanted to note: a) I am new at MIA; b) I am from Europe; c) I own just 3 good books about SZ (all from Robert Whitaker): 1) Anatomy; 2) MIA; 3) Influence. And am lacking time to read all I like. d) I lack information, and historical data (I know that). But to solve a problem, it is useful to know the problem. And in participants’ posts at the blogs/ news/ forums at MIA… I see some “issues” daily. At this point I do not want to write about it (don’t want to get banned). I am new here. Here I learn things. I already get little cooperation (…about “things”…), as things are. No need to make it worse for myself. Whitaker told us what Pharma/APA/NAMI did (the general lines). For details we need to read at other places. Now… some things rely heavily on money. If we don’t have $$$, it won’t work. One ACTUAL problem is this: if we don’t learn from the errors of the past… the result will be:… nothing done that will last. And a lot of time/ energy/ hope wasted. ………………. And Frank and oldhead… you two know a lot more than I do. And don’t forget English is not my native language (I am aware any kid from the USA will write better than me). I think one thing you need to understand is that since MIA isn’t an anti-psychiatry website, and Robert Whitaker isn’t explicitly anti-psychiatry himself, you are going to get a lot of things on this site that have nothing to do with anti-psychiatry.
The amazing thing, I think you could say, despite this, is the number of things that do get on this website, due to the position of some of its users, which are very much in an anti-psychiatric vein. As for the movement’s history, I think that is always going to be something of a contentious matter. Many of the people most associated with the term would not have it used to describe themselves in the first place. There is always this matter of separating the wheat from the chaff, and the figureheads from their ideas. We must rely on these ah-ha moments. Given a few ah-ha moments, well, that’s when one can be said to “get it”, and that’s pretty much what we’re after. Why would folks from category F have wanted to leave the psychiatry movement in the first place? They would have made perfect shills, tools, and boot-licking poster children for Big Pharma at no risk at all. Twilah Hiari, You did the right thing. And you did it quickly. That speaks volumes about your intelligence, values, honesty and determination. You are needed. Some health professionals see similar things daily. Yet that does not bother them. Unless their favorite football team loses. Oh, a day to forget. Some health professionals take YEARS/ DECADES to reckon… a little, tiny error. The anti-psychiatry movement failed, I know this. And everyone on MIA also knows this! It’s time for Occupy NIMH and Occupy WHO movements! And time for an anti-mental health movement! Sadly, MIA considers NIMH or WHO as *friends* of crazy people. *Re-think* mental health, it sounds so. I don’t think any anti-psychiatry movement failed. We’re back at the myth of the phoenix, but I will reserve that story for another time. Anti-psychiatry is more likely to be reborn than it is to die out entirely. Psychiatry, after all, fervently believes in it. I agree that mental health agencies, organizations, programs, etc., are all part of the problem, however I equate anti-psychiatry with opposition to the mental health system in toto.
Occupying mental health organizations and agencies is not a suitable thing to be doing unless it brings about their downfall. The social control business, I would hope, is going to find the going rougher than ever before in the not too distant future, and that with a lot of help, undoubtedly, from enemies like this one. Does Psychiatry honestly believe in an anti-psychiatry movement? If not, they are utilizing the idea to create fervor in the hearts of the faithful. Caught snatches of Bipolarburble by mistake during a Google search. The brainless blogger was in a tizzy. Seemed to think there was a vast underground conspiracy of powerful anti-psychiatry bogeymen. They wanted to lead the MI lambs astray and take away her precious meds! Those poor, oppressed, vulnerable psychiatrists were in desperate need of her help! Lucky they had a super blogger like her to save the day. She wrote a series telling how she hated non-compliant folks because they were trying to make her crazy. All anti-psychiatry people were clearly evil and delusional and a threat to everything dear. She is in her right mind because of her med compliance and the fact that she believes EVERYTHING her psychiatrist says without question. Now she has to go on break to play with her lower lip! “Anti-psychiatry” is definitely a bogeyman to the psych establishment, and also an ideological tool with which to (in their view) smear even their most moderate critics (many of whom aren’t “anti-psychiatry” in the least). As to what they believe themselves, interesting question — certainly they help draw people to anti-psychiatry just by mentioning the idea. Which is one of the best demonstrations of the term’s political value. Success for anti-psychiatry would be no psychiatry. Psychiatry itself grew out of segregating and locking up mad men and women for being perceived as some kind of a threat to the rest of society. Psychiatry’s success is anti-psychiatry’s failure, and vice versa.
As long as there is a psychiatry, there will exist a need for an anti-psychiatry to oppose it. No psychiatric labels, no psychiatrists, no psychiatric prisons, no harmful practices, etc. Such would be success for anti-psychiatry. Sure, it may be a long hard climb, but we were there before there were any psychiatric prisons, too, and in that sense, it would be a return, a return to the time before forced psychiatric treatment (i.e. the time after the demise of forced psychiatric treatment). Let's see psychiatric prisons (mad houses, insane asylums, "mental hospitals") go the way of debtors' prisons, poor houses, and the institution of chattel slavery. They are where psychiatric prisons should be: in obsolescence. Psychiatry is pseudo-science, much like the pseudo-science employed in reinforcing superstitious beliefs, and it should be treated as such.

I agree with Frank here. With whatever due respect, you guys going on about the "antipsychiatry movement" really don't know what the hell you're talking about. Although this hopefully will be changing soon, there currently IS NO active anti-psychiatry movement other than in spirit, and there hasn't been for some time, so there's been nothing to "fail." The real anti-psychiatry movement predated Bob Whitaker by several decades and was well over by the time he even considered his research on psychiatry. MIA has never pretended to be anti-psychiatry, so it can't be criticized for not "representing." It's a good question as to whether the real movement failed or was defeated; there's a difference. In the sense that a concentrated propaganda and cooptation/bribery effort was conducted by the APA, NIMH and others, we were clearly out-funded and out-strategized and thus defeated. However, the willingness of "liberal" elements in the movement to collaborate with the enemy also must be taken into account, as this led directly to the current "peer" racket.

ALL TALK, NO ACTION! is a good slogan for the loud-mouthed writers and activists on MIA!
Indeed this can be changed! We need, and I will repeat this 1000 times if needed: an ANTI-MENTAL HEALTH MOVEMENT! Why do you have a *problem* with it, people here? Is it the very name, or what? Until we start it, we can't even talk about its mission or policy! MIA should be the proper *place* to start it, or not?

As a disabled former "mental patient" currently struggling with Iatrogenic Neurolepsis, I do hereby name: *BORUT* *RUDL* to be the Global BOSS of the WORLD ANTI-PSYCHIATRY MOVEMENT CLUB. What do I do *FIRST*, Boss? You tell me, you're the boss! I think you'll do a GREAT job, BORUT RUDL! And I'm serious. Please tell me what to do to destroy the pseudoscience drug racket and means of social control known as "psychiatry" and "mental health".

I correct myself; perhaps I worded my post badly in what should have been a new thread with respect to Twilah's act of activism. Bonnie Burstow, Tina Minkowitz and Chuck Ruby of ISEPP all recently put forth opportunities for people to participate in changing this broken system. The conversation remains important: what kind of change is needed, what is currently happening, where are people meeting, and is there currently an information hub where people can find this out in order to participate? It seems to be here, and other than that, pretty dispersed. Is MIA willing/able to be that space, or should another space be created?

Thank you, judi. You tell the truth. You don't need to be sorry. Activists here are keyboard *warriors*, and mere posts, not any serious movement, are all that represents them as such. Why haven't they protested before NIMH or WHO so far? All the time it's the same on MIA blogs: Psychiatry and BIG PHARMA, human rights violations… Really, I believe that they are here to re-direct attention away from NIMH and WHO. These two organizations we crazies should hate the most. Only the Mental Health crooks here won't agree with me!
If you could tell us about the protests you are personally orchestrating sometime, Borut, against whomever or whatever, I'm sure there are those among us who would be more than happy to join you, if at all possible, in that noble effort.

Include me in. While it will be 3 or 4 years before I recover enough to join physically, I'll be happy to do anything I can to provide moral support, internet PR for the demonstrations, and other forms of help behind the scenes.

Protests will happen only if a critical *mass* is reached. This means an Occupy Movement, like the one which sadly failed in some respects. But Occupy NIMH shall be crazy people's own *bidding*. It means normals aren't welcome! They make decisions about us in all matters, even about our life and death! We crazies all know what this means! You have a good point, Frank: Institutional Psychiatry = Slavery, or better, Mental enslavement is allowed in USA Mental Institutions, although the USA constitution strictly forbids slavery. And Europe has already learned the same measure of *democracy*. Meaning one *mistake* I made, and farewell my citizen *rights* in my country. Slavery in the Mental Health System is still legal around the world, and crazies have become Mental Slaves owned by the State or their country, not human beings. And race doesn't matter for Mental enslavement. DSM matters. And clinical insanity. Psychologists and Psychiatrists, your classifications aren't just pseudo-science and against biology and evolution, but they even *give* a *legal* *right* to states or countries to own people as Mental Slaves! We live in times when normals aren't doing anything good for this world anymore, so we can be masters of our own fate! Which was always in our hands, but many of you didn't realize this, and some sadly can't anymore! We shall see if any MIA writer with an M.D. or Ph.D. will have the *guts* to comment on my angry *message* here. I believe that the vanity of their professions, exposed above in my comment, will be the main reason for the *silence*.
When the members of the United Nations adopted the Agenda 2030 for Sustainable Development, they promised, among other things, to fight poverty and hunger worldwide, protect the climate and improve the health of all. They set up 17 Sustainable Development Goals (SDGs), among them SDG 3: 'Health for All at All Ages'. The most important instrument to achieve this is SDG 3.a: 'Strengthen the implementation of the WHO Framework Convention on Tobacco Control (FCTC)'. The FCTC is an international health treaty with 180 Parties; it is based on human rights and explicitly refers to the UN Convention on the Rights of the Child (UN CRC). *Unfairtobacco* emphasizes the links between SDGs, children's rights and tobacco control in a new brochure and offers recommendations aiming for a tobacco-free world.

How tobacco impedes sustainable development {#sec1.1}
===========================================

More than 17 million people work in tobacco cultivation worldwide, mainly in low- and middle-income countries with low labour standards, where more than 90% of the global tobacco harvest is produced. Smallholder farmers find it difficult to earn a living from tobacco cultivation (irreconcilable with SDGs 1 and 2)^[@cit0001]^ and need the help of their children as a contribution to their livelihood, even at the expense of their education (irreconcilable with SDGs 8.7 and 4). Dangerous chemicals are used intensively in the fields, and due to the lack of protective clothing, occupational accidents such as poisonings are widespread (irreconcilable with SDGs 3.9 and 8). In addition, nicotine is absorbed through the skin when workers come into contact with tobacco leaves, eventually causing acute nicotine poisoning, the so-called 'green tobacco sickness' (irreconcilable with SDG 8.8). Thus, the widespread use of child labour is particularly worrying^[@cit0002]^.
On top of that, tobacco cultivation damages the environment: tobacco depletes the soil of nutrients and, consequently, forests are cleared to develop new fertile fields as well as to obtain firewood for curing the green tobacco leaves. The curing process requires around 8 million tonnes of fuelwood globally every year (irreconcilable with SDGs 12.2, 13 and 15.2). Furthermore, the chemicals used in tobacco growing enter water bodies and adversely affect aquatic biodiversity (irreconcilable with SDGs 6.3 and 6.6)^[@cit0003]^. Approximately one billion people worldwide consume tobacco. Eight million people die from it every year and about 1.2 million die from exposure to secondhand smoke^[@cit0004]^. Tobacco is the leading preventable cause of premature death from non-communicable diseases (irreconcilable with SDG 3.4). Smoking prevalence is highest worldwide in population groups with low socioeconomic status, in low- and middle-income countries as well as in high-income countries (irreconcilable with SDGs 1.2 and 10.2)^[@cit0005]^. After tobacco consumption, tobacco waste, especially cigarette butts, also damages the environment because the toxicants contained in the butts leach out into soil and water (irreconcilable with SDGs 6.3, 6.6, 11.6 and 14.1).

How tobacco violates children's rights {#sec1.2}
======================================

Children and adolescents are particularly vulnerable to the effects of tobacco production and consumption. The widespread use of child labour, in connection with the living and working conditions in tobacco cultivation, specifically violates children's rights to health (UN CRC Art. 24), to an adequate standard of living (UN CRC Art. 27), to education (UN CRC Art. 28), to leisure (UN CRC Art. 31) and to protection from economic exploitation (UN CRC Art. 32).
Both the marketing of addictive and harmful tobacco products, which is specifically targeted at children and adolescents, and the lack of protection from secondhand smoke violate children's rights to life (UN CRC Art. 6), to information (UN CRC Art. 17), to health (UN CRC Art. 24) and to protection from narcotic drugs (UN CRC Art. 33). In 2013, the UN Committee on the Rights of the Child published its General Comment on the Right to Health and explicitly referred to the need to transpose the WHO Framework Convention on Tobacco Control into domestic law^[@cit0006]^. The entirety of children's rights leads to the conclusion: children have a right to a tobacco-free world. That means a world where tobacco consumption has been reduced to a meaningless level in the majority of countries and where the tobacco industry is highly regulated. Children have the right to be protected from the tobacco industry, i.e. not to be exploited in tobacco cultivation, to live in a smoke-free environment that protects them from secondhand smoke as well as from starting to smoke themselves, and to have access to smoking cessation support if they have become addicted to tobacco^[@cit0007]^. The state has an obligation to respect, protect and fulfil children's rights. The regulation of the tobacco industry is not a voluntary matter of companies, but a duty of the government. In all measures taken on the way to a tobacco-free world, the best interests of the child (UN CRC Art. 3) must be paramount and it must be ensured that children's views are considered (UN CRC Art. 12).

How a tobacco-free world can be created {#sec1.3}
=======================================

Aiming for a tobacco-free world, one can find the framework and guidelines for action in the WHO Framework Convention on Tobacco Control, the Agenda 2030 for Sustainable Development and the UN Convention on the Rights of the Child, which are complementary and mutually reinforcing.
The monitoring of implementation progress is embedded within the framework of these international instruments. The FCTC Secretariat of the WHO regularly evaluates the mandatory reports of the States Parties. In 2018, for example, measures to protect people from secondhand smoke in public places (FCTC Art. 8) have been implemented by 88% of the reporting states. A comprehensive ban on tobacco advertising (FCTC Art. 13) has only been implemented by 61% of the states, not including Germany, where *Unfairtobacco* is located. Support for alternative livelihoods for tobacco farmers (FCTC Art. 17) is the least implemented Article^[@cit0008]^. The monitoring of the sustainability agenda is voluntary for the states. Since 2016, Germany has been reporting on progress with different priorities. The measures for implementing the FCTC (SDG 3.a) are assessed by the government as sufficient solely on the basis of smoking prevalence, disregarding for example social inequalities in smoking or protection from secondhand smoke. Efforts to shape sustainable supply chains of German companies (SDGs 8 and 12) are focused on individual sectors, e.g. textiles and cocoa, and continue to be based on voluntary action^[@cit0009]^. The UN Convention on the Rights of the Child requires all States Parties to fulfil their reporting obligations. The German government sent its regular report to the UN Committee on the Rights of the Child in April 2019. In this report, the German government explains that smoking among youth aged 12--17 years has decreased since the turn of the millennium, but completely ignores the topics of exposure to secondhand smoke and cigarette advertising. At the same time, the responsibility of companies for their supply chains remains voluntary^[@cit0010]^. Alternative reports from civil society are expected in the first half of 2020. Together with members of the German Network on Children's Rights and Tobacco Control, *Unfairtobacco* will submit such a report. 
What our brochure offers {#sec1.4}
========================

Children's Rights and Tobacco Control assembles experts from different areas who deal with issues ranging from tobacco cultivation to tobacco use. They show the impact of smoking and secondhand smoke on children and discuss social inequalities in smoking among children as well as the legal situation when children are exposed to secondhand smoke at home. They analyse how the tobacco industry uses influencer marketing in social media. They describe conditions and consequences of child labour in tobacco growing and examine the tobacco industry's responsibility for human rights violations. The concluding chapter offers detailed recommendations for governments, businesses, civil society, and individuals. Furthermore, children themselves have their say. They share their views on working on tobacco plantations, being exposed to secondhand smoke at home or banning tobacco.

'I dig in the fields for many hours, the whole day, I never find time to rest. (...) If I explain \[to her stepmother, editor's note\] that I am tired, she does not listen. Instead, she gives me other work to do, I have to weed tobacco and water seedbeds for tobacco.' 16-year-old girl from Tanzania, working on her family's tobacco farm

'My mother and father always smoke. I always tell them to quit, but they don't listen.' Boy, 5th grade, from Germany, exposed to secondhand smoke at home

'If I were a politician, I would also forbid the sale of cigarettes and the cultivation of cigarettes.' Boy, 5th grade, from Germany, in a school workshop

The brochure can be ordered or downloaded at: <https://unfairtobacco.org/en/material/brochure-childrens-rights-and-tobacco-control/>

CONFLICTS OF INTEREST
=====================

The author has completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none was reported.
FUNDING
=======

This research was funded by Engagement Global on behalf of the German Federal Ministry for Economic Cooperation and Development; Berlin Senate Department for Economics, Energy and Businesses; Brot für die Welt using Church Development Service funds; Foundation Umverteilen; Foundation Oskar-Helene-Heim.

PROVENANCE AND PEER REVIEW
==========================

Not commissioned; externally peer reviewed.
Q: "Reset" Serializer for Adminhtml Product Widget Chooser

I have a custom module where, in the admin interface, the user can select products from a grid (rendered in a modal) to add to a list specific to the module's functionality. This part is working great! I am encountering a bit of a weird issue with how the serializer works, in that I can't seem to figure out how to "reset" it along with the grid. This is a very simple problem but hard to explain, so bear with me!

Steps to reproduce:

1. The user opens the Product Chooser modal window and selects a few products (for example, let's say product ids 8, 10 and 15) via the checkboxes in the grid.
2. The user clicks "Add Products" and the products are added to their list; the modal window closes. All good.
3. The user needs to add another set of products, so they re-open the modal window.

I programmatically reload the grid before the modal appears, and manually empty the serializer's hidden selected_products input before the modal window is rendered, like so:

var varienGrid = this.getVarienGridJsObject(),
    serializerInput = $('input[name="selected_products"]');

// logs "8&10&15", i.e. the previously selected product ids
console.info('before grid reload products', serializerInput.val());

// reload the grid with empty params so as not to re-select products
varienGrid.reloadParams = {};
varienGrid.reload();

// attempt to "reset" the serializer by nulling the input field
serializerInput.val(null);

// now this logs empty, as intended
console.info('after grid reload products', serializerInput.val());

Now the grid is reloaded for the user to select their new set of products to add to the module's product list. However, as soon as they select a single product in this freshly loaded grid, the serializer input field "magically" contains all of the previously selected product ids!
So if the user selected product id "20", the serializer's input would contain "8&10&15&20", i.e. the new selection plus all of the product ids selected before the grid was reloaded and the hidden input was emptied.

What I've Tried

Aside from the above "solution" that didn't end up working, I dug into the varienGrid JS object and found that in the setCheckboxChecked method a callback is invoked which, as far as I can tell, is what handles updating the selected_products input field with new values, as commenting this callback out results in the input field not getting updated. I thought the solution would be to override the getCheckboxCheckCallback method to provide my own method (e.g., MyJsObject.handleCheckboxChange) in my Grid class, which extends Mage_Adminhtml_Block_Catalog_Product_Widget_Chooser, but doing so still results in the hidden field being updated.

Any help regarding how to "reset" the Mage_Adminhtml_Block_Catalog_Product_Widget_Chooser serializer would be awesome. Thank you, and please let me know if I can clarify anything or provide any additional information!
A: Although this may not work for everyone, my solution was to overload the getCheckboxCheckCallback() method in my Grid class, which extends Mage_Adminhtml_Block_Catalog_Product_Widget_Chooser, like so:

public function getCheckboxCheckCallback()
{
    // The callback is invoked through MyModuleJsObject.productChooser,
    // so `this` inside the handler is already bound correctly.
    return "function(grid, element, checked){
        MyModuleJsObject.productChooser.handleCheckboxChange(element, checked);
    }";
}

Since all I need to do is keep track of which checkboxes are checked in the grid so they can be added to my module's product list when the user clicks the 'Add Products to List' button, the JS is simple:

handleCheckboxChange: function(element, checked){
    var productId = $(element).val(),
        idx = this.selectedProductIds.indexOf(productId);

    if (checked) {
        this.selectedProductIds.push(productId);
    } else if (idx !== -1) {
        // guard against indexOf() returning -1, which would
        // make splice(-1, 1) remove the last tracked id instead
        this.selectedProductIds.splice(idx, 1);
    }
}

Then, when the user clicks the 'Add Products to List' button, the handler simply POSTs the product ids in this.selectedProductIds to the action controller, which takes care of persisting the data. I feel like this is a fairly graceful and non-intrusive approach; I'm happy with it.
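Because the tracking array in this answer is the single source of truth for what gets POSTed, it is worth sanity-checking the add/remove logic in isolation. Below is a minimal, framework-free sketch of the same idea that can be run in Node; `createSelectionTracker` and `reset` are illustrative names, not part of Magento's API, and the handler takes the product id directly instead of reading it from a DOM element. Note the guard on removal: `Array.prototype.indexOf` returns -1 for an untracked id, and `splice(-1, 1)` would silently remove the *last* tracked id.

```javascript
// Minimal sketch of the checkbox-tracking approach, outside of
// Prototype/Magento, so the behaviour can be verified in isolation.
function createSelectionTracker() {
    return {
        selectedProductIds: [],

        // Mirrors handleCheckboxChange, but takes the product id directly.
        handleCheckboxChange: function (productId, checked) {
            var idx = this.selectedProductIds.indexOf(productId);
            if (checked) {
                // Avoid duplicates if the grid fires the callback twice.
                if (idx === -1) {
                    this.selectedProductIds.push(productId);
                }
            } else if (idx !== -1) {
                // Guard: without this check, splice(-1, 1) would
                // remove the last tracked id rather than a no-op.
                this.selectedProductIds.splice(idx, 1);
            }
            return this.selectedProductIds;
        },

        // Would be called after a successful POST, when the modal is
        // re-opened, so stale ids never leak into the next selection.
        reset: function () {
            this.selectedProductIds = [];
        }
    };
}
```

Calling `reset()` after the POST succeeds plays the same role as emptying the hidden serializer input in the question, but on state the module fully owns.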
Bhaluka Upazila

Bhaluka () is an upazila, or sub-district, of the Mymensingh District in Mymensingh Division, Bangladesh. Bhaluka is the first model thana of Bangladesh. It is one of the fastest-growing basic industrial areas of Bangladesh.

Notables

Khan Shaheb Abedullah Chowdhury was a notable zamindar of Bhaluka. Chowdhury's wife, Halimunnesa Chowdhurani, made notable contributions to the development of this upazila. Their son, Aftabuddin Chowdhury, was a member of the National Assembly for the 1965-1969 term from the Pakistan Muslim League, as well as a member of Parliament in independent Bangladesh from the same party. The Dhaka-Mymensingh highway was built during the Ayub Khan regime at the proposal of Aftabuddin Chowdhury. During the war of liberation, Afsar Uddin Ahmed (sub-sector commander of Sector 11) collected arms and ammunition and challenged the Pakistani army. The late Shah Ali Akbar was a valiant freedom fighter. He, along with Afsar Uddin, organized the Mukti Bahini (freedom fighters) at Bhaluka.

Geography

Bhaluka has a total area of 444.05 km2. It is bounded by Fulbaria and Trishal upazilas on the north, Sreepur Upazila on the south, Gaffargaon Upazila on the east, and Sakhipur and Ghatail upazilas on the west. The main rivers are the Sutia, Khiru, Lalti, and Bajua.

Demographics

According to the 2011 Bangladesh census, Bhaluka had a population of 430,320. Males constituted 50.46% of the population and females 49.54%. Muslims formed 95.45% of the population, Hindus 4.04%, Christians 0.34% and others 0.17%. Bhaluka had a literacy rate of 49.12% for the population 7 years and above. As of the 1991 Bangladesh census, Bhaluka had a population of 264,991 in 53,222 households. Males constituted 51.08% of the population, and females 48.92%. The upazila's population aged 18 and over was 137,860. Bhaluka had an average literacy rate of 41.10% (7+ years), against a national average literacy rate of 32.4%.
Administration

Bhaluka thana, now an upazila, was established in 1917. It is one of the oldest small business hubs in the area. It became an upazila in 1983. The upazila consists of 11 union parishads, 87 mouzas and 102 villages.

Education

According to Banglapedia, Bhaluka Pilot High School, founded in 1950 by Aftabuddin Chowdhury, is a notable secondary school. In addition, many schools and madrasahs were established by the Chowdhury family. They also gave the utmost importance to girls' education and established the Halimunnesa Chowdhurani Memorial Girls School. Batazor M.U. Dakhil Madrasah was founded by Alhaj Moslem Uddin Sarkar.

References

Category:Upazilas of Mymensingh District
Recorded and streamed live from Aether Club, Budapest, 22.12.2016. SoundCloud: https://soundcloud.com/rtsfm/daidai-l… Mixcloud: https://www.mixcloud.com/RTSfmBudapes… Facebook video: https://www.facebook.com/rtsfm/videos… DAIDAI is a far eastern sounding word, originally used for naming a type of citrus. Also the name
Wild waterfowl form the main reservoir of influenza A viruses, from which transmission occurs directly or indirectly to various secondary hosts, including humans ^[@R1]^. Direct avian-to-human transmission has been observed for viruses of subtypes A/H5N1, A/H7N2, A/H7N3, A/H7N7, A/H9N2, and A/H10N7 upon human exposure to poultry ^[@R2]-[@R7]^, but a lack of sustained human-to-human transmission has prevented these viruses from causing new pandemics. Recently, avian A/H7N9 viruses were transmitted to humans, causing severe respiratory disease and deaths in China ^[@R8]^. Since transmission via respiratory droplets and aerosols (hereafter referred to as "airborne" transmission) is the main route for efficient transmission between humans, it is important to gain insight into the airborne transmission of the A/H7N9 virus. Here we show that, although A/Anhui/1/2013 A/H7N9 virus harbours determinants associated with human adaptation and transmissibility between mammals, its airborne transmissibility in ferrets was limited, intermediate between that of typical human and avian influenza viruses. Multiple A/H7N9 virus genetic variants were transmitted. Upon ferret passage, variants with higher avian receptor binding, higher pH of fusion, and lower thermostability were selected, potentially resulting in reduced transmissibility. This A/H7N9 virus outbreak highlights the need for an increased understanding of the determinants of efficient airborne transmission of avian influenza viruses between mammals. At the end of March 2013, the World Health Organization was notified by the Chinese authorities of three human cases of infection with a novel avian-origin influenza A/H7N9 virus ^[@R9]^. All three cases developed bilateral pneumonia with progression to acute respiratory distress syndrome and death ^[@R10],[@R11]^. As of July 2013, 133 A/H7N9 laboratory-confirmed cases have been reported in ten different provinces of China, including 43 deaths ^[@R8]^.
This novel A/H7N9 virus emerged in humans after reassortment between viruses of poultry and wild bird origin. The hemagglutinin (HA) and neuraminidase (NA) genes are genetically related to H7 and N9 of viruses isolated from wild ducks while the other genes are closely related to A/H9N2 viruses circulating in poultry ^[@R12]^. It is most likely that the new A/H7N9 viruses have circulated undetected in domestic birds because of their low pathogenicity for poultry. A/H7N9 viruses were isolated from specimens at live poultry markets, pointing to domestic birds as a potential source of human infections ^[@R13]^. Although A/H7N9 viruses harbour genetic traits associated with human adaptation of avian viruses and increased transmission between mammals, such as the Q217L substitution in HA (H7 numbering, 226 in H3 numbering) conferring a human receptor preference ^[@R14]^ and the E627K substitution in PB2 ^[@R15]^, no sustained human-to-human transmission of A/H7N9 viruses has been reported to date. Apart from two confirmed cases that might have arisen from family clusters and for which human-to-human transmission cannot be ruled out, human cases of A/H7N9 infection were epidemiologically unrelated and identified in different parts of China ^[@R11]^. Gaining knowledge on the ability of animal viruses to transmit via the airborne route is crucial to be able to mitigate future pandemics. Recently, the airborne transmissibility of the human isolate A/Shanghai/2/2013 was evaluated in ferrets and in pigs and was found to be less robust than for the 2009 pandemic A/H1N1 (pH1N1) virus ^[@R16]^. Here, we assessed the airborne transmissibility of a different human isolate, A/Anhui/1/2013 (AN1) in the ferret model as described ^[@R17],[@R18]^. Both A/Shanghai/2/2013 and A/Anhui/1/2013 possess human receptor specificity and are of particular interest regarding transmission. 
Four donor ferrets were inoculated intranasally with 10^6^ 50% tissue culture infectious doses (TCID~50~) of AN1 virus isolate, and the following day four recipient ferrets were placed in adjacent cages, designed to prevent direct contact between animals but to allow transmission via the airborne route. Throat and nasal swabs were collected at 1, 3, 5 and 7 days post inoculation (dpi) from the donor ferrets and at 1, 3, 5, 7 and 9 days post exposure (dpe) from the recipient ferrets. Virus shedding from the donor ferrets was detected from 1 dpi onwards with infectious virus titers up to 10^6.5^ TCID~50~/ml ([Fig. 1](#F1){ref-type="fig"}, Panel A-D). AN1 virus was transmitted to 3 out of 4 recipient ferrets (F1, F2 and F3). Transmission was detected at 3 dpe for two ferrets (F2 and F3) and at 5 dpe for one ferret (F1), with infectious virus titers in respiratory swabs up to 10^6^ TCID~50~/ml. All three animals infected via the airborne route seroconverted by two weeks after exposure, while recipient ferret F4 did not seroconvert. Using the same experimental set-up and protocol, we previously tested the transmissibility of the human pH1N1 virus, seasonal human A/H1N1 virus, and avian influenza A/H5N1 virus. While A/H5N1 virus was not transmitted between ferrets via the airborne route, human influenza viruses were transmitted in all donor-recipient pairs ^[@R17],[@R18]^. Replication in donor ferrets inoculated with the AN1 virus and pandemic and seasonal A/H1N1 viruses was comparable, but virus shedding from recipient ferrets was less abundant and delayed for the AN1 virus as compared to the A/H1N1 viruses. Sanger sequencing was used to determine the consensus genome sequence of the three airborne-transmitted AN1 viruses isolated from recipient ferrets F1, F2 and F3 ([Table 1](#T1){ref-type="table"}) and substitutions were detected in all gene segments.
Several of these were already present in the inoculum, demonstrating the presence of a mixture of viruses in the AN1 virus isolate. Consultation with other laboratories revealed that heterogeneous virus mixtures were also present in the A/H7N9 virus isolates shipped to them. Viruses with different genotypes were transmitted ([Table 1](#T1){ref-type="table"}). Only two substitutions, N123D and N149D in HA, were consistently found in all three airborne-transmitted viruses. These substitutions are not part of potential N-linked glycosylation sites. Virus recovered from the recipient ferret F1 at 7 dpe, which contained the lowest number of substitutions compared to the AN1 virus isolate and had a high virus load, was used to inoculate four additional ferrets. One day later, these animals were paired with 4 recipient ferrets in transmission cages ([Fig. 1](#F1){ref-type="fig"}, Panel E-H), of which one became infected (F5). None of the donor ferrets developed clinical signs upon intranasal inoculation. Two recipient ferrets were also without clinical signs, but one recipient ferret (F2) showed loss of appetite, ruffled fur, lethargy and breathing difficulties from 7 dpe onwards. Recipient ferret F5 became moribund and was euthanized at 8 dpe. Infectious virus titers up to 10^4.9^ TCID~50~/g in the lungs and 10^5.8^ TCID~50~/g in nasal turbinates were detected in ferret F5. Based on pathological analysis, this animal was suffering from a moderate multifocal suppurative rhinitis with associated virus antigen expression in the nasal respiratory and olfactory epithelium ([Supplementary Table 4](#SD2){ref-type="supplementary-material"}). No lesions or virus antigen expression were seen in the other parts of the respiratory tract, liver or brain that could explain the lethargic state of the ferret.
Consensus virus genome sequences as determined by Sanger sequencing of the airborne transmitted viruses recovered from recipient ferrets F1 and F5 were identical ([Table 1](#T1){ref-type="table"}). Substitutions N123D and N149D in HA and M523I in the basic polymerase 1 (PB1) were found consistently in two subsequent transmission experiments. Since mixed populations were detected in the inoculum using Sanger sequencing, next-generation sequencing was performed with respiratory samples of the donor/recipient pairs for which airborne transmission was observed. The entire HA gene and the PB1 part that contained the M523I substitution (nt positions 1126-1616) were sequenced using a 454 sequencing platform (Roche). None of the airborne-transmitted viruses possessed consensus genome sequences identical to that of the AN1 virus isolate ([Table 2](#T2){ref-type="table"}). Interestingly, an L217Q substitution, conferring a receptor switch from human to avian specificity (α2.6 to α2.3 linked sialic acids respectively), was detected in donor ferrets F1, F2, and F3 in 9.1%, 15.7%, and 24.5% of the total number of reads, respectively, but this substitution was not detected in any of the viruses isolated from the ferrets upon airborne exposure. By setting the detection threshold at 1%, we confirmed that the number of variable nucleotide positions in the genome of the AN1 virus isolate was high, and that a rapid gain in clonality occurred after two transmission experiments ([Table 2](#T2){ref-type="table"} and [Supplementary Table 1](#SD2){ref-type="supplementary-material"}). The double substitution N123D+N149D in HA appeared to be selected in most airborne-transmitted viruses and constituted the main viral population after two subsequent transmission experiments ([Supplementary Table 2](#SD2){ref-type="supplementary-material"}). 
Although there appeared to be selection of N123D and N149D, it was not possible to determine whether the selection occurred at the level of individual or double substitutions. However, the rapid selection of substitutions in the HA and PB1 genes and the gain in clonality did not change transmission substantially enough to be detectable with the current group size of 4 ferrets. Residues N123 and N149 are adjacent to the receptor binding site but do not interact directly with α2.3 and α2.6 linked sialic acids ([Fig. 2](#F2){ref-type="fig"}). Recombinant viruses were generated based on seven gene segments of A/Puerto Rico/8/1934 (PR/8) with the wildtype AN1 HA and the AN1 HA with amino acid substitutions N123D (AN1~N123D~), N149D (AN1~N149D~) and N123D+N149D (AN1~N123D,N149D~). Binding to α2.3 linked sialic acids of AN1~N123D~, AN1~N149D~ and AN1~N123D,N149D~ viruses as assessed using a modified turkey red blood cell (TRBC) assay ^[@R19]^ was increased slightly by 2 to 4-fold compared to AN1 virus ([Supplementary Table 3](#SD2){ref-type="supplementary-material"}). It was recently hypothesized that stability of HA in an acidic environment - such as mammalian nasal mucosa - is a determinant for airborne transmissibility of influenza viruses ^[@R20]^. It has also been noted that viruses that fuse at low pH have higher thermostability than those fusing at higher pH ^[@R21]^. We observed that fusion for AN1 HA occurred at pH 5.6, a higher pH than reported previously for human viruses ^[@R22]^. Neither the single N123D and N149D substitutions nor the double N123D+N149D substitution reduced the threshold pH for HA-mediated membrane fusion compared to the wildtype AN1 HA ([Supplementary Fig. 1](#SD2){ref-type="supplementary-material"}). Both single substitutions N123D and N149D and the double substitution N123D+N149D decreased the temperature stability compared to AN1 ([Supplementary Fig. 2](#SD2){ref-type="supplementary-material"}). 
We also assessed the effect of the M523I substitution on the polymerase complex activity in mammalian cells using a minigenome assay at 33°C, the temperature of the mammalian upper respiratory tract, and at 37°C, as previously described ^[@R23]^. No differences in polymerase activity were observed for the polymerase complex with or without the M523I substitution in PB1 ([Supplementary Fig. 3](#SD2){ref-type="supplementary-material"}). Here we report that airborne transmission of AN1, as for A/Shanghai/2/2013, can occur between ferrets. Keeping in mind that quantifying transmission in our current experimental model is difficult, and that any transmission may not be directly extrapolated to transmission between humans, these data suggest that AN1 transmission is more efficient than for other avian influenza viruses, which are not airborne transmitted in ferrets, but less robust (fewer animals becoming infected, and less and delayed virus shedding) compared to seasonal and pandemic A/H1N1 virus transmission ^[@R17]^. Despite efficient virus replication in ferrets ([Fig. 1](#F1){ref-type="fig"}) and attachment to α2.6 linked sialic acids ([Supplementary Table 3](#SD2){ref-type="supplementary-material"}), we speculate that the residual binding to α2.3 linked sialic acids ([Supplementary Table 3](#SD2){ref-type="supplementary-material"}), the fusion occurring at a relatively high pH ([Supplementary Fig. 1](#SD2){ref-type="supplementary-material"}), and instability of HA ([Supplementary Fig. 2](#SD2){ref-type="supplementary-material"}) may be responsible, at least in part, for the limited transmission of the AN1 A/H7N9 virus.
Contrary to what was observed for A/H5N1 virus ^[@R18],[@R20]^, the substitutions selected upon ferret passage and transmission of A/H7N9 virus, N123D and N149D, increased binding to α2.3 linked sialic acids, increased the pH threshold for membrane fusion, and decreased the thermostability of HA, thus not contributing to increased virus transmission. Influenza viruses carrying human adaptation markers can arise in poultry. This appears to be the case for the newly emerged A/H7N9 viruses, but was also observed during the 2003 A/H7N7 outbreak in the Netherlands ^[@R24]^ and for A/H5N1 viruses currently circulating in poultry ^[@R25]^. Fortunately, additional changes, e.g. to further tune receptor preference, lower the pH for HA fusion, and increase HA stability, may be needed for the A/H7N9 viruses to transmit efficiently in mammals ^[@R18],[@R20]^. Increased understanding of the mechanisms and molecular determinants that facilitate crossing of the species barrier and airborne transmission of avian influenza viruses between mammals is urgently needed.

METHODS {#S4}
=======

Biocontainment {#S5}
--------------

All experiments were conducted within the enhanced animal biosafety level 3 (ABSL3+) facility of Erasmus MC. The ABSL3+ facility consists of a negatively pressurized (-30Pa) laboratory in which all in vivo and in vitro experimental work is carried out in class 3 isolators or class 3 biosafety cabinets, which are also negatively pressurized (\< -200Pa). Although the laboratory is considered 'clean' because all experiments are conducted in closed class 3 cabinets and isolators, special personal protective equipment, including laboratory suits, gloves and FFP3 facemasks, is used. Air released from the class 3 units is filtered by High Efficiency Particulate Air (HEPA) filters and then leaves via the facility ventilation system, again via HEPA filters. Only authorized personnel who have received the appropriate training can access the ABSL3+ facility.
For animal handling in the facilities, personnel always work in pairs. The facility is secured by procedures recognized as appropriate by the institutional biosafety officers and facility management at Erasmus MC and by Dutch and United States government inspectors. Antiviral drugs (oseltamivir and zanamivir) are directly available ^[@R18]^.

Viruses {#S6}
-------

Influenza virus A/Anhui/1/2013 (AN1) was isolated from a human case of infection and passaged three times in embryonated chicken eggs and once in Madin-Darby Canine Kidney (MDCK) cells. The virus was kindly provided by the Chinese CDC via the WHO collaborating centre in the UK in the context of the WHO PIP framework. A synthetic construct of the AN1 HA gene segment was kindly provided by Dr Richard Webby. The PB2, PB1, PA and NP gene segments were amplified by reverse transcription polymerase chain reaction (RT-PCR) from the AN1 virus isolate and cloned in a modified version of the bidirectional reverse genetics plasmid pHW2000 ^[@R26],[@R27]^. In addition, the HA gene segment was cloned in the pCAGGs expression plasmid. Mutations of interest (M523I in PB1, and N123D and N149D in HA) were introduced in the reverse genetics and pCAGGs vectors using the QuikChange multi-site-directed mutagenesis kit (Stratagene, Leusden, the Netherlands) according to the instructions of the manufacturer. Recombinant viruses containing 7 gene segments of A/Puerto Rico/8/1934 and wildtype AN1 HA or AN1 HA containing the mutations N123D and N149D were produced upon transfection of 293T cells. Virus stocks were propagated and titrated in MDCK cells as described previously ^[@R26]^.
Cells {#S7}
-----

MDCK cells were cultured in Eagle's minimal essential medium (EMEM, Lonza Benelux BV, Breda, the Netherlands) supplemented with 10% fetal bovine serum (FBS), 100U/ml penicillin (Lonza), 100U/ml streptomycin (Lonza), 2mM glutamine (Lonza), 1.5mg/ml sodium bicarbonate (Lonza), 10mM Hepes (Lonza), and non-essential amino acids (MP Biomedicals Europe, Illkirch, France). 293T cells were cultured in Dulbecco's modified Eagle's medium (DMEM, Lonza) supplemented with 10% FBS, 100U/ml penicillin, 100U/ml streptomycin, 2mM glutamine, 1mM sodium pyruvate (Gibco), and non-essential amino acids. Vero cells were cultured in Iscove's modified Dulbecco's medium + L-glutamine (IMDM, Lonza) supplemented with 10% FBS, 100IU/ml penicillin, 100μg/ml streptomycin, and 2mM glutamine.

Virus titration in MDCK cells {#S8}
-----------------------------

Virus titrations were performed as described previously ^[@R17]^. Briefly, MDCK cells were inoculated with tenfold serial dilutions of virus stocks, nose swabs, throat swabs and homogenized tissue samples. Cells were washed with PBS one hour after inoculation and cultured in 200μl of infection medium, consisting of EMEM supplemented with 100U/ml penicillin, 100U/ml streptomycin, 2mM glutamine, 1.5mg/ml sodium bicarbonate, 10mM Hepes, non-essential amino acids, and 20μg/ml trypsin (Lonza). Three days after inoculation, supernatants of cell cultures were tested for hemagglutinating activity using turkey erythrocytes as an indicator of virus replication in the cells. Infectious virus titers were calculated from four replicates each of the homogenized tissue samples, nose swabs, and throat swabs, and from ten replicates of the virus stocks, by the method of Reed and Muench ^[@R28]^.

Ferret models {#S9}
-------------

An independent animal experimentation ethical review committee approved all animal studies.
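The 50% endpoint calculation by the method of Reed and Muench referenced in the titration section above is a short cumulative-proportion computation. A minimal sketch in Python; the function name and the example well counts below are illustrative, not data from this study:

```python
def reed_muench_log10_tcid50(infected, total,
                             first_log10_dilution=-1.0, log10_step=-1.0):
    """Estimate the log10 TCID50 per inoculation volume by Reed & Muench.

    infected[i]: wells positive for hemagglutinating activity at the i-th
                 tenfold dilution; total[i]: replicate wells at that dilution
                 (e.g. four for swab and tissue samples, ten for virus stocks).
    """
    uninfected = [t - x for x, t in zip(infected, total)]
    n = len(infected)
    # Reed & Muench pool observations across dilutions: infected wells are
    # accumulated from the highest dilution upward, uninfected wells from
    # the lowest dilution downward.
    cum_inf = [sum(infected[i:]) for i in range(n)]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(n)]
    pct = [ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 0.5 > pct[i + 1]:
            # proportional distance of the 50% endpoint between the two
            # dilutions that bracket it
            pd = (pct[i] - 0.5) / (pct[i] - pct[i + 1])
            log10_endpoint = first_log10_dilution + (i + pd) * log10_step
            return -log10_endpoint
    raise ValueError("50% endpoint is not bracketed by the tested dilutions")
```

Applied to four replicates per tenfold dilution, this yields titers in the same log~10~ TCID~50~ units reported for the swab and tissue samples above.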
The Animal Experiments Committee (Dier Experimenten Commissie, DEC) judges the ethical aspects of projects in which animals are involved. Research projects or educational projects involving laboratory animals can only be executed if they are approved by the DEC. The DEC considers the application, pays careful attention to the effects of the intervention on the animal and its discomfort, and weighs these against the social and scientific benefit to humans or animals. The researcher is required to keep the effects of the intervention to a minimum, based on the three Rs (Refinement, Replacement, Reduction). All experiments with ferrets were performed under animal biosafety level 3+ conditions in class 3 isolator cages. Airborne transmission experiments were performed as described previously ^[@R17],[@R18]^. In short, 4 female adult ferrets (1 to 2 years of age), seronegative for influenza virus and Aleutian Disease Virus, were inoculated intranasally with 10^6^ TCID~50~ of virus by applying 250μL of virus suspension to each nostril. The sample size of 4 is based on earlier calculations for this type of experiment ^[@R29]^. Each donor ferret was then placed in a transmission cage. One day after inoculation, one naïve recipient ferret was placed opposite each donor ferret. Each transmission pair was housed in a separate transmission cage designed to prevent direct contact but allowing airflow from the donor to the recipient ferret. Nose and throat swabs were collected on 1, 3, 5, and 7 dpi for donor ferrets and on 1, 3, 5, 7 and 9 dpe for the recipient ferrets. Virus titers in swabs were determined by end-point titration in MDCK cells. A nose swab sample of recipient ferret F1 at 7 dpe was used for the second transmission experiment, with a final dose of approximately 10^4^ TCID~50~ for each ferret. All animals were monitored daily for clinical signs. Necropsy was performed on one moribund ferret that, for ethical reasons, was euthanized before the end of the experiment.
Nasal turbinates, trachea, lungs, brain, and liver were collected and homogenized in 3 ml of virus transport medium, after which the supernatant was collected and stored at -80°C. Virus titers in the supernatant were determined by end-point titration in MDCK cells. Duplicate samples of these tissues were fixed in 10% neutral-buffered formalin for pathological analysis.

Serology {#S10}
--------

The exposure of recipient ferrets to AN1 viruses was confirmed by a hemagglutination inhibition assay using standard procedures ^[@R30]^. Briefly, blood of the recipient ferrets was collected 12-14 dpe. Antisera were pre-treated overnight with receptor destroying enzyme (Vibrio cholerae neuraminidase) at 37°C and incubated at 56°C for 1h the next day. Twofold serial dilutions of the antisera, starting at a 1:20 dilution, were mixed with 25μl of a virus stock containing 4 hemagglutinating units and incubated at 37°C for 30 minutes. Subsequently, 25μl of 1% turkey erythrocytes was added and the mixture was incubated at 4°C for 1h. The hemagglutination inhibition titer was expressed as the reciprocal value of the highest dilution of the serum that completely inhibited agglutination of virus and erythrocytes.

Sequencing {#S11}
----------

Viral RNA was extracted from respiratory swab samples collected from the ferrets that were infected via the airborne route, and from the virus inoculum, using the High Pure RNA Isolation Kit (Roche). All eight gene segments of the influenza viruses were amplified by RT-PCR using 8 primer sets that specifically amplify each gene segment and together cover the full viral genome ^[@R31]^, and sequenced using a BigDye Terminator v3.1 Cycle sequencing kit (Applied Biosystems, Nieuwerkerk a/d IJssel, the Netherlands) and a 3130XL genetic analyzer (Applied Biosystems), according to the instructions of the manufacturer.
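Returning to the serology read-out above: the hemagglutination inhibition titer is simply the reciprocal of the last dilution in the twofold series (starting at 1:20) that still shows complete inhibition. A minimal sketch; the function name is illustrative:

```python
def hi_titer(inhibition, start_dilution=20):
    """inhibition: booleans for complete inhibition of agglutination at
    successive twofold serum dilutions (1:20, 1:40, 1:80, ...).
    Returns the reciprocal HI titer, or None if even the starting
    dilution fails to inhibit."""
    titer = None
    for step, inhibited in enumerate(inhibition):
        if not inhibited:
            break  # the titer is read up to the first failing dilution
        titer = start_dilution * 2 ** step
    return titer
```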
The consensus sequence was determined for viruses isolated from the following samples: the virus inoculum obtained after three egg passages and one MDCK passage, recipient F1: nose swab 7 dpe, recipient F2: throat swab 5 dpe, recipient F3: nose swab 5 dpe, and recipient F5: nose swab 5 dpe. Primer sequences are available upon request. Sequences were compared to reference sequences obtained from the GISAID EpiFlu™ database (accession numbers EPI439503 through EPI439510). Viral RNA was extracted from the virus inoculum and respiratory swabs of ferrets using the High Pure RNA Isolation Kit (Roche). RNA was subjected to RT-PCR using 5 primer sets (for HA: set 1, AGCAAAAGCAGGGGATACAA and GTATGACTTAGTCATCTGCGG; set 2, GGCGGAATTGACAAGGAAGC and CCACTATGATAGCAATCTCCTTCAC; set 3, GTGACTTTCAGTTTCAATGGGGC and GATTCTCCATTGCTACCAAGAGTTC; set 4, CTAACCAACAATTTGAGTTAATAGAC and AGTAGAAACAAGGGTGTTTT; for PB1: CAGCGGAAATGCTCGCAAAT and TTGAGCTGTTGCTGGTCCAA) that amplify the region containing the PB1 M523I mutation and the complete HA gene segment. These fragments, approximately 500-600 nucleotides in length, were sequenced using the Roche 454 GS Junior sequencing platform. The fragment library was created for each sample according to the manufacturer's protocol without DNA fragmentation (GS FLX Titanium Rapid Library Preparation, Roche). The emulsion PCR (Amplification Method Lib-L) and GS Junior sequencing run were performed according to the instructions of the manufacturer (Roche). Sequence reads from the GS Junior sequencing data were sorted by bar code and aligned to the reference sequence of A/Anhui/1/2013 using CLC Genomics software 6.0.2. Primer sequences were trimmed from the 3' and 5' ends of the sequence reads. For quality control, sequence reads were trimmed at the 3' end for Phred scores less than 30. The threshold for the detection of single nucleotide polymorphisms was manually set at 1% and 5%.
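The single-nucleotide-polymorphism thresholds described above (a variant is reported when it reaches 1% or 5% of the reads covering a position) amount to a per-position frequency filter over the read alignment. A minimal sketch, assuming per-position base counts have already been extracted from the alignment (the study itself used CLC Genomics 6.0.2 for this step):

```python
from collections import Counter

def call_variants(base_counts, reference, threshold=0.01):
    """base_counts: {position: Counter of bases observed at that position}
    reference:   {position: reference base}
    Returns (position, base, percent, depth) for every non-reference base
    present at or above the frequency threshold."""
    variants = []
    for pos in sorted(base_counts):
        counts = base_counts[pos]
        depth = sum(counts.values())  # read depth at this position
        for base, n in counts.items():
            if base != reference[pos] and n / depth >= threshold:
                variants.append((pos, base, 100.0 * n / depth, depth))
    return variants
```

Raising `threshold` to 0.05 reproduces the stricter 5% cut-off; the read depth matters because a 1% call at low coverage rests on only a handful of reads.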
Model Generation {#S12}
----------------

A model of the structure of the HA of AN1 was built using MODELLER ^[@R32]^ based upon the crystal structure of the HA of H7N3 virus A/turkey/Italy/02 (PDB code 1TI8). The N123D and N149D mutations were introduced into the structure using the program Andante ^[@R33]^. Three-sugar glycans NeuAcα2,6Galβ1-4GlcNAc and NeuAcα2,3Galβ1-3GlcNAc were docked into the binding site of the AN1 HA structure and that of the AN1~N123D,N149D~ HA structure. Several strategies were then used to explore the docking of the glycan within the binding pocket. Alternative glycan conformations were produced by altering the phi angle of the glycosidic bond between the second and third sugars, and alternative side chain conformations of amino acids within the binding pocket were explored using a rotamer search ^[@R33]^. Lower energy structures were produced iteratively, until no further energy changes were seen. All simulations were performed using the University of Cambridge CAMGRID computing cluster ^[@R34]^.

Modified TRBC hemagglutination assay {#S13}
------------------------------------

Modified TRBC assays were performed as described previously ^[@R35]^. Briefly, all α2,3-, α2,6-, α2,8-, and α2,9-linked sialic acids were removed from the surface of TRBC by incubating 62.5μl of 1% TRBC in PBS with 50mU Vibrio cholerae NA (VCNA; Roche, Almere, Netherlands) in 8mM calcium chloride at 37°C for 1 hour. Removal of sialic acids was confirmed by observation of complete loss of hemagglutination of the TRBC by control influenza A viruses. Subsequently, resialylation was performed using 0.25mU of α2,3-(N)-sialyltransferase or 12mU of α2,6-(N)-sialyltransferase (both COSMOBIO, Bio-Connect, Huissen, Netherlands) and 1.5mM CMP-sialic acid (Sigma-Aldrich, Zwijndrecht, Netherlands) at 37°C in 75μl for 2h, to produce α2,3-TRBC and α2,6-TRBC, respectively.
After a washing step, the TRBC were resuspended in PBS containing 1% bovine serum albumin to a final concentration of 0.5% TRBC. Resialylation was confirmed by hemagglutination of viruses with known receptor specificity: recombinant viruses with six or seven gene segments of influenza virus A/PR/8/1934 and the HA and NA of A/Vietnam/11/2004 H5N1 without the basic cleavage site, or the HA of A/Netherlands/213/2003 H3N2. The receptor specificity of mutant viruses (recombinant viruses with seven gene segments of influenza virus A/PR/8/1934 and the HA of AN1 with or without the substitutions N123D and/or N149D) was tested by performing a standard hemagglutination assay with the modified TRBC. In brief, serial twofold dilutions of virus in PBS were made in a 50μl volume; 50μl of 0.5% TRBC was added, followed by incubation for 1h at 4°C before determining the hemagglutination titer.

Fusion assay {#S14}
------------

Influenza virus HA-induced cell fusion was tested in Vero-118 cells transfected with 5μg of pCAGGs-HA using Xtremegene transfection reagent (Roche). One day after transfection, cells were harvested using trypsin-EDTA and plated in 6-well plates. The next morning, cells were washed and the medium was replaced with IMDM containing 10μg/ml of trypsin. After one hour, cells were washed with PBS and exposed to PBS at pH 5.0, 5.2, 5.4, 5.6, 5.8, or 6.0 for 10 minutes at 37°C. Subsequently, the PBS was replaced by IMDM supplemented with 10% FBS. Eighteen hours after the pH shock, cells were fixed using 80% ice-cold acetone, washed, and stained using a 20% Giemsa solution (Merck Millipore, Darmstadt, Germany).
HA stability assay {#S15}
------------------

The stability of the HAs of the mutant viruses (recombinant viruses with seven gene segments of influenza virus A/PR/8/1934 and the HA of AN1 with or without the substitutions N123D and/or N149D, and the HA of H5N1 A/Indonesia/5/2005 (INDO) with and without the substitution T318I) was evaluated by performing a thermostability assay. In short, viruses were diluted to 64 HA units/25μl using PBS. The samples were incubated in a thermal cycler for 30 minutes at temperatures of 50°C, 52°C, 54°C, 56°C, and 58°C. Subsequently, the HA titer was determined by performing a hemagglutination assay using turkey erythrocytes.

Minigenome assay {#S16}
----------------

A model viral RNA (vRNA), consisting of the firefly luciferase open reading frame flanked by the noncoding regions (NCRs) of segment 8 of influenza A virus, under the control of a T7 RNA polymerase promoter, was used for the minigenome assay ^[@R36]^. The reporter plasmid (0.5μg) was transfected into 293T cells in 6-well plates, along with 0.5μg of each of the pHW2000 plasmids encoding PB2, PB1, PA, and NP; 1μg of pAR3132 expressing T7 RNA polymerase; and 0.02μg of the Renilla luciferase expression plasmid pRL (Promega, Leiden, Netherlands) as an internal control. Forty-eight hours after transfection, luminescence was measured using the Dual-Glo Luciferase Assay System (Promega) according to the instructions of the manufacturer in a TECAN Infinite F200 machine (Tecan Benelux bv, Giessen, Netherlands). Relative light units (RLU) were calculated as the ratio of firefly and Renilla luciferase luminescence.

Pathology & Immunohistochemistry {#S17}
--------------------------------

After fixation in 10% neutral-buffered formalin, tissues were embedded in paraffin, sectioned at 3μm, and stained with hematoxylin and eosin (HE) for the detection of histological lesions by light microscopy.
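Returning to the minigenome assay above: the relative light units are the ratio of firefly to Renilla luminescence, and the activity of a polymerase complex is then typically expressed relative to a reference (e.g. wildtype) complex across replicates. A minimal sketch; function and variable names are illustrative:

```python
from statistics import mean

def relative_activity(mutant, wildtype):
    """mutant / wildtype: lists of (firefly, renilla) replicate readings.
    Each reading is first normalised by its Renilla internal control to
    correct for transfection efficiency; the mutant complex is then
    expressed as a percentage of the wildtype complex."""
    def rlu(readings):
        return mean(firefly / renilla for firefly, renilla in readings)

    return 100.0 * rlu(mutant) / rlu(wildtype)
```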
For the detection of virus antigen by immunohistochemistry, tissues were stained with a monoclonal antibody against the influenza A virus nucleoprotein as the primary antibody, as described previously ^[@R37]^. After determining the cell types expressing viral antigen, the percentage of positively staining cells per tissue was estimated and ranked on an ordinal scale: 0, 0% of cells; 1, 1-25%; 2, 26-50%; 3, \>50%.

Supplementary Material {#S18}
======================

We thank Peter van Run, Stefan van der Vliet and Anne Reiners for technical assistance. We thank the Chinese CDC for providing the A/Anhui/1/2013 isolate and Dr. Richard Webby for the synthetic construct of HA. This work was financed through NIAID-NIH contract HHSN266200700010C and EU FP7 programs EMPERIE and ANTIGONE. **Author Contributions**: M.R., E.J.A.S. and R.A.M.F. designed the experiments. M.R., E.J.A.S., M.G., M.S., T.M.B., S.B., D.M., P.L., M.L. and S.H. performed the experiments; M.R., E.J.A.S., T.M.B., S.B., J.M.B. and D.F.B. analyzed the data; M.R., E.J.A.S., D.J.S., T.K., G.F.R., A.D.M.E. and R.A.M.F. wrote the manuscript. **Supplementary Information** is linked to the online version of the paper. The authors declare no competing financial interest. ![Airborne transmission of AN1 viruses between ferrets. Transmission experiments are shown for the AN1 virus isolate in four ferret pairs (F1-F4) in panels A-D. A nose swab sample from recipient ferret F1 at 7 dpe was used for the transmission experiments in four ferret pairs (F5-F8) shown in panels E-H. Data for individual transmission experiments are shown in each panel, with virus shedding in inoculated and airborne virus-exposed animals shown as lines and bars, respectively. Black circles and bars represent shedding from the throat; white circles and bars represent shedding from the nose. The asterisk indicates the lack of swab collection at 9 dpe for the recipient ferret euthanized at 8 dpe.
The lower limit of detection is 0.5 log~10~ TCID~50~/mL.](nihms507296f1){#F1} ![Cartoon representation of a model of the trimer structure of the HA of AN1 (green) and AN1~N123D,N149D~ (cyan) bound to α2.6 (A) and α2.3 linked sialic acids (B). The structures of the three-sugar glycans NeuAcα2,6Galβ1-4GlcNAc (A) and NeuAcα2,3Galβ1-4GlcNAc (B) were docked into the H7 receptor binding site (RBS). The glycans and the amino acid substitutions discussed in the text are shown as sticks. Amino acids N123 and N149 are adjacent to the RBS and, in AN1, do not interact directly with the three-sugar glycans depicted in the figure. The mutations cause small changes in the position of some of the residues around the receptor binding site, notably R121 and D148, and additionally, for the α2,6 linked glycan, residues S128 and Q213. The D123 mutant can form stronger interactions with the side chain of R121, restricting its movement and orienting it to point towards the RBS and interact with the glycan. In AN1, N149 interacts with the neighbouring residue D148, restricting its orientation. The D149 mutant allows the side chain of D148 to rearrange and interact with the glycan. These changes allow both the α2,6- and α2,3-linked glycans to alter position and form more interactions with the HA. All residues are labelled in H7 numbering.](nihms507296f2){#F2}

###### Sanger sequence analysis of full viral genomes of the AN1 virus inoculum and airborne-transmitted viruses.
| Segment | nt position | nt wt | nt mut | aa position | aa wt | aa mut | Detected in (Inoculum; Transm 1; Transm 2) |
|---------|-------------|-------|--------|-------------|-------|--------|--------------------------------------------|
| PB2 | 411 | A | G | 128 | Gly | S[1](#TFN1){ref-type="table-fn"} | X |
| PB2 | 1017 | C | T | 330 | Phe | S | X[2](#TFN2){ref-type="table-fn"} X |
| PB2 | 1309 | C | T | 428 | Leu | S | X X |
| PB2 | 1846 | C | T | 607 | Leu | S | X[2](#TFN2){ref-type="table-fn"} X |
| **PB1** | **1593** | **G** | **A** | **523** | **Met** | **Ile** | **X** **X** |
| PB1 | 2055 | G | A | 678 | Ser | Asn | X |
| PA | 1070 | A | G | 349 | Glu | Gly | X |
| PA | 1167 | C | T | 380 | Asp | S | X |
| PA | 1380 | C | T | 452 | His | S | X |
| PA | 1616 | G | A | 531 | Arg | Lys | X |
| PA | 1674 | G | T | 550 | Leu | S | X[2](#TFN2){ref-type="table-fn"} |
| PA | 1776 | C | T | 584 | Cys | S | X[2](#TFN2){ref-type="table-fn"} |
| **HA** | **442** | **A** | **G** | **123** | **Asn** | **Asp** | **X**[2](#TFN2){ref-type="table-fn"} **X** **X**[2](#TFN2){ref-type="table-fn"} **X** **X** |
| **HA** | **520** | **A** | **G** | **149** | **Asn** | **Asp** | **X**[2](#TFN2){ref-type="table-fn"} **X** **X**[2](#TFN2){ref-type="table-fn"} **X** **X** |
| HA | 704 | C | A | 210 | Ala | Glu | X[2](#TFN2){ref-type="table-fn"} |
| NP | 718 | A | C | 225 | Ile | Leu | X[2](#TFN2){ref-type="table-fn"} |
| NA | 46 | C | T | 10 | Thr | Ile | X[2](#TFN2){ref-type="table-fn"} X[2](#TFN2){ref-type="table-fn"} X[2](#TFN2){ref-type="table-fn"} |
| M | 652 | T | C | 225 | Ala | S | X[2](#TFN2){ref-type="table-fn"} |
| NS | 180 | C | T | 52 | Leu | S | X[2](#TFN2){ref-type="table-fn"} |

S; silent substitution

Mixture of wildtype and mutant nucleotides.
Substitutions in bold were found in two subsequent transmission experiments and were phenotypically characterized.

###### Amino acid substitutions in the HA gene and the PB1 gene (nt positions 1126-1616) in AN1 viruses before and after transmission in ferrets, as determined by 454 sequencing

| Gene | nt pos | nt wt | nt mut | aa pos | aa wt | aa mut | % variant/reads (Inoculum; Donor and Recipient F1, F2, F3, F5) |
|------|--------|-------|--------|--------|-------|--------|----------------------------------------------------------------|
| HA | 117 | C | T | 14 | Thr | S[1](#TFN3){ref-type="table-fn"} | 19.5[2](#TFN4){ref-type="table-fn"}/6629[3](#TFN5){ref-type="table-fn"} 12.5/7954 20.3/10349 7.2/4763 9.6/5364 |
| HA | 257 | C | T | 61 | Thr | Ile | 5.6/8142 |
| **HA** | **442** | **A** | **G** | **123** | **Asn** | **Asp** | **48.7/8648 14.4/9183 99.8/6224 37.4/11196 28.5/12139 36.5/6810 90.1/8417 99.8/11576 99.9/10182** |
| HA | 448 | G | A | 125 | Ala | Thr | 1.8/7740 14.7/10044 20.5/10733 7.6/6088 10.4/7250 |
| **HA** | **520** | **A** | **G** | **149** | **Asn** | **Asp** | **62.5/6163 17.9/6737 99.9/3622 50.4/8447 28.6/7818 61.2/5266 90.4/6840 99.7/9229 99.7/9047** |
| HA | 704 | C | A | 210 | Ala | Glu | 72.5/3903 28.4/4804 56.4/4109 8.5/3359 |
| *[HA]{.ul}* | *[725]{.ul}* | *[T]{.ul}* | *[A]{.ul}* | *[217]{.ul}* | *[Leu]{.ul}* | *[Gln]{.ul}* | *[9.1/3831]{.ul}* *[15.7/4749]{.ul}* \- *[24.5/3322]{.ul}* |
| HA | 1032 | G | T | 319 | Lys | Asn | 8.3/6447 |
| HA | 1218 | C | T | 381 | Asn | S | 5.2/2102 23.4/1963 |
| HA | 1396 | G | A | 441 | Glu | Lys | 14.5/2299 |
| HA | 1422 | G | A | 449 | Glu | S | 12.6/4027 |
| HA | 1575 | C | T | 500 | Ser | S | 6.1/6532 15.5/2245 7.8/2387 |
| HA | 1706 | T | C | NCR[4](#TFN6){ref-type="table-fn"} | | | 93.0/470 |
| PB1 | 1404 | G | A | 460 | Gln | S | 6.8/603 11.9/1024 |
| **PB1** | **1593** | **G** | **A** | **523** | **Met** | **Ile** | **100/256 82.2/1795 99.9/1064** |

S; silent substitution

Percentage variant present with a detection threshold of 5%

Number of reads

NCR; non coding region.

Substitutions in bold were found in two subsequent transmission experiments and were phenotypically characterized. The substitution in italics and underlined (L217Q) corresponds to the receptor switch from α2.6 to α2.3 linked sialic acid preference.

[^1]: These authors contributed equally to this work.
Category Archives: Method

One of the traditions of FooCamp is playing Werewolf. I really did not enjoy it the one time I tried playing there, but the folks who do play love it. Another game we played there was the Reverse Scavenger …

I am a bit stunned at the aggressive nature of 'the rules of bar camp' that have been put forward. I guess I am not surprised, because they are parodied from 'the rules of fight club.' Going back up to …

Speed Geeking is a great way to share projects in a community. It is a method innovated at Penguin Days, led by Allen Gunn, director of Aspiration. This process is great because it 1) …

Open Space Technology, created originally by Harrison Owen, is a great process to support agenda formation among technical communities meeting to accomplish work together. Before the day of the meeting, participants can put forward ideas they have about sessions they …

I recently did this exercise with the Identity Community to reflect on where we have come from. Someone called it a 1980's wiki. Using large paper and a wall, a map of the community can be made. It could document …

A Strong Wind Blows is a fun group introduction method that I learned from attending Penguin Day, led by Allen Gunn, director of Aspiration. It is played with a group of between 20 and 100. The whole point is to …

The Spectrogram is a fun opinion-surfacing method that I learned from attending Penguin Day, led by Allen Gunn, director of Aspiration. The Spectrogram is a way to surface opinions in a group and spark dialogue on critical issues. The goal …

The Fish Bowl is a way to support dialogue in a community about critical issues. It is called a fish bowl because a center circle of people have a conversation and those sitting around them watch.
The form looks like …

My talk at BayCHI on Tuesday went really well. reinventnow: daily reinvention of who we are wrote about his experience. He talks about a pattern that did not make it out to the whole group. The fact that meeting people …
It’s the question on many minds as the Miami HEAT begin training camp at Florida Atlantic University. When you have more quality players than you have realistic rotation spots, what exactly is the rotation going to be? If you ask Erik Spoelstra that question, you get a response with the same contemplative fervor as you would any other question asked of the coach during training camp. Each camp presents its own unique puzzles to piece together, and this one is no different. “We have more talent. That’s a good thing,” Spoelstra said. “We’ve been working to get this kind of dilemma, or challenge, I wouldn’t call it a dilemma. We were working for several years to get this kind of environment again. I love it. I embrace it. I want our players to embrace it. Things are going to have to be earned, that’s the way it should be.” Run down the list of returning players, many of them on the team at least partially because they fit a specific competitive mindset, and they’ll echo much of the same sentiment. “You earn what you get around here,” says James Johnson. “Everything is going to have to be earned,” says Justise Winslow. “It’s going to come down to who is playing well. I don’t think anybody has earned anything on this team,” says Tyler Johnson. You get the picture. A few years in, many of the guys are a reflection of their coach and so they answer questions in kind. Despite the legitimacy of the concerns regarding scarcity of minutes facing as many players as could deserve them – we could spend forever making spreadsheets of possible lineups, detailed down to each second of each quarter – perhaps we are asking the wrong questions. Maybe what we should be asking is: when was the last time the HEAT actually had a set rotation? Let’s quickly go back to the 2014-15 season, the year after LeBron James re-joined the Cleveland Cavaliers. 
The HEAT made multiple trades that year, first to increase roster flexibility by getting below the luxury tax and then to acquire Goran Dragić. On the day Dragić arrives in Miami, excited to play with Chris Bosh and Dwyane Wade, Bosh is hospitalized with blood clots. The same happens the following season, with Amar’e Stoudemire jumping into the starting lineup during the second half of the season and Joe Johnson signing after the trade deadline. Then, after Wade goes to Chicago, the HEAT begin 2016-17 with injuries to Josh Richardson and Wayne Ellington, followed by a lengthy absence for Dion Waiters and shoulder surgery for Justise Winslow. Last year begins with an injury for Rodney McGruder, Waiters has ankle surgery midway through the year and Wade re-joins the team at the deadline. All along, the HEAT dealt with the normal wear and tear injuries, with Spoelstra trying to work through various combinations to find the right mix. All told, it’s been more than five years since the HEAT had fewer than 11 players start at least five games in a season. “Unfortunately [after] what happened with [Bosh], every season somebody is out. I’m already used to it,” Dragić said. No, the HEAT haven’t exactly had an actual rotation in quite some time. With Waiters and possibly James Johnson missing time in at least preseason, that will only continue for the short term. The second layer to all of this preseason talk – the only conversation more preseason-y than sussing out the rotation is finding out who is in the best shape of their lives – is to ask whether the rotation really matters all that much. Players thrive with stability and consistency, yes. Playing extended minutes with various combinations, like Dragić and Waiters two seasons ago when they developed precise drive-and-kick timing, offers a foundation for the players.
The stronger that foundation, the better chance you have of your team’s identity holding up when the postseason threatens to snap everything you’ve built out of existence. Injuries are going to get in the way of that. That’s the nature of the beast. “It’s funny how the NBA season works itself out,” Wayne Ellington said. “We’re all preaching how deep we are right now. Preferably we stay that deep all year, but realistically we probably won’t. That’s just the truth of the matter. Different things happen.” Where Miami’s general rotation might be at least partially irrelevant beyond mere injuries is that their coach is as far from stuck in his ways as is humanly possible. Even if the team was healthy and managed to firm up the rotation for the first couple weeks of the season, it wouldn’t particularly matter. This is Spoelstra we’re talking about, a coach with a long history of lineup experimentation. One day you could be wearing a suit and the next you could be starting against one of the league’s best teams. Spoelstra is no longer as fond of saying, “The rotation is the rotation” as he used to be, but it applies today all the same. The soup du jour is determined by the chef. The rotation, as Winslow says, is “constantly evolving”. “You can be at Game 81 and nothing will be set in stone with Spo,” Kelly Olynyk said. “It’s fun. It keeps you on your toes.” None of this is to say that Spoelstra ignores the value of consistency. He’ll tinker, but he won’t do it so often that he throws all the parts of the machine out of whack. Players are coached to know their roles no matter who they are playing with, and the systems on either end of the floor are designed to reflect that same flexibility. It’s easy to plug-and-play when everyone knows what is expected of them. “At the end of the day, if you’re going to find the good chemistry, everybody can play together,” Dragić said. “You just rotate players. 
Everybody knows the system.” Having a ton of options to work with, and through, opens you up for both criticism and praise. Even if you make the best decision you can possibly make, process-wise, one loss offers anyone else the opportunity to say you chose the wrong combination of eminently capable players. There’s always a backup quarterback, so to speak, and Spoelstra has never shied away from saying when he might have made a mistake. We’re being mildly facetious when we say the rotation might be irrelevant. When we say that, we’re really just talking about the regular season. Where the rotation really matters is in the playoffs, which is why Spoelstra tends to settle into something more solid in the final two months of the season or so. Even then, the players that play are going to be the ones that match up best with the opponent. So, in the end, here we are talking about preseason things just as we always do about this time. The rotation is worth talking about, but whatever it is in three weeks doesn’t have much of a chance of being the same in three months, unless of course it works to a high degree right out of the gate. Either Spoelstra will make changes or the basketball gods will make changes for him, players will adjust, and we’ll soon forget these conversations of summer. “This league is crazy. You never know,” Dragić said.
etermine p*q(g) - 3*c(g). -4*g**2 - 3*g + 2 Let a(z) = -13*z**2 + 33*z - 9498. Let u(y) = -7*y**2 + 18*y - 4753. Give -6*a(j) + 11*u(j). j**2 + 4705 Let f(i) = 20*i - 209. Let k(n) = -9*n + 103. What is -2*f(h) - 5*k(h)? 5*h - 97 Suppose -4*x + 106 = 3*i + 74, 2*i - 24 = -3*x. Let k(m) = -m**2 - 8*m. Let l(c) be the second derivative of -c**3/2 - 19*c - 2. Determine x*l(t) - 3*k(t). 3*t**2 Let i(f) = -4*f**2 + 7*f - 9. Let a(y) be the third derivative of 3*y**5/20 - 5*y**4/8 + 19*y**3/6 - 954*y**2. Give -6*a(g) - 13*i(g). -2*g**2 - g + 3 Let x(w) be the second derivative of 7*w**4/12 + w**2/2 + 28*w + 1. Let r(p) = 11*p**2 + 1. Suppose -7*a = -13*a + 30. What is a*r(v) - 8*x(v)? -v**2 - 3 Let d(s) be the second derivative of 6*s - 1/3*s**4 - 3 - 1/2*s**3 + s**2. Let q(a) = a**2 + a - 1. Determine -d(y) - 2*q(y). 2*y**2 + y Let m(v) be the first derivative of v**4/4 - v + 9468. Let f(w) = -26*w**3 - w + 2. What is -f(l) - 3*m(l)? 23*l**3 + l + 1 Let c(d) = -d + 1. Let g(r) = -r - 9. Let m = -344 - -341. Let y(h) = h - 9. Let v(w) = m*g(w) + 4*y(w). Determine 5*c(u) + v(u). 2*u - 4 Let c(t) = 209*t**2 + 19*t - 19. Let j(b) be the first derivative of -14*b**3 - 2*b**2 + 4*b + 556. Determine -4*c(s) - 19*j(s). -38*s**2 Let v be (-2 + 0)*(26255/(-9790) + (-2)/(-11)). Let c(j) = 5*j**3 + 9*j**2 - 5*j + 5. Let y(z) = 6*z**3 + 10*z**2 - 6*z + 6. Give v*c(f) - 4*y(f). f**3 + 5*f**2 - f + 1 Let w be (22/(-6))/((-6)/18). Let p(h) = -w - 19 + 29 - 9*h. Let f be (-3*8/30)/((-12)/(-30)). Let y(u) = 4*u + 1. Calculate f*p(n) - 5*y(n). -2*n - 3 Let d(z) = -13*z**2 + 6*z - 181. Let u(i) = 10*i**2 - 5*i + 180. What is 4*d(h) + 5*u(h)? -2*h**2 - h + 176 Let w(b) = -2*b**3 + 4*b**2 - 11*b + 1. Let o(j) = -j**3 + j**2 - 5*j. Give 9*o(g) - 4*w(g). -g**3 - 7*g**2 - g - 4 Let x(q) = -115*q - 223 + 223 + 5*q**2 + 115*q. Suppose -b + 1 + 5 = 0. Suppose 0 = 2*s - b + 20. Let z(c) = -10*c**2. Determine s*x(h) - 4*z(h). 5*h**2 Let g(f) = 13*f + 152. Let z(b) = 13*b + 136. 
What is -4*g(w) + 5*z(w)? 13*w + 72 Let m be (-2)/(-4)*-2 + 1 + -5. Let b(v) = 6*v + 2. Suppose 23*c + 63 = 17. Let y(i) = -11*i - 5. What is c*y(d) + m*b(d)? -8*d Let r(n) be the first derivative of 3*n**2/2 - 7. Let g(j) = -1 - j**2 + 304983*j + 1 - 304987*j. Determine -3*g(k) - 4*r(k). 3*k**2 Let u(p) = 1. Let l be 2 + 11 + 0*2/(-10). Let f = 26 - l. Let r(j) = -5*j + f*j + 2 + 0 - 3*j. Calculate r(m) - 2*u(m). 5*m Let j(o) = -7985*o**3 - 6*o**2 + 13*o + 8. Let f(n) = 2663*n**3 + 2*n**2 - 5*n - 3. Calculate -8*f(m) - 3*j(m). 2651*m**3 + 2*m**2 + m Let d be -1*(-1)/(-5)*5. Let j be 27 + d + (2 - 1)*-2. Let l = 18 - j. Let a(w) = 11*w - 17. Let y(n) = 5*n - 8. Determine l*a(i) + 13*y(i). -i - 2 Let v(c) = 6*c - 1. Suppose -42*b = -45*b - w + 9, -w + 8 = 2*b. Let q(t) = t. Determine b*v(y) + 2*q(y). 8*y - 1 Let n(z) = -206*z**2 + 5*z + 3. Let m(y) = -412*y**2 + 10*y + 7. What is 2*m(v) - 5*n(v)? 206*v**2 - 5*v - 1 Let f be (-172)/(-8) + (-6)/(-4) + -1. Let m(y) = -2*y - 14 + 7*y**2 - f + 28. Let s(c) = -4*c**2 + c + 4. Calculate -3*m(t) - 5*s(t). -t**2 + t + 4 Let z(s) = 0 + 4 - 5. Let o(i) = -2*i. Suppose -6*w - 2 = 4. Let y = -114 - -115. Calculate w*o(m) + y*z(m). 2*m - 1 Let g(n) = -9*n**3 - 6*n**2 + 4*n - 4. Suppose 204*m + 347 + 1897 = 0. Let o(s) = -27*s**3 - 17*s**2 + 11*s - 11. What is m*g(z) + 4*o(z)? -9*z**3 - 2*z**2 Let x(i) = 2 - i + 3 - 2*i. Let a(l) = -108872228*l - 5 + 16 + 108872221*l. Give -2*a(m) + 5*x(m). -m + 3 Let r(c) = -1666*c**3 + 98*c - 98. Let u(g) = -9*g**3 + g - 1. What is -4*r(j) + 392*u(j)? 3136*j**3 Let v(b) = -b - 1. Suppose -4*k + 152 = 4*u, -7*k + 12*k = -2*u + 88. Suppose 0 = -27*t + 10*t + u. Let y(x) = 3*x - 2. Give t*v(o) - y(o). -5*o Let f(p) = 522*p**2 - 25*p - 2. Let c(j) = -j**2 + 18*j. Calculate -c(m) - f(m). -521*m**2 + 7*m + 2 Let z(t) = 27*t**3 - 3*t**2 + 3*t - 3. Let f = -7347 + 7247. Let r(b) = -460*b**3 + 50*b**2 - 50*b + 50. Give f*z(p) - 6*r(p). 60*p**3 Let m(j) = -2*j**3 - 2*j**2 + j + 1. 
Let t(v) = -2817*v**3 - 5634*v**2 + 2817*v + 2817. Calculate -8451*m(h) + 3*t(h). 8451*h**3 Let z(r) = 145*r**3 - 55*r**2 + 55*r + 55. Let n(k) = 8*k**3 - 3*k**2 + 3*k + 3. Let v(g) = -g**2 + 344*g - 3450. Let s be v(10). Determine s*n(o) + 6*z(o). -10*o**3 Suppose 1 = -3*q - 17. Let z(m) = m + 1. Let l(v) = -771*v - 14. Let p(o) = -482*o - 9. Let k(h) = -5*l(h) + 8*p(h). Give q*z(r) - 5*k(r). -r + 4 Let v(y) = -4*y**2 - 25*y - 10. Suppose -5295*o + 5290*o - 55 = 2*h, 3*o - 2*h = -49. Let m(r) = -2*r**2 - 12*r - 5. Determine o*m(p) + 6*v(p). 2*p**2 + 6*p + 5 Let l(i) = -i**2 + 3*i - 85. Let z(x) = 6*x**2 - 17*x + 513. What is 34*l(b) + 6*z(b)? 2*b**2 + 188 Let o(s) = -2*s**3 - 3*s**2 - 7*s - 4. Let a(q) = q**2 - q - 5. Determine 2*a(g) - o(g). 2*g**3 + 5*g**2 + 5*g - 6 Let r be (0 - (2 + -2)) + (-1311)/(-3). Let s = 434 - r. Let i(y) = -y**2 + y - 1. Let m(o) = 4*o**2 - 6*o + 2. What is s*i(t) - m(t)? -t**2 + 3*t + 1 Let g(k) = -k**2. Let u(b) be the second derivative of b**5/20 - 5*b**4/12 - b**3 + 1792*b. What is 3*g(a) - u(a)? -a**3 + 2*a**2 + 6*a Let f(j) = -89*j + 34. Let n(h) = -77*h + 34. Calculate 4*f(k) - 5*n(k). 29*k - 34 Let v(u) = -12*u**2 - 3*u - 3. Let p be (22 + -16)/((-6)/(-4 + 1)). Let l(j) = -12*j**2 - 2*j - 2. Determine p*v(w) - 4*l(w). 12*w**2 - w - 1 Let x(z) = 2685*z**2 + 2687*z**2 - 5368*z**2 - 1 + 6*z. Let h(o) = 0 + 18*o**2 + 5*o - 38*o**2 - 1 + 24*o**2. What is 6*h(n) - 5*x(n)? 4*n**2 - 1 Let f(z) = 3*z - 3. Suppose 98 = 23*t - 86. Let y(x) = x**3 - 6*x**2 - 5*x - 9. Let l be y(7). Let m(c) = -5*c + 5. Calculate l*m(r) + t*f(r). -r + 1 Let p(d) = 40*d - 6. Suppose -5099*n = -5124*n - 25. Let f(t) = -1. Determine n*p(b) + 6*f(b). -40*b Let v(o) = -6*o**2 - 29*o - 6. Let h(w) = -5*w**2 - 14*w - 4. Calculate -5*h(b) + 2*v(b). 13*b**2 + 12*b + 8 Let u(r) = 3*r**2 + 268*r - 50. Let p(g) = g**2 + 134*g - 23. What is -13*p(t) + 6*u(t)? 5*t**2 - 134*t - 1 Let s(h) = h - 4. Let t(g) = g + 17. Give 8*s(x) - 7*t(x). 
x - 151 Let j(f) = -888963*f**3 + 2306*f**2 - 4612*f - 2306. Let m(z) = -1156*z**3 + 3*z**2 - 6*z - 3. What is 3*j(d) - 2306*m(d)? -1153*d**3 Let y(r) = 13*r - 1. Let f(w) = 1049*w + 8. Give f(i) + 5*y(i). 1114*i + 3 Let m(z) = -5*z**3 - 52*z**2 - 33*z - 2. Let u(c) = 9*c**3 + 129*c**2 + 66*c + 5. What is -5*m(r) - 2*u(r)? 7*r**3 + 2*r**2 + 33*r Let d(y) = 15*y - 12. Let m be -14*-1*(-14 + 936/63). Let b(z) be the second derivative of -z**3/6 + z**2/2 - z. Determine m*b(a) + d(a). 3*a Let t(g) = -4*g + 6. Let o(v) = -9*v + 13. Let k(w) = w**2 - 4*w + 11. Let h be k(4). Suppose h*a - 7*a = 48. Let x be (3 - a)*2/3. Calculate x*o(p) + 13*t(p). 2*p Let r(c) = 406*c**3 + 22*c**2 - 31*c + 23. Let x(j) = -136*j**3 - 8*j**2 + 11*j - 8. Determine 6*r(b) + 17*x(b). 124*b**3 - 4*b**2 + b + 2 Let w = 214 + -207. Let o(c) = -13*c**3 - 10*c. Let s(g) = -4*g**3 - 3*g. Calculate w*s(k) - 2*o(k). -2*k**3 - k Let p(t) be the third derivative of t**4/24 - t**3/3 + 51*t**2 + 2*t. Let m(y) = 2 - 1 + 0. Let r(g) = 2*g - 1. Let u be r(0). Calculate u*p(q) - 3*m(q). -q - 1 Let b(r) = 5 - 4 - 1812*r + 2 + 1809*r + 23*r**2. Let z(n) = -22*n**2 + 2*n - 2. Calculate 4*b(x) + 6*z(x). -40*x**2 Let i(q) = 8*q**2 + 4*q - 2. Let m(z) = -4 + 2 + 96*z**2 + 71*z**2 + 5*z - 160*z**2. Give -6*i(o) + 5*m(o). -13*o**2 + o + 2 Let p(i) = 4*i**3 - 7*i**2 - 3*i - 1. Let b(g) = 9*g**3 - 13*g**2 - 5*g - 1. What is 7*b(s) - 13*p(s)? 11*s**3 + 4*s + 6 Let p(b) = -2*b**2 + 2*b. Suppose 0 = 4*n + y - 4, 4 + 2 = n - y. Suppose -3*q = q - 48. Let z(t) = 3*t**2 - t + q*t**2 - 14*t**n. What is -3*p(g) - 7*z(g)? -g**2 + g Let f(x) = 27*x**3 - 207*x + 240. Let l(h) = -7*h**3 + 52*h - 64. What is 4*f(m) + 15*l(m)? 3*m**3 - 48*m Let k(i) = 10*i**2 - 10*i + 1. Let r(p) = -19*p**2 + 20*p - 3. Give 5*k(u) + 3*r(u). -7*u**2 + 10*u - 4 Let v(l) = -3*l + 2. Let j(c) = -19*c + 13. Let q = 63 + -24. Suppose -553*b + 572*b - 44441 = 0. Let r = -2345 + b. Calculate q*v(o) + r*j(o). -3*o Let i(t) = -3*t - 5. Let w(o) = -o - 1. 
Suppose 0*j - 3 = -3*j. Let b be 1*(-8 - -4) - (0 + 2). Determine b*w(l) + j*i(l). 3*l + 1 Let q(m) = 836*m + 1. Let s(g) = 3351*g + 7. Calculate 13*q(d) - 3*s(d). 815*d - 8 Let s(b) = 2079*b + 6. Let c(u) = -1. Give 6*c(i) + s(i). 2079*i Let y(a) = -20*a - 8. Let x(o) = 5*o + 2.
1. Technical Field The present invention generally relates to a semiconductor device, and to a semiconductor memory device capable of supplying and measuring an electric current through a pad. 2. Related Art In general, a semiconductor memory device is classified into a volatile memory device and a nonvolatile memory device. The volatile memory device loses data stored therein when power is cut off, whereas the nonvolatile memory device retains data stored therein even when power is cut off. The nonvolatile memory device includes various types of memory cells. Depending on the structures of the memory cells, the nonvolatile memory device may be classified into a flash memory device, ferroelectric RAM (FRAM) using a ferroelectric capacitor, magnetic RAM (MRAM) using a tunneling magneto-resistive (TMR) layer, and a semiconductor memory device using chalcogenide alloys. In particular, the semiconductor memory device using chalcogenide alloys is a nonvolatile memory device that uses a phase change, that is, a resistance change, according to a temperature change. For this reason, the semiconductor memory device is also called a variable resistance memory device. The memory cell of the semiconductor memory device is made of a chalcogen compound, that is, a phase change material, for example, a germanium (Ge)-antimony (Sb)-tellurium (Te) mixture (GST) (hereinafter referred to as “GST materials”). The GST materials have an amorphous state with relatively high resistivity and a crystalline state with relatively low resistivity. The memory cell of the semiconductor memory device may store data “1” corresponding to the amorphous state and data “0” corresponding to the crystalline state. When the GST materials are heated, data corresponding to the amorphous state or the crystalline state is programmed into the memory cell of the semiconductor memory device. 
For example, the amorphous state or crystalline state of the GST materials may be controlled by controlling the amount of current used to heat the GST materials and the duration for which the current is supplied. As described above, the state of a memory cell of the phase change memory device is changed depending on a write current supplied to the memory cell. Furthermore, the state of a memory cell of the phase change memory device is determined depending on how much of the current supplied thereto the memory cell can conduct. In the write operation of the phase change memory device, if a write current is shifted by the influence of a write driver and peripheral circuits, memory cells may have an unexpected resistance distribution. In the read operation of the phase change memory device, if a sensing current is shifted by the influence of a sense amplifier and peripheral circuits, a resistance distribution of memory cells may not be precisely detected. For these reasons, a problem arises in that a test of the memory cells may not be performed precisely. Accordingly, there is a need for a scheme capable of supplying a write current to a memory cell, or detecting the sensing current of a memory cell, without the influence of the relevant circuits during the test operation of a phase change memory device.
Global versus local: double dissociation between MT+ and V3A in motion processing revealed using continuous theta burst transcranial magnetic stimulation. The functional properties of motion selective areas in human visual cortex, including V3A, MT+, and intraparietal sulcus (IPS) are not fully understood. To examine the functional specialization of these areas for global and local motion processing, we used off-line, neuronavigated, continuous theta burst (cTBS) transcranial magnetic stimulation to temporarily alter neural activity within unilateral V3A, MT+, and IPS. A within-subjects design was employed and stimulation sessions were separated by at least 24 h. In each session, subjects were asked to discriminate the global motion directions of successively presented random dot kinematograms (RDKs) before and after cTBS. RDKs were presented at either 100 or 40 % coherence in either the left or right visual field. We found that V3A stimulation selectively impaired discrimination of 100 % coherent motion, while MT+ stimulation selectively impaired discrimination of 40 % coherent motion. IPS stimulation impaired discrimination of both motion stimuli. All cTBS effects were specific to stimuli presented contralaterally to the stimulation site and vertex stimulation had no effect. The double dissociation between the cTBS effects on MT+ and V3A indicates distinct roles for these two regions in motion processing. Judging the direction of 100 % coherent motion can rely on local motion processing because every dot moves in the same direction. However, judging the global direction of 40 % coherent motion requires global processing. Thus, our results suggest separate, parallel processing of local and global motion in V3A and MT+, respectively, with the outputs of these two areas being combined within the IPS.
Available files: SoftSite Pro is a powerful and easy-to-use software portal solution written in Perl with full SSI (Server Side Includes) support. SoftSite Pro can be used in both Windows and Unix/Linux environments and is remarkably easy to set up, since it contains a built-in installer that automatically detects and configures the correct paths/URLs and installs the script for you. Almost everything in SoftSite Pro is completely automated, saving you valuable time and money.
# frozen_string_literal: true

module PathsHelper
  ##
  # Includes functions related to links or redirects
  ##
  def active_nav_li(link)
    if current_page?(link)
      'active'
    else
      ''
    end
  end
end
Q: How to fix strange loop iteration when using npm request-promise?

I have been trying to get some data from a website into my NODE application using web-scraping. The data looks to work alright although there are strange operations in the for loop I have created.

ISSUES I'm facing:

1. Iterations are not sequential.
2. list.length starts from 1 and not 0, WHY?
3. Not all the data are added on the table.

So the code I'm using (see below) runs through a list of URL's, then I add them in an object called options and finally pass the option in the request-promise function. The first issue is that on one trial it will execute on this sequence 1, 2, 0 and on another trial it might execute 0, 2, 1. Since I access a server with a GET request I thought it would need time to load the data, so I have tried using async and await which didn't work. I have also tried sleep but also didn't work. The sequence remains unstable.

The second problem is that the length of the list doesn't start with 0 but on 1. (i.e. let list = ["0", "1", "2", "3"] would have length of 4). Is it by default in NODE?

The third issue is that even if all iterations are made (even in wrong sequence), SOMETIMES it would show less data than expected!

    const listOfJobIds = ["MTM2NDQtMTE1NzQ2LVMgMQ", "MjI3MjkwIDU", "MjI3MjIzIDU"];
    let listLength = listOfJobIds.length - 1;
    let options = {};

    function loopJobs(listOfJobIds) {
        for (let i = 0; i < listOfJobIds.length; i++) {
            // Declare options for the request-promise
            options = {
                url: 'https://ec.europa.eu/eures/eures-searchengine/page/jv/id/' + listOfJobIds[i] + '?lang=en&_=1594981312724&app=2.4.1-build-2',
                json: true
            };
            rp(options).then(async (data) => {
                await getJobInformation(data, i);
            }).catch((err) => {
                console.log(err);
            });
        }
    }

    loopJobs(listOfJobIds);

    async function getJobInformation(data, i) {
        process.stdout.write('JOB: ' + i + ' - Loading JOB information ');
        // GET SOME DATA
        var language = data.jvProfiles[data.preferredLanguage];
        let job_id = data.id;
        process.stdout.write('. ');
        let job_vacancy_id = data.documentId;
        process.stdout.write('. ');
        let job_title = language.title;
        process.stdout.write('. ');
        let job_description = language.description;
        process.stdout.write('. ');
        // ADD INFORMATION IN THE TABLE
        job_table.push([
            job_id,
            job_vacancy_id,
            job_title,
            job_description
        ]);
        console.log("✅");
        if (i == listLength) {
            printTable1();
        }
    }

TRIAL 1:

    JOB: 0 - Loading JOB information . . . . ✅
    JOB: 2 - Loading JOB information . . . . ✅
    ┌──────────────┬──────────────────┬────────────────────────────────────────┬──────────────────────────────────────────────────────────────────────┐
    │ Job ID       │ Job Vacancy ID   │ Job Title                              │ Job Description                                                      │
    ├──────────────┼──────────────────┼────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────┤
    │ MTM2NDQtMTE… │ 13644-115746-S   │ SAP BASIS HANA Manager, Database Engi… │ Stellenangebotsbeschreibung: <br>Minimum qualifications:<br><br>- B… │
    ├──────────────┼──────────────────┼────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────┤
    │ MjI3MjIzIDU  │ 227223           │ DATABASE ADMINISTRATION                │ -MANAGING DATABASES ON PREMISES AS WELL AS IN CLOUD -HANDLING MIGRA… │
    └──────────────┴──────────────────┴────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────┘
    JOB: 1 - Loading JOB information . . . . ✅

TRIAL 2:

    JOB: 2 - Loading JOB information . . . . ✅
    ┌──────────────┬──────────────────┬────────────────────────────────────────┬──────────────────────────────────────────────────────────────────────┐
    │ Job ID       │ Job Vacancy ID   │ Job Title                              │ Job Description                                                      │
    ├──────────────┼──────────────────┼────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────┤
    │ MjI3MjIzIDU  │ 227223           │ DATABASE ADMINISTRATION                │ -MANAGING DATABASES ON PREMISES AS WELL AS IN CLOUD -HANDLING MIGRA… │
    └──────────────┴──────────────────┴────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────┘
    JOB: 1 - Loading JOB information . . . . ✅
    JOB: 0 - Loading JOB information . . . . ✅

A: If you're trying to sequence your calls to getJobInformation(), then you can do this:

    async function loopJobs(listOfJobIds) {
        for (let i = 0; i < listOfJobIds.length; i++) {
            // Declare options for the request-promise
            let options = {
                url: 'https://ec.europa.eu/eures/eures-searchengine/page/jv/id/' + listOfJobIds[i] + '?lang=en&_=1594981312724&app=2.4.1-build-2',
                json: true
            };
            let data = await rp(options);
            await getJobInformation(data, i);
        }
        printTable1(); // I would suggest removing this call from getJobInformation()
    }

    loopJobs(listOfJobIds).then(() => {
        console.log("all done");
    }).catch(err => {
        console.log(err);
    });

Note: getJobInformation() isn't a classic asynchronous function. It writes to the stream which is somewhat asynchronous, but nothing in the function is being waited on and thus you are getting no use out of making it async or awaiting it.
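If strict sequencing isn't actually required and the goal is only a stable ordering of results, an alternative to awaiting each request in turn is to fire them all at once and let Promise.all reassemble the results in input order. The sketch below is not the original poster's code: fetchJob is a made-up stub standing in for rp(options), with artificial delays chosen so that later jobs finish first.

```javascript
// Sketch: run the job "requests" concurrently but keep results in input order.
// fetchJob is a stand-in for rp(options); a real version would do an HTTP GET.
function fetchJob(id, delayMs) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ id, title: 'job ' + id }), delayMs)
  );
}

async function loadAllJobs(ids) {
  // Give later jobs shorter delays so they resolve first; Promise.all still
  // returns the results in the same order as the input array.
  const jobs = ids.map((id, i) => fetchJob(id, (ids.length - i) * 20));
  const results = await Promise.all(jobs);
  return results.map((job) => job.id);
}

loadAllJobs(['a', 'b', 'c']).then((order) => {
  console.log(order.join(',')); // matches input order even though 'c' resolved first
});
```

One caveat: Promise.all rejects as soon as any single request fails, so per-request error handling (as in the original per-call .catch) may still be worth keeping if partial results are acceptable.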
George Pullinger George Richard Pullinger (14 March 1920 – 4 August 1982) was an English cricketer. Pullinger was a right-handed batsman who bowled right-arm fast-medium. He was born in Islington, London. An amateur, Pullinger made his first-class debut for Essex against Middlesex in the 1949 County Championship as cover for Ken Preston. Available for only the first half of the 1949 season, he made fifteen further appearances. He played twice more in the 1950 season, then disappeared from first-class cricket. He often opened the bowling with Trevor Bailey. In his eighteen first-class matches he took 41 wickets at an average of 37.97, with best figures of 5/54. These figures, which were his only first-class five wicket haul, came against Somerset in 1949. A true tailender, with the bat he scored 53 runs at a batting average of 5.88, with a high score of 14 not out. He died at Thurrock, Essex on 4 August 1982. His obituary appeared in the 1986 edition of Wisden Cricketers' Almanack. References External links George Pullinger at ESPNcricinfo George Pullinger at CricketArchive Category:1920 births Category:1982 deaths Category:People from Islington (district) Category:Sportspeople from London Category:English cricketers Category:Essex cricketers
.TH reprocmd 8 "May 2012" .\" ==================================================================== .\" Copyright 2012 Daniel Pocock. All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in .\" the documentation and/or other materials provided with the .\" distribution. .\" .\" 3. Neither the name of the author(s) nor the names of any contributors .\" may be used to endorse or promote products derived from this software .\" without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR(S) AND CONTRIBUTORS "AS IS" AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR(S) OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" ==================================================================== .\" .\" .\" .SH NAME reprocmd \- control the repro SIP proxy server .SH SYNOPSIS .B reprocmd [OPTIONS...] .SH DESCRIPTION .B reprocmd controls the .B repro SIP proxy server. 
.SH SEE ALSO Repro web site at .B http://www.resiprocate.org/About_Repro .\".SH AUTHORS .\".SH BUGS
Manipulation of porcine carcass composition by ractopamine. The effects of dietary ractopamine and protein level on growth performance, individual muscle weight and carcass composition of finishing pigs were evaluated in two experiments. Twelve barrows and 12 gilts (Exp. 1) and 32 barrows (Exp. 2) with an average initial weight of 64 kg were penned individually and offered ractopamine at 0 or 20 ppm in diets containing 13 or 17% CP in 2 x 2 factorial experiments for 28 d. In both experiments, dietary ractopamine improved daily gain (P less than .1) and gain-to-feed ratio (P less than .05) at the 17% dietary protein level but depressed these response criteria at the 13% protein level. Leaf fat was reduced (P less than .05) and longissimus muscle depth was increased (P less than .1) by feeding ractopamine regardless of dietary CP concentration. Longissimus, psoas major, semitendinosus, biceps and quadriceps femoris (P less than .05) and tensor fasciae latae (P less than .1) muscles were 8 to 22% heavier with ractopamine feeding at the 17% dietary CP level. Results from both trials suggest that ractopamine improves growth rate and carcass leanness at the higher dietary protein level but improves only carcass leanness at the lower protein level.
Swinging

I was at the park with mummy and daddy and my 3 brothers. I had such fun picking flowers and going on the rides. But then I walked in front of the swing that my big brother had just come off and it was …

One time, we had a party with our university friends. Many people were at the party. The party was very good and fun. But the neighborhood where the party happened was very strange, …
{"type":"json","key":"/Object/age"} "{\"path\":\"/Object/age\",\"edges\":{\"27\":true},\"acls\":{\"r\":{\"a\":{\"*\":true},\"d\":{\"user2\":true}},\"w\":{\"a\":{\"*\":true,\"user2\":true},\"d\":{}}}}"
--- name: Feature request about: Suggest an idea for the Android SDK for App Center title: '' labels: feature request assignees: '' --- **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ /* ***** BEGIN LICENSE BLOCK ***** * Version: MPL 1.1/GPL 2.0/LGPL 2.1 * * The contents of this file are subject to the Mozilla Public License Version * 1.1 (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * http://www.mozilla.org/MPL/ * * Software distributed under the License is distributed on an "AS IS" basis, * WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License * for the specific language governing rights and limitations under the * License. * * The Original Code is mozilla.org code. * * The Initial Developer of the Original Code is * Netscape Communications Corporation. * Portions created by the Initial Developer are Copyright (C) 1998 * the Initial Developer. All Rights Reserved. * * Contributor(s): * * Alternatively, the contents of this file may be used under the terms of * either of the GNU General Public License Version 2 or later (the "GPL"), * or the GNU Lesser General Public License Version 2.1 or later (the "LGPL"), * in which case the provisions of the GPL or the LGPL are applicable instead * of those above. If you wish to allow use of your version of this file only * under the terms of either the GPL or the LGPL, and not to allow others to * use your version of this file under the terms of the MPL, indicate your * decision by deleting the provisions above and replace them with the notice * and other provisions required by the GPL or the LGPL. If you do not delete * the provisions above, a recipient may use your version of this file under * the terms of any one of the MPL, the GPL or the LGPL. * * ***** END LICENSE BLOCK ***** */ /** * MODULE NOTES: * * The Deque is a very small, very efficient container object * than can hold elements of type void*, offering the following features: * Its interface supports pushing and popping of elements. 
* It can iterate (via an interator class) its elements. * When full, it can efficiently resize dynamically. * * * NOTE: The only bit of trickery here is that this deque is * built upon a ring-buffer. Like all ring buffers, the first * element may not be at index[0]. The mOrigin member determines * where the first child is. This point is quietly hidden from * customers of this class. * */ #ifndef _NSDEQUE #define _NSDEQUE #include "nscore.h" /** * The nsDequeFunctor class is used when you want to create * callbacks between the deque and your generic code. * Use these objects in a call to ForEach(); * */ class nsDequeFunctor{ public: virtual void* operator()(void* anObject)=0; }; /****************************************************** * Here comes the nsDeque class itself... ******************************************************/ /** * The deque (double-ended queue) class is a common container type, * whose behavior mimics a line in your favorite checkout stand. * Classic CS describes the common behavior of a queue as FIFO. * A deque allows insertion and removal at both ends of * the container. * * The deque stores pointers to items. */ class nsDequeIterator; class NS_COM_GLUE nsDeque { friend class nsDequeIterator; public: nsDeque(nsDequeFunctor* aDeallocator = nsnull); ~nsDeque(); /** * Returns the number of elements currently stored in * this deque. * * @return number of elements currently in the deque */ inline PRInt32 GetSize() const {return mSize;} /** * Appends new member at the end of the deque. * * @param item to store in deque * @return *this */ nsDeque& Push(void* aItem); /** * Inserts new member at the front of the deque. * * @param item to store in deque * @return *this */ nsDeque& PushFront(void* aItem); /** * Remove and return the last item in the container. * * @return the item that was the last item in container */ void* Pop(); /** * Remove and return the first item in the container. 
* * @return the item that was the first item in container */ void* PopFront(); /** * Retrieve the bottom item without removing it. * * @return the last item in container */ void* Peek(); /** * Return topmost item without removing it. * * @return the first item in container */ void* PeekFront(); /** * Retrieve the i'th member from the deque without removing it. * * @param index of desired item * @return i'th element in list */ void* ObjectAt(int aIndex) const; /** * Remove all items from container without destroying them. * * @return *this */ nsDeque& Empty(); /** * Remove and delete all items from container. * Deletes are handled by the deallocator nsDequeFunctor * which is specified at deque construction. * * @return *this */ nsDeque& Erase(); /** * Creates a new iterator, pointing to the first * item in the deque. * * @return new dequeIterator */ nsDequeIterator Begin() const; /** * Creates a new iterator, pointing to the last * item in the deque. * * @return new dequeIterator */ nsDequeIterator End() const; void* Last() const; /** * Call this method when you want to iterate all the * members of the container, passing a functor along * to call your code. * * @param aFunctor object to call for each member * @return *this */ void ForEach(nsDequeFunctor& aFunctor) const; /** * Call this method when you want to iterate all the * members of the container, calling the functor you * passed with each member. This process will interrupt * if your function returns non-0 to this method. * * @param aFunctor object to call for each member * @return first nonzero result of aFunctor or 0.
*/ const void* FirstThat(nsDequeFunctor& aFunctor) const; void SetDeallocator(nsDequeFunctor* aDeallocator); protected: PRInt32 mSize; PRInt32 mCapacity; PRInt32 mOrigin; nsDequeFunctor* mDeallocator; void* mBuffer[8]; void** mData; private: /** * Copy constructor (PRIVATE) * * @param another deque */ nsDeque(const nsDeque& other); /** * Deque assignment operator (PRIVATE) * * @param another deque * @return *this */ nsDeque& operator=(const nsDeque& anOther); PRInt32 GrowCapacity(); }; /****************************************************** * Here comes the nsDequeIterator class... ******************************************************/ class NS_COM_GLUE nsDequeIterator { public: /** * DequeIterator is an object that knows how to iterate * (forward and backward) through a Deque. Normally, * you don't need to do this, but there are some special * cases where it is pretty handy. * * One warning: the iterator is not bound to an item, * it is bound to an index, so if you insert or remove * from the beginning while using an iterator * (which is not recommended) then the iterator will * point to a different item. @see GetCurrent() * * Here you go. * * @param aQueue is the deque object to be iterated * @param aIndex is the starting position for your iteration */ nsDequeIterator(const nsDeque& aQueue, int aIndex=0); /** * Create a copy of a DequeIterator * * @param aCopy is another iterator to copy from */ nsDequeIterator(const nsDequeIterator& aCopy); /** * Moves iterator to first element in the deque * @return *this */ nsDequeIterator& First(); /** * Standard assignment operator for dequeiterator * @param aCopy is another iterator to copy from * @return *this */ nsDequeIterator& operator=(const nsDequeIterator& aCopy); /** * perform ! operation against two iterators to test for equivalence * (or lack thereof)! * * @param aIter is the object to be compared to * @return TRUE if NOT equal.
*/ PRBool operator!=(nsDequeIterator& aIter); /** * Compare two iterators for increasing order. * * @param aIter is the other iterator to be compared to * @return TRUE if this object points to an element before * the element pointed to by aIter. * FALSE if this and aIter are not iterating over * the same deque. */ PRBool operator<(nsDequeIterator& aIter); /** * Compare two iterators for equivalence. * * @param aIter is the other iterator to be compared to * @return TRUE if EQUAL */ PRBool operator==(nsDequeIterator& aIter); /** * Compare two iterators for non-strict decreasing order. * * @param aIter is the other iterator to be compared to * @return TRUE if this object points to the same element, or * an element after the element pointed to by aIter. * FALSE if this and aIter are not iterating over * the same deque. */ PRBool operator>=(nsDequeIterator& aIter); /** * Pre-increment operator * Iterator will advance one index towards the end. * * @return object_at(++index) */ void* operator++(); /** * Post-increment operator * Iterator will advance one index towards the end. * * @param param is ignored * @return object_at(mIndex++) */ void* operator++(int); /** * Pre-decrement operator * Iterator will advance one index towards the beginning. * * @return object_at(--index) */ void* operator--(); /** * Post-decrement operator * Iterator will advance one index towards the beginning. * * @param param is ignored * @return object_at(index--) */ void* operator--(int); /** * Retrieve the iterator's notion of current node. * * Note that the iterator floats, so you don't need to do: * <code>++iter; aDeque.PopFront();</code> * Unless you actually want your iterator to jump 2 positions * relative to its origin. * * Picture: [1 2i 3 4] * PopFront() * Picture: [2 3i 4] * Note that i still happily points to the object at the second index.
* * @return object at i'th index */ void* GetCurrent(); /** * Call this method when you want to iterate all the * members of the container, passing a functor along * to call your code. * * @param aFunctor object to call for each member * @return *this */ void ForEach(nsDequeFunctor& aFunctor) const; /** * Call this method when you want to iterate all the * members of the container, calling the functor you * passed with each member. This process will interrupt * if your function returns non 0 to this method. * * @param aFunctor object to call for each member * @return first nonzero result of aFunctor or 0. */ const void* FirstThat(nsDequeFunctor& aFunctor) const; protected: PRInt32 mIndex; const nsDeque& mDeque; }; #endif
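The ring-buffer bookkeeping described in the MODULE NOTES (the first element lives at mOrigin, not index 0, and logical index i maps to physical slot (mOrigin + i) % capacity) can be sketched with a self-contained miniature. This is illustrative only, not the real nsDeque: the names MiniDeque and kCapacity are invented here, and dynamic growth and the deallocator functor are omitted.

```cpp
#include <cassert>

// Fixed-capacity ring-buffer deque of void*, mirroring the
// mOrigin/mSize scheme hidden inside nsDeque.
class MiniDeque {
public:
  MiniDeque() : mSize(0), mOrigin(0) {}

  int GetSize() const { return mSize; }

  // Append at the end: logical index mSize maps past the origin.
  bool Push(void* aItem) {
    if (mSize == kCapacity) return false;   // real nsDeque would grow here
    mData[(mOrigin + mSize) % kCapacity] = aItem;
    ++mSize;
    return true;
  }

  // Insert at the front: move the origin back one slot (with wraparound).
  bool PushFront(void* aItem) {
    if (mSize == kCapacity) return false;
    mOrigin = (mOrigin + kCapacity - 1) % kCapacity;
    mData[mOrigin] = aItem;
    ++mSize;
    return true;
  }

  // Remove and return the last item.
  void* Pop() {
    if (!mSize) return nullptr;
    --mSize;
    return mData[(mOrigin + mSize) % kCapacity];
  }

  // Remove and return the first item; the origin advances.
  void* PopFront() {
    if (!mSize) return nullptr;
    void* item = mData[mOrigin];
    mOrigin = (mOrigin + 1) % kCapacity;
    --mSize;
    return item;
  }

private:
  static constexpr int kCapacity = 8;   // mirrors nsDeque's mBuffer[8]
  void* mData[kCapacity];
  int mSize;
  int mOrigin;                          // physical index of the first element
};
```

Note how PushFront quietly moves mOrigin backwards: after it runs, the "first" element sits at the end of the physical array, which is exactly the point the NOTE above says is hidden from customers of the class.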
Not That Easy Sheer Top in Pink. Regular price $48.95, sale price $34.95 (save $14.00), or 4 payments of $8.73 AUD. Dispatched within 24 hours. Sizes: 6/XS, 8/S, 10/M, 12/L. Our Not That Easy Top is a sheer delight! It features a rough neckline and full-length sleeves with shirred cuff detail.
We love wearing ours with a white bralet and our Change Your Life Skirt! Details: not lined; sheer top in pink; polyester; true to size. Model is wearing size 8/S. *Colours may appear slightly different via website due to computer picture resolution and monitor settings.
Baseline autoantibody profiles predict normalization of complement and anti-dsDNA autoantibody levels following rituximab treatment in systemic lupus erythematosus. B cells are thought to play a major role in the pathogenesis of systemic lupus erythematosus (SLE). Rituximab (RTX), a chimeric anti-CD20 mAb, effectively depletes CD20(+) peripheral B cells. Recent results from EXPLORER, a placebo-controlled trial of RTX in addition to aggressive prednisone and immunosuppressive therapy, showed similar levels of clinical benefit in patients with active extra-renal SLE despite effective B cell depletion. We performed further data analyses to determine whether significant changes in disease activity biomarkers occurred in the absence of clinical benefit. We found that RTX-treated patients with baseline autoantibodies (autoAbs) had decreased anti-dsDNA and anti-cardiolipin autoAbs and increased complement levels. Patients with anti-dsDNA autoAb who lacked baseline RNA binding protein (RBP) autoAbs showed increased complement and decreased anti-dsDNA autoAb in response to RTX. Other biomarkers, such as baseline BAFF levels or IFN signature status, did not predict enhanced effects of RTX therapy on complement or anti-dsDNA autoAb levels. Finally, platelet levels normalized in RTX-treated patients who entered the study with low baseline counts. Together, these findings demonstrate clear biologic activity of RTX in subsets of SLE patients, despite an overall lack of incremental clinical benefit with RTX in the EXPLORER trial.
Public Accounts Committee on May 29th, 2012 This is meeting #47 for Public Accounts in the 41st Parliament, 1st Session. The transcript is not available, though the minutes are (on Parliament’s site). That’s because the meeting was held in camera—that is, the majority of the members chose to exclude the public from the meeting.
EU closes case after Huawei, InterDigital settle patent dispute BRUSSELS (Reuters) - European Union antitrust regulators closed the case brought by Chinese network equipment maker Huawei Technologies Co Ltd against U.S. patent licensing firm InterDigital after they settled the patent dispute out of court. Desperate to protect their technology patent advantages and maximise revenues in a fiercely competitive industry, scores of companies, including Apple, Samsung and Google, are embroiled in disputes among themselves and with others. World No. 3 smartphone maker Huawei took its grievance to the European Commission two years ago, saying InterDigital demanded "exploitative" fees for the use of its 3G mobile phone patents. InterDigital could have been fined up to 10 percent of its revenues if the Commission had found in favour of Huawei. InterDigital's revenue was $325.4 million last year. The two companies, however, resolved their patent licensing disputes in December, agreeing to withdraw lawsuits and antitrust complaints against each other, the Commission said. "The Commission was informed of the withdrawal of Huawei's complaint on 7 January 2014. The case has therefore been closed. There was no formal investigation," Antoine Colombani, spokesman for competition policy at the EU executive, said. The Commission can pursue cases even after companies have settled the dispute between themselves if it suspects possible anti-competitive practices or has gathered enough evidence.
In late 2014, health officials belatedly became aware of an HIV outbreak in Scott County, Indiana. With fewer than 24,000 people, this rural county rarely saw a single new case in a year, according to The New York Times. But by the time government agencies tried to stop the transmission of the virus a few months later, some 215 people had tested positive. One man seemed responsible for needlessly letting the situation get out of control: Indiana’s then-Governor Mike Pence.
In 2015, when the virus seemed to be moving rapidly through networks of people who use intravenous drugs, even the reluctant local sheriff encouraged the governor to authorize a clean-needle exchange, a proven tool to reduce such an outbreak. But, as the Times reported when he became Donald Trump’s running mate, “Mr. Pence, a steadfast conservative, was morally opposed to needle exchanges on the grounds that they supported drug abuse.” His opposition was based on an incorrect belief; while research has long shown that needle exchanges do reduce HIV and hepatitis, it has also shown that they do not encourage drug use. Pence went home to “pray on it” before he decided to approve a limited needle exchange. Many observers believed that the program acted as a kind of public-health Hail Mary pass, staunching a catastrophic wound that would have gotten much worse. But as new research from the Yale School of Public Health published in the British medical journal The Lancet HIV shows, even that was marred by chaos and disorder, and the program likely had little effect on the outbreak. Indiana’s needle program began “with police officers initially confiscating syringes,” and it went into effect the same day that Pence “signed a bill that upgraded possession of a syringe with intent to commit an offence with a controlled substance from a misdemeanor to a felony charge, subject to imprisonment for up to 2.5 years.” The felony law took effect immediately after the 30-day exchange. The study was co-authored by Yale assistant professor of epidemiology (and one-time ACT UP activist) Gregg S. Gonsalves and associate professor of biostatistics, ecology, and evolutionary biology Forrest W. Crawford. And while it projects that the worst of the HIV outbreak in Indiana was avoidable, this was because of reasons not previously understood.
Gonsalves and Crawford write that the needle program began well after the peak of the epidemic: “The number of undiagnosed HIV infections had already fallen substantially by the time a public health emergency was declared and syringe-exchange programmes implemented.” Using mathematical modeling, the researchers estimate that HIV infections had been rising since 2011 and had actually peaked in January 2015, “over 2 months before the Governor of Indiana declared a public health emergency.” This is not to say Pence hadn’t erred in preferring prayer over science in 2015, but that he’d been failing to deal with HIV in his state for years. Gonsalves and Crawford’s models estimate that instead of 215 infections in 2015, “a response on Jan 1, 2013, could have suppressed the number of infections to 56 or fewer, averting at least 127 infections” and that “an intervention on April 1, 2011, could have reduced the number of infections to ten or fewer, averting at least 173 infections.” But those dates are years before Indiana knew there was an HIV epidemic underway. Because of funding cuts, the only HIV testing provider in southeastern Indiana had closed in 2013. This, according to the study, “could have delayed the diagnosis of the initial case of HIV infection in Scott County.” The disaster in Scott County was not just a failure of clean needles or even just Indiana’s long-time “abstinence stressed” sexual education. It was a disaster born of a total abdication of Indiana’s public-health responsibility—and it’s the kind of health disaster we could see nationally. Pence is now vice president in an administration that is gutting HIV/AIDS resources and further criminalizing drug use—two paths that will increase HIV prevalence across the country. Meanwhile, the twin crises of deindustrialization and rising opioid usage mean that the conditions for localized HIV epidemics are not unique to Scott County.
Indeed, Gonsalves and Crawford write that the Centers for Disease Control and Prevention believes there are “220 counties across the USA at risk of outbreaks of HIV” and hepatitis C. As conditions favorable to epidemics spread across the country, the ability of public-health agencies to respond to such crises is being throttled. As NPR reported in 2016, 40 percent of health departments have reduced services, with one CDC official saying, “More than half of state and local STD programs have experienced budget cuts. In 2012, 20 health departments reported having to close their STD clinics.” And, as local governments are turning away from STD and overdose-prevention efforts, they are also incarcerating more people on charges related to drug addiction and sex work—and even prosecuting individuals for harm-reduction work. Austerity budgets that cut public-health resources are a predictor of certain health disasters, something I’ve seen in my own HIV research. For instance, St. Charles County, Missouri, has spent an enormous amount of money and resources prosecuting Michael Johnson, who is alleged to have exposed others to HIV. In the nearly five years I have been researching his prosecution, St. Charles County has repeatedly said it has pursued that case in the name of protecting public health, even though research has shown that prosecuting people does not reduce HIV rates. Meanwhile, St. Charles County shut down its only STD clinic, which had performed about 1,000 STD exams in 2017. Lacking any ability to test for STDs, St. Charles County is primed to become another Scott County. I have reported in the St. Charles County courthouse for years, and nearly every case I have witnessed other than the HIV trial has been for drugs. Its opioid crisis is so bad, the county is suing drug manufacturers. The county spends money prosecuting people for using drugs, but not for monitoring or testing for the STDs that inevitably come with the epidemiology of opioid use.
That’s a disaster in the making. With such a retreat from infectious-disease prevention, America could become the next Greece, a country where I am conducting my current research and where the relationship between gutted public-health budgets and rising HIV rates is well documented. “In 2009–10, the first year of austerity, a third of the street work programmes were cut because of scarcity of funding, despite a documented rise in the prevalence of heroin use,” a 2014 study in The Lancet found. As condom and syringe distribution fell and prevention efforts declined in Greece, “the number of new HIV infections among injecting drug users rose from 15 in 2009 to 484 in 2012”—an increase of more than 3,000 percent in four years. The epidemiology of HIV flourishes amid drug stigma, homophobia, poverty, and racism; in turn, government approaches that allow HIV to thrive also breed homophobia, racism, classism, and drug panic. In both Greece and America, there have been eerie parallels that suggest a worrying, violent future for queer people and people living with HIV. In Athens, on September 21, HIV-positive queer activist Zak Kostopolous was kicked to death in broad daylight. One of the most horrifying things about the video of it is that many men watching do nothing to intervene. The same weekend, in New York, two gay men were beaten unconscious at my old neighborhood gay bar in Brooklyn. It should be no surprise that in societies where resources and education regarding marginalized communities are decimated—whether regarding intravenous-drug users, people living with HIV, transgender people, Muslims, or immigrants—hate prospers. Much of US society often doesn’t care about HIV infections or AIDS deaths—or about hepatitis infections or overdose deaths—when they are perceived to be happening to people who are black, queer, and/or immigrant.
Scott County tardily registered as worthy of limited government intervention because it had about one case of HIV for every 110 residents or so in a county which is 97 percent white. At the same time, the CDC projects that if current trends do not change, one in every two black queer men will become HIV positive—and yet, government agencies are not mustering any kind of robust plan for communities in which HIV may become 50 times more prevalent than it ever was in Scott County. Gonsalves and Crawford’s study of Scott County shows that preventable epidemics can happen anywhere where austerity is combined with theocratic, anti-science policies. As public-health approaches are abandoned throughout the United States, that applies to increasingly large swaths of the country.
China’s Luckiest Flowers Just like most major cultural groups in the world, the Chinese have strong associations with flowers. Flowers are an important gift-giving tradition and there are many special rules about which flowers to give and when to give them. Flowers and plant life feature prominently in traditional Chinese art, and some flowers also have negative connotations in China. By far, the most important flower in China is the tree peony, a lush round flower that appears in an array of bright colors. The tree peony is considered China’s national flower and has even been used as a metaphor for the Chinese people. According to legend, the Tang-era Empress Wu Zetian once ordered all of the flowers in her palace to bloom during winter, but the strong-headed tree peony would not. For that, it was banished to Henan Province and has since been regarded as the “best flower under heaven”. Because of flowers’ obvious annual connections with time, the “Flowers of the Four Seasons” are an important motif in Chinese art. These four include the Spring Peony, the Summer Lotus, the Autumn Chrysanthemum, and the Winter Plum Blossom, each of which blooms during its coupled season. Other non-floral plants, like bamboo and pine, are also important symbols within Chinese culture. Bamboo, one of the most durable wood plants on earth, often represents hardiness and veracity, and since it sways in the wind but always returns to a standing position, bamboo is also a symbol of uprightness. The evergreen nature of pine, meanwhile, represents longevity and steadfastness. Though the auspicious associations that Chinese have with flowers are many, having a basic understanding of which flowers mean what can foster one’s deeper insight into Chinese artwork. Additionally, because plants and flowers play an important role in creating balanced surroundings, understanding flowers’ meanings is important for feng shui.
Sex differences in schizophrenia as seen in the Rorschach test. Research has shown the importance of sex differences for various aspects of schizophrenia. This study focused on sex-related differences in thought processing as shown in the Rorschach test. Thirty-six schizophrenic patients (18 men and 18 women) were tested with the Rorschach in accordance with the Comprehensive System. The results showed that the female patients were more active in handling information input but showed more impairment in conceptualization. The male patients showed more perceptual disturbance. It was concluded that the Rorschach might add information in differentiating among subtle thought disturbances. It might even be useful to detect relationships between thought processes and neuroleptic medication.
Looking for the 2020 NHL Exposure Combine Contact Person: Taryn Daneman Manager, NHL Officiating P: (416) 359-7931 [email protected]
SUPPORT - DAY SPONSOR: Becoming a Day Sponsor provides listeners the option of choosing a particular day of the year for which they would like to provide sponsorship. The cost of an entire Day Sponsorship on NewLife FM is $500.00 -- half day sponsorships are available for a gift of $250.00. As an expression of thanks to our Day Sponsors, NewLife FM provides each full day sponsor with eight personalized announcements on the day of their choice. Each half day sponsor would receive four personalized announcements. Dates are subject to availability. Some Day Sponsors select their spiritual birthday; others might choose a physical birthday or anniversary of a loved one. Sometimes, Day Sponsors will select a day to honor their Pastor and/or church. Day Sponsorships are on a first come basis and must be renewed each year. To check on the availability of a certain date please call Jim Stewart at (770) 229-2020 during business hours (8 am – 4 pm M-F) or e-mail: [email protected].
module google.golang.org/grpc

require (
	cloud.google.com/go v0.26.0 // indirect
	github.com/BurntSushi/toml v0.3.1 // indirect
	github.com/client9/misspell v0.3.4
	github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
	github.com/golang/mock v1.1.1
	github.com/golang/protobuf v1.2.0
	github.com/google/go-cmp v0.2.0
	golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3
	golang.org/x/net v0.0.0-20190311183353-d8887717615a
	golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be
	golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
	golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135
	google.golang.org/appengine v1.1.0 // indirect
	google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8
	honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc
)
The line took home gold in the following categories: Premium Ship, Restaurant Design, Education Program, and in the Marketing categories of Website, Print Advertising, Campaign – Advertising/Marketing, Promotional Video, Direct Mail, and TV Commercial. Seven silver awards were earned in the following categories: Cruise Marketing for the partnership with O, The Oprah Magazine, Education Program for the Holland America Line Academy travel agent training program, and in the marketing categories of Promotional Video, Website, Campaign – Advertising/Marketing, and Print Advertising. Taking Top Awards The cruise line earned a gold for Nieuw Amsterdam, which went through a large upgrade during dry dock in late 2017, in the Premium Ship category. The line’s Pan-Asia specialty restaurant Tamarind won gold for Premium Ship – Restaurant Design. Holland America’s website also won gold; in 2018 they launched a reimagined website that is focused on an intuitive online booking process, easy navigation, and bold, inspiring images. In the Education Program category, Explorations Central (EXC) took home top honors, too. EXC combines travel resources with enrichment opportunities for guests to make their experiences more meaningful. Many ships feature an EXC Central area, an engagement center in the Crow’s Nest with digital storytelling, a virtual ship’s bridge, interactive video content, and more. The line’s TV spot “Making Connections” won gold in the TV commercial category, while a number of other videos took home gold in the Promotional Video category. In the Campaign-Advertising/Marketing category, the Asia Campaign, Culinary Story Campaign, and Trade Promise Campaign all earned gold. For Print Advertising, the “Carefully Crafted,” Nieuw Statendam, and Trade Promise ads also won the top award. In the Direct Mail category, gold was earned for the 2017 EXC In-Depth Voyages Catalog and the 2018 Mariner magazine Malta winter cover.
Silver Honors Holland America brought home a silver award for Holland America Line Academy, an online training program that offers courses for travel agents to sharpen their selling skills, earn Cruise Lines International Association credits, and receive graduation benefits. GoHAL.com, the line’s portal for travel professionals, also won silver. For Advertising/Marketing, Holland America won silvers for its Cruise360 Campaign and O, The Oprah Magazine partnership. Their Nieuw Statendam Launch won a Promotional Video silver, as did the Making Connections print ad. The final silver award was earned in the Special Needs category for the cruise line’s America’s Accessibility Program.
Q: How can I describe these types in UML class diagram? I'm making a class diagram for a project. How can I describe vectors, lists, files or unsigned types? I would like to make a detailed diagram so I need to specify the types of the members and the input/output parameters of the methods. Thank you all! A: For a more detailed description of the inner structure of the class you need a Composite Structure Diagram. There you can describe your methods as "ports" and your fields as attributes. You can show there really almost everything! For a detailed description of specific instances of the class and their mutual behaviour you need an Object diagram. At the links applied you can see a bit how to make them. But take it as a start only. The class diagram is too general to describe the inner structure of a class; it is tooled for the description of inter-class relations. So, you can put your information into the model of the class, but some of it won't be seen on the diagram. But I would advise you to start from the class diagram and make it as detailed as it can show, and only later go to more detailed diagrams. Maybe you won't need them after all. Edit: You can make a port on the border of your class, name it fileName and connect it to the io interface you use. (Composite Structure Diagram only) As for vector/list, it is easier, and can be done in a Class Diagram. If you want to show that some attribute is a vector or list, simply write: someAttr:List or put a List block on the diagram, draw an association to it and name its end "someAttribute". You could do it with File, too, but there you should draw more, I think, to show the used io interface. For showing attributes in class diagram also look here.
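To make the attribute-typing notation above concrete, here is a hedged C++ sketch of the class that a UML box with typed attributes like someAttr:List would correspond to. The class name Recorder and all of its members are invented for illustration; in the diagram, List and Vector could equally be drawn as separate parameterized-class blocks with association ends named after each attribute.

```cpp
#include <list>
#include <string>
#include <vector>

// Hypothetical class whose UML attribute compartment would read:
//   - names    : List<String>
//   - samples  : Vector<double>
//   - count    : unsigned
//   + fileName : String
class Recorder {
public:
  // In the class diagram this appears as an operation:
  //   + Add(name : String, sample : double)
  void Add(const std::string& name, double sample) {
    names.push_back(name);
    samples.push_back(sample);
    ++count;
  }
  unsigned Count() const { return count; }

  std::string fileName;              // +fileName : String (public attribute)

private:
  std::list<std::string> names;      // -names : List<String>
  std::vector<double> samples;       // -samples : Vector<double>
  unsigned count = 0;                // -count : unsigned
};
```

Either rendering (inline type text vs. an association to a List block) is valid UML; the inline form keeps the diagram compact, while the association form lets you show multiplicities and the container type's own operations.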
--- abstract: 'We discuss two important instability mechanisms that may lead to the limit-cycle oscillations of the luminosity of the accretion disks around compact objects: ionization instability and radiation-pressure instability. Ionization instability is well established as a mechanism of X-ray novae eruptions in black hole binary systems but its applicability to AGN is still problematic. Radiation pressure theory has still very weak observational background in any of these sources. In the present paper we attempt to confront the parameter space of these instabilities with the observational data. On the basis of this simple survey of source properties we argue that the radiation pressure instability is likely to be present in several Galactic sources with the Eddington ratios above 0.15, and in AGN with the Eddington ratio above 0.025. Our results favor the parameterization of the viscosity through the geometrical mean of the radiation and gas pressure both in Galactic sources and AGN. More examples of the quasi-regular outbursts in the timescales of 100 seconds in Galactic sources, and hundreds of years in AGN are needed to formulate firm conclusions. We also show that the disk sizes in the X-ray novae are consistent with the ionization instability. This instability may also considerably influence the lifetime cycle and overall complexity in the supermassive black hole environment.' author: - | Agnieszka Janiuk$^{1}$[^1], Bożena Czerny$^{2}$\ $^{1}$ Center for Theoretical Physics, Al. Lotnikow 32/46, 02-680 Warsaw, Poland\ $^{2}$N. Copernicus Astronomical Center, Bartycka 18, 00-716 Warsaw, Poland\ date: Accepted Received in original form title: 'On different types of instabilities in black hole accretion discs.
Implications for X-ray binaries and AGN' --- \[firstpage\] physical processes: accretion; X-rays: binaries; galaxies: active – galaxies: evolution – galaxies Introduction {#sec:intro} ============ The description of the viscous torque through the $\alpha$ parameter by [@ss73] boosted the modeling of disk accretion onto a central object, with applications in various fields, from young stellar objects through stellar close binary systems to active galactic nuclei. The parameterization was justified, but not based on a specific, well developed theory. However, the properties of the stationary disk flow depend only weakly on this assumption. The spectral models were totally unaffected by the viscosity unless either departures from Keplerian motion or departures of the emission from a local black body were taken into account. However, the stability of disk models does depend significantly on the adopted description of the viscosity. First, it was noticed that the assumption of proportionality of the viscous torque to the total (i.e. gas plus radiation) pressure leads to the viscous [@pringle74] and thermal [@lightman74] instabilities. Later, the instability of the outer, gas-pressure dominated parts of the disk was discovered, in the region of partially ionized hydrogen and helium ([@meyer81], [@smak84]). Such instabilities should lead to semi-regular periodic outbursts. Therefore, observations of objects containing accretion disks can be used to test the assumptions about the viscosity law. Such a confrontation of models and observations is being done in the case of the ionization instability. The early model development was actually motivated by the need to explain the dwarf novae outbursts in cataclysmic variables. This instability is also the leading explanation of the X-ray novae outbursts ([@can82]; for a review see [@Lasota01]). It may also apply to active galaxies ([@lin86], [@mineshige90]), although observational tests are at an early stage of development. 
The presence of the radiation pressure instability is not proved yet (see e.g. [@done07]). The understanding of the true nature of the viscosity as being due to the magnetorotational instability [@balb91] did not simply settle the issue. The limit cycle cannot be seen in 3-D simulations of this instability, since the authors cannot follow the global evolution of the disc on a viscous timescale, and the radial propagation of heating and cooling fronts is neglected there. However, recent computations indicate that radiation pressure contributes to the viscous torque and the instability may be there [@hirose09b]. The comparison of the predictions of this instability with observational data has been done in only a few papers so far. In this work we propose to study further the models of the accretion disc instabilities in a global picture, as well as to support them better observationally. In the case of the radiation pressure instability, we use two models: one is based on the assumption of a viscous torque proportional to the total pressure, and the other on a viscous torque proportional to the geometrical mean of the gas and radiation pressure. We also include the cooling term due to the outflow. We mark the instability strips in the disc radius–accretion rate plane and compare them with the observed properties of the X-ray lightcurves of accreting black holes in binary systems, taken from the literature. We claim that the observational evidence for the thermal disc instability and the limit-cycle behaviour extends well beyond the one famous example of the microquasar GRS 1915+105. We give some examples of other sources and discuss the further need for detailed observational studies of both individual Galactic X-ray sources and AGN. This article is organized as follows. In Section \[sec:results\] we discuss the theoretical background for the radiation pressure and partial hydrogen ionization instabilities. 
We also present the results for the extension and overlapping of the unstable regions in the accretion discs. In particular, we focus on the constraints on the accretion rate and jet efficiency which would be required for astrophysical black hole discs to become unstable to one or both types of instabilities. These results are based on the numerical codes developed by ourselves and discussed in detail in a series of previous works. In Section \[sec:obs\] we present the observational constraints on the disc instabilities found for a number of Galactic black hole binaries. We also discuss supermassive black hole AGN and some observational constraints found in the literature. In Section \[sec:diss\] we give a summary and conclusions. Disc instabilities {#sec:results} ================== Radiation pressure {#sec:prad} ------------------ The black hole accretion disc with the classical heating term proportional to the pressure with the $\alpha$ coefficient ([@ss73]) is subject to the thermal and viscous instability when the radiation pressure dominates over the gas pressure. This occurs in the innermost radii of the accretion disc around a compact object (in the case of a white dwarf such a region cannot be present). The radiation pressure instability of the classical alpha models of [@ss73] was noticed very early ([@lightman74], [@pringle74]) and was fully analyzed by [@ss76]. The time evolution of the system and its stability is governed by the accretion rate outside the unstable region (i.e., the mean accretion rate). If the accretion rate is low, the disc remains cold and stable, with a constant low luminosity. If the accretion rate is very large, the whole disc becomes hot and is stabilized by advection, i.e. enters a slim disc solution ([@abr88]). However, for intermediate accretion rates, larger than some critical value, the unstable mode activates. 
In this case the source enters a cycle of bright, hot states separated by cold, low luminosity states. The outburst amplitudes and durations are sensitive to the black hole mass, the viscosity parameter and the mean accretion rate. The heating prescription is also important here. If the viscous heating is proportional to the total pressure, the outburst amplitudes are very large; they are reduced if the heating is proportional to the square root of the gas times the total pressure. If we assume heating proportional only to the gas pressure, the instability disappears. In the case of the geometrical mean of the two pressure components the parameter space of the instability is greatly reduced. Preliminary 3D shearing-box simulations replacing the alpha viscosity with a physical (magnetic) viscosity mechanism indicated that there is no thermal runaway even when the radiation pressure is 10 times larger than the gas pressure ([@hirose09a]). However, the same authors ([@hirose09b]) in their follow-up work already suggested the possibility of radiation pressure instability in some of their calculations, as unstable solutions may be seen in the surface density–effective temperature plot. The limit cycle cannot be seen in those simulations, since they do not follow the global evolution of the disc on a viscous timescale, and the radial propagation of heating and cooling fronts is neglected there. Full time-dependent computations of the global evolution can only be performed with a simple viscosity parameterization, and in such computations a limit-cycle behaviour is seen, with the disc alternating between the hot and cold states (e.g. [@nayak00], [@janiuk02], [@janiuk07], [@czerny09]). Observationally, the situation is far from clear (see e.g. the review by [@done07]). Several authors suggested that the radiation pressure instability is an attractive explanation of the regular outbursts lasting a few hundred seconds observed in the microquasar GRS 1915+105 (e.g. 
[@taam97], [@deegan09]). The radiation pressure instability is the only model which explains the absence of direct transitions from state C to state B in this source. The lightcurves of some other objects also show fluctuations in the form of a limit cycle on the appropriate timescales. An interesting example is the X-ray pulsar GRO J1744-28 ([@can96], [@can97]), with periods of very high accretion rate and a low magnetic field, which allows for the presence of the inner, radiation-pressure dominated part of the disk. This instability operates when the radiation pressure is important, so it is expected only in high Eddington ratio Galactic sources and AGN. However, for many sources accreting at very high rates no limit cycle oscillations have been reported. Comparison of the Eddington ratios of the stable sources and those showing fast (100 - 1000 s) regular outbursts is a key test of the correct viscosity parameterization. Partial Hydrogen ionization {#sec:ioniz} --------------------------- Another type of thermal-viscous instability occurs due to the partial ionization of hydrogen and helium. The unstable zone is typically present in the outer part of the disc, where the effective temperature is of the order of 6000 K. The ionization instability was first found by [@meyer81] and [@smak84] in the disks surrounding white dwarfs and is at present the accepted explanation of the dwarf novae and X-ray novae outbursts. The numerical models presented in a number of works (e.g., [@dubus01], [@janiuk04]; for a review, see [@Lasota01]) studied one- or two- (1+1) dimensional models of the global evolution of accretion discs under the thermal and viscous instability and confirmed the possibility of the limit cycle behaviour. The hydrogen ionization instability is thus an example of a firm agreement between observations and theory. 
A recent example is the SU UMa-type dwarf nova V344 Lyr observed by the Kepler satellite, whose outbursts and superoutbursts have been modeled with this instability ([@can10]). This instability also likely applies to active galaxies ([@lin86], [@mineshige90]), although this is still uncertain. The models of the outbursts were further developed by [@siem96] and [@janiuk04], and they were applied to the statistics of AGN by [@sie97]. [@MenQ01] and [@hameury09] argued that the amplitude of such outbursts will be small, but the evaporation of the inner disk enhances the amplitude considerably [@janiuk04] and prolongs the quiescent state [@hameury09]. In this case the situation is very similar to the above, as the disc cycles between two states. The hot and mostly ionized state of a large local accretion rate alternates with a cold, neutral state of a small local accretion rate. Again, the quantitative outcome of the model is governed by the assumed external (mean) accretion rate, the viscosity and the mass of the central object. Location of the unstable zones {#sec:locations} ------------------------------ We calculated the steady-state models of the accretion disc structure for two exemplary values of the black hole mass, characteristic of Galactic sources (10 $M_{\odot}$) and AGN ($10^{8} M_{\odot}$). The models are based on the vertically averaged equations for the energy balance between viscous heating and radiative as well as advective cooling, and on hydrostatic equilibrium. The heating term, governed by the viscosity parameter $\alpha$, is in the radiation pressure dominated region assumed proportional either to the total pressure, or to the square root of the total times the gas pressure. In the gas pressure dominated region, located at larger distances, the heating is assumed proportional only to the gas pressure. 
We calculate here the vertical profiles of temperature, density and pressure, using opacity tables that cover the temperature range relevant for partial hydrogen ionization, including the presence of dust and molecules (see details in [@roz99]). The basic parameter of each stationary model is the global (external) accretion rate, through which we determine the total energy flux dissipated in the disc at every radius $r$. Once the effective temperature and surface density are determined at every disc radius, we find the stable solutions, i.e. the accretion rates for which the slope of the $T-\Sigma$ (or $\dot M -\Sigma$) relation is positive, and the unstable solutions, with negative slopes. In other words, the “S-curve” is plotted locally at a number of disk radii, and we search for the critical $\dot m$ points at which the curve bends. These points limit the maximum and minimum accretion rates for which the disk will be unstable at a given radius. In turn, we determine the range of radii for which, at a given global accretion rate, the disc is unstable first due to the radiation pressure and then to the ionization instability. For the latter, the unstable strip is located at the outskirts of the disk. Obviously, if the instability arises in the outer disk, the front will then propagate inwards to much smaller radii. However, if the disk size is smaller than the inner edge of the unstable strip plotted in Figure \[fig:topo\], no ionization instability outbursts should take place. Our results, based on the detailed vertical structure calculations, are consistent with the simplified formulae given in the Appendix of [@Lasota01] with respect to the inner boundary of the ionisation instability strip. The outer edge we determined is somewhat larger, due to the different opacity tables in our model, which include absorption by molecules, e.g. molecular hydrogen, as described in [@roz99]. 
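The slope criterion just described can be illustrated with a short numerical sketch. This is a toy example of our own (the synthetic "S-curve" values below are schematic, not output of the disc code): a branch is stable where the local slope $dT_{\rm eff}/d\Sigma$ is positive.

```python
# Toy illustration of the S-curve stability criterion: a local disc annulus
# is thermally/viscously stable where dT_eff/dSigma > 0, unstable where the
# slope is negative (the middle branch of the "S").

def stable_branches(sigma, teff):
    """Return a boolean per segment: True where the slope dT/dSigma > 0."""
    flags = []
    for i in range(len(sigma) - 1):
        slope = (teff[i + 1] - teff[i]) / (sigma[i + 1] - sigma[i])
        flags.append(slope > 0)
    return flags

# Schematic S-curve: lower stable branch, middle unstable branch (Sigma
# decreases while T_eff keeps rising), upper stable branch.
sigma = [1.0, 2.0, 3.0, 2.5, 2.0, 2.5, 3.5]
teff  = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]

flags = stable_branches(sigma, teff)
# The two False segments in the middle mark the unstable branch, where the
# critical bending points of the S-curve would be located.
```

In the paper's calculation the same sign test is applied at every disc radius, and the bending points of the curve give the critical accretion rates limiting the unstable zone.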
In Figure \[fig:topo\] we show the maps of the disc instabilities for the two chosen black hole masses, on the plane radius vs. global accretion rate (in dimensionless units). In addition, we distinguish the two possible stabilizing mechanisms for the radiation pressure instability: the heating prescription and the possibility of energy outflow to the jet. The latter is parameterized by the following function: $$\eta_{\rm jet} = 1 - {1 \over 1 + A \dot{m}^{2}} \label{eq:jet}$$ and the jet outflow acts as a source of additional cooling ([@nayak00], [@janiuk02]). ![The extension of the radiation pressure (solid and dotted lines) and hydrogen ionization (green; dashed lines) unstable zones, depending on the mean accretion rate (Eddington units). The results are for two heating prescriptions: $\alpha P_{\rm tot}$ (blue; solid lines) and $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ (red; dotted lines). The crossed regions mark the results for a non-zero fraction of jet power, described by Eq. (\[eq:jet\]) with $A=25$. The black hole mass is $M = 10 M_{\odot}$ (top) and $M = 1\times 10^{8} M_{\odot}$ (bottom). The viscosity is $\alpha=0.01$. []{data-label="fig:topo"}](Fig_topo_M10_rad_ioniz.ps "fig:"){width="8cm" height="8cm"} ![](Fig_topo_M1e8_rad_ioniz.ps "fig:"){width="8cm" height="8cm"} The jet outflow reduces the size of the unstable zone for large accretion rates, and allows the instability to operate only below some maximum $\dot m$. This threshold is of course sensitive to the adopted jet strength parameter. Figure \[fig:topo\] shows the case of a very strong jet, with $A=25$, for which the limiting accretion rate is about 20% of the Eddington rate. For a 10 times weaker jet the limiting accretion rate is about 3 times larger. The viscosity parameter, $\alpha$, only moderately affects the results for the radiation pressure instability, apart from the timescales of the limit cycle. Figure \[fig:topo\] presents the conservative case of a very small viscosity, $\alpha=0.01$, for which the extension of the unstable zone is small. If the viscosity is larger, the unstable zone grows: for instance, for $\alpha=0.1$ the outer radius of the instability is larger by a factor of $\sim 1.25$. This also means that the minimum accretion rate for which the disc is unstable is slightly smaller. However, at these smallest accretion rates the unstable zone is very narrow. For a very narrow zone the instability does not operate, because the front does not propagate, and we have only marginally stable solutions ([@szusz97]). For instance, [@hameury09] find that the heating and cooling fronts of the ionization instability do not propagate strongly enough in their model to account for the large luminosity oscillations in AGN. The outburst cycles caused by the ionization instability are very sensitive to the viscosity parameter. As was shown already for dwarf novae, amplitudes consistent with observations can be obtained only in models with non-constant viscosity, i.e. $\alpha$ in the hot state must be larger than in the cold state. 
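The scaling of the jet cooling term, Eq. (\[eq:jet\]), can be sketched numerically. This is a minimal illustration with our own arithmetic: identifying the quoted limiting rate with the point where half of the released power is diverted into the jet is our reading, not a statement from the full disc model.

```python
# Fraction of the released power carried away by the jet, as in Eq. (jet):
# eta_jet = 1 - 1/(1 + A*mdot^2), with mdot in Eddington units.

def eta_jet(mdot, A):
    return 1.0 - 1.0 / (1.0 + A * mdot ** 2)

# For the strong jet (A = 25), half of the power goes into the jet already
# at mdot = 1/sqrt(A) = 0.2, i.e. A*mdot^2 = 1 -- the same ~20% Eddington
# rate quoted in the text as the limiting accretion rate.
half_power_mdot = (1.0 / 25) ** 0.5        # 0.2

# For a 10x weaker jet (A = 2.5), the same eta is reached at a rate larger
# by sqrt(10) ~ 3.2, consistent with "about 3 times larger" in the text.
weak_half_power = (1.0 / 2.5) ** 0.5
```

The inverse-square-root scaling of the threshold with $A$ follows directly from the form of Eq. (\[eq:jet\]).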
This may also be the case in X-ray binaries, however in AGN the situation may be different (see [@janiuk04]). Thus the fast radiation pressure instability is expected to operate for an average accretion rate higher than a certain lower limit, mostly dependent on the adopted viscous scaling. There is an upper limit as well if the outflow is a strongly increasing function of the Eddington ratio. The ionization instability should operate if the disc is large enough to contain a partially ionized hydrogen zone for a given average accretion rate. Confronting the observational constraints with the model predictions will in turn allow us to find constraints on the viscosity parameterization and the role of the outflow. In the next sections we make a preliminary step in this direction. Observational constraints {#sec:obs} ========================= The Galactic X-ray binary systems are variable over a wide range of timescales. First, the transient X-ray sources undergo their X-ray active states on timescales of years. Second, some sources exhibit periodic X-ray variability on timescales of months. Third, some of the most luminous sources are variable on timescales of tens to thousands of seconds. Finally, many X-ray binaries undergo quasi-periodic oscillations. Direct comparison of these data with the models is not simple: even the identification of a type of variability with a mechanism is not unique. In Table \[tab:binaries\] we summarize the properties of exemplary, best studied X-ray binary sources found in the literature, which in our opinion may display the radiation pressure or ionization instability. We list their characteristic variability timescales and amplitudes, the estimated Eddington ratios and disc sizes, and we indicate a possible instability mechanism responsible for the variability, whenever it is in agreement with our computations presented in Sec. \[sec:results\]. 
The maximum disc radius is estimated as 60% of the Roche lobe size, from the simplified formula given by [@pac71], whenever we had the data for the system orbital parameters.

  Source                     $\Delta T$     $F_{max}/F_{min}$   $\dot M/\dot M_{Edd}$        $R_{d}/R_{s}$        Instability   Ref.
  ------------------------- -------------- ------------------- ---------------------------- -------------------- ------------- ------------
  A0620-00                   150 days       300                 $10^{-2} - 3$                $4.8\times 10^{5}$   Ioniz.        14
  GRS 1915+105               20-100 yrs     $> 100$             0.25-0.7                     $6.3\times 10^{5}$   Ioniz.        2
  GRS 1915+105               100-2000 s     3-20                as above                     as above             $P_{rad}$     1,3,19
  GS 1354-64                 $\sim$ 30 d    $> 20$              0.1-1.8                      $1.8\times 10^{5}$   Ioniz.        33
  GS 1354-64                 $\sim$ 20 s    1.5-2               as above                     as above             $P_{rad}$     4,20,21,35
  XTE J1550-564              200 d          300                 $\sim 0.15$                  $1.3\times 10^{5}$   Ioniz.        36
  XTE J1550-564              $\sim$2000 s   1.5                 $\sim 0.15$                  as above             $P_{rad}$     5,22
  GX 339-4                   100-400 days   75                  $< 0.05$                     $1.6\times 10^{5}$   Ioniz.        6,23,24
  GRO J0422+32               200 days       $>30$               $0.002 - 0.02$               $4.8\times 10^{4}$   Ioniz.        7,25
  GRO J1655-40               20-100 days    16                  $5\times 10^{-4}-0.45$       $1.0\times 10^{5}$   Ioniz.        8,26,27
  GRO J1655-40               0.1-1000 s     7.5                 as above                     as above             $P_{rad}$     8,32
  4U 1543-47                 50 days        300                 $4.5\times 10^{-4}-0.04$     $9.6\times 10^{4}$   Ioniz.        9
  GS 1124-684                200 days       24                  $\sim 10^{-4}$ - $\sim 1.0$  $5.2\times 10^{4}$   Ioniz.        6,14
  GS 2023+338                150 days       $>100$              $0.01 - 1.0$                 $3.8\times 10^{5}$   Ioniz.        10,29,30
  GS 2023+338                60 s ?         500                 as above                     as above             $P_{rad}$     10
  SWIFT J1753.5-0127         150 days       10                  0.03                         $2.0\times 10^{4}$   Ioniz.        17,31
  4U 1630-472                50-300 days    60                                                                    Ioniz.        28
  GRS 1730-312               6 days         200                                                                   Ioniz.        11
  H 1743-322                 60-200 days    100                                                                   Ioniz.        12
  GS 2000+251                200 days       240                                                                   Ioniz.        6
  MAXI J1659-152             20 days        15                                                                    Ioniz.
  CXOM31 J004253.1+411422    $>30$ days     $>300$                                                                Ioniz.        13
  XTE J1818-245              100 days       40                                                                    Ioniz.        15
  XTE J1650-500              80 days        120                                                                   Ioniz.        16
  XTE J1650-500              100 s          24                                                                    $P_{rad}$     18

\[tab:binaries\]

The estimates given in Table \[tab:binaries\] should be treated only as indications. 
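The order of magnitude of the $R_{d}/R_{s}$ column can be reproduced with the recipe described above. The sketch below uses the standard Paczyński approximation for the Roche lobe; the binary parameters at the end are purely illustrative, not values taken from the table.

```python
# Disc size as a fraction of the accretor's Roche lobe, in Schwarzschild
# radii. Uses the Paczynski (1971) approximation
#   R_L ~ 0.462 * a * (M_bh / (M_bh + M_star))**(1/3),
# and R_d = 0.6 * R_L as adopted in the text.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def roche_lobe_radius(a, m_bh, m_star):
    """Roche lobe radius of the accretor for orbital separation a."""
    return 0.462 * a * (m_bh / (m_bh + m_star)) ** (1.0 / 3.0)

def disc_size_in_rs(a, m_bh, m_star, fraction=0.6):
    """Maximum disc radius (60% of the Roche lobe) in units of R_s = 2GM/c^2."""
    r_d = fraction * roche_lobe_radius(a, m_bh, m_star)
    r_s = 2.0 * G * m_bh / c ** 2
    return r_d / r_s

# Illustrative numbers only: a 10 M_sun black hole with a 1 M_sun
# companion at a separation of 1e10 m gives R_d/R_s of order 10^5,
# the same order as the entries in the table.
example = disc_size_in_rs(1e10, 10 * M_sun, 1 * M_sun)
```

With the chosen illustrative separation the result falls near $10^{5}$ Schwarzschild radii, comfortably inside the range spanned by the tabulated X-ray novae.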
The information comes mostly from the literature and frequently relies on qualitative description. All objects in Table \[tab:binaries\] display the ionization instability, since we selected objects classified as X-ray novae and there is very little uncertainty in the establishment of their nature. Most of them show large amplitude outbursts lasting days. The ratio of the maximum to minimum flux was estimated from the peak and the emission level at the end of the outburst, so it represents a lower limit - only some of the X-ray novae have a clear detection in quiescence. Clearly, more careful observational analysis is needed to better study the amplitude pattern during the ionization instabilities in these sources. However, from the comparison of the accretion rates and disc sizes with our map presented in Figure \[fig:topo\], one can already infer some information about the individual sources. All but one of the sources are well within the instability strip, so the instability operates as expected. Only one source, SWIFT J1753.5-0127, is at the inner border of the instability strip. However, our disc size estimation is based on the black hole mass. If this mass is larger than the current value inferred from the mass function, the unstable strip in its accretion disc will be broader. The amplitude of the outbursts in this source is rather small compared to the other X-ray novae in Table \[tab:binaries\], which would be consistent with a narrow instability strip due to the small disk radius. Further studies of this exceptional source can provide key tests of the exact location of the instability zone. We also indicated in Table \[tab:binaries\] which sources are promising candidates for the presence of the radiation pressure instability. In selecting them we paid attention to possible detections of exceptionally low frequency QPOs or reports of outbursts or variability on timescales longer than 10 s. 
Any variability faster than 10 s is unlikely to be related to the radiation pressure instability, and for such fast QPOs there are other mechanisms under consideration. GRS 1915+105 is the most obvious candidate, with its semi-regular outbursts on timescales of 100 - 2000 s present in several of the brighter characteristic states ([@belloni00]). These outbursts were already modeled by several authors as caused by the radiation pressure instability. However, fast outbursts are apparently present in several other sources. We show a specific example. In Figure \[fig:compar\] we show an exemplary lightcurve of GS 1354-64, which we extracted from the RXTE data archive. A periodicity of the $\sim 20$ s outbursts is clearly visible in the data. The profiles, with a slow rise and fast decay, are characteristic of the limit cycle oscillations in the radiation pressure instability. For comparison, the lightcurve of Cyg X-1 in its hard state is plotted in the bottom panel of Figure \[fig:compar\]. Here we see very stable X-ray emission and no signatures of cyclic outbursts. The other sources were selected on the basis of their description in the literature. The selection is thus not completely objective or uniform, but may serve as a guide to the viability of the approach. Having divided the Galactic sources into those which possibly show the radiation pressure instability and those which seem stable, we can compare the Eddington ratios within the two groups. Sources with Eddington ratios below 0.03 are stable. Examples are GRO J0422+32, GX 339-4, as well as Cyg X-1. Among the unstable sources, the object XTE J1550-564 has the smallest Eddington ratio, 0.15. The instability seen in this source, however, is likely marginal. [@cui99] reported 82 mHz oscillations, with the frequency later increasing to a few Hz. 
Low frequency oscillations in this source were recently studied by [@rao10], who report frequencies varying between 2 and 10 Hz, which is far too high for the radiation pressure instability. An interesting transition hinting at an instability was reported by [@homan01]. In the MJD 51,254 observation, when the source was still very bright, the luminosity suddenly increased without a change in the color. Whether or not this single transition indeed hints at the radiation pressure instability, the source likely defines the lower limit for the radiation pressure instability to operate. In Table \[tab:binaries\] we do not see sources which have a very large Eddington ratio and are stable against the radiation pressure instability. As we mentioned above, GRS 1915+105 is a good example, showing outbursts even at an Eddington ratio close to 1, so it seems there is no upper limit on the Eddington ratio in the case of the radiation pressure instability. Thus, observationally, the radiation pressure instability should operate between the Eddington ratio of 0.15 up to 1 or more. Comparing this with the several theoretical possibilities plotted in Fig. \[fig:topo\], we can draw certain conclusions. First, only the viscosity prescription $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ is consistent with the lower limit for the radiation pressure instability, as the unstable region then extends from the Eddington ratio of 0.16 upwards. The prescription $\alpha P_{\rm tot}$ would allow the instability to operate at too low a luminosity. Second, too efficient cooling by the jet is also ruled out. The cooling operates similarly in both cases of the viscosity parameterization and stabilizes the disc. For the adopted values of the jet efficiency parameter, the disc is stable for Eddington ratios above 0.22, which is clearly inconsistent with observations. Therefore, the parameter $A$ in Eq. (\[eq:jet\]) of the disk-jet coupling must be significantly lower than this exemplary value of $A=25$. 
However, the jet is by no means excluded and can still carry a substantial energy, because in the case of equipartition between the disk and jet radiation, for the Eddington accretion rate $\dot m =1$ a jet coupling constant equal to $A=1$ would be enough. The parameterization $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ has the additional advantage of reducing the outburst amplitude in comparison to $\alpha P_{\rm tot}$. Most of the candidate sources for the radiation pressure instability show rather low to moderate amplitudes, from a factor of 2 to 20. Only one source - GS 2023+338 - shows huge outbursts, with brightenings by a factor of 500 on timescales of 60 seconds. [@zand92] interpreted this short timescale variability as caused by variable absorption. The behaviour of this source is exceptional and puzzling. ![Top panel: an RXTE lightcurve of GS 1354-64, observed on 05/12/1997. Bottom panel: a lightcurve of Cyg X-1, observed by RXTE in the hard state, rescaled to the mean count rate of GS 1354-64. The time bin is 0.1 s in both datasets.[]{data-label="fig:compar"}](compar2.eps){width="8cm" height="8cm"} When studying the instabilities in the supermassive black hole environment, we usually cannot directly observe a duty cycle of one single object, since the black hole masses are large and the expected timescales are very long. Instead, statistical studies are useful here and we can find evidence for episodic source activity (e.g. [@czerny09]). However, an exceptional object is NGC 4395, with a black hole mass of $3.6 \times 10^{5} M_{\odot}$ ([@peterson05]). In this source, in principle, we could observe the variability due to the radiation pressure instability. As was shown by [@czerny09], the outbursts for a central black hole of mass $10^{7} M_{\odot}$ should last less than 100 years, so for a mass 30 times smaller, the outbursts should last $\sim 3$ years! No such outbursts are observed. 
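The duration estimate for NGC 4395 quoted above amounts to a simple linear-in-mass rescaling of the [@czerny09] result; the one-liner below is our reading of that arithmetic, not the full time-dependent model.

```python
# Linear-in-mass rescaling of the outburst duration: ~100 yr at
# M = 1e7 M_sun (Czerny et al.) implies ~3.6 yr at the NGC 4395 mass of
# 3.6e5 M_sun, i.e. a mass roughly 30 times smaller.

def outburst_duration_yr(m_bh_msun, t_ref_yr=100.0, m_ref_msun=1e7):
    """Rescale a reference outburst duration linearly with black hole mass.

    The linear scaling is the simple reading of the text, not a full
    disc evolution model.
    """
    return t_ref_yr * m_bh_msun / m_ref_msun

t_ngc4395 = outburst_duration_yr(3.6e5)   # a few years, as in the text
```

The absence of any such few-year outburst in NGC 4395 is then the observational input used in the following paragraph.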
However, this fact is actually consistent with our expectations, since the Eddington ratio in this source is only $1.2\times 10^{-3}$. The source is thus stable with both the $\alpha P_{\rm tot}$ and $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ mechanisms and provides no useful constraints on the parameterization of the viscous torque. Significant constraints can be obtained from radio galaxies. In the case of the accretion discs in radio galaxies, the Eddington ratios can be estimated e.g. through the correlation with the broad line luminosities ([@dai07]). The FR I and FR II sources in this sample have low Eddington ratios, of 0.00975 and 0.0096, respectively. Observations clearly show that these sources are stable against the radiation pressure instability, since they form very large scale radio structures. In particular, the central engine of FR II galaxies must operate in a continuous way for millions of years. Fig. \[fig:topo\] shows that their stability is consistent with theory if the heating is given by $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$. On the other hand, the FSRQ sources with compact radio structures tend to have larger Eddington ratios. These sources may in fact exhibit episodic activity, the small size of the structure indicating a new episode, as proposed by [@czerny09]. Therefore, it seems that the assumption of $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ can accommodate the observational constraints both for Galactic sources and for AGN. A typical value of the Eddington ratio found in the SDSS sample of quasars by [@kelly10] is 0.05 with a scatter of 0.4 dex. This is also large enough for the episodic activity caused by the radiation pressure instability. Possibly, a selection effect is in fact the reason why we detect only the sources in the active state: most of the sources in the quiescent state are too dim to be detectable. 
There is also a possibility that active galaxies at high Eddington ratios, close to 1, are actually stable due to the stabilizing power of the jet/outflow. This mechanism seems not to work efficiently in Galactic sources, but the relative jet power in accreting sources rises with the black hole mass: $$\log L_{R} = 0.6 \log L_{X} + 0.8 \log M$$ as was discussed in the context of the so-called 'fundamental plane' of black hole activity ([@merloni03], [@falcke04]). Therefore, this effect in AGN can be much stronger than in the Galactic sources. Discussion {#sec:diss} ========== The theory of accretion disks suggests the presence of two instabilities: the ionization instability and the radiation pressure instability. In the present paper we made a step towards confronting the theoretical expectations with the observations of Galactic sources and AGN. The ionization instability is broadly accepted as an explanation of the X-ray novae phenomenon and it has been compared to the data for Galactic sources by several authors. In this paper we tested whether the size of the disk in the X-ray novae systems is consistent with the conditions of the ionization instability. The source SWIFT J1753.5-0127 is at the border of the instability strip, which may explain the low outburst amplitude in this source. All the other sources are located well within the instability strip, supporting this outburst mechanism. The details of the outbursts, however, are not well understood yet. We note here that the interpretation of the time profiles of X-ray novae is somewhat complex. Some of the outbursts have the well understood FRED profile (i.e. fast rise and exponential decay), with the sharp luminosity rise due to the ionization instability and an extended wing due to the X-ray irradiation of the outer disc. However, in many cases an additional 'superoutburst' follows the first outburst (see Figure \[fig:super\]). 
The possible interpretation of this secondary, extended maximum may be that the accretion rate from the companion star increases due to some modulation effect (see [@smak10] for the analysis of the dwarf novae superoutbursts, very similar to the X-ray novae presented here). Therefore, the classification and quantitative analysis of the outburst durations due to the ionization instability is not very straightforward.

![RXTE/ASM lightcurves of two X-ray novae. The bottom panel shows 4U 1543-475, an example of classical FRED behaviour. The top panel shows the much more complex behaviour of GRO 1655-40: a short, almost symmetric outburst followed by an extended phase of activity. This is possibly an analog of the superoutbursts seen in many CV systems.[]{data-label="fig:super"}](two_examples_poziom.eps){width="8cm" height="8cm"}

Interestingly, the famous microquasar GRS1915+105, observed in the very high state since its discovery in 1992, may in fact still be in such a 'superoutburst' phase. On the other hand, the FRED profile is not always seen, possibly because, for unknown reasons, the disc is not irradiated. In this case, the ionization instability results in a short, symmetric profile. An example of such a source can be GRS1730-312, with a timescale of 6 days (see Table \[tab:binaries\]). In the case of AGN, the applicability of the ionization instability is still under discussion. Fortunately, radio observations can give direct insight into the activity history. The radio maps of several sources show multiple activity periods, mostly in the form of double-double structures ([@schoenmakers00], [@saripal09], [@marecki09]). Some of those events may be due to mergers, and a significant change of the jet direction suggests such a mechanism. However, if the jet axis does not change, either a minor merger or the ionization instability is the likely cause. The timescales can be studied by analyzing the ages of the structures.
Some other sources in turn show a decay phase ([@marecki10]). It is quite likely that the microquasar recently discovered to possess an outflow highly dominated by kinetic power ([@pakull10]) also represents such a fading source. The analysis of such complex behaviour requires radio maps with a large dynamic range. Nevertheless, multiple radio surveys are under way and more observational constraints should soon be available. The radiation pressure instability model was first proposed to model the microquasar time variability ([@taam97], [@nayak00]). In the case of the regular periodic outbursts of GRS 1915+105 (see e.g. [@fb04]), lasting from $\sim 100$ to $\sim 2000$ s (depending on the source mean luminosity), this approach is successful ([@janiuk02]). No other quantitative mechanism has been put forward to explain the observed behaviour of this object, and only the limit cycle mechanism (likely driven by the radiation pressure instability) explains the absence of direct transitions from its spectral state C to the state B. Still, the question arises why this microquasar seems to remain an exceptional case in which the radiation pressure instability gives an observational signature. In the present paper we suggest that several other black hole binaries can also be promising objects for the radiation pressure instability. All the candidate sources have Eddington ratios above $\sim 0.15$. Such a condition is consistent with theory if the viscous torque is parameterized as $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ instead of $\alpha P_{\rm tot}$. The small outburst amplitudes in all candidate sources (with one exception) also support the $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ prescription. Recent 3-D MHD simulations show a contribution to the stress from radiation pressure, although likely weaker than $\alpha P_{\rm tot}$ ([@hirose09b]).
Further numerical work and observational constraints aimed at finding the proper description of the viscous torque should be treated as complementary. We argue that the radiation pressure instability also likely applies to active galaxies. The derived separations between outbursts are of the order of $10^6$ yrs for a $10^8\,M_{\odot}$ black hole, while the outburst duration is an order of magnitude shorter. Again, the parameterization of the viscosity through $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ seems to work better than $\alpha P_{\rm tot}$. Such a parameterization is consistent with the lack of instability in FR II radio galaxies, as their Eddington ratio is below 0.025, the lower limit for the instability in active galaxies if $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ is adopted. This lower limit is much lower than in the case of stellar-mass objects (see also e.g., [@sadowski09]) due to the direct dependence on the black hole mass. This model has also been applied recently to explain the apparent young ages of the Giga-Hertz Peaked Spectrum radio sources ([@czerny09]). We speculate that in the hot state the luminous core powers a radio jet, while during the cold state the radio activity ceases. Scaling the timescale with the black hole mass by a factor $10^8$ gives outburst durations of $10^{2} - 10^{4}$ yrs, with amplitudes sensitive to the energy fraction deposited in the jet. This gives an additional, model-independent argument that the intermittency in quasars on timescales of hundreds/thousands of years is likely of a similar origin as in the microquasars. An important, unstudied aspect is a possible interplay between the two instabilities. The location of the unstable zone is sensitive to the black hole mass (see Fig. \[fig:topo\]). The two zones are located much closer to each other in the case of the active galaxies than in the case of Galactic systems. In Galactic binaries, frequently only one instability, the ionization instability, is operating.
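The mass scaling quoted above is a one-line calculation; the following illustrative sketch scales the observed GRS 1915+105 outburst durations by the factor $10^8$ given in the text:

```python
# Scale the observed GRS 1915+105 outburst durations (~100-2000 s) by the
# mass factor of 1e8 quoted in the text to obtain AGN outburst durations.
YEAR_S = 3.156e7  # seconds per year

t_short_yr = 100.0 * 1e8 / YEAR_S    # ~3e2 yr
t_long_yr = 2000.0 * 1e8 / YEAR_S    # ~6e3 yr
print(f"{t_short_yr:.0f} yr - {t_long_yr:.0f} yr")
```

Both values land inside the $10^{2} - 10^{4}$ yr range quoted for quasar intermittency.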
For small accretion rates, which imply systematically lower disc temperatures, the ionization instability fits well within the disc size while the radiation pressure instability will not develop. If both instabilities are present, they are separated in the disc by over 1000 $R_{S}$, which implies that they are well separated in time, acting on timescales of seconds and tens of days, respectively, so they can be modeled independently: the short-timescale oscillations of the radiation pressure instability will simply be superimposed on the high luminosity state. In AGN, the situation is different. The partly ionized zone is much closer to the black hole. If the two unstable zones are very close to each other, the rate of supply of material to the radiation pressure dominated region may be modulated on slightly longer timescales, independently of the environment changes in the host galaxy. This poses an observational challenge: to deconvolve the interplay between the two instability timescales, as well as the environmental effects. Additional modeling effort should thus be undertaken to better formulate the theoretical expectations for these instabilities. Our research has a very preliminary character, and further work is clearly needed. On the observational side, a careful search for excess variability in Galactic sources at timescales of a few tens to a few hundreds of seconds should be done. The main difficulty will concern inventing a proper mathematical description of this excess, because the outbursts due to the radiation pressure instability are not likely to be strictly periodic, so they will not appear clearly in periodograms. In the case of the disks around supermassive black holes, more constraints should come from detailed radio maps of compact sources showing reactivation events on short timescales of hundreds to thousands of years.
Further development of the models, particularly in the case of the radiation pressure instability, is also needed, and this should proceed in two ways. First, if 3-D MHD computations such as those done by Hirose et al. (2009) with realistic boundary conditions can be prolonged to viscous timescales, we would see whether the instability develops a limit cycle without an ad hoc parametric description of the viscosity. This approach is computationally challenging and may not happen soon. Second, the sources which in our opinion are promising candidates for the radiation pressure instability do not exhibit regular outbursts, which implies that strong non-linear/non-local phenomena are important. Several such phenomena can be implemented into the current parametric codes. Irradiation is certainly important, and some models for the ionization instability actually incorporate it. The disk irradiation may also be important for the radiation pressure instability. The parameter $\alpha$ may depend locally on the disk thickness, and in addition the dissipation may be coupled to the magnetic pressure with a significant, radius-dependent time delay (Kluzniak, private communication). The theoretical lightcurves can be distorted by a stochastic process, such as a magnetic dynamo, operating on the local dynamical timescale ([@mayer06]; see also [@janiuk07] for an additional discussion of this problem in the context of the magnetically coupled hard X-ray corona). That process can evolve e.g. according to a Markov chain model. The timescale and possibly also the magnetic cell size are governed by the $H/R$ ratio. As a result, the outbursts in the limit cycle can be affected by flickering, as the fluctuations propagate to the inner disc. On the other hand, in between the outbursts the fluctuations are more likely to be smeared out. The resulting lightcurve and PDS spectrum will depend on the adopted magnitude of the poloidal magnetic field and other parameters.
Comparison of such improved models with observational constraints will help in the future to better understand the disk dynamical behaviour.

Conclusions {#sec:concl}
===========

Our preliminary survey of model predictions in confrontation with observational data suggests that the parametric description of accretion disk viscosity through $\alpha \sqrt{P_{\rm gas}P_{\rm tot}}$ is a promising representation in the radiation pressure dominated disk part. We selected several Galactic sources as candidates which may show the radiation pressure instability, but further research is clearly needed. The same law likely applies to AGN, and support comes from the stability of the dwarf Seyfert galaxy NGC 4395 and of the FR I and FR II sources. The ionization instability criterion in Galactic sources is consistent with the disk sizes. There are only very limited constraints for this instability in AGN.

[**Acknowledgments**]{} We thank Rob Fender, Ranjeev Misra, Marek Abramowicz, Wlodek Kluzniak and Olek Sadowski for helpful discussions. We are very grateful to the anonymous referee for the comments which helped us to improve the presentation of the results. This work was supported in part by grant NN 203 512638 from the Polish Ministry of Science. The ASM lightcurves were taken from http://xte.mit.edu/.

1988, *ApJ*, 332, 646 1991, *ApJ*, 376, 214 2000, *A&A*, 355, 271 2001, *MNRAS*, 323, 517 2004, *ApJ*, 615, 880 2009, *A&A*, 501, 1 1982, *ApJL*, 260, 83 1996, *ApJL*, 466, 31 1997, *ApJ*, 482, 178 2010, *ApJ*, 725, 1393 2004, *ApJ*, 613, L133 2004, *ApJ*, 617, 1272 1999, *ApJ*, 512, L43 2009, *ApJ*, 698, 840 2007, *AJ*, 133, 2187 2009, *MNRAS*, 400, 1337 2007, *A&ARev*, 15, 1 2001, *A&A*, 373, 251 2000, *ApJ*, 532, 1069 2004, *A&A*, 414, 895 2004, *ARAA*, 42, 317 2010, *The Astronomers Telegram*, No. 2474 2010, *A&A*, 512, A21 1994,*IAU Circ.*, No.
6078 1995, *Nature*, 374, 703 2009, *A&A*, 496, 413 2009, *ApJ*, 691, 16 2009, *ApJ*, 704, 781 2001, *ApJS*, 132, 377 2003, *ApJ*, 583, L95 1992, *A&A* 266, 283 2002, *ApJ*, 576, 908 2004, *ApJ*, 602, 595 2007, *A&A*, 466, 793 2010, *ApJ*, 719, 1315 1990, *ApJ*, 361, 590 1997, *ApJ*, 485, L33 2001, *New Astron. Revs.*, 45, 449 1974, *ApJL*, 187, 1 1986, *ApJL*, 305, 28 2009, *A&A*, 506, L33 2010, *A&A*, in press, arXiv:1010.0651 2006, *MNRAS*, 368, 379 2001, *ApJL*, 562, 137 2003, *MNRAS*, 345, 1057 1981, *A&A*, 104, L10 2006, *ApJ*, 653, 525 1990, *ApJ*, 351, 47 2010, *MNRAS*, 408, 1769 2000, *ApJ*, 535, 798 1997, *A&A*, 321, 776 1971, *A&A*, 9, 183 2010, *Nature*, 466, 209 2005, *ApJ*, 632, 799 1974, *A&A*, 29, 179 2010, *ApJ*, 714, 1065 2000, *ApJ*, 530, 955 1999, *MNRAS*, 305, 481 2009, *ApJS*, 183, 171 2009, *ApJ*, 695, 156 1996, *ApJ*, 458, 491 , 1997, *ApJ*, 482, L9 1973, *A&A*, 24, 337 1976, *MNRAS*, 175, 613 2000, *MNRAS*, 315, 371 1997, *ApJ* 487, 858 1984, *Acta Astron.*, 34, 161 2010, submitted to *Acta Astron.*, arXiv:1011.1090 1999,*ApJ*, 517, L121 2000, *ApJ*, 531, 537 2008, *AIP Conference Proceedings*, 1010, 103 1997, *MNRAS*, 287, 165 1997, *ApJ*, 485, 83 1996, *ARA&A*, 34, 607 2003, *ApJ*, 592, 1100 1996, *Astronomy Letters*, 22, 664 2010, *ApJ*, 718, 620 1999, *ApJ*, 513, 477 1996, *ApJ*, 464, L139 2002, *ApJ*, 578, 357 2007, *ApJ*, 659, 1511 1997, *ApJL*, 488, 113 1999, *MNRAS*, 305, 231 [^1]: E-mail: [email protected]
Crosstalk (or inter-channel interference) is a major source of channel impairment for Multiple Input Multiple Output (MIMO) wired communication systems, such as Digital Subscriber Line (DSL) communication systems. As the demand for higher data rates increases, DSL systems are evolving toward higher frequency bands, wherein crosstalk between neighboring transmission lines (that is to say transmission lines that are in close vicinity over part or whole of their length, such as twisted copper pairs in a cable binder) is more pronounced (the higher the frequency, the stronger the coupling). Different strategies have been developed to mitigate crosstalk and to maximize effective throughput, reach and line stability. These techniques are gradually evolving from static or dynamic spectral management techniques to multi-user signal coordination (or vectoring hereinafter). One technique for reducing inter-channel interference is joint signal precoding: the transmit data symbols are jointly passed through a precoder before being transmitted over the respective communication channels. The precoder is such that the concatenation of the precoder and the communication channels results in little or no inter-channel interference at the receivers. A further technique for reducing inter-channel interference is joint signal post-processing: the receive data symbols are jointly passed through a postcoder before being detected. The postcoder is such that the concatenation of the communication channels and the postcoder results in little or no inter-channel interference at the receivers. Postcoders are also sometimes referred to as crosstalk cancellation filters. The choice of the vectoring group, that is to say the set of communication lines the signals of which are jointly processed, is rather critical for achieving good crosstalk mitigation performance.
Within a vectoring group, each communication line is considered as a disturber line inducing crosstalk into the other communication lines of the group, and the same communication line is considered as a victim line receiving crosstalk from the other communication lines of the group. Crosstalk from lines that do not belong to the vectoring group is treated as alien noise and is not canceled. Ideally, the vectoring group should match the whole set of communication lines that physically and noticeably interact with each other. Yet, local loop unbundling on account of national regulation policies and/or limited vectoring capabilities may prevent such an exhaustive approach, in which case the vectoring group would include a sub-set only of all the physically interacting lines, thereby yielding limited vectoring gains. Signal vectoring is typically performed within a Distribution Point Unit (DPU), wherein all the data symbols concurrently transmitted over, or received from, all the subscriber lines of the vectoring group are available. For instance, signal vectoring is advantageously performed within a Digital Subscriber Line Access Multiplexer (DSLAM) deployed at a Central Office (CO) or as a fiber-fed remote unit closer to subscriber premises (street cabinet, pole cabinet, building cabinet, etc). Signal precoding is particularly appropriate for downstream communication (toward customer premises), while signal post-processing is particularly appropriate for upstream communication (from customer premises). 
More formally, a vectored system can be described by the following linear model:

Y(k) = H(k)·X(k) + Z(k)  (1),

wherein the N-component complex vector X, respectively Y, denotes a discrete frequency representation, as a function of the frequency/carrier/tone index k, of the symbols transmitted over, respectively received from, the N vectored channels, wherein the N×N complex matrix H is referred to as the channel matrix: the (i,j)-th component h_ij of the channel matrix H describes how the communication system produces a signal on the i-th channel output in response to a signal being transmitted to the j-th channel input; the diagonal elements of the channel matrix describe direct channel coupling, and the off-diagonal elements of the channel matrix (also referred to as the crosstalk coefficients) describe inter-channel coupling; and wherein the N-component complex vector Z denotes additive noise over the N channels, such as Radio Frequency Interference (RFI) or thermal noise. Linear signal precoding and post-processing are advantageously implemented by means of matrix products. In downstream, the linear precoder performs a matrix product in the frequency domain of a transmit vector U(k) with a precoding matrix P(k), i.e. X(k) = P(k)·U(k) in eq. (1), the precoding matrix P(k) being such that the overall channel matrix H(k)·P(k) is diagonalized, meaning the off-diagonal coefficients of the overall channel H(k)·P(k), and thus the inter-channel interference, mostly reduce to zero. Practically, and as a first order approximation, the precoder superimposes anti-phase crosstalk pre-compensation signals over the victim line along with the direct signal, which destructively interfere at the receiver with the actual crosstalk signals from the respective disturber lines.
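The diagonalization of eq. (1) by a zero-forcing precoder can be sketched with NumPy. This is a toy illustration, not production code: the matrix size, the coupling strength and the random channel are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Toy channel matrix H per eq. (1): dominant diagonal (direct channels)
# plus weak complex off-diagonal crosstalk.
H = np.eye(N) + 0.1 * (rng.standard_normal((N, N))
                       + 1j * rng.standard_normal((N, N)))

# Zero-forcing precoder: P = H^-1 D with D = diag(H), so that the overall
# channel H P = D is diagonal and the crosstalk terms vanish.
D = np.diag(np.diag(H))
P = np.linalg.solve(H, D)

HP = H @ P
residual = np.max(np.abs(HP - D))
print(residual)  # ~0: no inter-channel interference remains
```

The direct gains (the diagonal of D) are left untouched, so the receiver still only needs its usual single-tap FEQ.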
In upstream, the linear postcoder performs a matrix product in the frequency domain of the receive vector Y(k) with a crosstalk cancellation matrix Q(k) to recover the transmit vector U(k) (after channel equalization and power normalization), the crosstalk cancellation matrix Q(k) being such that the overall channel matrix Q(k)·H(k) is diagonalized, meaning the off-diagonal coefficients of the overall channel Q(k)·H(k), and thus the inter-channel interference, mostly reduce to zero. Thus, the performance of signal vectoring depends critically on the component values of the precoding or cancellation matrix, which component values are to be computed and updated according to the actual and varying crosstalk couplings. The various channel couplings are estimated by a vectoring controller based on pilot (or probing) signals transmitted over the respective channels. The pilot signals are typically transmitted over dedicated symbols and/or over dedicated tones. For instance, in the recommendation entitled “Self-FEXT Cancellation (vectoring) For Use with VDSL2 Transceivers”, ref. G.993.5, adopted by the International Telecommunication Union (ITU) in April 2010, the transceiver units send pilot signals on the so-called SYNC symbols. The SYNC symbols occur periodically after every 256 DATA symbols, and are transmitted synchronously over all the vectored lines (super frame alignment). On a given disturber line, a representative subset of the active tones of the SYNC symbol is 4-QAM modulated by the same pilot digit from a given pilot sequence, and transmits one of two complex constellation points, either ‘1+j’ corresponding to ‘+1’, or ‘−1−j’ corresponding to ‘−1’. The remaining carriers of the SYNC symbol keep carrying the typical SYNC-FLAG for On-Line Reconfiguration (OLR) message acknowledgment.
On a given victim line, both the real and imaginary part of the slicer error, which is the difference vector between the received frequency sample and the constellation point onto which this frequency sample is demapped, are measured on a per pilot tone basis and reported for a specific SYNC symbol to the vectoring controller for further crosstalk estimation. The successive error samples are next correlated with a given pilot sequence transmitted over a particular disturber line in order to obtain the crosstalk contribution from that disturber line. To reject the crosstalk contribution from the other lines, the pilot sequences used over the respective disturber lines are made orthogonal with respect to each other, for instance by using the well-known Walsh-Hadamard sequences. The crosstalk estimates are eventually used for updating the coefficients of the precoding or cancellation matrix. Once the precoding or cancellation matrix is initialized and in force, the process is repeated as needed to track the residual crosstalk and to obtain more and more accurate estimates. With the advent of new copper access technologies and the use of even broader spectrum up to and beyond 100 MHz, the crosstalk coupling increases, and the crosstalk power may even exceed the direct signal power. The superimposition of the crosstalk precompensation signals on the victim line may thus cause a violation of the transmit Power Spectral Density (PSD) mask, which defines the allowed amount of signal power for an individual user as a function of frequency, and may as well result in signal clipping within the Digital to Analog Converter (DAC) causing severe signal distortions. A prior art solution is to scale down the direct signal gains such that the transmit signals, including both the direct and precompensation signals, remain within the allowed bounds. The PSD reduction is line and frequency dependent, and may change over time, e.g. when a line joins or leaves the vectoring group. 
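The pilot-based estimation described above (orthogonal ±1 pilot sequences on the SYNC symbols, correlation of the victim's slicer errors with each sequence) can be sketched as follows. The sequence length, number of lines, coupling magnitudes and noise level are illustrative assumptions:

```python
import numpy as np

def walsh_hadamard(m):
    """Sylvester construction of a 2^m x 2^m Walsh-Hadamard matrix."""
    H = np.array([[1.0]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(1)
L = 8                                  # pilot sequence length (SYNC symbols)
n = 4                                  # number of disturber lines
pilots = walsh_hadamard(3)[:n]         # one orthogonal +-1 sequence per line

# True crosstalk couplings into the victim line (to be estimated).
x_true = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Slicer error on the victim over L SYNC symbols: superposition of the
# pilot-modulated crosstalk contributions plus background noise.
err = pilots.T @ x_true + 1e-4 * (rng.standard_normal(L)
                                  + 1j * rng.standard_normal(L))

# Correlating with each pilot sequence rejects the other disturbers, since
# the rows are mutually orthogonal (inner product L with themselves, 0 otherwise).
x_est = (pilots @ err) / L
print(np.max(np.abs(x_est - x_true)))  # residual at the noise floor
```

Repeating the procedure over successive SYNC-symbol blocks averages the noise down further, which is exactly the tracking loop described in the text.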
The change in direct signal gains must be communicated to the receiver to avoid FEQ issues. This first solution has been described in a standard contribution to the International Telecommunication Union (ITU) from Alcatel-Lucent entitled “G.fast: Precoder Gain Scaling”, reference ITU-T SG15 Q4a 2013-03-Q4-053, March 2013. Another prior art solution is the use of Non-Linear Precoding (NLP), which applies a modulo arithmetic operation to shift a transmit constellation point with excessive power back within the constellation boundary. At the receiver, the same modulo operation will shift the signal back to its original position. The idea to employ modulo arithmetic to bound the value of the transmit signal was first introduced by Tomlinson and Harashima independently and nearly simultaneously, with application to single-user equalization (M. Tomlinson, “New Automatic Equalizer Employing Modulo Arithmetic”, Electronics Letters, 7(5-6), pp. 138-139, March 1971; and H. Harashima and H. Miyakawa, “Matched-Transmission Technique for Channels with Intersymbol Interference”, IEEE Trans. on Communications, 20(4), pp. 774-780, August 1972). Ginis and Cioffi applied the concept to multi-user systems for crosstalk cancellation (G. Ginis and J. M. Cioffi, “A Multi-User Precoding Scheme Achieving Crosstalk Cancellation with Application to DSL Systems”, Proc. 34th Asilomar Conference on Signals, Systems and Computers, 2000). Yet, the modulo operation directly affects the transmit signal and thus the actual crosstalk induced onto the system, ending in a ‘chicken-and-egg’ problem: the modulo operation for a first user alters the precompensation for a second user; the altered precompensation for the second user alters the modulo operation for the second user; the altered modulo operation for the second user alters the precompensation for the first user; the altered precompensation for the first user alters the modulo operation for the first user; and so on.
In order to overcome this issue, the non-linear precoder is constructed using the so-called QR matrix decomposition. A good overview of the technique, with a step-by-step description of the functions, is given by Ikanos (S. Singh, M. Sorbara, “G.fast: Comparison of Linear and Non-Linear Pre-coding for G.fast on 100m BT Cable”, ITU-T SG15 Q4a contribution 2013-01-Q4-031, January 2013). More formally, the channel matrix H is first written as:

H = D·(I+C)  (2),

wherein the carrier index k has been voluntarily omitted, D is a diagonal matrix comprising the direct channel coefficients h_ii, I is the identity matrix, and C is an off-diagonal normalized crosstalk matrix comprising the normalized crosstalk coefficients h_ij/h_ii. Ideal Zero-Forcing (ZF) linear precoding is achieved when the precoding matrix P implements the inverse of the normalized crosstalk coupling channel, namely:

P = (I+C)^−1  (3),

such that H·P = D, the latter being compensated by single-tap Frequency EQualization (FEQ) at the receiver. With linear ZF precoding, the noise at the receiver input is enhanced by the direct channel frequency response, by a factor 1/h_ii. We also note that the noise is evenly enhanced for identical lines, as they are all expected to have an equal path loss h_ii. With non-linear precoding, the conjugate transpose of the normalized channel matrix is first factored into two matrices, namely:

(I+C)* = QR  (4),

wherein * denotes the conjugate transpose, R is an N×N upper triangular matrix, and Q is an N×N unitary matrix that preserves power, i.e. Q*Q = I. One diagonalizing precoding matrix is then given by:

P = Q·R*^−1  (5),

yielding H·P = D·(I+C)·Q·R*^−1 = D·R*·Q*·Q·R*^−1 = D. Let us write:

R*^−1 = L·S^−1  (6),

wherein L is an N×N lower triangular matrix with unit diagonal, and S is an N×N normalization diagonal matrix whose elements are the diagonal elements of R*. The diagonal matrix S indicates a per-line precoding gain that depends on the encoding order.
The S scaling is to be disposed of, as the modulo operation has to operate on normalized frequency samples, thereby yielding P = Q·L and H·P = D·(I+C)·Q·L = D·R*·Q*·Q·R*^−1·S = D·S. A further equalization step S^−1 is thus required at the receiver to recover the initial transmit sample. The gain scaling matrix S is estimated by the vectoring controller, and sent to the receiver for proper signal equalization. The non-linear precoder comprises a first feedforward filter L, or equivalently a first feedback filter I − S^−1·R*, followed by a second feedforward filter Q. In a first step, the transmit vector U is multiplied row by row with the lower triangular matrix L, but before proceeding to the next row, the output for element i is adapted through a modulo operation, thereby keeping the transmit power within the allowed bounds. The triangular structure of the matrix L is a solution to the aforementioned ‘chicken-and-egg’ problem: the modulo output for user i serves as input for users j encoded later (j>i), but does not affect the output of users k encoded earlier (k<i). In a second step, the resulting vector is multiplied with the matrix Q, which preserves the initial transmit power on account of its unitary property. More formally, the output of the non-linear precoder X′ is given by:

x′_1 = u_1
x′_2 = Γ_2,k(u_2 − (r_21/r_22)·x′_1)
⋮
x′_N = Γ_N,k(u_N − (r_N,N−1/r_NN)·x′_N−1 − … − (r_N1/r_NN)·x′_1)  (7),

wherein r_ij denotes the coefficients of R*, and Γ_i,k denotes the modulo operator as a function of the constellation size for carrier k and user i. The modulo operator Γ_i,k is given by:

Γ_i,k(x_i,k) = x_i,k − d·M_i,k·⌊(x_i,k + d·M_i,k/2)/(d·M_i,k)⌋  (8),

wherein x_i,k denotes a transmit frequency sample for carrier k and user i, M_i,k denotes the number of constellation points per I/Q dimension for carrier k and user i, and d denotes the distance between neighboring constellation points in one dimension.
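Eq. (8) transcribes directly into code, applied per I/Q dimension. The constellation size M and spacing d below are example values:

```python
import math

def gamma_mod(x, M, d=2.0):
    """Modulo operator of eq. (8) for one I/Q dimension: wraps the sample
    back into the constellation boundary [-d*M/2, d*M/2)."""
    return x - d * M * math.floor((x + d * M / 2) / (d * M))

# With M = 4 points per dimension and d = 2, the boundary is [-4, 4):
print(gamma_mod(5.0, 4))    # wraps to -3.0
print(gamma_mod(-4.5, 4))   # wraps to 3.5
print(gamma_mod(1.0, 4))    # in-range samples pass through: 1.0
```

The wrap subtracts an integer multiple of d·M, so the receiver can undo it with the same operator, which is what makes the scheme transparent to in-range samples.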
At the receiver, the equalized receive signal samples are given by:

y′_i = r_ii·Γ_i,k(u_i − Σ_{j=1..i−1} (r_ij/r_ii)·x′_j) + Σ_{j=1..i−1} r_ij·x′_j + z_i  (9).

A further equalization step S^−1 together with a further modulo operation is then needed to recover the initial transmit vector U:

ŷ_i = Γ_i,k(y′_i/r_ii) = Γ_i,k(Γ_i,k(u_i − Σ_{j=1..i−1} (r_ij/r_ii)·x′_j) + Σ_{j=1..i−1} (r_ij/r_ii)·x′_j + z_i/r_ii) = Γ_i,k(u_i + z_i/r_ii)  (10).

The term u_i + z_i/r_ii is expected to be within the constellation boundaries, and thus Γ_i,k(u_i + z_i/r_ii) should be equal to u_i + z_i/r_ii. The decision û_i is then made on that sample. We note that the non-linear precoder implemented with QR matrix decomposition achieves ZF equalization, while the noise sample at the receiver input is enhanced by a factor of 1/r_ii. We also note that for a cable with identical lines, the diagonal values of the R* matrix do not have the same value; hence the noise enhancement is not the same on each line, which may lead to an unfair distribution of bit rates to the different users, depending on the level of crosstalk couplings. A major issue with non-linear precoding is the amount of processing resources required for updating the non-linear precoder. Indeed, whenever the crosstalk couplings substantially change, a new QR matrix decomposition of the normalized channel matrix I+C is required to update both the precoding matrices Q and L. Such a QR matrix decomposition requires intensive computational resources, as it is computationally equivalent to a full matrix inversion. Another issue is that the non-linear precoder breaks the orthogonality of the pilot sequences, thereby biasing the crosstalk estimates.
To overcome this issue, one can either reduce the transmit power of the pilots (leading to slower convergence), or treat the modulo-pilots as quasi-orthogonal (leading to slower convergence, if any), or not apply the modulo operation at all (leading to PSD mask violation and clipping). None of these solutions is satisfactory.
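Putting eqs. (2)-(10) together, the full Tomlinson-Harashima-style chain can be sketched end-to-end with NumPy. This is an illustrative noiseless toy with D = I; the coupling strength, constellation and matrix size are assumptions made for the example:

```python
import numpy as np

def gamma_mod(x, M=4, d=2.0):
    # Modulo operator of eq. (8), applied separately to the I and Q dimensions.
    wrap = lambda v: v - d * M * np.floor((v + d * M / 2) / (d * M))
    return wrap(np.real(x)) + 1j * wrap(np.imag(x))

rng = np.random.default_rng(2)
N = 4
C = 0.2 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
np.fill_diagonal(C, 0)                 # normalized crosstalk matrix, eq. (2), D = I

Q, R = np.linalg.qr((np.eye(N) + C).conj().T)   # eq. (4): (I+C)* = QR
Rc = R.conj().T                                  # lower triangular R*

# 16-QAM transmit points (inside the [-4, 4) boundary per dimension).
U = (rng.choice([-3, -1, 1, 3], size=N)
     + 1j * rng.choice([-3, -1, 1, 3], size=N))

# Sequential precoding with modulo, eq. (7): user i only sees users j < i.
Xp = np.zeros(N, dtype=complex)
for i in range(N):
    interf = sum(Rc[i, j] / Rc[i, i] * Xp[j] for j in range(i))
    Xp[i] = gamma_mod(U[i] - interf)
X = Q @ Xp                             # second feedforward filter (unitary)

# Noiseless channel, then receiver equalization S^-1 and modulo, eqs. (9)-(10).
Y = (np.eye(N) + C) @ X                # equals R* Xp since (I+C) = R* Q*
U_hat = gamma_mod(Y / np.diag(Rc))
print(np.max(np.abs(U_hat - U)))       # ~0: transmit constellation recovered
```

Because the modulo wrap only ever subtracts multiples of d·M, the receiver's second modulo in eq. (10) cancels it exactly, and the transmit points come back up to floating-point error.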
Epidemiology of over-the-counter drug use in community dwelling elderly: United States perspective. Among US community dwelling individuals aged > or = 65 years, about as many persons take nonprescription drugs as take prescription drugs. A review of US data from the last 2 decades indicates that the average number of over-the-counter (OTC) drugs taken daily is around 1.8, but varies with geographical area (highest in the Midwest) and race/ethnicity (lowest use among Hispanics, followed by African Americans, and highest use among Whites). Use has consistently been found to be higher in women than in men. While OTC use appears to be increasing over time, it also decreases with increasing age. The most common OTC classes used are analgesics, laxatives and nutritional supplements. Our ability to explain or to predict OTC use and change in use is poor, and further studies, particularly on use by elderly individuals of minority races, are needed.
Mastoid obliteration with hydroxyapatite--the value of high resolution CT scanning in detecting recurrent cholesteatoma. Mastoid obliteration carries a risk of enclosing cholesteatoma within the mastoid cavity. Using temporal bones obliterated with either muscle or hydroxyapatite granules, the value of high resolution CT scanning in the early detection of 'epithelial pearls' was studied. The results showed that scanning was effective in detecting small epithelial pearls within the cavity obliterated with hydroxyapatite, but not so effective when muscle was used. This is explained by the difference in CT density between epidermoid cysts and hydroxyapatite, which allows the cysts (dark shadows) to be identified easily against a white background. The authors also studied scans performed on 31 ears following mastoid obliteration with hydroxyapatite. There was no residual cholesteatoma in the obliterated area, but in one patient an area of abnormality was identified within the obliterated area, due to a cholesterol granuloma.
Bipolar disorder and aggression. In clinical practice, overt aggressive behaviour is frequently observed in patients diagnosed with bipolar disorder. It can be dangerous and complicates patient care. Nevertheless, it has not been adequately studied as a phenomenon separate from other symptoms such as agitation. The aim of this review is to provide information on the prevalence, clinical context, and clinical management of aggression in patients with bipolar disorder. The MEDLINE and PsycInfo databases were searched for articles published between 1966 and November 2008 using the combination of the keywords 'aggression' or 'violence' with 'bipolar disorder'. For the treatment searches, generic names of mood stabilisers and antipsychotics were used in combination with the keywords 'bipolar disorder' and 'aggression'. No language constraint was applied. Articles dealing with children and adolescents were not included. Acutely ill hospitalised bipolar patients have a higher risk for aggression than other inpatients. In a population survey, the prevalence of aggressive behaviour after age 15 years was 0.66% in persons without a lifetime psychiatric disorder, but 25.34% in bipolar I disorder. Comorbidity with personality disorders and substance use disorders is frequent, and it elevates the risk of aggression in bipolar patients. Impulsive aggression appears to be the most frequent subtype observed in bipolar patients. Clinical management of aggression combines pharmacological and non-pharmacological approaches. A major problem with the evidence is that aggression is frequently reported only as one of the items contributing to the total score on a scale or a subscale. This makes it impossible to ascertain specifically aggressive behaviour. Large head-to-head randomised controlled studies comparing treatments for aggressive behaviour in bipolar disorder are not yet available.
There is some evidence favouring divalproex, but it is not particularly strong. We do not know whether there are any efficacy differences among antipsychotics for this indication.
Introduction {#sec1-1}
============

Cervical cancer was the 8th most common cancer in 2012 among the Hong Kong female population (Centre for Health Protection, 2015). Unlike other cancers with broad-spectrum aetiologies, cervical cancer is primarily caused by sexually transmitted human papillomavirus (HPV) infection, in particular HPV-16 and -18 (Chen and Leung, 2016). It has been proven that HPV vaccination, preferably before initiation of sexual life, is highly effective in HPV and cervical cancer prevention (Chatterjee, 2014). In Hong Kong, two registered HPV vaccines, namely Gardasil and Cervarix, are currently available. Clinical trials have shown that not only do the two vaccines provide nearly 100% protection against dysplastic changes and cervical cancer, but they are also safe for use with limited side effects (Ferris et al., 2014; Naud et al., 2014). In light of this high efficacy and safety, HPV vaccination has been advocated by the government and non-governmental organizations as part of the dual preventive measure against cervical cancer, which also includes regular Pap smear screening (Hong Kong Cancer Fund, 2012; Globeathon, 2015). Despite extensive promotion campaigns on HPV vaccination in Hong Kong over the past decade, the vaccination rate was less than 20% among the population (Li et al., 2013). Studies revealed that the vaccination rates were as low as 9.7% among university students (Chen and Leung, 2016) and 7.2% among secondary school students (Li et al., 2013). A number of studies have demonstrated the relationship between medical education and knowledge, attitudes and practices (KAP) towards various health issues. In Hong Kong, Chen (2016) explored how personal health beliefs and knowledge contributed to the practice of HPV vaccination among female university students. The study showed that students who were more informed about cervical cancer were more likely to receive HPV vaccination.
Moreover, participants with low perceived susceptibility and high perceived barriers, such as the cost of the HPV vaccine, were less likely to receive vaccination. There was a positive association between medical education, health beliefs and HPV vaccination. Considering the low vaccination uptake rate in this locality and the vaccine's proven effectiveness, this research compared the differences in knowledge, attitude and practice towards HPV vaccination for cervical cancer prevention between medical and non-medical students at the University of Hong Kong. Our study will provide information for further policies related to medical education on cervical cancer, especially for non-medical students, and help enhance the overall practice of HPV vaccination in Hong Kong.

Material and Methods {#sec1-2}
====================

Study Design {#sec2-1}
------------

This was a cross-sectional observational study conducted from November 2015 to February 2016, and full-time undergraduates from the University of Hong Kong were recruited. Students under 18 were excluded. Approval from the Institutional Review Board of the University of Hong Kong / Hospital Authority Hong Kong West Cluster was obtained before the commencement of the study (UW 15-583). Written informed consent was obtained from all participants. Data were collected through self-reported questionnaires.

Questionnaire {#sec2-2}
-------------

The questionnaire covered demographic information, sexual risk profile, and knowledge, attitude and practice (KAP) towards HPV vaccination for cervical cancer prevention. Participants were also asked whether they had ever tested positive for HPV.

Demographics {#sec2-3}
------------

Personal particulars including age, gender, study programme and year of study were collected.

Sexual Risk Profile {#sec2-4}
-------------------

Participants were asked to indicate whether they had engaged in any sexual activity, including vaginal, oral and anal sex.
Among participants who had sexual activity, safe sex practice and the number of sexual partners were asked. Safe sex was defined as sexual intercourse with the use of a male condom.

Knowledge of HPV Vaccination {#sec2-5}
----------------------------

Participants' knowledge was assessed by 6 questions: (1) whether HPV can cause penile cancer, (2) whether HPV-16 can cause cancer in humans, (3) the number of HPV vaccines available in Hong Kong, (4) the number of injections in a full course of HPV vaccination in Hong Kong, (5) whether women receiving HPV vaccines can still develop cancer, and (6) whether women who have received HPV vaccination need to take a Pap smear test to screen for cervical cancer.

Attitude towards HPV Vaccination {#sec2-6}
--------------------------------

Participants were asked to indicate their views towards HPV vaccination regarding its usefulness for men, its potential to promote sexual risk behaviour (i.e. unsafe sex), and whether they would recommend the HPV vaccine to their families. Participants who had received or planned to receive the HPV vaccine were asked about their views on the safety of HPV vaccination, its effectiveness in cancer prevention, and sexual partner protection. For those who had not received HPV vaccination, their rationales were explored, including lack of prior knowledge about HPV vaccination, the price, effectiveness, perceived ineffectiveness due to previous sexual exposure, side effects, and perceived risk of HPV infection.

Practice of HPV Vaccination {#sec2-7}
---------------------------

Participants were asked to indicate their HPV vaccination status: whether they had (1) completed the full course, (2) not completed the full course, (3) scheduled vaccination in the coming 6 months, or (4) never been vaccinated and not scheduled vaccination in the coming 6 months.

Statistical Analysis {#sec2-8}
--------------------

All analyses were performed using SPSS 23 statistical software. Incomplete questionnaires were excluded.
Descriptive statistics were used for the demographic items and attitudes towards HPV vaccination. The chi-square test was applied to examine the differences in knowledge and practice between 4 pairs of groups: (1) medical and non-medical students, (2) female and male respondents, (3) respondents with and without sexual experience, and (4) junior and senior medical students. A P-value of less than 0.05 was considered statistically significant.

Results {#sec1-4}
=======

Demographics {#sec2-9}
------------

A total of 512 questionnaires was collected. After exclusion of incomplete responses, data analysis was performed on the remaining 420 responses. Among the respondents shown in [Table 1](#T1){ref-type="table"}, 43.1% were male and 56.9% were female, and the mean age was 19.8 years. In terms of study programme, 58.1% were medical; and among them, 58.2% were at Year 3 or above. None of the respondents had previously been found to be HPV positive ([Table 1](#T1){ref-type="table"}).

###### Demographics, Risk Profile and HPV Vaccination Practice

1.1 Characteristics of Respondents, N=420

| Demographics | No. (%) |
|---|---|
| Male | 181 (43.1) |
| Female | 239 (56.9) |
| Mean Age (Median) | 19.8 |
| Medical Student | 244 (58.1) |
| \< Year 3 | 102 (41.8) |
| ≥ Year 3 | 142 (58.2) |
| Non-Medical Student | 176 (41.9) |
| \< Year 3 | 119 (67.6) |
| ≥ Year 3 | 57 (32.4) |
| HPV positive | 0 (0.0) |

1.2 Risk Profile of Respondents, N=420

| Sexual Behaviour | No. (%) |
|---|---|
| Sexually active | 35 (8.3) |
| Unsafe sexual practice, N=35 | 22 (62.9) |
| ≥2 sexual partners, N=35 | 6 (17.1) |

1.3 HPV Vaccination Status, N=420

| Vaccination Status | Male, No. (%) | Female, No. (%) |
|---|---|---|
| Completed the Whole Course | 12 (2.9) | 86 (20.5) |
| Currently Participating | 3 (0.7) | 13 (3.1) |
| Scheduled in 6 Months | 1 (0.2) | 9 (2.1) |
| Not Scheduled in 6 Months | 165 (39.3) | 131 (31.2) |

1.4 HPV Vaccination Status between Different Groups

| Respondents' Characteristics | Completed the Course / Currently Participating / Scheduled in the Coming 6 Months (%) | Not Scheduled in the Coming 6 Months (%) | P-value (α=0.05) |
|---|---|---|---|
| Major of Study | | | 0.671 |
| Medicine, N=244 | 74 (30.3) | 170 (67.7) | |
| Non-Medicine, N=176 | 50 (28.4) | 126 (71.6) | |
| Gender | | | **\<0.001** |
| Male, N=181 | 16 (8.8) | 165 (91.2) | |
| Female, N=239 | 108 (45.2) | 131 (54.8) | |
| Year of Study (Medicine) | | | 0.267 |
| \< Year 3, N=102 | 27 (26.5) | | |
| ≥ Year 3, N=142 | 47 (33.1) | | |
| Risk Profile | | | 0.302 |
| Sexually Active, N=35 | 13 (37.1) | | |
| Sexually Non-Active, N=385 | 111 (28.8) | | |

HPV, Human Papillomavirus; bolded values significant at P\<0.05

Risk Profile {#sec2-10}
------------

A minority of respondents (N=35, 8.3%) were sexually active. Among them, 62.9% had practiced unsafe sex.

Knowledge {#sec2-11}
---------

### Comparison between Medical and Non-Medical Students {#sec3-11}

Comparing medical and non-medical students, the former had significantly more comprehensive knowledge of HPV. For example, 92.6% (N=226) of medical students correctly stated that HPV-16 is a cause of cancer in humans, while only 78.4% (N=138) of non-medical students could correctly identify HPV-16 as a carcinogen ([Table 2](#T2){ref-type="table"}).

###### Knowledge of HPV Vaccination in Medical and Non-Medical Students

| | Medical, N=244 (% of correct responses) | Non-Medical, N=176 (% of correct responses) | P-value (α=0.05) |
|---|---|---|---|
| 1\. Question on HPV and penile cancer | 178 (73.0) | 94 (53.4) | **\<0.001** |
| 2\. Question on HPV-16 and malignancy | 226 (92.6) | 138 (78.4) | **\<0.001** |
| 3\. Question on HPV vaccinations available in Hong Kong | 169 (69.3) | 80 (45.5) | **\<0.001** |
| 4\. Question on full course of HPV vaccination | 200 (82.0) | 110 (62.5) | **\<0.001** |
| 5\. Question on efficacy of HPV vaccination on cervical cancer prevention | 223 (91.4) | 144 (59.0) | **\<0.001** |
| 6\. Question on screening test for cervical cancer after HPV vaccination | 229 (93.9) | 145 (59.4) | **\<0.001** |

Ans, correct answer; HPV, Human Papillomavirus; bolded values significant at P\<0.05

Medical students also performed significantly better in knowledge about HPV vaccination (P\<0.001). More than three quarters of medical students (82.0%) understood that 3 injections were necessary for a complete vaccination regimen, while fewer than two-thirds of non-medical students (62.5%) answered correctly (P\<0.001). Also, 69.3% of medical students were aware of the 2 different types of vaccines available in Hong Kong, while less than half (45.5%) of non-medical students had this knowledge (P\<0.001) ([Table 2](#T2){ref-type="table"}). In addition, medical students were more knowledgeable about cervical cancer prevention. A majority of medical students (93.9%) realized the importance of the Pap smear test after HPV vaccination to prevent cervical cancer, but only 59.4% of non-medical students were aware of this preventive measure (P\<0.001) ([Table 2](#T2){ref-type="table"}). Among the respondents, 142 (58.2%) were senior medical students (i.e. Year 3 or above). Senior medical students performed significantly better than junior medical students on all items (P\<0.05), except for the question of whether women would develop cervical cancer after HPV vaccination (P=0.719) ([Table 3](#T3){ref-type="table"}).
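The between-group comparisons reported above can be reproduced with a standard chi-square test of independence on the published counts. A minimal sketch in Python, using SciPy rather than the SPSS 23 procedure the authors actually used, applied to the Table 2 counts for the HPV-16 question (226 of 244 medical vs 138 of 176 non-medical students answering correctly); variable names are illustrative:

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table from Table 2, question 2 (HPV-16 and malignancy):
# rows = medical / non-medical students, columns = correct / incorrect
observed = [
    [226, 244 - 226],  # medical students
    [138, 176 - 138],  # non-medical students
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
# p falls below 0.001, matching the significance level reported in Table 2
```

Note that `chi2_contingency` applies Yates' continuity correction by default for 2×2 tables, so the statistic may differ slightly from an uncorrected SPSS output while leading to the same conclusion.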
###### Knowledge of HPV Vaccination in Junior and Senior Medical Students

| | ≥ Year 3, N=142 (% of correct responses) | \< Year 3, N=102 (% of correct responses) | P-value (α=0.05) |
|---|---|---|---|
| 1\. Question on HPV and penile cancer | 117 (82.4) | 61 (59.8) | **\<0.001** |
| 2\. Question on HPV-16 and malignancy | 136 (95.8) | 90 (88.2) | **0.026** |
| 3\. Question on HPV vaccinations available in Hong Kong | 123 (86.6) | 46 (45.1) | **\<0.001** |
| 4\. Question on full course of HPV vaccination | 123 (82.6) | 77 (75.5) | **0.026** |
| 5\. Question on efficacy of HPV vaccination on cervical cancer prevention | 129 (90.8) | 94 (92.2) | 0.719 |
| 6\. Question on screening test for cervical cancer after HPV vaccination | 141 (99.3) | 88 (86.3) | **\<0.001** |

Ans, correct answer; HPV, Human Papillomavirus; bolded values significant at P\<0.05

Among the respondents, 43.1% were male and 56.9% were female. Male and female respondents had similar knowledge towards HPV vaccination, except for the question about the number of injections in a full course of vaccination in Hong Kong, on which the female respondents performed significantly better (P\<0.001) ([Table 4](#T4){ref-type="table"}).

###### Knowledge of HPV Vaccination in Male and Female Respondents

| | Male, N=181 (% of correct responses) | Female, N=239 (% of correct responses) | P-Value (α=0.05) |
|---|---|---|---|
| 1\. Question on HPV and penile cancer | 119 (65.7) | 153 (64.0) | 0.713 |
| 2\. Question on HPV-16 and malignancy | 156 (86.2) | 208 (87.0) | 0.802 |
| 3\. Question on HPV vaccinations available in Hong Kong | 112 (61.9) | 137 (57.3) | 0.347 |
| 4\. Question on full course of HPV vaccination | 116 (64.1) | 194 (81.2) | **\<0.001** |
| 5\. Question on efficacy of HPV vaccination on cervical cancer prevention | 155 (85.6) | 209 (87.4) | 0.588 |
| 6\. Question on screening test for cervical cancer after HPV vaccination | 162 (89.6) | 212 (88.7) | 0.795 |

Ans, correct answer; HPV, Human Papillomavirus; bolded values significant at P\<0.05

Comparison between respondents with and without sexual experience showed that only 8.3% of respondents had sexual experience. Respondents with and without sexual experience showed no statistically significant difference in knowledge about HPV vaccination (P\>0.05), except for the question on the efficacy of HPV vaccination in cervical cancer prevention (P=0.001) ([Table 5](#T5){ref-type="table"}).

###### Knowledge of HPV Vaccination in Respondents with and without Sexual Experiences

| | Experience, N=35 (% of correct responses) | No Experience, N=385 (% of correct responses) | P-Value (α=0.05) |
|---|---|---|---|
| 1\. Question on HPV and penile cancer | 21 (60.0) | 251 (65.2) | 0.538 |
| 2\. Question on HPV-16 and malignancy | 30 (85.7) | 334 (86.8) | 0.863 |
| 3\. Question on HPV vaccinations available in Hong Kong | 19 (54.3) | 230 (29.7) | 0.529 |
| 4\. Question on full course of HPV vaccination | 23 (65.7) | 287 (74.5) | 0.255 |
| 5\. Question on efficacy of HPV vaccination on cervical cancer prevention | 24 (68.6) | 340 (88.3) | **0.001** |
| 6\. Question on screening test for cervical cancer after HPV vaccination | 29 (82.6) | 345 (89.6) | 0.221 |

Ans, correct answer; HPV, Human Papillomavirus; bolded values significant at P\<0.05

Practice {#sec2-12}
--------

Among the 420 students shown in [Table 1](#T1){ref-type="table"}, about a quarter (23.4%) had completed the whole course of the vaccine; 3.8% were currently participating; and 2.3% had vaccination scheduled in the coming 6 months. The majority (70.5%) were never vaccinated and had no vaccination scheduled in the coming 6 months. The characteristics of respondents were grouped according to their study programme, gender, year of study and risk profile.
Vaccination status was compared between the groups. Only the comparison between female and male students showed a statistically significant difference (P\<0.001). About half of the female respondents (54.8%) were not vaccinated and had no vaccination scheduled in the coming 6 months, while an overwhelming majority (91.2%) of the male respondents reported this status ([Table 1](#T1){ref-type="table"}).

Attitude {#sec2-13}
--------

More than three quarters (84.0%) of medical students agreed that HPV vaccination was useful for men, while only 66.5% of non-medical students agreed. Also, 91.0% of medical students and 90.9% of non-medical students disagreed that HPV vaccination would promote high-risk sexual behaviour. Moreover, 86.1% of medical students would recommend the HPV vaccine to their family members and friends, compared with only 78.4% of non-medical students ([Table 6](#T6){ref-type="table"} and [7](#T7){ref-type="table"}).

###### Positive Attitudes Towards HPV Vaccination in Medical Students

| Medical Students, N=74 | Agree (%) | Disagree (%) |
|---|---|---|
| HPV vaccine is effective in cancer prevention | 74 (100.0) | 0 (0.0) |
| HPV vaccine is safe for injection | 74 (100.0) | 0 (0.0) |
| Receiving HPV vaccination can effectively protect themselves and their sexual partners | 73 (98.6) | 1 (1.4) |

HPV, Human Papillomavirus

###### Positive Attitudes Towards HPV Vaccination in Non-Medical Students

| Non-Medical Students, N=50 | Agree (%) | Disagree (%) |
|---|---|---|
| HPV vaccine is effective in cancer prevention | 49 (98.0) | 1 (2.0) |
| HPV vaccine is safe for injection | 49 (98.0) | 1 (2.0) |
| Receiving HPV vaccination can effectively protect themselves and their sexual partners | 48 (96.0) | 2 (4.0) |

HPV, Human Papillomavirus

From [Table 6](#T6){ref-type="table"} and [7](#T7){ref-type="table"}, among the 124 students who had participated in or scheduled HPV vaccination,
nearly all (99.2%) agreed that the HPV vaccine was safe and effective in cancer prevention. Among the 296 students who were never vaccinated and had not scheduled HPV vaccination, 58.8% had never thought about receiving it. There was a higher proportion of non-medical students (66.7%) who had never considered receiving the HPV vaccine than medical students (52.9%). Around half (48.3%) considered the HPV vaccine too expensive, and the proportion of medical students (47.1%) with this belief was similar to that of non-medical students (50.0%). Some students (11.1%) thought that the HPV vaccine was ineffective, and 4.4% thought that HPV vaccination would be ineffective because of previous sexual activities. One-third (33.1%) were concerned about the side effects of HPV vaccination. There was a higher proportion of non-medical students (38.1%) with this concern compared to medical students (29.4%). A majority of students (66.6%) thought they had a low risk of HPV infection. The proportion of medical students (68.2%) with this risk perception was similar to that of non-medical students (64.3%) ([Table 8](#T8){ref-type="table"} and [9](#T9){ref-type="table"}).
###### Negative Attitudes Towards HPV Vaccination in Medical Students

| Medical Students, N=170 | Agree (%) | Disagree (%) |
|---|---|---|
| Never considered receiving HPV vaccine | 90 (52.9) | 80 (47.1) |
| HPV vaccine is too expensive | 80 (47.1) | 90 (52.9) |
| HPV vaccine is not effective | 17 (10.0) | 153 (90.0) |
| HPV vaccine would be ineffective because of previous sexual activities | 6 (3.5) | 164 (96.5) |
| Concerned about side effects of receiving HPV vaccination | 50 (29.4) | 120 (70.6) |
| Self-perceived low risk of HPV infection | 116 (68.2) | 54 (31.8) |

HPV, Human Papillomavirus

###### Negative Attitudes Towards HPV Vaccination in Non-Medical Students

| Non-Medical Students, N=126 | Agree (%) | Disagree (%) |
|---|---|---|
| Never considered receiving HPV vaccine | 84 (66.7) | 42 (33.3) |
| HPV vaccine is too expensive | 63 (50.0) | 63 (50.0) |
| HPV vaccine is not effective | 16 (12.7) | 110 (87.3) |
| HPV vaccine would be ineffective because of previous sexual activities | 7 (5.6) | 119 (94.4) |
| Concerned about side effects of receiving HPV vaccination | 48 (38.1) | 78 (61.9) |
| Self-perceived low risk of HPV infection | 81 (64.3) | 45 (35.7) |

HPV, Human Papillomavirus

###### General Attitudes towards HPV Vaccination in Medical Students

| Medical Student, N=244 | Agree (%) | Disagree (%) |
|---|---|---|
| HPV vaccination is useful for males | 205 (84.0) | 39 (16.0) |
| HPV vaccination promotes high-risk sexual behaviour | 22 (9.0) | 222 (91.0) |
| Would recommend HPV vaccine to family and friends | 210 (86.1) | 34 (13.9) |

HPV, Human Papillomavirus

###### General Attitudes Towards HPV Vaccination in Non-Medical Students

| Non-Medical Student, N=176 | Agree (%) | Disagree (%) |
|---|---|---|
| HPV vaccination is useful for males | 117 (66.5) | 59 (33.5) |
| HPV vaccination promotes high-risk sexual behaviour | 16 (9.1) | 160 (90.9) |
| Would recommend HPV vaccine to family and friends | 138 (78.4) | 38 (21.6) |

HPV, Human Papillomavirus

Discussion {#sec1-5}
==========

In this study, we found that medical students had broader knowledge of the nature of HPV, cervical cancer and HPV vaccination. Medical students received systematic education on health-related issues (Han, 2012). With increased exposure to the pathophysiological knowledge of malignancies, it was not surprising to see medical students having more comprehensive knowledge of cervical cancer, which was one of the most prevalent cancers in this locality. In contrast, non-medical students had less opportunity to come into contact with general medical knowledge, so it was reasonable to see that they were less familiar with cervical cancer or HPV vaccination. In addition to the different levels of knowledge, medical and non-medical students also had different attitudes towards HPV vaccination. For instance, 84.0% of medical students believed HPV vaccination was useful for men, while only around 66.5% of non-medical students held this belief. In general, the more positive attitude in medical students could be attributed to more comprehensive knowledge about HPV vaccination. By having a greater understanding of the aetiology and impact of cervical cancer, medical students would tend to have a more positive perception of HPV vaccination, and would be more willing to recommend vaccination. This finding was compatible with a previous study which showed that students with a higher level of medical knowledge were more enthusiastic to be vaccinated (Chen and Leung, 2016). However, among participants who had not been vaccinated, the attitudes of medical and non-medical students were similar. Around half of those not vaccinated believed that HPV vaccines were too expensive, one-third were concerned about the side effects of the vaccine, and two-thirds perceived themselves to be at low risk of HPV infection.
From the study, the reasons for not being vaccinated were mainly related to personal health beliefs and cost-effectiveness evaluation. While knowledge about cervical cancer and HPV played a role in determining attitudes towards HPV vaccination, other factors, such as financial concern, health perception and cost-effectiveness evaluation, also came into play (Li et al., 2013). In the study, about a quarter of respondents were in the process of receiving, or had completed, the full course of HPV vaccination. While medical students had a higher level of knowledge and a more positive attitude towards HPV vaccination, there was no significant difference in vaccination status between medical and non-medical students. This was compatible with previous studies in Belgium by Deriemaeker (2014) and in Turkey by Borlu (2016). The underlying reasons could be attributed to the high cost of HPV vaccines, concern about side effects, and self-perceived risk of HPV infection. Regardless of knowledge and attitude, financial considerations may hinder HPV vaccination. In addition, despite the minimal side effects of HPV vaccination, both medical and non-medical students may worry about potential harm. Besides, risk perception of HPV infection could explain why differences in knowledge and attitude did not translate into vaccination practice. Both groups of university students may perceive themselves as having a low risk of HPV infection because of abstinence or trust in their partners. With the above health beliefs, students may choose not to be vaccinated even if they had the knowledge and positive attitudes towards HPV vaccination. Senior medical students had more comprehensive knowledge of HPV vaccination than junior medical students in this study, suggesting that students accumulated knowledge about HPV and its vaccination through medical education. A study by Chen (2016) also provided compatible findings.
Through repeated exposure in classes and study materials, students would consolidate their knowledge of HPV vaccination. However, the difference in knowledge level between junior and senior medical students did not affect HPV vaccination status. This suggested that knowledge level was not the only factor affecting the practice of HPV vaccination. Self-perceived susceptibility, barriers and the cost of vaccination all played a role in health behaviours. In the study, male and female respondents had similar knowledge of HPV vaccination. This may be attributed to the fact that they received health-related information through comparable channels. At the time of the study, HPV vaccination was advertised through leaflets, TV advertisements and public-transport posters by the government and drug companies. However, there was a lack of gender-specific information. For example, question 1 tested whether respondents knew that HPV may cause penile cancer in men. Only two-thirds of male (65.7%) and female (64.0%) respondents knew the correct answer, showing that even some men did not realise HPV vaccination would be useful for them. In contrast, there was a statistically significant difference in vaccination status between men and women. Nearly half of the female respondents (45.2%) had received or were in the process of receiving HPV vaccination, while less than one-tenth of male respondents (8.8%) had done so. This could be attributed to the skewed promotion campaigns in Hong Kong, which mostly focused on the prevention of cervical cancer by HPV vaccination. Information such as the prevention of genital warts or penile cancer by HPV vaccination seldom featured in the promotion campaigns. Therefore, men would be less aware of how HPV vaccination directly benefited them, and thus had less motivation to receive vaccination. There was no statistically significant difference in knowledge or vaccination status between people with and without sexual experience.
Further research may be required to explore these correlations, as only 35 out of 420 respondents had sexual experience. In light of the study results, there were several recommendations to promote HPV vaccination for cervical cancer prevention in the community. Medical students had significantly more comprehensive knowledge related to HPV vaccination than non-medical students. This highlighted the need to include relevant general medical knowledge, especially on HPV vaccination, in the curricula of non-medical students. On the other hand, in view of the comparably low vaccination rates among all respondents despite their differences in knowledge and attitude, further campaigns could focus on preparing students for the contemplation stage of HPV vaccination. Obstacles to HPV vaccination, such as low perceived susceptibility, concern about side effects and the high cost of the vaccine, should be tackled. In conclusion, medical students in Hong Kong, especially those in senior years, had more comprehensive knowledge of and more positive attitudes towards HPV vaccination than non-medical students. However, the practice of HPV vaccination was comparable between medical and non-medical students. This suggested the role of other factors, including health beliefs, risk perception and financial considerations, in HPV vaccination for cervical cancer prevention.
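A methodological aside on the small sexually-active subgroup discussed above: with only 35 respondents in one arm, the expected cell counts of a chi-square test can fall below 5, and Fisher's exact test is a common alternative in that situation. A hypothetical sketch in Python (SciPy-based and not part of the authors' SPSS analysis), using the counts behind the efficacy item in Table 5 (24 of 35 vs 340 of 385 correct):

```python
from scipy.stats import fisher_exact

# 2x2 table from Table 5, question 5 (efficacy of HPV vaccination):
# rows = with / without sexual experience, columns = correct / incorrect
table = [
    [24, 35 - 24],     # respondents with sexual experience
    [340, 385 - 340],  # respondents without sexual experience
]

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.4g}")
```

An odds ratio below 1 here indicates that sexually experienced respondents were less likely to answer this item correctly, consistent with the significant difference the authors report for this question.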
Cooperative assembly of a nativelike ubiquitin structure through peptide fragment complexation: energetics of peptide association and folding. Peptide fragments corresponding to the N- and C-terminal portions of bovine ubiquitin, U(1-35) and U(36-76), are shown by NMR to associate in solution to form a complex of modest stability (Kassn ≈ 1.4 × 10^5 M^-1 at pH 7.0), with NMR features characteristic of a nativelike structure. The complex undergoes cold denaturation, with temperature-dependent estimates of stability from NMR indicating a ΔCp° for fragment complexation in good agreement with that determined for native ubiquitin, suggesting that fragment association results in the burial of a similar hydrophobic surface area. The stability of the complex shows appreciable pH dependence, suggesting that ionic interactions on the surface of the protein contribute significantly. However, denaturation studies of native ubiquitin in the presence of guanidine hydrochloride (Gdn.HCl) show little pH dependence, suggesting that ionic interactions may be "screened" by the denaturant, as recently suggested. Examination of the conformation of the isolated peptide fragments has shown evidence for a low population of nativelike structure in the N-terminal beta-hairpin (residues 1-17) and weak nascent helical propensity in the helical fragment (residues 21-35). In contrast, the C-terminal peptide (36-76) shows evidence in aqueous solution, from some Hα chemical shifts, for nonnative φ and ψ angles; nonnative alpha-helical structure is readily induced in the presence of organic cosolvents, indicating that tertiary interactions in both native ubiquitin and the folded fragment complex strongly dictate its structural preference. The data suggest that the N-terminal fragment (1-35), where interaction between the helix and hairpin requires the minimum loss of conformational entropy, may provide the nucleation site for fragment complexation.
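As a back-of-envelope check (not from the abstract itself: R, T, and the unit conversion are standard assumptions), the reported association constant translates into a free energy of complexation via ΔG° = -RT ln K:

```python
import math

# Standard gas constant and an assumed ~25 C temperature; the abstract
# only reports Kassn ~ 1.4e5 M^-1 at pH 7.0.
R = 8.314        # J/(mol*K)
T = 298.15       # K
K_assn = 1.4e5   # M^-1

# dG = -RT ln K, converted to kJ/mol
dG_kJ = -R * T * math.log(K_assn) / 1000.0
print(round(dG_kJ, 1))  # ~ -29.4 kJ/mol
```

A value near -29 kJ/mol is indeed "modest stability" compared with intact single-chain proteins, consistent with the abstract's description.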
The busy schedule of Dolph Lundgren continues. His latest venture? The sequel to Kindergarten Cop (1990), which he is filming right now in Vancouver, Canada. If you can't believe it, just check out the pictures. The new film will be directed by Don Michael Paul, director of Jarhead 2: Field of Fire, from a script by David H. Steinberg, who wrote 'American Pie.' The new kindergarten cop is a leading-man type with an Indian sidekick named Sanjit. They're on the trail of a missing flash drive from the federal witness protection program. Somehow it's wound up in a kindergarten class. Flimsy premise, but it puts him side by side with a beautiful teacher and they hit it off. There are also bad guys involved; this time they're Albanian. Well, there is little else to say about this. I'd forgotten that this movie was even a thing, even as a DTV.
You might try searching for tortes, as they are known. Austrian baking is vast, but many different European recipes use nut flours (almond, chestnut, pistachio) for cake-type and tart-like baked goods (Torta di Castagnaccio from Italy, Linzertorte, etc.) Thank you both! (sharon, I sent you a separate email..) Justloafing, I am afraid to try spelt or kamut because of my son's reaction to wheat -- basically he was not allergic (tested at 18 mos), then he was (tested at 3 years), then he wasn't (tested at 5 years), and now he is again (tested last month). Basically his intestines have been barraged and reacted poorly to the reintroduction of wheat -- so much so that it has affected his colon. I'm afraid that even spelt and kamut (at $4/half pound at my health food store) are still going to be damaging... I can't afford (literally, cost of the organic kamut plus the cost of the gastroenterologist visits) to take the chance. I'm following this post closely as my kids are allergic to eggs, wheat and soy, and one of them is also allergic to nuts and dairy... interested to know what you've been able to tinker with. I don't know if you've tried any of the mixes available but if it is only one layer and you run out of time to experiment (or to sort of know what to aim for), the ones made by Cherrybrook Kitchen work well and the only thing artificial in them is xanthan gum. Are you using regular cow's milk or soy/rice/etc? I have made instant pudding mixes with soy milk and they just do not set... same with rice or other grain-based milks. If we want pudding it has to be the cornstarch-thickened type. Options: 1. If you still do not have setting with regular or skim cow's milk, you can try adding some whey powder. 2. On the other hand, if you want to see if it sets with just the pudding, make the pudding first, let that set, then add the peanut butter, formula 1 etc. before pouring it into the crust. 3. Gelatin would be 1 tbsp.
-- sprinkle it on top of 1/4 cup water and let it bloom, then warm in the microwave until clear (usually only about 30 sec.) then temper in some of the pudding before stirring it into the larger amount of pudding. Actually, it might help to temper it with the peanut butter so that the fats melt a little. HTH. I'm surprised nobody has mentioned any type of protein-based ice-cream... think a mousseline forcemeat run through the ice cream machine. I have made scallop (very delicious, almost fluffy), chicken liver, and lobster (a bisque, frozen into scoopable consistency. Fabulous.) When I was at Le Cirque aeons ago we tried infusing many different herbs and spices into creme anglaise, one I remember distinctly was tobacco. Not my taste but I don't smoke cigars. Are you using a regular churn or a Paco-jet? That might influence your choices for texture; you can't get chunks with a Paco-jet so big chiffonade of basil or something like that would be out unless you stirred them in at the end. And I totally thumbs-up the cheese ice cream! I use a sharp NY cheddar and it is amazing. Curry and pecan is a classic (well, with a nod to Herme...) Hi everyone! Haven't been here for awhile, but I am trying to find some information or leads about feeding a sourdough starter non-wheat flour. From what I know this should not be that strange, since the yeast feeds on carbohydrates. My son and daughter have recently been tested for allergies -- wheat and eggs have come up on the banned list for both. My son is allergic to dairy and casein, nuts and peanuts (aside from fish and shellfish) while my daughter is not. I really like the moistness and keeping qualities that sourdough starters give homemade bread, especially in the absence of eggs (that so many gluten-free breads call for). I do know that some traditional Chinese/ other Asian cultures' breads are risen with a starter based on rice, but I can't find any constructive info on the 'net. Thanks for any help! 
I have seen (and used) recipes where this is not done at all - the sugar gets dissolved in the heating milk and the yolks are simply whisked to break them up. So no, you don't need to get any kind of fluff from the eggs/sugar. As to why it is still in many recipes, perhaps to ensure that you don't forget to add the sugar?? Not to add another volume to the pile, but How Baking Works by Paula Figoni (I have both, second edition larger format than the first) is a great book for this; she even explains and has exercises for seeing the difference between fats in cake batters. In it, I learned that commercial bakeries have something to turn to called liquid hi-ratio emulsifying shortening -- a pourable, opaque/cloudy shortening that makes it possible to mix a finely grained, moist cake in one step. Fascinating and instructional. Also for tenderness, the reason hi-ratio cakes are called that is the quantity of sugar (by weight) is equal to or exceeds the flour; sugar tenderizes by absorbing water, thus minimizing gluten development; in a hi-ratio cake, the sugar is mixed with the flour so that when it is wet it (the flour) won't produce too much gluten. For the most part, creaming the butter and sugar makes a cake light by incorporating air (sugar's crystalline shape and butter's plasticity hang on to maximum air). Lightness and tenderness are seemingly opposite, but mixing method and how/where you employ the sugar matter a lot. hth! Are you straining the infusion or letting it sit? As with andiesenji, I find that milkfat will pull out some of the harsher elements of the ginger, so poaching it would help; also instead of grating it try cutting into coins instead... less busted surface area will probably help mellow out the flavor. If nothing else works, maybe just use the ginger juice and not the whole root. If I'm not mistaken, this becomes Tant pour Tant? I have had success buzzing the coarser ones with the sugar called for in a food processor, then sifting.
I like the quality and color of the pistachio flour from American Almond, but it is slightly coarse. HTH. hope everything goes swimmingly! Haven't been here to cheer you on but waiting for pix as we speak... BTW, sometimes a wedding cake IS the dessert. I've had several brides order 3 different sizes of a particular favorite dessert, set them up on a stand, and call it a day. My best friend had a giant stand made and filled it with custard tartlets, native Filipino sweets and coconut candy (this was after I had to send my regrets about not being able to be there to make her wedding cake -- she moved it up 3 months and I wasn't prepared to leave.) As with any work of art, you also have to consider that a wedding cake is usually commissioned -- so it should be made to the client's aesthetic wishes as however they were expressed to the maker of the cake. Ergo, a bride will show a baker stuff from magazines and books, etc., and the designer knows that whatever they put out into the media is going to be used at least as a template for other cakes. If no visual guidance is given (a rarity, and for myself at least, a mixed blessing) the baker can feel free to execute a design of their own creation. The design process is at least as intensive as the baking, and takes just as much effort. Those "really creative" cakes that are copies of favorite things? I know several companies have sued over the Vuitton logo, John Deere tractors, etc. being copied in cake. rice pudding hits both counts. Anything with coconut milk -- custard and cake as well as ice cream. Something I make in my Asian Desserts class that everyone loves is a baked yucca pudding with caramel milk topping. It's a traditional Filipino dessert (Bibingkang Kamoteng-kahoy or Kasaba Bibingka) but is much loved everywhere -- the latino dishwashers called it theirs, Pichet Ong had 4 pieces, and the student assistants of all cultures fight over the leftovers. 
another thing you might want to try if the kids make the candy themselves (rolling and cutting etc) is making impressions in the shapes with smaller cutters, or stamps, or other textured (food safe) stuff like brushes. It would be like clay that you can eat.
<?php

namespace Rakit\Validation\Rules;

use Rakit\Validation\Rule;

class Nullable extends Rule
{
    /**
     * Check the $value is valid
     *
     * @param mixed $value
     * @return bool
     */
    public function check($value): bool
    {
        return true;
    }
}
Early this morning, Mesa PD tased and arrested a man who answered the door with "war paint" streaked across his face, and who on other occasions had greeted police armed with a sword. Neighbors in the 500 block of Alma School Road called police because they heard yelling and banging from the home of Matthew Andersen, 30. According to the police report, a neighbor recorded Andersen raving about wanting to use an arsenal of guns to shoot people. Apparently, from previous interaction, police also knew Andersen carried throwing knives. When police arrived, Andersen said he would shoot the officers, who also noticed he had smeared what looked like blood across his face, according to the report. Police lured Andersen from the home, tased him, and when they checked inside they found a broken window punched out and a puddle of blood on the ground, a mattress barricading a door, a loaded sawed-off shotgun, a rifle, handgun, and myriad other tactical gear and weapons scattered through his house. Mesa PD booked Andersen for threatening and intimidating, disorderly conduct, and weapon possession by a prohibited possessor.
The flexible circuit industry requires adhesives for polyimide film and copper which can withstand elevated temperatures and a variety of harsh solvents and chemicals. During the many preparation and processing steps for circuit manufacture, these solvents and chemicals can cause an adhesive to swell. This swelling leads to blister formation and/or delamination, which results in reduced circuit yields. The application of heat, such as in soldering, can similarly cause circuit failures. In preparing multilayer flexible circuits, manufacturers need adhesive-coated polyimide films which can effectively encapsulate patterned metal circuits and also provide a planar surface for subsequent lamination.
<h2>Sign up</h2>

<%= simple_form_for(resource, :as => resource_name, :url => registration_path(resource_name)) do |f| %>
  <%= f.error_notification %>

  <div class="form-inputs">
    <%= f.input :username, :required => true, :autofocus => true %>
    <%= f.input :email, :required => true %>
    <%= f.input :password, :required => true %>
    <%= f.input :password_confirmation, :required => true %>
  </div>

  <div class="form-actions">
    <%= f.button :submit, "Sign up" %>
  </div>
<% end %>

<%= render "devise/shared/links" %>
There will be one fewer Houston brew on tap and on the shelves. Two-year-old Fort Bend Brewing Co. revealed today that it has ceased operations at its Missouri City facility as of January 31. Owners Ty and Sharon Coburn shared the news on Fort Bend Brewing's website and Facebook: Due to various circumstances, it is with regret we must inform you that Fort Bend Brewing Company ceased brewing operations December 30th, 2014, and permanently closed on January 31st, 2015. FBBC has been brewing high quality beers for over 2 years, and has been a proud contributor to many activities and charities in the local community. We gave it all we had to build a brand that we were proud of. For all our cynics and disparagers, you know who you are, well, my mother taught me if I don’t have anything good to say about someone, then don’t say anything at all – enough said.. To all of our supportive retailers and patrons, we sincerely thank you for your support during the past years, you made it worthwhile. To our awesome volunteers, we couldn’t have done it without you. With the warmest gratitude from the bottom of our heart, thank you, we will miss you. On to the next chapter…. Ty and Sharon Coburn As craft beer continues to expand its reach across the nation, Houston-area brewers have found a growing audience appreciative of the art involved in making a fine brew, so much so that people are known to wait in long lines for new releases. While Fort Bend Brewing didn't have the cult-like following of Saint Arnold, Karbach, Southern Star and Buffalo Bayou Brewing, the brand had a loyal following nonetheless, especially in Missouri City, Stafford and Sugar Land. Unfortunately, it wasn't enough to sustain the cost of running a brewery — which is very costly, as noted by the Houston Chronicle. While several up-and-coming breweries have folded within the last four years, Fort Bend Brewing Company is one of the better-known brewers in town to close.
With a new crop of brewers like Allen's Landing Brewing and 11 Below on the way, plus the rise of dedicated craft beer bars, there's still hope that Houston's new breweries can survive in what's quickly becoming a crowded market.
The present invention relates to a supporting device for prefabricated units, in particular for buildings having a metallic structure. Supporting devices for prefabricated units made of concrete or the like are known. In the field of buildings having a concrete supporting structure, appropriate brackets are sometimes provided in order to support prefabricated units which are also made of concrete, such as for example prefabricated panels; such brackets are formed monolithically with the supporting element or with the supported element, protrude from it, and are meant to engage in seats provided for this purpose in the supported element or in the supporting element, or simply form a resting surface for the supported element or for the supporting element. Other devices for supporting prefabricated units are constituted by brackets which are rigidly coupled, during installation, by welding or bolting, to steel inserts embedded beforehand in the units. These supporting devices for prefabricated units have the problem of difficulty in performing installation and of poor precision in positioning the prefabricated unit with respect to the supporting structure. EP-423,660 in the name of these same Applicants discloses a supporting and anchoring device for prefabricated units in particular made of concrete or the like, which is substantially constituted by a bush-like seat formed in one face of the supporting element and by a supporting element which is detachably inserted in the seat and protrudes from the seat and from the face of the supporting element, so as to form a resting region for the prefabricated unit to be connected to the supporting element. 
The device is provided with adjustment means which allow to vary the distance of the resting region with respect to the face of the supporting element and the elevation of the resting region, so as to allow, in a simple and rapid way, a very precise positioning of the prefabricated unit with respect to the supporting structure. Since this device requires, inside the supporting element, a bush-like seat which must be provided during the production of the supporting element, it cannot be used in buildings having a metallic supporting structure and whenever it is not possible or convenient to form a bush-like seat inside the body of the supporting element, even if said element is made of concrete. The aim of the present invention is to provide a supporting device for prefabricated units which does not require the preliminary provision of a bush-like seat inside one of the two units to be mutually connected and accordingly is particularly adapted for use in the field of buildings having a metallic supporting structure. Within the scope of this aim, an object of the present invention is to provide a device which can in any case be used also to connect prefabricated units made of concrete if it is impossible or not convenient to provide a bush-like seat in one of the two prefabricated units to be mutually connected. Another object of the present invention is to provide a device which in any case allows very precise adjustment, during installation, of the position of one unit with respect to the other. Another object of the present invention is to provide a device which allows particularly simple installation of prefabricated units. Another object of the present invention is to provide a device which offers good resistance in case of seismic events. 
These and other objects which will become better apparent hereinafter are achieved by a supporting device for prefabricated units, characterized in that it comprises a main body which can be fixed to an outer face of a first unit and a supporting element having a portion which forms a resting region for a second unit to be connected to said first unit; said supporting element being detachably associated with said main body and being movable along said main body in order to vary the position of said resting region with respect to said first unit.
In Korea, there's a League of Legends player called Faker who makes 3 billion won (about 2.6 million dollars) a year. He's a professional gamer (or esports player) with a whole bunch of sponsors. What do you think makes it difficult for Pokemon to become a game that handles this much money? Could Pokemon have "professional" gamers and bigger competitions in the future? If Rollout/Ice Ball is used on a Pokémon with the Disguise ability during any of the 5 turns, the Rollout counter is increased but the multiplier is not. This multiplier, including the x2 Defense Curl bonus if applicable, will affect the next non-status move even if it isn't Rollout or Ice Ball. But... let's return to the topic XD Are there any eSports for Nintendo games? I don't think there are... so maybe it's Nintendo that doesn't want to have its games as eSports... (Splatoon 2 could be an eSport but it isn't...) or they simply aren't interested or don't want to give prize money
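The Rollout/Ice Ball power curve quoted above can be sketched in Python. This is a fan model, not official game code: the base power of 30 and the five-turn doubling are the mainline-game values, and the Disguise counter-vs-multiplier quirk is simplified away.

```python
# Fan-made sketch of the Rollout/Ice Ball damage multiplier:
# power doubles on each consecutive hit (up to 5 turns), and a prior
# Defense Curl doubles the whole thing again.

def rollout_power(turn, base=30, defense_curl=False):
    # turn is 1..5; each successive hit doubles the power
    power = base * (2 ** (turn - 1))
    return power * 2 if defense_curl else power

print([rollout_power(t) for t in range(1, 6)])  # [30, 60, 120, 240, 480]
print(rollout_power(5, defense_curl=True))      # 960
```

Even at this simplified level it's clear why the mechanic snowballs: a turn-5 Defense Curl Rollout is 32 times the opening hit.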
from helper import *
from data_loader import *
# sys.path.append('./')
from model.models import *

class Runner(object):

    def load_data(self):
        """
        Reading in raw triples and converts it into a standard format.

        Parameters
        ----------
        self.p.dataset: Takes in the name of the dataset (FB15k-237)

        Returns
        -------
        self.ent2id:        Entity to unique identifier mapping
        self.id2rel:        Inverse mapping of self.ent2id
        self.rel2id:        Relation to unique identifier mapping
        self.num_ent:       Number of entities in the Knowledge graph
        self.num_rel:       Number of relations in the Knowledge graph
        self.embed_dim:     Embedding dimension used
        self.data['train']: Stores the triples corresponding to training dataset
        self.data['valid']: Stores the triples corresponding to validation dataset
        self.data['test']:  Stores the triples corresponding to test dataset
        self.data_iter:     The dataloader for different data splits
        """
        ent_set, rel_set = OrderedSet(), OrderedSet()
        for split in ['train', 'test', 'valid']:
            for line in open('./data/{}/{}.txt'.format(self.p.dataset, split)):
                sub, rel, obj = map(str.lower, line.strip().split('\t'))
                ent_set.add(sub)
                rel_set.add(rel)
                ent_set.add(obj)

        self.ent2id = {ent: idx for idx, ent in enumerate(ent_set)}
        self.rel2id = {rel: idx for idx, rel in enumerate(rel_set)}
        self.rel2id.update({rel + '_reverse': idx + len(self.rel2id) for idx, rel in enumerate(rel_set)})

        self.id2ent = {idx: ent for ent, idx in self.ent2id.items()}
        self.id2rel = {idx: rel for rel, idx in self.rel2id.items()}

        self.p.num_ent = len(self.ent2id)
        self.p.num_rel = len(self.rel2id) // 2
        self.p.embed_dim = self.p.k_w * self.p.k_h if self.p.embed_dim is None else self.p.embed_dim

        self.data = ddict(list)
        sr2o = ddict(set)

        for split in ['train', 'test', 'valid']:
            for line in open('./data/{}/{}.txt'.format(self.p.dataset, split)):
                sub, rel, obj = map(str.lower, line.strip().split('\t'))
                sub, rel, obj = self.ent2id[sub], self.rel2id[rel], self.ent2id[obj]
                self.data[split].append((sub, rel, obj))

                if split == 'train':
                    sr2o[(sub, rel)].add(obj)
                    sr2o[(obj, rel + self.p.num_rel)].add(sub)

        self.data = dict(self.data)

        self.sr2o = {k: list(v) for k, v in sr2o.items()}
        for split in ['test', 'valid']:
            for sub, rel, obj in self.data[split]:
                sr2o[(sub, rel)].add(obj)
                sr2o[(obj, rel + self.p.num_rel)].add(sub)

        self.sr2o_all = {k: list(v) for k, v in sr2o.items()}
        self.triples = ddict(list)

        for (sub, rel), obj in self.sr2o.items():
            self.triples['train'].append({'triple': (sub, rel, -1), 'label': self.sr2o[(sub, rel)], 'sub_samp': 1})

        for split in ['test', 'valid']:
            for sub, rel, obj in self.data[split]:
                rel_inv = rel + self.p.num_rel
                self.triples['{}_{}'.format(split, 'tail')].append({'triple': (sub, rel, obj), 'label': self.sr2o_all[(sub, rel)]})
                self.triples['{}_{}'.format(split, 'head')].append({'triple': (obj, rel_inv, sub), 'label': self.sr2o_all[(obj, rel_inv)]})

        self.triples = dict(self.triples)

        def get_data_loader(dataset_class, split, batch_size, shuffle=True):
            return DataLoader(
                dataset_class(self.triples[split], self.p),
                batch_size=batch_size,
                shuffle=shuffle,
                num_workers=max(0, self.p.num_workers),
                collate_fn=dataset_class.collate_fn
            )

        self.data_iter = {
            'train':      get_data_loader(TrainDataset, 'train', self.p.batch_size),
            'valid_head': get_data_loader(TestDataset, 'valid_head', self.p.batch_size),
            'valid_tail': get_data_loader(TestDataset, 'valid_tail', self.p.batch_size),
            'test_head':  get_data_loader(TestDataset, 'test_head', self.p.batch_size),
            'test_tail':  get_data_loader(TestDataset, 'test_tail', self.p.batch_size),
        }

        self.edge_index, self.edge_type = self.construct_adj()

    def construct_adj(self):
        """
        Constructs the adjacency structure (edge_index, edge_type) for the GCN
        from the training triples, including inverse edges.
        """
        edge_index, edge_type = [], []

        for sub, rel, obj in self.data['train']:
            edge_index.append((sub, obj))
            edge_type.append(rel)

        # Adding inverse edges
        for sub, rel, obj in self.data['train']:
            edge_index.append((obj, sub))
            edge_type.append(rel + self.p.num_rel)

        edge_index = torch.LongTensor(edge_index).to(self.device).t()
        edge_type = torch.LongTensor(edge_type).to(self.device)

        return edge_index, edge_type

    def __init__(self, params):
        """
        Constructor of the runner class

        Parameters
        ----------
        params: List of hyper-parameters of the model

        Returns
        -------
        Creates computational graph and optimizer
        """
        self.p = params
        self.logger = get_logger(self.p.name, self.p.log_dir, self.p.config_dir)

        self.logger.info(vars(self.p))
        pprint(vars(self.p))

        if self.p.gpu != '-1' and torch.cuda.is_available():
            self.device = torch.device('cuda')
            torch.cuda.set_rng_state(torch.cuda.get_rng_state())
            torch.backends.cudnn.deterministic = True
        else:
            self.device = torch.device('cpu')

        self.load_data()
        self.model = self.add_model(self.p.model, self.p.score_func)
        self.optimizer = self.add_optimizer(self.model.parameters())

    def add_model(self, model, score_func):
        """
        Creates the computational graph

        Parameters
        ----------
        model_name: Contains the model name to be created

        Returns
        -------
        Creates the computational graph for model and initializes it
        """
        model_name = '{}_{}'.format(model, score_func)

        if model_name.lower() == 'compgcn_transe':
            model = CompGCN_TransE(self.edge_index, self.edge_type, params=self.p)
        elif model_name.lower() == 'compgcn_distmult':
            model = CompGCN_DistMult(self.edge_index, self.edge_type, params=self.p)
        elif model_name.lower() == 'compgcn_conve':
            model = CompGCN_ConvE(self.edge_index, self.edge_type, params=self.p)
        else:
            raise NotImplementedError

        model.to(self.device)
        return model

    def add_optimizer(self, parameters):
        """
        Creates an optimizer for training the parameters

        Parameters
        ----------
        parameters: The parameters of the model

        Returns
        -------
        Returns an optimizer for learning the parameters of the model
        """
        return torch.optim.Adam(parameters, lr=self.p.lr, weight_decay=self.p.l2)

    def read_batch(self, batch, split):
        """
        Function to read a batch of data and move the tensors in batch to CPU/GPU

        Parameters
        ----------
        batch: the batch to process
        split: (string) 'train', 'valid' or 'test' split

        Returns
        -------
        Head, Relation, Tails, labels
        """
        if split == 'train':
            triple, label = [_.to(self.device) for _ in batch]
            return triple[:, 0], triple[:, 1], triple[:, 2], label
        else:
            triple, label = [_.to(self.device) for _ in batch]
            return triple[:, 0], triple[:, 1], triple[:, 2], label

    def save_model(self, save_path):
        """
        Function to save a model. It saves the model parameters, best validation scores,
        best epoch corresponding to best validation, state of the optimizer and all
        arguments for the run.

        Parameters
        ----------
        save_path: path where the model is saved
        """
        state = {
            'state_dict': self.model.state_dict(),
            'best_val':   self.best_val,
            'best_epoch': self.best_epoch,
            'optimizer':  self.optimizer.state_dict(),
            'args':       vars(self.p)
        }
        torch.save(state, save_path)

    def load_model(self, load_path):
        """
        Function to load a saved model

        Parameters
        ----------
        load_path: path to the saved model
        """
        state = torch.load(load_path)
        state_dict = state['state_dict']
        self.best_val = state['best_val']
        self.best_val_mrr = self.best_val['mrr']

        self.model.load_state_dict(state_dict)
        self.optimizer.load_state_dict(state['optimizer'])

    def evaluate(self, split, epoch):
        """
        Function to evaluate the model on validation or test set

        Parameters
        ----------
        split: (string) If split == 'valid' then evaluate on the validation set, else the test set
        epoch: (int) Current epoch count

        Returns
        -------
        results: The evaluation results containing the following:
            results['mr']:     Average of ranks_left and ranks_right
            results['mrr']:    Mean Reciprocal Rank
            results['hits@k']: Probability of getting the correct prediction in top-k ranks based on predicted score
        """
        left_results = self.predict(split=split, mode='tail_batch')
        right_results = self.predict(split=split, mode='head_batch')
        results = get_combined_results(left_results, right_results)
        self.logger.info('[Epoch {} {}]: MRR: Tail : {:.5}, Head : {:.5}, Avg : {:.5}'.format(
            epoch, split, results['left_mrr'], results['right_mrr'], results['mrr']))
        return results

    def predict(self, split='valid', mode='tail_batch'):
        """
        Function to run model evaluation for a given mode

        Parameters
        ----------
        split: (string) If split == 'valid' then evaluate on the validation set, else the test set
        mode: (string) Can be 'head_batch' or 'tail_batch'

        Returns
        -------
        results: The evaluation results containing the following:
            results['mr']:     Average of ranks_left and ranks_right
            results['mrr']:    Mean Reciprocal Rank
            results['hits@k']: Probability of getting the correct prediction in top-k ranks based on predicted score
        """
        self.model.eval()

        with torch.no_grad():
            results = {}
            train_iter = iter(self.data_iter['{}_{}'.format(split, mode.split('_')[0])])

            for step, batch in enumerate(train_iter):
                sub, rel, obj, label = self.read_batch(batch, split)
                pred = self.model.forward(sub, rel)
                b_range = torch.arange(pred.size()[0], device=self.device)
                target_pred = pred[b_range, obj]
                pred = torch.where(label.byte(), -torch.ones_like(pred) * 10000000, pred)
                pred[b_range, obj] = target_pred
                ranks = 1 + torch.argsort(torch.argsort(pred, dim=1, descending=True), dim=1, descending=False)[b_range, obj]

                ranks = ranks.float()
                results['count'] = torch.numel(ranks) + results.get('count', 0.0)
                results['mr'] = torch.sum(ranks).item() + results.get('mr', 0.0)
                results['mrr'] = torch.sum(1.0 / ranks).item() + results.get('mrr', 0.0)
                for k in range(10):
                    results['hits@{}'.format(k + 1)] = torch.numel(ranks[ranks <= (k + 1)]) + results.get('hits@{}'.format(k + 1), 0.0)

                if step % 100 == 0:
                    self.logger.info('[{}, {} Step {}]\t{}'.format(split.title(), mode.title(), step, self.p.name))

        return results

    def run_epoch(self, epoch, val_mrr=0):
        """
        Function to run one epoch of training

        Parameters
        ----------
        epoch: current epoch count

        Returns
        -------
        loss: The loss value after the completion of one epoch
        """
        self.model.train()
        losses = []
        train_iter = iter(self.data_iter['train'])

        for step, batch in enumerate(train_iter):
            self.optimizer.zero_grad()
            sub, rel, obj, label = self.read_batch(batch, 'train')

            pred = self.model.forward(sub, rel)
            loss = self.model.loss(pred, label)

            loss.backward()
            self.optimizer.step()
            losses.append(loss.item())

            if step % 100 == 0:
                self.logger.info('[E:{}| {}]: Train Loss:{:.5}, Val MRR:{:.5}\t{}'.format(
                    epoch, step, np.mean(losses), self.best_val_mrr, self.p.name))

        loss = np.mean(losses)
        self.logger.info('[Epoch:{}]: Training Loss:{:.4}\n'.format(epoch, loss))
        return loss

    def fit(self):
        """
        Function to run training and evaluation of model
        """
        self.best_val_mrr, self.best_val, self.best_epoch, val_mrr = 0., {}, 0, 0.
        save_path = os.path.join('./checkpoints', self.p.name)

        if self.p.restore:
            self.load_model(save_path)
            self.logger.info('Successfully Loaded previous model')

        kill_cnt = 0
        for epoch in range(self.p.max_epochs):
            train_loss = self.run_epoch(epoch, val_mrr)
            val_results = self.evaluate('valid', epoch)

            if val_results['mrr'] > self.best_val_mrr:
                self.best_val = val_results
                self.best_val_mrr = val_results['mrr']
                self.best_epoch = epoch
                self.save_model(save_path)
                kill_cnt = 0
            else:
                kill_cnt += 1
                if kill_cnt % 10 == 0 and self.p.gamma > 5:
                    self.p.gamma -= 5
                    self.logger.info('Gamma decay on saturation, updated value of gamma: {}'.format(self.p.gamma))
                if kill_cnt > 25:
                    self.logger.info("Early Stopping!!")
                    break

            self.logger.info('[Epoch {}]: Training Loss: {:.5}, Valid MRR: {:.5}\n\n'.format(epoch, train_loss, self.best_val_mrr))

        self.logger.info('Loading best model, Evaluating on Test data')
        self.load_model(save_path)
        test_results = self.evaluate('test', epoch)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Parser For Arguments', formatter_class=argparse.ArgumentDefaultsHelpFormatter)

    parser.add_argument('-name', default='testrun', help='Set run name for saving/restoring models')
    parser.add_argument('-data', dest='dataset', default='FB15k-237', help='Dataset to use, default: FB15k-237')
    parser.add_argument('-model', dest='model', default='compgcn', help='Model Name')
    parser.add_argument('-score_func', dest='score_func', default='conve', help='Score Function for Link prediction')
    parser.add_argument('-opn', dest='opn', default='corr', help='Composition Operation to be used in CompGCN')

    parser.add_argument('-batch', dest='batch_size', default=128, type=int, help='Batch size')
    parser.add_argument('-gamma', type=float, default=40.0, help='Margin')
    parser.add_argument('-gpu', type=str, default='0', help='Set GPU Ids : Eg: For CPU = -1, For Single GPU = 0')
    parser.add_argument('-epoch', dest='max_epochs', type=int, default=500, help='Number of epochs')
    parser.add_argument('-l2', type=float, default=0.0, help='L2 Regularization for Optimizer')
    parser.add_argument('-lr', type=float, default=0.001, help='Starting Learning Rate')
    parser.add_argument('-lbl_smooth', dest='lbl_smooth', type=float, default=0.1, help='Label Smoothing')
    parser.add_argument('-num_workers', type=int, default=10, help='Number of processes to construct batches')
    parser.add_argument('-seed', dest='seed', default=41504, type=int, help='Seed for randomization')

    parser.add_argument('-restore', dest='restore', action='store_true', help='Restore from the previously saved model')
    parser.add_argument('-bias', dest='bias', action='store_true', help='Whether to use bias in the model')

    parser.add_argument('-num_bases', dest='num_bases', default=-1, type=int, help='Number of basis relation vectors to use')
    parser.add_argument('-init_dim', dest='init_dim', default=100, type=int, help='Initial dimension size for entities and relations')
    parser.add_argument('-gcn_dim', dest='gcn_dim', default=200, type=int, help='Number of hidden units in GCN')
    parser.add_argument('-embed_dim', dest='embed_dim', default=None, type=int, help='Embedding dimension to give as input to score function')
    parser.add_argument('-gcn_layer', dest='gcn_layer', default=1, type=int, help='Number of GCN
Layers to use') parser.add_argument('-gcn_drop', dest='dropout', default=0.1, type=float, help='Dropout to use in GCN Layer') parser.add_argument('-hid_drop', dest='hid_drop', default=0.3, type=float, help='Dropout after GCN') # ConvE specific hyperparameters parser.add_argument('-hid_drop2', dest='hid_drop2', default=0.3, type=float, help='ConvE: Hidden dropout') parser.add_argument('-feat_drop', dest='feat_drop', default=0.3, type=float, help='ConvE: Feature Dropout') parser.add_argument('-k_w', dest='k_w', default=10, type=int, help='ConvE: k_w') parser.add_argument('-k_h', dest='k_h', default=20, type=int, help='ConvE: k_h') parser.add_argument('-num_filt', dest='num_filt', default=200, type=int, help='ConvE: Number of filters in convolution') parser.add_argument('-ker_sz', dest='ker_sz', default=7, type=int, help='ConvE: Kernel size to use') parser.add_argument('-logdir', dest='log_dir', default='./log/', help='Log directory') parser.add_argument('-config', dest='config_dir', default='./config/', help='Config directory') args = parser.parse_args() if not args.restore: args.name = args.name + '_' + time.strftime('%d_%m_%Y') + '_' + time.strftime('%H:%M:%S') set_gpu(args.gpu) np.random.seed(args.seed) torch.manual_seed(args.seed) model = Runner(args) model.fit()
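The ranking step in `predict` uses a double `argsort` to turn each candidate's score into its 1-based rank. A minimal sketch of that trick with NumPy, using illustrative scores (not from the source):

```python
import numpy as np

# Four candidate entities with hypothetical predicted scores.
scores = np.array([0.1, 0.9, 0.4, 0.7])

# argsort of argsort: for each position, its 0-based rank in descending
# score order; adding 1 gives the 1-based rank, as in `predict`.
ranks = 1 + np.argsort(np.argsort(-scores))
print(ranks.tolist())  # → [4, 1, 3, 2]: the highest score gets rank 1
```

In the real code the same computation runs per row of a (batch × num_entities) tensor, after the scores of all other known true triples have been masked to a large negative value so they cannot outrank the target (the "filtered" evaluation setting).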
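The patience logic in `fit` (reset the counter on a validation improvement, stop after 25 stalled epochs) can be isolated into a small sketch; `should_stop` is a hypothetical helper written for illustration, not part of the source:

```python
def should_stop(val_history, patience=25):
    """Return the epoch index at which early stopping would trigger, or None."""
    best, kill_cnt = -float('inf'), 0
    for i, v in enumerate(val_history):
        if v > best:
            best, kill_cnt = v, 0   # improvement: reset the patience counter
        else:
            kill_cnt += 1           # stalled epoch
            if kill_cnt > patience:
                return i            # training would break here
    return None

# Last improvement at epoch 1; the 26th stalled epoch after it is epoch 27.
print(should_stop([0.1, 0.2] + [0.2] * 30))  # → 27
```

The real loop additionally decays the margin `gamma` by 5 every 10 stalled epochs (while `gamma > 5`) before giving up.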
Senate Minority Leader Chuck Schumer on Aug. 2, unveiling “A Better Deal on Trade & Jobs.” (Photo: Manuel Balce Ceneta/AP) Washington was abuzz this week with talk about the new Democratic agenda, “A Better Deal,” which is suddenly dominating news coverage and captivating voters with a plan to remake the American economy, sending Republicans scrambling for a viable platform of their own in advance of the midterm elections. No, not really. I just wanted to see if you were paying attention on the beach. In reality, with Congress and the president out of town right now, Washington is deader than a Chick-fil-A on Sunday. Bored TV commentators would rather analyze every nuance of President Trump’s latest tweetstorm than spend a second debating trade policy. And the agenda I mentioned, which Democrats began rolling out a few weeks ago in a series of choreographed events, has impressed pretty much no one. The slogan, which apparently took months of focus-grouping to perfect, rather than the five seconds of idle thought while doing the laundry that you would think it required, evokes — yet again — memories of the Roosevelt and Truman administrations, which remain powerful in exactly two places in America: nursing homes and Democratic leadership meetings. Critics of the plan were quick to point out that it wasn’t really a plan at all — more like a collection of greatest hits like public infrastructure spending (1984), job retraining (1992) and monopoly busting (1896). But the more profound and more overlooked problem with this “Better Deal” proclamation isn’t actually about its language or its gauziness. It’s more about the underlying philosophy, which misreads in some fundamental way the core appeal of Trump’s campaign. Democrats are trying to do a couple of things with this new marketing push. One is to answer this question of what they actually want to achieve, aside from impeaching the president. 
In announcing the new slogan, Chuck Schumer, the Senate minority leader, lamented that “too many Americans don’t know what we stand for” before boldly declaring: “Not after today.” Because nothing redefines a party in the public mind like a slogan unveiled by congressional leaders at a podium. That’s always worked before. The other and perhaps more urgent objective is to co-opt some of the populist fury that’s simmering right now in the Democratic base, before it overwhelms the party establishment in the same way that Trump toppled leading Republicans. Schumer and his compatriots are trying to convincingly adopt the ethos of the anti-corporate politicians who appeal most to their activists — namely Bernie Sanders and Elizabeth Warren. It’s worth taking a moment here to consider what being a populist party actually means in 2017. Broadly speaking, populism is the practice of galvanizing the majority of the people against powerful and oppressive interests in the society. In the late 19th century and well into the 20th, populism necessarily translated into an assault on industrial-age business. This made sense. The most powerful institutions in American life were ascendant corporations, which concentrated their collective energy on exploiting both workers and consumers for profit. There was no central government to speak of back then, no balancing force on behalf of Americans who weren’t part of the industrial or financial elite. It took a series of populist leaders — most notably the two Roosevelts in the White House — to shatter the grip of corporate trusts and establish an essential counterbalance in the public sector. Almost a century later, however, the meaning of populism is a little more complicated. Yes, a lot of Americans remain deeply suspicious of banks and multinational corporations, especially those that move manufacturing overseas. That’s a reliably strong current in our politics.
But we also depend on companies like Walmart and Target for affordable drugs, groceries and toys for our kids. The fastest-growing and most ubiquitous companies in America now aren’t in oil or steel; they’re Apple and Amazon and Google. You don’t sense a lot of populist outrage over next-day shipping. Meanwhile, government bureaucracies have grown exponentially in both size and power. If you went out on the street anywhere in America and asked people what the most powerful institutions in American life are today, I’m betting almost everyone would name Washington in their top three. And not just powerful but, to a lot of Americans, oppressive, too. It’s not so much the taxes people pay, which really aren’t all that onerous in most cases; yelling about taxes is really just a way of voicing general disdain. It’s the TSA guy barking at you in the airport, or the woman at the DMV who rejected your paperwork, or the county inspector who threatened to shut down your shop over some obscure code. It’s the VA hospital that won’t give you an appointment, or the detox facility with no beds. More than any of that, though, it’s the promises that never seem to be kept, year after year — of jobs, of affordable college, of renewal in abandoned towns. For decades now, since the onset of globalization and technological upheaval, politicians have been telling people they’ve got this or that plan to reverse the decline. They don’t. According to the latest data from the indispensable Pew Research Center, about 55 percent of Americans are frustrated with the federal government, and only 20 percent say they trust the government to do what’s right most or all of the time. The partisan divides here shift from year to year, but the pervasive sentiment is remarkably constant. This, at least among a lot of independent and less ideological voters, is what Trump tapped into last year with his silly red hat. 
Sure, he mouthed a lot of platitudes about setting Wall Street straight (and then hired the top echelon of Goldman Sachs to work in his White House). But it was his indictment of government generally — and the establishments of both parties — that ultimately washed away the Clintonian argument for faith in the governing class. What this means is that populism as a purely economic proposition — the people versus their corporate overlords — is too limiting a construct in modern politics. Any winning populist critique probably has to extend to the failures of the federal bureaucracy, too. Democrats don’t like to hear this. They represent the party of government, and they fear that if they acknowledge its flaws or anachronisms, they will essentially be validating the conservative argument. But that’s not right, and it’s self-defeating. You can be pro-government and still make the case for fundamental reform and modernization, as Gary Hart and Bill Clinton once did. That’s just admitting reality. What does the “Better Deal” have to say about this? Among the precious few new policy ideas Democrats now propose is the creation of yet more government agencies to rein in corporate excess and unfair trade. Praising this proposal in The Nation, the liberal writer David Dayen noted that “building new agencies with targeted missions was a hallmark of the New Deal.” Right. Except this isn’t 1933. We have all the agencies we can handle now, and we don’t trust them a whole lot to begin with. A party that believes more government will solve everything can’t really call itself populist in any modern sense of the word. I’d be surprised if most Americans — or at least the ones you need to win back majorities — consider that much of a deal at all.
Looking for the ultimate lightweight modular drop-in handguard for your trusty AR-15? Look no further than the incredible Magpul AR-15 MOE M-LOK Handguard! This handguard is lightweight and extremely durable, offering shooters a fantastic way to simply upgrade their current setup with a more versatile handguard. Magpul is world famous for offering shooters top-notch gear, and the Magpul AR-15 MOE M-LOK Handguard is another in a very long line of fantastic offerings for the discerning shooter. Pick up a Magpul AR-15 MOE M-LOK Handguard today and you will quite possibly own the last handguard you will ever WANT to replace.
Q: Why is the constant in `fabletools` different from the mean in the `forecast` package (ARIMA model)?

I started rewriting all my code from forecast to fable. Does anybody know why the constant is different from the mean?

library("fable")
library("lubridate")
library("dplyr")
library("forecast")

# gen data
set.seed(68)
df <- data.frame(time = ymd(Sys.Date() - c(1:1000)), V = rnorm(1000, 0.2))
df <- fabletools::as_tsibble(df, index = time, regular = TRUE) %>% dplyr::arrange(time)

# fable model
df %>%
  fabletools::model(fable::ARIMA(V ~ pdq(3, 0, 0) + PDQ(0, 0, 0))) %>%
  report()

# forecast model
as.ts(df) %>% forecast::Arima(c(3, 0, 0), include.mean = TRUE)

fable model:

Series: V
Model: ARIMA(3,0,0) w/ mean

Coefficients:
          ar1      ar2      ar3  constant
      -0.0578  -0.0335  -0.0158    0.2141
s.e.   0.0316   0.0317   0.0317    0.0308

sigma^2 estimated as 0.9499: log likelihood=-1391.23
AIC=2792.45 AICc=2792.51 BIC=2816.99

forecast model:

Series: .
ARIMA(3,0,0) with non-zero mean

Coefficients:
          ar1      ar2      ar3    mean
      -0.0578  -0.0335  -0.0158  0.1934
s.e.   0.0316   0.0317   0.0317  0.0278

sigma^2 estimated as 0.9499: log likelihood=-1391.23
AIC=2792.45 AICc=2792.51 BIC=2816.99

And for some higher-order models I get the following error, which I can't interpret properly. I am able to estimate the models with forecast, even though the models might be silly; I can't even estimate them with fable:

Warning message:
1 error encountered for ar
[1] There are no ARIMA models to choose from after imposing the `order_constraint`, please consider allowing more models.

A: The models that you are specifying between fable and forecast are equivalent. The parameterisation between the packages differs: fable::ARIMA uses constant form, whereas forecast::Arima and stats::arima use mean form. This is discussed in https://otexts.com/fpp3/arima-r.html#understanding-constants-in-r

Furthermore, in your fable model specification you have not specified whether the constant (or equivalently, include.mean) should be included in the model.
If this is not done, fable will automatically select between including and excluding the constant via an algorithm similar to auto.arima. You should add 1 (include) or 0 (exclude) to your formula to specify the model's constant explicitly:

fable::ARIMA(V ~ 1 + pdq(3, 0, 0) + PDQ(0, 0, 0))

is equivalent to

forecast::Arima(V, c(3, 0, 0), include.mean = TRUE)

This is also why you're having trouble estimating higher-order models. When automatically selecting a model, fable::ARIMA will respect the argument order_constraint = p + q + P + Q <= 6. As the constant is not specified (and will be automatically selected), this order constraint is being enforced, giving no possible models to evaluate. You can keep automatic selection by removing the order constraint with order_constraint = TRUE (so that whenever the constraint is tested, it evaluates to TRUE, i.e. every model is acceptable).

I have updated the package to include more informative errors and a better description of the parameterisation in ?ARIMA.
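The two outputs above agree once the parameterisation is converted: for an AR(p) model with mean μ and AR coefficients φ₁…φₚ, the constant is c = μ·(1 − φ₁ − … − φₚ). A quick arithmetic check with the numbers from the two reports (plain Python rather than R, purely to verify the conversion):

```python
# AR coefficients, identical in both fable and forecast output.
phi = [-0.0578, -0.0335, -0.0158]
mu = 0.1934                       # forecast's "mean"

# constant form: c = mu * (1 - sum of AR coefficients)
c = mu * (1 - sum(phi))
print(round(c, 4))  # → 0.2141, matching fable's "constant"
```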
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.wicket.examples.compref;

import org.apache.wicket.MarkupContainer;
import org.apache.wicket.examples.WicketExamplePage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.markup.html.panel.Fragment;

/**
 * Page with examples on {@link org.apache.wicket.markup.html.panel.Fragment}.
 *
 * @author Eelco Hillenius
 */
public class FragmentPage extends WicketExamplePage
{
    /**
     * A fragment.
     */
    private class MyFragment extends Fragment
    {
        /**
         * Construct.
         *
         * @param id
         *            The component Id
         * @param markupId
         *            The id in the markup
         * @param markupProvider
         *            The markup provider
         */
        public MyFragment(String id, String markupId, MarkupContainer markupProvider)
        {
            super(id, markupId, markupProvider);
            add(new Label("label", "yep, this is from a component proper"));
            add(new AnotherPanel("otherPanel"));
        }
    }

    /**
     * Constructor
     */
    public FragmentPage()
    {
        add(new MyFragment("fragment", "fragmentid", this));
    }

    @Override
    protected void explain()
    {
        String html = "<wicket:fragment wicket:id=\"fragmentid\">...</wicket:fragment>";
        String code = "private class MyFragment extends Fragment {\n    ...\n"
            + "add(new MyFragment(\"fragment\", \"fragmentid\"));";
        add(new ExplainPanel(html, code));
    }
}
Q: JMenuBar Cannot Find Symbol

I just started learning Java and I've been reading through this documentation. I don't like to copy a bunch of code and paste it, so I have been trying to work my way through the documentation one thing at a time. I already have a working JFrame and decided I would start by adding a menu.

HERE IS MY CODE:

package mainframe;

import javax.swing.*;

public class menuBar extends JMenuBar {
    JMenuBar mainMenu = JMenuBar("Menu");
}

MY ERROR:

error: cannot find symbol
    JMenuBar mainMenu = JMenuBar("Menu");
symbol:   method JMenuBar(String)
location: class menuBar
1 error

So anyways, I am not really sure what the "cannot find symbol" error means. Maybe I am searching wrong, but every time I Google it, it takes me to more complex questions with no clear answer. Any advice as to what I am doing wrong, or what the "cannot find symbol" error means, would be very much appreciated. Thanks in advance.

A: In response to your particular code here, I suggest that you do not extend the JMenuBar class. You may have seen it in many tutorials or examples where the JFrame class is extended, although that is considered bad practice. To add a JMenuBar to your window, I would suggest doing the following:

public class MyProgram {

    JFrame frame;

    public MyProgram() {
        ...
        frame = new JFrame();

        JMenuBar mainMenu = new JMenuBar();
        JMenu fileMenu = new JMenu("File");
        fileMenu.add(new JMenuItem("Open..."));
        mainMenu.add(fileMenu);      // adds a single JMenu to the menubar
        frame.setJMenuBar(mainMenu); // adds the entire menubar to the window
        ...
        frame.setVisible(true);
        ...
    }
}

The only reason you would extend the JMenuBar class would be if you wanted to make a class that had additional functionality in terms of methods defined in your subclass, but that seems unlikely, especially given the fact that you're just learning Swing.
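As for the error itself: `JMenuBar mainMenu = JMenuBar("Menu");` is missing the `new` keyword, so the compiler parses `JMenuBar("Menu")` as a call to a method named `JMenuBar`, which doesn't exist anywhere, hence "cannot find symbol". (JMenuBar also has no String constructor; the label text belongs on a JMenu or JMenuItem.) A minimal compilable sketch, with the hypothetical class name `MenuDemo` chosen for illustration:

```java
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.JMenuItem;

public class MenuDemo {
    public static void main(String[] args) {
        // `new` is required: without it, JMenuBar("Menu") is treated as a
        // method call named JMenuBar, producing "cannot find symbol".
        JMenuBar bar = new JMenuBar();     // no-arg constructor; no String overload exists
        JMenu menu = new JMenu("Menu");    // the visible text goes on the JMenu
        menu.add(new JMenuItem("Open..."));
        bar.add(menu);

        System.out.println(bar.getMenuCount()); // → 1
    }
}
```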
var camera, controls, scene, renderer;

function attach_renderer(target) {
    var SCREEN_WIDTH = 900, SCREEN_HEIGHT = 400;
    var VIEW_ANGLE = 35, ASPECT = SCREEN_WIDTH / SCREEN_HEIGHT, NEAR = 0.1, FAR = 20000;

    renderer = new THREE.WebGLRenderer();
    renderer.setSize( SCREEN_WIDTH, SCREEN_HEIGHT );
    target.appendChild( renderer.domElement );

    scene = new THREE.Scene();
    camera = new THREE.PerspectiveCamera(
        VIEW_ANGLE, // Field of view
        ASPECT,     // Aspect ratio
        NEAR,       // Near plane
        FAR         // Far plane
    );
    camera.position.set(200.0,186.0,6000.0);
    camera.lookAt( scene.position ); // placeholder for the FreeCAD camera

    controls = new THREE.TrackballControls( camera );
    controls.rotateSpeed = 1.0;
    controls.zoomSpeed = 1.2;
    controls.panSpeed = 0.8;
    controls.noZoom = false;
    controls.noPan = false;
    controls.staticMoving = true;
    controls.dynamicDampingFactor = 0.3;
    controls.keys = [ 65, 83, 68 ];

    var geom = new THREE.Geometry();
    var v0 = new THREE.Vector3(-20.75,-85.0,3000.0);
    var v1 = new THREE.Vector3(-20.75,-85.0,0.0);
    var v2 = new THREE.Vector3(-100.0,-85.0,0.0);
    var v3 = new THREE.Vector3(-100.0,-85.0,3000.0);
    var v4 = new THREE.Vector3(-100.0,-93.0,3000.0);
    var v5 = new THREE.Vector3(-100.0,-93.0,0.0);
    var v6 = new THREE.Vector3(100.0,-93.0,0.0);
    var v7 = new THREE.Vector3(100.0,-93.0,3000.0);
    var v8 = new THREE.Vector3(100.0,-85.0,3000.0);
    var v9 = new THREE.Vector3(100.0,-85.0,0.0);
    var v10 = new THREE.Vector3(20.75,-85.0,0.0);
    var v11 = new THREE.Vector3(20.75,-85.0,3000.0);
    var v12 = new THREE.Vector3(17.9341796293,-84.7783901307,0.0);
    var v13 = new THREE.Vector3(16.5479834506,-84.5026585672,0.0);
    var v14 = new THREE.Vector3(17.9341796293,-84.7783901307,3000.0);
    var v15 = new THREE.Vector3(19.3377362769,-84.9445120072,3000.0);
    var v16 = new THREE.Vector3(19.3377362769,-84.9445120072,0.0);
    var v17 = new THREE.Vector3(2.8054879928,-68.4122637231,0.0);
    var v18 = new THREE.Vector3(2.75,-67.0,0.0);
    var v19 = new THREE.Vector3(2.75,-67.0,3000.0);
    var v20 = new
THREE.Vector3(2.8054879928,-68.4122637231,3000.0); var v21 = new THREE.Vector3(2.97160986929,-69.8158203707,0.0); var v22 = new THREE.Vector3(2.97160986929,-69.8158203707,3000.0); var v23 = new THREE.Vector3(3.24734143284,-71.2020165494,0.0); var v24 = new THREE.Vector3(3.24734143284,-71.2020165494,3000.0); var v25 = new THREE.Vector3(3.63098270669,-72.5623058987,0.0); var v26 = new THREE.Vector3(3.63098270669,-72.5623058987,3000.0); var v27 = new THREE.Vector3(4.1201684148,-73.8883017826,0.0); var v28 = new THREE.Vector3(4.1201684148,-73.8883017826,3000.0); var v29 = new THREE.Vector3(4.71188256461,-75.1718289953,0.0); var v30 = new THREE.Vector3(4.71188256461,-75.1718289953,3000.0); var v31 = new THREE.Vector3(5.40247704163,-76.4049741649,0.0); var v32 = new THREE.Vector3(5.40247704163,-76.4049741649,3000.0); var v33 = new THREE.Vector3(6.18769410125,-77.5801345413,0.0); var v34 = new THREE.Vector3(6.18769410125,-77.5801345413,3000.0); var v35 = new THREE.Vector3(7.0626926192,-78.6900648699,3000.0); var v36 = new THREE.Vector3(7.0626926192,-78.6900648699,0.0); var v37 = new THREE.Vector3(8.02207793864,-79.7279220614,0.0); var v38 = new THREE.Vector3(8.02207793864,-79.7279220614,3000.0); var v39 = new THREE.Vector3(9.05993513006,-80.6873073808,0.0); var v40 = new THREE.Vector3(9.05993513006,-80.6873073808,3000.0); var v41 = new THREE.Vector3(10.1698654587,-81.5623058987,0.0); var v42 = new THREE.Vector3(10.1698654587,-81.5623058987,3000.0); var v43 = new THREE.Vector3(11.3450258351,-82.3475229584,3000.0); var v44 = new THREE.Vector3(11.3450258351,-82.3475229584,0.0); var v45 = new THREE.Vector3(12.5781710047,-83.0381174354,0.0); var v46 = new THREE.Vector3(12.5781710047,-83.0381174354,3000.0); var v47 = new THREE.Vector3(13.8616982174,-83.6298315852,3000.0); var v48 = new THREE.Vector3(13.8616982174,-83.6298315852,0.0); var v49 = new THREE.Vector3(15.1876941013,-84.1190172933,3000.0); var v50 = new THREE.Vector3(15.1876941013,-84.1190172933,0.0); var v51 = new 
THREE.Vector3(16.5479834506,-84.5026585672,3000.0); var v52 = new THREE.Vector3(2.75,67.0,0.0); var v53 = new THREE.Vector3(2.75,67.0,3000.0); var v54 = new THREE.Vector3(2.97160986929,69.8158203707,0.0); var v55 = new THREE.Vector3(3.24734143284,71.2020165494,0.0); var v56 = new THREE.Vector3(3.24734143284,71.2020165494,3000.0); var v57 = new THREE.Vector3(2.97160986929,69.8158203707,3000.0); var v58 = new THREE.Vector3(2.8054879928,68.4122637231,0.0); var v59 = new THREE.Vector3(2.8054879928,68.4122637231,3000.0); var v60 = new THREE.Vector3(19.3377362769,84.9445120072,0.0); var v61 = new THREE.Vector3(20.75,85.0,0.0); var v62 = new THREE.Vector3(20.75,85.0,3000.0); var v63 = new THREE.Vector3(19.3377362769,84.9445120072,3000.0); var v64 = new THREE.Vector3(17.9341796293,84.7783901307,0.0); var v65 = new THREE.Vector3(17.9341796293,84.7783901307,3000.0); var v66 = new THREE.Vector3(16.5479834506,84.5026585672,0.0); var v67 = new THREE.Vector3(16.5479834506,84.5026585672,3000.0); var v68 = new THREE.Vector3(15.1876941013,84.1190172933,0.0); var v69 = new THREE.Vector3(15.1876941013,84.1190172933,3000.0); var v70 = new THREE.Vector3(13.8616982174,83.6298315852,0.0); var v71 = new THREE.Vector3(13.8616982174,83.6298315852,3000.0); var v72 = new THREE.Vector3(12.5781710047,83.0381174354,0.0); var v73 = new THREE.Vector3(12.5781710047,83.0381174354,3000.0); var v74 = new THREE.Vector3(11.3450258351,82.3475229584,0.0); var v75 = new THREE.Vector3(11.3450258351,82.3475229584,3000.0); var v76 = new THREE.Vector3(10.1698654587,81.5623058987,0.0); var v77 = new THREE.Vector3(10.1698654587,81.5623058987,3000.0); var v78 = new THREE.Vector3(9.05993513006,80.6873073808,3000.0); var v79 = new THREE.Vector3(9.05993513006,80.6873073808,0.0); var v80 = new THREE.Vector3(8.02207793864,79.7279220614,0.0); var v81 = new THREE.Vector3(8.02207793864,79.7279220614,3000.0); var v82 = new THREE.Vector3(7.0626926192,78.6900648699,0.0); var v83 = new 
THREE.Vector3(7.0626926192,78.6900648699,3000.0); var v84 = new THREE.Vector3(6.18769410125,77.5801345413,0.0); var v85 = new THREE.Vector3(6.18769410125,77.5801345413,3000.0); var v86 = new THREE.Vector3(5.40247704163,76.4049741649,3000.0); var v87 = new THREE.Vector3(5.40247704163,76.4049741649,0.0); var v88 = new THREE.Vector3(4.71188256461,75.1718289953,0.0); var v89 = new THREE.Vector3(4.71188256461,75.1718289953,3000.0); var v90 = new THREE.Vector3(4.1201684148,73.8883017826,3000.0); var v91 = new THREE.Vector3(4.1201684148,73.8883017826,0.0); var v92 = new THREE.Vector3(3.63098270669,72.5623058987,0.0); var v93 = new THREE.Vector3(3.63098270669,72.5623058987,3000.0); var v94 = new THREE.Vector3(100.0,85.0,0.0); var v95 = new THREE.Vector3(100.0,85.0,3000.0); var v96 = new THREE.Vector3(100.0,93.0,3000.0); var v97 = new THREE.Vector3(100.0,93.0,0.0); var v98 = new THREE.Vector3(-100.0,93.0,0.0); var v99 = new THREE.Vector3(-100.0,93.0,3000.0); var v100 = new THREE.Vector3(-100.0,85.0,3000.0); var v101 = new THREE.Vector3(-100.0,85.0,0.0); var v102 = new THREE.Vector3(-20.75,85.0,0.0); var v103 = new THREE.Vector3(-20.75,85.0,3000.0); var v104 = new THREE.Vector3(-17.9341796293,84.7783901307,0.0); var v105 = new THREE.Vector3(-16.5479834506,84.5026585672,0.0); var v106 = new THREE.Vector3(-16.5479834506,84.5026585672,3000.0); var v107 = new THREE.Vector3(-17.9341796293,84.7783901307,3000.0); var v108 = new THREE.Vector3(-19.3377362769,84.9445120072,0.0); var v109 = new THREE.Vector3(-19.3377362769,84.9445120072,3000.0); var v110 = new THREE.Vector3(-2.8054879928,68.4122637231,0.0); var v111 = new THREE.Vector3(-2.75,67.0,0.0); var v112 = new THREE.Vector3(-2.75,67.0,3000.0); var v113 = new THREE.Vector3(-2.8054879928,68.4122637231,3000.0); var v114 = new THREE.Vector3(-2.97160986929,69.8158203707,0.0); var v115 = new THREE.Vector3(-2.97160986929,69.8158203707,3000.0); var v116 = new THREE.Vector3(-3.24734143284,71.2020165494,0.0); var v117 = new 
THREE.Vector3(-3.24734143284,71.2020165494,3000.0); var v118 = new THREE.Vector3(-3.63098270669,72.5623058987,0.0); var v119 = new THREE.Vector3(-3.63098270669,72.5623058987,3000.0); var v120 = new THREE.Vector3(-4.1201684148,73.8883017826,0.0); var v121 = new THREE.Vector3(-4.1201684148,73.8883017826,3000.0); var v122 = new THREE.Vector3(-4.71188256461,75.1718289953,0.0); var v123 = new THREE.Vector3(-4.71188256461,75.1718289953,3000.0); var v124 = new THREE.Vector3(-5.40247704163,76.4049741649,0.0); var v125 = new THREE.Vector3(-5.40247704163,76.4049741649,3000.0); var v126 = new THREE.Vector3(-6.18769410125,77.5801345413,0.0); var v127 = new THREE.Vector3(-6.18769410125,77.5801345413,3000.0); var v128 = new THREE.Vector3(-7.0626926192,78.6900648699,3000.0); var v129 = new THREE.Vector3(-7.0626926192,78.6900648699,0.0); var v130 = new THREE.Vector3(-8.02207793864,79.7279220614,0.0); var v131 = new THREE.Vector3(-8.02207793864,79.7279220614,3000.0); var v132 = new THREE.Vector3(-9.05993513006,80.6873073808,0.0); var v133 = new THREE.Vector3(-9.05993513006,80.6873073808,3000.0); var v134 = new THREE.Vector3(-10.1698654587,81.5623058987,0.0); var v135 = new THREE.Vector3(-10.1698654587,81.5623058987,3000.0); var v136 = new THREE.Vector3(-11.3450258351,82.3475229584,3000.0); var v137 = new THREE.Vector3(-11.3450258351,82.3475229584,0.0); var v138 = new THREE.Vector3(-12.5781710047,83.0381174354,0.0); var v139 = new THREE.Vector3(-12.5781710047,83.0381174354,3000.0); var v140 = new THREE.Vector3(-13.8616982174,83.6298315852,3000.0); var v141 = new THREE.Vector3(-13.8616982174,83.6298315852,0.0); var v142 = new THREE.Vector3(-15.1876941013,84.1190172933,0.0); var v143 = new THREE.Vector3(-15.1876941013,84.1190172933,3000.0); var v144 = new THREE.Vector3(-2.75,-67.0,0.0); var v145 = new THREE.Vector3(-2.75,-67.0,3000.0); var v146 = new THREE.Vector3(-2.97160986929,-69.8158203707,0.0); var v147 = new THREE.Vector3(-3.24734143284,-71.2020165494,0.0); var v148 = new 
THREE.Vector3(-2.97160986929,-69.8158203707,3000.0); var v149 = new THREE.Vector3(-2.8054879928,-68.4122637231,3000.0); var v150 = new THREE.Vector3(-2.8054879928,-68.4122637231,0.0); var v151 = new THREE.Vector3(-19.3377362769,-84.9445120072,0.0); var v152 = new THREE.Vector3(-19.3377362769,-84.9445120072,3000.0); var v153 = new THREE.Vector3(-17.9341796293,-84.7783901307,3000.0); var v154 = new THREE.Vector3(-17.9341796293,-84.7783901307,0.0); var v155 = new THREE.Vector3(-16.5479834506,-84.5026585672,0.0); var v156 = new THREE.Vector3(-16.5479834506,-84.5026585672,3000.0); var v157 = new THREE.Vector3(-15.1876941013,-84.1190172933,0.0); var v158 = new THREE.Vector3(-15.1876941013,-84.1190172933,3000.0); var v159 = new THREE.Vector3(-13.8616982174,-83.6298315852,0.0); var v160 = new THREE.Vector3(-13.8616982174,-83.6298315852,3000.0); var v161 = new THREE.Vector3(-12.5781710047,-83.0381174354,0.0); var v162 = new THREE.Vector3(-12.5781710047,-83.0381174354,3000.0); var v163 = new THREE.Vector3(-11.3450258351,-82.3475229584,3000.0); var v164 = new THREE.Vector3(-11.3450258351,-82.3475229584,0.0); var v165 = new THREE.Vector3(-10.1698654587,-81.5623058987,0.0); var v166 = new THREE.Vector3(-10.1698654587,-81.5623058987,3000.0); var v167 = new THREE.Vector3(-9.05993513006,-80.6873073808,0.0); var v168 = new THREE.Vector3(-9.05993513006,-80.6873073808,3000.0); var v169 = new THREE.Vector3(-8.02207793864,-79.7279220614,3000.0); var v170 = new THREE.Vector3(-8.02207793864,-79.7279220614,0.0); var v171 = new THREE.Vector3(-7.0626926192,-78.6900648699,0.0); var v172 = new THREE.Vector3(-7.0626926192,-78.6900648699,3000.0); var v173 = new THREE.Vector3(-6.18769410125,-77.5801345413,0.0); var v174 = new THREE.Vector3(-6.18769410125,-77.5801345413,3000.0); var v175 = new THREE.Vector3(-5.40247704163,-76.4049741649,3000.0); var v176 = new THREE.Vector3(-5.40247704163,-76.4049741649,0.0); var v177 = new THREE.Vector3(-4.71188256461,-75.1718289953,0.0); var v178 = new 
THREE.Vector3(-4.71188256461,-75.1718289953,3000.0); var v179 = new THREE.Vector3(-4.1201684148,-73.8883017826,3000.0); var v180 = new THREE.Vector3(-4.1201684148,-73.8883017826,0.0); var v181 = new THREE.Vector3(-3.63098270669,-72.5623058987,3000.0); var v182 = new THREE.Vector3(-3.63098270669,-72.5623058987,0.0); var v183 = new THREE.Vector3(-3.24734143284,-71.2020165494,3000.0); console.log(geom.vertices) geom.vertices.push(v0); geom.vertices.push(v1); geom.vertices.push(v2); geom.vertices.push(v3); geom.vertices.push(v4); geom.vertices.push(v5); geom.vertices.push(v6); geom.vertices.push(v7); geom.vertices.push(v8); geom.vertices.push(v9); geom.vertices.push(v10); geom.vertices.push(v11); geom.vertices.push(v12); geom.vertices.push(v13); geom.vertices.push(v14); geom.vertices.push(v15); geom.vertices.push(v16); geom.vertices.push(v17); geom.vertices.push(v18); geom.vertices.push(v19); geom.vertices.push(v20); geom.vertices.push(v21); geom.vertices.push(v22); geom.vertices.push(v23); geom.vertices.push(v24); geom.vertices.push(v25); geom.vertices.push(v26); geom.vertices.push(v27); geom.vertices.push(v28); geom.vertices.push(v29); geom.vertices.push(v30); geom.vertices.push(v31); geom.vertices.push(v32); geom.vertices.push(v33); geom.vertices.push(v34); geom.vertices.push(v35); geom.vertices.push(v36); geom.vertices.push(v37); geom.vertices.push(v38); geom.vertices.push(v39); geom.vertices.push(v40); geom.vertices.push(v41); geom.vertices.push(v42); geom.vertices.push(v43); geom.vertices.push(v44); geom.vertices.push(v45); geom.vertices.push(v46); geom.vertices.push(v47); geom.vertices.push(v48); geom.vertices.push(v49); geom.vertices.push(v50); geom.vertices.push(v51); geom.vertices.push(v52); geom.vertices.push(v53); geom.vertices.push(v54); geom.vertices.push(v55); geom.vertices.push(v56); geom.vertices.push(v57); geom.vertices.push(v58); geom.vertices.push(v59); geom.vertices.push(v60); geom.vertices.push(v61); geom.vertices.push(v62); 
geom.vertices.push(v63); geom.vertices.push(v64); geom.vertices.push(v65); geom.vertices.push(v66); geom.vertices.push(v67); geom.vertices.push(v68); geom.vertices.push(v69); geom.vertices.push(v70); geom.vertices.push(v71); geom.vertices.push(v72); geom.vertices.push(v73); geom.vertices.push(v74); geom.vertices.push(v75); geom.vertices.push(v76); geom.vertices.push(v77); geom.vertices.push(v78); geom.vertices.push(v79); geom.vertices.push(v80); geom.vertices.push(v81); geom.vertices.push(v82); geom.vertices.push(v83); geom.vertices.push(v84); geom.vertices.push(v85); geom.vertices.push(v86); geom.vertices.push(v87); geom.vertices.push(v88); geom.vertices.push(v89); geom.vertices.push(v90); geom.vertices.push(v91); geom.vertices.push(v92); geom.vertices.push(v93); geom.vertices.push(v94); geom.vertices.push(v95); geom.vertices.push(v96); geom.vertices.push(v97); geom.vertices.push(v98); geom.vertices.push(v99); geom.vertices.push(v100); geom.vertices.push(v101); geom.vertices.push(v102); geom.vertices.push(v103); geom.vertices.push(v104); geom.vertices.push(v105); geom.vertices.push(v106); geom.vertices.push(v107); geom.vertices.push(v108); geom.vertices.push(v109); geom.vertices.push(v110); geom.vertices.push(v111); geom.vertices.push(v112); geom.vertices.push(v113); geom.vertices.push(v114); geom.vertices.push(v115); geom.vertices.push(v116); geom.vertices.push(v117); geom.vertices.push(v118); geom.vertices.push(v119); geom.vertices.push(v120); geom.vertices.push(v121); geom.vertices.push(v122); geom.vertices.push(v123); geom.vertices.push(v124); geom.vertices.push(v125); geom.vertices.push(v126); geom.vertices.push(v127); geom.vertices.push(v128); geom.vertices.push(v129); geom.vertices.push(v130); geom.vertices.push(v131); geom.vertices.push(v132); geom.vertices.push(v133); geom.vertices.push(v134); geom.vertices.push(v135); geom.vertices.push(v136); geom.vertices.push(v137); geom.vertices.push(v138); geom.vertices.push(v139); geom.vertices.push(v140); 
geom.vertices.push(v141); geom.vertices.push(v142); geom.vertices.push(v143); geom.vertices.push(v144); geom.vertices.push(v145); geom.vertices.push(v146); geom.vertices.push(v147); geom.vertices.push(v148); geom.vertices.push(v149); geom.vertices.push(v150); geom.vertices.push(v151); geom.vertices.push(v152); geom.vertices.push(v153); geom.vertices.push(v154); geom.vertices.push(v155); geom.vertices.push(v156); geom.vertices.push(v157); geom.vertices.push(v158); geom.vertices.push(v159); geom.vertices.push(v160); geom.vertices.push(v161); geom.vertices.push(v162); geom.vertices.push(v163); geom.vertices.push(v164); geom.vertices.push(v165); geom.vertices.push(v166); geom.vertices.push(v167); geom.vertices.push(v168); geom.vertices.push(v169); geom.vertices.push(v170); geom.vertices.push(v171); geom.vertices.push(v172); geom.vertices.push(v173); geom.vertices.push(v174); geom.vertices.push(v175); geom.vertices.push(v176); geom.vertices.push(v177); geom.vertices.push(v178); geom.vertices.push(v179); geom.vertices.push(v180); geom.vertices.push(v181); geom.vertices.push(v182); geom.vertices.push(v183); geom.faces.push( new THREE.Face3(0, 1, 2) ); geom.faces.push( new THREE.Face3(3, 0, 2) ); geom.faces.push( new THREE.Face3(4, 3, 2) ); geom.faces.push( new THREE.Face3(4, 2, 5) ); geom.faces.push( new THREE.Face3(4, 5, 6) ); geom.faces.push( new THREE.Face3(7, 4, 6) ); geom.faces.push( new THREE.Face3(8, 7, 6) ); geom.faces.push( new THREE.Face3(8, 6, 9) ); geom.faces.push( new THREE.Face3(8, 9, 10) ); geom.faces.push( new THREE.Face3(11, 8, 10) ); geom.faces.push( new THREE.Face3(12, 13, 14) ); geom.faces.push( new THREE.Face3(12, 14, 15) ); geom.faces.push( new THREE.Face3(16, 12, 15) ); geom.faces.push( new THREE.Face3(10, 16, 15) ); geom.faces.push( new THREE.Face3(10, 15, 11) ); geom.faces.push( new THREE.Face3(17, 18, 19) ); geom.faces.push( new THREE.Face3(17, 19, 20) ); geom.faces.push( new THREE.Face3(21, 20, 22) ); geom.faces.push( new THREE.Face3(21, 17, 20) 
); geom.faces.push( new THREE.Face3(23, 22, 24) ); geom.faces.push( new THREE.Face3(23, 21, 22) ); geom.faces.push( new THREE.Face3(25, 24, 26) ); geom.faces.push( new THREE.Face3(25, 23, 24) ); geom.faces.push( new THREE.Face3(27, 26, 28) ); geom.faces.push( new THREE.Face3(27, 25, 26) ); geom.faces.push( new THREE.Face3(29, 28, 30) ); geom.faces.push( new THREE.Face3(29, 27, 28) ); geom.faces.push( new THREE.Face3(31, 30, 32) ); geom.faces.push( new THREE.Face3(31, 29, 30) ); geom.faces.push( new THREE.Face3(33, 32, 34) ); geom.faces.push( new THREE.Face3(33, 34, 35) ); geom.faces.push( new THREE.Face3(33, 31, 32) ); geom.faces.push( new THREE.Face3(36, 33, 35) ); geom.faces.push( new THREE.Face3(37, 35, 38) ); geom.faces.push( new THREE.Face3(37, 36, 35) ); geom.faces.push( new THREE.Face3(39, 38, 40) ); geom.faces.push( new THREE.Face3(39, 37, 38) ); geom.faces.push( new THREE.Face3(41, 40, 42) ); geom.faces.push( new THREE.Face3(41, 42, 43) ); geom.faces.push( new THREE.Face3(41, 39, 40) ); geom.faces.push( new THREE.Face3(44, 41, 43) ); geom.faces.push( new THREE.Face3(45, 44, 43) ); geom.faces.push( new THREE.Face3(45, 43, 46) ); geom.faces.push( new THREE.Face3(45, 46, 47) ); geom.faces.push( new THREE.Face3(48, 45, 47) ); geom.faces.push( new THREE.Face3(48, 47, 49) ); geom.faces.push( new THREE.Face3(50, 48, 49) ); geom.faces.push( new THREE.Face3(13, 50, 49) ); geom.faces.push( new THREE.Face3(13, 49, 51) ); geom.faces.push( new THREE.Face3(13, 51, 14) ); geom.faces.push( new THREE.Face3(19, 18, 52) ); geom.faces.push( new THREE.Face3(53, 19, 52) ); geom.faces.push( new THREE.Face3(54, 55, 56) ); geom.faces.push( new THREE.Face3(54, 56, 57) ); geom.faces.push( new THREE.Face3(58, 54, 57) ); geom.faces.push( new THREE.Face3(58, 57, 59) ); geom.faces.push( new THREE.Face3(58, 59, 53) ); geom.faces.push( new THREE.Face3(52, 58, 53) ); geom.faces.push( new THREE.Face3(60, 61, 62) ); geom.faces.push( new THREE.Face3(60, 62, 63) ); geom.faces.push( new 
THREE.Face3(64, 63, 65) ); geom.faces.push( new THREE.Face3(64, 60, 63) ); geom.faces.push( new THREE.Face3(66, 65, 67) ); geom.faces.push( new THREE.Face3(66, 64, 65) ); geom.faces.push( new THREE.Face3(68, 67, 69) ); geom.faces.push( new THREE.Face3(68, 66, 67) ); geom.faces.push( new THREE.Face3(70, 69, 71) ); geom.faces.push( new THREE.Face3(70, 68, 69) ); geom.faces.push( new THREE.Face3(72, 71, 73) ); geom.faces.push( new THREE.Face3(72, 70, 71) ); geom.faces.push( new THREE.Face3(74, 73, 75) ); geom.faces.push( new THREE.Face3(74, 72, 73) ); geom.faces.push( new THREE.Face3(76, 75, 77) ); geom.faces.push( new THREE.Face3(76, 77, 78) ); geom.faces.push( new THREE.Face3(76, 74, 75) ); geom.faces.push( new THREE.Face3(79, 76, 78) ); geom.faces.push( new THREE.Face3(80, 78, 81) ); geom.faces.push( new THREE.Face3(80, 79, 78) ); geom.faces.push( new THREE.Face3(82, 81, 83) ); geom.faces.push( new THREE.Face3(82, 80, 81) ); geom.faces.push( new THREE.Face3(84, 83, 85) ); geom.faces.push( new THREE.Face3(84, 85, 86) ); geom.faces.push( new THREE.Face3(84, 82, 83) ); geom.faces.push( new THREE.Face3(87, 84, 86) ); geom.faces.push( new THREE.Face3(88, 87, 86) ); geom.faces.push( new THREE.Face3(88, 86, 89) ); geom.faces.push( new THREE.Face3(88, 89, 90) ); geom.faces.push( new THREE.Face3(91, 88, 90) ); geom.faces.push( new THREE.Face3(92, 91, 90) ); geom.faces.push( new THREE.Face3(92, 90, 93) ); geom.faces.push( new THREE.Face3(92, 93, 56) ); geom.faces.push( new THREE.Face3(55, 92, 56) ); geom.faces.push( new THREE.Face3(62, 61, 94) ); geom.faces.push( new THREE.Face3(95, 62, 94) ); geom.faces.push( new THREE.Face3(96, 95, 94) ); geom.faces.push( new THREE.Face3(96, 94, 97) ); geom.faces.push( new THREE.Face3(96, 97, 98) ); geom.faces.push( new THREE.Face3(99, 96, 98) ); geom.faces.push( new THREE.Face3(100, 99, 98) ); geom.faces.push( new THREE.Face3(100, 98, 101) ); geom.faces.push( new THREE.Face3(100, 101, 102) ); geom.faces.push( new THREE.Face3(103, 100, 
102) ); geom.faces.push( new THREE.Face3(104, 105, 106) ); geom.faces.push( new THREE.Face3(104, 106, 107) ); geom.faces.push( new THREE.Face3(108, 104, 107) ); geom.faces.push( new THREE.Face3(108, 107, 109) ); geom.faces.push( new THREE.Face3(108, 109, 103) ); geom.faces.push( new THREE.Face3(102, 108, 103) ); geom.faces.push( new THREE.Face3(110, 111, 112) ); geom.faces.push( new THREE.Face3(110, 112, 113) ); geom.faces.push( new THREE.Face3(114, 113, 115) ); geom.faces.push( new THREE.Face3(114, 110, 113) ); geom.faces.push( new THREE.Face3(116, 115, 117) ); geom.faces.push( new THREE.Face3(116, 114, 115) ); geom.faces.push( new THREE.Face3(118, 117, 119) ); geom.faces.push( new THREE.Face3(118, 116, 117) ); geom.faces.push( new THREE.Face3(120, 119, 121) ); geom.faces.push( new THREE.Face3(120, 118, 119) ); geom.faces.push( new THREE.Face3(122, 121, 123) ); geom.faces.push( new THREE.Face3(122, 120, 121) ); geom.faces.push( new THREE.Face3(124, 123, 125) ); geom.faces.push( new THREE.Face3(124, 122, 123) ); geom.faces.push( new THREE.Face3(126, 125, 127) ); geom.faces.push( new THREE.Face3(126, 127, 128) ); geom.faces.push( new THREE.Face3(126, 124, 125) ); geom.faces.push( new THREE.Face3(129, 126, 128) ); geom.faces.push( new THREE.Face3(130, 128, 131) ); geom.faces.push( new THREE.Face3(130, 129, 128) ); geom.faces.push( new THREE.Face3(132, 131, 133) ); geom.faces.push( new THREE.Face3(132, 130, 131) ); geom.faces.push( new THREE.Face3(134, 133, 135) ); geom.faces.push( new THREE.Face3(134, 135, 136) ); geom.faces.push( new THREE.Face3(134, 132, 133) ); geom.faces.push( new THREE.Face3(137, 134, 136) ); geom.faces.push( new THREE.Face3(138, 137, 136) ); geom.faces.push( new THREE.Face3(138, 136, 139) ); geom.faces.push( new THREE.Face3(138, 139, 140) ); geom.faces.push( new THREE.Face3(141, 138, 140) ); geom.faces.push( new THREE.Face3(142, 141, 140) ); geom.faces.push( new THREE.Face3(142, 140, 143) ); geom.faces.push( new THREE.Face3(142, 143, 106) ); 
geom.faces.push( new THREE.Face3(105, 142, 106) ); geom.faces.push( new THREE.Face3(112, 111, 144) ); geom.faces.push( new THREE.Face3(145, 112, 144) ); geom.faces.push( new THREE.Face3(146, 147, 148) ); geom.faces.push( new THREE.Face3(146, 148, 149) ); geom.faces.push( new THREE.Face3(150, 146, 149) ); geom.faces.push( new THREE.Face3(144, 150, 149) ); geom.faces.push( new THREE.Face3(144, 149, 145) ); geom.faces.push( new THREE.Face3(151, 1, 0) ); geom.faces.push( new THREE.Face3(151, 0, 152) ); geom.faces.push( new THREE.Face3(151, 152, 153) ); geom.faces.push( new THREE.Face3(154, 151, 153) ); geom.faces.push( new THREE.Face3(155, 153, 156) ); geom.faces.push( new THREE.Face3(155, 154, 153) ); geom.faces.push( new THREE.Face3(157, 156, 158) ); geom.faces.push( new THREE.Face3(157, 155, 156) ); geom.faces.push( new THREE.Face3(159, 158, 160) ); geom.faces.push( new THREE.Face3(159, 157, 158) ); geom.faces.push( new THREE.Face3(161, 160, 162) ); geom.faces.push( new THREE.Face3(161, 162, 163) ); geom.faces.push( new THREE.Face3(161, 159, 160) ); geom.faces.push( new THREE.Face3(164, 161, 163) ); geom.faces.push( new THREE.Face3(165, 163, 166) ); geom.faces.push( new THREE.Face3(165, 164, 163) ); geom.faces.push( new THREE.Face3(167, 166, 168) ); geom.faces.push( new THREE.Face3(167, 168, 169) ); geom.faces.push( new THREE.Face3(167, 165, 166) ); geom.faces.push( new THREE.Face3(170, 167, 169) ); geom.faces.push( new THREE.Face3(171, 169, 172) ); geom.faces.push( new THREE.Face3(171, 170, 169) ); geom.faces.push( new THREE.Face3(173, 172, 174) ); geom.faces.push( new THREE.Face3(173, 174, 175) ); geom.faces.push( new THREE.Face3(173, 171, 172) ); geom.faces.push( new THREE.Face3(176, 173, 175) ); geom.faces.push( new THREE.Face3(177, 176, 175) ); geom.faces.push( new THREE.Face3(177, 175, 178) ); geom.faces.push( new THREE.Face3(177, 178, 179) ); geom.faces.push( new THREE.Face3(180, 177, 179) ); geom.faces.push( new THREE.Face3(180, 179, 181) ); geom.faces.push( 
new THREE.Face3(182, 180, 181) ); geom.faces.push( new THREE.Face3(147, 182, 181) ); geom.faces.push( new THREE.Face3(147, 181, 183) ); geom.faces.push( new THREE.Face3(147, 183, 148) ); geom.faces.push( new THREE.Face3(176, 31, 173) ); geom.faces.push( new THREE.Face3(126, 84, 87) ); geom.faces.push( new THREE.Face3(126, 87, 124) ); geom.faces.push( new THREE.Face3(164, 44, 45) ); geom.faces.push( new THREE.Face3(10, 9, 6) ); geom.faces.push( new THREE.Face3(129, 84, 126) ); geom.faces.push( new THREE.Face3(164, 165, 44) ); geom.faces.push( new THREE.Face3(129, 82, 84) ); geom.faces.push( new THREE.Face3(177, 29, 31) ); geom.faces.push( new THREE.Face3(130, 82, 129) ); geom.faces.push( new THREE.Face3(177, 31, 176) ); geom.faces.push( new THREE.Face3(130, 79, 80) ); geom.faces.push( new THREE.Face3(130, 80, 82) ); geom.faces.push( new THREE.Face3(111, 52, 144) ); geom.faces.push( new THREE.Face3(161, 164, 45) ); geom.faces.push( new THREE.Face3(159, 45, 48) ); geom.faces.push( new THREE.Face3(159, 161, 45) ); geom.faces.push( new THREE.Face3(132, 79, 130) ); geom.faces.push( new THREE.Face3(180, 27, 29) ); geom.faces.push( new THREE.Face3(134, 74, 76) ); geom.faces.push( new THREE.Face3(180, 29, 177) ); geom.faces.push( new THREE.Face3(134, 76, 79) ); geom.faces.push( new THREE.Face3(134, 79, 132) ); geom.faces.push( new THREE.Face3(182, 23, 25) ); geom.faces.push( new THREE.Face3(182, 25, 27) ); geom.faces.push( new THREE.Face3(137, 74, 134) ); geom.faces.push( new THREE.Face3(110, 52, 111) ); geom.faces.push( new THREE.Face3(182, 27, 180) ); geom.faces.push( new THREE.Face3(157, 159, 48) ); geom.faces.push( new THREE.Face3(157, 48, 50) ); geom.faces.push( new THREE.Face3(110, 58, 52) ); geom.faces.push( new THREE.Face3(138, 72, 74) ); geom.faces.push( new THREE.Face3(138, 74, 137) ); geom.faces.push( new THREE.Face3(141, 70, 72) ); geom.faces.push( new THREE.Face3(147, 23, 182) ); geom.faces.push( new THREE.Face3(141, 72, 138) ); geom.faces.push( new 
THREE.Face3(142, 70, 141) ); geom.faces.push( new THREE.Face3(155, 50, 13) ); geom.faces.push( new THREE.Face3(155, 13, 12) ); geom.faces.push( new THREE.Face3(142, 68, 70) ); geom.faces.push( new THREE.Face3(155, 157, 50) ); geom.faces.push( new THREE.Face3(105, 66, 68) ); geom.faces.push( new THREE.Face3(114, 58, 110) ); geom.faces.push( new THREE.Face3(146, 21, 23) ); geom.faces.push( new THREE.Face3(146, 23, 147) ); geom.faces.push( new THREE.Face3(105, 68, 142) ); geom.faces.push( new THREE.Face3(154, 155, 12) ); geom.faces.push( new THREE.Face3(114, 54, 58) ); geom.faces.push( new THREE.Face3(104, 64, 66) ); geom.faces.push( new THREE.Face3(104, 66, 105) ); geom.faces.push( new THREE.Face3(150, 17, 21) ); geom.faces.push( new THREE.Face3(150, 21, 146) ); geom.faces.push( new THREE.Face3(116, 54, 114) ); geom.faces.push( new THREE.Face3(116, 92, 55) ); geom.faces.push( new THREE.Face3(116, 55, 54) ); geom.faces.push( new THREE.Face3(151, 12, 16) ); geom.faces.push( new THREE.Face3(118, 92, 116) ); geom.faces.push( new THREE.Face3(167, 37, 39) ); geom.faces.push( new THREE.Face3(167, 170, 37) ); geom.faces.push( new THREE.Face3(151, 154, 12) ); geom.faces.push( new THREE.Face3(167, 39, 41) ); geom.faces.push( new THREE.Face3(98, 102, 101) ); geom.faces.push( new THREE.Face3(98, 97, 61) ); geom.faces.push( new THREE.Face3(118, 91, 92) ); geom.faces.push( new THREE.Face3(1, 16, 10) ); geom.faces.push( new THREE.Face3(98, 61, 60) ); geom.faces.push( new THREE.Face3(98, 60, 64) ); geom.faces.push( new THREE.Face3(98, 64, 108) ); geom.faces.push( new THREE.Face3(1, 151, 16) ); geom.faces.push( new THREE.Face3(98, 108, 102) ); geom.faces.push( new THREE.Face3(108, 64, 104) ); geom.faces.push( new THREE.Face3(1, 10, 6) ); geom.faces.push( new THREE.Face3(144, 17, 150) ); geom.faces.push( new THREE.Face3(171, 36, 37) ); geom.faces.push( new THREE.Face3(144, 18, 17) ); geom.faces.push( new THREE.Face3(171, 37, 170) ); geom.faces.push( new THREE.Face3(120, 91, 118) ); 
geom.faces.push( new THREE.Face3(173, 31, 33) ); geom.faces.push( new THREE.Face3(173, 33, 36) ); geom.faces.push( new THREE.Face3(173, 36, 171) ); geom.faces.push( new THREE.Face3(5, 1, 6) ); geom.faces.push( new THREE.Face3(165, 167, 41) ); geom.faces.push( new THREE.Face3(122, 87, 88) ); geom.faces.push( new THREE.Face3(165, 41, 44) ); geom.faces.push( new THREE.Face3(122, 88, 91) ); geom.faces.push( new THREE.Face3(122, 91, 120) ); geom.faces.push( new THREE.Face3(2, 1, 5) ); geom.faces.push( new THREE.Face3(124, 87, 122) ); geom.faces.push( new THREE.Face3(61, 97, 94) ); geom.faces.push( new THREE.Face3(52, 18, 144) ); geom.faces.push( new THREE.Face3(174, 32, 175) ); geom.faces.push( new THREE.Face3(86, 85, 127) ); geom.faces.push( new THREE.Face3(125, 86, 127) ); geom.faces.push( new THREE.Face3(46, 43, 163) ); geom.faces.push( new THREE.Face3(7, 8, 11) ); geom.faces.push( new THREE.Face3(127, 85, 128) ); geom.faces.push( new THREE.Face3(43, 166, 163) ); geom.faces.push( new THREE.Face3(85, 83, 128) ); geom.faces.push( new THREE.Face3(32, 30, 178) ); geom.faces.push( new THREE.Face3(128, 83, 131) ); geom.faces.push( new THREE.Face3(175, 32, 178) ); geom.faces.push( new THREE.Face3(81, 78, 131) ); geom.faces.push( new THREE.Face3(83, 81, 131) ); geom.faces.push( new THREE.Face3(145, 53, 112) ); geom.faces.push( new THREE.Face3(46, 163, 162) ); geom.faces.push( new THREE.Face3(47, 46, 160) ); geom.faces.push( new THREE.Face3(46, 162, 160) ); geom.faces.push( new THREE.Face3(131, 78, 133) ); geom.faces.push( new THREE.Face3(30, 28, 179) ); geom.faces.push( new THREE.Face3(77, 75, 135) ); geom.faces.push( new THREE.Face3(178, 30, 179) ); geom.faces.push( new THREE.Face3(78, 77, 135) ); geom.faces.push( new THREE.Face3(133, 78, 135) ); geom.faces.push( new THREE.Face3(26, 24, 181) ); geom.faces.push( new THREE.Face3(28, 26, 181) ); geom.faces.push( new THREE.Face3(135, 75, 136) ); geom.faces.push( new THREE.Face3(112, 53, 113) ); geom.faces.push( new 
THREE.Face3(179, 28, 181) ); geom.faces.push( new THREE.Face3(47, 160, 158) ); geom.faces.push( new THREE.Face3(49, 47, 158) ); geom.faces.push( new THREE.Face3(53, 59, 113) ); geom.faces.push( new THREE.Face3(75, 73, 139) ); geom.faces.push( new THREE.Face3(136, 75, 139) ); geom.faces.push( new THREE.Face3(73, 71, 140) ); geom.faces.push( new THREE.Face3(181, 24, 183) ); geom.faces.push( new THREE.Face3(139, 73, 140) ); geom.faces.push( new THREE.Face3(140, 71, 143) ); geom.faces.push( new THREE.Face3(51, 49, 156) ); geom.faces.push( new THREE.Face3(14, 51, 156) ); geom.faces.push( new THREE.Face3(71, 69, 143) ); geom.faces.push( new THREE.Face3(49, 158, 156) ); geom.faces.push( new THREE.Face3(69, 67, 106) ); geom.faces.push( new THREE.Face3(113, 59, 115) ); geom.faces.push( new THREE.Face3(24, 22, 148) ); geom.faces.push( new THREE.Face3(183, 24, 148) ); geom.faces.push( new THREE.Face3(143, 69, 106) ); geom.faces.push( new THREE.Face3(14, 156, 153) ); geom.faces.push( new THREE.Face3(59, 57, 115) ); geom.faces.push( new THREE.Face3(67, 65, 107) ); geom.faces.push( new THREE.Face3(106, 67, 107) ); geom.faces.push( new THREE.Face3(22, 20, 149) ); geom.faces.push( new THREE.Face3(148, 22, 149) ); geom.faces.push( new THREE.Face3(115, 57, 117) ); geom.faces.push( new THREE.Face3(56, 93, 117) ); geom.faces.push( new THREE.Face3(57, 56, 117) ); geom.faces.push( new THREE.Face3(15, 14, 152) ); geom.faces.push( new THREE.Face3(117, 93, 119) ); geom.faces.push( new THREE.Face3(40, 38, 168) ); geom.faces.push( new THREE.Face3(38, 169, 168) ); geom.faces.push( new THREE.Face3(14, 153, 152) ); geom.faces.push( new THREE.Face3(42, 40, 168) ); geom.faces.push( new THREE.Face3(100, 103, 99) ); geom.faces.push( new THREE.Face3(62, 96, 99) ); geom.faces.push( new THREE.Face3(93, 90, 119) ); geom.faces.push( new THREE.Face3(11, 15, 0) ); geom.faces.push( new THREE.Face3(63, 62, 99) ); geom.faces.push( new THREE.Face3(65, 63, 99) ); geom.faces.push( new THREE.Face3(109, 65, 99) 
); geom.faces.push( new THREE.Face3(15, 152, 0) ); geom.faces.push( new THREE.Face3(103, 109, 99) ); geom.faces.push( new THREE.Face3(107, 65, 109) ); geom.faces.push( new THREE.Face3(7, 11, 0) ); geom.faces.push( new THREE.Face3(149, 20, 145) ); geom.faces.push( new THREE.Face3(38, 35, 172) ); geom.faces.push( new THREE.Face3(20, 19, 145) ); geom.faces.push( new THREE.Face3(169, 38, 172) ); geom.faces.push( new THREE.Face3(119, 90, 121) ); geom.faces.push( new THREE.Face3(34, 32, 174) ); geom.faces.push( new THREE.Face3(35, 34, 174) ); geom.faces.push( new THREE.Face3(172, 35, 174) ); geom.faces.push( new THREE.Face3(7, 0, 4) ); geom.faces.push( new THREE.Face3(42, 168, 166) ); geom.faces.push( new THREE.Face3(89, 86, 123) ); geom.faces.push( new THREE.Face3(43, 42, 166) ); geom.faces.push( new THREE.Face3(90, 89, 123) ); geom.faces.push( new THREE.Face3(121, 90, 123) ); geom.faces.push( new THREE.Face3(4, 0, 3) ); geom.faces.push( new THREE.Face3(123, 86, 125) ); geom.faces.push( new THREE.Face3(95, 96, 62) ); geom.faces.push( new THREE.Face3(145, 19, 53) ); var basematerial = new THREE.MeshBasicMaterial( { color: 0x888888 } ); var mesh = new THREE.Mesh( geom, basematerial ); scene.add( mesh ); var linematerial = new THREE.LineBasicMaterial({linewidth: 1, color: 0x000000,}); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-100.0, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, -93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, -93.0, 0.0)); wire.vertices.push(new 
THREE.Vector3(-100.0, -85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-100.0, -93.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, -93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, -93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, -93.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, -93.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(100.0, -93.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, -93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, -93.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(100.0, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(20.75, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(20.75, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, -85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(2.75, -67.0, 0.0)); wire.vertices.push(new THREE.Vector3(2.75, -67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(20.75, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(20.75, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(2.75, -67.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(2.75, -67.0, 0.0)); wire.vertices.push(new THREE.Vector3(2.75, -67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(2.75, 67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(2.75, 67.0, 0.0)); wire.vertices.push(new THREE.Vector3(2.75, -67.0, 0.0)); var 
line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(20.75, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(20.75, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(2.75, 67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(2.75, 67.0, 0.0)); wire.vertices.push(new THREE.Vector3(20.75, 85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(20.75, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(20.75, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(20.75, 85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(100.0, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, 93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, 93.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, 85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(100.0, 93.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, 93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, 93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, 93.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, 93.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-100.0, 93.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, 93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, 93.0, 0.0)); var line = new THREE.Line(wire, linematerial); 
scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-100.0, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-20.75, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-20.75, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, 85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-2.75, 67.0, 0.0)); wire.vertices.push(new THREE.Vector3(-2.75, 67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-20.75, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-20.75, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-2.75, 67.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-2.75, 67.0, 0.0)); wire.vertices.push(new THREE.Vector3(-2.75, 67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-2.75, -67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-2.75, -67.0, 0.0)); wire.vertices.push(new THREE.Vector3(-2.75, 67.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-2.75, -67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-2.75, -67.0, 0.0)); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, -93.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, -93.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, -85.0, 0.0)); wire.vertices.push(new THREE.Vector3(20.75, -85.0, 0.0)); wire.vertices.push(new 
THREE.Vector3(2.75, -67.0, 0.0)); wire.vertices.push(new THREE.Vector3(2.75, 67.0, 0.0)); wire.vertices.push(new THREE.Vector3(20.75, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(100.0, 93.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, 93.0, 0.0)); wire.vertices.push(new THREE.Vector3(-100.0, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-20.75, 85.0, 0.0)); wire.vertices.push(new THREE.Vector3(-2.75, 67.0, 0.0)); wire.vertices.push(new THREE.Vector3(-2.75, -67.0, 0.0)); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 0.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); var wire = new THREE.Geometry(); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, -93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, -93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(20.75, -85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(2.75, -67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(2.75, 67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(20.75, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(100.0, 93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, 93.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-100.0, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-20.75, 85.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-2.75, 67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-2.75, -67.0, 3000.0)); wire.vertices.push(new THREE.Vector3(-20.75, -85.0, 3000.0)); var line = new THREE.Line(wire, linematerial); scene.add(line); // placeholder for the FreeCAD objects var light = new THREE.PointLight( 0xFFFF00 ); light.position.set( -10000, -10000, 10000 ); scene.add( light ); renderer.render( scene, camera ); animate(); 
};

function animate() {
    requestAnimationFrame( animate );
    render();
}

function render() {
    controls.update();
    renderer.render( scene, camera );
}
Possible Futures The People Behind OpenAI A.I. Revolutionaries | Part I You might think, based on the type of research they're doing, that the OpenAI office would be full of gadgets, full of wonder, full of weird experiments. But you'd be wrong. There are no Faraday cages. No supercomputers. No giant robots. Well, okay, there is a robot. But it's small. And it's tucked away in a side room. It's surrounded by cobbled-together protective material so that it doesn't smash into itself if it starts flailing about due to a programming error. As Jack Clark, OpenAI's strategy and communications director, phrases it: "This room is much more tool-sheddy and hacky than you'd expect AI to feel like." OpenAI is basically just a lot of desks, laptops, and bean bag chairs. On its surface—minus the robot—it feels like any other tech startup. And it functions like one, too. "We do our weekly meetings on Tuesday," Clark says, standing in front of an open area with a few dozen chairs haphazardly strewn about. There's a whiteboard in the corner and a large TV at the front. In these meetings, people stand up and update everyone on their work, whether it's a research breakthrough or details on a new piece of software from engineering. This space is also used for a daily reading group. "We have such a broad spread of expertise here—the people who work on robots, the generative adversarial people—all of them come together to soak up different ideas," Clark says. When you hear about the work people are doing here, you realize there are incredible things happening in this place. Things that have the potential to change the way we use and think about technology, the way the world conducts itself day to day, and the way we think about the nature of intelligence beyond humans. But before going any further, you need to know about a dinner that happened in August 2015. A Dinner. And a big conversation. 
This dinner took place at a restaurant in Menlo Park, California, just outside of Palo Alto. "We'd each come to the dinner with our own ideas," Greg Brockman—the co-founder of OpenAI—writes in a blog post. Brockman, who'd previously been the chief technology officer for the online payment platform Stripe, was becoming increasingly interested in AI—a field in which he saw great promise, but knew little about. Then a friend set up a meeting between Brockman and tech entrepreneur/Y Combinator (YC) president Sam Altman. They talked about Brockman's emerging interest in AI. Altman told him, "We've been thinking about spinning up an AI lab through YC. We should keep in touch." A few months later, Altman invited Brockman to the dinner. The other guests included Ilya Sutskever—a research scientist on the Google Brain team—and Elon Musk, among others. During the meal, the conversation quickly turned to AI. "Elon and Sam had a crisp vision of building safe AI in a project dedicated to benefiting humanity," Brockman recalls. The two then floated an idea that went against the current mode of AI development at big tech companies. Instead of intensively training algorithms behind closed doors, they wanted to build AI and share its benefits as widely and as evenly as possible. "The conversation centered around what kind of organization could best work to ensure that AI was beneficial," Brockman writes. They decided that it would need to be a nonprofit because only then could they prioritize a good outcome for all instead of their own self-interest. Shortly after the dinner, OpenAI was born, with Brockman and Sutskever at the helm. Brockman would focus on building the team and getting the culture right. Sutskever would focus on their research agenda. In a short period, they would raise more than $1 billion in funding. 
And, one by one, they'd start hiring their team. Over the next several months, they managed to attract some of the top AI researchers in the country, luring them away from major tech companies and academic institutions with the promise of competitive salaries and freedom from business requirements. For many of these researchers, it was the best of all worlds, combining the freedom of academia with the backing of a well-funded tech company. They could focus on what was best for AI. Like most of OpenAI's researchers and engineers, Vicki Cheung found the proposition intriguing. It was her chance to do what she always wanted to do, the thing that she couldn't quite pull off at other places where she'd worked: Build technology with a big social impact without worrying about whether it made business sense. If you ask her how she ended up at OpenAI, though, Cheung will start off by telling you how she cheated in her high school physics class back in Hong Kong. From bots to infrastructure "We had these online assignments," Cheung says, "and I just didn't want to do them." So Cheung did what any future software engineer would do: She wrote a bot that filled in the answers for her. And then she shared it with all of her classmates. To her, the logic was simple. If you can automate something—even if that something is schoolwork and you could get in serious trouble for cheating—you don't need to waste your time doing it yourself. Cheung is very matter-of-fact when she tells this story, as if it were the quizzes' fault she wrote the program so easily. "I don't think most high school online assignments were that sophisticated," she says. "They put a lot of their answers and equations on the page, so it was really easy to crawl." 
In other words, if you design an exploitable system, you must know that someone is eventually going to exploit it, right? The teacher eventually caught on and, without ever confronting Cheung, put an end to it. The new law of the land: No more online assignments. "It was a win for everyone," Cheung says without the slightest sense of vindication. Because what's better than automating a task? Getting rid of the task altogether. Cheung was clearly smart. And she clearly needed a better outlet for her talent. So during high school, she started doing low-level engineering work for a professor at the University of Hong Kong. She also went to a summer camp at Carnegie Mellon University during her final year of high school. One of her computer science professors there was so blown away by her talent that he asked her to apply for admission a week before the fall semester started. She was accepted, and moved to the States. Eventually, after graduating from Carnegie Mellon, she found her way into the tech industry, becoming a founding engineer at Duolingo, the free language training company. Throughout all of it—from creating quiz-taking bots to becoming the founding engineer of a leading online language training company—Cheung has stuck to her belief that technology should benefit other people, that it should have a positive impact on society, and that it should be shared. So, when she heard about OpenAI and its mission, Cheung started contacting people who could get her in touch with Greg. And it worked. In their meeting, Greg explained to Cheung his vision for OpenAI and the type of team he wanted to build. She was immediately on board. In her words, it was "the right problem at the right time." Cheung would become one of the first engineers at OpenAI. Along with Brockman, she would build the infrastructure needed to do state-of-the-art AI research. The challenge was that neither Cheung nor Brockman knew exactly what the researchers would be doing. 
Cheung explains, "We knew that researchers would need somewhere to run their experiments. But we didn't know what kind of stuff they were going to run." It was like designing a city grid without knowing the size of a car, or what normal traffic patterns look like. She and Brockman were, more or less, working in the dark. But they continued to build. They studied architectures. They spent a lot of time working with researchers, trying to understand how they preferred to work. And eventually, they put the core infrastructure in place for researchers to run thousands of experiments. When the day finally came to start running those experiments and documenting their results, Cheung and Brockman were amazed to see that the infrastructure held up better than they'd expected. Out of the entire infrastructure, only a few things were scrapped. Then one research team started talking about creating a rather ambitious project. It was big and complicated. And it would push OpenAI's infrastructure to its limits. It was Universe. THE EVER-EXPANDING UNIVERSE OF A.I. Before going into exactly what Universe is, you should probably know some basic AI terminology. There's the general and all-encompassing term, artificial intelligence (which we won't go into). Then there's machine learning, which is a practice that's basically a subset of AI. And then there's deep learning, which is a subset of machine learning. Machine learning is the practice of teaching machines to perform a certain task, rather than coding them to do it—for example, teaching a machine to recognize a photo without writing a line of code that commands it to. Machine learning is a little like our standard way of teaching math, for instance: A teacher shows a student how to solve a problem in class, and the student applies that lesson to other problems. 
Deep learning is a form of machine learning in which the machine teaches itself to perform that task through repeated exposure to massive amounts of data. Sticking with the school analogy: Deep learning is akin to that student teaching herself to solve the problem by tackling it over and over and over and over. Along with these subsets, you also have an approach that's called "reinforcement learning." Applied to deep learning, this approach focuses on giving the machine a reward for successfully teaching itself a set of tasks. It's a little like offering the kid who's teaching herself math an ice cream cone if she succeeds at solving her problems. But, for that analogy to truly hold up, the kid would never sleep, eat, go to the bathroom, text her friends, or get bored. She would spend all her time learning math. And, in one training session, she would be given data sets equal to the amount of information a normal child learns over the course of several months, if not years. So there are a few limitations. Anyway, this at least gets you to a place where you can understand exactly what Universe is and why it's important. "A lot of people here at OpenAI are interested in deep reinforcement learning," Dario Amodei—a research scientist using Universe—says. Amodei notes that reinforcement learning wasn't initially a major part of the deep learning revolution, which started around 2011. Only relatively recently did it start to gain traction with researchers. "It really began picking up steam when it was used by AlphaGo to beat the reigning champion [of Go] back in 2016," he says. For those unfamiliar with AlphaGo, this is sort of like saying human space travel picked up steam after Yuri Gagarin became the first person to orbit the Earth and return safely. And that's without being hyperbolic. In March 2016, Google DeepMind's AlphaGo AI program went up against Lee Sedol, one of the world's highest-ranked Go players, in a five-game match. 
When it was over, Sedol had only won a single match. "I am speechless," Sedol was quoted as saying. So were most of the observers. For decades, researchers had considered Go the Mount Everest of achievements in AI. That's because Go, which dates back to ancient China, involves a ton of strategy. Unlike chess, which has roughly 400 possible positions after each side's first move, Go has about 130,000. In addition to intelligence, Go requires ingenuity and improvisation. These additional aspects made AlphaGo's achievement even more remarkable. The victory pointed to a near future in which AI would no longer be confined to a narrow series of tasks. Artificial general intelligence—which some have likened to human intelligence—was nearer than previously thought. And deep reinforcement learning was emerging as the method for achieving it. This is one of the primary reasons that OpenAI decided to develop Universe. But rather than picking Go as the environment for its platform, the OpenAI team decided to turn to a technology more popular among researchers: video games. "Over the last three years," Amodei explains, "the tool that most of the people studying reinforcement learning used to test out new approaches and then compare the results with each other was Atari—literally the Atari games from the 1970s." The Arcade Learning Environment (ALE) was introduced in 2013 by researchers at the University of Alberta. It used an Atari 2600 emulator to train AI to further "the development of algorithms capable of general competency in a variety of tasks and domains without the need for domain-specific tailoring." In other words, AI got closer to artificial general intelligence by playing and replaying Atari games in the ALE. And although the ALE didn't explicitly describe itself as an open source project, no one owned it. "So anyone is free to use it," Amodei says. That made it a suitable starting point for Universe. 
What happens when the AI—or "agent" as it's known—plays and beats all of the Atari games, like Google DeepMind's AI agent did in 2015? What do you do then? Can that agent take what it's learned from these more primitive environments and apply it to more complexly rendered ones? "We felt that, in order to train an agent that can act more broadly," Amodei says, "we needed more environments than that." So the Universe team sought to expand beyond Atari. Doing so would allow an agent to not only play and conquer the 8-bit worlds of Adventure! and Pitfall, but also more modern, graphical first-person shooter games, 3-D-exploration worlds, and mobile Flash environments like Candy Crush. With each game, each world, and each environment that it conquers, the AI agent remembers. And as it remembers, it learns. It uses this knowledge to adapt to the next game, the next world, and the next environment. And so on and so forth, inching further and further toward general intelligence. As they sought to expand the types of games, worlds, and environments that agents could play in, the Universe team also wanted to avoid creating barriers for other researchers to add new ones in the future. "We felt that the way to give both ourselves and the community more power to train agents was to build a single interface," Amodei says. "And that way, if you wanted to integrate a new game, you just needed to be able to connect to a server on a machine that's playing that game." So, while Universe is not itself an AI agent, it's a useful platform for training those agents. Universe diagram courtesy of OpenAI And, like our actual universe, it's ever-expanding. If you're a researcher out there training an agent to play video poker, Universe gives you the power to let that agent go on to learn to play GoldenEye or Super Mario Bros. or even the horrendously awful Atari E.T. game—if you're so inclined. 
And, in doing so, your agent can begin to learn to apply the knowledge it's gained to games, worlds, and environments it hasn't even encountered yet. "One of the promises of this platform is real transfer learning," says Catherine Olsson, a software engineer on the Universe project. "This means learning on a set of nine tasks and then doing a tenth task you've never seen before." This is similar to how humans learn. We take our experience from a ton of narrow tasks and apply that knowledge—along with common sense gleaned through years of experience—to tasks we've never undertaken. Take, for instance, riding a motorcycle. When people first get on one, they're not usually stepping into a completely unknown situation. They most likely draw upon knowledge of similar-yet-different tasks: balancing, riding a bike, driving a car through traffic, etc. Lots of engineers and researchers, like Olsson, joined OpenAI precisely because they wanted to bridge this gap between what we know about human understanding and what we can do with computer algorithms. The Mind As A Computer "It was thinking about human cognition as a computational process," Olsson says about what first sparked her interest in AI. Her interest took root in her public middle school's gifted student program. "They gave us this philosophy of mind course," Olsson says. "They asked a bunch of 12-year-olds to be introspective about the meaning of consciousness. It was very inspiring, thinking: 'Okay, I have this brain and it somehow gives rise to me. What's going on there?'" That question would remain with her as she progressed through school, developing an interest in programming along the way. "In high school, I had a very good friend who was interested in programming," Olsson says. 
"And for some reason, the school was going to let him teach his own computer science elective." She signed up, despite the fact that her only previous experience with programming—seeing nerdy boys in high school obsess over coding their graphing calculators—had been underwhelming. Catherine Olsson "I was almost certain I was going to hate it," she says. "But then it quickly became my favorite class. There was very little lecturing. We read book chapters and had the occasional quiz, but mostly we'd just come in and build something. Totally free choice. Just taking the tools we'd been given and making whatever we wanted." As an undergraduate at the Massachusetts Institute of Technology (MIT), Olsson pursued her dual interests in philosophy of mind and programming by double-majoring in computer science and cognitive science. Along the way, she gained practical experience doing software engineering internships during her summer breaks. She also got involved in several open source projects. Following undergrad, Olsson went on to a Ph.D. program in neuroscience at New York University. Soon afterward, the deep learning revolution began to take off. "It was clear that the next big thing had arrived," Olsson says. Upon realizing that academia wasn't for her, Olsson decided to make the move from studying how the human brain works to researching how to mimic that process with machines. She decided to pursue a career in machine learning. Specifically, she decided that she wanted to try and get a job at OpenAI. After tracking down Brockman, whom she had briefly met when he was a fellow undergrad at MIT, she asked for a position—any position whatsoever. "I was not expecting that they would be hiring just engineers," Olsson says. "I thought they'd tell me to spend six months brushing up on my machine learning skills. But Greg was, like, 'No, come build something for us.'" The OpenAI opportunity appealed to her on two levels. 
One, it offered the chance to work on cutting-edge deep learning projects. And two, it gave her the ability to develop projects in the open. "The open source ethic has been extremely important to me," Olsson says. "And that was an important reason to come to OpenAI, specifically." Now that the Universe platform has been released, Olsson—along with the rest of the team—is excited about the prospect of moving beyond video games entirely. "We're trying to bridge the gap from games to real world tasks," she says. "Like booking a flight online." Amodei echoes her. "The goal with Universe is to provide a single platform that allows you to connect to a computer," he says, "and train an agent to do anything a human can do on a computer." On the one hand, this prospect is extremely exciting. On the other, though, it's somewhat concerning. While humans have done and continue to do amazing things with computers, there's also a very obvious flip side. If you can train an agent to mimic the beneficial or benign things a human can do on a computer, can't you also train it to mimic what not-so-great people do? The answer to this is, unfortunately, yes. What's more, you don't have to necessarily train an AI agent to do something not-so-great or downright nefarious. You can, because of numerous security vulnerabilities, trick it into doing those things. For its part, OpenAI is fully aware of these security concerns. WHAT AN OLD GERMAN HORSE HAS TO DO WITH A.I. SECURITY Ian Goodfellow is a big deal in the world of deep learning. In fact, he literally co-wrote the book on the subject. It's called Deep Learning. Authored with two other big names in the field, Yoshua Bengio and Aaron Courville, this 2016 book has already racked up 444 citations on Google Scholar—an extremely impressive feat in the slow-moving world of academic publishing. 
Goodfellow's path to artificial intelligence, however, didn't start with a grade-school philosophy class or a high-school hack to an online physics quiz. It began with a death sentence. In late 2011, while a Ph.D. student at the University of Montreal, Goodfellow developed a bad headache at the back of his neck. "I went to the doctor just to confirm that I didn't have meningitis," he says. After being examined by the doctor, he received a far graver diagnosis: a brain hemorrhage. "He told me that I was likely to die in the next few hours," Goodfellow says. While waiting for an MRI to confirm that diagnosis, Goodfellow decided to call a fellow researcher. "I began brain-dumping all of these machine learning ideas I wanted him to try out if I died," he says. At that point, Goodfellow realized AI was pretty important to him. "I was like, 'OK, if this is the way I spend my final moments in life, I'm pretty clearly committed,'" he says. After the MRI failed to show anything (and after Goodfellow didn't die), he was sent home and told that nothing was wrong with him. Later, while interning at Google, Goodfellow—who didn't have insurance at the time—paid $600 to see a neurologist in Mountain View about the still-present pain. "By poking me in the neck, he diagnosed me as having a pinched nerve," he says, shaking his head. Despite the psychological toll of being told he might die, Goodfellow isn't entirely bitter about the whole experience. "If there had been a remotely competent doctor in Montreal, I wouldn't have had this experience of realizing that my last wish was to make sure my machine learning ideas got tried out," he says. 
Goodfellow says that, in retrospect, the ideas that he brain dumped to his friend were not that great. "They were things like sparse coding," he says. "No one cares about sparse coding anymore." Today, he focuses on something people do care a great deal about: adversarial training, or—to put it another way—AI security. "In the past, security has revolved around application-level security, where you try to trick an application into running the wrong instructions," he explains. "Or network security, where you send messages to a server that can get misinterpreted. Like you send a message to a bank saying, 'Hey, I'm totally the account owner, let me in,' and the bank gets fooled into doing it, even though you're not actually the account owner." But with AI, and specifically machine learning, security is a different animal. "With machine learning security, the computer is running all the right code and knows who all the messages are coming from," he says. "But the machine learning system can still be fooled into doing the wrong thing." Goodfellow equates this with phishing. With standard phishing, the computer isn't tricked, but the person operating the computer is. It's the same for AI. Its code remains uncorrupted. But it is tricked into doing different tasks than it was trained for. We've all heard stories about someone's grandfather getting a Nigerian-prince-scam-style phishing email, promising untold riches in exchange for sending $1,000 or $2,000. The grandfather, of course, ends up losing the money and gets nothing in return. Well, it turns out AI is even more vulnerable than someone's grandfather. To make things worse, AI has the potential to be more powerful than anyone's grandfather. This is no knock against your or anyone else's elder patriarch. 
It's just that Gramps falling for the Nigerian Prince scam is not as problematic as, say, a machine learning algorithm used for the financial services sector being tricked into helping hackers defraud a major bank or credit card company. "If you're not trying to fool a machine learning algorithm, it does the right thing most of the time," Goodfellow says. "But if someone who understands how a machine learning algorithm works wanted to try and fool it, that'd be very easy to do." Furthermore, it's very hard for the person building the algorithm to account for the myriad ways it might be fooled. Goodfellow's research focuses on using adversarial training on AI agents. This approach is a "brute force solution" in which a ton of examples meant to fool an AI are generated. The agent is given these examples and trained not to fall for them. For example, you might train the AI used in a self-driving car not to fall for a fake sign telling the AI to halt in the middle of the highway. Goodfellow has developed (along with Nicholas Papernot) cleverhans, a library for adversarial training. The name comes from a German horse who became famous in the early 20th century for his ability to do arithmetic. A German math teacher (also a self-described mystic and part-time phrenologist) bought the horse and claimed that he had taught it to add, subtract, multiply, divide, and even do fractions. People would come from all over and ask Clever Hans to, for example, divide 15 by 3. The horse would then tap his hoof 5 times. Or people would ask it what number comes after 7. The horse would tap his hoof 8 times. The problem was, Clever Hans wasn't that clever—at least not in the way his teacher thought. 
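The attack side of the recipe described above, generating inputs crafted to fool a model, can be sketched in miniature. The two-weight logistic "classifier" and the numbers below are invented for illustration and are not the cleverhans API; the perturbation step, which nudges each input feature by a fixed amount in the direction that raises the model's loss, follows the sign-of-the-gradient idea behind Goodfellow's fast gradient sign method.

```python
import math

def predict(w, x):
    """Probability the logistic model w assigns to class 1 for input x."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def adversarial_example(w, x, eps):
    """Nudge each feature of x by eps against the model.

    For a logistic model and a class-1 input, the loss gradient with
    respect to the input is proportional to -w, so stepping each
    feature by -eps * sign(w) raises the loss as fast as a
    fixed-size per-feature step can.
    """
    return [xi - eps * (1.0 if wi > 0 else -1.0) for wi, xi in zip(w, x)]

w = [2.0, -1.0]          # a tiny "trained" model (made up for this sketch)
x = [3.0, -2.0]          # an input it classifies confidently as class 1
p_clean = predict(w, x)  # well above 0.5

x_adv = adversarial_example(w, x, eps=3.0)
p_adv = predict(w, x_adv)  # below 0.5: a modest nudge flips the answer
```

Adversarial training, in this picture, is simply the next step: keep the correct label on x_adv, add it back into the training set, and refit, so the model stops falling for that family of nudges.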
A psychologist named Oskar Pfungst discovered that the horse wasn't actually doing math. Rather, he was taking his cues from the people around him. He'd respond to these people's body language, tapping his hoof until he got a smile or a nod. Pfungst illustrated this by having the horse wear a set of blinders. When asked a question, the horse began tapping his hoof. But, unable to see the person who'd asked the question, he just kept tapping indefinitely. "Machine learning is a little like Clever Hans," Goodfellow says, "in the sense that we've given the AI these rewards for, say, correctly labeling images. It knows how to get the rewards, but it may not always be using the correct cues to get to those rewards. And that's where security researchers come in." Goodfellow's cleverhans library has been open sourced. "With traditional security, open source is important because, when everybody can see the code, they can inspect it and make sure it's safe," he says. "And if there's a problem, they can report it relatively easily or even send the fix themselves." A similar dynamic holds for machine learning security. Generally speaking, that is. "For machine learning, there isn’t really a fix yet," Goodfellow says. "But we can at least study the same systems that everybody is using and see what their vulnerabilities are." When asked if there's anything that has surprised him about his experiences doing machine learning research, Goodfellow talks about the time he ran an experiment for a machine learning algorithm to correctly classify adversarial examples. He had just read a research paper that made some claims he thought were questionable. So he decided to test them. While his experiment was running, Goodfellow decided to step out to grab some lunch with his manager. "I told him," Goodfellow recalls, "'when we get back from lunch, I'm not sure the algorithm's going to correctly classify these examples. I bet it will be too hard. 
And, even after this training, it will still misclassify them.'" But when he came back, Goodfellow found that the algorithm not only recognized the adversarial examples, it had also set a record for accuracy in classifying the normal ones. "The process of training on the adversarial examples had forced it to get so good at its original task that it was a better model than what we had started with," Goodfellow says. At that moment, Goodfellow realized that, for AI, adversarial training wasn't just important for finding vulnerabilities. "By thinking about security," he says, "we could actually make everything better across the board." THE NOT-SO-GLAMOROUS WORK OF THE A.I. REVOLUTION This is how AI is developed. It's Vicki Cheung poring over research, trying to build a Kubernetes cluster. It's Catherine Olsson sitting at a workstation, helping build a platform for an ever-expanding universe for AI agents. It's Ian Goodfellow stepping away to grab a sandwich while an algorithm he's testing gets smarter and more secure. It's work that seems mundane. Developing AI means sitting down each and every day in front of a computer, thinking about problems when you go home at night and during your commute in the morning (or even while you wait for a potentially grave medical diagnosis), focusing on achieving incremental victories, and dealing with unforeseen setbacks. But amidst this day-in and day-out grind, researchers and engineers are working toward a goal that, for many people outside of AI, is more science fiction than science fact. And, in many respects, it's always been that way. The people behind the AI revolution—the people at OpenAI, the people at the MIT AI lab in the 1960s, the people building AI startups today, the people who work tirelessly to make sure the AI we eventually build isn't an a**hole—they're just people. 
They just happen to be solving a problem that will change our lives forever.
Early diastolic peak velocity of left ventricular wall segment lying in isovolumic relaxation period as determined by tissue Doppler imaging. The early diastolic peak velocity of left ventricular (LV) wall segment has always been regarded as appearing in the rapid filling phase. However, we find some segments of which early diastolic peak velocities appear in the isovolumic relaxation period (PVIVR segments). The present study aimed to investigate the characteristics of PVIVR segments. Tissue Doppler imaging was performed in each of the 16 segments of LV wall in 99 patients with known or suspected coronary heart disease and 50 normal subjects. Early diastolic velocity pattern was classified as PVIVR, post-systolic shortening (PSS) and normal pattern. The multivariate logistic regression analyses showed that the significant echocardiographic predictors of the presence of PVIVR in a patient were transmitral E/A ratio and isovolumic relaxation time. Segmental early diastolic velocity pattern was significantly associated with actual coronary stenosis, relative coronary stenosis and wall motion score. PVIVR segments had a lower early diastolic peak velocity than other segments. PVIVR segments more frequently appear in the territory with the relatively mildest coronary stenosis, whereas PSS segments more frequently appear in the territory with the relatively most severe coronary stenosis. Patients with PVIVR have lower global LV diastolic function. A decreased early diastolic peak velocity of PVIVR segments does not necessarily mean impaired myocardial relaxation.
Personality, attachment and sexuality related to dating relationship outcomes: contrasting three perspectives on personal attribute interaction. Although people can bring personal attributes to their relationships that affect how satisfying and enduring those relationships are, it is more often personal attribute interaction that directly determines romantic relationship outcomes. In this study, three general perspectives on personal attribute interaction (the similarity, complementarity and exchange perspectives) were contrasted empirically in their ability to predict dating relationship outcomes. Based on questionnaires completed by a sample of 44 heterosexual dating couples, feelings of relationship satisfaction were most closely associated with the interaction of socially valuable attributes, generally supporting the exchange perspective. Similarity of personal attributes was also connected with relationship satisfaction; however, this association was in the negative direction. That is, couples with dissimilar personality traits, attachment styles and sexual strategies were significantly more satisfied with their dating relationships. Complementarity of personal attributes had no link to satisfaction, but complementary couples experienced significantly higher ratings of relationship commitment, especially couples with complementary personalities. Discussion focused on the differences between personal attribute connections with romantic satisfaction and commitment and on the limitations of the present study.
Lewis-acid induced disaggregation of dimeric arylantimony oxides. The previously known dimeric arylantimony oxides (Ph3SbO)2 and [2,6-(Me2NCH2)2C6H3SbO]2 were disaggregated by the Lewis acid B(C6F5)3 giving rise to the formation of the Lewis pair complexes Ph3SbOB(C6F5)3 and 2,6-(Me2NCH2)2C6H3SbOB(C6F5)3 having short bipolar single Sb-O bonds.
Not so fabulous: SEC now 1-for-4 in financial crisis court cases

On Thursday, nearly five years after the financial crisis, the Securities and Exchange Commission proved that it was able to hold one mid-level, thirtysomething former trader turned grad student accountable for crimes of the financial crisis. Justice served! But the fact that a jury found former Goldman Sachs GS trader Fabrice Tourre liable isn’t enough to change this: The SEC’s track record on prosecuting financial crisis crimes is pathetic. In nearly five years, the Wall Street regulator has brought just four court cases related to the financial crisis against a total of six individuals, all of whom were relative bit players. Of those, Tourre is the only one to be found liable. “Their track [record] of the cases that have gone to trial has not been very good,” says Thomas Gorman, a partner at law firm Dorsey and Whitney and a former SEC enforcement official. And the SEC’s actual record is more like 1-in-7. That’s because the SEC settled charges against former Bear Stearns hedge fund managers Ralph Cioffi and Matthew Tannin after the two were acquitted in a criminal trial brought by the Department of Justice. A judge signed off on the SEC’s settlement only after calling the fine the regulator imposed “chump change.” In another instance, the SEC had to do an about-face and ask a judge to drop a case it had brought against Edward Steffelin, who had advised JPMorgan Chase JPM on a mortgage bond that went bust. In response, the New York Times wrote that “if Mr. Steffelin is going to emerge as a ‘poster child’ for anything, it will be as a victim of regulatory overreach.” And we know there were far more than nine people who made deals during the financial crisis that were less than they seemed.
Private investors have brought cases that have uncovered e-mails and other evidence that prove bankers knew they were selling clients garbage, like the one against Morgan Stanley MS in which its bankers suggested a deal they were putting together be called “shitbag.” Yet, Morgan Stanley has never paid a fine to the SEC related to a mortgage deal. The SEC has recently said it will take a look at some of these private cases to see if there is anything it missed. Good thinking. The SEC has brought 55 other financial crisis related cases that were settled before they went to trial, in some instances for big fines. Goldman, for one, paid $550 million. But most of the settlements were with companies, not individuals. And the total amount comes nowhere near what investors actually lost on Wall Street’s crappy mortgage bonds, or the pain suffered by the people who got a mortgage funded by these Wall Street deals that they believed was safe but ended in foreclosure. Nevertheless, the SEC was quick to take a victory lap after the Tourre result. Andrew Ceresney, co-director of the SEC’s Division of Enforcement, said in a statement, “We will continue to vigorously seek to hold accountable, and bring to trial when necessary, those who commit fraud on Wall Street.”
Background
==========

Polyploidy occurs either through the combination of two or more genomes from different parents (alloploidization), or the multiplication of an endogenous genome (autoploidization). The majority of flowering plants have undergone polyploidization (whole genome duplication events \[WGD\]) during their evolutionary history, suggesting that it provides a mechanism that can increase the fitness of an organism \[[@B1]\], possibly through heterosis \[[@B2]\]. Two WGD events are dated to have occurred before the diversification of extant seed plants and extant angiosperms \[[@B3]\]. Analysis of the *Arabidopsis thaliana* genome supports three more recent WGD events (named γ, β and α). Evidence from investigations on the genome sequences of *Vitis vinifera* and *Medicago truncatula* \[[@B4]-[@B6]\] suggests that the first, or γ event, extends to all the core-eudicots and many other plant species. Polyploidization is relatively common in agricultural and commercial species, such as wheat (*Triticum aestivum*), potato (*Solanum tuberosum*), coffee (*Coffea arabica*) and cotton (*Gossypium hirsutum*), indicating that this evolutionary mechanism may be important in plant domestication. Polyploidization involves complex genetic and epigenetic processes, and genome duplication is often followed by changes in gene expression and gene loss \[[@B7]-[@B14]\]. Complementary hypotheses that explain this phenomenon suggest that selection is based on either absolute gene dosage or relative gene dosage (dosage balance) \[[@B15]\]. The absolute gene dosage hypothesis states that gene networks have balanced states of interaction that are critical for proper function and any disturbance to the network's stoichiometry of interaction is not optimal for plant survival. The relative dosage hypothesis argues that a gene product can have multiple interactions that may assist in the survival of the plant, upon which selection is based.
Duplicated genes generated by polyploidization events are referred to as homeologs. The fate of homeologous genes can be divided into four general categories: conservation or redundancy, nonfunctionalization (gene function loss for one copy), subfunctionalization (partitioning ancestral functions/expression patterns between duplicated genes) and neofunctionalization (evolution of a gene copy to a new function) \[[@B16],[@B17]\]. The relative ratio of these gene fates may differ between species. *Nicotiana* species are excellent models for investigating plant polyploidization. Approximately 40% of *Nicotiana* species are allotetraploids \[[@B18],[@B19]\]. With an estimated age of 0.2 Myr \[[@B20]\], *Nicotiana tabacum* is a relatively young allotetraploid originating through the hybridization of *Nicotiana sylvesteris* (maternal, S-genome donor) \[[@B21],[@B22]\] and *Nicotiana tomentosiformis* (paternal, T-genome donor) \[[@B21],[@B23]\]. Extensive studies within the *Nicotiana* genus, and specifically within *N. tabacum*, including the first generation of a synthetic *N. tabacum*, have revealed a complex landscape for polyploid genome evolution \[[@B24]\]. Many evolutionary changes in the tobacco genome have been elucidated. They include evidence for an early genomic shock \[[@B25]\], a great increase in the frequency of heterozygosity and T-genome repeat losses leading to genome size reduction \[[@B26],[@B27]\]. Other evolutionary events, such as intergenomic translocations \[[@B24],[@B28],[@B29]\] and epigenetic patterns of 45S rDNA expression, have been characterized as well \[[@B30]\]. In addition, gene expression studies of *N. tabacum* have been performed using microarrays \[[@B31]\], although the technology may have limited ability to distinguish between homeologs. This study presents a characterization of the *N.
tabacum* transcriptome constructed from an evolutionary perspective by combining Next Generation Sequencing (NGS) and expression analysis with a phylogenetic approach applied on a genomic scale.

Results
=======

Transcriptome assembly and annotation for a polyploid species
-------------------------------------------------------------

A set of expressed sequence tags (ESTs) was generated from leaves of *N. tabacum* and modern day representatives of its progenitor species *N. sylvesteris* and *N. tomentosiformis*. Test assemblies of the *N. sylvesteris* ESTs were generated using several programs (see Materials and Methods). GsAssembler produced longer contigs and was significantly faster than the other assembly programs (data not shown), so it was selected for further optimization. An assembly strategy was adopted to maximize contig length while attempting to separate homeologous genes within *N. tabacum*. To identify optimal parameters, a set of assemblies was conducted using four EST datasets generated with 454 sequencing chemistry: *i- N. tabacum* ESTs, *ii- N. tomentosiformis* ESTs, *iii- N. sylvesteris* ESTs and *iv-* a combined dataset of the *N. sylvesteris* and *N. tomentosiformis* ESTs (to represent a synthetic polyploid transcriptome). The contigs generated from the four data sets were analyzed with the minimum overlap identity parameter set to a range of values between 75% and 99% (Figure [1](#F1){ref-type="fig"}). The *N. tabacum* ESTs and the synthetic polyploid data set produced a similar profile. An increase in the number of contigs was observed using a 97% identity setting (Figure [1](#F1){ref-type="fig"}). Unlike the *N. tabacum* and combined assemblies, the number of contigs in the individual *N. sylvesteris* and *N. tomentosiformis* assemblies did not increase at this level (Figure [1](#F1){ref-type="fig"}). The result suggested that 97% was the optimal identity threshold that could be used to separate homeologous sequences in the *N.
tabacum* data set and homologous sequences in the combined data set without having a detrimental impact on the *N. sylvesteris* and *N. tomentosiformis* assemblies.

![***Nicotiana* EST assemblies.** Chart showing the number of contigs in EST assemblies generated with GsAssembler using minimum overlap identity levels between 75 and 99% (see Methods). Assemblies were carried out from four data sets; *N. sylvesteris* (green), *N. tomentosiformis* (blue), *N. tabacum* (red) and a hybrid set of *N. sylvesteris* and *N. tomentosiformis* (brown) sequences.](1471-2164-13-406-1){#F1}

All four data sets showed an increase in the number of contigs when an identity setting of 99% was used. This level was considered too stringent as it was likely to be separating sequences based on sequencing errors (Figure [1](#F1){ref-type="fig"}). The assemblies based on an identity of 97% therefore provided the best data sets for subsequent analysis. This was further supported by manual inspection of contigs from the *N. tabacum* assemblies using the Tablet assembly viewer \[[@B32]\]. Manual inspection confirmed that contigs with more than 3 SNPs per 100 bp generated in the 95% identity assembly had correctly been separated into two contigs in the 97% identity assembly (data not shown). Relative to the number of contigs in either individual assembly, the total number of contigs in the combined *N. sylvesteris* and *N. tomentosiformis* assembly was reduced, suggesting the collapse of orthologous sequences in the combined assembly. The lower number of contigs for the *N. tabacum* assembly compared with the *N. sylvesteris* and *N. tomentosiformis* assemblies may be partially explained by the higher number of sequences included in these assemblies. Increasing the number of *N. tabacum* reads with additional sequencing libraries, not included in this study, did indeed increase the number of contigs in the assembly (data not shown).
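The role of the minimum overlap identity parameter can be illustrated with a minimal sketch (hypothetical helper functions, not GsAssembler's actual algorithm): two aligned sequence variants are kept as separate contigs only when their percent identity falls below the threshold, which is why homeologs at roughly 96% identity separate at the 97% setting but collapse at 95%.

```python
# Illustrative sketch of the identity-threshold logic used to separate
# homeologs during assembly. These helpers are hypothetical; the real
# assembler works on overlapping reads, not full pairwise alignments.

def percent_identity(a: str, b: str) -> float:
    """Percent identity over an ungapped alignment of equal length."""
    assert len(a) == len(b)
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

def separates(a: str, b: str, min_overlap_identity: float = 97.0) -> bool:
    """True if the two variants would be split into separate contigs."""
    return percent_identity(a, b) < min_overlap_identity

# Two homeologous fragments differing at 4 of 100 positions (96% identity)
# separate at the 97% setting but collapse at 95%.
s_copy = "A" * 96 + "CCCC"
t_copy = "A" * 96 + "GGGG"
print(separates(s_copy, t_copy, 97.0))  # True  (kept apart)
print(separates(s_copy, t_copy, 95.0))  # False (collapsed)
```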
However, a more likely explanation for the lower number of contigs in the *N. tabacum* and combined assemblies was that there were no, or very few, sequence polymorphisms between the orthologous genes of the ancestral parents, making it impossible to separate them during assembly at 97% identity. To investigate the percentage of homeologs that were collapsed during the assembly process, reads from *N. tabacum*, *N. sylvesteris* and *N. tomentosiformis* were mapped onto the *N. tabacum* assembly produced with the 97% identity setting. Sequence polymorphisms could not be detected between the reads from the three species for 67% of the *N. tabacum* contigs (9718 contigs), indicating that sequences for a large portion of the assembly were likely to have collapsed. This also meant that these sequences were not amenable to subsequent phylogenetic analysis. The remaining 33% of the sequences showed SNPs between the *N. tomentosiformis* and *N. sylvesteris* orthologs. When mapped against the *N. tabacum* assembly, a low number of *N. tabacum* sequences (3.4%) showed SNPs, suggesting either the collapse of homeologous sequences in the assembly or sequencing errors. The three separate assemblies for *N. tabacum*, *N. sylvesteris* and *N. tomentosiformis* transcripts were further assembled using GsAssembler. In order to cluster the homeologous and homologous sequences across all three assemblies, the identity parameter of this combined *Nicotiana* assembly was set to the lower stringency level of 95%. PhygOmics, a custom data processing pipeline, was developed in order to carry out a phylogenetic analysis of the sequences for the entire transcriptome. The 17,220 clusters generated from the combined *Nicotiana* assembly were processed through the pipeline, which selected 7974 clusters containing at least one contig of each individual species for further analysis.
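The cluster-selection step can be sketched as a simple filter (an illustrative stand-in for the actual pipeline code; the species-prefixed contig IDs are an assumed naming scheme):

```python
# Hypothetical sketch of the cluster filter: keep only clusters that
# contain at least one contig from each of the three species.

REQUIRED = {"Ntab", "Nsyl", "Ntom"}  # N. tabacum, N. sylvesteris, N. tomentosiformis

def species_of(contig_id: str) -> str:
    """Assumes contig IDs carry a species prefix, e.g. 'Ntab_c0042'."""
    return contig_id.split("_", 1)[0]

def filter_clusters(clusters: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return only clusters covering all required species."""
    return {
        cid: contigs
        for cid, contigs in clusters.items()
        if REQUIRED <= {species_of(c) for c in contigs}
    }

clusters = {
    "cl_0001": ["Ntab_c1", "Nsyl_c7", "Ntom_c3"],  # kept: all three species
    "cl_0002": ["Ntab_c2", "Ntab_c9", "Nsyl_c8"],  # dropped: no Ntom contig
}
print(sorted(filter_clusters(clusters)))  # ['cl_0001']
```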
Alignments were then extracted and filtered by the length of the sequence overlap (minimum of 100 bp) and the average alignment percentage identity (minimum of 75%). The consensus sequences for each of the clusters were annotated based on homology using BlastX \[[@B33]\]. Searches were conducted against four datasets and annotation results are summarized in Table [1](#T1){ref-type="table"}. As expected, the *Nicotiana* clusters demonstrated the highest number of matches against the tomato gene model dataset (ITAG2). InterProScan \[[@B34]\] was used to perform a protein domain analysis on 13,504 of the clusters, 6913 of which had been annotated using the BlastX method previously described.

###### **Annotation results from combined *Nicotiana* assembly based on homology**

  **Database**            **Number annotated (%)**
  ----------------------- --------------------------
  GenBank NR \[[@B35]\]   14,102 (81.9%)
  Swissprot \[[@B36]\]    9131 (53.0%)
  TAIR9 \[[@B37]\]        13,219 (76.8%)
  ITAG2 tomato            14,711 (85.4%)
  InterPro \[[@B34]\]     13,504 (78.4%)

Topology analysis of *Nicotiana* genes
--------------------------------------

The combined *Nicotiana* assembly was used to construct a set of phylogenetic trees for each cluster of sequences. Phylogenetic trees were constructed for 14,344 *Nicotiana* clusters that also contained at least one possible *Solanum lycopersicum* homolog as an out-group. Bootstrapping analysis and filtering of these clusters (see Materials and Methods) identified 968 as containing either a single *N. tabacum* sequence, or two *N. tabacum* sequences along with the *N. tomentosiformis*, *N. sylvesteris* and *S. lycopersicum* sequence members. Neighbor Joining (NJ) and Maximum Likelihood (ML) methods were used to build phylogenetic trees for each of the 968 clusters (Additional files [1](#S1){ref-type="supplementary-material"} and [2](#S2){ref-type="supplementary-material"}). The topologies of these trees were grouped into 11 categories.
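As a toy illustration of the topology assignment (the pipeline classifies bootstrapped NJ/ML trees; here each *N. tabacum* sequence is simply attached to whichever parental sequence is closest by pairwise distance, which captures the AB_AC, AB_C and AC_B cases but not the anomalous BC_A trees):

```python
# Toy topology classifier, not the pipeline's tree-based method.
# A = N. tabacum, B = N. sylvesteris ("Nsyl"), C = N. tomentosiformis ("Ntom").

def classify(dists: dict[tuple[str, str], float], tab_seqs: list[str]) -> str:
    """Assign a cluster to AB_AC / AB_C / AC_B from pairwise distances."""
    def nearest_parent(t: str) -> str:
        return min(("Nsyl", "Ntom"), key=lambda p: dists[(t, p)])
    parents = {nearest_parent(t) for t in tab_seqs}
    if parents == {"Nsyl", "Ntom"}:
        return "AB_AC"  # both homeologs retained and expressed
    if parents == {"Nsyl"}:
        return "AB_C"   # only the S-genome copy observed
    return "AC_B"       # only the T-genome copy observed

# One N. tabacum sequence close to N. sylvesteris, the other to N. tomentosiformis:
d = {("tab1", "Nsyl"): 0.02, ("tab1", "Ntom"): 0.09,
     ("tab2", "Nsyl"): 0.08, ("tab2", "Ntom"): 0.01}
print(classify(d, ["tab1", "tab2"]))  # AB_AC
print(classify(d, ["tab1"]))          # AB_C
```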
The distribution of the results in these 11 categories was similar between the NJ and ML methods (Figure [2](#F2){ref-type="fig"}). Approximately 10% of the clusters contained two *N. tabacum* sequences, each of which could be associated with the respective *N. sylvesteris* and *N. tomentosiformis* sequences. This topology would be expected if both the T and S homeologs had been maintained and expressed by the plant following polyploidization (AB_AC; Figure [2](#F2){ref-type="fig"}). The majority (approximately 90%) of clusters, however, contained only a single *N. tabacum* sequence, the majority of which could be associated with either the *N. sylvesteris* (AB_C) or *N. tomentosiformis* (AC_B) sequence (Figure [2](#F2){ref-type="fig"}). Given that these clusters contained genes where SNPs existed between the parental homeologs, reducing the likelihood of collapse of the sequences during assembly, the abundance of this latter topology is most likely explained by either gene subfunctionalization (when the absent gene is not expressed in the tissue analyzed), or gene loss/nonfunctionalization.

![**Phylogenomic analysis of *Nicotiana* gene clusters.** Bar chart showing the number of *Nicotiana* genes that were present in a set of pre-defined phylogenetic tree topologies. Genes from the *N. tabacum*, *N. sylvesteris* and *N. tomentosiformis* assemblies were clustered and phylogenetic trees for each cluster were generated by Maximum Likelihood (ML; black bars) and Neighbour Joining (NJ; open bars) methods using *S. lycopersicum* as an out-group. The different tree topologies are shown along the x-axis with *N. tabacum* (A; Ntab), *N. sylvesteris* (B; Nsyl), *N. tomentosiformis* (C; Ntom) and *S. lycopersicum* (Slyc) genes represented in text and/or dendrogram form.](1471-2164-13-406-2){#F2}

Gene Ontology (GO) analysis of the most abundant topologies from the *Nicotiana* data (AB_AC, AB_C, AC_B and BC_A) was performed \[[@B38]\].
Figure [3](#F3){ref-type="fig"} shows the representation of GO Biological process terms (levels 2 and 3) for these topologies. No significant differences between AB_AC, AB_C and AC_B were observed relative to the biological process categories. The same was true for the cellular component and molecular function categories when the AB_C and AC_B topologies were compared against the global list of the combined *Nicotiana* consensus sequences. Three percent of the trees showed an unexpected topology with the *N. sylvesteris* and *N. tomentosiformis* sequences being more closely related to each other than to the *N. tabacum* sequences (BC_A; Figure [2](#F2){ref-type="fig"}). While false clustering of an *N. tabacum* paralog with the *N. sylvesteris* and *N. tomentosiformis* sequences is the most likely explanation for the anomaly, GO analysis showed significant overrepresentation (*P* \< 0.05) in some categories, such as pathogenesis signaling (see Additional file [1](#S1){ref-type="supplementary-material"}), for these clusters, suggesting that they may contain genes with an interesting evolutionary history. However, it should be noted that this set of clusters contained fewer than 50 members.

![**Gene Ontology analysis of *Nicotiana* gene clusters.** Plot showing the percentage of *Nicotiana* gene clusters annotated with level 2 (**A**) and level 3 (**B**) Biological Process Gene Ontology terms for all gene clusters and each of the main phylogenetic tree topologies (AB_AC, AB_C, AC_B and BC_A). Bars are coloured according to topology group (see inset key for identification).](1471-2164-13-406-3){#F3}

Identification and expression analysis of *Nicotiana tabacum* homeologs
-----------------------------------------------------------------------

Estimations of gene expression levels were calculated based on the number of sequence reads and used to compare gene expression levels for the three different *Nicotiana* species. For the 9718 *N.
tabacum* transcript clusters (67% of total number of clusters) where there were no reliable inter-specific SNPs that could be used to identify the ancestral origin, only 2171 transcript clusters contained five or more reads for each of the three *Nicotiana* species. The same expression levels (R \< 7, see Materials and Methods) across all three species were observed for 83.6% of these genes. Among the remaining differentially expressed genes, the most frequent category was *N. tabacum* genes (with no distinction between homeologs) overexpressed in comparison with *N. sylvesteris* and *N. tomentosiformis* (4.7%), followed by *N. tabacum* genes with similar expression to the *N. sylvesteris* homolog (4.5%) and *N. tomentosiformis* homolog (4.2%). Only 0.7% of the *N. tabacum* genes were expressed at a lower level when compared with both parental sequences. The rest of the transcripts (2.3%) showed variable trends relative to differential expression between all the transcripts (for example, over-expressed compared to one of the parents and the contrary when compared to the other parent). 741 of the 968 gene clusters described above contained 975 *N. tabacum* consensus sequences (5.3% of the total *N. tabacum* sequences) that could be assigned ancestral origin based on the phylogenetic trees. 482 of these sequences were assigned to *N. sylvesteris* (S) origin and 493 sequences were assigned to *N. tomentosiformis* (T) origin. A total of 103 gene clusters (10.6% of the topology analyzed clusters, 0.6% of the total) showed differential expression (R \>= 7): 51 gene clusters (0.3% of the total) for *N. tabacum* S-homeologs (i.e., AB_C), of which 22 clusters (0.1% of all clusters) were overexpressed in comparison with *N. sylvesteris* homeologs. Similar results were observed for *N. tabacum* T-homeologs (from topology AC_B); 52 clusters (0.3%) showed differential expression (R \>= 7) and of them 8 *N. tabacum* clusters (0.05%) were overexpressed in comparison with *N.
tomentosiformis*. Of the 274 AB_AC gene clusters containing two *N. tabacum* sequences, 77 clusters were selected where the consensus sequences were built with at least 5 reads. Figure [4](#F4){ref-type="fig"} shows scatter plots comparing expression levels of the homeologous and homologous gene pairs for these gene clusters.

![**Expression level of *Nicotiana* gene pairs.** Scatter plot showing the expression level (RPKM) for the *N. sylvesteris* / S genome gene (x-axis) versus *N. tomentosiformis* / T genome gene (y-axis) for homologous gene pairs (open red circles) and homeologous *N. tabacum* gene pairs (closed black circles). Solid black line across diagonal represents no difference in gene expression level between species.](1471-2164-13-406-4){#F4}

Differential expression (R \>= 7, see Materials and Methods) was observed for 27.3% of the *N. tabacum* homeologous gene pairs (21 of the 77 gene clusters, 2.2% of the topology analyzed clusters, 0.1% of the total transcribed genes). In comparison to the parental homeologs, 3.9% of the T-genes (3 clusters) and 11.7% of the S-genes (9 clusters) were over-expressed. Only 3.9% of the genes demonstrated differential expression when comparing T-genes with S-genes in *N. tabacum* (3 clusters, in one of which S-gene expression was higher than the T-gene). In comparison, 22.7% of the homologous genes in this set were differentially expressed between the *N. tomentosiformis* and *N. sylvesteris* samples. A more consistent level of gene expression between *N. tabacum* homeologs was also indicated by the Pearson correlation coefficient, which was higher between these genes than between the *N. sylvesteris* and *N. tomentosiformis* homologs (*R*^2^ values of 0.93 and 0.82, respectively). The increased level of differential expression between the homologous gene pairs may simply reflect that the comparison was conducted using independent samples and may be due to experimental/biological variation.
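A read-count-based expression comparison along these lines might be sketched as follows. RPKM (reads per kilobase of transcript per million mapped reads) is a standard measure; the paper's R statistic is defined in its Materials and Methods, so the simple larger-over-smaller RPKM ratio used here is only an assumption for illustration.

```python
# Hedged sketch: RPKM estimation plus a fold-change-style differential
# expression call. The R >= 7 cutoff mirrors the paper's threshold, but
# the ratio itself is an assumed stand-in for their R statistic.

def rpkm(reads: int, length_bp: int, total_mapped: int) -> float:
    """Reads per kilobase of transcript per million mapped reads."""
    return reads / (length_bp / 1_000) / (total_mapped / 1_000_000)

def differential(rpkm_a: float, rpkm_b: float, r_cutoff: float = 7.0) -> bool:
    """Call a gene pair differentially expressed if the ratio >= cutoff."""
    hi, lo = max(rpkm_a, rpkm_b), min(rpkm_a, rpkm_b)
    return lo == 0 or hi / lo >= r_cutoff

# A 1.5 kb transcript: 300 reads in one library vs 30 in another
# (1 million mapped reads in each library).
a = rpkm(300, 1500, 1_000_000)   # 200.0
b = rpkm(30, 1500, 1_000_000)    # 20.0
print(differential(a, b))        # True (10-fold ratio >= 7)
```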
The homeolog comparison was conducted between genes from the same *N. tabacum* sample. However, the data clearly suggest that in the vast majority of cases when both *N. tabacum* homeologs are expressed in the same tissue (such as the leaf tissue analyzed here) there is little difference in expression on a transcriptional level. Given the small number of homeologous gene pairs showing differential expression, the function of these genes was analyzed. Genes with higher expression of the S homeolog showed over-representation of GO terms associated with the biological processes for proteolysis, protein folding and aldehyde metabolism. Genes with higher expression of the T homeolog showed over-representation of GO terms associated with the biological processes for oligopeptide transport and translation.

Non-synonymous and synonymous site substitution rates between *N. tabacum*, *N. sylvesteris* and *N. tomentosiformis* species
-----------------------------------------------------------------------------------------------------------------------------

Comparison of expression in only a single tissue/organ is too limited to differentiate between redundancy and subfunctionalization. A more extensive expression study might provide the ability to distinguish between these two evolutionary processes. Cases of neofunctionalization, however, could be distinguished by a comparison of the gene sequences in the current data set. Changes to a gene sequence resulting in an altered protein sequence potentially alter the function of that gene. A comparison of the rate of synonymous (Ks) and non-synonymous (Kn) nucleotide substitutions provides insights into the evolutionary history of a gene \[[@B39],[@B40]\]. Genes showing a low rate of non-synonymous substitutions are likely to have undergone strong selective pressure to be conserved more faithfully and thus have their function maintained.
Genes showing a relatively high level of non-synonymous substitutions are likely to have undergone positive selection and possible neofunctionalization \[[@B41]\]. To estimate the rate of synonymous (Ks) and non-synonymous (Kn) nucleotide substitutions, an analysis was carried out using clusters of genes selected when topology analysis suggested that either one or both *N. tabacum* homeologs were maintained and could be assigned to T or S origin (mainly topologies AB_AC, AB_C, AC_B and BC_A, Figure [2](#F2){ref-type="fig"}; 787 clusters, 3251 sequences). The ratio between Kn and Ks (ω) for each pair of sequences was also calculated. Figure [5](#F5){ref-type="fig"} shows the distribution of Ks for sequence pair comparisons between the *Nicotiana* species. For reference, a comparison between *N. tabacum* and *S. lycopersicum* genes is also shown (Figure [5](#F5){ref-type="fig"}A). This older divergence event showed a higher rate of Ks relative to the comparisons between the *Nicotiana* species. The peak of the Ks distribution was approximately 0.27, compared to approximately 0.09 for the *N. sylvesteris* to *N. tomentosiformis* comparison. Ks values were even lower for the *N. tabacum* to *N. sylvesteris* and *N. tomentosiformis* comparisons (Figure [5](#F5){ref-type="fig"}). The high number of sequence pairs with a Ks value \< 0.01 in the comparison between *N. tabacum* and *N. sylvesteris* (397), or *N. tomentosiformis* (390), as compared to the number between *N. tabacum* and *S. lycopersicum* (39) or between *N. sylvesteris* and *N. tomentosiformis* (96), suggests that a high percentage of the *N. tabacum* genes have not diverged from their ancestral sequences (Figure [5](#F5){ref-type="fig"}).

![**Nucleotide substitution rates in *Nicotiana* genes.** Frequency histograms showing the rate of synonymous nucleotide substitutions (Ks) in orthologous genes between *N. tabacum* and *S. lycopersicum* (**A**), *N. sylvesteris* and *N. tomentosiformis* (**B**), *N. tabacum* and *N.
sylvesteris* (**C**) and *N. tabacum* and *N. tomentosiformis* (**D**).](1471-2164-13-406-5){#F5}

Analysis of the Kn/Ks ratio demonstrated that the majority of genes had an ω value lower than 1, suggesting that few genes had undergone positive selection during the evolution of *N. tabacum* (given the limitations of Kn/Ks for positive selection studies \[[@B42]\]). Only 3% of the clusters showed positive selection associated with the *N. sylvesteris* and *N. tomentosiformis* homeologs and the corresponding *N. tabacum* homeologs (S- and T-genome), but no positive selection between the *N. tabacum* and the parental homeolog pairs (*N. tabacum* S-genome*/N. sylvesteris* or *N. tabacum* T-genome*/N. tomentosiformis*). The GO annotations associated with the small number of genes that were undergoing positive selection showed a similar distribution across the three species and corresponded to the more representative GO term categories such as metabolic process, cellular process, cell, catalytic activity and binding (Figure [6](#F6){ref-type="fig"}). Genes with an ω \> 1 for the *N. sylvesteris* and *N. tabacum* (T-genome) homolog pairs (22 clusters) showed overrepresentation of the level 2 biological process ontologies: biological regulation, cellular component organization and regulation of a biological process. Examples of these genes include cluster 00509 (similar to the *Arabidopsis thaliana* APL transcription factor involved in biological regulation) and 02926 (NAC domain protein involved in biological regulation). The 23 genes with an ω \> 1 for the *N. tomentosiformis* and *N. tabacum* (S-genome) homolog pairs showed over-representation of the GO terms cellular process, developmental process and metabolic process. This included clusters 04302 (similar to the *Arabidopsis thaliana* E3 ubiquitin-protein ligase SINAT2 involved in some developmental process), 04210 (Tyrosyl-tRNA synthetase) and 02320 (similar to cell division protein ftsy).
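The Kn/Ks idea can be sketched crudely (counting only, with no site normalization or multiple-hit correction; real analyses use methods such as Nei-Gojobori counting or codeml): each single-nucleotide difference between aligned in-frame codons is classified as synonymous or non-synonymous via the standard genetic code, and their ratio serves as a rough ω proxy.

```python
# Crude, illustrative pN/pS-style sketch, NOT a proper Kn/Ks estimator.
# Standard genetic code, rows/columns in TCAG order.
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def classify_diffs(c1: str, c2: str) -> tuple[int, int]:
    """Count (synonymous, non-synonymous) single-site differences between codons."""
    syn = nonsyn = 0
    for pos in range(3):
        if c1[pos] != c2[pos]:
            mutated = c1[:pos] + c2[pos] + c1[pos + 1:]
            if CODON_TABLE[mutated] == CODON_TABLE[c1]:
                syn += 1
            else:
                nonsyn += 1
    return syn, nonsyn

def omega_proxy(seq1: str, seq2: str) -> float:
    """Rough omega proxy over an aligned in-frame sequence pair."""
    syn = nonsyn = 0
    for i in range(0, min(len(seq1), len(seq2)) - 2, 3):
        s, n = classify_diffs(seq1[i:i + 3], seq2[i:i + 3])
        syn += s
        nonsyn += n
    return nonsyn / syn if syn else float("inf")

# GAA->GAG (Glu->Glu, synonymous) and GAT->GCT (Asp->Ala, non-synonymous):
print(omega_proxy("GAAGAT", "GAGGCT"))  # 1.0
```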
![**Evolutionary rates in *Nicotiana* genes from different Gene Ontology groups.** Non-synonymous:synonymous nucleotide substitution ratio (ω) values for *Nicotiana* genes separated according to their Level 2 (**A**) and Level 3 (**B**) Biological Process, Level 2 (**C**) and Level 4 (**D**) Cellular Component and Level 2 (**E**) and Level 3 (**F**) Molecular Function Gene Ontology annotations. Omega values for comparisons between homologous *N. sylvesteris* and *N. tomentosiformis* gene pairs (black circles), *N. tabacum* and *N. sylvesteris* gene pairs (green circles) and *N. tabacum* and *N. tomentosiformis* gene pairs (red circles) are shown.](1471-2164-13-406-6){#F6}

Within the set of genes analyzed, there were no instances of homeologous genes from *N. tabacum* demonstrating positive selection (ω \> 1). A comparison between the respective *N. tomentosiformis* and *N. sylvesteris* homeologs also did not show any instances of positive selection. This absence of positive selection suggests that the majority of positive selection represented in the gene set analyzed occurred following the divergence of the two ancestral species rather than since the formation of *N. tabacum*. It also suggests that the rate of neofunctionalization in *N. tabacum* has been relatively low.

Discussion
==========

Polyploid species sequence assembly
-----------------------------------

Using a next generation sequencing approach, leaf transcriptome sequence data was generated for the allotetraploid *N. tabacum* and its progenitor species *N. sylvesteris* and *N. tomentosiformis*. These sequences were assembled into species-specific sets of unigenes and then further combined into a consensus set of clusters for the three species. The process of assembly revealed that default parameters of sequence assemblers were probably not stringent enough when working with sequences originating from polyploid species.
Sequencing errors, such as homopolymer length issues associated with pyrosequencing, can further confound this problem by potentially masking low polymorphism content between homeologs. Other sequencing technologies, such as Illumina, may not be impacted by this homopolymer problem, but read length may be a limiting factor given the requirement that a single read must contain at least one polymorphism per overlapping region. These factors should be taken into consideration for any future assembly attempts on polyploid species, and the methodology applied for the assembly of an allopolyploid transcriptome in this study could be useful for guiding future genome assembly work in polyploids. Additionally, the number of collapsed homeologs was estimated in *N. tabacum* assembled transcripts (using the 97% identity assembly) based on SNPs shared with *N. sylvesteris* or *N. tomentosiformis* reads. In this analysis, only 3.4% of *N. tabacum* transcripts were polymorphic and shared SNPs with the parental transcripts. This methodology cannot be applied to transcripts lacking SNPs in the transcript fragment analyzed (67% of the transcripts). More information could be obtained by deeper transcriptome sequencing (more mapped sequences and more reliable SNP calling), use of longer sequences (increasing the chance of finding a SNP relative to a parent) or genomic DNA sequencing (where introns, being more divergent regions, could increase the number of SNPs relative to the parental sequences). Homeologous gene fate in *Nicotiana tabacum* -------------------------------------------- Based on the leaf transcriptome data for the *Nicotiana* species generated in this study, a pipeline was developed to carry out a phylogenetic analysis on a genomic scale. The PhygOmicss pipeline works on a single transcriptome set, but can be applied to transcriptomic data from multiple tissues/organs, or gene models from genomic sequence data. The majority of the *N.
tabacum* transcripts (69%) did not show any polymorphisms with the parental sequences, making it impossible to distinguish the homeologous genes and excluding the possibility of neofunctionalization in these genes. Additionally, the expression analysis of clusters with genes expressed above background level (more than 5 reads) revealed that the expression of a majority of these genes was not changed (83.6% of genes in clusters; 57.7% of the total transcribed genes) between these three species. With this level of conserved expression, the possibility of subfunctionalization is low. A more specific topology analysis with the newly developed PhygOmicss pipeline revealed that in *N. tabacum* transcripts where homeologous genes can be differentiated, there was evidence for the presence of only a single homeolog (90% of gene clusters, 6% of the total transcribed genes). Given that the data are transcriptomic, it is not possible to distinguish between gene loss and subfunctionalization. Tissue-specific gene silencing \[[@B14]\] provides one possible mechanism of gene subfunctionalization and may partially explain the pattern observed in the *Nicotiana* topologies. An analysis of a broader set of tissues might resolve the question and increase the chance of detecting expression differences in individual genes. However, studies in other polyploid plants suggest that only a small number of genes display tissue-specific gene silencing. For example, a similarly low level of gene silencing (around 1--5%) was estimated in both synthetic allotetraploid wheat \[[@B8]\] and synthetic cotton \[[@B10]\], and results from gene expression analysis of *Tragopogon miscellus* showed a similar trend (3.4%) \[[@B43]\]. Even lower estimates of silencing were suggested from experiments with an early allotetraploid formed by the hybridization of *Arabidopsis thaliana* and *Cardaminopsis arenosa* (\< 0.4%) \[[@B7]\].
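The topology classification above rests on comparing gene trees at the species level. A minimal sketch of the idea, replacing leaf contig ids with species labels and canonicalizing child order, follows; it is a simplification of the pipeline's Bio::Tree::Topology module described in the Methods (trees are nested tuples rather than Newick, branch-length equivalence is ignored, and all contig ids are hypothetical):

```python
def species_topology(tree, species_of):
    """Recursively replace leaf contig ids with species labels and sort
    children (keyed by their string form so subtrees and labels compare
    stably); two trees are topology-equivalent iff the results are equal."""
    if isinstance(tree, tuple):
        return tuple(sorted((species_topology(c, species_of) for c in tree), key=str))
    return species_of[tree]

species_of = {"tab_12": "Ntab", "syl_07": "Nsyl", "tom_03": "Ntom", "lyc_99": "Slyc"}
# Hypothetical gene trees rooted on the S. lycopersicum out-group:
t1 = ((("tab_12", "syl_07"), "tom_03"), "lyc_99")
t2 = ((("syl_07", "tab_12"), "tom_03"), "lyc_99")  # same topology, leaves swapped
t3 = ((("tab_12", "tom_03"), "syl_07"), "lyc_99")  # different topology
```

Under this canonical form, `t1` and `t2` compare equal while `t3` does not, which is exactly the equivalence the pipeline needs to bin clusters by topology type.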
Based on the distribution of topologies and the relative expression level of homeologous genes, there was little evidence to suggest preferential loss or transcriptional silencing of genes from either progenitor genome in the subset of *Nicotiana* sequences for which this analysis could be completed. This is in contrast to the apparent preferential loss of repetitive sequences from the T genome in *N. tabacum*, as shown in a recent study also using 454 sequencing in these *Nicotiana* species \[[@B27]\]. Previous studies in other allotetraploids have shown preferential expression of homeologous genes. For example, there is evidence of preferential expression of the D genome in cotton \[[@B44]\]. Differential expression was shown for 22% of homeologous gene pairs in the 40-generation-old allotetraploid *T. miscellus* \[[@B13]\], similar to the 27% of *N. tabacum* genes observed in this study. It should also be noted that genes expressed in the leaf tissue at a very low level may have been missed in the transcriptome sets, particularly since clusters with fewer than 5 sequence members were removed from the analysis. As such, increasing the sequence depth might reveal more differentially expressed homeologous genes, but it is unlikely that this will increase the contribution of subfunctionalization extensively. With the caveat that this study was based on a subset of genes identified in the leaf transcriptomes of *Nicotiana* species, the data would suggest the expression of homeologous genes is mostly conserved between *N. tabacum* and its parental relatives, supporting the hypothesis of gene dosage compensation \[[@B15],[@B45]\] reported previously in other species \[[@B46]\]. This level may be over-estimated as the transcriptome was sampled in only one tissue type, thus reducing the possibility of observing subfunctionalization.
However, based on the levels observed in other species \[[@B7],[@B8],[@B10],[@B43]\], subfunctionalization is unlikely to account for a large proportion of genes. There is also limited evidence of neofunctionalization having occurred in *N. tabacum*, based on comparison of the homeologous and homologous gene sequences. Indeed, no genes could be identified as undergoing positive selection in *N. tabacum* that did not also show the same response between *N. sylvesteris* and *N. tomentosiformis*. This suggests that these differences may have predated the formation of tobacco. Again, the apparent low level of neofunctionalization may be explained by only having sampled the leaf transcriptome. Sequencing transcripts from other tissues, perhaps more specifically involved in secondary metabolite synthesis, may increase the likelihood of identifying genes showing positive selection in tobacco; two such examples are trichomes \[[@B47]\] and roots, where alkaloids, including nicotine, are synthesized \[[@B48]\]. In addition to an increased spatial and temporal coverage of the transcriptome for the *Nicotiana* species covered in this study, it would be interesting to compare the proportion of subfunctionalization and neofunctionalization in tobacco with an older *Nicotiana* allotetraploid species, such as *Nicotiana nesophila* (dated approx. 4.5 Myr old), or *Nicotiana benthamiana* (dated \> 10 Myr old) \[[@B20]\]. Similarly, a comparative analysis of allele selection between wild and cultivated *N. tabacum* varieties might provide insight into the role of homologous genes in the species' domestication process. Gene duplication plays an important role in the successful transition of a wild species into its cultivated relatives, as shown for several wheat loci \[[@B49]\].
Indeed, there are also examples of duplicated genes from diploid species playing an important role in domestication, including *GRAIN INCOMPLETE FILLING 1* (*GIF1*) and the cell wall invertase *OsCIN1* in rice \[[@B50]\]. Conclusions =========== This study represents the first time that a phylogenetic analysis of tobacco genes has been carried out on a genomic scale to further elucidate the complex evolutionary history of the species. Transcriptome assembly for polyploid species possesses the intrinsic difficulty of homeolog collapse. Of the *N. tabacum* assembled transcripts, 67% lack any polymorphism that can be used to elucidate the sequence origin. Read depth, read length and use of more variable regions such as introns will be critical to dissecting these genes. There was evidence of a general maintenance of the expression levels between *N. tabacum*, *N. sylvesteris* and *N. tomentosiformis* homeologs. Despite the conservation of transcriptomic levels in tobacco, there was little evidence for the occurrence of neofunctionalization, suggesting that, at 0.2 Myr old, tobacco may be evolutionarily too young and that neofunctionalization is a more common fate for duplicated genes in older polyploid species. There may, however, be particular interest in comparing cultivated with more primitive varieties using the method developed here in order to identify the genes selected during the domestication of tobacco. The low level of neofunctionalization may make such an analysis easier. Methods ======= Plant material -------------- *Nicotiana tabacum* (cv. K326), *N. sylvesteris* and *N. tomentosiformis* plants (in-house accessions available at ATC) were grown in a glasshouse on soil (Levingtons M2) under 16 h:8 h light:dark cycles. Fully expanded green leaves were harvested from 3--4-month-old plants, snap frozen in liquid nitrogen and stored at −80°C.
Transcriptome sequencing ------------------------ Leaf samples were ground under liquid nitrogen and total RNA was extracted using Trizol (Invitrogen, Paisley, UK) and purified with RNeasy spin columns (Qiagen, Crawley, UK) according to the manufacturer's instructions. mRNA was isolated with a Dynabead kit and quantified using a RiboGreen assay (Invitrogen, Paisley, UK). Sequencing libraries were generated from 200 ng mRNA using the cDNA Rapid Library preparation method and sequenced on the GS FLX Ti according to the manufacturer's instructions (Roche, Burgess Hill, UK). Sequence assembly and annotation -------------------------------- Test assemblies of the *N. sylvesteris* ESTs were carried out with default parameters using the assemblers GsAssembler (version 2.5.3; Roche), MIRA (version 3.0.5) and CAP3 (version 10/15/07). Sequence assembly was performed on an SGN server (Red Hat Enterprise Linux Server release 5.4, CPUs: 48 cores, RAM: 256 Gb). GsAssembler version 2.5.3 was used with the cDNA option enabled. Seven different assemblies were created with identity values of 75, 80, 85, 90, 95, 97 and 99 and a minimum overlap length of 40 bp for each of the samples, plus an extra sample with half of a 454 run of *N. sylvesteris* and half of a run of *N. tomentosiformis*. Contigs were selected using a Perl script with a length cutoff value of 40 bp. The reassembly of the contigs for each sample was performed with the same software using an identity percentage value of 95. Contigs with a length of 2000 bp or longer were reassembled with CAP3 \[[@B51]\]. The collapse of homeologous sequences was evaluated by remapping all the *Nicotiana* reads with Bowtie \[[@B52]\] using the *N. tabacum* transcriptome assembly with 97% minimum identity as the reference sequence. The mapping file in SAM format was filtered and SNPs were called using Samtools and Bcftools \[[@B53]\]. SNP files were loaded into a generic Postgres database to perform a simple full outer join search.
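The full outer join used to find SNPs shared between *N. tabacum* transcripts and the parental read sets can be sketched in memory rather than in Postgres; the contig ids, positions and alleles below are hypothetical, purely for illustration:

```python
def join_snps(tabacum, parent):
    """Full outer join of SNP calls keyed by (contig, position).

    Returns (shared, tabacum_only, parent_only). A tabacum transcript SNP
    that is also present in a parental read set is evidence that two
    homeologs were collapsed into a single contig."""
    shared, tab_only, par_only = {}, {}, {}
    for key in set(tabacum) | set(parent):
        if key in tabacum and key in parent:
            shared[key] = (tabacum[key], parent[key])
        elif key in tabacum:
            tab_only[key] = tabacum[key]
        else:
            par_only[key] = parent[key]
    return shared, tab_only, par_only

# Hypothetical SNP calls: {(contig, position): alternate_allele}
tab = {("c1", 120): "A", ("c1", 300): "G"}
syl = {("c1", 120): "A", ("c2", 55): "T"}
shared, tab_only, par_only = join_snps(tab, syl)
```

The `shared` bucket corresponds to the 3.4% of polymorphic *N. tabacum* transcripts that shared SNPs with a parent.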
Sequences were annotated using the basic local alignment search tool (blastx \[[@B35]\]) with the databases GenBank nr \[[@B33]\], Swissprot \[[@B35]\] and TAIR9 \[[@B36]\] and an e-value cutoff of 1e-20. Proteins were predicted using EstScan \[[@B54]\] and domain annotation was performed using InterProScan \[[@B34]\]. Gene annotations were analyzed with the Bioconductor module goProfile \[[@B38]\]. Tree topology analysis ---------------------- Tree topology analyses were performed with the PhygOmicss pipeline (manuscript in preparation). Sequence alignments were extracted from the assembly ace file with a Perl script integrated into the PhygOmicss pipeline. *Solanum lycopersicum* sequence homologs were assigned based on the BLAST \[[@B37]\] results of the consensus sequence of the *Nicotiana* alignments with the Tomato Gene Model ITAG2 dataset. Only matches with alignment lengths of more than 100 bp and a nucleotide identity percentage of at least 70% were selected as homologs. They were aligned with the *Nicotiana* sequences using the ClustalW \[[@B55]\], Mafft \[[@B56]\] and Muscle \[[@B57]\] programs as an integrated part of the PhygOmicss pipeline. Mis-alignments were quantified using the global alignment length and the identity percentage average of each alignment, discarding the alignments with lengths shorter than 100 bp and identity percentages lower than 75%. ClustalW, which produced the fewest mis-alignments (data not shown), was selected as the preferred alignment program and run with non-default parameters (see Additional file [2](#S2){ref-type="supplementary-material"} for the configuration parameters for the pipeline). A pruning Perl script that is part of the pipeline was used to select closely related sequences for each alignment, based on a maximum alignment score (manuscript in preparation). Alignments that did not include members of one of the selected species (*N. tabacum*, *N. sylvesteris*, *N. tomentosiformis* and *S. lycopersicum*) were discarded.
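The alignment quality filter described above (discard anything shorter than 100 bp or below 75% average identity) is simple enough to sketch; the cluster ids and values are hypothetical:

```python
def keep_alignment(length_bp, identity_pct, min_len=100, min_ident=75.0):
    """Pre-tree filter: keep an alignment only if it is at least 100 bp
    long and its average identity is at least 75%, per the text."""
    return length_bp >= min_len and identity_pct >= min_ident

# Hypothetical (cluster id, alignment length, average identity) triples:
alignments = [("cl001", 250, 92.0), ("cl002", 80, 98.0), ("cl003", 400, 60.0)]
kept = [cid for cid, length, ident in alignments if keep_alignment(length, ident)]
```

Here only `cl001` survives: `cl002` fails on length and `cl003` on identity, which is why both criteria must hold simultaneously.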
Phylogenetic trees using the *S. lycopersicum* sequence as out-group were constructed with Phylip \[[@B58]\] following two methods: Neighbor-joining \[[@B59]\] and Maximum Likelihood \[[@B60]\]. Bootstrapping with 1000 replicates was performed for each tree. Trees containing nodes with low bootstrap support (under 60%) were discarded. Based on these parameters, 968 isotigs (5.6% of the total isotigs produced in the assembly) produced trees that could be analyzed. Topology comparisons were performed using the Perl module Bio::Tree::Topology as an element of the pipeline (see Additional file [3](#S3){ref-type="supplementary-material"} for the PhygOmicss configuration file). This module tests whether the tree obtained is the same after replacing the tree leaves (contig ids) with their sample source (species) and the branch lengths with an equivalence value (0.01 as the length cutoff). Homeolog identification ----------------------- Homeolog identification was performed with the PhygOmicss pipeline by comparing the species identity of neighboring leaves in the tree; homeolog assignment was based on the closest parental homolog in the tree structure. Cutoff values of 90% identity and 60 bp alignment length between the candidate homeologs and the reference sequence were used. Expression analysis ------------------- Expression analysis was performed by parsing the assembly .ace files using a Perl script. Read counts and RPKM values \[[@B61]\] were calculated with Perl scripts available at the Solgenomics GitHub page (<https://github.com/solgenomics/sgn-home/tree/master/aure/scripts/phylo/PhygOmicss>). The R statistical differential expression value was calculated as described by Stekel \[[@B62]\]. Non synonymous-synonymous analysis of *N. tabacum* homeologs ------------------------------------------------------------ Non-synonymous to synonymous analysis was performed using Codeml from the PAML software package \[[@B63]\] through the PhygOmicss pipeline.
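The downstream handling of Codeml's ω output can be sketched as follows: pairs reported with the sentinel ω of 99 (Codeml's value for identical sequences, where Kn/Ks is 0/0, as described in the Methods) are discarded before classifying clusters as under positive or purifying selection. The cluster ids are borrowed from the Results for flavor, but the ω values are invented:

```python
def classify_selection(omegas, sentinel=99.0):
    """Partition gene clusters by their Kn/Ks (omega) value, dropping
    pairs at the sentinel omega rather than counting them as positively
    selected."""
    kept = {cid: w for cid, w in omegas.items() if w != sentinel}
    positive = {cid: w for cid, w in kept.items() if w > 1.0}
    neutral_or_purifying = {cid: w for cid, w in kept.items() if w <= 1.0}
    return positive, neutral_or_purifying

# Illustrative values only (omegas invented, "ident" stands for a 100%-identical pair):
positive, rest = classify_selection({"00509": 1.8, "02926": 1.2, "04210": 0.3, "ident": 99.0})
```

Without the sentinel filter, identical pairs would be misread as the strongest cases of positive selection, which is exactly the artifact the filtering avoids.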
CDS sequences used with Codeml were predicted with GsAssembler (longest 6-frame method). The results were parsed using a Perl script. Clusters containing pairs with omega values of 99 were removed from the analysis. Close inspection of the majority of these revealed sequence pairs with 100% identity, where the Kn/Ks ratio was 0/0. Abbreviations ============= Myr: Million years; RPKM: Reads per kilobase per million of reads; NGS: Next generation sequencing. Competing interests =================== The authors have no competing interests. Authors\' contributions ======================= AB, KDE and LAM conceived of the study. All authors were involved in the writing and editing of the paper. Sequencing was carried out in the laboratory of KDE. PhygOmics pipeline development and bioinformatics analysis was carried out by AB. All authors read and approved the final manuscript. Supplementary Material ====================== ###### Additional file 1 Functional annotation table for clusters BC_A and AB_AC. ###### Additional file 2 Configuration file for PhygOmicss pipeline analysis of the Nicotiana transcriptome. ###### Additional file 3 Clusters sequence composition, topology and functional annotation using Blast hits. Acknowledgments =============== The authors would like to thank Matthew Humphry, Robert Hurst and Fraser Allen for sampling, library preparation and sequencing, Robert Lister for plant husbandry and MH and Barbara Nasto for editorial assistance. The authors would like to thank the anonymous reviewer for the constructive criticism that made the paper much stronger. Funding was provided by Advanced Technologies Cambridge Ltd., a wholly owned subsidiary of BAT.
.th DC IV 8/22/73 .sh NAME dc \*- DC-11 communications interface .sh DESCRIPTION The special files /dev/tty0, /dev/tty1, ... refer to the DC11 asynchronous communications interfaces. At the moment there are 12 of them, but the number is subject to change. .s3 When one of these files is opened, it causes the process to wait until a connection is established. In practice user's programs seldom open these files; they are opened by .it init and become a user's input and output file. The very first type\%writer file open in a process becomes the .it "control typewriter" for that process. The control type\%writer plays a special role in handling quit or interrupt signals, as discussed below. The control type\%writer is inherited by a child process during a .it fork. .s3 A terminal associated with one of these files ordinarily operates in full-duplex mode. Char\%ac\%ters may be typed at any time, even while output is occurring, and are only lost when the system's char\%ac\%ter input buffers become completely choked, which is rare, or when the user has accumulated the maximum allowed number of input characters which have not yet been read by some program. Currently this limit is 256 characters. When the input limit is reached all the saved characters are thrown away without notice. .s3 When first opened, the interface mode is 150 baud; either parity accepted; 10 bits/character (one stop bit); and newline action character. The system delays transmission after sending certain function characters. Delays for horizontal tab, newline, and form feed are calculated for the Teletype Model 37; the delay for carriage return is calculated for the GE TermiNet 300. Most of these operating states can be changed by using the system call stty(II). In particular the following hardware states are program settable independently for input and output (see DC11 manual): 134.5, 150, 300, or 1200 baud; one or two stop bits on output; and 5, 6, 7, or 8 data bits/character. 
In addition, the following software modes can be invoked: acceptance of even parity, odd parity, or both; a raw mode in which all characters may be read one at a time; a carriage return (CR) mode in which CR is mapped into newline on input and either CR or line feed (LF) cause echoing of the sequence LF-CR; mapping of upper case letters into lower case; suppression of echoing; suppression of delays after function characters; and the printing of tabs as spaces. See getty(VII) for the way that terminal speed and type are detected. .s3 Normally, type\%writer input is processed in units of lines. This means that a program attempting to read will be suspended until an entire line has been typed. Also, no matter how many char\%ac\%ters are requested in the read call, at most one line will be returned. It is not however necessary to read a whole line at once; any number of char\%ac\%ters may be requested in a read, even one, without losing information. .s3 During input, erase and kill processing is normally done. The char\%ac\%ter `#' erases the last char\%ac\%ter typed, except that it will not erase beyond the beginning of a line or an EOT. The char\%ac\%ter `@' kills the entire line up to the point where it was typed, but not beyond an EOT. Both these char\%ac\%ters operate on a keystroke basis independently of any backspacing or tabbing that may have been done. Either `@' or `#' may be entered literally by preceding it by `\\'; the erase or kill character remains, but the `\\' disappears. .s3 In upper-case mode, all upper-case letters are mapped into the corresponding lower-case letter. The upper-case letter may be generated by preceding it by `\\'. In addition, the following escape sequences are generated on output and accepted on input: .s3 .lp +14 7 for use .lp +15 7 \*g \\\*a .lp +15 7 .tr || .tc ? .br .tr ? .br \*v \\! .br .tr ?? 
.lp +15 7 ~ \\^ .lp +15 7 { \\( .lp +15 7 } \\) .s3 .i0 It is possible to use raw mode in which the program reading is awakened on each character. In raw mode, no erase or kill processing is done; and the EOT, quit and interrupt characters are not treated specially. .s3 The ASCII EOT char\%ac\%ter may be used to generate an end of file from a type\%writer. When an EOT is received, all the char\%ac\%ters waiting to be read are immediately passed to the program, without waiting for a new-line. Thus if there are no char\%ac\%ters waiting, which is to say the EOT occurred at the beginning of a line, zero char\%ac\%ters will be passed back, and this is the standard end-of-file signal. The EOT is not passed on except in raw mode. .s3 When the carrier signal from the dataset drops (usually because the user has hung up his terminal) a .it hangup signal is sent to all processes with the typewriter as control typewriter. Unless other arrangements have been made, this signal causes the processes to terminate. If the hangup signal is ignored, any read returns with an end-of-file indication. Thus programs which read a type\%writer and test for end-of-file on their input can terminate appropriately when hung up on. .s3 Two char\%ac\%ters have a special meaning when typed. The ASCII DEL char\%ac\%ter (sometimes called `rub\%out') is not passed to a program but generates an .it interrupt signal which is sent to all processes with the associated control typewriter. Normally each such process is forced to terminate, but arrangements may be made either to ignore the signal or to receive a simulated trap to an agreed-upon location. See signal (II). .s3 The ASCII char\%ac\%ter FS generates the .it quit signal. Its treatment is identical to the interrupt signal except that unless a receiving process has made other arrangements it will not only be terminated but a core image file will be generated. See signal (II). .s3 Output is prosaic compared to input.
When one or more char\%ac\%ters are written, they are actually transmitted to the terminal as soon as previously-written char\%ac\%ters have finished typing. Input characters are echoed by putting them in the output queue as they arrive. When a process produces char\%ac\%ters more rapidly than they can be typed, it will be suspended when its output queue exceeds some limit. When the queue has drained down to some threshold the program is resumed. Even-parity is always generated on output. The EOT character is not transmitted (except in raw mode) to prevent terminals which respond to it from hanging up. .sh FILES /dev/tty[01234567abcd] 113B Dataphones .sh "SEE ALSO" kl (IV), getty (VII), stty (I, II), gtty (I, II), signal (II) .sh BUGS
Q: Missing Delimiter (.inserted) error. How do I fix this? \begin{proof} We are given that there exists a quadrilateral $ABCD$ on a sphere. Let us create a line segment $AC$ that divides our quadrilateral into two triangles. Thus, $\angle{A}$ and $\angle{C}$ have been divided by our line segment $AC$. Let the portion of $\angle{A}$ and $\angle{C}$ on the same side of $AC$ as $B$ be called $\angle{A_1}$ and $\angle{C_1}$. Also, let the portion of $\angle{A}$ and $\angle{C}$ on the same side of $AC$ as $D$ be called $\angle{A_2}$ and $\angle{C_2}$. Therefore, \begin{align*} \left|ABCD|\right &=\left|\Delta{ABC}|+|\Delta{ACD}|\right\\ \ &= \left p^2(\angle{A_1}+\angle{B}+\angle{C_1}-\pi)+p^2(\angle{A_2}+\angle{C_2}+\angle{D}-\pi)\right\\ \ &= \left p^2(\angle{A_1}+\angle{B}+\angle{C_1}+\angle{A_2}+\angle{C_2}+\angle{D}-2\pi)\right. \end{align*} However, \[\angle{A_1}+\angle{A_2}=\angle{A}\] and \[\angle{C_1}+\angle{C_2}=\angle{C}\] \[\left|ABCD|\right=\left p^2(\angle{A}+\angle{B}+\angle{C}+\angle{D}-2\pi)}\right\]. To generalize further the area of any $n$-sided polygon on the sphere is as follows, \[\left{Area of an N-sided Polygon}\right=\left(sum of the angles-(n-2)\pi\right\] \end{proof} This is the code that is producing the missing delimiter error; the error says the problem is occurring somewhere in the align environment. Does anyone have suggestions? A: the problem here is how the \left and \right commands are used. each of these commands always requires either an actual delimiter -- parenthesis, bracket, etc. -- or a "placeholder", namely a period. in the first line of the alignment, the \right is placed after the |; it should be before. so at the beginning of the second and third lines (and perhaps elsewhere, i didn't check), \left is not followed by any delimiter, it's not clear what delimiter is to be "matched". perhaps the opening parenthesis after p^2?
here is the reformulated display: \begin{align*} \left|ABCD\right| &=\left|\Delta{ABC}\right|+\left|\Delta{ACD}\right|\\ &= p^2\left(\angle{A_1}+\angle{B}+\angle{C_1}-\pi\right)+p^2\left(\angle{A_2} +\angle{C_2}+\angle{D}-\pi\right)\\ &= p^2\left(\angle{A_1}+\angle{B}+\angle{C_1}+\angle{A_2} +\angle{C_2}+\angle{D}-2\pi\right). \end{align*} actually, since most of the symbols between the delimiters aren't taller than the normal text, \left and \right aren't really needed. in the paragraph following the align*, i think you're confusing \[ and \] with the actual square brackets. the "escaped" forms indicate the beginning and end of a one-line math display. no backslashes should be used if an actual bracket is intended. however, a typeset brace must be entered preceded by a backslash. and within a math environment, normal text (the "Area of ...") needs to be so indicated, by \text{...}, and within that string, any small math expressions need to be returned to math mode by $...$. here is that paragraph reformulated: However, [\angle{A_1}+\angle{A_2}=\angle{A}] and [\angle{C_1}+\angle{C_2}=\angle{C}] [\left|ABCD\right|= p^2\left(\angle{A}+\angle{B}+\angle{C} +\angle{D}-2\pi\right)]. To generalize further, the area of any $n$-sided polygon on the sphere is as follows, \[\left\{\text{Area of an $N$-sided Polygon}\right\} =\left\{\text{sum of the angles $-(n-2)\pi$}\right\}\] i may have misunderstood your intent in the paragraph following the display; if you really did intend these bracketed expressions to be displayed, then instead of coding each one as a separate display, you should use the gather* environment. you don't say what document class or theorem package you are using. however, in any event, you shouldn't ever leave a blank line before \end{proof} -- that will guarantee that the "tombstone" is always set on a line by itself. also, if you are using amsthm, you can insert \qedhere before the closing \] to place the "tombstone" on the last line of the display.
Incredibly versatile and easy to wear, this sling sandal is the most casual party shoe around. The textile, novelty and burnished full grain leather combination uppers translate to any summer ensemble and feel good with cushioned comfort and additional heel padding. The rubber outsole provides superior traction, along with a confident 4 ¼ inch platform wedge that's all about looks. Overall Product Rating (5 out of 5) 06/08/2014 By Anonymous Great shoes. Extremely comfortable and stylish; they look great with skinnies, boyfriend jeans, trousers and shorts. The footbed is padded, which adds to the comfort factor. Worn them today for the first time and they have not rubbed at all… excellent!!! Product Size Rating: Sized as Expected 2 of 2 people found this review to be helpful. Really good shoe Overall Product Rating (5 out of 5) 28/07/2014 By Anonymous I purchased these in the sale after looking for some casual wedges for some time. They are very comfy for such a high heel. I have wide feet and have had difficulty finding any shoes which are suitable. These are a really good fit! Product Size Rating: Larger than Expected Denim wedge Overall Product Rating (5 out of 5) 08/07/2014 By Anonymous Fab shoe, not too high. They are quite wide; I normally find this style a bit snug width-wise until they give, but these were a nice fit from the start. Also a nice cushioned footbed. Only moan is they went in the sale after I had ordered, and customer service said I had to return and re-order if I wanted the sale price and couldn't just be refunded the difference (which I think is stupid on Hush Puppies' part), but that is what I had to do. Product Size Rating: Larger than Expected
Cores Sling Overall Product Rating (3 out of 5) 03/10/2014 By Anonymous Bought the cream version at the beginning of summer and have worn them non-stop as they are so comfortable and go with everything I own. Decided to buy another pair as I had admired the denim look as well. You know when you buy a pair of shoes and they just fit so well that you go back and they no longer have them? That is how I feel about this style. Hence the purchase; if the price goes down again in the sale I might even buy the other colour!