Columns:
id: string (length 10)
question: string (18–294 chars)
comment: string (28–6.89k chars)
passages: sequence
presuppositions: sequence
corrections: sequence
labels: sequence
raw_presuppositions: sequence
raw_labels: sequence
raw_corrections: sequence
2018-15411
How do people leak games and movies before they release?
Making a big blockbuster movie requires the involvement of hundreds, possibly thousands, of people. Many of them have access to early cuts of the film, or to the finished film before release. Before a film is released it has to be shipped to movie theaters so they can screen it, which means physical copies of the final movie exist days or weeks before the release date. Anyone who can get at such a copy could, in theory, duplicate and leak it. There are measures in place to prevent this, but where there is a desire for money, or even just fake internet points, people are willing to try to circumvent them.
[ "BULLET::::- In 2003 a hacker exploited a security hole in Microsoft's Outlook to get the complete source of the video game \"Half-Life 2\", which was under development at the time. The complete source was soon available in various file sharing networks. This leak was rumored to be the cause of the game's delay, but later was stated not to be.\n", "BULLET::::- Several high-profile books have been leaked on the Internet before their official release date, including \"If I Did It\", \"Harry Potter and the Deathly Hallows\", and an early draft of the first twelve chapters of \"Midnight Sun\". The leak of the latter prompted the author Stephenie Meyer to suspend work on the novel.\n", "Section::::Packaging.\n", "Section::::Background.\n", "Delays continued and post production paused while an online editing system in the San Francisco area was sought, unsuccessfully. Finally, co-producer Michel Negroponte contacted the filmmaker regarding an editing station that had become available in New York City; payment to the machine's owners at Copacetic Pictures could be deferred until the movie was completed and sold.\n", "Section::::Production.\n\nSection::::Production.:Development.\n", "Shortly after the film's premiere in April 2005, the director, Shane Felux, appeared on many news and talk shows promoting the film, notably CNN's \"Anderson Cooper 360°\" and MSNBC's \"Connected: Coast to Coast with Ron Reagan and Monica Crowley\". The film was made available free for downloading online starting April 19, 2005, and in fact the volume was so heavy that \"Star Wars\" fan site TheForce.Net was forced to temporarily discontinue offering it after two days because it overloaded their bandwidth. Independent online film site iFilm also made \"Revelations\" available for download starting April 23, 2005. Within two weeks of the film being made available online, it was downloaded nearly one million times.\n", "In 2003, the company started pre-production of a movie called \"End Game\". The movie was supposed to be their first film planned to have a wide theatrical release. Eleven pages of the shooting script went up on the official website, but production came to a halt shortly after.\n", "Section::::Development.:Production and release.\n", "Section::::Release.\n", "Section::::Release.:Unauthorized duplication and distribution.\n\nOn July 25, 2014, three weeks ahead of the film's premiere, a DVD-quality illegal leak was downloaded, via file sharing sites, more than 189,000 times over a 24-hour period. After one week, it was estimated that the leak had been downloaded over 2 million times.\n", "Sometimes software developers themselves will intentionally leak their source code in an effort to prevent a software product from becoming abandonware after it has reached its end-of-life, allowing the community to continue development and support. Reasons for leaking instead of a proper release to public domain or as open-source can include scattered or lost intellectual property rights. An example is the video game \"Falcon 4.0\" which became available in 2000; another one is \"Dark Reign 2\", which was released by an anonymous former Pandemic Studios developer in 2011. Another notable example is an archive of Infocom's video games source code which appeared from an anonymous Infocom source and was archived by the Internet Archive in 2008.\n", "Teaser often create hype in media to such extent that they get leaked. \"\" and \"2.0\" prove to be such examples. 
The teaser (the director's version) of \"2.0\" was released weeks before it was officially released on YouTube. \n", "before having it released to Blu-Ray and DVD (entering its video window). During the theatrical window, digital versions of films are often transported in data storage devices by couriers rather than by data transmission. The data can be encrypted, with the key being made to work only at specific times in order to prevent leakage between screens. Coded Anti-Piracy marks can be added to films to identify the source of illegal copies and shut them down.\n\nSection::::Economic impact of copyright infringement.\n", "BULLET::::- Nintendo's crossover fighting video game series \"Super Smash Bros.\" has a history of having unconfirmed content leaked. Every game since (and including) 2008's \"Super Smash Bros. Brawl\" has been affected by leaks in some form:\n\nBULLET::::- \"Super Smash Bros. Brawl\" for the Wii was leaked by a video on the Japanese language wii.com website, revealing unconfirmed playable characters on January 28, 2008 (three days before the game's Japanese release).\n", "As part of the UK DVD release, the game also hid 300,000 codes inside copies of the film, which gave in-game rewards and bonuses.\n\nSection::::Release.:Internet leak.\n\nThe film was leaked onto peer-to-peer file-sharing websites as part of the Sony Pictures Entertainment hack by the hacker group \"Guardians of Peace\" on November 27, 2014. Along with it came four unreleased Sony Pictures films (\"Annie\", \"Mr. Turner\", \"Still Alice\", and \"To Write Love on Her Arms\"). Within three days of the initial leak, \"Fury\" had been downloaded an estimated 1.2 million times.\n\nSection::::Reception.\n\nSection::::Reception.:Box office.\n", "In January 2012, an anonymous individual claimed to have an EPROM cartridge of the GBC version and requested $2,000 before he was willing to leak the playable ROM. The goal was met in February and the ROM files containing an unfinished build of the game were subsequently leaked.\n\nSection::::Release.:\"Deadly Silence\".\n", "Lionsgate had originally planned to release the film in 2006. When that plan failed to work out, several release dates were listed in various places; for example, February 2007 was listed in some official \"The Source\" auctions, as well as on actress Thekla Reuten's own website, and March 2007 was listed on composer George Kallis' website. Eventually, even the official auctions began using simply a broad \"First Quarter 2007\" release date. As of February 14, 2007, producers Peter Davis and William Panzer of Davis/Panzer Productions, in conjunction with Lionsgate Entertainment, were editing and remixing the film.\n", "Section::::Release and reception.:Commercial.\n", "Section::::Characters.:Leaks.\n\nSome of the Leaks are creatures from \"Conqueror of All Worlds\" that have leaked into Earth. There were also other Leaks that come from anything associated with the Internet. The following are listed in order of appearance:\n\nBULLET::::- Holiday Orcs - A group of Christmas-themed orcs. They were \"barded\" by the male Mountain Barbarian.\n", "The leak forced the company to set a release date. It started a speculation about the delay the film suffered for years. However it was known that \"Fallen\" did not have a distribution company. Later Relativity Media, directed by Ryan Kavanaugh, was interested in distributing it, after several days of negotiation. 
In the end the release date was greenlit, with the premiere taking place at the Philippines. The film was scheduled to be released on September 1, 2017, by Vertical Entertainment.\n", "Microsoft reported that their security teams and law enforcement were investigating the possibility of \"Halo 4\" content being leaked on the internet in October 2012. Jessica Shea, Community Manager at 343 Industries, warned fans to be wary of \"Halo 4\" spoilers that were posted on the internet. O'Connor stated at New York Comic Con that leaks of the game and footage would not have any impact on how the game is released or marketed and that unlicensed uploading of high-profile games is inevitable.\n", "A good deal of information in the video game industry is kept under wraps by developers and publishers until the game's release; even information regarding the selection of voice actors is kept under high confidential agreements. However, rumors and leaks of such information still happen, typically occurring though the online message forum NeoGAF. Other times, such rumors and information fall into the hands of video game journalists, often from anonymous sources from within game development companies, and it becomes a matter of journalistic integrity whether to publish this information or not.\n", "The game has been designed to be mod-able. Lionhead has stated that they might release the actual tools they used to create scenes (though no such thing has been seen yet), shortly after the release of the game, along with expansion packs. Alongside this, unofficial mods are possible, leading to extra props, sets, animations, scenes, and clothing designs.\n", "BULLET::::- As with the \"Star Trek\" incident, major films or television productions frequently give out scripts to the cast and crew in which one or two lines are different in each individual version. Thus if the entire script is copied and leaked to the public, the producers can track down the specific person who leaked the script. In practice this does not prevent generalized information about the script from being leaked, but it does discourage leaking verbatim copies of the script itself.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-03286
Since AIDS is no longer considered a “homosexual disease,” why do blood banks still ask on the questionnaire whether a man has had sexual contact with another man?
While it's no longer considered a "homosexual disease," HIV/AIDS is still very prevalent in the gay community. Gay and bisexual men make up 70% of all new HIV cases despite being less than 1.7% of the population (assuming 50% of homosexual and bisexual people are male). This is partly behavioral: gay and bisexual men tend to have more anonymous partners and more unprotected sex. Anal sex is also more traumatic to tissue than vaginal sex; the rectum is not built to accommodate intercourse the way the vagina is, so HIV has a higher chance of entering the bloodstream through anal sex than through vaginal sex. Hence the blood-donation question: "Are you a male who has had anal sex, even once, with another man?" Answering yes may disqualify your donation, because donors in this group are significantly more likely to carry HIV than other donor groups. Donated blood is tested for HIV and hepatitis, but those tests rely on antibodies: if you were infected within the previous month, the infection may not yet show up on a test.
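A quick back-of-the-envelope sketch (plain Python, purely illustrative) of the arithmetic behind "significantly more likely": it plugs in the figures quoted in the comment above (70% of new cases, under 1.7% of the population) and computes the implied relative rate. The numbers come from the comment, not from a vetted source, and the calculation assumes both percentages describe the same population over the same period.

# Back-of-the-envelope relative-rate calculation using the figures
# quoted in the comment above. Illustrative only; not epidemiology.
share_of_new_cases  = 0.70   # fraction of new HIV cases in the group (from the comment)
share_of_population = 0.017  # fraction of the population the group represents (from the comment)

# New-case rate per unit of population share, inside vs. outside the group:
rate_inside  = share_of_new_cases / share_of_population
rate_outside = (1 - share_of_new_cases) / (1 - share_of_population)

print(f"relative rate: {rate_inside / rate_outside:.0f}x")  # ~135x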
[ "The HIV-positive people in the presumably heterosexual patient groups were all identified from medical records as either intravenous drug abusers or recipients of blood transfusions. Two of the men who identified as heterosexual specifically denied ever engaging in a homosexual sex act. The records of the remaining heterosexual subjects contained no information about their sexual orientation; they were assumed to have been primarily or exclusively heterosexual \"on the basis of the numerical preponderance of heterosexual men in the population\".\n", "Blood collecting organizations, such as the American Red Cross, have policies in accordance with FDA guidelines that prohibit accepting blood donations from any \"male who has had sex with another male since 1977, even once\". The inclusion of homo- and bisexual men on the prohibited list has created some controversy, but the FDA and Red Cross cite the need to protect blood recipients from HIV as justification for the continued ban. Even with PCR-based testing of blood products, a \"window period\" may still exist in which an HIV-positive unit of blood would test negative. All potential donors from HIV high-risk groups are deferred for this reason, including men who have sex with men. The issue has been periodically revisited by the Blood Products Advisory Committee within the FDA Center for Biologics Evaluation and Research, and was last reconfirmed on May 24, 2007. Documentation from these meetings is available to the public.\n", "In the earliest years of the AIDS epidemic, there were no reliable tests for the virus, which justified blanket bans on blood donations from groups at high risk of acquiring or having HIV, including MSM. These restrictions are similar to present-day restrictions in most countries on people residing in the United Kingdom during the BSE (\"mad cow disease\") epidemic of the 1980s and early-to-mid 1990s, due to the absence of a test for its human form, variant Creutzfeldt–Jakob disease (vCJD).\n", "In the United States in 2005, MSM, African Americans, and persons engaging in high-risk heterosexual behavior accounted for respectively 49%, 49%, and 32% of new HIV diagnoses. In 2009 in the United States, African Americans accounted for 47.9% of new HIV diagnoses reported that year, but represented approximately only 12% of the population.\n\nSection::::Current situation.\n\nSection::::Current situation.:List of countries with their stand on MSM blood donors.\n\nThis list shows countries that had restrictions on blood donors. Most national standards require direct questioning regarding a man's sexual history, but the length of deferral varies.\n", "In 1981 concern was growing over an unidentified infectious disease associated with immune system collapse that would later become known as AIDS. In the U.S. it was found mostly in homosexual men and intravenous drug users, while in France doctors were finding it in a more diverse group of patients. On July 16, 1982, the United States Centers for Disease Control and Prevention (CDC) reported that three hemophiliacs had acquired the disease. Epidemiologists started to believe that the disease was being spread through blood products, with grave implications for hemophiliacs who had routinely injected themselves with concentrate made from large pools of donated plasma, much of which was collected by commercial plasmapheresis prior to routine HIV testing, often in cities that had large numbers of homosexuals and intravenous drug users and in some U.S. 
prisons and underdeveloped countries during the four or five years of the late 1970's through early 1980's before AIDS was even heard of.\n", "The 1980s was the era in which HIV/AIDS was first reported. The first recorded victims of the disease were a group of gay men, and the disease became associated in the media, and at first in medical circles, with gay and bisexual men in particular. The association of HIV/AIDS with gay and bisexual men worsened their stigmatisation, and this association correlated with higher levels of sexual prejudice, such as homophobic/biphobic attitudes.\n", "BULLET::::- Frank Lilly, a geneticist at the Albert Einstein College of Medicine. Lilly had served on the board of the Gay Men's Health Crisis (GMHC) from 1984 to 1986. He was \"one of the first openly gay Presidential appointees\".\n\nBULLET::::- Dr. Woodrow A. Myers Jr., an African American and the health commissioner of Indiana and president of the Association of State and Territorial Health Officers; named vice-chairman by Mayberry.\n\nBULLET::::- Cardinal John O'Connor\n\nBULLET::::- Penny Pullen, an Illinois legislator and advocate of mandatory premarital HIV testing and later founder of the Illinois Family Institute\n", "The most frequent mode of transmission of HIV continues to be through male homosexual sexual relations. In general, recent studies have shown that 1 in 5 gay and bisexual men were infected with HIV. As of 2014, in the United States, 83% of new HIV diagnoses among all males aged 13 and older and 67% of the total estimated new diagnoses were among homosexual and bisexual men. Those aged 13 to 24 also accounted for an estimated 92% of new HIV diagnoses among all men in their age group.\n", "In 1985, early tests using the ELISA method looked for antibodies, which are the immune system's response to the virus. However, there is a window period when using this method in which a person who has been infected with HIV is able to spread the disease but may test negative for the virus. This window period can be as long as three to six months, with an average of 22 days. Tests using the ELISA methods are often still used in developed countries because they are highly sensitive. In developing countries, these tests are often the only method used to screen donated blood for HIV. To cover the window period resultant from the use of these tests, donors are also screened for high risk behaviors, one of which is a history of same-sex sexual activity among male potential donors. Other groups with similar restrictions include commercial sex workers, injecting drug users, and people resident in countries with a high HIV prevalence (such as sub-Saharan Africa). Newer tests look for the virus itself, such as the p24 antigen test, which looks for a part on the surface of the virus, and Nucleic acid tests (NAT), which look for the genetic material of the virus. With these tests, the window period is shorter, with an average duration of 12 days. Fourth generation combination HIV tests are conclusive at 3 months, and Hepatitis B tests are conclusive at 6 months.\n", "There are massive amounts of residual fear about blood product contamination stemming from the contaminated blood scandals, which have caused thousands of deaths due to HIV and Hepatitis C in patients requiring a blood transfusion. Contaminated blood put haemophiliacs at massive risk and severe mortality, increasing the risk of common surgical procedures. 
People who contracted HIV from a contaminated blood transfusion include Isaac Asimov, who received a blood transfusion following a cardiac surgery.\n\nSection::::HIV/AIDS.\n\nIn many developed countries HIV is more prevalent among men who have sex with men (MSM) than among the general population.\n", "Risks are also associated with a non-MSM donors testing positive for HIV, which can have major implications as the donor's last donation could have been given within the window period for testing and could have entered the blood supply, potentially infecting blood product recipients. An incident in 2003 in New Zealand saw a non-MSM donor testing positive for HIV and subsequently all blood products made with the donor's last blood donation had to be recalled. This included NZ$4 million worth of Factor VIII, a blood clotting factor used to treat hemophiliacs which is manufactured from large pools of donated plasma, and subsequently led to a nationwide shortage of Factor VIII and the deferral of non-emergency surgery on hemophiliac patients, costing the health sector millions of dollars more. Screening out those at high risk of bloodborne diseases, including MSM, reduces the potential frequency and impact of such incidents.\n", "By 1984, the Centers for Disease Control had requested GMHC's assistance in planning public conferences on AIDS. That same year, the Human Immunodeficiency Virus was discovered by the French Drs Françoise Barré-Sinoussi and Luc Montagnier. Within two years, GMHC was assisting heterosexual men and women (See Dennis Levy), hemophiliacs, intravenous drug users, and children.\n", "Since reports of the human immunodeficiency virus (HIV) began to emerge in the United States in the 1980s, the HIV epidemic has frequently been linked to gay, bisexual, and other men who have sex with men (MSM) by epidemiologists and medical professionals. The first official report on the virus was published by the Center for Disease Control (CDC) on June 5, 1981 and detailed the cases of five young gay men who were hospitalised with serious infections. A month later, The New York Times reported that 41 homosexuals had been diagnosed with Kaposi’s Sarcoma, and eight had died less than 24 months after the diagnosis was made. By 1982, the condition was referred to in the medical community as (Gay-related immune deficiency (GRID), \"gay cancer,\" and \"gay compromise syndrome.\" It was not until July 1982 that the term Acquired Immune Deficiency Syndrome (AIDS) was suggested to replace GRID, and even then it was not until September that the CDC first used the AIDS acronym in an official report.\n", "In many developed countries, a strong correlation exists between HIV/AIDS and male homosexuality or bisexuality (the CDC states, \"Gay, bisexual, and other men who have sex with men (MSM) represent approximately 2% of the United States population, yet are the population most severely affected by HIV\"), and association is correlated with higher levels of sexual prejudice such as homophobic attitudes. An early name for AIDS was gay-related immune deficiency or GRID. During the early 1980s, HIV/AIDS was \"a disorder that appears to affect primarily male homosexuals\".\n", "Section::::Current situation.:United States.:History of calls to change the policy.\n\nBULLET::::- In 2006, the AABB, American Red Cross, and America's Blood Centers all supported a change from the current US policy of a lifetime deferral of MSM to one year since most recent contact. 
One model suggested that this change would result in one additional case of HIV transmitted by transfusion every 32.8 years. The AABB has suggested making this change since 1997. The FDA did not accept the proposal and had concerns about the data used to produce the model, citing that additional risk to recipients was not justified.\n", "The US Center for Disease Control recommends annual screening for syphilis, gonorrhea, HIV and chlamydia for men who have sex with men.\n\nBlack gay men have a greater risk of HIV and other STIs than white gay men. However, their reported rates of unprotected anal intercourse are similar to those of men who have sex with men (MSM) of other ethnicities.\n\nSection::::Issues affecting gay men.:Substance abuse.\n", "BULLET::::- 1994 – CDC published a frank brochure on how condoms reduce the transmission of the AIDS virus.\n\nBULLET::::- 1995 – CDC recommended offering HIV testing to all pregnant women.\n\nBULLET::::- 1996 – CDC, in partnership with the International Society for Travel Medicine, initiated the GeoSentinel surveillance network to improve travel medicine.\n\nBULLET::::- 1997 – CDC participated in the nationally televised White House event of the Presidential Apology for the Tuskegee Study.\n\nBULLET::::- 1998 – For the first time since 1981, AIDS was diagnosed in more African-American and Hispanic men than in gay white men.\n", "The gay rights group Stonewall said the move was a \"step in the right direction\". However, a spokesperson pointed to the fact that high-risk heterosexuals would still be less controlled than low-risk gay men: \"A gay man in a monogamous relationship who has only had oral sex will still automatically be unable to give blood but a heterosexual man who has had multiple partners and not worn a condom will not be questioned about his behaviour, or even then, excluded.\" \"The Independent\" reported that Andy Wasley, editor in chief of \"So So Gay\" magazine, called for \"more precise selection criteria\" to be used in identifying high-risk potential donors.\n", "BULLET::::- Gay sexual practices\n\nBULLET::::- Terminology of homosexuality\n\nSection::::External links.\n\nBULLET::::- FDA:Blood Products Advisory Committee, 09Mar2006 transcript See page 53 (page 59 of the pdf) for the discussion of test error rates. \"Warning: this is a 133 MB scanned transcript.\"\n\nBULLET::::- CDC: HIV/AIDS among Men Who Have Sex with Men\n\nBULLET::::- British Medical Journal Debate: Should men who have ever had sex with men be allowed to give blood? No\n\nBULLET::::- British Medical Journal Debate: Should men who have ever had sex with men be allowed to give blood? Yes\n", "In the United States, men who have sex with men (MSM), described as gay and bisexual, make up about 55% of the total HIV-positive population, and 67% of new HIV cases and 83% of the estimated new HIV diagnoses among \"all\" males aged 13 and older, and an estimated 92% of new HIV diagnoses among all men in their age group (2014 report). 1 in 6 gay and bisexual men are therefore expected to be diagnosed with HIV in their lifetime if current rates continue. Gay and bisexual men accounted for an estimated 54% of people diagnosed with AIDS, with 39% being African American, 32% being white, and 24% being Hispanic/Latino. The CDC estimates that more than 600,000 gay and bisexual men are currently living with HIV in the United States. 
A review of four studies in which trans women in the United States were tested for HIV found that 27.7% tested positive.\n", "Discouragement of homophobia. Men who have sex with men (MSM) are marginalized and isolated within many minority communities, according to “African-Americans, Health Disparities, and HIV/AIDS.” As a result, they are less inclined to report their sexual identity to others, much less discuss their lifestyles with their doctors. Many contract HIV and spread it to others because they rarely, if ever, have themselves tested. NMAC urges the CDC, public agencies, and nongovernmental community groups to dialogue with MSM and promote acceptance of them within their communities.\n", "BULLET::::- The American Medical Student Association membership voted to create an action committee on LGBT health issues and elected Brian Hurley to the office of national vice-president, the first to hold the office.\n\nBULLET::::- The US Food and Drug Administration re-affirmed its policy prohibiting men who have sex with men (MSM) from donating blood despite recommendations from the American Red Cross, and the American Association of Blood Banks.\n", "The CDC (2015) reported that gay and bisexual men accounted for 82% (26,375 out of 1,242,000 adults and adolescents) of HIV diagnoses among males and 67% of all diagnoses in the United States, while six percent (2,392) of HIV diagnoses were attributed to injection drug use (IDU) and another 3% (1,202) to male-to-male sexual contact plus IDU. Heterosexual contact accounted for 24% (9,339) of all HIV diagnoses.\n", "Though not commonly classified as an STI, giardiasis can be transmitted between gay men, and it can be responsible for severe weight loss and death for individuals who have compromised immune systems, especially HIV.\n\nSection::::Health issues.:Sexually transmitted infections.:Other.\n\nUnprotected anal sex is a risk factor for formation of antisperm antibodies (ASA) in the recipient. In some people, ASA may cause autoimmune infertility.\n\nSection::::MSM blood donor controversy.\n", "Most research on HIV/AIDS focuses on gay and bisexual men than lesbians and bisexual women. Evidence for risky sexual behavior in bisexually behaving men has been conflicted. Bisexually active men have been shown to be just as likely as gay or heterosexual men to use condoms. Men who have sex with men and women are less likely than homosexually behaving men to be HIV-positive or engage in unprotected receptive anal sex, but more likely than heterosexually behaving men to be HIV-positive. Although there are no confirmed cases of HIV transmitted from female to female, women who have sex with both men and women have higher rates of HIV than homosexual or heterosexual women.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-01501
Why can most people only recall the major plot points even though they might have read the entire book?
This is how the human brain works in general: we don't actually remember everything we ever observe; rather, we try to learn and remember the bits that matter to us.
[ "Casci's original screenplay included a story arc for the protagonist, Richard Tyler (played by Macaulay Culkin), who begins the tale as a boy who hates reading, but by the end of the film, learns to love reading. The revised screenplay by Contreras and Kirschner omitted the reading-themed story arc, instead emphasizing the boy's journey from cowardice to courage.\n", "BULLET::::- A \"Publishers Weekly\" review says, \"He gives the tale additional punch by varying the pace from full-page scenes to frame-by-frame snapshots and by casting the main characters as an intriguingly multicultural extended family. Readers will also enjoy the recurring visual pun as they spy the very same book they're reading in the hand of the girl, and the same page they're looking at almost every time she manages to sneak a peek at it.\"\n\nBULLET::::- It was reviewed by \"Booklist\", who said the book is \"utterly charming (and more than a little surreal) [and] winsome in text and art\".\n", "Noah Cruickshank writing for The A.V. Club wrote \"It's heavy stuff, but told in a way that amps up the tension even more, making the wait for the next book all the more nerve-racking.\" Jenni Laidman of \"Chicago Tribune\" claimed the book \"operates on three levels\": the basic plot, humour and wit, and cultural references \"that turn the book into a puzzle\". Laidman also said that \"[the plot] matters far less than the wordplay that gets us there\" and that although \"it's still a children's book\", the book \"proves fun for adults, too\".\n\nSection::::External links.\n", "\"TV Guide\" wrote, \"Weak story, poor dialog; everyone's just kiddin' around\" ; while \"Mystery File\" wrote, \"it’s only in bits and pieces and occasional places that the plot rises above the purely pedestrian. If I were Leonard Maltin, the best I could give this movie would be 1½ stars out of five and I still think I’d be just a little bit generous if I did. Nonetheless, its historical significance is high, so I was glad to have had the opportunity to have seen it, and you may too.\" \n", "The story has very short chapters, with quickly shifting times and locations, which a Canadian reviewer mused was probably because of its original conception as a play. Finlit reviewer Lauri Sihvonen places emphasis on this precision of detail and style, saying \"everything is packed into the language, every verb lives and breathes\"\n", "“Because the material was based on a novel,” explains Terry Chen, “there was a lot to pull from that wasn’t even in the script. I’m talking about rich details that can only exist in a novel. But this whole shared history between these characters was so intimate and interwoven. It gave the cast so much to subconsciously pull from.”\n", "\"Lit Pub\" reviewer Edward J. Rathke found the novel to be a remedy for heartbreak: \"The next thing I knew, the sun was rising and my heart was breaking, but in a good way, the way that resurrects you, that shows you everything you forgot to pay attention to, forgot to remember, and I closed it because it was done, again, finished for the third time, and I could’ve turned back to page one and began again, which is how the first two readings happened, in consecutive days, because this book burns you, burrows deep, and smolders, lives, reconnects cells, and balances chemistry.\"\n", "Bennett Davlin wrote the novel first with the intention of directing and producing the film. 
“These characters were so real to me that I just knew I had to bring them to the big screen”.\n\n“One of the most appealing things to me,” explains Billy Zane “is that so much of Taylor is internalized. He says very little, but we’re in his head the whole time.”\n", "Artemis Fowl II, Domovoi Butler and Juliet all submitted to a fine-tune mind-wipe at the end of \"The Eternity Code\", but Artemis and Butler managed to initiate a total recall through a disk containing specific knowledge to trigger recall. Artemis' was an in-depth description of his escapades, as was Butler's, but Butler was able to recall all the memories totally after hearing merely his first name, Domovoi. There was nothing on the disk for Juliet, although Juliet herself achieves total recall in \"\", by listening to Turnball mesmerizing people.\n\nSection::::Fairy concepts.:Mood blanket.\n", "Section::::Style.\n\nHelen Dewitt plays with the language used and the visual effects created by the structure of the letters, making the novel feel more conversational rather than formal, and thus forcing the readers to sense with Sibylla’s perception and feelings. \n", "Thomas M. Wagner of SF Reviews wrote that \"despite my disappointment, images and bits and pieces of the novel simply would not get out of my head. This is saying something, since, with the volume of SF and fantasy I read, I do not exactly retain an eidetic memory of everything I've read that I can call up in a second or two unless the book literally bowled me over. But in the case of Revelation Space, two and three years later I still could remember the opening scene in the archaeological dig on the lonely planet of Resurgam with remarkable clarity. The dark, eerie corridors of the vast starship Nostalgia for Infinity still brought haunting images to mind.\"\n", "This structure of repetitions and references underlines the peculiar theory of time the novel transports: \"As we go on reading, we find more and more [...] reduplications of names, events, actions, and even identical sentences uttered by characters who live two centuries apart, until we are forced to conclude that, in the novel, nothing progresses in time, that the same events repeat themselves endlessly, and that the same people live and die only in order to be born and to live the same events again and again, eternally caught in what appears to be the ever-revolving wheel of life and death. This interchangeability of characters and the circularity of events is stressed by the device of using the same words to end and to begin adjacent chapters.\n", "Some have argued that the book is too short, which might have been a result of editor pressure at the time. For example, Thomas M. Wagner writes: \"the book does feel somewhat rushed, as well as heavily edited, and I felt there was more Anderson was wanting to tell me. Anderson focuses his plot on a handful of lead characters.\"\n", "\"When I was trying to engage myself as young Jason Tanamor it was often difficult to get back in that frame of mind or recall specific emotions that surrounded that experience.\" He stated that the bulk of this novel was written during the DoD Furlough which resulted in United States budget sequestration in 2013.\n\nSection::::Novels.:Hello Fabulous!\n", "\"Obviously, with 1,700 pages of manuscript, I can't keep in all in my head,\" he added. \"So from time to time I will say to my wife, 'Give me a number from one to 500.' 
I then look at the page corresponding to the number she's just given me, and see if anything on that page makes me want to know what happens next, or what happened just previously. The magic of storytelling is to want to make people listen breathlessly, if you're telling the story orally, or turn the page, if you're writing it. That's why, at the end of every one of my chapters, there's a big fat hook.\"\n", "Section::::Style.\n\nWillis includes elements of madcap comedy in the style and form of \"Passage\", and links different events thematically in order to foreshadow later events.\n\nThe novel celebrates metaphor, the very idea of which gives Joanna the understanding of how the NDE works to help the dying brain, rather than to ease it into death.\n", "A significant difference is that at the end of the novel, unlike the film, it is ambiguous whether Jamie died or simply disappeared into the shadow world. Sparks says that he had written the book knowing she would die, yet had \"grown to love Jamie Sullivan\", and so opted for \"the solution that best described the exact feeling I had with regard to my sister at that point: namely, that I hoped she would live.\"\n\nSection::::Soundtrack.\n", "Others have their names revealed long after they have become a participant in the story. Inge is introduced on page 20 as a force to be reckoned with, but her name is not learned by the reader until page 30. Fr. Wendt is introduced on page 26, another considerable presence, yet his name does not appear for another twelve pages. The brooding youth Andreas Hofmeyr, one of the most important characters, has a whopping 33 page interlude between the introduction of his character and that of his name (pages 21 and 54, respectively).\n", "BULLET::::- \"School Library Journal\" in their review said \"The true charm of the book lies in its tongue-in-cheek presentation and lively watercolor illustrations. Muth has created a multiracial, multigenerational array of friends and family who surround the unnamed protagonist. Careful observers will be tickled to note that the gifts open book always mirrors the pages they are reading. A light, original diversion.\"\n", "While the central plot is a murder mystery, the initial idea was of a writer with PTSD who cannot write and is instead fronting for his blacklisted best friend. Throughout the series, Brubaker switched between first- and third-person narration because it allowed him to tell a broader story. He was also trying to create a layered story which would reward repeated reading, so he avoided extraneous details. When writing the scripts, Brubaker listened to music from the 1940s to stay in the right frame of mind.\n", "This clear pattern is deliberately obscured by a \"pattern of echo and repetition\". There are numerous parallels in characters, actions and descriptions between the chapters taking place in the 18th and those taking place in the 20th century. \"They escape any effort at organization and create a mental fusion between past and present.\" For example, the same fragments of popular songs, ballads, and poems are heard in the streets of London in both historical periods.\n", "Gareth Cordery writes that \"if \"David Copperfield\" is the paradigmatic Bildungsroman, it is also the quintessential novel of memory\" and as such, according to Angus Wilson, the equal of Marcel Proust's \"In Search of Lost Time\" (\"À la recherche du temps perdu\"). 
The memory of the hero engages so intensely with his memories that the past seems present:\n", "Section::::Reception.\n\nThe economic style of writing has led horror writer Robert Weinberg to describe \"The Ghost Pirates\" as \"one of the finest examples of the tightly written novel ever published.\" \n", "The novel is told by a third-person omniscient narrator – one who is able to reach and relay the thoughts, feelings, \"and\" actions of the characters. Yet, thoughts and feelings of minor characters are usually withheld in order for readers to focus on relating to the major figures in the novel. There are points in the novel when the reader knows more about a situation or character than anyone in the actual story – this unique situation allows for thorough analysis or characters and the development of suspense.\n\nSection::::Structure.\n", "BULLET::::- \"Hold tight!\" (Karl Wallenda's last words)\n\nWillis has the characters discuss a great many movies, some of which have indirect or obvious bearing on the novel's themes. They include \"Coma\", \"Fight Club\", \"Final Destination\", \"Flatliners\", \"Harold and Maude\", and \"Peter Pan\", as well as \"The Twilight Zone\" and \"The X-Files\".\n\nJoanna frequently talks about the \"Titanic\" movie; she, Vielle, Pat and Kit Briarley, and others share her dislike of it because of the changes to historical fact. Joanna (speaking for Willis), complains:\n\nSection::::Reception.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03215
How are rifles aimed properly when the scope sits a few inches above the barrel?
The rifle is zeroed. You fire three rounds while aiming with the reticle in your optic, then adjust the sight up or down, left or right, based on where the group lands. Each adjustment changes the angle between the line of sight and the barrel, so the reticle effectively moves to compensate, until the barrel is pointed where the reticle rests on the target.
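To make the zeroing arithmetic concrete, here is a minimal Python sketch under two stated assumptions: the scope has 0.1 mil clicks (a common value, also mentioned in the passages below), and 1 mil subtends 1/1000 of the distance (10 cm at 100 m). The function names are illustrative, not any vendor's API.

def mils(offset_cm, distance_m):
    """Angular size of a point-of-impact offset, in milliradians."""
    return (offset_cm / 100.0) / distance_m * 1000.0

def clicks(offset_cm, distance_m, mil_per_click=0.1):
    """Turret clicks needed to move the point of impact by the offset."""
    return round(mils(offset_cm, distance_m) / mil_per_click)

# Example: a three-shot group centered 8 cm low at 100 m is 0.8 mil low,
# so dial 8 clicks of "up" elevation and confirm with another group.
print(clicks(8.0, 100.0))  # -> 8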
[ "This is the reason why rifles are sighted in for specific target distances. This means that the sights are adjusted so that the bullet will hit the location on the target viewed in the center of the sights at a known distance. When shooting at a target significantly closer or farther than the distance for which the rifle was sighted in, the shooter must know about how much higher or lower than the center of the sights the bullet will hit and adjust the aim accordingly. The study of bullet trajectories after they leave the firearm is known as external ballistics.\n", "Snipers zero their weapons at a target range or in the field. This is the process of adjusting the scope so that the bullet's points-of-impact is at the point-of-aim (centre of scope or scope's cross-hairs) for a specific distance. A rifle and scope should retain its zero as long as possible under all conditions to reduce the need to re-zero during missions.\n", "The adjustment range needed to shoot at a certain distance vary with firearm, caliber and load. For example, with a certain .308 load and firearm combination, the bullet may drop 13 mils at 1000 meters (13 meters). To be able to reach out, one could either:\n\nBULLET::::- Use a scope with 26 mils of adjustment in a neutral mount, to get a usable adjustment of = 13 mils\n\nBULLET::::- Use a scope with 14 mils of adjustment and a 6 mil tilted mount to achieve a maximum adjustment of + 6 = 13 mils\n\nSection::::Shot groupings.\n", "If a rifle scope has mil markings in the reticle (or there is a spotting scope with a mil reticle available), the reticle can be used to measure how many mils to correct a shot even without knowing the shooting distance. For instance, assuming a precise shot fired by an experienced shooter missed the target by 0.8 mils as seen through an optic, and the firearm sight has 0.1 mil adjustments, the shooter must then dial 8 clicks on the scope to hit the same target under the same conditions.\n\nSection::::Sight adjustment.:Common click values.\n\nBULLET::::- General purpose scopes:\n", "Section::::Background.\n\nSection::::Background.:Definitions.\n\nThere is a device that is mounted on the rifle called a sight. While there are many forms of rifle sight, they all permit the shooter to set the angle between the bore of the rifle and the line of sight (LOS) to the target. Figure 2 illustrates the relationship between the LOS and bore angle.\n", "Shooters aim firearms at their targets with hand-eye coordination, using either iron sights or optical sights. The accurate range of pistols generally does not exceed , while most rifles are accurate to using iron sights, or to longer ranges using optical sights (firearm rounds may be dangerous or lethal well beyond their accurate range; the minimum distance for safety is much greater than the specified range). Purpose-built sniper rifles and anti-materiel rifles are accurate to ranges of more than .\n\nSection::::Types of firearms.\n", "Table 1: Example Bullet Drop Table\n\nIf the shooter is engaging a target on an incline and has a properly zeroed rifle, the shooter goes through the following procedure:\n\nBULLET::::1. Determine the slant range to the target (measurement can be performed using various forms of range finders, e.g. laser rangefinder)\n\nBULLET::::2. Determine the elevation angle of the target (measurement can be made using various devices, e.g. sight attached unit)\n\nBULLET::::3. 
Apply the \"rifleman's rule\" to determine the equivalent horizontal range (formula_3)\n", "After the sights have been adjusted, more shots may be fired from a cool barrel forming another group to verify that sight adjustment moved the average bullet placement onto the point of aim. Sighting in has been completed when the group is centered on the point of aim. Bullets may then be fired at targets at different distances to determine trajectory differences from point of aim at those distances.\n", "The first mass produced assault rifle the World War II StG 44 and its preceding prototypes had iron sight lines elevated over the bore axis to extend point-blank range. The current trend for elevated sights and flatter shooting higher-velocity cartridges in assault rifles is in part due to a desire to further extend the maximum point-blank range, which makes the rifle easier to use. Raising the sight line over the bore axis, introduces an inherent parallax problem. At closer ranges (typically inside ), the shooter must aim high in order to place shots where desired.\n\nSection::::See also.\n", "This relationship between the LOS to the target and the bore angle is determined through a process called \"zeroing.\" The bore angle is set to ensure that a bullet on a parabolic trajectory will intersect the LOS to the target at a specific range. A properly adjusted rifle barrel and sight are said to be \"zeroed.\" Figure 3 illustrates how the LOS, bullet trajectory, and range (formula_4) are related.\n\nSection::::Background.:Procedure.\n", "In the open division one can have any number of many optical sights and bipods, and there is no size restriction on muzzle brakes. 1-4 or 1-6 scopes are popular in the Open division. Some use reticles with marked hold overs, while others prefer reticles with a simple dot and crosshair and choose to dial long range adjustments on the turrets instead.\n\nBULLET::::- Tactical:\n", "BULLET::::- 35 mm, a rare tube size which only is seen on some current models from Romanian IOR and U.S.-based Vortex and Leupold\n\nBULLET::::- 36 mm, only used on some newer scope models by Zeiss and Hensoldt\n\nBULLET::::- 40 mm, only used on some scopes made by Romanian IOR and the new Swarovski dS scope\n", "For example, with a typical Leupold brand duplex 16 minute of angle (MOA) reticle (of a type as shown in image B) on a fixed-power scope, the distance from post to post (that is, between the heavy lines of the reticle spanning the center of the scope picture) is approximately at , or, equivalently, approximately from the center to any post at 200 yards. If a target of a known diameter of 16 inches fills just half of the total post-to-post distance (i.e. filling from scope center to post), then the distance to target is approximately . With a target of a diameter of 16 inches that fills the entire sight picture from post to post, the range is approximately 100 yards. Other ranges can be similarly estimated accurately in an analog fashion for known target sizes through proportionality calculations. Holdover, for estimating vertical point of aim offset required for bullet drop compensation on level terrain, and horizontal windage offset (for estimating side to side point of aim offsets required for wind effect corrections) can similarly be compensated for through using approximations based on the wind speed (from observing flags or other objects) by a trained user through using the reticle marks. 
The less-commonly used holdunder, used for shooting on sloping terrain, can even be estimated by an appropriately-skilled user with a reticle-equipped scope, once both the slope of the terrain and the slant range to target are known.\n", "Section::::Procedure.\n", "Some scopes use a side-wheel parallax adjustment to control focus (rather than a camera-like focus ring on the objective bell of the scope), and this allows the use of large diameter wheels to increase the distance between range markings and effectively improve ranging resolution.\n\nSection::::Physics and technique.\n", "No matter which method of bore sighting is used, the result is to align the cross-hairs of the scope to the spot where the barrel is pointing at a particular distance. Because of variations in the trajectory of ammunition and other factors the bore-sighted rifle will probably not shoot to the exact spot that the cross-hairs indicate, and live ammunition will need to be fired to fine-tune the sighting process.\n", "BULLET::::- Levelling - leveling of the base of the instrument to make the vertical axis vertical usually with an in-built bubble-level.\n\nBULLET::::- Focusing - removing parallax error by proper focusing of objective and eye-piece. The eye-piece only requires adjustment once at a station. The objective will be re-focused for each subsequent sightings from this station because of the different distances to the target.\n\nSection::::Principles of operation.:Sightings.\n", "formula_6\n\nIn most regular sport and hunting rifles (except for in long range shooting), sights are usually mounted in neutral mounts. This is done because the optical quality of the scope is best in the middle of its adjustment range, and only being able to use half of the adjustment range to compensate for bullet drop is seldom a problem at short and medium range shooting.\n", "If time allows, sighting in the firearm is important. The scope needs to be adjusted for different ranges, so knowing the expected range beforehand is helpful. Knowing the direction and force of the wind is also important. For a moving target, keeping as steady a sight picture as possible is paramount. Keeping the rifle steady and at the same latitude is good. If the target is moving in longitude, adjustments need to be made. If the target is moving directly toward or away from the shooter, the rifle should not need to be moved side to side. \n", "The three main technologies employed for long-range shooting—the bolt-action rifle, telescopic rifle scope and machined cartridge ammunition—were developed in the nineteenth century. The first bolt-action rifle was produced in 1824 by the German firearms inventor Johann Nicolaus von Dreyse. The first documented telescopic rifle sight was developed between 1835 and 1840 by the American Morgan James. Machined metal-cased cartridge ammunition was first adopted by the British in 1867.\n\nSection::::System requirements.\n\nTo qualify as a precision guided firearm, the system must:\n\nBULLET::::- Be a complete firing system – rifle, ammunition and networked tracking scope\n", "With a mil reticle-equipped scope the distance to an object can be estimated with a fair degree of accuracy by a trained user by determining how many angular mils an object of known size subtends. Once the distance is known, the drop of the bullet at that range (see external ballistics), converted back into angular mils, can be used to adjust the aiming point. 
Generally mil-reticle scopes have both horizontal and vertical crosshairs marked; the horizontal and vertical marks are used for range estimation and the vertical marks for bullet drop compensation. Trained users, however, can also use the horizontal dots to compensate for bullet drift due to wind. Mil-reticle-equipped scopes are well suited for long shots under uncertain conditions, such as those encountered by military and law enforcement snipers, varmint hunters and other field shooters. These riflemen must be able to aim at varying targets at unknown (sometimes long) distances, so accurate compensation for bullet drop is required.\n", "BULLET::::- Connecticut - Isaac Sherman.\n\nSection::::Survey.\n\nSection::::Survey.:Process.\n", "BULLET::::- Long range wind estimation table based on the Beaufort scale:\n\nSection::::Competitions.\n\nThere are many different long range disciplines, competing both at known (KD) and unknown distances (UKD), individually or in teams (shooter and spotter). In UKD competitions the marksman must also judge the distances, for example by comparing a known size target with angular mil hashmarks inside their scope (called \"milling\") to calculate the distance. Sometimes a laser rangefinder may also be used, if permitted.\n\nSection::::Competitions.:T-Class.\n", "Sniper rifles often have even greater magnification than designated marksman rifles outfitted with magnification, for example, the M110 SASS used by the United States Army, is equipped with a Leupold 3.5-10× variable-power scope. However, some designated marksman rifles, such as the Mk 12 Special Purpose Rifle or the USMC Squad Advanced Marksman Rifle are fitted with scopes with similar magnification.\n\nSection::::Comparison to sniper rifles, battle rifles, and assault rifles.:Barrels.\n", "Windage plays a significant role, with the effect increasing with wind speed or the distance of the shot. The slant of visible convections near the ground can be used to estimate crosswinds, and correct the point of aim. All adjustments for range, wind, and elevation can be performed by aiming off the target, called \"holding over\" or Kentucky windage. Alternatively, the scope can be adjusted so that the point of aim is changed to compensate for these factors, sometimes referred to as \"dialing in\". The shooter must remember to return the scope to zeroed position. Adjusting the scope allows for more accurate shots, because the cross-hairs can be aligned with the target more accurately, but the sniper must know exactly what differences the changes will have on the point-of-impact at each target range.\n" ]
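The mil-reticle ranging described in the passages above reduces to a single proportion: 1 mil subtends 1/1000 of the range, so an object of known size that spans n mils in the reticle is (size / n) * 1000 away. A minimal sketch of that relation (illustrative Python, metric units):

def range_from_mils(target_size_m, size_in_mils):
    """Estimated range (m) to a target of known size that spans
    the given number of mils in the reticle."""
    return target_size_m / size_in_mils * 1000.0

# Example: a target known to be 1.8 m tall that spans 2 mils is ~900 m away.
print(range_from_mils(1.8, 2.0))  # -> 900.0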
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-11872
Before spoken language was developed, were humans able to have an inner monologue?
Not really; a monologue is by definition language. If language didn't exist, then people couldn't have been running one internally either. There is a bizarre misconception, held by some, that all or even the bulk of thinking is done via an internal monologue. When you wake up in the morning and need to pee, you don't have to think "Hey, I need to pee"; you just... feel that you need to pee and understand. You can have thoughts and feelings that you struggle to put into words, but you cannot have an inner monologue that you struggle to put into feelings.
[ "In the 1920s, Swiss developmental psychologist Jean Piaget proposed the idea that private (or \"egocentric\") speech—speaking to yourself out loud—is the initial form of speech, from which \"social speech\" develops, and that it dies out as children grow up. In the 1930s, Russian psychologist Lev Vygotsky proposed instead that private speech develops \"from\" social speech, and later becomes internalised as an internal monologue, rather than dying out. This interpretation has come to be the more widely accepted, and is supported by empirical research.\n", "Inner speech is strongly associated with a sense of self, and the development of this sense in children is tied to the development of language. There are, however, examples of an internal monologue or inner voice being considered external to the self, such as auditory hallucinations, the conceptualisation of negative or critical thoughts as an inner critic, and as a kind of divine intervention. As a delusion, this can be called \"thought insertion\".\n\nThough not necessarily external, a conscience is also often thought of as an \"inner voice\".\n\nSection::::Relation to the self.:Absence of an internal monologue.\n", "Studies have revealed the differences in neural activations of inner dialogues versus those of monologues. Functional MRI imaging studies have shown that monologic internal speech involves the activation of the superior temporal gyrus and the left inferior frontal gyrus, which is the standard language system that is activated during any kind of speech. However, dialogical inner speech implicates several additional neural regions. Studies have indicated overlap with regions involved with thinking about other minds.\n", "However, the results of neural imaging have to be taken with caution because the regions of the brain activated during spontaneous, natural internal speech diverge from those that are activated on demand. In research studies, individuals are asked to talk to themselves on demand, which is different than the natural development of inner speech within one's mind. The concept of internal monologue is an elusive study and is subjective to many implications with future studies.\n\nSection::::In literature.\n", "The concept of internal monologue is not new, but the emergence of the functional MRI has led to a better understanding of the mechanisms of internal speech by allowing researchers to see localized brain activity.\n", "Implicit in the idea of a social origin to inner speech is the possibility of \"inner \"dialogue\"\" – a form of \"internal collaboration with oneself.\" However, Vygotsky believed inner speech takes on its own syntactic peculiarities, with heavy use of abbreviation and omission compared with oral speech (even more so compared with written speech).\n\nAndy Clark (1998) writes that social language is \"especially apt to be co-opted for more private purposes of [...] self-inspection and self-criticism,\" although others have defended the same conclusions on different grounds.\n\nSection::::Neurological correlates.\n", "Not everyone reports experiencing an internal monologue, and most people report experiences that do not involve an internal monologue at least some of the time. 
This is particularly prevalent among children, and has been cited as evidence for the \"language of thought\" hypothesis, which posits an underlying language of the brain, or \"mentalese\", distinct from a thinker's native tongue.\n\nSection::::Purpose.\n\nOne study found inner speech was most usual for tasks involving self-regulation (e.g. planning and problem solving), self‐reflection (e.g. emotions, self‐motivation, appearance, behavior/performance, and autobiography), and critical thinking (e.g., evaluating, judging, and criticizing).\n\nSection::::Purpose.:Development.\n", "Instructional self-talk focusses attention on the components of a task and can improve performance on physical tasks that are being learnt, however it can be detrimental for people who are already skilled in the task.\n\nSection::::Relation to the self.\n", "The existence of these neurons should likely trace their roots back closer to a common ancestor with modern primates, the only other species noted with mirror neurons, and some of the intentional and mimetic capabilities that Donald attributes to the evolution of the mimetic mind in Homo Erectus were likely around much earlier in some simplified form, perhaps as the foundation of the rigid social hierarchies that our primate cousins are known for.\n", "According to Jaynes, ancient people in the bicameral state of mind would have experienced the world in a manner that has some similarities to that of a person with schizophrenia. Rather than making conscious evaluations in novel or unexpected situations, the person would hallucinate a voice or \"god\" giving admonitory advice or commands and obey without question: One would not be at all conscious of one's own thought processes \"per se\". Jaynes's hypothesis is offered as a possible explanation of \"\"command hallucinations\"\" that often direct the behavior of those afflicted by first rank symptoms of schizophrenia, as well as other voice hearers.\n", "The language of thought hypothesis has been both controversial and groundbreaking. Some philosophers reject the LOTH, arguing that our public language \"is\" our mental language—a person who speaks English \"thinks\" in English. But others contend that complex thought is present even in those who do not possess a public language (e.g. babies, aphasics, and even higher-order primates), and therefore some form of mentalese must be innate. \n", "Jaynes wrote that ancient humans before roughly 1000 BC were not reflectively meta-conscious and operated by means of automatic, nonconscious habit-schemas. Instead of having meta-consciousness, these humans were constituted by what Jaynes calls the \"bicameral mind\". For bicameral humans, when habit did not suffice to handle novel stimuli and stress rose at the moment of decision, neural activity in the \"dominant\" (left) hemisphere was modulated by auditory verbal hallucinations originating in the so-called \"silent\" (right) hemisphere (particularly the right temporal cortex), which were heard as the voice of a chieftain or god and immediately obeyed.\n", "Tomasello argues that this kind of bi-directional cognition is central to the very possibility of linguistic communication. Drawing on his research with both children and chimpanzees, he reports that human infants, from one year old onwards, begin viewing their own mind as if from the standpoint of others. He describes this as a cognitive revolution. Chimpanzees, as they grow up, never undergo such a revolution. 
The explanation, according to Tomasello, is that their evolved psychology is adapted to a deeply competitive way of life. Wild-living chimpanzees form despotic social hierarchies, most interactions involving calculations of dominance and submission. An adult chimp will strive to outwit its rivals by guessing at their intentions while blocking them from reciprocating. Since bi-directional intersubjective communication is impossible under such conditions, the cognitive capacities necessary for language don't evolve.\n", "In some cases people may think of inner speech as coming from an external source, as with schizophrenic auditory hallucinations. Additionally, not everyone has a verbal internal monologue. The looser flow of thoughts and experiences, verbal or not, is called a stream of consciousness, which can also refer to a related technique in literature.\n\nIn a theory of child development formulated by Lev Vygotsky, inner speech has a precursor in private speech (talking to oneself) at a young age.\n\nSection::::Role in mental health.\n\nNegative self-talk has been implicated in contributing to psychological disorders including depression, anxiety, and bulimia nervosa.\n", "Psychologist Julian Jaynes proposed that this is a temporary accessing of the bicameral mind; that is, a temporary separating of functions, such that the authoritarian part of the mind seems to literally be speaking to the person as if a separate (and external) voice. Jaynes posits that the gods heard as voices in the head were and are organizations of the central nervous system. God speaking through man, according to Jaynes, is a more recent vestige of God speaking to man; the product of a more integrated higher self. When the bicameral mind speaks, there is no introspection. We simply experience the Lord telling us what to do. In earlier times, posits Jaynes, there was additionally a visual component, now lost.\n", "In 1861 Paul Broca, reported a post mortem study of an aphasic patient who was speechless apart from a single nonsense word: \"Tan\". Broca showed that an area of the left frontal lobe was damaged. As Tan was unable to produce speech but could still understand it, Broca argued that this area might be specialised for speech production and that language skills might be localized to this cortical area. Broca did a similar study on another patient, Lelong, a few weeks later. Lelong, like Tan, could understand speech but could only repeat the same 5 words. After examining his brain, Broca noticed that Lelong had a lesion in approximately the same area as his patient Tan. He also noticed that in the more than 25 patients he examined with aphasia, they all had lesions to the left frontal lobe but there was no damage to the right hemisphere of the brain. From this he concluded that the function of speech was probably localized in the inferior frontal gyrus of the left hemisphere of the brain, an area now known as Broca's area.\n", "Section::::History.\n\nIn ancient Greek theatre, the origin of western drama, the conventional three actor rule was preceded by a two-actor rule, which was itself preceded by a convention in which only a single actor would appear on stage, along with the chorus. The origin of the monologue as a dramatic device, therefore, is not rooted in dialogue. 
It is, instead, the other way around; dialogue evolved from the monologue.\n", "Jaynes inferred that these \"voices\" came from the right brain counterparts of the left brain language centres; specifically, the counterparts to Wernicke's area and Broca's area. These regions are somewhat dormant in the right brains of most modern humans, but Jaynes noted that some studies show that auditory hallucinations correspond to increased activity in these areas of the brain.\n", "Some thinkers, like the ancient sophist Gorgias, have questioned whether or not language was capable of capturing thought at all.\n", "In regard to research on inner speech, Fernyhough stated, \"The new science of inner speech tells us that it is anything but a solitary process. Much of the power of self-talk comes from the way it orchestrates a dialogue between different points of view.\" Based on his interpretation of functional medical imaging, Fernyhough believes that the language system of internal dialogue works in conjunction with a part of the social cognition system (localized in the right hemisphere close to the intersection between the temporal and parietal lobes). Neural imaging seems to support Vygotsky's theory that when individuals are talking to themselves, they are having an actual conversation. Intriguingly, individuals did not exhibit this same arrangement of neural activation with silent monologues. Past studies have supported the idea that these two brain hemispheres have different functions. Based on functional magnetic resonance imaging studies, inner speech has been shown to produce more significant activations farther back in the temporal lobe, in Heschl's gyrus.\n", "The philosopher of cognitive science Daniel Dennett, for example, argues there is no such thing as a narrative center called the \"mind\", but that instead there is simply a collection of sensory inputs and outputs: different kinds of \"software\" running in parallel. Psychologist B.F. Skinner argued that the mind is an explanatory fiction that diverts attention from environmental causes of behavior; he considered the mind a \"black box\" and thought that mental processes may be better conceived of as forms of covert verbal behavior.\n", "A study examining memory and embodied cognition illustrates that people remember more of the gist of a story when they physically act it out. Researchers divided female participants randomly into 5 groups, which were \"Read Only,\" \"Writing,\" \"Collaborative Discussion,\" \"Independent Discussion,\" and \"Improvisation.\" All participants received a monologue about teen addiction and were told to pay attention to details about the character and action in the monologue. Participants were given 5 minutes to read the monologue twice, unaware of a future recall test. In the \"Read Only\" condition participants filled out unrelated questionnaires after reading the monologue. In the \"Writing\" condition participants responded to 5 questions about the story from the perspective of the character in the monologue. They had 6 minutes to answer each question. In the \"Collaborative Discussion\" condition participants responded from the character's perspective to the same questions as the \"Writing\" group, but in groups of 4 or 5 women. They were also given 6 minutes per question and everyone participated in answering each question. The \"Independent Discussion\" condition was the same as the \"Collaborative Discussion,\" except 1 person answered each question. 
In the \"Improvisation\" condition participants acted out 5 scenes from the monologue in groups of 5 women. The researchers suggest that this condition involves embodied cognition and will produce better memory for the monologue. Every participant played the main character and a supporting character once. Participants were given short prompts from lines in the monologue, which were excluded from the memory test. Participants had 2 minutes to choose characters and 4 minutes for improvisations. The recall test was the monologue with 96 words or phrases missing. Participants had to fill in the blanks as accurately as possible.\n", "The concept of the dialogue tree has existed long before the advent of video games. The earliest known dialogue tree is described in \"The Garden of Forking Paths,\" a 1941 short story by Jorge Luis Borges, in which the combination book of Ts'ui Pên allows all major outcomes from an event branch into their own chapters. Much like the game counterparts this story reconvenes as it progresses (as possible outcomes would approach n^m where n is the number of options at each fork and m is the depth of the tree).\n", "Constructed action is common in many languages when telling stories or reporting the actions of others. During a narrative, the speaker not only reports the actions of others but performs them as well. The actions performed are not the exact actions of the person but an action constructed by the speaker. Liddell gives the example of a speaker patting their pockets when talking about someone having lost their keys. Since the speaker has not lost their own keys, the only reason they would pat their pockets would be to illustrate the story they are telling. The addressee then understands these actions not as the speaker's but of a character within the story. \n", "Section::::Evolution of the speech organs.\n\nSpeaking is the default modality for language in all cultures. Humans' first recourse is to encode our thoughts in sound — a method which depends on sophisticated capacities for controlling the lips, tongue and other components of the vocal apparatus.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-23932
How does a Turbofan differ from a Turbojet?
Let's clear up some confusion: F/A-18s, F-35s, F-22s, F-15s, etc., all use turbofan engines. Turbojet engines haven't been used in fighters for quite some time because they're less efficient. Turbofans are essentially turbojets that are placed into yet another tube, with the turbojet part being the "core" and the space around the core being the bypass / bypass flow. In front of the core, attached to generally the same shaft that powers the stage 1 (front) compressor disks, is a fan. Air from this fan flows both into the core and the bypass. For the core, this just provides a minor boost in air flow, but a lot of the fan's work goes into accelerating the air into the bypass flow. --- Now here's where the efficiency aspect comes in: Thrust is generated by a change in momentum. Momentum = mass * velocity. However, increasing the velocity of something requires quadratically more energy, since kinetic energy scales with the square of velocity. So what this means is that the most energy efficient way to generate thrust is to move a **lot** of air, but to only move it slightly faster than it was already moving (the sketch after this comment puts rough numbers on this). This is how commercial airliner engines are efficient; they have giant intakes / fans that are designed to move a massive amount of air, but to only move it at subsonic speeds (slower than the speed of sound). The more air that passes through the bypass, rather than the core, the higher the engine's "bypass ratio" is. A bypass ratio of 1.0 means an equal amount of air flows through the core as the bypass. Commercial airliners have BPRs of up to around 9.0 (9x as much air flows through the bypass as the core), although some engines can go even higher. Most fighter jet turbofans have BPRs of around 0.2 to 0.6. --- But here's the thing: if you're trying to go faster than the speed of sound (supersonic), trying to propel yourself with subsonic air won't work. If you're ejecting air that's slower than the surrounding air, you're generating negative thrust; you're generating drag. To create supersonic thrust, a convergent-divergent nozzle is typically used; while it doesn't generate thrust by itself, it can convert high-mass (high density), low-velocity flow into low-mass (low density), high-velocity flow, including making exhaust gases go supersonic. As a side note, this process also results in the exhaust gases themselves getting cooler. It is possible to make a supersonic jet that has a high bypass ratio, or even to have a supersonic jet powered by an electric compressor / fan (using a convergent-divergent nozzle), but the issue is that compressing and slowing that incoming supersonic air (into high-pressure subsonic air, which you can then accelerate with a fan) comes at a cost. Compressing air generates heat, which is real, usable energy being lost, and if you try to decelerate / compress too much air, you run into the very real issue of your fan and compressor getting so hot that they violently break. What supersonic aircraft do instead is burn fuel. Burning fuel generates heat, but it generates it downstream of the compressor and fan. The turbine stages (which capture kinetic energy to spin the compressor and fan stages) do need to withstand immense heat, but there are generally fewer turbine blisks (disks with blades on the edges) than compressor blisks, plus they can be made a little less aerodynamic, etc. Burning fuel also causes liquids to turn into gases, which causes a massive increase in pressure. 
By creating all this heat and pressure, you provide that convergent-divergent nozzle with a lot more to work with when it comes to expanding and cooling that gas in exchange for increased flow velocity. You can also inject extra fuel into the exhaust coming from the turbine (in a turbofan or a turbojet) and burn even more of the oxygen (some of which will get through the combustion chamber and turbine without reacting with fuel). This is called using afterburner or 'reheat'. Older engines even used to inject water (plus things like methanol) into their engines, both to cool critical engine components and to generate that liquid-to-gas expansion / boost in pressure. This is where the term "wet thrust" came from (which today refers to the thrust generated with afterburner engaged). --- Lastly: why do supersonic jets use turbofans and not turbojets, then? Because jets spend most of their time at subsonic speeds; flying at supersonic speeds requires much more thrust, usually with afterburner. Afterburners usually consume about 2-3x as much fuel per unit of thrust, but they also generate more thrust (meaning even more fuel consumption), plus most jets can cruise at decent subsonic speeds at much less than their maximum dry (non-afterburning) thrust level. Something like an F-35 might only burn 5,000lb / 2,270kg of fuel per hour when cruising at Mach 0.75, but at max afterburner it'll be burning 86,000lb / 39,000kg of fuel per hour. Most fighter jets only carry enough internal fuel to fly a max of about 10 minutes in afterburner. A handful of jets like the F-22 can generate so much dry thrust (and have so little drag) that they can fly at supersonic speeds without using afterburner, but again they're generally using their max dry thrust (and maybe some afterburning to accelerate to a decent speed first), so they can be burning 2-3x as much fuel as if they just stayed at Mach 0.8, etc. Bypass flow in turbofans is also useful because it provides cooling to the engine core, plus you can fit radiators / heat exchangers in the bypass flow, allowing you to cool things like radars using liquid coolant loops. Turbojet engines are still used in some things like cruise missiles, but even then it's not because they're superior, but because a turbojet has fewer parts. Some small (cruise missile, etc.) turbojet engines don't even use axial flow (where the engine is essentially a tube); instead they work like water pumps, with a compressor disk that uses centrifugal force to push air to the edge of the disk, where it then passes around to the back, gets mixed with fuel, burned, etc.; all just because machining some ridges onto a disk is easier than precision-milling a bunch of blades, etc.
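To make the momentum-versus-energy point above concrete, here is a minimal sketch in Python comparing two ways of producing the same thrust. The numbers are made-up round figures for illustration (static conditions, no losses), not real engine data:

```python
# A toy comparison, assuming the intake air starts at rest and
# ignoring all losses; mass flows and velocities are illustrative.

def thrust_n(mass_flow_kg_s, delta_v_m_s):
    # Thrust = mass flow rate * change in flow velocity.
    return mass_flow_kg_s * delta_v_m_s

def jet_power_w(mass_flow_kg_s, delta_v_m_s):
    # Kinetic energy added to the air per second; note the v**2 term.
    return 0.5 * mass_flow_kg_s * delta_v_m_s ** 2

# High-bypass style: lots of air, accelerated only slightly.
print(thrust_n(500, 100), jet_power_w(500, 100) / 1e6)   # 50000 N, 2.5 MW
# Turbojet style: a tenth of the air, accelerated ten times harder.
print(thrust_n(50, 1000), jet_power_w(50, 1000) / 1e6)   # 50000 N, 25.0 MW
```

Same 50 kN of thrust either way, but the small, fast jet needs ten times the power; that quadratic penalty is exactly why high bypass ratios win at subsonic speeds.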
[ "Section::::Principles.:Thrust.\n\nWhile a turbojet engine uses all of the engine's output to produce thrust in the form of a hot high-velocity exhaust gas jet, a turbofan's cool low-velocity bypass air yields between 30% and 70% of the total thrust produced by a turbofan system.\n\nThe thrust (F) generated by a turbofan depends on the effective exhaust velocity of the total exhaust, as with any jet engine, but because two exhaust jets are present the thrust equation can be expanded as:\n\nwhere:\n\nSection::::Principles.:Nozzles.\n\nThe cold duct and core duct's nozzle systems are relatively complex due to there being two exhaust flows.\n", "To illustrate one aspect of how a turbofan differs from a turbojet, they may be compared, as in a re-engining assessment, at the same airflow (to keep a common intake for example) and the same net thrust (i.e. same specific thrust). A bypass flow can be added only if the turbine inlet temperature is not too high to compensate for the smaller core flow. Future improvements in turbine cooling/material technology can allow higher turbine inlet temperature, which is necessary because of increased cooling air temperature, resulting from an overall pressure ratio increase.\n", "BULLET::::5. The off-design behaviour of turbofans is illustrated under compressor map and turbine map.\n", "Section::::Overall performance.:Thrust growth.\n\nThrust growth is obtained by increasing core power. There are two basic routes available:\n\nBULLET::::1. hot route: increase HP turbine rotor inlet temperature\n\nBULLET::::2. cold route: increase core mass flow\n\nBoth routes require an increase in the combustor fuel flow and, therefore, the heat energy added to the core stream.\n\nThe hot route may require changes in turbine blade/vane materials and/or better blade/vane cooling. The cold route can be obtained by one of the following:\n\nBULLET::::1. adding T-stages to the LP/IP compression\n\nBULLET::::2. adding a zero-stage to the HP compression\n", "Owing to the nature of the constraints involved, the fan working lines of a mixed turbofan are somewhat steeper than those of the equivalent unmixed engine.\n\nThe fan map shown is for the bypass (i.e. outer) section of the unit. The corresponding inner section map typically has longer, flatter, speed lines.\n", "Turboprops and most propfans are rated by the amount of shaft horsepower (shp) that they produce, as opposed to turbofans and the UDF propfan type, which are rated by the amount of thrust they put out. This difference can be somewhat confusing when comparing different types of engines. The rule of thumb is that at sea level with a static engine, is roughly equivalent of thrust, but at cruise altitude, that changes to about thrust. That means a narrowbody aircraft with two engines can theoretically be replaced with a pair of propfans or with two UDF propfans.\n\nSection::::Aircraft with propfans.\n", "To boost fuel economy and reduce noise, almost all of today's jet airliners and most military transport aircraft (e.g., the C-17) are powered by low-specific-thrust/high-bypass-ratio turbofans. These engines evolved from the high-specific-thrust/low-bypass-ratio turbofans used in such aircraft in the 1960s. (Modern combat aircraft tend to use low-bypass ratio turbofans, and some military transport aircraft use turboprops.)\n\nLow specific thrust is achieved by replacing the multi-stage fan with a single-stage unit. 
Unlike some military engines, modern civil turbofans lack stationary inlet guide vanes in front of the fan rotor. The fan is scaled to achieve the desired net thrust.\n", "Section::::Turbofan configurations.:Single-shaft turbofan.\n\nAlthough far from common, the single-shaft turbofan is probably the simplest configuration, comprising a fan and high-pressure compressor driven by a single turbine unit, all on the same shaft. The Snecma M53, which powers Dassault Mirage 2000 fighter aircraft, is an example of a single-shaft turbofan. Despite the simplicity of the turbomachinery configuration, the M53 requires a variable area mixer to facilitate part-throttle operation.\n\nSection::::Turbofan configurations.:Aft-fan turbofan.\n", "El-Sayed differentiates between turboprops and propfans according to 11 different criteria, including number of blades, blade shape, tip speed, bypass ratio, Mach number, and cruise altitude.\n\nSection::::Development.\n", "Section::::Turbofan configurations.:Military turbofans.\n\nMost of the configurations discussed above are used in civilian turbofans, while modern military turbofans (e.g., Snecma M88) are usually basic two-spool.\n\nSection::::Turbofan configurations.:High-pressure turbine.\n\nMost civil turbofans use a high-efficiency, 2-stage HP turbine to drive the HP compressor. The CFM International CFM56 uses an alternative approach: a single-stage, high-work unit. While this approach is probably less efficient, there are savings on cooling air, weight and cost.\n", "Section::::Turbofan configurations.:Boosted two-spool.\n", "BULLET::::- Dry: afterburner (if fitted) not lit\n\nBULLET::::- EGT: exhaust gas temperature\n\nBULLET::::- EPR: engine pressure ratio\n\nBULLET::::- Fan: turbofan LP compressor\n\nBULLET::::- Fan pressure ratio: fan outlet total pressure/intake delivery total pressure\n\nBULLET::::- Flex temp: use of artificially high apparent air temperature to reduce engine wear\n\nBULLET::::- Gas generator: engine core\n\nBULLET::::- HP compressor: high-pressure compressor (also HPC)\n\nBULLET::::- HP turbine: high-pressure turbine\n\nBULLET::::- Intake ram drag: penalty associated with jet engines picking up air from the atmosphere (conventional rocket motors do not have this drag term, because the oxidiser travels with the vehicle)\n", "Section::::Reaction engines.:Jets.:Turbofan.\n", "The design point calculation for a two spool turbojet, has two compression calculations; one for the Low Pressure (LP) Compressor, the other for the High Pressure (HP) Compressor. There is also two turbine calculations; one for the HP Turbine, the other for the LP Turbine.\n", "A high-specific-thrust/low-bypass-ratio turbofan normally has a multi-stage fan, developing a relatively high pressure ratio and, thus, yielding a high (mixed or cold) exhaust velocity. The core airflow needs to be large enough to give sufficient core power to drive the fan. A smaller core flow/higher bypass ratio cycle can be achieved by raising the high-pressure (HP) turbine rotor inlet temperature.\n", "BULLET::::3. Thrust growth on civil turbofans is usually obtained by increasing fan airflow, thus preventing the jet noise becoming too high. However, the larger fan airflow requires more power from the core. This can be achieved by raising the overall pressure ratio (combustor inlet pressure/intake delivery pressure) to induce more airflow into the core and by increasing turbine inlet temperature. 
Together, these parameters tend to increase core thermal efficiency and improve fuel efficiency.\n", "SFC is dependent on engine design, but differences in the SFC between different engines using the same underlying technology tend to be quite small. Increasing overall pressure ratio on jet engines tends to decrease SFC.\n", "BULLET::::- IEPR: integrated engine pressure ratio\n\nBULLET::::- IP compressor: intermediate pressure compressor (also IPC)\n\nBULLET::::- IP turbine: intermediate pressure turbine (also IPT)\n\nBULLET::::- LP compressor: low-pressure compressor (also LPC)\n\nBULLET::::- LP turbine: low-pressure turbine (also LPT)\n\nBULLET::::- Net thrust: nozzle total gross thrust – intake ram drag (excluding nacelle drag, etc., this is the basic thrust acting on the airframe)\n\nBULLET::::- Overall pressure ratio: combustor inlet total pressure/intake delivery total pressure\n\nBULLET::::- Overall efficiency: thermal efficiency * propulsive efficiency\n", "Turbofan engines come in a variety of engine configurations. For a given engine cycle (i.e., same airflow, bypass ratio, fan pressure ratio, overall pressure ratio and HP turbine rotor inlet temperature), the choice of turbofan configuration has little impact upon the design point performance (e.g., net thrust, SFC), as long as overall component performance is maintained. Off-design performance and stability is, however, affected by engine configuration.\n", "The following discussion relates to the expansion system of a 2 spool, high bypass ratio, unmixed, turbofan.\n", "The net thrust (F) generated by a turbofan can also be expanded as:\n\nwhere:\n", "Section::::Development.:1970s–1980s.\n\nThe propfan concept was first revealed by Carl Rohrbach and Bruce Metzger of the Hamilton Standard division of United Technologies in 1975 and was patented by Rohrbach and Robert Cornell of Hamilton Standard in 1979. Later work by General Electric on similar propulsors was done under the name unducted fan, which was a modified turbofan engine, with the fan placed outside the engine nacelle on the same axis as the compressor blades.\n\nSection::::Development.:1970s–1980s.:Flight test programs.\n", "In a two spool unmixed turbofan, the LP Compressor calculation is usually replaced by Fan Inner (i.e. hub) and Fan Outer (i.e. tip) compression calculations. The power absorbed by these two \"components\" is taken as the load on the LP turbine. After the Fan Outer compression calculation, there is a Bypass Duct pressure loss/Bypass Nozzle expansion calculation. Net thrust is obtained by deducting the intake ram drag from the sum of the Core Nozzle and Bypass Nozzle gross thrusts.\n", "Turbofan engines are usually described in terms of BPR, which together with overall pressure ratio, turbine inlet temperature and fan pressure ratio are important design parameters. In addition BPR is quoted for turboprop and unducted fan installations because their high propulsive efficiency gives them the overall efficiency characteristics of very high bypass turbofans. This allows them to be shown together with turbofans on plots which show trends of reducing specific fuel consumption (SFC) with increasing BPR. BPR can also be quoted for lift fan installations where the fan airflow is remote from the engine and doesn't flow past the engine core.\n", "\"Low bypass\" engines were the first turbofan engines produced, and provide the majority of their thrust from the hot core exhaust gases, while the fan stage only supplements this. 
These engines are still commonly seen on military fighter aircraft, since they provide more efficient thrust at supersonic speeds and have a narrower frontal area, minimizing aerodynamic drag. Their comparatively high noise levels and subsonic fuel consumption are deemed acceptable in such an application. Although the first generation of turbofan airliners used low-bypass engines, their high noise levels and fuel consumption mean they have since fallen out of favor for large aircraft. \"High bypass\" engines have a much larger fan stage, and provide most of their thrust from the ducted air of the fan; the engine core provides power to the fan stage, and only a proportion of the overall thrust comes from the engine core exhaust stream. A high-bypass turbofan functions very similarly to a turboprop engine, except it uses a many-bladed \"fan\" rather than a multi-blade propeller, and relies on a duct to properly vector the airflow to create thrust. \n" ]
[ "Turbofan and turbojets are different. ", "Turbofans and Turbojets are different entities." ]
[ "Turbofans are essentially turbojets that have been placed into another tube. ", "Turbofans are essentially turbojets that are placed into yet another tube." ]
[ "false presupposition" ]
[ "Turbofan and turbojets are different. ", "Turbofans and Turbojets are different entities." ]
[ "false presupposition", "false presupposition" ]
[ "Turbofans are essentially turbojets that have been placed into another tube. ", "Turbofans are essentially turbojets that are placed into yet another tube." ]
2018-00629
How does one prove that data (such as text Messages) recovered forensically is actually the data it is purported to be?
Like all evidence, any recovered electronics would follow a chain of custody: a legal document which records who has had possession of the piece of evidence, when, and for what purpose, from seizure onward. So long as the chain of custody is intact and the people on the chain are trustworthy, the evidence should be considered secure.
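As a rough illustration of what "intact" means here, the following is a minimal sketch of a custody log in Python. The names and fields are hypothetical, not from any real evidence-management system; real custody records are formal legal documents:

```python
# A toy chain-of-custody log: every transfer's receiver must be the
# next transfer's releaser, so possession is accounted for with no gaps.
from dataclasses import dataclass

@dataclass
class Transfer:
    released_by: str  # who handed the evidence over
    received_by: str  # who took possession
    timestamp: str    # when the hand-off happened
    purpose: str      # e.g. "transport to lab", "forensic examination"

def chain_is_intact(log: list[Transfer]) -> bool:
    # Check that each hand-off connects directly to the next one.
    return all(a.received_by == b.released_by for a, b in zip(log, log[1:]))

log = [
    Transfer("Officer Diaz", "Evidence Clerk Lee", "2018-03-01 14:02", "booking"),
    Transfer("Evidence Clerk Lee", "Analyst Park", "2018-03-02 09:15", "forensic examination"),
]
print(chain_is_intact(log))  # True: no unexplained gap in possession
```

If any hand-off is missing or the names don't line up, the chain is broken, and the integrity of the evidence can then be challenged in court.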
[ "In 2015, there was a horrible house fire where a father was able to save his children, but his wife died in the house. The police thought that the fire was actually not an accident, but instead was a cover-up of the father murdering the mother. The forensic linguists were able to obtain the phone of both the father and mother, and realized that there were texts still being sent from the mother's phone the whole day – long after the police thought she had died. Using information from the two phones, the linguists were able to study the texting styles of both parents to see if they could obtain any more information about what happened that day. It turned out that the texts sent from the wife's phone were actually the husband pretending to be the wife so that no one would know she was murdered, and everyone would believe that she perished in the house fire. The forensic linguists were able to figure this out by studying the husband's texting style, spelling errors, and more, and were able to come to the conclusion that the texts sent after the wife was thought to be deceased, was actually the husband texting off her phone pretending to be her. Without this knowledge, it would have been much more difficult to convict the husband of murder and get justice for the family.\n", "Mobile device forensics is a sub-branch of digital forensics relating to recovery of digital evidence or data from a mobile device. It differs from Computer forensics in that a mobile device will have an inbuilt communication system (e.g. GSM) and, usually, proprietary storage mechanisms. Investigations usually focus on simple data such as call data and communications (SMS/Email) rather than in-depth recovery of deleted data. SMS data from a mobile device investigation helped to exonerate Patrick Lumumba in the murder of Meredith Kercher.\n", "Early efforts to examine mobile devices used similar techniques to the first computer forensics investigations: analysing phone contents directly via the screen and photographing important content. However, this proved to be a time-consuming process, and as the number of mobile devices began to increase, investigators called for more efficient means of extracting data. Enterprising mobile forensic examiners sometimes used cell phone or PDA synchronization software to \"back up\" device data to a forensic computer for imaging, or sometimes, simply performed computer forensics on the hard drive of a suspect computer where data had been synchronized. However, this type of software could write to the phone as well as reading it, and could not retrieve deleted data.\n", "To meet these demands, commercial tools appeared which allowed examiners to recover phone memory with minimal disruption and analyse it separately. Over time these commercial techniques have developed further and the recovery of deleted data from proprietary mobile devices has become possible with some specialist tools. Moreover, commercial tools have even automated much of the extraction process, rendering it possible even for minimally trained first responders—who currently are much more likely to encounter suspects with mobile devices in their possession, compared to computers—to perform basic extractions for triage and data preview purposes.\n\nSection::::Professional applications.\n", "Traditionally mobile phone forensics has been associated with recovering SMS and MMS messaging, as well as call logs, contact lists and phone IMEI/ESN information. 
However, newer generations of smartphones also include wider varieties of information: from web browsing, wireless network settings, geolocation information (including geotags contained within image metadata), e-mail and other forms of rich internet media, including important data—such as social networking service posts and contacts—now retained on smartphone 'apps'.\n\nSection::::Types of evidence.:Internal memory.\n\nNowadays mostly flash memory, consisting of NAND or NOR types, is used for mobile devices.\n\nSection::::Types of evidence.:External memory.\n", "In recent years a number of hardware/software tools have emerged to recover logical and physical evidence from mobile devices. Most tools consist of both hardware and software portions. The hardware includes a number of cables to connect the mobile device to the acquisition machine; the software exists to extract the evidence and, occasionally, even to analyse it.\n", "Section::::Forensic process.:Acquisition.\n\nThe second step in the forensic process is acquisition, in this case usually referring to retrieval of material from a device (as compared to the bit-copy imaging used in computer forensics).\n", "Some tools have additionally been developed to address increasing criminal usage of phones manufactured with Chinese chipsets, which include MediaTek (MTK), Spreadtrum and MStar. Such tools include Cellebrite's CHINEX and XRY PinPoint.\n\nSection::::Tools.:Open source.\n\nMost open source mobile forensics tools are platform-specific and geared toward smartphone analysis. Though not originally designed to be a forensics tool, BitPim has been widely used on CDMA phones as well as LG VX4400/VX6000 and many Sanyo Sprint cell phones.\n\nSection::::Tools.:Physical tools.\n\nSection::::Tools.:Physical tools.:Forensic desoldering.\n", "The European Union requires its member countries to retain certain telecommunications data for use in investigations. This includes data on calls made and retrieved. The location of a mobile phone can be determined and this geographical data must also be retained. In the United States, however, no such requirement exists, and no standards govern how long carriers should retain data or even what they must retain. For example, text messages may be retained only for a week or two, while call logs may be retained anywhere from a few weeks to several months. To reduce the risk of evidence being lost, law enforcement agents must submit a preservation letter to the carrier, which they then must back up with a search warrant.\n", "Although not technically part of mobile device forensics, the call detail records (and occasionally, text messages) from wireless carriers often serve as \"back up\" evidence obtained after the mobile phone has been seized. These are useful when the call history and/or text messages have been deleted from the phone, or when location-based services are not turned on. Call detail records and cell site (tower) dumps can show the phone owner's location, and whether they were stationary or moving (i.e., whether the phone's signal bounced off the same side of a single tower, or different sides of multiple towers along a particular path of travel). 
Carrier data and device data together can be used to corroborate information from other sources, for instance, video surveillance footage or eyewitness accounts; or to determine the general location where a non-geotagged image or video was taken.\n", "Mobile devices are also useful for providing location information; either from inbuilt gps/location tracking or via cell site logs, which track the devices within their range. Such information was used to track down the kidnappers of Thomas Onofri in 2006.\n\nSection::::Branches.:Network forensics.\n", "The first phase analyzes the video recorded by the camera using computer vision techniques. For each frame of the video, the computer vision analysis computes the set of keys that were likely pressed, the set of keys that were certainly not pressed, and the position of space characters. Because the results of this phase of the analysis are noisy, a second phase, called the text analysis, is required. The goal of this phase is to remove errors using both language and context-sensitive techniques. The result of this phase is the reconstructed text, where each word is represented by a list of possible candidates, ranked by likelihood.\n", "During the analysis an investigator usually recovers evidence material using a number of different methodologies (and tools), often beginning with recovery of deleted material. Examiners use specialist tools (EnCase, ILOOKIX, FTK, etc.) to aid with viewing and recovering data. The type of data recovered varies depending on the investigation, but examples include email, chat logs, images, internet history or documents. The data can be recovered from accessible disk space, deleted (unallocated) space or from within operating system cache files.\n", "Mobile device forensics is best known for its application to law enforcement investigations, but it is also useful for military intelligence, corporate investigations, private investigations, criminal and civil defense, and electronic discovery.\n\nSection::::Types of evidence.\n\nAs mobile device technology advances, the amount and types of data that can be found on a mobile device is constantly increasing. Evidence that can be potentially recovered from a mobile phone may come from several different sources, including handset memory, SIM card, and attached memory cards such as SD cards.\n", "On most media types, including standard magnetic hard disks, once data has been securely deleted it can never be recovered.\n\nOnce evidence is recovered the information is analysed to reconstruct events or actions and to reach conclusions, work that can often be performed by less specialized staff. Digital investigators, particularly in criminal investigations, have to ensure that conclusions are based upon data and their own expert knowledge. In the US, for example, Federal Rules of Evidence state that a qualified expert may testify “in the form of an opinion or otherwise” so long as:\n\nSection::::Reporting.\n", "Law enforcement have used mobile phone evidence in a number of different ways. Evidence about the physical location of an individual at a given time can be obtained by triangulating the individual's cellphone between several cellphone towers. This triangulation technique can be used to show that an individual's cellphone was at a certain location at a certain time. 
The concerns over terrorism and terrorist use of technology prompted an inquiry by the British House of Commons Home Affairs Select Committee into the use of evidence from mobile phone devices, prompting leading mobile telephone forensic specialists to identify forensic techniques available in this area. NIST has published guidelines and procedures for the preservation, acquisition, examination, analysis, and reporting of digital information present on mobile phones in NIST Publication SP800-101.\n", "Section::::Forensic process.\n\nThe forensics process for mobile devices broadly matches other branches of digital forensics; however, some particular concerns apply. Generally, the process can be broken down into three main categories: seizure, acquisition, and examination/analysis. Other aspects of the computer forensic process, such as intake, validation, documentation/reporting, and archiving still apply.\n\nSection::::Forensic process.:Seizure.\n", "Investigators initially did not attempt to power up or operate the cell phone, fearing that they might overwrite evidence contained in its memory. Instead, they tried to determine whether forensic software was available which would allow them to examine that model of phone. On August 15, unable to identify any such technology, a detective turned on the phone and conducted a manual search of it, finding that the voicemail message was not stored on the phone. They did not request that AT&T try to retrieve the deleted message from its servers. Later, on September 21, an investigator announced that they would be using \"new technology\" to copy the phone's data for further investigation. In early October, investigators completed their second examination of the phone, stating that they did not uncover any additional information and would soon return it to Zahau's family.\n", "Brunty is the author of books, book chapters, and journal publications in the field of digital forensics, mobile device forensics, and social media investigation. His research interests include social media forensics, mobile device exploitation and forensics, and image and video forensics. He is a frequent speaker at international and national digital forensic and security conferences, and guest lectures at various universities throughout the world.\n", "Section::::Tools.\n\nEarly investigations consisted of live manual analysis of mobile devices, with examiners photographing or writing down useful material for use as evidence. Without forensic photography equipment such as Fernico ZRT, EDEC Eclipse, or Project-a-Phone, this had the disadvantage of risking the modification of the device content, as well as leaving many parts of the proprietary operating system inaccessible.\n", "Logical extraction usually does not produce any deleted information, due to it normally being removed from the phone's file system. However, in some cases—particularly with platforms built on SQLite, such as iOS and Android—the phone may keep a database file of information which does not overwrite the information but simply marks it as deleted and available for later overwriting. In such cases, if the device allows file system access through its synchronization interface, it is possible to recover deleted information. 
File system extraction is useful for understanding the file structure, web browsing history, or app usage, as well as providing the examiner with the ability to perform an analysis with traditional computer forensic tools.\n", "Section::::Tools.:Command line tools.:AT commands.\n\nAT commands are old modem commands, e.g., Hayes command set and Motorola phone AT commands, and can therefore only be used on a device that has modem support. Using these commands one can only obtain information through the operating system, such that no deleted data can be extracted.\n\nSection::::Tools.:Command line tools.:dd.\n", "19-year old Jenny Nicholl disappeared on 30 June 2005. Her body was never found, giving police and forensic scientists little information to go on about what might have happened to Jenny. After looking through her phone for clues, forensic linguists came to the conclusion that the texts sent from her phone around the time that she disappeared seemed very different than her usual texting style, and soon started looking to her ex-boyfriend, David Hodgson, for clues of what happened to her, including looking through his phone and studying his texting style. The forensic linguists found a number of stylistic similarities between David's texting style and the messages sent from Jenny's phone around the time she went missing. Using the timeframe of when she went missing, combined with the differences in texting styles and other forensic details, Jenny's murderer, David Hodgson, was convicted. The analysis of the text messages and their submission in court helped to pave the way for forensic linguistics to be acknowledged as a science in UK law, rather than opinion. To this day, her body has not been found, but justice was still served for her and her family because of forensic linguistics.\n", "The actual process of analysis can vary between investigations, but common methodologies include conducting keyword searches across the digital media (within files as well as unallocated and slack space), recovering deleted files and extraction of registry information (for example to list user accounts, or attached USB devices).\n\nThe evidence recovered is analysed to reconstruct events or actions and to reach conclusions, work that can often be performed by less specialised staff. When an investigation is complete the data is presented, usually in the form of a written report, in lay persons' terms.\n\nSection::::Application.\n", "Section::::Data acquisition types.:Logical acquisition.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-03577
How can glass get fogged up from steam and frost but our eyes can't?
Fogged-up glass happens when humid air comes into contact with a cold surface and condenses there. Our eyes (a) have a layer of water in front of them already, because you keep blinking, and (b) are not cold enough for the humid air to condense on them.
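A minimal sketch of the "cold enough" condition, using the Magnus approximation for dew point. The constants are standard textbook values, and the roughly 34 °C eye-surface temperature is a typical figure assumed here for illustration, not something stated in the answer above:

```python
import math

def dew_point_c(air_temp_c, rel_humidity_pct):
    # Magnus approximation: the temperature a surface must be at (or
    # below) for water vapour in the surrounding air to condense on it.
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100) + a * air_temp_c / (b + air_temp_c)
    return b * gamma / (a - gamma)

td = dew_point_c(22, 60)  # warm indoor air at 60% relative humidity
print(round(td, 1))       # ~13.9 C
print(5 <= td)            # True: a 5 C windowpane is below the dew point, so it fogs
print(34 <= td)           # False: a ~34 C eye surface stays above it, so it stays clear
```

The same comparison explains frost: when the surface is also below freezing, the condensate freezes onto it instead of just fogging it.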
[ "The processes of dew formation do not restrict its occurrence to the night and the outdoors. They are also working when eyeglasses get steamy in a warm, wet room or in industrial processes. However, the term condensation is preferred in these cases.\n\nSection::::Measurement.\n", "BULLET::::- Cold weather: Most modern cold-weather goggles have two layers of lens to prevent the interior from becoming \"foggy\". With only a single lens, the interior water vapor condenses onto the lens because the lens is colder than the vapor, although anti-fog agents can be used. The reasoning behind the dual layer lens is that the inner lens will be warm while the outer lens will be cold. As long as the temperature of the inner lens is close to that of the interior water vapor, the vapor should not condense. However, if water vapor gets between the layers of the lens, condensation can occur between the lenses and is almost impossible to get rid of; thus, properly constructed and maintained dual layer lenses should be airtight to prevent water vapor from entering between the lenses.\n", "Light strays or scatters in lenses due to many potential factors in design and operation. These factors include dirt, film, or scratches on lens surfaces; reflections from lens surfaces or their mounts; and the slightly imperfect transparency (or reflection) of real glass (or mirrors).\n\nTypical optical engineering design techniques to minimize stray light include: black coatings on internal surfaces, knife edges on mounts, antireflection lens coatings, internal baffles and stops, and tube extensions which block sources outside the field of view.\n", "In order to reduce the amount of damaging light radiation transmitted through glazing, some glass coatings are designed to either \"reflect\" or \"absorb\" the ultraviolet (UV) spectrum. The following technologies are used to reduce the amount of UV from reaching the artwork:\n", "Masks tend to fog when warm humid exhaled air condenses on the cold inside of the faceplate. To prevent fogging many divers spit into the dry mask before use, spread the saliva around the inside of the glass and rinse it out with a little water. The saliva residue allows condensation to wet the glass and form a continuous film, rather than tiny droplets. There are several commercial products that can be used as an alternative to saliva, some of which are more effective and last longer, but there is a risk of getting the anti-fog agent in the eyes.\n", "To create the effect of a thin layer of condensation forming on the outside of glasses containing cold liquid, dulling spray may be applied, with paper or masking tape protecting the non-\"frosted\" areas. 
More pronounced condensation and dew drops are imitated by spraying the glass with corn syrup or glycerin.\n", "Defined as the gloss at grazing angles of incidence and viewing\n\nBULLET::::- Contrast gloss – the perceived brightness of specularly and diffusely reflecting areas\n\nDefined as the ratio of the specularly reflected light to that diffusely reflected normal to the surface;\n\nBULLET::::- Absence of bloom – the perceived cloudiness in reflections near the specular direction\n\nDefined as a measure of the absence of haze or a milky appearance adjacent to the specularly reflected light: haze is the inverse of absence-of-bloom\n\nBULLET::::- Distinctness of image gloss – identified by the distinctness of images reflected in surfaces\n", "The oldest certain reference to the use of lenses is from Aristophanes' play \"The Clouds\" (424 BC) mentioning a burning-glass.\n\nPliny the Elder (1st century) confirms that burning-glasses were known in the Roman period.\n\nPliny also has the earliest known reference to the use of a corrective lens when he mentions that Nero was said to watch the gladiatorial games using an emerald (presumably concave to correct for nearsightedness, though the reference is vague). Both Pliny and Seneca the Younger (3 BC–65 AD) described the magnifying effect of a glass globe filled with water.\n", "ASTM has a number of other gloss-related standards designed for application in specific industries including the old 45° method which is used primarily now used for glazed ceramics, polyethylene and other plastic films. \n\nIn 1937, the paper industry adopted a 75° specular-gloss method because the angle gave the best separation of coated book papers. This method was adopted in 1951 by the Technical Association of Pulp and Paper Industries as TAPPI Method T480.\n", "Section::::Six Books Of Optics.:Accompanying art.\n", "The formation of mist, as of other suspensions, is greatly aided by the presence of nucleation sites on which the suspended water phase can congeal. Thus even such unusual sources as small particulates from volcanic eruptions, releases of strongly polar gases, and even the magnetospheric ions associated with polar lights can in right conditions trigger the formation of mist and can make mirrors appear foggy. Mist on mirrors should not be mistaken for condensation as they are very different. Mist is a collection of water droplets but condensation is the water droplets in a different form. \n", "Although protection is a primary purpose of glazing, \"displaying\" an artwork is the primary purpose of framing it. Therefore, the least visible glazing best displays the artwork behind it. Visible light transmission is the primary measure of glass' \"invisibility\", since the viewer actually sees the light, reflected from the artwork. 
Light transmission of glass is especially important in art framing, since light passes through the glass twice – once to illuminate the artwork, and then again, reflected from the artwork, as colors - before reaching the viewer.\n", "Imaging optics can concentrate sunlight to, at most, the same flux found at the surface of the sun.\n\nNonimaging optics have been demonstrated to concentrate sunlight to 84,000 times the ambient intensity of sunlight, exceeding the flux found at the surface of the sun, and approaching the theoretical (2nd law of thermodynamics) limit of heating objects up to the temperature of the sun's surface.\n", "The legend of Archimedes gave rise to a considerable amount of research on burning glasses and lenses until the late 17th century. Various researchers worked with burning glasses, including Anthemius of Tralles (6th century AD), Proclus (6th century) (who by this means purportedly destroyed the fleet of Vitalian besieging Constantinople), Ibn Sahl in his \"On Burning Mirrors and Lenses\" (10th century), Alhazen in his \"Book of Optics\" (1021), Roger Bacon (13th century), Giambattista della Porta and his friends (16th century), Athanasius Kircher and Gaspar Schott (17th century), and the Comte de Buffon in 1740 in Paris.\n", "Burning lenses were used both by Joseph Priestley and Antoine Lavoisier in their experiments to obtain oxides contained in closed vessels under high temperatures. These included carbon dioxide by burning diamond, and mercuric oxide by heating mercury. This type of experiment contributed to the discovery of \"dephlogisticated air\" by Priestley, which became better known as oxygen, following Lavoisier's investigations.\n", "Fog can form in a number of ways, depending on how the cooling that caused the condensation occurred.\n", "Cold Fogging, in contrast, is heavy enough to penetrate these \"air-curtains\" as well as light enough to be evenly distributed within a room.\n\nSection::::Adverse health effects.\n", "For much of the history of observational astronomy, almost all observation was performed in the visual spectrum with optical telescopes. While the Earth's atmosphere is relatively transparent in this portion of the electromagnetic spectrum, most telescope work is still dependent on seeing conditions and air transparency, and is generally restricted to the night time. The seeing conditions depend on the turbulence and thermal variations in the air. Locations that are frequently cloudy or suffer from atmospheric turbulence limit the resolution of observations. Likewise the presence of the full Moon can brighten up the sky with scattered light, hindering observation of faint objects.\n", "Atmospheric optics\n\nAtmospheric optics is \"the study of the optical characteristics of the atmosphere or products of atmospheric processes ... [including] temporal and spatial resolutions beyond those discernible with the naked eye\". Meteorological optics is \"that part of atmospheric optics concerned with the study of patterns observable with the naked eye\". 
Nevertheless, the two terms are sometimes used interchangeably.\n\nMeteorological optical phenomena, as described in this article, are concerned with how the optical properties of Earth's atmosphere cause a wide range of optical phenomena and visual perception phenomena.\n\nExamples of meteorological phenomena include:\n", "Glass cloth\n\nGlass cloth is a textile material, originally developed to be used in greenhouse paneling, allowing sunlight's ultraviolet rays to be filtered out, while still allowing visible light through to plants. The cloth is usually woven with the plain weave, and may be patterned in various ways, though checked cloths are the most common. The original cloth was made from linen, but a large quantity is made with cotton warp and tow weft, and in some cases they are composed entirely of cotton. Short fibres of the cheaper kind are easily detached from the cloth.\n", "The sclera is rarely damaged by brief exposure to heat: the eyelids provide exceptional protection, and the fact that the sclera is covered in layers of moist tissue means that these tissues are able to cause much of the offending heat to become dissipated as steam before the sclera itself is damaged. Even relatively low-temperature molten metals when splashed against an open eye have been shown to cause very little damage to the sclera, even while creating detailed casts of the surrounding eyelashes. Prolonged exposure, however— on the order of 30 seconds— at temperatures above will begin to cause scarring, and above will cause extreme changes in the sclera and surrounding tissue. Such long exposures even in industrial settings are virtually nonexistent.\n", "In 1811, he constructed a new kind of furnace, and during his second melting session when he melted a large quantity of glass, he found that he could produce flint glass, which, when taken from the bottom of a vessel containing roughly 224 pounds of glass, had the same refractive power as glass taken from the surface. He found that English crown glass and German table glass both contained defects which tended to cause irregular refraction. In the thicker and larger glasses, there would be even more of such defects, so that in larger telescopes this kind of glass would not be fit for objective lenses. Fraunhofer accordingly made his own crown glass.\n", "Apparent gloss depends on the amount of \"specular\" reflection – light reflected from the surface in an equal amount and the symmetrical angle to the one of incoming light – in comparison with \"diffuse\" reflection – the amount of light scattered into other directions.\n\nSection::::Theory.\n\nWhen light illuminates an object, it interacts with it in a number of ways:\n\nBULLET::::- Absorbed within it (largely responsible for colour)\n\nBULLET::::- Transmitted through it (dependent on the surface transparency and opacity)\n\nBULLET::::- Scattered from or within it (diffuse reflection, haze and transmission)\n\nBULLET::::- Specularly reflected from it (gloss)\n", "Burning glasses (often called fire lenses) are still used to light fires in outdoor and primitive settings. Large burning lenses sometimes take the form of Fresnel lenses, similar to lighthouse lenses, including those for use in solar furnaces. Solar furnaces are used in industry to produce extremely high temperatures without the need for fuel or large supplies of electricity. 
They sometimes employ a large parabolic array of mirrors (some facilities are several stories high) to focus light to a high intensity.\n", "Many glasses include a stem, which allows the drinker to hold the glass without affecting the temperature of the drink. In champagne glasses, the bowl is designed to retain champagne's signature carbonation, by reducing the surface area at the opening of the bowl. Historically, champagne has been served in a champagne coupe, the shape of which allowed carbonation to dissipate even more rapidly than from a standard wine glass.\n\nSection::::Commercial trade.\n\nSection::::Commercial trade.:International exports and imports.\n\nAn important export commodity, coffee was the top agricultural export for twelve countries in 2004,\n" ]
[ "Because glass can be fogged up with steam, the human eye should be able to as well. " ]
[ "The human eye has a layer of water in front of it, blocking the steam from fogging the eyes." ]
[ "false presupposition" ]
[ "Because glass can be fogged up with steam, the human eye should be able to as well. ", "Because glass can be fogged up with steam, the human eye should be able to as well. " ]
[ "normal", "false presupposition" ]
[ "The human eye has a layer of water in front of it, blocking the steam from fogging the eyes.", "The human eye has a layer of water in front of it, blocking the steam from fogging the eyes." ]
2018-04037
Where does the SA node get (or create?) its electricity from?
> does the electrical conduction system of the heart generates electricity before transferring it? It seems there are some misconceptions behind this question. First, the heart does not run off of electricity. It is commonly taught that the nerves in the body operate via electricity like the wires in our electrical grid, but that is a simplification to the point of being misleading. Nerves actually work on the electrical charge of tiny chemical reactions at synapses. While the exchange of ions in these reactions is ultimately based on electromagnetism, the process is much more chemical than electrical. Nerves therefore do not "conduct electricity"; rather, they relay nerve impulses, which are passed between them by tiny changes in chemistry in the gaps between cells. The rhythm of the heart's signals is generated within the heart cells themselves, a cyclical reaction that is self-coordinating between them.
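A toy numerical sketch of that self-sustaining rhythm may help. Everything here is an illustrative assumption (the voltages, the drift rate, the 1 ms time step), written in Python; it is not a physiological model, just the "drift up to a threshold, fire, reset" cycle described above:

# Illustrative pacemaker-cell loop: the membrane potential drifts upward on
# its own (the pacemaker potential); crossing the threshold fires an action
# potential and resets the cell. No outside trigger or power source is needed.
resting_mv = -60.0       # assumed starting membrane potential, in millivolts
threshold_mv = -40.0     # assumed firing threshold
drift_mv_per_ms = 0.025  # assumed spontaneous depolarization rate

v = resting_mv
for t_ms in range(3000):          # simulate three seconds in 1 ms steps
    v += drift_mv_per_ms          # slow ion leak depolarizes the cell
    if v >= threshold_mv:
        print(f"action potential at t = {t_ms} ms")
        v = resting_mv            # reset; the cycle repeats automatically

With these made-up numbers the cell fires about every 800 ms, roughly 75 beats per minute; a real SA node cell adjusts that rate chemically, not through any external electricity supply.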
[ "Section::::Formation regulation.:Nodal regulation.\n\nSection::::Formation regulation.:Nodal regulation.:Via αII-Spectrin.\n\nSaltatory conduction in myelinated axons requires organization of the nodes of Ranvier, whereas voltage-gated sodium channels are highly populated. Studies show that αII-Spectrin, a component of the cytoskeleton is enriched at the nodes and paranodes at early stages and as the nodes mature, the expression of this molecule disappears. It is also proven that αII-Spectrin in the axonal cytoskeleton is absolutely vital for stabilizing sodium channel clusters and organizing the mature node of Ranvier.\n\nSection::::Formation regulation.:Nodal regulation.:Possible regulation via the recognition molecule OMgp.\n", "The sinoatrial node is found in the upper part of the right atrium near to the junction with the superior vena cava. The electrical signal generated by the sinoatrial node travels through the right atrium in a radial way that is not completely understood. It travels to the left atrium via Bachmann's bundle, such that the muscles of the left and right atria contract together. The signal then travels to the atrioventricular node. This is found at the bottom of the right atrium in the atrioventricular septum—the boundary between the right atrium and the left ventricle. The septum is part of the cardiac skeleton, tissue within the heart that the electrical signal cannot pass through, which forces the signal to pass through the atrioventricular node only. The signal then travels along the bundle of His to left and right bundle branches through to the ventricles of the heart. In the ventricles the signal is carried by specialized tissue called the Purkinje fibers which then transmit the electric charge to the heart muscle.\n", "Conductance is then related to blood volume though Baan's equation. When used in cardiology, the electric field generated is not limited to the blood (the fluid of interest) but also penetrates the heart wall, giving rise to additional conductance often called \"parallel conductance\" or \"muscle conductance\", G which must be removed.\n", "The main role of a sinoatrial node cell is to initiate action potentials of the heart, so that it can pass throughout the heart and cause contraction. An action potential is a change in voltage (membrane potential) across the membrane of the cell, produced by the movement of charged atoms (ions). Non-pacemaker cells (including the ventricular and atrial cells) have a period, immediately after an action potential, where the membrane potential remains relatively constant; this is known as a resting membrane potential. This resting phase (see cardiac action potential, phase 4) ends when another action potential reaches the cell. This produces a positive change in membrane potential (known as depolarisation), which initiates the start of the next action potential. Pacemaker cells, however, don’t have this resting phase. Instead, immediately after one action potential, the membrane potential of these cells begins to depolarise again automatically, this is known as the pacemaker potential. Once the pacemaker potential reaches a set value, known as the threshold value, it then produces an action potential. 
Other cells within the heart (including the purkinje fibers and atrioventricular node; AVN) can also initiate action potentials; however, they do so at a slower rate and therefore, if the SA node is working, it usually beats the AVN to it.\n", "The converted 375Vdc voltage from the primary nodes is then directed at low-and medium-power nodes and junction boxes. The nodes and junction boxes (similar to power strips) offer direct power and communications to the instruments at the experimental sites. In concert, these parts make up the RSN secondary infrastructure. \n\nExtension cables are used to link the primary nodes to the secondary infrastructure, providing power and communications.\n\nEquipment is linked using wet-mate connectors. Different types of cable were installed depending on load requirements. Bandwidth from these cables ranges from 10 Gbit/s to 1 Gbit/s.\n", "Action potentials pass from one cardiac cell to the next through pores known as gap junctions. These gap junctions are made of proteins called connexins. There are fewer gap junctions within the SA node and they are smaller in size. This is again important in insulating the SA node from the surrounding atrial cells.\n\nSection::::Structure.:Blood supply.\n", "Nodes also convert the 10kVdc voltage levels from the backbone cable to 375Vdc which is then directed to the secondary infrastructure. The 375V switching systems and Node telemetry systems were designed and manufactured by Texcel Technology Plc based in England. The software to manage the ports and telemetry protection systems was also supplied by Texcel as a element manager sitting under a Network Management System (NMS).\n\nThe primary nodes have a number of extra ports which offer the potential for large-scale future expansion (100 kilometers).\n\nSecondary Infrastructure\n", "The action potential travels from one location in the cell to another, but ion flow across the membrane occurs only at the nodes of Ranvier. As a result, the action potential signal jumps along the axon, from node to node, rather than propagating smoothly, as they do in axons that lack a myelin sheath. The clustering of voltage-gated sodium and potassium ion channels at the nodes permits this behavior.\n\nSection::::Function.:Saltatory conduction.\n", "The paraaortic and retroaortic nodes receive: \n\nBULLET::::- (a) the efferents of the common iliac lymph nodes\n\nBULLET::::- (b) the lymphatics from the testis in the male, and from the ovary, uterine tube, and uterus in the female\n\nBULLET::::- (c) the lymphatics from the kidney and suprarenal gland\n\nBULLET::::- (d) the lymphatics draining the lateral abdominal muscles and accompanying the lumbar veins\n", "The SA node controls the rate of contraction for the entire heart muscle because its cells have the quickest rate of spontaneous depolarization, thus they initiate action potentials the quickest. The action potential generated by the SA node passes down the electrical conduction system of the heart, and depolarizes the other potential pacemaker cells (AV node) to initiate action potentials before these other cells have had a chance to generate their own spontaneous action potential, thus they contract and propagate electrical impulses to the pace set by the cells of the SA node. This is the normal conduction of electrical activity in the heart.\n", "Under normal conditions, electrical activity is spontaneously generated by the SA node, the cardiac pacemaker. 
This electrical impulse is propagated throughout the right atrium, and through Bachmann's bundle to the left atrium, stimulating the myocardium of the atria to contract. The conduction of the electrical impulses throughout the atria is seen on the ECG as the P wave.\n\nAs the electrical activity is spreading throughout the atria, it travels via specialized pathways, known as \"internodal tracts\", from the SA node to the AV node.\n\nSection::::ECG.:AV node and bundles: PR interval.\n", "Section::::Electrical conduction.:Atrioventricular (AV) node.:Bundle of His, bundle branches, and Purkinje fibers.\n", "The first cell to produce the action potential in the SA node isn’t always the same, this is known as pacemaker shift. In certain species of animals, for example, in dogs, a superior shift (i.e. the cell that produces the fastest action potential in the SA node is higher than previously) usually produced an increased heart rate whereas an inferior shift (i.e. the cell producing the fastest action potential within the SA node is further down than previously) produced a decreased heart rate.\n\nSection::::Clinical significance.\n", "It is not very well known how the electric signal moves in the atria. It seems that it moves in a radial way, but Bachmann's bundle and coronary sinus muscle play a role in conduction between the two atria, which have a nearly simultaneous systole. While in the ventricles, the signal is carried by specialized tissue called the Purkinje fibers which then transmit the electric charge to the myocardium.\n", "There is a significant difference between the concentrations of sodium and potassium ions inside and outside the cell. The concentration of sodium ions is considerably higher in the extracellular fluid than in the intracellular fluid. The converse is true of the potassium ion concentrations inside and outside the cell. These differences cause all cell membranes to be electrically charged, with the positive charge on the outside of the cells and the negative charge on the inside. In a resting neuron (not conducting an impulse) the membrane potential is known as the resting potential, and between the two sides of the membrane is about -70 mV.\n", "Each channel is coded by a set of DNA instructions that tell the cell how to make it. These instructions are known as a gene. Figure 3 shows the important ion channels involved in the cardiac action potential, the current (ions) that flows through the channels, their main protein subunits (building blocks of the channel), some of their controlling genes that code for their structure and the phases they are active during the cardiac action potential. Some of the most important ion channels involved in the cardiac action potential are described briefly below.\n\nSection::::Channels.:Hyperpolarisation activated cyclic nucleotide gated (HCN) channels.\n", "Section::::Electrical conduction.:Sinoatrial (SA) node.\n\nNormal sinus rhythm is established by the sinoatrial (SA) node, the heart's pacemaker. The SA node is a specialized grouping of cardiomyocytes in the upper and back walls of the right atrium very close to the opening of the superior vena cava. 
The SA node has the highest rate of depolarization.\n", "In the latter case, a human \"hIKCa1\" gene encodes the channel found in T cells, which is responsible for the hyperpolarization that is required to keep Ca flowing into the cell through the \"I\" channels.\n", "In a first degree sinoatrial block, there is a lag between the time that the SA node fires and actual depolarization of the atria. This rhythm is not recognizable on an ECG strip because a strip does not denote when the SA node fires. It can be detected only during an electrophysiology study when a small wire is placed against the SA node from within the heart and the electrical impulses can be recorded as they leave the p-cells in the centre of the node [ see pacemaker potential ], followed by observing a delay in the onset of the p wave on the ECG.\n", "If the SA node does not function, or the impulse generated in the SA node is blocked before it travels down the electrical conduction system, a group of cells further down the heart will become its pacemaker. This center is typically represented by cells inside the atrioventricular node (AV node), which is an area between the atria and ventricles, within the atrial septum. If the AV node also fails, Purkinje fibers are occasionally capable of acting as the default or \"escape\" pacemaker. The reason Purkinje cells do not normally control the heart rate is that they generate action potentials at a lower frequency than the AV or SA nodes.\n", "In myelinated axons the myelin acts as a mechanical transducer preserving the entropy of the pulse and insulating against mechanical loss. In this model the nodes of Ranvier (where ion channels are highly concentrated) concentrate the ion channels providing maximum entropy to instigate a pulse that travels from node to node along the axon with the entropy being preserved by the shape and dynamics of the myelin sheath.\n", "This impulse spreads from its initiation in the SA node throughout the atria through specialized internodal pathways, to the atrial myocardial contractile cells and the atrioventricular node. The internodal pathways consist of three bands (anterior, middle, and posterior) that lead directly from the SA node to the next node in the conduction system, the atrioventricular node. The impulse takes approximately 50 ms (milliseconds) to travel between these two nodes. The relative importance of this pathway has been debated since the impulse would reach the atrioventricular node simply following the cell-by-cell pathway through the contractile cells of the myocardium in the atria. In addition, there is a specialized pathway called Bachmann's bundle or the interatrial band that conducts the impulse directly from the right atrium to the left atrium. Regardless of the pathway, as the impulse reaches the atrioventricular septum, the connective tissue of the cardiac skeleton prevents the impulse from spreading into the myocardial cells in the ventricles except at the atrioventricular node. The electrical event, the wave of depolarization, is the trigger for muscular contraction. The wave of depolarization begins in the right atrium, and the impulse spreads across the superior portions of both atria and then down through the contractile cells. The contractile cells then begin contraction from the superior to the inferior portions of the atria, efficiently pumping blood into the ventricles.\n", "Electric organs have evolved at least six times in various teleost and elasmobranch fish. 
Notably, they have convergently evolved in the African Mormyridae and South American Gymnotidae groups of electric fish. The two groups are distantly related, as they shared a common ancestor before the supercontinent Gondwana split into the American and African continents, leading to the divergence of the two groups. A whole-genome duplication event in the teleost lineage allowed for the neofunctionalization of the voltage-gated sodium channel gene Scn4aa which produces electric discharges. Developmentally, most electric organs in electric fish are derived from skeletal muscle.\n\nSection::::Electrocytes.\n", "Benefits of the large surface area of the spine apparatus include increased electronic properties of the spine and contribution to longitudinal resistance of the cytoplasm. The spine apparatus occupies a large portion of the volume of the spine stalk, which allows it to contribute significantly to the longitudinal resistance of the cytoplasm. Therefore, the spine apparatus can have a direct effect on the membrane potential of the spine plasma membrane by variation in position and volume.\n", "Section::::Structure.:Development.\n\nEmbryologic evidence of generation of the cardiac conduction system illuminates the respective roles of this specialized set of cells. Innervation of the heart begins with a brain only centered parasympathetic cholinergic first order. It is then followed by rapid growth of a second order sympathetic adrenergic system arising from the formation of the thoracic spinal ganglia. The third order of electrical influence of the heart is derived from the vagus nerve as the other peripheral organs form.\n\nSection::::Function.\n\nSection::::Function.:Action potential generation.\n" ]
[]
[]
[ "normal" ]
[ "SA node has electricity in it." ]
[ "false presupposition", "normal" ]
[ "Process is more chemistry based than electricity based. " ]
2018-03949
If every American went to the bank, withdrew all funds available, and closed all bank accounts, what would happen to the economy?
It wouldn't get that far. Banks hold less than 1% as much paper money as they hold in deposits. That local bank branch next to the grocery store has maybe $40–50K of cash in it; a handful of people closing their accounts could empty it. Typically, if you want more than $10K in cash, you have to place an order two days in advance.
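A back-of-the-envelope calculation shows the scale of the shortfall. Both figures below are rough illustrative assumptions, not official statistics:

# Sketch of why deposits cannot all be converted into paper money at once.
total_us_deposits = 17e12      # assumed total US bank deposits, in dollars
vault_cash_at_banks = 0.09e12  # assumed physical cash actually held by banks

coverage = vault_cash_at_banks / total_us_deposits
print(f"vault cash covers about {coverage:.1%} of deposits")
# -> about 0.5%: the rest of the money exists only as ledger entries that
#    have been lent out, so it cannot be handed over as cash on demand.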
[ "The Congressional Budget Office estimated that payment of interest on reserve balances would cost the American taxpayers about one tenth of the present 0.25% interest rate on $800 billion in deposits:\n\nThose expenditures pale in comparison to the lost tax revenues worldwide resulting from decreasing economic activity due to damage to the short-term commercial paper and associated credit markets.\n", "Tyler Cowen, a professor of economics at George Mason University, wrote for the \"New York Times\" in April 2011 that \"If enough depositors fear frozen accounts, the banks will be emptied out, and they also will require additional government bailouts, on top of the bailouts for the bad real estate loans. The banks come to resemble empty shells, conduits for public aid but shrinking and unprofitable as businesses — and, to a large extent, that is already the case in Ireland. Portugal is moving in this same direction, toward being a land inhabited by zombie banks. It’s the zombie banks that doom the current European bailout plans.\"\n", "BULLET::::- Journalist Rosalind Resnick favors a hypothetical scenario in which \"consumers and businesses would be able to borrow at the fed funds rate at 2 percent, just like the big banks do. This means that every cash-strapped homeowner would be able to refinance his mortgage and cut his payments in half, saving thousands of homes from foreclosure. Consumers could also refinance their credit card balances, auto loans and other debt at interest rates they can afford\" and that this \"plan\" \"would cost U.S. taxpayers absolutely nothing.\" She does not address how the Federal Reserve would manage the US population's mortgages, credit cards and auto loans in practice.\n", "According to CNBC commentator Jim Cramer, large corporations, institutions, and wealthy investors were pulling their money out of bank money market funds, in favor of government-backed Treasury bills. Cramer called it \"an invisible run on the banks,\" one that has no lines in the lobby but pushes banks to the breaking point nonetheless. As a bank's capital reserve of deposits evaporate, so too does its ability to lend and correspondingly make money. \"The lack of confidence inspired by Lehman's demise, the general poor health of many banks, this is going to turn this into an intractable moment,\" Cramer said, \"if someone in the government doesn't start pushing for more deposit insurance.\"\n", "During severe financial crises, sometimes governments close banks. Depositors may be unable to withdraw their money for long periods, as was true in the United States in 1933 under the Emergency Banking Act. Withdrawals may be limited. Bank deposits may be involuntarily converted to government bonds or to a new currency of lesser value in foreign exchange.\n", "The Congressional Budget Office estimated that payment of interest on reserve balances would cost the American taxpayers about one tenth of the present 0.25% interest rate on $800 billion in deposits:\n", "\"The New York Times\" states: \"The criteria being used to choose who gets money appears to be setting the stage for consolidation in the industry by favoring those most likely to survive\" because the criteria appears to favor the financially best off banks and banks too big to let fail. 
Some lawmakers are upset that the capitalization program will end up culling banks in their districts.\n", "Paulson's team realizes that buying toxic assets will take too long, leaving direct capital injects into the banks as their only option to use TARP to get credit flowing again. Along with FDIC Chair Sheila Bair, Paulson informs the banks that they will receive mandatory capital injections. The banks eventually agree, but Paulson's staff laments that the parties who caused the crisis are being allowed to dictate the terms of how they should use the billions with which they are being bailed out. An epilogue notes that bank mergers continued in the wake of the crisis, and that now only ten financial institutions hold 77% of all U.S. banking assets and have been declared too big to fail.\n", "However, if many depositors withdraw all at once, the bank itself (as opposed to individual investors) may run short of liquidity, and depositors will rush to withdraw their money, forcing the bank to liquidate many of its assets at a loss, and eventually to fail. If such a bank were to attempt to call in its loans early, businesses might be forced to disrupt their production while individuals might need to sell their homes and/or vehicles, causing further losses to the larger economy. Even so, many if not most debtors would be unable to pay the bank in full on demand and would be forced to declare bankruptcy, possibly affecting other creditors in the process.\n", "BULLET::::- Dominique Strauss-Kahn, Managing Director of the International Monetary Fund, has recommended three near-term actions to assist banks: provision of liquidity, purchase of distressed assets, and recapitalization. In addition, he argues for addressing the structural issues with more prudential regulation, better accounting rules, and more transparency.\n\nSection::::Alternative proposals.:Monetary consensus reform.\n\nThis process consisted of nationalizing most of the private industries. The short-term effects were evidently costly, but the beneficiary repercussions were vastly favorable to a sustainable economic future.\n\nBULLET::::1. Nationalize the federal reserve.\n\nBULLET::::2. Deregulate the corporate image of the United States.\n", "Banks are susceptible to many forms of risk which have triggered occasional systemic crises. These include liquidity risk (where many depositors may request withdrawals in excess of available funds), credit risk (the chance that those who owe money to the bank will not repay it), and interest rate risk (the possibility that the bank will become unprofitable, if rising interest rates force it to pay relatively more on its deposits than it receives on its loans).\n", "Some economists have noted that under full-reserve banking, because banks would not earn revenue from lending against demand deposits, depositors would have to pay fees for the services associated with checking accounts. This, it is felt, would probably be rejected by the public although with central bank zero and negative interest rate policies, some writers have noted depositors are already experiencing paying to put their savings even in fractional reserve banks. In their influential paper on financial crises, economists Douglas W. Diamond and Philip H. Dybvig warned that under full-reserve banking, since banks would not be permitted to lend out funds deposited in demand accounts, this function would be taken over by unregulated institutions. 
Unregulated institutions (such as high-yield debt issuers) would take over the economically necessary role of financial intermediation and maturity transformation, therefore destabilizing the financial system and leading to more frequent financial crises.\n", "Describing the Senate's reason for passing the bill, former Senator Evan Bayh \"described a scene from 2008 where Ben Bernanke warned senators that the sky would collapse if the banks weren't rescued. 'We looked at each other,' said Bayh, 'and said, okay, what do we need.'\"\n\nSection::::Legislative history.:Second House vote, October 3.\n", "In an interview on C-SPAN on January 27, 2009, Kanjorski defended the original emergency actions by the United States government to halt the 2008 financial crisis in September 2008. Kanjorski stated that the move to raise the guarantee money funds up to $250,000 was an emergency measure to stave off a massive money market \"electronic run\" on the banks that removed $550 billion from the system in a matter of hours on the morning of September 18. He further asserted that, if not stopped, the run would not only have caused the American economy to crash immediately, within 24 hours it would have brought down the world economy as well.\n", "Nobody knows what would happen if one of the world's largest banks became severely distressed and was forced to suspend trading, except to say that it would be far worse than long-term capital management: firstly, because they have ongoing trades with every significant market player, everywhere; secondly, because the sums of money are so much larger. A bulge-bracket bank will, on any given day, have over a ten trillion dollars of open trades on the foreign exchange markets and in derivatives. \n", "Financial infrastructures could be hit hard by cyber-attacks as the financial system is linked by computer systems. is constant money being exchanged in these institutions and if cyberterrorists were to attack and if transactions were rerouted and large amounts of money stolen, financial industries would collapse and civilians would be without jobs and security. Operations would stall from region to region causing nationwide economical degradation. In the U.S. alone, the average daily volume of transactions hit $3 trillion and 99% of it is non-cash flow. To be able to disrupt that amount of money for one day or for a period of days can cause lasting damage making investors pull out of funding and erode public confidence.\n", "The maximum cost of a $700 billion bailout would be $2,295 estimated cost per American (based on an estimate of 305 million Americans), or $4,635 per working American (based on an estimate of 151 million in the work force).\n\nThe bulk of this money would be spent to purchase mortgage backed securities, ultimately backed by American homeowners, which possibly could be sold later at a profit, by the government. Heterodox economist Michael Hudson predicted that the bailout would cause hyperinflation and dollar collapse.\n", "During the week ending September 19, 2008, money market funds had begun to experience significant withdrawals of funds by investors. This created a significant risk because money market funds are integral to the ongoing financing of corporations of all types. Individual investors lend money to money market funds, which then provide the funds to corporations in exchange for corporate short-term securities called asset-backed commercial paper (ABCP). However, a potential bank run had begun on certain money market funds. 
If this situation had worsened, the ability of major corporations to secure needed short-term financing through ABCP issuance would have been significantly affected. To assist with liquidity throughout the system, the US Treasury and Federal Reserve Bank announced that banks could obtain funds via the Federal Reserve's Discount Window using ABCP as collateral.\n", "On 7 October 2009, Professor Joseph Stiglitz, winner of the Nobel Prize in economics and former chief economist of the World Bank, speaking at Trinity College Dublin criticised NAMA. He said, \"Countries which allow banks to go under by following the ordinary rules of capitalism have done fine. The US has let 100 banks go this year alone, as did Sweden and Norway in their crises.\" As well as commenting that in Ireland, \"this bank bailout is a simple transfer from taxpayers to bondholders, and it will saddle generations to come. The only thing that might give you solace is that, as chief economist of the World Bank, we see this type of thing happening in banana republics all over the world. Whenever a banking crisis happens, the financial sector uses the turmoil as a mechanism to transfer wealth from the general population to themselves. I've been very disappointed to see that it has happened, not only in banana republics, but in advanced industrialised countries.\"\n", "Systemic banking crises are associated with substantial fiscal costs and large output losses. Frequently, emergency liquidity support and blanket guarantees have been used to contain these crises, not always successfully. Although fiscal tightening may help contain market pressures if a crisis is triggered by unsustainable fiscal policies, expansionary fiscal policies are typically used. In crises of liquidity and solvency, central banks can provide liquidity to support illiquid banks. Depositor protection can help restore confidence, although it tends to be costly and does not necessarily speed up economic recovery. Intervention is often delayed in the hope that recovery will occur, and this delay increases the stress on the economy.\n", "To prevent immediate failure, the Federal Reserve announced categorically that it would meet any liquidity needs the Continental might have, while the Federal Deposit Insurance Corporation (FDIC) gave depositors and general creditors a full guarantee (not subject to the $100,000 FDIC deposit-insurance limit) and provided direct assistance of $2 billion (including participations). Money center banks assembled an additional $5.3 billion unsecured facility pending a resolution and resumption of more-normal business. These measures slowed, but did not stop, the outflow of deposits.\n\nSection::::Historical examples.:Continental Illinois case.:Controversy.\n", "It was decided that banks would transfer their assets following an appraisal.\n\nTheoretically, in being freed of the asset, the financial institution is also freed from the associated risk, and would thus no longer consume capital and would be able to extend loans to customers.\n\nSpain is acting differently from the rest of Europe and the United States. 
In these areas the bad banks were created first, and then the financial system was recapitalised.\n", "BULLET::::- There is question whether it is justifiable to have bailouts of failing banks when the country is going through the internal devaluation and when the country requires additional resources (that could be transferred from the bailout funds) for structural reforms.\n\nSection::::Future.\n", "As the bank should not assume that business will always continue as it is the current business process, the institution needs to explore emergency sources of funds and formalise a contingency plan. The purpose is to find alternative backup sources of funding to those that occur within the normal course of operations.\n\nDealing with Contingency Funding Plan (CFP) is to find adequate actions as regard to low-probability and high-impact events as opposed to high-probability and low-impact into the day-to-day management of funding sources and their usage within the bank.\n", "As we work our way through this turbulence, our highest priority is limiting its impact on the real economy. We must maintain stable, orderly and liquid financial markets and our banks must continue to play their vital role of supporting the economy by making credit available to consumers and businesses. And we must of course focus on housing, which precipitated the turmoil in the capital markets, and is today the biggest downside risk to our economy. We must work to limit the impact of the housing downturn on the real economy without impeding the completion of the necessary housing correction. I will address each of these in turn. Regulators and policy makers are vigilant; we are not taking anything for granted.\n" ]
[ "Pulling all cash from local banks would cause something to happen to the economy.", "Every American would be able to withdraw all their money from the banks to impact the economy. " ]
[ "Local banks don't have enough cash to cause an issue if everyone were to pull their money out. ", "There is not enough paper money held at banks for people to be able to withdraw every dollar within their account." ]
[ "false presupposition" ]
[ "Pulling all cash from local banks would cause something to happen to the economy.", "Every American would be able to withdraw all their money from the banks to impact the economy. " ]
[ "false presupposition", "false presupposition" ]
[ "Local banks don't have enough cash to cause an issue if everyone were to pull their money out. ", "There is not enough paper money held at banks for people to be able to withdraw every dollar within their account." ]
2018-23436
When close to burning out, why will a fluorescent tube struggle to illuminate for a long time, then work fine once it has started?
A fluorescent tube works by passing an electric discharge through mercury vapour. That discharge doesn't strike immediately when the power is turned on. Instead, the starter first routes the current through two heater filaments at the ends of the tube. After a delay, the starter switch opens and applies the voltage across the tube. If the discharge strikes, it stays lit. If not, the starter cycles through the sequence again until it detects the discharge.
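As a rough illustration of why a worn tube flickers for a while and then runs fine, here is a toy simulation of the starter's retry loop; the per-attempt strike probability is an assumption, not a measured value:

# Toy model of a glow starter retrying until the arc strikes. A tube near
# end of life strikes with low probability per attempt, so many preheat /
# voltage-kick cycles (the visible flicker) may pass before the arc holds.
import random

random.seed(42)
strike_probability = 0.1  # assumed low value for a tube near end of life

attempts = 0
struck = False
while not struck:
    attempts += 1                 # preheat the filaments, then open the switch
    struck = random.random() < strike_probability
print(f"discharge struck after {attempts} attempts; the tube now stays lit")

Once the loop exits it never runs again, which mirrors the real behaviour: the hot, conducting arc keeps the tube lit and the starter sits idle.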
[ "Fluorescent lamps near end of life can \"hoot\" at RF and present a serious interference risk as the frequency can vary depending on lamp temperature. This is due in part to the tube being a negative differential resistance (NDR) and current flow through the plasma forming a tuned circuit whose frequency depends on path length. Tubes in this failure mode may also flicker with bands running back and forth along the glass.\n\nSection::::Disadvantages.:Operating temperature.\n", "Section::::Principles of operation.:Starting.\n\nThe noble gas used in the fluorescent tube (commonly argon) must be ionized before the arc can \"strike\" within the tube. For small lamps, it does not take much voltage to strike the arc and starting the lamp presents no problem, but larger tubes require a substantial voltage (in the range of a thousand volts).\n\nSection::::Principles of operation.:Starting.:Preheating.\n", "The semi-resonant start circuit was invented by Thorn Lighting for use with T12 fluorescent tubes. This method uses a double wound transformer and a capacitor. With no arc current, the transformer and capacitor resonate at line frequency and generate about twice the supply voltage across the tube, and a small electrode heating current. This tube voltage is too low to strike the arc with cold electrodes, but as the electrodes heat up to thermionic emission temperature, the tube striking voltage falls below that of the ringing voltage, and the arc strikes. As the electrodes heat, the lamp slowly, over three to five seconds, reaches full brightness. As the arc current increases and tube voltage drops, the circuit provides current limiting.\n", "At higher energy-levels, wall ablation becomes the main process of wear. The electrical arc slowly erodes the inner wall of the tube, forming microscopic cracks that give the glass a frosted appearance. The ablation releases oxygen from the glass, increasing the pressure beyond an operable level. This causes triggering problems, known as \"jitter.\" Above 30%, the ablation may cause enough wear to rupture the lamp. However, at energy levels greater than 15%, the lifetime can be calculated with a fair degree of accuracy.\n", "The \"emission mix\" on the lamp filaments/cathodes is required to enable electrons to pass into the gas via thermionic emission at the lamp operating voltages used. The mix is slowly sputtered off by bombardment with electrons and mercury ions during operation, but a larger amount is sputtered off each time the lamp is started with cold cathodes. The method of starting the lamp has a significant impact on this. Lamps operated for typically less than 3 hours each switch-on will normally run out of the emission mix before other parts of the lamp fail. The sputtered emission mix forms the dark marks at the lamp ends seen in old lamps. When all the emission mix is gone, the cathode cannot pass sufficient electrons into the gas fill to maintain the gas discharge at the designed lamp operating voltage. Ideally, the control gear should shut down the lamp when this happens. 
However, some control gear will provide sufficient increased voltage to continue operating the lamp in cold cathode mode, which will cause overheating of the lamp end and rapid disintegration of the electrodes (filament goes open-circuit) and filament support wires until they are completely gone or the glass cracks, destroying the low pressure gas fill and stopping the gas discharge.\n", "Once the tube strikes, the impinging main discharge keeps the cathodes hot, permitting continued electron emission without the need for the filaments to continue to be heated. The starter switch does not close again because the voltage across the lit tube is insufficient to start a glow discharge in the starter.\n", "As in all mercury-based gas-filled tubes, mercury is slowly adsorbed onto the glass, phosphor, and tube electrodes throughout the life of the lamp, until it can no longer function. Loss of mercury will take over from failure of the phosphor in some lamps. The failure symptoms are similar, except loss of mercury initially causes an extended run-up time to full light output, and finally causes the lamp to glow a dim pink when the mercury runs out and the argon base gas takes over as the primary discharge.\n", "Subjecting the tube to asymmetric waveforms, where the total current flow through the tube does not cancel out and the tube effectively operates under a DC bias, causes asymmetric distribution of mercury ions along the tube due to cataphoresis. The localized depletion of mercury vapor pressure manifests as pink luminescence of the base gas in the vicinity of one of the electrodes, and the operating lifetime of the lamp may be dramatically shortened. This can be an issue with some poorly designed inverters.\n\nSection::::Principles of operation.:End of life.:Burned-out filaments.\n", "Section::::Principles of operation.:Electrical aspects of operation.\n\nFluorescent lamps are negative differential resistance devices, so as more current flows through them, the electrical resistance of the fluorescent lamp drops, allowing for even more current to flow. Connected directly to a constant-voltage power supply, a fluorescent lamp would rapidly self-destruct because of the uncontrolled current flow. To prevent this, fluorescent lamps must use an auxiliary device, a ballast, to regulate the current flow through the lamp.\n", "Section::::Principles of operation.:Cold-cathode fluorescent lamps.\n\nMost fluorescent lamps use electrodes that operate by thermionic emission, meaning they are operated at a high enough temperature for the electrode material (usually aided by a special coating) to emit electrons into the tube by heat.\n", "LPS lamp failure does not result in cycling; rather, the lamp will simply not strike or will maintain the dull red glow of the start-up phase. In another failure mode, a tiny puncture of the arc tube leaks some of the sodium vapor into the outer vacuum bulb. The sodium condenses and creates a mirror on the outer glass, partially obscuring the arc tube. The lamp often continues operating normally, but much of the light generated is obscured by the sodium coating, providing no illumination.\n\nSection::::See also.\n\nBULLET::::- Arc lamp\n\nBULLET::::- High-intensity discharge lamp (HID)\n", "The phosphor drops off in efficiency during use. By around 25,000 operating hours, it will typically be half the brightness of a new lamp (although some manufacturers claim much longer half-lives for their lamps). 
Lamps that do not suffer failures of the emission mix or integral ballast electronics will eventually develop this failure mode. They still work, but have become dim and inefficient. The process is slow, and often becomes obvious only when a new lamp is operating next to an old one.\n\nSection::::Principles of operation.:End of life.:Loss of mercury.\n", "Section::::Principles of operation.:End of life.\n\nThe end of life failure mode for fluorescent lamps varies depending on how they are used and their control gear type. Often the light will turn pink (see Loss of mercury), with black burns on the ends of the lamp due to sputtering of emission mix (see below). The lamp may also flicker at a noticeable rate (see Flicker problems).\n\nSection::::Principles of operation.:End of life.:Emission mix.\n", "Many different circuits have been used to operate fluorescent lamps. The choice of circuit is based on AC voltage, tube length, initial cost, long term cost, instant versus non-instant starting, temperature ranges and parts availability, etc.\n", "In most CFLs the filaments are connected in series, with a small capacitor between them. The discharge, once lit, is in parallel to the capacitor and presents a lower-resistance path, effectively shorting the capacitor out.\n\nSection::::Principles of operation.:End of life.:Phosphor.\n", "If the lamp is installed where it is frequently switched on and off, it will age rapidly. Under extreme conditions, its lifespan may be much shorter than a cheap incandescent lamp. Each start cycle slightly erodes the electron-emitting surface of the cathodes; when all the emission material is gone, the lamp cannot start with the available ballast voltage. Fixtures intended for flashing of lights (such as for advertising) will use a ballast that maintains cathode temperature when the arc is off, preserving the life of the lamp.\n", "With automated starters such as glow starters, a failing tube will cycle endlessly, flickering as the lamp quickly goes out because the emission mix is insufficient to keep the lamp current high enough to keep the glow starter open. This runs the ballast at higher temperature. Some more advanced starters time out in this situation, and do not attempt repeated starts until power is reset. Some older systems used a thermal over-current trip to detect repeated starting attempts and disable the circuit until manually reset. The switch contacts in glow starters are subject to wear and inevitably fail eventually, so the starter is manufactured as a plug-in replaceable unit.\n", "This may occur in compact fluorescent lamps with integral electrical ballasts or in linear lamps. Ballast electronics failure is a somewhat random process that follows the standard failure profile for any electronic device. There is an initial small peak of early failures, followed by a drop and steady increase over lamp life. Life of electronics is heavily dependent on operating temperature—it typically halves for each 10 °C temperature rise. The quoted average life of a lamp is usually at ambient (this may vary by country). The average life of the electronics at this temperature is normally greater than this, so at this temperature, not many lamps will fail because the electronics fail. In some fittings, the ambient temperature could be well above this, in which case failure of the electronics may become the predominant failure mechanism. 
Similarly, running a compact fluorescent lamp base-up will result in hotter electronics, which can cause shorter average life (particularly with higher power rated ones). Electronic ballasts should be designed to shut down the tube when the emission mix runs out as described above. In the case of integral electronic ballasts, since they never have to work again, this is sometimes done by having them deliberately burn out some component to permanently cease operation.\n", "When power is first applied to the circuit, there will be a glow discharge across the electrodes in the starter lamp. This heats the gas in the starter and causes one of the bi-metallic contacts to bend towards the other. When the contacts touch, the two filaments of the fluorescent lamp and the ballast will effectively be switched in series to the supply voltage. The current through the filaments causes them to heat up and emit electrons into the tube gas by thermionic emission. In the starter, the touching contacts short out the voltage sustaining the glow discharge, extinguishing it so the gas cools down and no longer heats the bi-metallic switch, which opens within a second or two. The current through the filaments and the inductive ballast is abruptly interrupted, leaving the full line voltage applied between the filaments at the ends of the tube and generating an inductive kick which provides the high voltage needed to start the lamp. The lamp will fail to strike if the filaments are not hot enough, in which case the cycle repeats; several cycles are usually needed, which causes flickering and clicking during starting (older thermal starters behaved better in this respect). A power factor correction (PFC) capacitor draws leading current from the mains to compensate for the lagging current drawn by the lamp circuit.\n", "If the tube is overloaded, not only can the plate warp, causing a short to outer grids or beam-shaping elements, but the emissive layer on the cathode will be consumed very quickly. The equipment's power supply and the tube's load (output transformers, flyback transformers, etc.) are likely to be damaged by a sustained overload condition, so power should be immediately disconnected when a glowing plate is found.\n\nSection::::Common Occurrences.\n", "The common fluorescent lamp relies on fluorescence. Inside the glass tube is a partial vacuum and a small amount of mercury. An electric discharge in the tube causes the mercury atoms to emit mostly ultraviolet light. The tube is lined with a coating of a fluorescent material, called the \"phosphor\", which absorbs ultraviolet light and re-emits visible light. Fluorescent lighting is more energy-efficient than incandescent lighting elements. However, the uneven spectrum of traditional fluorescent lamps may cause certain colors to appear different than when illuminated by incandescent light or daylight. The mercury vapor emission spectrum is dominated by a short-wave UV line at 254 nm (which provides most of the energy to the phosphors), accompanied by visible light emission at 436 nm (blue), 546 nm (green) and 579 nm (yellow-orange). These three lines can be observed superimposed on the white continuum using a hand spectroscope, for light emitted by the usual white fluorescent tubes. 
These same visible lines, accompanied by the emission lines of trivalent europium and trivalent terbium, and further accompanied by the emission continuum of divalent europium in the blue region, comprise the more discontinuous light emission of the modern trichromatic phosphor systems used in many compact fluorescent lamp and traditional lamps where better color rendition is a goal.\n", "Fluorescent lamps can be illuminated by means other than a proper electrical connection. These other methods, however, result in very dim or very short-lived illumination, and so are seen mostly in science demonstrations. Static electricity or a Van de Graaff generator will cause a lamp to flash momentarily as it discharges a high voltage capacitance. A Tesla coil will pass high-frequency current through the tube, and since it has a high voltage as well, the gases within the tube will ionize and emit light. This also works with plasma globes. Capacitive coupling with high-voltage power lines can light a lamp continuously at low intensity, depending on the intensity of the electrostatic field, as shown in the image on the right.\n", "Failure from heat is usually caused by excessively long pulse-durations, high average-power levels, or inadequate electrode-size. The longer the pulse; the more of its intense heat will be transferred to the glass. When the inner wall of the tube gets too hot while the outer wall is still cold, this temperature gradient can cause the lamp to crack. Similarly, if the electrodes are not of a sufficient diameter to handle the peak currents they may produce too much resistance, rapidly heating up and thermally expanding. If the electrodes heat much faster than the glass, the lamp may crack or even shatter at the ends.\n", "BULLET::::- tubes with grids might not even show the real emission because of \"hot spots\" in the cathode, hidden by the grids under normal conditions\n\nBULLET::::- grids will be forward biased to some extent - some fine control grid wires are limited in their ability to withstand this\n\nBULLET::::- the amount of current that should be considered \"100%\" has to be known and documented for each tube type (and will be different for different emission test circuit details)\n", "However, there are also tubes that operate in cold cathode mode, whereby electrons are liberated into the tube only by the large potential difference (voltage) between the electrodes. This does not mean the electrodes are cold (indeed, they can be very hot), but it does mean they are operating below their thermionic emission temperature. Because cold cathode lamps have no thermionic emission coating to wear out, they can have much longer lives than hot cathode tubes. This quality makes them desirable for maintenance-free long-life applications (such as backlights in liquid crystal displays). Sputtering of the electrode may still occur, but electrodes can be shaped (e.g. into an internal cylinder) to capture most of the sputtered material so it is not lost from the electrode.\n" ]
[]
[]
[ "normal" ]
[ "If a light struggles to start, it is surprising that it works well once it starts." ]
[ "normal", "false presupposition" ]
[ "Fluorescent tubes take a long time to start up." ]
2018-08613
Why do computers/processors get faster while their speed (GHz) stays the same?
Clock speed isn't the only thing that matters. Besides more cores, the biggest improvement is the number of instructions processed per clock tick, called IPC (instructions per cycle). Think of it like coming home from grocery shopping with a car full of groceries. Clock speed determines how long each trip from the car to the fridge takes. IPC determines how many groceries you can carry in one trip. So a 3.4 GHz CPU from 2008 might also need 30 seconds from the car to the fridge, but it can only carry the bread and eggs, while a 2018 CPU can carry the bread, eggs, bacon, and soda at the same time. Newer CPUs also have more cores, so you're not carrying the groceries alone but have your brothers and sisters helping you. Instead of making 4 trips from the car to the fridge and back, you and your 3 siblings carry the stuff simultaneously, so you only have to take one trip. You can also save time by carrying the groceries to the fridge in the order they should be put in, so you don't have to reorganize everything in the kitchen. That means anticipating the correct order, and modern CPUs have become much better at predicting what comes next. All in all, this boosts overall performance. That said, compared to the development in storage and graphics processing, CPU innovation has slowed down a lot in the past years.
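To put rough numbers on the grocery analogy (the IPC values and core counts below are assumptions for illustration, not benchmarks of real parts):

# Peak throughput scales roughly with clock rate * IPC * core count, which
# is how a CPU gets much faster while staying at the same GHz.
def peak_ops_per_second(clock_hz, ipc, cores):
    return clock_hz * ipc * cores

old_cpu = peak_ops_per_second(3.4e9, ipc=1, cores=2)  # assumed 2008-era chip
new_cpu = peak_ops_per_second(3.4e9, ipc=4, cores=8)  # assumed 2018-era chip
print(f"speedup at the same 3.4 GHz: {new_cpu / old_cpu:.0f}x")  # -> 16x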
[ "For a given processor, \"C\" is a fixed value. However, \"V\" and \"f\" can vary considerably. For example, for a 1.6 GHz Pentium M, the clock frequency can be stepped down in 200 MHz decrements over the range from 1.6 to 0.6 GHz. At the same time, the voltage requirement decreases from 1.484 to 0.956 V. The result is that the power consumption theoretically goes down by a factor of 6.4. In practice, the effect may be smaller because some CPU instructions use less energy per tick of the CPU clock than others. For example, when an operating system is not busy, it tends to issue x86 halt (HLT) instructions, which suspend operation of parts of the CPU for a time period, so it uses less energy per tick of the CPU clock than when executing productive instructions in its normal state. For a given rate of work, a CPU running at a higher clock rate will execute a greater proportion of HLT instructions. The simple equation which relates power, voltage and frequency above also does not take into account the static power consumption of the CPU. This tends not to change with frequency, but does change with temperature and voltage. Hot electrons, and electrons exposed to a stronger electric field are more likely to migrate across a gate as \"gate leakage\" current, leading to an increase in static power consumption.\n", "One possible reason for super-linear speedup in low-level computations is the cache effect resulting from the different memory hierarchies of a modern computer: in parallel computing, not only do the numbers of processors change, but so does the size of accumulated caches from different processors. With the larger accumulated cache size, more or even all of the working set can fit into caches and the memory access time reduces dramatically, which causes the extra speedup in addition to that from the actual computation.\n", "Many people used to measure a computer's speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have greater performance. As a result, manufacturers have moved away from clock speed as a measure of performance.\n\nOther factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs.\n", "Running a processor at high clock speeds allows for better performance. However, when the same processor is run at a lower frequency (speed), it generates less heat and consumes less power. In many cases, the core voltage can also be reduced, further reducing power consumption and heat generation. By using SpeedStep, users can select the balance of power conservation and performance that best suits them, or even change the clock speed dynamically as the processor burden changes.\n\nThe power consumed by a CPU with a capacitance \"C\", running at frequency \"f\" and voltage \"V\" is approximately:\n", "BULLET::::- Timing/design closure – As clock frequencies tend to scale up, designers are finding it more difficult to distribute and maintain low clock skew between these high frequency clocks across the entire chip. 
This has led to a rising interest in multicore and multiprocessor architectures, since an overall speedup can be obtained even with lower clock frequency by using the computational power of all the cores.\n", "Historically, processor manufacturers consistently delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification. More recently, in order to manage CPU power dissipation, processor makers favor multi-core chip designs, thus software needs to be written in a multi-threaded or multi-process manner to take full advantage of such hardware. Many multi-threaded development paradigms introduce overhead, and will not see a linear increase in speed when compared to the number of processors. This is particularly true while accessing shared or dependent resources, due to lock contention. This effect becomes more noticeable as the number of processors increases.\n", "Manufacturers of modern processors typically charge premium prices for processors that operate at higher clock rates, a practice called binning. For a given CPU, the clock rates are determined at the end of the manufacturing process through actual testing of each processor. Chip manufacturers publish a \"maximum clock rate\" specification, and they test chips before selling them to make sure they meet that specification, even when executing the most complicated instructions with the data patterns that take the longest to settle (testing at the temperature and voltage that runs the lowest performance). Processors successfully tested for compliance with a given set of standards may be labeled with a higher clock rate, e.g., 3.50 GHz, while those that fail the standards of the higher clock rate yet pass the standards of a lesser clock rate may be labeled with the lesser clock rate, e.g., 3.3 GHz, and sold at a lower price.\n", "Frequency scaling\n\nIn computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004. \n\nThe effect of processor frequency on computer speed can be seen by looking at the equation for computer program runtime:\n\nruntime = (instructions per program) × (cycles per instruction) × (seconds per cycle)\n", "Computer manufacturers frequently bring out new models, often with higher performance. Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer.\n", "Between 2001 and 2003, Intel and AMD made few changes to the designs of their processors. Most performance increases were created by raising the processor's clock speed rather than improving the microprocessor's core. Around mid-2004, Intel encountered serious problems in increasing their Pentium 4's clock speed beyond 3.4 GHz because of the enormous amount of heat generated by the already hot Prescott core processor when working at higher clock speeds. 
In response, Intel started exploring ways to improve the performance of its microprocessors other than by raising clock speeds, such as increasing the sizes of the processors' caches, using a P6 microarchitecture descendant in Pentium M CPUs and beyond, and using multiple processing cores in its processors.\n", "Amdahl's law presupposes that the computing requirements will stay the same, given increased processing power. In other words, an analysis of the same data will take less time given more computing power.\n", "Section::::Determining factors.:Engineering.\n", "Amdahl's law does represent the law of diminishing returns if one considers what sort of return one gets by adding more processors to a machine, assuming one is running a fixed-size computation that will use all available processors to their capacity. Each new processor added to the system will add less usable power than the previous one. Each time one doubles the number of processors, the speedup ratio will diminish, as the total throughput heads toward the limit of 1/(1 − \"p\") (a numerical illustration follows this passage list).\n", "The clock rate of a CPU is most useful for providing comparisons between CPUs in the same family. The clock rate is only one of several factors that can influence performance when comparing processors in different families. For example, an IBM PC with an Intel 80486 CPU running at 50 MHz will be about twice as fast (internally only) as one with the same CPU and memory running at 25 MHz, while the same will not be true for a MIPS R4000 running at the same clock rate as either, since the two are different processors that implement different architectures and microarchitectures. Further, a \"cumulative clock rate\" measure is sometimes assumed by taking the number of cores and multiplying it by the clock rate (e.g., a dual-core 2.8 GHz processor being treated as a cumulative 5.6 GHz). There are many other factors to consider when comparing the performance of CPUs, like the width of the CPU's data bus, the latency of the memory, and the cache architecture.\n", "After each clock pulse, the signal lines inside the CPU need time to settle to their new state. That is, every signal line must finish transitioning from 0 to 1, or from 1 to 0. If the next clock pulse comes before that, the results will be incorrect. In the process of transitioning, some energy is wasted as heat (mostly inside the driving transistors). When executing complicated instructions that cause many transitions, the higher the clock rate, the more heat is produced. Transistors may be damaged by excessive heat.\n", "P = C × V² × F, where \"P\" is power consumption, \"C\" is the capacitance being switched per clock cycle, \"V\" is voltage, and \"F\" is the processor frequency (cycles per second). Increases in frequency thus increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.\n", "BULLET::::- The processor was stated to run at 1.3 GHz, but on watches with 1 GB RAM and 8 GB storage, the processor only runs at 1 GHz.
Omate later explained that at full speed the heat was not bearable, and therefore they reduced the clock speed.\n", "BULLET::::- Optimizing machine code – by implementing compiler optimizations that schedule clusters of instructions using common components, the CPU power used to run an application can be significantly reduced.\n\nSection::::Clock frequencies.\n", "Sometimes a designer focuses on improving performance by making significant improvements in f (with techniques such as deeper pipelines and faster caches), while (hopefully) not sacrificing too much CPI—leading to a speed-demon CPU design.\n\nSometimes a designer focuses on improving performance by making significant improvements in CPI (with techniques such as out-of-order execution, superscalar CPUs, larger caches, caches with improved hit rates, improved branch prediction, speculative execution, etc.), while (hopefully) not sacrificing too much clock frequency—leading to a brainiac CPU design.\n", "As device mobility has increased, the relative performance of specific acceleration protocols has required new metrics that consider characteristics such as physical hardware dimensions, power consumption, and operations throughput. These can be summarized into three categories: task efficiency, implementation efficiency, and flexibility. Appropriate metrics consider the area of the hardware along with both the corresponding operations throughput and the energy consumed.\n\nSection::::Example tasks accelerated.\n\nSection::::Example tasks accelerated.:Summing one million integers.\n", "For years, processor makers delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification. Now, to manage CPU power dissipation, processor makers favor multi-core chip designs, and software has to be written in a multi-threaded manner to take full advantage of the hardware. Many multi-threaded development paradigms introduce overhead and will not see a linear increase in speed versus the number of processors. This is particularly true while accessing shared or dependent resources, due to lock contention. This effect becomes more noticeable as the number of processors increases. There are cases where a roughly 45% increase in processor transistors has translated to only a roughly 10–20% increase in processing power.\n", "This means that I/O-bound processes are slower than non-I/O-bound processes, not faster. This is due to increases in the rate of data processing in the core, while the rate at which data is transferred from storage to the processor does not increase with it. As CPU clock speed increases, allowing more instructions to be executed in a given time window, the limiting factor on effective execution is the rate at which instructions can be delivered to the processor from storage, and sent from the processor to their destination. In short, programs naturally shift to being more and more I/O bound.\n", "System designers building real-time computing systems want to guarantee worst-case response.
That is easier to do when the CPU has low interrupt latency and when it has deterministic response.\n\nSection::::Aspects of Performance.:Bandwidth.\n\nIn computer networking, bandwidth is a measurement of the bit rate of available or consumed data communication resources, expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).\n", "The clock rate alone is generally considered to be an inaccurate measure of performance when comparing different CPU families. Software benchmarks are more useful. Clock rates can sometimes be misleading, since the amount of work different CPUs can do in one cycle varies. For example, superscalar processors can execute more than one instruction per cycle (on average), yet it is not uncommon for them to do \"less\" in a clock cycle. In addition, subscalar CPUs or the use of parallelism can also affect the performance of the computer regardless of clock rate.\n\nSection::::See also.\n\nBULLET::::- Crystal oscillator frequencies\n", "Section::::Overview.:Components.\n\nTechnically, any component that uses a timer (or clock) to synchronize its internal operations can be overclocked. Most efforts for computer components, however, focus on specific components such as processors (a.k.a. CPU), video cards, motherboard chipsets, and RAM. Most modern processors derive their effective operating speeds by multiplying a base clock (processor bus speed) by an internal multiplier within the processor (the CPU multiplier) to attain their final speed.\n" ]
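To make the factor-of-6.4 figure in the first passage concrete, here is a minimal Python sketch of the dynamic-power relationship P = C × V² × f, using the Pentium M voltage and frequency values quoted above. The capacitance value is an arbitrary placeholder, since it cancels out of the ratio, and static power is ignored.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    # Approximate switching power in watts: P = C * V^2 * f.
    return capacitance_f * voltage_v ** 2 * frequency_hz

C = 1e-9  # placeholder capacitance in farads; any positive value gives the same ratio

p_high = dynamic_power(C, 1.484, 1.6e9)  # 1.6 GHz at 1.484 V
p_low = dynamic_power(C, 0.956, 0.6e9)   # 0.6 GHz at 0.956 V

print(f"theoretical power ratio: {p_high / p_low:.1f}x")  # ~6.4x, matching the passage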
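The runtime equation completed in the frequency-scaling passage can be checked numerically in the same way. The instruction count, CPI, and clock rates below are invented example values, not figures from the text; the point is only to show why raising f alone used to deliver speedups.

# Iron law of performance:
# runtime = (instructions / program) * (cycles / instruction) * (seconds / cycle)
instructions = 2_000_000_000  # hypothetical dynamic instruction count
cpi = 1.5                     # hypothetical average cycles per instruction
freq_hz = 2.0e9               # hypothetical 2.0 GHz clock

runtime_s = instructions * cpi / freq_hz
print(f"runtime at 2.0 GHz: {runtime_s:.2f} s")                    # 1.50 s

# Frequency scaling alone: the same program on a 2.5 GHz clock.
print(f"runtime at 2.5 GHz: {instructions * cpi / 2.5e9:.2f} s")   # 1.20 s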
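Finally, the 1/(1 − \"p\") limit from the Amdahl's law passage can be illustrated numerically; the parallel fraction p = 0.9 below is an assumed example value, not a figure from the text.

def amdahl_speedup(p, n):
    # Amdahl's law: speedup with n processors when a fraction p of the work is parallelizable.
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9
for n in (1, 2, 4, 8, 16, 64, 1024):
    print(f"{n:>4} processors: {amdahl_speedup(p, n):5.2f}x speedup")

print(f"limit as n grows: {1.0 / (1.0 - p):.1f}x")  # each doubling of n adds less and less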
[ "Computer processors are not getting faster. ", "If two chips have the same speed, they should process at the same rate" ]
[ "Computer processors are getting faster through more cores and IPC.", "Processor rate is dependent not only on clockspeeds but also on number of cores and number of instructions processed." ]
[ "false presupposition" ]
[ "Computer processors are not getting faster. ", "If two chips have the same speed, they should process at the same rate" ]
[ "false presupposition", "false presupposition" ]
[ "Computer processors are getting faster through more cores and IPC.", "Processor rate is dependent not only on clockspeeds but also on number of cores and number of instructions processed." ]
2018-01367
How are state utilities and resources delivered to people who live on a road that starts in one state and dead ends in another?
Excellent question! I worked for the fire department. There was an apartment complex in our county that could only be accessed by a road in another county. The police and public utilities departments from our county would respond through the other county to get to the complex. For the fire department, since we had a "mutual aid" agreement, the closest fire equipment (which happened to be in the other county) would be sent. I am sure it works the same way for states.
[ "The total length of the Nebraska section is long, and was completed at a cost of $435 million.\n\nSection::::History.:Legacy.\n\nThe beginning of the I-80 construction in Nebraska in 1957 led the Nebraska Legislature to split the Department of Roads and Irrigation in order to create three separate agencies in the state, including the Department of Motor Vehicles, Department of Water Resources and the Department of Roads, which was the first Nebraska agency solely responsible for highway planning, construction, and maintenance in Nebraska history.\n", "Typically, when the Iowa Department of Transportation transfers a highway to a county or local jurisdiction, the DOT must ensure the highway is in good condition or provide the county compensation to repair the highway. Senate File 451, codified as Iowa Code §306.8A, instead created a fund for the maintenance of newly transferred highways. Until 2013, 1.75% of the primary highway fund will be directed to this fund to compensate counties receiving highways. Over $1.1 million has been allocated to counties for the August 2009 – July 2010 period.\n", "State governments are sovereign entities which use their powers of taxation both to match federal grants, and provide for local transportation needs. Different states have different systems for dividing responsibility for funding and maintaining road and transit networks between the state department of transportation, counties, municipalities, and other entities. Typically cities or counties are responsible for local roads, financed with block grants and local property taxes, and the state is responsible for major roads that receive state and federal designations. Many mass transit agencies are quasi-independent and subsidized branches of a state, county, or city government.\n\nSection::::Economic impact.\n", "The United States Department of Transportation and its divisions provide regulation, supervision, and funding for all aspects of transportation, except for customs, immigration, and security, which are the responsibility of the United States Department of Homeland Security. Each state has its own Department of Transportation, which builds and maintains state highways, and depending upon the state, may either directly operate or supervise other modes of transportation.\n", "On September 15, 2006, funds were distributed to the seven counties through which the toll road runs. The list below details each county's total share in the Major Moves money. Some of the funds from each county's distribution were directed to the cities and towns within that county.\n\nBULLET::::- Elkhart County: $40 million\n\nBULLET::::- La Grange County: $40 million\n\nBULLET::::- Lake County: $15 million\n\nBULLET::::- La Porte County: $25 million\n\nBULLET::::- Porter County: $40 million\n\nBULLET::::- Steuben County: $40 million\n\nBULLET::::- St. Joseph County: $40 million\n", "The state highway system consists of about 8,000 miles (13,000 km) of state highways (roadways owned and maintained by ODOT), with about 7,400 miles (12,000 km) when minor connections and frontage roads are removed. This is about 9% of the total road mileage in the state, including Oregon's portion of the Interstate Highway System (729.57 mi/1,174.13 km) and many other highways ranging from statewide to local importance. Transfers of highways between the state and county or local maintenance require the approval of the Oregon Transportation Commission (OTC), a five-member governor-appointed authority that meets monthly. 
These transfers often result in discontinuous highways, where a local government maintains part or all of a main road within its boundaries.\n", "Secondary roads are defined simply by the Iowa Code as \"those roads under county jurisdiction.\" The 99 counties in Iowa divide the secondary road system into farm-to-market roads and area service roads. Farm-to-market roads, which connect principal traffic generating areas to primary roads or to other farm-to-market roads, are maintained by the route's respective county and are paid for by a special fund. The Farm-to-Market Road Fund consists of federal secondary road aid and 8% of Iowa's road use taxes. The farm-to-market road system is limited to .\n", "Section::::Plot.:Act II.\n\nAt the 4-H national fair, Carolyn worries about Stevie the steer, and Michael fights with Bud (\"State Road 21\"). Meanwhile, at the farm, Robert is in bed with a still-sleeping Francesca (\"Who We Are and Who We Want to Be\"). While preparing for their trip to Des Moines, Francesca explains to Robert how she came to live on an Iowa farm (\"Almost Real\"). Meanwhile, Charlie and Marge—having seen Robert's truck at Francesca's home the whole night before—know that they have spent the night together.\n", "BULLET::::- \"Development\" – Regional planning efforts have resulted in almost $8.4 billion in new infrastructure and improvements to the transmission system. The development of the transmission system supports the reliable operation of the interconnected grid and maintains a competitive energy market.\n\nSection::::Workforce.\n", "There are six different types of highways maintained by NDOT as part of the overall state highway system. In addition to Interstates, U.S. Routes, and State Highways, the state also maintains a system of Link and Spur highways as well as Recreational Roads.\n\nSection::::Highway systems.:Spurs and links.\n", "At the state level, state Departments of Transportation (DOTs) are primarily responsible for planning, designing, constructing, and maintaining the highway system within the state. As part of the FAST Act, states were given additional roles and responsibilities for freight planning. States are now required to establish a State freight advisory committee as well as develop a comprehensive State freight plan.\n", "In addition to the routes of the Interstate system, there are those of the U.S. highway system, not to be confused with the above-mentioned National Highway System. These networks are further supplemented by State Highways, and the local roads of counties, municipal streets, and federal agencies, such as the Bureau of Indian Affairs. There are approximately of roads in the United States, paved and unpaved. State highways are constructed by each state, but frequently maintained by county governments aided by funding from the state, where such counties exist as governing entities (nearly every state except those in the Northeast). Counties construct and maintain all remaining roads outside cities, except in private communities. Local, unnumbered roads are often constructed by private contractors to local standards, and maintenance is then assumed by the local government.\n", "In the 1950s, the passage of the Federal Aid Highway Act, which established the Interstate Highway System, provided an infusion of funding to Nebraska and allowed it to construct new highways as part of the new system. This included Interstate 80, which travels across the state.
The I-80 mainline was completed in 1974 at a cost of $390 million, making Nebraska the first state in the nation to complete its mainline contribution to the interstate system.\n\nSection::::Highway systems.\n", "Under the provisions of the Byrd Road Act of 1932, the secondary roads in most of Virginia's counties are maintained by the Virginia Department of Transportation, an arrangement that a 1998 study found \"unusual among the 50 states.\" (The study also identified issues such as drainage, speed limits, and the planning and coordination of roads with development as ones that local leaders felt should be within their control.)\n", "The term \"Highways\" in the U.S. even includes major paved roads that serve purposes similar to those of the U.S. Highways or Interstate Highways, but which are completely designed, paid for, and maintained by state or local governments. An example of this is Tennessee Highway 840, which is a long, partially completed \"urban bypass\" of Nashville, TN that is a multi-lane, controlled-access highway entirely designed and paid for by Tennessee. Much of the traffic on it will eventually come from Interstate 40, completely avoid the big city, and then return to Interstate 40. Incidentally, Tennessee-840 also has connections with Interstate 24 and Interstate 65, where both of the freeway interchanges are already finished, as well as the eastern interchange with Interstate 40.\n", "In the United States, many projects in the various states and communities are partially funded with federal grants with a requirement for matching funds. For example, the Interstate Highway System was primarily built with a mix of 90% FHWA funds from the Highway Trust Fund and 10% matching state DOT funds. In some cases, borrowed money may be used to meet criteria for a matching grant; the $550 million Canadian federal government investment to connect a Detroit River International Crossing to Interstate 75 in Michigan qualifies the state for US$2 billion in US federal matching grants that can rebuild other Michigan highways, even though the Canadian money is nominally a loan, to be repaid by tolls on the new bridge.\n", "Section::::Financing.\n\nInterstate highways and their rights of way are owned by the state in which they were built. The last federally owned portion of the Interstate System was the Woodrow Wilson Bridge on the Washington Capital Beltway. The new bridge was completed in 2009 and is collectively owned by Virginia and Maryland. Maintenance is generally the responsibility of the state department of transportation. However, there are some segments of Interstate owned and maintained by local authorities.\n", "Section::::Background.:Rural Electrification.\n", "Section::::History.\n\nSection::::History.:1800s.\n", "The state highway network is the principal road infrastructure connecting New Zealand urban centres. It is administered by the NZ Transport Agency. The majority of smaller or urban roads are managed by city or district councils, although some fall under the control of other authorities, such as the New Zealand Department of Conservation or port and airport authorities.\n\nNew Zealand has left-hand traffic on its roads.\n\nSection::::Road transport.:History.\n", "The Transport compartment contains a network of conveyance elements (channels, pipes, pumps, and regulators) and storage/treatment units that transport water to outfalls or to treatment facilities.
Inflows to this compartment can come from surface runoff, groundwater interflow, sanitary dry-weather flow, or from user-defined hydrographs. The components of the Transport compartment are modeled with Node and Link objects.\n\nNot all compartments need to appear in a particular SWMM model. For example, one could model just the transport compartment, using pre-defined hydrographs as inputs. If you use kinematic wave routing, then the nodes do not need to contain an outfall.\n", "The System Operator function may be owned by the transmission grid company, or may be fully independent. They are often wholly or partly owned by state or national governments. In many cases they are independent of electricity generation companies (upstream) and electricity distribution companies (downstream). They are financed either by the states or countries or by charging a toll proportional to the energy they carry.\n", "Section::::Plot summary.\n\nThe Kwimper family of Cranberry County, New Jersey is on a vacation in Columbiana when their car runs out of gas. Somewhere along the way, the Kwimpers had made a wrong turn and ended up on an unfinished highway. While waiting for assistance to arrive, they set up shacks on the side of the road.\n", "A few counties directly provide public transportation themselves, usually in the form of a simple bus system. However, in most counties, public transportation is provided by one of the following: a special-purpose district that is coterminous with the county (but exists separately from the county government), a multi-county regional transit authority, or a state agency.\n\nSection::::Scope of power.:Broad scope.\n\nIn western and southern states, more populated counties provide many facilities, such as airports, convention centers, museums, recreation centers,\n", "Many limited-access toll highways that had been built prior to the Interstate Highway Act were incorporated into the Interstate system (for example, the Ohio Turnpike carries portions of Interstate 76, I-80 and I-90). For major turnpikes in New York, New Jersey, Pennsylvania, Ohio, Indiana, Illinois, Kansas, Oklahoma, Massachusetts, New Hampshire, Maine and West Virginia, tolls continue to be collected, even though the turnpikes have long since been paid for. The money collected is used for highway maintenance, turnpike improvement projects and states' general funds. (That is not the case in Massachusetts, where the state constitution requires the money be used for transportation.) In addition, there are several major toll bridges and toll tunnels included in the Interstate system, including four bridges in the San Francisco Bay Area, ones linking Delaware with New Jersey, New Jersey with New York, New Jersey with Pennsylvania, the Upper and Lower Peninsulas of Michigan, and Indiana with Kentucky. Tolls collected on Interstate Highways remain on segments of I-95, I-94, I-90, I-88, I-87, I-80, I-77, I-76, I-64, I-44, I-294, I-355 and several others.\n" ]
[ "States cannot deliver utilities or resources to other states." ]
[ "States can come to agreements that allow certain areas to be serviced from outside of the state. " ]
[ "false presupposition" ]
[ "States cannot deliver utilities or resources to other states." ]
[ "false presupposition" ]
[ "States can come to agreements that allow certain areas to be serviced from outside of the state. " ]
2018-10332
Can bacteria or viruses get disease themselves?
There are bacteriophages, which are viruses that infect bacteria. There are also bacteria that infect bacteria, and there are viruses that rely on other viruses to become infectious. Bacteria don’t really “get” diseases the way humans do; they’re single-celled and not complex enough for that. But if their DNA or cellular contents are damaged badly enough, they can die. Viruses aren’t alive. They’re more like those horrible 2006 spam chain emails where you send them to your friends or you’ll get cursed. They don’t have a non-diseased state.
[ "Bacteria can often be killed by antibiotics, which are usually designed to destroy the cell wall. This expels the pathogen's DNA, making it incapable of producing proteins and causing the bacteria to die. A class of bacteria without cell walls is mycoplasma (a cause of lung infections). A class of bacteria which must live within other cells (obligate intracellular parasitic) is chlamydia (genus), the world leader in causing sexually transmitted infection (STI).\n\nSection::::Types of pathogens.:Viral.\n\nSome of the diseases that are caused by viral pathogens include smallpox, influenza, mumps, measles, chickenpox, ebola, and rubella.\n", "It is common to speak of an entire species of bacteria as pathogenic when it is identified as the cause of a disease \"(cf. Koch's postulates)\". However, the modern view is that pathogenicity depends on the microbial ecosystem as a whole. A bacterium may participate in opportunistic infections in immunocompromised hosts, acquire virulence factors by plasmid infection, become transferred to a different site within the host, or respond to changes in the overall numbers of other bacteria present. For example, infection of mesenteric lymph glands of mice with \"Yersinia\" can clear the way for continuing infection of these sites by \"Lactobacillus\", possibly by a mechanism of \"immunological scarring\".\n", "Although the vast majority of bacteria are harmless or beneficial to one's body, a few pathogenic bacteria can cause infectious diseases. The most common bacterial disease is tuberculosis, caused by the bacterium \"Mycobacterium tuberculosis\", which affects about 2 million people mostly in sub-Saharan Africa. Pathogenic bacteria contribute to other globally important diseases, such as pneumonia, which can be caused by bacteria such as \"Streptococcus\" and \"Pseudomonas\", and foodborne illnesses, which can be caused by bacteria such as \"Shigella\", \"Campylobacter\", and \"Salmonella\". Pathogenic bacteria also cause infections such as tetanus, typhoid fever, diphtheria, syphilis, and Hansen's disease. They typically range between 1 and 5 micrometers in length.\n", "If bacteria form a parasitic association with other organisms, they are classed as pathogens. Pathogenic bacteria are a major cause of human death and disease and cause infections such as tetanus, typhoid fever, diphtheria, syphilis, cholera, foodborne illness, leprosy and tuberculosis. A pathogenic cause for a known medical disease may only be discovered many years after, as was the case with \"Helicobacter pylori\" and peptic ulcer disease. Bacterial diseases are also important in agriculture, with bacteria causing leaf spot, fire blight and wilts in plants, as well as Johne's disease, mastitis, salmonella and anthrax in farm animals.\n", "Among the many varieties of microorganisms, relatively few cause disease in otherwise healthy individuals. Infectious disease results from the interplay between those few pathogens and the defenses of the hosts they infect. The appearance and severity of disease resulting from any pathogen, depends upon the ability of that pathogen to damage the host as well as the ability of the host to resist the pathogen. However a host's immune system can also cause damage to the host itself in an attempt to control the infection. 
Clinicians therefore classify infectious microorganisms or microbes according to the status of host defenses – either as \"primary pathogens\" or as \"opportunistic pathogens\":\n", "\"Streptococcus\" and \"Staphylococcus\" are part of the normal skin microbiota and typically reside on healthy skin or in the nasopharyngeal region. Yet these species can potentially initiate skin infections. They are also able to cause sepsis, pneumonia or meningitis. These infections can become quite serious, creating a systemic inflammatory response resulting in massive vasodilation, shock, and death.\n\nOther bacteria are opportunistic pathogens and cause disease mainly in people suffering from immunosuppression or cystic fibrosis. Examples of these opportunistic pathogens include \"Pseudomonas aeruginosa\", \"Burkholderia cenocepacia\", and \"Mycobacterium avium\".\n\nSection::::Diseases.:Intracellular.\n", "On the molecular and cellular level, microbes can infect the host and divide rapidly, causing disease by their presence and by causing a homeostatic imbalance in the body, or by secreting toxins that cause symptoms to appear. Viruses can also infect the host with virulent DNA, which can affect normal cell processes (transcription, translation, etc.), protein folding, or evasion of the immune response.\n\nSection::::Pathogenicity.\n\nSection::::Pathogenicity.:Pathogen history.\n", "Section::::Pathogenicity.:Types of pathogens.\n\nPathogens include bacteria, fungi, protozoa, helminths, and viruses. Each of these different types of organisms can then be further classified as a pathogen based on its mode of transmission. This includes the following: foodborne, airborne, waterborne, bloodborne, and vector-borne. Many pathogenic bacteria, such as the foodborne \"Staphylococcus aureus\" and \"Clostridium botulinum\", secrete toxins into the host to cause symptoms. HIV and hepatitis B are viral infections caused by bloodborne pathogens. \"Aspergillus\", the most common pathogenic fungus, secretes aflatoxin, which acts as a carcinogen and contaminates many foods, especially those grown underground (nuts, potatoes, etc.).\n", "Viral infections can cause disease in humans, animals and even plants. However, they are usually eliminated by the immune system, conferring lifetime immunity to the host for that virus. Antibiotics have no effect on viruses, but antiviral drugs have been developed to treat life-threatening infections. Vaccines that produce lifelong immunity can prevent some viral infections.\n", "Microorganisms are the causative agents (pathogens) in many infectious diseases. The organisms involved include pathogenic bacteria, causing diseases such as plague, tuberculosis and anthrax; protozoan parasites, causing diseases such as malaria, sleeping sickness, dysentery and toxoplasmosis; and also fungi causing diseases such as ringworm, candidiasis or histoplasmosis. However, other diseases such as influenza, yellow fever or AIDS are caused by pathogenic viruses, which are not usually classified as living organisms and are not, therefore, microorganisms by the strict definition. No clear examples of archaean pathogens are known, although a relationship has been proposed between the presence of some archaean methanogens and human periodontal disease.\n", "The symptoms of disease appear as pathogenic bacteria damage host tissues or interfere with their function. The bacteria can damage host cells directly.
They can also cause damage indirectly by provoking an immune response that inadvertently damages host cells.\n\nSection::::Mechanisms of damage.:Direct.\n", "In certain cases, infectious diseases may be asymptomatic for much or even all of their course in a given host. In the latter case, the disease may only be defined as a \"disease\" (which by definition means an illness) in hosts who secondarily become ill after contact with an asymptomatic carrier. An infection is not synonymous with an infectious disease, as some infections do not cause illness in a host.\n\nSection::::Signs and symptoms.:Bacterial or viral.\n", "The vast majority of bacteria, which typically range between 1 and 5 micrometers in length, are harmless or beneficial to humans. However, a relatively small list of pathogenic bacteria can cause infectious diseases. One of the bacterial diseases with the highest disease burden is tuberculosis, caused by the bacterium \"Mycobacterium tuberculosis\", which kills about 2 million people a year, mostly in sub-Saharan Africa. Pathogenic bacteria contribute to other globally significant diseases, such as pneumonia, which can be caused by bacteria such as \"Streptococcus\" and \"Pseudomonas\", and foodborne illnesses, which can be caused by bacteria such as \"Shigella\", \"Campylobacter\", and \"Salmonella\". Pathogenic bacteria also cause infections such as tetanus, typhoid fever, diphtheria, syphilis, and leprosy.\n", "The human virome is a part of our bodies and will not always cause harm. Many latent and asymptomatic viruses are present in the human body all the time. Viruses infect all life forms; therefore the bacterial, plant, and animal cells and material in our gut also carry viruses. When viruses cause harm by infecting the cells in the body, a symptomatic disease may develop. Contrary to common belief, harmful viruses may be in the minority compared to benign viruses in the human body. It is much harder to identify viruses than it is to identify bacteria, therefore our understanding of benign viruses in the human body is very rudimentary.\n", "Only some diseases such as influenza are contagious and commonly believed infectious. The micro-organisms that cause these diseases are known as pathogens and include varieties of bacteria, viruses, protozoa and fungi. Infectious diseases can be transmitted, e.g. by hand-to-mouth contact with infectious material on surfaces, by bites of insects or other carriers of the disease, and from contaminated water or food (often via fecal contamination), etc. Also, there are sexually transmitted diseases. In some cases, microorganisms that are not readily spread from person to person play a role, while other diseases can be prevented or ameliorated with appropriate nutrition or other lifestyle changes.\n", "Each species of pathogen has a characteristic spectrum of interactions with its human hosts. Some organisms, such as \"Staphylococcus\" or \"Streptococcus\", can cause skin infections, pneumonia, meningitis and even overwhelming sepsis, a systemic inflammatory response producing shock, massive vasodilation and death. Yet these organisms are also part of the normal human flora and usually exist on the skin or in the nose without causing any disease at all. Other organisms invariably cause disease in humans, such as the Rickettsia, which are obligate intracellular parasites able to grow and reproduce only within the cells of other organisms. One species of Rickettsia causes typhus, while another causes Rocky Mountain spotted fever. 
\"Chlamydia\", another phylum of obligate intracellular parasites, contains species that can cause pneumonia, or urinary tract infection and may be involved in coronary heart disease. Finally, some species, such as \"Pseudomonas aeruginosa\", \"Burkholderia cenocepacia\", and \"Mycobacterium avium\", are opportunistic pathogens and cause disease mainly in people suffering from immunosuppression or cystic fibrosis.\n", "Section::::Pathogenicity in humans.\n", "Host–pathogen interaction\n\nThe host-pathogen interaction is defined as how microbes or viruses sustain themselves within host organisms on a molecular, cellular, organismal or population level. This term is most commonly used to refer to disease-causing microorganisms although they may not cause illness in all hosts. Because of this, the definition has been expanded to how known pathogens survive within their host, whether they cause disease or not.\n", "Eukaryotic pathogens are often capable of sexual interaction by a process involving meiosis and syngamy. Meiosis involves the intimate pairing of homologous chromosomes and recombination between them. Examples of eukaryotic pathogens capable of sex include the protozoan parasites \"Plasmodium falciparum\", \"Toxoplasma gondii\", \"Trypanosoma brucei\", \"Giardia intestinalis\", and the fungi \"Aspergillus fumigatus\", \"Candida albicans\" and \"Cryptococcus neoformans\".\n", "Many pathogens are capable of sexual interaction. Among pathogenic bacteria sexual interaction occurs between cells of the same species by the process of natural genetic transformation. Transformation involves the transfer of DNA from a donor cell to a recipient cell and the integration of the donor DNA into the recipient genome by recombination. Examples of bacterial pathogens capable of natural transformation are \"Helicobacter pylori\", \"Haemophilus influenzae\", \"Legionella pneumophila\", \"Neisseria gonorrhoeae\" and \"Streptococcus pneumoniae\".\n", "Animal pathogens are disease-causing agents of wild and domestic animal species, at times including humans.\n\nSection::::Virulence.\n", "There are several pathways through which pathogens can invade a host. The principal pathways have different episodic time frames, but soil has the longest or most persistent potential for harboring a pathogen. Diseases in humans that are caused by infectious agents are known as pathogenic diseases, though not all diseases are caused by pathogens. Some diseases, such as Huntington's disease, are caused by inheritance of abnormal genes.\n\nSection::::Pathogenicity.\n", "Pathogenic bacteria\n\nPathogenic bacteria are bacteria that can cause disease. This article deals with human pathogenic bacteria. Although most bacteria are harmless or often beneficial, some are pathogenic, with the number of species estimated as fewer than a hundred that are seen to cause infectious diseases in humans. By contrast, several thousand species exist in the human digestive system.\n", "Disease can arise if the host's protective immune mechanisms are compromised and the organism inflicts damage on the host. Microorganisms can cause tissue damage by releasing a variety of toxins or destructive enzymes. For example, \"Clostridium tetani\" releases a toxin that paralyzes muscles, and staphylococcus releases toxins that produce shock and sepsis. Not all infectious agents cause disease in all hosts. For example, less than 5% of individuals infected with polio develop disease. On the other hand, some infectious agents are highly virulent. 
The prion causing mad cow disease and Creutzfeldt–Jakob disease invariably kills all animals and people that are infected.\n", "Pathogenicity is the ability of one organism to cause disease in another. There is a specialized field of study in virology, called viral pathogenesis, which studies how viruses infect their hosts at the molecular and cellular level.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04564
Does stored nuclear material deplete as fast as nuclear material in a generator?
Yes, it can be and probably is stockpiled. Once enriched uranium fuel pellets are used in a reactor and the fission process is started, that's when the fuel becomes intensely radioactive. YouTube "How It's Made: nuclear fuel." On another note, you can literally hug a new fuel assembly, as it's relatively harmless; once it's been in a reactor and gone through fission, it will kill you before you can get close to it.
[ "According to the work of corrosion electrochemist David W. Shoesmith, the nanoparticles of Mo-Tc-Ru-Pd have a strong effect on the corrosion of uranium dioxide fuel. For instance his work suggests that when hydrogen (H) concentration is high (due to the anaerobic corrosion of the steel waste can), the oxidation of hydrogen at the nanoparticles will exert a protective effect on the uranium dioxide. This effect can be thought of as an example of protection by a sacrificial anode, where instead of a metal anode reacting and dissolving it is the hydrogen gas that is consumed.\n\nSection::::Disposal.\n", "If using a thorium fuel to produce fissile U-233, the SNF (Spent Nuclear Fuel) will have U-233, with a half-life of 159,200 years (unless this uranium is removed from the spent fuel by a chemical process). The presence of U-233 will affect the long-term radioactive decay of the spent fuel. If compared with MOX fuel, the activity around one million years in the cycles with thorium will be higher due to the presence of the not fully decayed U-233.\n\nFor natural uranium fuel:\n", "One example of how this process could be detected in PWRs, is that during these periods, there would be a considerable amount of down time, that is, large stretches of time that the reactor is not producing electricity to the grid. On the other hand, the modern definition of \"reactor grade\" plutonium is produced only when the reactor is run at high burnups and therefore producing a high electricity generating capacity factor. According to the US Energy Information Administration (EIA), in 2009 the capacity factor of US nuclear power stations was higher than all other forms of energy generation, with nuclear reactors producing power approximately 90.3% of the time and Coal thermal power plants at 63.8%, with down times being for simple routine maintenance and refuelling.\n", "In once-through nuclear fuel cycles, higher burnup reduces the number of elements that need to be buried. However, short-term heat emission, one deep geological repository limiting factor, is predominantly from medium-lived fission products, particularly Cs (30.08 year half life) and Sr (28.9 year half life). As there are proportionately more of these in high-burnup fuel, the heat generated by the spent fuel is roughly constant for a given amount of energy generated.\n", "Fissile component starts at 0.71% U concentration in natural uranium. At discharge, total fissile component is still 0.50% (0.23% U, 0.27% fissile Pu, Pu) Fuel is discharged not because fissile material is fully used-up, but because the neutron-absorbing fission products have built up and the fuel becomes significantly less able to sustain a nuclear reaction.\n\nSome natural uranium fuels use chemically active cladding, such as Magnox, and need to be reprocessed because long-term storage and disposal is difficult.\n\nSection::::Nature of spent fuel.:Minor actinides.\n", "Coal and nuclear power plants do not change production to match power consumption demands since it is more economical to operate them at constant production levels, and not all power plants are designed for it. However, some nuclear power stations, such as those in France, are physically capable of being used as load following power plants and do alter their output, to some degree, to help meet varying demands. \n", "An example of this effect is the use of nuclear fuels with thorium. 
Th-232 is a fertile material that can undergo a neutron capture reaction and two beta minus decays, resulting in the production of fissile U-233. Its radioactive decay will strongly influence the long-term activity curve of the SNF around a million years. A comparison of the activity associated to U-233 for three different SNF types can be seen in the figure on the top right. The burnt fuels are Thorium with Reactor-Grade Plutonium (RGPu), Thorium with Weapons-Grade Plutonium (WGPu) and Mixed Oxide fuel (MOX, no thorium). For RGPu and WGPu, the initial amount of U-233 and its decay around a million years can be seen. This has an effect in the total activity curve of the three fuel types. The initial absence of U-233 and its daughter products in the MOX fuel results in a lower activity in region 3 of the figure on the bottom right, whereas for RGPu and WGPu the curve is maintained higher due to the presence of U-233 that has not fully decayed. Nuclear reprocessing can remove the actinides from the spent fuel so they can be used or destroyed (see Long-lived fission product#Actinides).\n", "Plutonium-239 present in reactor fuel can absorb neutrons and fission just as uranium-235 can. Since plutonium-239 is constantly being created in the reactor core during operation, the use of plutonium-239 as nuclear fuel in power plants can occur without reprocessing of spent fuel; the plutonium-239 is fissioned in the same fuel rods in which it is produced. Fissioning of plutonium-239 provides about one-third of the total energy produced in a typical commercial nuclear power plant. Reactor fuel would accumulate much more than 0.8% plutonium-239 during its service life if some plutonium-239 were not constantly being “burned off” by fissioning.\n", "The spent fuel is primarily composed of uranium, most of which has not been consumed or transmuted in the nuclear reactor. At a typical concentration of around 96% by mass in the used nuclear fuel, uranium is the largest component of used nuclear fuel. The composition of reprocessed uranium depends on the time the fuel has been in the reactor, but it is mostly uranium-238, with about 1% uranium-235, 1% uranium-236 and smaller amounts of other isotopes including uranium-232. However, reprocessed uranium is also a waste product because it is contaminated and undesirable for reuse in reactors. During its irradiation in a reactor, uranium is profoundly modified. The uranium that leaves the reprocessing plant contains all the isotopes of uranium between uranium-232 and uranium-238 except uranium-237, which is rapidly transformed into neptunium-237. The undesirable isotopic contaminants are:\n", "Since nuclear fuel is used for several years (burnup) in a nuclear power plant, the final amount of Sm in the spent nuclear fuel at discharge is only a small fraction of the total Sm produced during the use of the fuel. \n", "A 2011 study by the National Renewable Energy Laboratory found that nuclear plants with cooling towers consumed 672 gal/MWhr. 
The water consumption intensity for nuclear was similar to that for coal electricity (687 gal/MWhr), lower than the consumption rates for concentrating solar power (865 gal/MWhr for CSP trough, 786 gal/MWhr for CSP tower), and higher than that of electricity generated by natural gas (198 gal/MWhr).\n", "One third of the energy/fissions at the \"end\" of the practical fuel life in a thermal reactor come from plutonium. The end of cycle occurs when the percentage of U-235, the primary fuel that drives the neutron economy inside the reactor, drops; that drop necessitates fresh fuel. So, without design change, one third of the fissile material in a \"new\" fuel load can be fissile reactor-grade plutonium, with one third less low-enriched uranium needing to be added to continue the chain reactions anew, thus achieving a partial recycling.\n", "Mining of uranium ore can disrupt the environment around the mine. Disposal of spent fuel is controversial, with many proposed long-term storage schemes under intense review and criticism. Diversion of fresh or spent fuel to weapons production presents a risk of nuclear proliferation. Finally, the structure of the reactor itself becomes radioactive and will require decades of storage before it can be economically dismantled and in turn disposed of as waste.\n\nSection::::Renewable energy.\n", "Although the life cycle assessments of each energy source should attempt to cover the full life cycle of the source from cradle to grave, they are generally limited to the construction and operation phases. The most rigorously studied phases are those of material and fuel mining, construction, operation, and waste management. However, missing life cycle phases exist for a number of energy sources. At times, assessments variably and sometimes inconsistently include the global warming potential that results from decommissioning the energy-supplying facility once it has reached its designed life-span. This includes the global warming potential of the process to return the power-supply site to greenfield status. For example, the process of hydroelectric dam removal is usually excluded, as it is a rare practice with little practical data available. Dam removal, however, may become increasingly common as dams age. An example of this is the decommissioning of the Bull Run Hydroelectric Project, which was the largest concrete dam ever removed in the United States as of 2012. Larger dams, such as the Hoover Dam and the Three Gorges Dam, are intended to last \"forever\" with the aid of maintenance, a period that is not quantified. Therefore, decommissioning estimates are generally omitted for some energy sources, while other energy sources include a decommissioning phase in their assessments.\n", "Long-lived radioactive waste from the back end of the fuel cycle is especially relevant when designing a complete waste management plan for SNF. When looking at long-term radioactive decay, the actinides in the SNF have a significant influence due to their characteristically long half-lives. Depending on what a nuclear reactor is fueled with, the actinide composition in the SNF will be different.\n", "After shutting down, for some time the reactor still needs external energy to power its cooling systems. Normally this energy is provided by the power grid to which the plant is connected, or by emergency diesel generators.
Failure to provide power for the cooling systems, as happened in Fukushima I, can cause serious accidents.\n\nNuclear safety rules in the United States \"do not adequately weigh the risk of a single event that would knock out electricity from the grid and from emergency generators, as a quake and tsunami recently did in Japan\", Nuclear Regulatory Commission officials said in June 2011.\n", "After \"spent nuclear fuel\" is removed from a light water reactor, it undergoes a complex decay profile, as each nuclide decays at a different rate. Due to a physical oddity referenced below, there is a large gap in the decay half-lives of fission products compared to transuranic isotopes. If the transuranics are left in the spent fuel, after 1,000 to 100,000 years the slow decay of these transuranics would generate most of the radioactivity in that spent fuel. Thus, removing the transuranics from the waste eliminates much of the long-term radioactivity of spent nuclear fuel.\n", "In one experiment, the zirconium is heated in steam to 1473 K; the sample is then slowly cooled in steam to 1173 K before being quenched in water. As the heating time at 1473 K is increased, the zirconium becomes more brittle and the L value declines.\n\nSection::::Hydriding and Waterside Corrosion.:Aging of steels.\n\nIrradiation causes the properties of steels to become poorer; for instance, SS316 becomes less ductile and less tough, and creep and stress corrosion cracking become worse. Papers on this effect continue to be published.\n\nSection::::Cracking and overheating of the fuel.\n", "A 2011 NREL study of water use in electricity generation concluded that the median nuclear plant with cooling towers consumed 672 gallons per megawatt-hour (gal/MWh), a usage similar to that of coal plants, but more than other generating technologies, except hydroelectricity (median reservoir evaporation loss of 4,491 gal/MWh) and concentrating solar power (786 gal/MWh for power tower designs, and 865 for trough). Nuclear plants with once-through cooling systems consume only 269 gal/MWh, but require withdrawal of 44,350 gal/MWh. This makes nuclear plants with once-through cooling susceptible to drought.\n", "In a typical nuclear fission reaction, 187 MeV of energy are released instantaneously in the form of kinetic energy from the fission products, kinetic energy from the fission neutrons, instantaneous gamma rays, or gamma rays from the capture of neutrons. An additional 23 MeV of energy are released at some time after fission from the beta decay of fission products. About 10 MeV of the energy released from the beta decay of fission products is in the form of neutrinos, and since neutrinos are very weakly interacting, this 10 MeV of energy will not be deposited in the reactor core. This results in 13 MeV (6.5% of the total fission energy) being deposited in the reactor core from delayed beta decay of fission products, at some time after any given fission reaction has occurred. In a steady state, this heat from delayed fission product beta decay contributes 6.5% of the normal reactor heat output (the arithmetic is worked out after this passage list).\n", "As drawdown flushing to remove sediment from the reservoir is not permitted, sediment initially accumulates in the reservoir at a faster pace. Once sediment has filled the reservoir to the point where the gross storage equals the permitted pondage/live storage for power generation, the dead storage level can be refixed per Annexure D (12) of the IWT at a lower level to facilitate drawdown flushing.
Thus further sediment accumulation in the reservoir is eliminated so that it does not affect the operating life of the power station.\n\nSection::::Benefits.\n", "At the end of the operating cycle, the fuel in some of the assemblies is \"spent\", having spent 4 to 6 years in the reactor producing power. This spent fuel is discharged and replaced with new (fresh) fuel assemblies. Though considered \"spent,\" these fuel assemblies contain a large quantity of fuel. In practice, it is economics that determines the lifetime of nuclear fuel in a reactor. Long before all possible fission has taken place, the reactor is unable to maintain 100% full output power, and therefore income for the utility drops as plant output power drops. Most nuclear plants operate at a very low profit margin due to operating overhead, mainly regulatory costs, so operating below 100% power is not economically viable for very long. The fraction of the reactor's fuel core replaced during refueling is typically one-third, but depends on how long the plant operates between refuelings. Plants typically operate on 18-month or 24-month refueling cycles. This means that one refueling, replacing only one-third of the fuel, can keep a nuclear reactor at full power for nearly two years. The disposition and storage of this spent fuel is one of the most challenging aspects of the operation of a commercial nuclear power plant. This nuclear waste is highly radioactive, and its toxicity presents a danger for thousands of years. After being discharged from the reactor, spent nuclear fuel is transferred to the on-site spent fuel pool. The spent fuel pool is a large pool of water that provides cooling and shielding of the spent nuclear fuel. Once the energy has decayed somewhat (approximately 5 years), the fuel can be transferred from the fuel pool to dry shielded casks that can be safely stored for thousands of years. After loading into dry shielded casks, the casks are stored on-site in a specially guarded facility in impervious concrete bunkers. On-site fuel storage facilities are designed to withstand the impact of commercial airliners, with little to no damage to the spent fuel. An average on-site fuel storage facility can hold 30 years of spent fuel in a space smaller than a football field.\n", "Section::::Radioactive waste.:Other waste.\n\nModerate amounts of low-level waste are produced through the chemical and volume control system (CVCS). This includes gas, liquid, and solid waste produced through the process of purifying the water through evaporation. Liquid waste is reprocessed continuously, and gas waste is filtered, compressed, stored to allow decay, diluted, and then discharged. The rate at which this is allowed is regulated, and studies must prove that such discharge does not violate dose limits to a member of the public (see radioactive effluent emissions).\n", "A great deal of work goes into the prevention of a serious core event. If such an event were to occur, three different physical processes are expected to increase the time between the start of the accident and the time when a large release of radioactivity could occur.
These three factors would provide additional time to the plant operators in order to mitigate the result of the event:\n", "The US NRC has stated that the commercial fleet of LWRs presently powering homes had an average burnup of approximately 35 GWd/MTU in 1995, while in 2015 the average had improved to 45 GWd/MTU.\n\nThe odd-numbered fissile plutonium isotopes present in spent nuclear fuel, such as Pu-239, decrease significantly as a percentage of the total composition of all plutonium isotopes (which was 1.11% in the first example above) as higher and higher burnups take place, while the even-numbered non-fissile plutonium isotopes (e.g. Pu-238, Pu-240 and Pu-242) increasingly accumulate in the fuel over time.\n" ]
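As a rough illustration of the decay behavior described in these passages, here is a minimal Python sketch of the exponential decay law N(t) = N0 · 2^(−t/t½), applied to the two medium-lived fission products cited above. It tracks single isotopes only, ignoring decay chains and daughter products.

# Half-lives (in years) of the medium-lived fission products named above.
HALF_LIVES_YEARS = {"Cs-137": 30.08, "Sr-90": 28.9}

def fraction_remaining(t_years, t_half_years):
    # Exponential decay: N(t) = N0 * 2**(-t / t_half).
    return 2.0 ** (-t_years / t_half_years)

for isotope, t_half in HALF_LIVES_YEARS.items():
    for t in (10, 30, 100, 300):
        print(f"{isotope}: {fraction_remaining(t, t_half):.4f} of initial amount after {t} years")

After roughly ten half-lives (about 300 years) these isotopes are essentially gone, which is consistent with the passages' point that the radioactivity remaining after 1,000 to 100,000 years is dominated by the slowly decaying transuranics instead.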
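The 6.5% decay-heat figure in the fission-energy passage can be reproduced directly from the MeV values it quotes:

prompt_mev = 187        # deposited promptly per fission
delayed_beta_mev = 23   # released later by beta decay of fission products
neutrino_mev = 10       # portion of the delayed energy carried off by neutrinos

deposited_delayed = delayed_beta_mev - neutrino_mev   # 13 MeV reaches the core
deposited_total = prompt_mev + deposited_delayed      # 200 MeV deposited in total
print(f"{deposited_delayed / deposited_total:.1%}")   # 6.5%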
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02963
How does facial recognition recognize me when my face is bruised/swollen?
Facial recognition uses measurements of parts of your face that don’t move: the corners of your eyes, the tip of the nose, the distance between nostrils, and the angles between these points. Swelling is unlikely to cause errors, but a broken nose that isn’t reset will. If you google facial recognition images, you’ll see a lot of line maps overlaid on faces. Your line map is what the system compares to its records.
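To make the idea concrete, here is a toy Python sketch of landmark-based matching: represent a face by the distances between points that don't move, normalize them so scale doesn't matter, and compare. All coordinates, landmark names, and the threshold are invented for illustration; real systems use far richer features and learned models.

import math

def signature(landmarks):
    # Pairwise distances between landmarks, normalized by inter-eye distance
    # so the signature is independent of image scale and position.
    names = sorted(landmarks)
    scale = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    return [math.dist(landmarks[a], landmarks[b]) / scale
            for i, a in enumerate(names) for b in names[i + 1:]]

def same_face(sig_a, sig_b, threshold=0.05):
    # Match if the mean absolute difference between signatures is small.
    diff = sum(abs(x - y) for x, y in zip(sig_a, sig_b)) / len(sig_a)
    return diff < threshold

enrolled = {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 60),
            "left_nostril": (45, 65), "right_nostril": (55, 65)}
# The same geometry shifted a few pixels: pairwise distances are unchanged
# by translation, so the probe still matches the enrolled record.
probe = {k: (x + 5, y + 3) for k, (x, y) in enrolled.items()}

print(same_face(signature(enrolled), signature(probe)))  # True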
[ "Studies using functional magnetic resonance imaging and electrocorticography have demonstrated that activity in the FFA codes for individual faces and the FFA is tuned for behaviorally relevant facial features. An electrocorticography study found that the FFA is involved in multiple stages of face processing, continuously from when people see a face until they respond to it, demonstrating the dynamic and important role the FFA plays as part of the face perception network.\n", "The reasonable alternative to feature detectors would be that cortical cells work as a network. Hence the recognition of a face results not from the feedback of one individual cell but rather from a large number of cells. One group of cells would be specifically inclined to excitatory responses for a given feature such as height, while another would be sensitive to movement. This has been partially proven, as three types of cells simple, complex and hypercomplex have been identified in the receptive fields of cells in the cortex.\n\nSection::::See also.\n\nBULLET::::- Neuroethology\n\nBULLET::::- Pattern recognition (psychology)\n", "A great deal of effort has been put into developing software that can recognize human faces. Much of the work has been done by a branch of artificial intelligence known as computer vision which uses findings from the psychology of face perception to inform software design. Recent breakthroughs using noninvasive functional transcranial Doppler spectroscopy as demonstrated by Njemanze, 2007, to locate specific responses to facial stimuli have led to improved systems for facial recognition. The new system uses input responses called cortical long-term potentiation (CLTP) derived from Fourier analysis of mean blood flow velocity to trigger target face search from a computerized face database system. Such a system provides for brain-machine interface for facial recognition, and the method has been referred to as cognitive biometrics.\n", "The face is the feature which best distinguishes a person. Specialized regions of the human brain, such as the fusiform face area (FFA), enable facial recognition; when these are damaged, it may be impossible to recognize faces even of intimate family members. The pattern of specific organs, such as the eyes, or of parts of them, is used in biometric identification to uniquely identify individuals.\n", "The FFA was discovered and continues to be investigated in humans using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies. Usually, a participant views images of faces, objects, places, bodies, scrambled faces, scrambled objects, scrambled places, and scrambled bodies. This is called a functional localizer. Comparing the neural response between faces and scrambled faces will reveal areas that are face-responsive, while comparing cortical activation between faces and objects will reveal areas that are face-selective.\n\nSection::::Function.\n", "Several case studies have reported that patients with lesions or tissue damage localized to this area have tremendous difficulty recognizing faces, even their own. Although most of this research is circumstantial, a study at Stanford University provided conclusive evidence for the fusiform gyrus' role in facial recognition. In a unique case study, researchers were able to send direct signals to a patient's fusiform gyrus. The patient reported that the faces of the doctors and nurses changed and morphed in front of him during this electrical stimulation. 
Researchers agree this demonstrates a convincing causal link between this neural structure and the human ability to recognize faces.\n", "In the last couple of years there have been advances in computer graphics and computer vision on modeling lighting and pose changes in facial imagery. These advances have led to the development of new computer algorithms that can automatically correct for lighting and pose changes in facial imagery. These new algorithms work by preprocessing a facial image to correct for lighting and pose prior to being processed through a face recognition system. The preprocessing portion of the FRGC will measure the impact of new preprocessing algorithms on recognition performance.\n", "An emerging use of facial recognition is in ID verification services. Many companies are now working in this market to provide these services to banks, ICOs, and other e-businesses.\n\nSection::::Application.:Mobile platforms.:Face ID.\n", "Faces and bodies are often perceived together, allowing humans to identify if an individual is familiar or not. Despite this almost simultaneous perception it is important to establish that they are processed by different structures, and thus perceived separately. This has been established as being distinct from the selectivity pattern of the region involved in the processing of faces. It has been proposed that this distinction between face recognition and body recognition is due to bodies providing contextual input to ambiguous stimuli which are later able to be perceived together as a whole.\n", "The Face ID hardware consists of a sensor with three modules; one, called the dot projector, projects a grid of small infrared dots onto a user's face; another module, called the flood illuminator, reads the resulting pattern and generates a 3D facial map; and the third is the infrared camera, which takes an infrared picture of the user. This map is compared with the registered face using a secure subsystem, and the user is authenticated if the two faces match sufficiently. The system can recognize faces with glasses, clothing, makeup, and facial hair, and adapts to changes in appearance over time.\n", "At the end of Phase I, five organizations were given the opportunity to test their face-recognition algorithm on the newly-created FERET database in order to compare how they performed against each other. The five principal investigators were:\n\nBULLET::::- MIT, led by Alex Pentland\n\nBULLET::::- Rutgers University, led by Joseph Wilder\n\nBULLET::::- The Analytic Science Company (TASC), led by Gale Gordon\n\nBULLET::::- The University of Illinois at Chicago (UIC) and the University of Illinois at Urbana-Champaign, led by Lewis Sadler and Thomas Huang\n\nBULLET::::- USC, led by Christoph von der Malsburg\n", "The study of prosopagnosia (an impairment in recognizing faces which is usually caused by brain injury) has been particularly helpful in understanding how normal face perception might work. Individuals with prosopagnosia may differ in their abilities to understand faces, and it has been the investigation of these differences which has suggested that several stage theories might be correct.\n", "Section::::Function.:Perception and recognition of faces.:Biological perspective.\n", "The technology learns from changes in a user's appearance, and therefore works with hats, scarves, glasses, and many sunglasses, beard and makeup.\n\nIt also works in the dark. 
This is done by using a \"Flood Illuminator\", which is a dedicated infrared flash that throws out invisible infrared light onto the user's face to properly read the 30,000 facial points.\n\nSection::::Application.:Deployment in security services.\n\nSection::::Application.:Deployment in security services.:Policing.\n", "The face is itself a highly sensitive region of the human body and its expression may change when the brain is stimulated by any of the many human senses, such as touch, temperature, smell, taste, hearing, movement, hunger, or visual stimuli.\n\nSection::::Structure.:Shape.\n", "Section::::Characters.:Galen.\n", "Section::::Techniques for face acquisition.:Facial recognition combining different techniques.\n\nAs every method has its advantages and disadvantages, technology companies have amalgamated the traditional, 3D recognition and Skin Textual Analysis, to create recognition systems that have higher rates of success.\n\nCombined techniques have an advantage over other systems. It is relatively insensitive to changes in expression, including blinking, frowning or smiling and has the ability to compensate for mustache or beard growth and the appearance of eyeglasses. The system is also uniform with respect to race and gender.\n\nSection::::Techniques for face acquisition.:Thermal cameras.\n", "Section::::History of facial recognition technology.\n\nPioneers of automated face recognition include Woody Bledsoe, Helen Chan Wolf, and Charles Bisson.\n", "Face recognition has been leveraged as a form of biometric authentication for various computing platforms and devices; Android 4.0 \"Ice Cream Sandwich\" added facial recognition using a smartphone's front camera as a means of unlocking devices, while Microsoft introduced face recognition login to its Xbox 360 video game console through its Kinect accessory, as well as Windows 10 via its \"Windows Hello\" platform (which requires an infrared-illuminated camera). Apple's iPhone X smartphone introduced facial recognition to the product line with its \"Face ID\" platform, which uses an infrared illumination system.\n", "Section::::Application.\n\nSection::::Application.:Mobile platforms.\n\nSection::::Application.:Mobile platforms.:Social media.\n\nSocial media platforms have adopted facial recognition capabilities to diversify their functionalities in order to attract a wider user base amidst stiff competition from different applications.\n", "Section::::Facial pattern recognition.\n\nRecognizing faces is one of the most common forms of pattern recognition. Humans are incredibly effective at remembering faces, but this ease and automaticity belies a very challenging problem. All faces are physically similar. Faces have two eyes, one mouth, and one nose all in predictable locations, yet humans can recognize a face from several different angles and in various lighting conditions. \n", "Firstly, the possible human eye regions are detected by testing all the valley regions in the gray-level image. Then the genetic algorithm is used to generate all the possible face regions which include the eyebrows, the iris, the nostril and the mouth corners.\n", "Section::::Technology.:Feature recognition.\n", "Surface Texture Analysis works much the same way facial recognition does. A picture is taken of a patch of skin, called a skinprint. That patch is then broken up into smaller blocks. Using algorithms to turn the patch into a mathematical, measurable space, the system will then distinguish any lines, pores and the actual skin texture. 
It can identify the contrast between identical pairs, which is not yet possible using facial recognition software alone.\n\nTests have shown that with the addition of skin texture analysis, performance in recognizing faces can increase 20 to 25 percent.\n", "After Bledsoe left PRI in 1966, this work was continued at the Stanford Research Institute, primarily by Peter Hart. In experiments performed on a database of over 2000 photographs, the computer consistently outperformed humans when presented with the same recognition tasks (Bledsoe 1968). Peter Hart (1996) enthusiastically recalled the project with the exclamation, \"It really worked!\"\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-05130
Why would a company spend money building offices in a leased space? Doesn't the landlord "own" and benefit from all of the enhancements?
There are two basic issues here: first, the benefits and drawbacks of owning vs. leasing, and second, the issue of "improvements" as you put it. The pros and cons of buying vs. renting/leasing are well understood and have to be weighed on a case-by-case basis. Generally leasing is better for cash flow because you pay month-to-month instead of needing to put down a large down payment up front. Also, with a lease you aren't tied to the same physical location forever; if you need more space you can move without having to find a buyer for your current space. As to improvements, I think you have a misunderstanding here. When a company comes into an empty space and builds rooms, offices, shared spaces, etc., it builds them to its own specific requirements. The next company to use that space would likely have a different set of requirements and would want a different arrangement, so it usually has to tear out what was built before and redo it. In that case the first tenant isn't really "improving" the space, because the space is actually more valuable empty than pre-built.
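A toy illustration of the cash-flow point, with entirely made-up numbers:

```python
# Hypothetical figures for one year of occupying the same office space.
purchase_down_payment = 400_000   # cash due up front when buying
mortgage_payment = 5_000          # per month after the down payment
lease_payment = 8_000             # per month, no large up-front outlay

buy_cash_year_one = purchase_down_payment + 12 * mortgage_payment
lease_cash_year_one = 12 * lease_payment

print(f"Buying:  ${buy_cash_year_one:,} out of pocket in year one")   # $460,000
print(f"Leasing: ${lease_cash_year_one:,} out of pocket in year one") # $96,000

# The lease is pricier per month, but it leaves roughly $364,000 free
# as working capital, which is often worth more to a growing business.
```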
[ "The committee suggested offering a floor space ratio bonus to Grosvenor for incorporation into other sites it owned in the City Centre, but subject to providing \"satisfactory guarantees...to ensure the preservation of the historic building and its continuous maintenance\".\n", "CAM charges are subject to wide variations as tenants move in and out and various inflationary items occur. This can make it difficult for both the tenant and landlord to predict their future cash flows with any accuracy. To address this, some leases include \"cap\" and \"floor\" terms which limit these changes to fixed values on a year-over-year basis.\n", "The National Audit Office of the UK has produced a guide to help Government Departments and public bodies to assess the case for flexible managed space instead of conventional office space.\n\nIn November 2014, a business report carried out by the Business Centre Association showed that serviced offices in the UK are using 70 million square feet of space, house around 80,000 businesses, provide over 400,000 jobs and generate in the region of £2bn to the UK economy.\n\nSection::::Client types.\n\nClients of serviced office facilities fall into the following categories:\n", "Open plan offices are often divided up into smaller offices for managers, meeting rooms, etc. When this happens the designer has to take into account several factors including:\n\nBULLET::::- Heating/cooling zoning\n\nBULLET::::- Ventilation\n\nBULLET::::- Lighting and light switches\n\nBULLET::::- Emergency lighting\n\nBULLET::::- Small power\n\nBULLET::::- Voice and data cabling\n\nBULLET::::- Fire alarms\n\nBULLET::::- Fire stopping\n\nBULLET::::- Fire escape routes\n\nBULLET::::- Noise/acoustics\n\nSection::::Staff welfare facilities.\n", "Each tenant pays their pro rata share of a property's total CAM charges, which prorated share is the percentage of the tenant's rented square footage of the total, rentable square footage of the property.\n", "BULLET::::- Higher office management costs (cleaning services, printer ink, office supplies and so on)\n\nBULLET::::- Faster wear and tear of office equipment\n\nBULLET::::- Potential NDA issues if the space isn't properly divided\n\nBULLET::::- Setup costs (dividing the space with fake walls)\n\nBULLET::::- Management Software costs (resource management, reception desk software, meeting room management and so on)\n\nThe arrangement can be particularly sensitive in the case of attorneys and MDs - in such cases, a legally-binding Office Sharing Agreement should be carefully considered and redacted.\n", "BULLET::::- The second advantage is from a strategic viewpoint: by charging an asset rent, the holding department can identify the performance of its real estate holdings. This can then be compared to an internal or external benchmark to help determine whether the company has adopted the most efficient tenure pattern for its properties.\n", "In England and Wales, some flat owners own shares in the company that owns the freehold of the building as well as holding the flat under a lease. This arrangement is commonly known as a \"share of freehold\" flat. The freehold company has the right to collect annual ground rents from each of the flat owners in the building. The freeholder can also develop or sell the building, subject to the usual planning and restrictions that might apply. 
This situation does not happen in Scotland, where long leasehold of residential property was formerly unusual, and is now impossible.\n", "The advantages to the developer include prime secure convenient locations on military installations, and the opportunity to provide sole-source services and products in lieu of rent for the ground lease. \n\nThe advantages to the federal agency include the possibility of fast-tracking alterations, repairs or new construction so that the improved space becomes available for lease. In-kind considerations or cash to no less than the fair market value of the property is provided in return by the developer.\n", "The third type of lease is described as a \"novated finance lease\" or \"non-maintained novated lease\" where only the novated lease itself is salary packaged, with none of the other running costs such as fuel, insurance or maintenance being paid by the employer. This arrangement is of little or no benefit to the employee as there is no change to the fringe benefit value, being based on the original purchase price, but any tax benefits on the other running costs are lost as they are not salary packaged. The effect of paying FBT or using ECM offsets most or all the potential tax benefits from salary packaging the lease rentals alone.\n", "The development was built on a speculative basis on the assumption that the office space would be taken by a handful of major corporate tenants. Legal & General's commission urged Piano to avoid designing a \"plain vanilla office building\" and called for the new development to be \"a fantastic place for people to work\". As an incentive, it offered to pay an extra 10% above the normal going rate for London office developments. Piano decided to take the commission because, as he put it, \"the client and the company involved were all about long lasting quality, without rushing. It is very difficult to do a job with somebody who has a short vision – in the end it never works.\" \n", "Building 30 is now called the \"Flex Building\" and building 20 is the \"Office Building\". They are keeping their options open. Here is what they say: \"Imagine a state of the art business center, with more than one million square feet of available space for office, manufacturing and distribution; a facility with high-tech infrastructure and easy access to transportation. Imagine a convenient location near Reading, Pennsylvania, with professional on-site management to support your business. Imagine your business at StonePointe Center.\" \n", "Landlord and Tenant negotiate CAM charges before signing the lease, so the charges vary from lease to lease, and operating costs that can be billed as CAM charges by the landlord vary from tenant to tenant. Generally, landlords want CAM charges defined so broadly that they can pass through a majority of their operating expenses to tenants. The tenant generally wants CAM charges defined narrowly in hopes that the landlord pays a majority of the operating costs.\n\nExamples of services often billed to tenants as CAM charges include portering, parking lot striping, parking lot lighting, and landscaping.\n", "It aimed to reconfigure the podium levels in order to use the space more efficiently. The nursery would move from on the mezzanine floor to on the ground floor with immediate access to outside play space. The mezzanine floor would be continued across the full width of the building making space for three four-bedroom, six person flats. 
The Dale Youth boxing club gained almost extra space by moving from the ground floor to the walkway level ( to ). Walkway + 1 level would be converted from offices, to four new four bedroom, 6 person flats.\n", "The setting of the film reflects a prevailing trend that Judge observed in the United States. \"It seems like every city now has these identical office parks with identical adjoining chain restaurants\", he said in an interview. \"There were a lot of people who wanted me to set this movie in Wall Street, or like the movie \"Brazil\", but I wanted it very unglamorous, the kind of bleak work situation like I was in\".\n", "Another complication involved in recoverable expense calculations occurs due to changes in occupancy. If an item was being shared equally among tenants based on their area relative to the total building area, when an area is unoccupied that means the remaining tenant's combined payments will no longer cover the entire expense. In some cases this is reasonable; the tenants would not be expected to pay more property taxes if the landlord does not rent out all the units. However, in the case where expenses are ultimately a function of the leased area, like electricity bill where one would expect the amount to vary based on the number of tenants actually using power, the calculation may be grossed up by dividing the tenant's area by the occupied building area, not the total area.\n", "Paragraph 40 supposed that \"\"... it is not open to the tenant to contend that article 8 could justify a different order from that which is mandated by the contractual relationship between the parties, at least where, as here, there are legislative provisions which the democratically elected legislature has decided properly balance the competing interests of private sector landlords and residential tenants.\"\n", "The D#-Broadbench curves around a single user, making physically close collaborative work difficult. The gentle curve helps to enhance concentration, while its massive size makes it unsuitable for the typical cubicle and perfect for a small closed office, like the one each and every software developer has at Microsoft.\n", "Wall's friend Bob Rennie, who has worked extensively with him, described the Wall formula as \"Great location, smaller suites. Put in a Sub-Zero fridge and a Wolf range with red knobs, and they'll line up to buy it\". In May 2008, Wall Corporation bought a building at 1212 Howe Street in downtown Vancouver. In charge of the building's sales and marketing campaign, Rennie claimed that it \"played right into Peter Wall's model of 'take a prime location and undersize the suites a bit' \".\n", "Despite these standards, the actual form of leasehold systems is variable. Highly favoured are arrangements where the leases are granted out of a freehold owned by a corporation, itself owned by individual leaseholders. This provides an opportunity for them to participate in the proper management of the block. Again, quality of management is very variable.\n", "Although the building would add of office space to the central business district and raise the premium-grade office floor space in the central business district by 24 per cent, Woodside was to occupy so much of it that only would be available to other tenants. By October 2003, building manager CB Richard Ellis had leased all but three floors of the building, after securing law firm Corrs Chambers Westgarth and the joint venture alliance between Transfield, Worley Limited and Woodside. 
This was reduced to less than two floors unleased in April 2004 when accounting firm Deloitte Touche Tohmatsu signed on as tenant, vacating its office in Central Park.\n", "NNN leased investments are generally leased to one single tenant and are thus referred to as STNLs or Single Tenant Net Leases. A NNN lease investment can however have two or more tenants, though it would not be considered an STNL investment. An example of this would be a Starbucks & MetroPCS which share a building under two separate NNN leases, or a retail strip center where all tenants are wrapped into one NNN lease. Both examples would be considered NNN leased investments; however they would not be STNLs. The risk of default is spread out over more than one tenant in such NNN deals (i.e. if either Starbucks or Metro PCS goes bankrupt, the other tenant continues to pay the rent due under their NNN lease). Such deals can appeal to investors seeking to spread risk, though the simplicity of collecting one rent check from one tenant is forfeited.\n", "At the same time, multitenancy increases the risks and impacts inherent in applying a new release version. As there is a single software instance serving multiple tenants, an update on this instance may cause downtime for all tenants even if the update is requested and useful for only one tenant. Also, some bugs and issues resulted from applying the new release could manifest in other tenants' personalized view of the application. Because of possible downtime, the moment of applying the release may be restricted depending on time usage schedule of more than one tenant.\n\nSection::::Requirements.\n\nSection::::Requirements.:Customization.\n", "In 2007, tenants sought a reduction in rents on the grounds that a reduction in building security constituted a reduction in building-wide services, and got a ruling in their favor from the DRA (Directory and Resource Administrator).\n", "To make things more confusing, it is also very common to include up front costs over and above the purchase cost, such as stamp duty, registration, the first year's comprehensive insurance, extended warranties and other insurances and fees, into the lease in a \"fully maintained novated lease\", since there will not have been sufficient time to set up the payments by the employer into the salary packaging account to cover those costs. For this reason it is also common for novated leases to have deferred payments, that is, the first one or two rentals are set at $0, with the remaining rentals increased to compensate.\n" ]
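The CAM passages above describe a simple pro-rata formula: each tenant's share is its rented square footage divided by the building's total rentable square footage. A minimal sketch with invented figures:

```python
# Hypothetical building: pro-rata CAM allocation as described above.
total_rentable_sqft = 50_000
annual_cam_costs = 200_000   # landscaping, parking lot lighting, etc.

tenants = {"Tenant A": 5_000, "Tenant B": 12_500, "Tenant C": 32_500}

for name, sqft in tenants.items():
    share = sqft / total_rentable_sqft          # pro-rata percentage
    print(f"{name}: {share:.1%} -> ${annual_cam_costs * share:,.0f}/yr")

# Gross-up variant from the passages: expenses that track actual usage
# (e.g. electricity) may be divided by the *occupied* area instead of
# the total area, so remaining tenants cover the whole bill.
occupied_sqft = sum(tenants.values())
print(f"Occupied: {occupied_sqft:,} sq ft (denominator for grossed-up items)")
```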
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01930
Why is herpes on your lip (cold sore) temporary while genital herpes is for life?
Herpes on your lip is not temporary. The breakouts (cold sores) are temporary, but once you're infected, it's permanent: the virus goes dormant in your nerve cells and can reactivate later, just like genital herpes.
[ "Cold sores are the result of the virus reactivating in the body. Once HSV-1 has entered the body, it never leaves. The virus moves from the mouth to remain latent in the central nervous system. In approximately one-third of people, the virus can \"wake up\" or reactivate to cause disease. When reactivation occurs, the virus travels down the nerves to the skin where it may cause blisters (cold sores) around the lips, in the mouth or, in about 10% of cases, on the nose, chin, or cheeks.\n", "As the virus continues to replicate and incolulate in great amounts, it can enter autonomic or sensory ganglia, where it travels within axons to reach ganglionic nerve bodies. HSV-1 most commonly infects the trigeminal ganglia, where it remains latent. If reactivated, it presents as herpes labialis, also known as cold sores.\n\nSection::::Diagnosis.\n\nSection::::Diagnosis.:Histopathology.\n", "Herpes labialis\n\nHerpes labialis, commonly known as cold sores, is a type of infection by the herpes simplex virus that affects primarily the lip. Symptoms typically include a burning pain followed by small blisters or sores. The first attack may also be accompanied by fever, sore throat, and enlarged lymph nodes. The rash usually heals within 10 days, but the virus remains dormant in the trigeminal ganglion. The virus may periodically reactivate to create another outbreak of sores in the mouth or lip.\n", "After a first episode of herpes genitalis caused by HSV-2, there will be at least one recurrence in approximately 80% of people, while the recurrence rate for herpes genitalis caused by HSV-1 is approximately 50%. Herpes genitalis caused by HSV-2 recurs on average four to six times per year, while that of HSV-1 infection occurs only about once per year.\n", "Cold sore outbreaks may be influenced by stress, menstruation, sunlight, sunburn, fever, dehydration, or local skin trauma. Surgical procedures such as dental or neural surgery, lip tattooing, or dermabrasion are also common triggers. HSV-1 can in rare cases be transmitted to newborn babies by family members or hospital staff who have cold sores; this can cause a severe disease called neonatal herpes simplex.\n\nThe colloquial term for this condition, \"cold sore\" comes from the fact that herpes labialis is often triggered by fever, for example, as may occur during an upper respiratory tract infection (i.e. a cold).\n", "HSV-1 and HSV-2 are transmitted by contact with an infected person who has reactivations of the virus. HSV-2 is periodically shed in the human genital tract, most often asymptomatically. Most sexual transmissions occur during periods of asymptomatic shedding. Asymptomatic reactivation means that the virus causes atypical, subtle, or hard-to-notice symptoms that are not identified as an active herpes infection, so acquiring the virus is possible even if no active HSV blisters or sores are present. In one study, daily genital swab samples found HSV-2 at a median of 12–28% of days among those who have had an outbreak, and 10% of days among those suffering from asymptomatic infection, with many of these episodes occurring without visible outbreak (\"subclinical shedding\").\n", "Herpes simplex is a viral infection caused by the herpes simplex virus. Infections are categorized based on the part of the body infected. Oral herpes involves the face or mouth. It may result in small blisters in groups often called cold sores or fever blisters or may just cause a sore throat. 
Genital herpes, often simply known as herpes, may have minimal symptoms or form blisters that break open and result in small ulcers. These typically heal over two to four weeks. Tingling or shooting pains may occur before the blisters appear. Herpes cycles between periods of active disease followed by periods without symptoms. The first episode is often more severe and may be associated with fever, muscle pains, swollen lymph nodes and headaches. Over time, episodes of active disease decrease in frequency and severity. Other disorders caused by herpes simplex include: herpetic whitlow when it involves the fingers, herpes of the eye, herpes infection of the brain, and neonatal herpes when it affects a newborn, among others.\n", "Genital herpes can be more difficult to diagnose than oral herpes, since most people have none of the classical symptoms. Further confusing diagnosis, several other conditions resemble genital herpes, including fungal infection, lichen planus, atopic dermatitis, and urethritis.\n\nSection::::Diagnosis.:Laboratory testing.\n", "It should not be confused with conditions caused by other viruses in the \"herpesviridae\" family such as herpes zoster, which is caused by varicella zoster virus. The differential diagnosis includes hand, foot and mouth disease due to similar lesions on the skin. Lymphangioma circumscriptum and dermatitis herpetiformis may also have a similar appearance.\n\nSection::::Prevention.\n\nAs with almost all sexually transmitted infections, women are more susceptible to acquiring genital HSV-2 than men. On an annual basis, without the use of antivirals or condoms, the transmission risk of HSV-2 from infected male to female is about 8–11%.\n", "There are two types of herpes simplex virus, type 1 (HSV-1) and type 2 (HSV-2). HSV-1 more commonly causes infections around the mouth while HSV-2 more commonly causes genital infections. They are transmitted by direct contact with body fluids or lesions of an infected individual. Transmission may still occur when symptoms are not present. Genital herpes is classified as a sexually transmitted infection. It may be spread to an infant during childbirth. After infection, the viruses are transported along sensory nerves to the nerve cell bodies, where they reside lifelong. Causes of recurrence may include: decreased immune function, stress, and sunlight exposure. Oral and genital herpes is usually diagnosed based on the presenting symptoms. The diagnosis may be confirmed by viral culture or detecting herpes DNA in fluid from blisters. Testing the blood for antibodies against the virus can confirm a previous infection but will be negative in new infections.\n", "Condoms offer moderate protection against HSV-2 in both men and women, with consistent condom users having a 30%-lower risk of HSV-2 acquisition compared with those who never use condoms. A female condom can provide greater protection than the male condom, as it covers the labia. The virus cannot pass through a synthetic condom, but a male condom's effectiveness is limited because herpes ulcers may appear on areas not covered by it. Neither type of condom prevents contact with the scrotum, anus, buttocks, or upper thighs, areas that may come in contact with ulcers or genital secretions during sexual activity. Protection against herpes simplex depends on the site of the ulcer; therefore, if ulcers appear on areas not covered by condoms, abstaining from sexual activity until the ulcers are fully healed is one way to limit risk of transmission. 
The risk is not eliminated, however, as viral shedding capable of transmitting infection may still occur while the infected partner is asymptomatic. The use of condoms or dental dams also limits the transmission of herpes from the genitals of one partner to the mouth of the other (or \"vice versa\") during oral sex. When one partner has a herpes simplex infection and the other does not, the use of antiviral medication, such as valaciclovir, in conjunction with a condom, further decreases the chances of transmission to the uninfected partner. Topical microbicides that contain chemicals that directly inactivate the virus and block viral entry are being investigated.\n", "The disease is typically spread by direct genital contact with the skin surface or secretions of someone who is infected. This may occur during sex, including anal and oral sex. Sores are not required for transmission to occur. The risk of spread between a couple is about 7.5% over a year. HSV is classified into two types, HSV-1 and HSV-2. While historically mostly caused by HSV-2, genital HSV-1 has become more common in the developed world. Diagnosis may occur by testing lesions using either PCR or viral culture or blood tests for specific antibodies.\n", "Herpes labialis infection occurs when the herpes simplex virus comes into contact with oral mucosal tissue or abraded skin of the mouth. Infection by the type 1 strain of herpes simplex virus (HSV-1) is most common; however, cases of oral infection by the type 2 strain are increasing. Specifically, type 2 has been implicated as causing 10–15% of oral infections.\n", "Antibodies that develop following an initial infection with a type of HSV prevent reinfection with the same virus type—a person with a history of orofacial infection caused by HSV-1 cannot contract herpes whitlow or a genital infection caused by HSV-1. In a monogamous couple, a seronegative female runs a greater than 30% per year risk of contracting an HSV infection from a seropositive male partner. If an oral HSV-1 infection is contracted first, seroconversion will have occurred after 6 weeks to provide protective antibodies against a future genital HSV-1 infection. Herpes simplex is a double-stranded DNA virus.\n\nSection::::Diagnosis.\n\nSection::::Diagnosis.:Classification.\n", "Genital herpes can be spread by viral shedding prior to and following the formation of ulcers. The risk of spread between a couple is about 7.5% over a year (for unprotected sex). The likelihood of transferring genital herpes from one person to another is decreased by male condom use by 50%, by female condom by 50%, and refraining from sex during an active outbreak. The longer a partner has had the infection, the lower the transmission rate. An infected person may further decrease transmission risks by maintaining a daily dose of antiviral medications. Infection by genital herpes occurs in about 1 in every 1,000 sexual acts.\n", "Because the onset of an infection is difficult to predict, lasts a short period of time and heals rapidly, it is difficult to conduct research on cold sores. Though famciclovir improves lesion healing time, it is not effective in preventing lesions; valaciclovir and a mixture of acyclovir and hydrocortisone are similarly useful in treating outbreaks but may also help prevent them.\n\nAcyclovir and valacyclovir by mouth are effective in preventing recurrent herpes labialis if taken prior to the onset of any symptoms or exposure to any triggers. 
Evidence does not support L-lysine.\n\nSection::::Treatment.\n", "such as nectin-1, HVEM and 3-O sulfated heparan sulfate. Infected people who show no visible symptoms may still shed and transmit viruses through their skin; asymptomatic shedding may represent the most common form of HSV-2 transmission. Asymptomatic shedding is more frequent within the first 12 months of acquiring HSV. Concurrent infection with HIV increases the frequency and duration of asymptomatic shedding. Some individuals may have much lower patterns of shedding, but evidence supporting this is not fully verified; no significant differences are seen in the frequency of asymptomatic shedding when comparing persons with one to 12 annual recurrences to those with no recurrences.\n", "People can transfer the virus from their cold sores to other areas of the body, such as the eye, skin, or fingers; this is called \"autoinoculation\". Eye infection, in the form of conjunctivitis or keratitis, can happen when the eyes are rubbed after touching the lesion. Finger infection (herpetic whitlow) can occur when a child with cold sores or primary HSV-1 infection sucks his fingers.\n", "Once the condition has recurred, it is normally a mild infection. The infection may be triggered by several external factors such as sun exposure or trauma.\n\nInfection with either type of the HSV viruses occurs in the following way: First, the virus comes in contact with damaged skin, and then it goes to the nuclei of the cells and reproduces or replicates. The blisters and ulcers formed on the skin are a result of the destruction of infected cells. In its latent form, the virus does not reproduce or replicate until recurrence is triggered by different factors.\n\nSection::::Pathophysiology.\n", "BULLET::::1. Latent (weeks to months incident-free): The remission period; After initial infection, the viruses move to sensory nerve ganglia (trigeminal ganglion), where they reside as lifelong, latent viruses. Asymptomatic shedding of contagious virus particles can occur during this stage.\n\nBULLET::::2. Prodromal (day 0–1): Symptoms often precede a recurrence. Symptoms typically begin with tingling (itching) and reddening of the skin around the infected site. This stage can last from a few days to a few hours preceding the physical manifestation of an infection and is the best time to start treatment.\n", "HSVs may persist in a quiescent but persistent form known as latent infection, notably in neural ganglia. HSV-1 tends to reside in the trigeminal ganglia, while HSV-2 tends to reside in the sacral ganglia, but these are tendencies only, not fixed behavior. During latent infection of a cell, HSVs express latency-associated transcript (LAT) RNA. LAT regulates the host cell genome and interferes with natural cell death mechanisms. By maintaining the host cells, LAT expression preserves a reservoir of the virus, which allows subsequent, usually symptomatic, periodic recurrences or \"outbreaks\" characteristic of nonlatency. Whether or not recurrences are symptomatic, viral shedding occurs to infect a new host. A protein found in neurons may bind to herpes virus DNA and regulate latency. Herpes virus DNA contains a gene for a protein called ICP4, which is an important transactivator of genes associated with lytic infection in HSV-1. Elements surrounding the gene for ICP4 bind a protein known as the human neuronal protein neuronal restrictive silencing factor (NRSF) or human repressor element silencing transcription factor (REST). 
When bound to the viral DNA elements, histone deacetylation occurs atop the \"ICP4\" gene sequence to prevent initiation of transcription from this gene, thereby preventing transcription of other viral genes involved in the lytic cycle. Another HSV protein reverses the inhibition of ICP4 protein synthesis. ICP0 dissociates NRSF from the \"ICP4\" gene and thus prevents silencing of the viral DNA.\n", "Genital ulcer diseases include genital herpes, syphilis, and chancroid. These diseases are transmitted primarily through “skin-to-skin” contact from sores/ulcers or infected skin that looks normal. HPV infections are transmitted through contact with infected genital skin or mucosal surfaces/secretions. Genital ulcer diseases and HPV infection can occur in male or female genital areas that are covered (protected by the condom) as well as those areas that are not.\n\nLaboratory studies have demonstrated that latex condoms provide an essentially impermeable barrier to particles the size of STD pathogens.\n", "Treatments of proven efficacy are currently limited mostly to herpes viruses and human immunodeficiency virus. The herpes virus is of two types: herpes type 1 (HSV-1, or oral herpes) and herpes type 2 (HSV-2, or genital herpes). Although there is no cure, there are treatments that can relieve the symptoms. Drugs like Famvir, Zovirax, and Valtrex are among the drugs used, but these medications can only decrease pain and shorten the healing time. They can also decrease the total number of outbreaks. Warm baths also may relieve the pain of genital herpes.\n", "Genital herpes\n\nGenital herpes is an infection by the herpes simplex virus (HSV) of the genitals. Most people either have no or mild symptoms and thus do not know they are infected. When symptoms do occur, they typically include small blisters that break open to form painful ulcers. Flu-like symptoms, such as fever, aching, or swollen lymph nodes, may also occur. Onset is typically around 4 days after exposure with symptoms lasting up to 4 weeks. Once infected, further outbreaks may occur but are generally milder.\n", "Laboratory testing is often used to confirm a diagnosis of genital herpes. Laboratory tests include culture of the virus, direct fluorescent antibody (DFA) studies to detect virus, skin biopsy, and polymerase chain reaction to test for presence of viral DNA. Although these procedures produce highly sensitive and specific diagnoses, their high costs and time constraints discourage their regular use in clinical practice.\n" ]
[ "Herpes on your lip is temporary." ]
[ "Herpes on your lip causes a temporary breakout, but you are permanently infected." ]
[ "false presupposition" ]
[ "Herpes on your lip is temporary." ]
[ "false presupposition" ]
[ "Herpes on your lip causes a temporary breakout, but you are permanently infected." ]
2018-18031
How does a plane become "invisible" to radar?
1. It is coated in radar-absorbing material, such as paint mixtures that convert received radar energy into heat instead of reflecting it. 2. It is shaped so that radar waves bounce off in a different direction than the one they came from, which works because most radar devices have their transmitter and receiver in the same place.
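Point 2 in the answer above is ordinary specular-reflection geometry. A small sketch, with illustrative numbers rather than a real radar-cross-section model, showing why a facet tilted away from a monostatic radar (transmitter and receiver co-located) sends the echo somewhere else:

```python
import math

def reflect(d, n):
    """Reflect direction vector d off a surface with unit normal n:
    r = d - 2 (d . n) n  (the standard mirror-reflection formula)."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

incoming = (1.0, 0.0)  # radar beam travelling in the +x direction

# Panel facing the radar head-on: its normal points straight back.
flat = (-1.0, 0.0)
print(reflect(incoming, flat))    # (-1.0, 0.0): straight back, strong echo

# Panel tilted 30 degrees, like a stealth facet: the normal is angled.
t = math.radians(30)
tilted = (-math.cos(t), math.sin(t))
print(reflect(incoming, tilted))  # about (-0.5, 0.87): deflected 60 degrees,
                                  # so the echo never reaches the receiver
```

The deflection is twice the tilt angle, which is why faceted stealth shapes avoid any surface square-on to a likely radar direction.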
[ "The weak absorption of radio waves by the medium through which it passes is what enables radar sets to detect objects at relatively long ranges—ranges at which other electromagnetic wavelengths, such as visible light, infrared light, and ultraviolet light, are too strongly attenuated. Such weather phenomena as fog, clouds, rain, falling snow, and sleet that block visible light are usually transparent to radio waves. Certain radio frequencies that are absorbed or scattered by water vapour, raindrops, or atmospheric gases (especially oxygen) are avoided in designing radars, except when their detection is intended.\n\nSection::::Principles.:Illumination.\n", "BULLET::::- In the 2009 animated film \"Wonder Woman\", Diana receives an invisible plane to transport Steve Trevor back to the outside world after he crash lands on their island and its hull configuration is consciously modelled after Trevor's fighter plane. In this version it is a stealth fighter jet and even its missiles are invisible. No explanation is ever given as to the origin of the invisible plane.\n", "Also, some devices are designed to be Radar active, such as radar antennas and this will increase RCS.\n\nSection::::Factors that affect RCS.:Material.:Radar absorbent paint.\n\nThe SR-71 Blackbird and other planes were painted with a special \"iron ball paint\" that consisted of small metallic-coated balls. Radar energy received is converted to heat rather than being reflected.\n\nSection::::Factors that affect RCS.:Shape, directivity and orientation.\n", "Section::::Design.:Stealth.:Radar.\n", "Window, was an early chaff countermeasure to radar. Huge quantities of conductive strips, made of foil or metallised paper, were cut to the dipole length of the radar to be defeated and dropped from aircraft. The dipoles reflect enough to appear as a large diffuse target. Although the real target aircraft continue to reflect and are shown by the radar, they cannot be distinguished from the decoy cloud.\n", "In 1960, the USAF reduced the radar cross-section of a Ryan Q-2C Firebee drone. This was achieved through specially designed screens over the air intake, and radiation-absorbent material on the fuselage, and radar-absorbent paint.\n", "In 1945, a USAAF Boeing B-29 Superfortress squadron of bombers flies from their base in the Marianas on their mission to attack a target in Japan. Although the target will be invisible due to overcast conditions, the mission will continue as a high-altitude bombing raid.\n\nAfter six hours of flight time, the radar operator (Clayton Moore) is able to identify the islands that lie off the coast of Honshu. Directions from the radar operator to the bombardier help guide the B-29 to its ultimate target. The pilot is also given discrete flight adjustments to fly directly to the objective.\n", "Radar relies on its own transmissions rather than light from the Sun or the Moon, or from electromagnetic waves emitted by the objects themselves, such as infrared wavelengths (heat). 
This process of directing artificial radio waves towards objects is called \"illumination\", although radio waves are invisible to the human eye or optical cameras.\n\nSection::::Principles.:Reflection.\n\nIf electromagnetic waves travelling through one material meet another material, having a different dielectric constant or diamagnetic constant from the first,\n", "Nearly three decades later, a more serious attempt at radar \"invisibility\" was tried with the Horten Ho 229 flying wing fighter-bomber, developed in Nazi Germany during the last years of World War II. In addition to the aircraft's shape, the majority of the Ho 229's wooden skin was bonded together using carbon-impregnated plywood resins designed with the purported intention of absorbing radar waves. Testing performed in early 2009 by the Northrop-Grumman Corporation established that this compound, along with the aircraft's shape, would have rendered the Ho 229 virtually invisible to the top-end HF-band, 20–30 MHz primary signals of Britain's Chain Home early warning radar, provided the aircraft was traveling at high speed (approximately ) at extremely low altitude – .\n", "Section::::Design.:Stealth.:Infrared.\n\nSome analysts claim infra-red search and track systems (IRSTs) can be deployed against stealth aircraft, because any aircraft surface heats up due to air friction, and with a two-channel IRST, CO2 detection (4.3 µm absorption maximum) is possible by comparing the difference between the low and high channels.\n", "To calculate the radar cross-section of such a stealth body, one would typically do one-dimensional reflection calculations to calculate the surface impedance, then two dimensional numerical calculations to calculate the diffraction coefficients of edges and small three dimensional calculations to calculate the diffraction coefficients of corners and points. The cross section can then be calculated, using the diffraction coefficients, with the physical theory of diffraction or other high frequency method, combined with physical optics to include the contributions from illuminated smooth surfaces and Fock calculations to calculate creeping waves circling around any smooth shadowed parts.\n", "Through the years, many variations of the SAR have been made with diversified applications resulting. In initial systems, the signal processing was too complex for on-board operation; the signals were recorded and processed later. Processors using optical techniques were then tried for generating real-time images, but advances in high-speed electronics now allow on-board processes for most applications. Early systems gave a resolution in tens of meters, but more recent airborne systems provide resolutions to about 10 cm. Current ultra-wideband systems have resolutions of a few millimeters.\n\nSection::::Post-war radar.:Other radars and applications.\n", "BULLET::::- Arsenal: The invisible jet can shape projectile weapons out of its own substance but doing so depletes the amount of material in the vessel. When such depletion occurs, the craft can regenerate itself slowly. 
This function is to be avoided and used only when absolutely necessary as a last resort.\n\nBULLET::::- Although Wonder Woman possesses the power of flight, the invisible jet is very useful, as it contains certain on-board equipment, serves as a protective shelter, carries Wonder Woman's cargo, and, of course, renders her invisible for stealth missions.\n\nSection::::In other media.\n", "Section::::Factors that affect RCS.\n\nSection::::Factors that affect RCS.:Size.\n\nAs a rule, the larger an object, the stronger its radar reflection and thus the greater its RCS. Also, radar of one band may not even detect certain size objects. For example, 10 cm (S-band radar) can detect rain drops but not clouds whose droplets are too small.\n\nSection::::Factors that affect RCS.:Material.\n", "During the early 1930s, there were widespread rumours of a “death ray” being developed. The Dutch Parliament set up a Committee for the Applications of Physics in Weaponry under G.J. Elias to examine this potential, but the Committee quickly discounted death rays. The Committee did, however, establish the \"Laboratorium voor Fysieke Ontwikkeling\" (LFO, Laboratory for Physical Development), dedicated to supporting the Netherlands Armed Forces.\n", "Steve Weatherspoon, one of the Tomcat fighter pilots, later recalled that the nighttime intercepts were not overly difficult: \"It wasn't a big deal. We got a good radar picture which safely controlled the intercept, and pulled close enough to get a visual identification. As we slowly closed, either we illuminated the aircraft with the glow of our exterior position lights, or tried to make out a silhouette by starlight. If its shape was similar to a 737, we had to get closer to see the carrier or national markings.\"\n", "Section::::Principle.:Narrow band and CW illumination sources.\n", "During the postwar period, radar detection was a constant threat to the attacker. Attack aircraft developed the tactic of flying at low level, \"under the radar\" where they were hidden by hills and other obstacles from the radar stations. The advent of low-level radar chains, as a defence against cruise missiles, made this tactic increasingly difficult. At the same time, advances in electromagnetic radiation-absorbent materials (RAM) and electromagnetic modelling techniques offered the opportunity to develop \"stealthy\" aircraft which would be invisible to the defending radar. The first stealthy attack aircraft, the Lockheed F-117 Nighthawk, entered service in 1983. Today, stealth is a requirement for any advanced attack aircraft.\n", "The concept of passive radar detection using reflected ambient radio signals emanating from a distant transmitter is not new. The first radar experiments in the United Kingdom in 1935 by Robert Watson-Watt demonstrated the principle of radar by detecting a Handley Page Heyford bomber at a distance of 12 km using the BBC shortwave transmitter at Daventry.\n", "Section::::Music video.:Release and reception.\n", "BULLET::::- The British Royal Air Force begins to install IFF Mark II, the first operational identification friend or foe system.\n\nBULLET::::- October 1 – A British bomber is shot down over the Netherlands by German antiaircraft artillery after being illuminated by a searchlight coupled to a \"Freya\" radar. It is the first time an aircraft is destroyed after being detected and illuminated by a radar-guided searchlight.\n", "BULLET::::- It was created to attune itself to its user and its environment. 
The vessel responds appropriately and can take the form of any vehicle of earth, water and beyond (a submarine or rocket ship). As seen in its stint as WonderDome, it could even turn itself into a flying fortress.\n\nBULLET::::- It has the power to be undetectable by radar or the human eye and the ability to shift from its crystal, \"transparent mode\" to complete invisibility rendering \"both\" itself and its occupants truly invisible, in true cloaking device technology form.\n", "Section::::History.\n\nThe Moon is comparatively close and was detected by radar soon after the invention of the technique in 1946. Measurements included surface roughness and later mapping of shadowed regions near the poles.\n", "Section::::Commercial performance.\n", "“As my ship leveled out about 50 feet above the ground, I had a glimpse of something that looked very much like the picture we had seen of radar stations. I had a chance to hold my trigger down for two seconds, then zigzagged out to sea on the deck. When I returned to the base I found out that our flight of eight had lost two ships, one of them being the ship that had veered to my right. I had no vision of the flak.\"\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-08260
When sailing, how does one sail into the wind without just being blown back the way one came?
The sail is trimmed so that it acts like an airplane wing and generates "lift". You don't travel directly into the wind, but at an angle to it: you go left/forward, then right/forward, then back left/forward, and so on. This zigzagging is called tacking, and the boat's keel resists the sideways push so that the net motion is forward rather than downwind.
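A short sketch of the zigzag arithmetic in the answer above; the 45-degree tacking angle and 6-knot boat speed are assumed for illustration:

```python
import math

wind_angle = 45        # degrees off the true wind on each tack (assumed)
boat_speed = 6.0       # knots through the water (assumed)

# Velocity made good (VMG): the component of boat speed pointing upwind.
vmg = boat_speed * math.cos(math.radians(wind_angle))
print(f"VMG upwind: {vmg:.2f} knots")        # about 4.24 knots

# One hour of beating: a half-hour leg left/forward, then a half-hour
# leg right/forward. The upwind components add; the sideways drift cancels.
leg = boat_speed * 0.5                       # 3.0 nautical miles per leg
upwind = 2 * leg * math.cos(math.radians(wind_angle))
sideways = leg * math.sin(math.radians(wind_angle)) \
    - leg * math.sin(math.radians(wind_angle))
print(f"After one hour: {upwind:.2f} nm upwind, {sideways:.0f} nm sideways")
```

The boat sails 6 nm through the water but ends up about 4.24 nm dead upwind of where it started, which is why tacking is the quickest route to an upwind mark despite the longer path.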
[ "\"In a contrary wind a well found yacht is master. She has more stamina to windward than any man by himself\". Von Haeften says that, \"It is impossible to tear working sails in good condition by wind pressure alone. If it happens, nevertheless, it will either be down to some sail-handling mistake so that the sail has been chafed or caught up somewhere, or to the fact that the sail was old and worn out\". There are many stories of gear breakage from a parted shackle leaving a sail to flap wildly to shrouds giving way to bring a mast down.\n", "The points of sail clarify the realities of sailing into the wind. One of the points of sail is \"Head to Wind.\" A boat turns through this point on each tack. It is the point at which the boat is neither on port tack or starboard tack and is headed directly into the wind. However, a boat cannot sail directly into the wind, thus if it comes head to the wind it loses steerage and is said to be \"in irons.\" Thus boats sailing into the wind are actually sailing \"Close Hauled\" (i.e., with sails tightly trimmed).\n", "Another way of stating this is as follows:\n\nAlternately, sailing in the direction from which the wind is coming is possible through sailing at forty five degree angles to the oncoming wind and alternating the direction of those angles. This is called \"tacking.\" Although this method requires the boat to physically move farther to reach a given point, it is often the quickest way to move in a given direction overall. \n\nSection::::Discussion.\n", "BULLET::::- \"Windsurfer rig\" – Sailors of windsurfers tack by walking forward of the mast and letting the sail swing into the wind as the board moves through the eye of the wind; once on the opposite tack, the sailor realigns the sail on the new tack. In strong winds on a small board, an option is the \"fast tack\", whereby the board is turned into the wind at planing speed as the sailor crosses in front of the flexibly mounted mast and reaches for the boom on the opposite side and continues planing on the new tack.\n", "Sailing into the wind\n\nSailing into the wind is a sailing expression that refers to a sail boat's ability to move forward despite being headed into (or very nearly into) the wind. A sailboat cannot make headway by sailing directly into the wind (\"see\" \"Discussion,\" below), so the point of sail into the wind is called \"close hauled\", and is 22° to the apparent wind. \n", "On this point of sail (also called \"running before the wind\"), the true wind is coming from directly behind the sailing craft. In this mode, the sails act in a manner substantially like a parachute. \n", "For reaching marks that are towards the direction of the wind, sailboats need to alternate between headings, \"tacks\", where the wind angle towards the wind is on opposite sides of the boat. On a tack, the sailor might start by pointing the sailboat as close into the wind as possible while still keeping the winds blowing across the sails in a manner that provides aerodynamic lift which then propels the boat. The sailor can then turn slightly away from the wind to create more forward wind pressure on the sails and better balance the boat, which allows it to move with greater speed, but less directly toward the wind (or mark).\n", "A sailing craft can sail on a course anywhere outside of its no-go zone. 
If the next waypoint or destination is within the arc defined by the no-go zone from the craft's current position, then it must perform a series of tacking maneuvers to get there on a dog-legged route, called \"beating to windward\". The progress along that route is called the \"course made good\"; the speed between the starting and ending points of the route is called the \"speed made good\" and is calculated by the distance between the two points, divided by the travel time. The limiting line to the waypoint that allows the sailing vessel to leave it to leeward is called the \"layline\". Whereas some Bermuda-rigged sailing yachts can sail as close as 30° to the wind, most 20th-Century square riggers are limited to 60° off the wind. Fore-and-aft rigs are designed to operate with the wind on either side, whereas square rigs and kites are designed to have the wind come from one side of the sail only.\n", "BULLET::::- The American reality-television competition series \"Junkyard Wars\" featured land yachts in season 7, episode 3 \"Sand Yacht\". Two teams, each led by an expert land yachtsman, constructed small yachts from parts available in a junkyard. The team that won used an aluminium sail, supposedly the first time a metal sail was used for a land yacht.\n\nBULLET::::- The American tall tale \"Windwagon Smith\" is about a sea captain who traveled across Kansas in a wind-propelled covered wagon fitted with a sail. This tale was adapted into the Disney animated short \"The Saga of Windwagon Smith\".\n", "Section::::Forces on sailing craft.\n\nEach sailing craft is a system that mobilizes wind force through its sails—supported by spars and rigging—which provide motive power and reactive force from the underbody of a sailboat—including the keel, centerboard, rudder or other underwater foils—or the running gear of an ice boat or land craft, which allows it to be kept on a course. Without the ability to mobilize reactive forces in directions different from the wind direction, a craft would simply be adrift before the wind.\n", "The same book states, in section 24.2, \"In a True Wind of , the Soling crew will sail the close reach and reaching legs in Apparent Winds little stronger than True Wind. ... The 18-foot Skiff crew sails the cross wind legs in much stronger Apparent Winds which approach . Even on the broad reaching legs they must still sail in a strong Apparent Wind which blows from ahead, so they still need to use strong-wind 'going-to-windward' handling techniques even though they are sailing downwind.\" Figure 24.2 of this book provides vector graphics that show how the Skiff can sail downwind faster than the speed of the wind.\n", "As the wind increases in speed and shifts forward (because of the acceleration of the boat), the sails have to be trimmed in order to maintain performance. This causes the boat to further accelerate, thus causing a further increase in windspeed and a further forward windshift.\n\nEventually, the sails cannot be trimmed any further and an equilibrium is reached. Although the boat is sailing perpendicular to the true wind, its sails are set for close hauled sailing.\n", "As explained in the article on apparent wind, a boat's forward motion creates a corresponding head wind of the same strength in the opposite direction. 
That head wind must be combined with the true wind to find the apparent wind.\n", "The physics of sailing arises from a balance of forces between the wind powering the sailing craft as it passes over its sails and the resistance by the sailing craft against being blown off course, which is provided in the water by the keel, rudder, underwater foils and other elements of the underbody of a sailboat, on ice by the runners of an ice boat, or on land by the wheels of a sail-powered land vehicle.\n", "Since sailboats cannot sail directly into the wind, they must tack in order to reach the upwind mark (this process is called beating or working to the mark). This lengthens the course, thus the boat takes longer to reach the upwind mark than it would if it could have sailed directly towards it. The component of a sailboat's velocity that is in the direction of the next mark is called the velocity made good.\n", "BULLET::::- \"Kitesurfer rig\" – When changing tack while on a broad reach, a kitesurfer again rotates the kite to align with the new apparent wind as the board changes course with the stern through the eye of the wind while planing.\n\nSection::::Sail trimming.\n", "However, there are forms of the lateen rig, as in vela latina canaria, where the spar is changed from one side to the other when tacking. This way the rig doesn't suffer these airflow disruptions that come from the sail pushed against the mast.\n", "Wind direction for points of sail always refers to the \"true wind\"—the wind felt by a stationary observer. The \"apparent wind\"—the wind felt by an observer on a moving sailing craft—determines the motive power for sailing craft.\n\nSection::::No-go zone.\n", "BULLET::::- A \"storm\" sail plan. This is the set of very small, very rugged sails flown in a gale, to keep the vessel under way and in control.\n", "Section::::No-go zone.:In irons.\n\nA sailing craft is said to be \"in irons\" if it is stopped with its sails unable to generate power in the no-go zone. If the craft tacks too slowly, or otherwise loses forward motion while heading into the wind, the craft will coast to a stop. This is also known as being \"taken aback,\" especially on a square-rigged vessel whose sails can be blown back against the masts, while tacking.\n\nSection::::Close-hauled.\n", "The most important reason to avoid being over-canvassed in a blow is the safety of the boat, its gear and its crew. Frank Mulville said that, \"With the wind fair a man is master of his boat and has the power to drive her as hard as he wishes – even to the point of destruction.\" He went on to say,\n", "The term \"velocity\" refers both to speed and direction. As applied to wind, \"apparent wind velocity\" (V_A) is the air velocity acting upon the leading edge of the most forward sail or as experienced by instrumentation or crew on a moving sailing craft. In nautical terminology, wind speeds are normally expressed in knots and wind angles in degrees. All sailing craft reach a constant \"forward velocity\" (V_B) for a given \"true wind velocity\" (V_T) and \"point of sail\". The craft's point of sail affects its velocity for a given true wind velocity. Conventional sailing craft cannot derive power from the wind in a \"no-go\" zone that is approximately 40° to 50° away from the true wind, depending on the craft. Likewise, the directly downwind speed of all conventional sailing craft is limited to the true wind speed. 
As a sailboat sails further from the wind, the apparent wind becomes smaller and the lateral component becomes less; boat speed is highest on the beam reach. In order to act like an airfoil, the sail on a sailboat is sheeted further out as the course is further off the wind. As an iceboat sails further from the wind, the apparent wind increases slightly and the boat speed is highest on the broad reach. In order to act like an airfoil, the sail on an iceboat is sheeted in for all three points of sail.\n", "Conventional sailing craft cannot derive power from sails on a point of sail that is too close into the wind. On a given point of sail, the sailor adjusts the alignment of each sail with respect to the apparent wind direction (as perceived on the craft) to mobilize the power of the wind. The forces transmitted via the sails are resisted by forces from the hull, keel, and rudder of a sailing craft, by forces from skate runners of an iceboat, or by forces from wheels of a land sailing craft to allow steering the course.\n", "Beginners must develop their balance and core stability, acquire a basic understanding of sailing theory, and learn a few techniques before they can progress from sailing to planing. These techniques involve a similar process to that required to learn to ride a bicycle – the development of muscle-memory automatic reactions:\n\n1. Standing on the board while holding the sail and balancing the weight of the sail leaning to one side with the sailor's weight leaning out on the other side.\n", "Fall Recovery. The rider climbs onto the board, grabs the pulling rope (\"uphaul\"), makes sure the mast foot is placed between his/her two feet, pulls the sail about one third out of the water, lets the wind turn the sail-board combination till he/she has the wind right in the back, pulls the sail all the way out, places the \"mast hand\" (hand closest to the mast) on the boom, pulls the mast over the center line of the board, places the \"sail hand\" (hand furthest from the mast) on the boom, then pulling on it to close the sail and power it.\n" ]
[ "One can sail directly into the wind.", "If you sail into the wind you may be just blown back the way you came." ]
[ "Sailing is done on an angle to the wind, not directly into it.", "You travel into the wind at an ankle and the sail provides \"lift\" which prevents being blown back the way you came." ]
[ "false presupposition" ]
[ "One can sail directly into the wind.", "If you sail into the wind you may be just blown back the way you came." ]
[ "false presupposition", "false presupposition" ]
[ "Sailing is done on an angle to the wind, not directly into it.", "You travel into the wind at an ankle and the sail provides \"lift\" which prevents being blown back the way you came." ]
2018-01729
Why are fencing masks covered in a net instead of a transparent plastic?
Fencing started out as a sport long before modern transparent plastics existed. Mesh was available at the time, and it provides good ventilation. A clear visor, by contrast, would fog up very quickly under the heavy physical demands of fencing.
[ "Electric fencing became widely available in the 1950s and has been widely used both for temporary fences and as a means to improve the security of fences made of other materials. It is most commonly made using lightweight steel wire (usually 14-17 gauge) attached to posts with insulators made of porcelain or plastic. Synthetic web or rope with thin steel wires interwoven to carry the electrical charge has become popular in recent years, particularly where additional visibility is desired.\n", "In recent years, attempts have been made to introduce fencing to a wider and younger audience, by using foam and plastic swords, which require much less protective equipment. This makes it much less expensive to provide classes, and thus easier to take fencing to a wider range of schools than traditionally has been the case. There is even a competition series in Scotland – the Plastic-and-Foam Fencing FunLeague – specifically for Primary and early Secondary school-age children using this equipment.\n", "When it is fence, or a similar obstacle that can be seen through, it is more difficult for the players to cheat as the other side can see if the ball does or does not hit the ground. When it is a building, or other opaque barrier, players can sometimes stealth to the other side, catching the other team by surprise.\n\nOne rule has it that the opposing player can only be hit below the waist as a safety precaution like in dodge ball.\n", "Section::::Equipment.:Protective clothing.\n\nMost personal protective equipment for fencing is made of tough cotton or nylon. Kevlar was added to top level uniform pieces (jacket, breeches, underarm protector, lamé, and the bib of the mask) following the death of Vladimir Smirnov at the 1982 World Championships in Rome. However, Kevlar is degraded by both ultraviolet light and chlorine, which can complicate cleaning.\n", "Because it does not stretch, animals are less likely to become entangled in HT wire. However, for the same reason, if an animal does become entangled or runs into a few strands at a high speed, it can be deadly, and is sometimes referred to as having a \"cheese slicer\" effect on the animal.\n\nTrellising for horticultural purposes is generally constructed from HT wire as it is able to withstand a higher crop load without breaking or stretching.\n\nSection::::Modern styles.:Wire fences.:Woven wire.\n", "Smooth steel wire is the material most often used for electric fences, ranging from a fine thin wire used as a single line to thicker, high-tensile (HT) wire. Less often, woven wire or barbed wire fences can be electrified, though such practices create a more hazardous fence, particularly if a person or animal becomes caught by the fencing material (electrified barbed wire is unlawful in some areas). Synthetic webbing and rope-like fencing materials woven with fine conducting wires (usually of stainless steel) have become available over the last 15 to 20 years, and are particularly useful for areas requiring additional visibility or as temporary fencing.\n", "Section::::Equipment.:Electric equipment.\n\nA set of electric fencing equipment is required to participate in electric fencing. Electric equipment in fencing varies depending on the weapon with which it is used in accordance. The main component of a set of electric equipment is the body cord. The body cord serves as the connection between a fencer and a reel of wire that is part of a system for electrically detecting that the weapon has touched the opponent. 
There are two types: one for épée, and one for foil and sabre.\n", "Section::::Non-electric and electric foils.\n\nSection::::Non-electric and electric foils.:Background.\n", "Due to the high levels of crime in South Africa, it is common for residential houses to have perimeter defences. The City of Johannesburg promotes the use of palisade fencing rather than opaque, usually brick, walls as criminals cannot hide as easily behind the fence. In the City of Johannesburg manual on safety, one can read about best practices and maintenance of palisade fencing, such as not growing vegetation in front of palisades as this allows criminals to make an unseen breach.\n\nTypes of security electric fences include:\n", "Examples include cargo nets and net bags. Some vegetables, like onions, are often shipped in nets.\n\nSection::::Uses.:Sports.\n\nNets are used in sporting goals and in games such as soccer, basketball, bossaball and ice hockey. A net separates opponents in various net sports such as volleyball, tennis, badminton, and table tennis, where the ball or shuttlecock must go over the net to remain in play.\n\nSection::::Uses.:Capturing animals.\n", "Electric fences have improved significantly over the years. Improvements include:\n\nBULLET::::- Polyethylene insulators replacing porcelain insulators, beginning in the 1960s. Polyethylene is much cheaper than porcelain and is less breakable.\n\nBULLET::::- Improvements in electrical design of the fence energizer, often called a \"charger\" (USA) or \"fencer\" (UK).\n", "Other ballistic fabrics, such as Dyneema, have been developed that resist puncture, and which do not degrade the way that Kevlar does. FIE rules state that tournament wear must be made of fabric that resists a force of , and that the mask bib must resist twice that amount.\n\nThe complete fencing kit includes:\n\nBULLET::::- Jacket\n\nBULLET::::- Plastron\n\nBULLET::::- Glove\n\nBULLET::::- Breeches\n\nBULLET::::- Socks\n\nBULLET::::- Shoes\n\nBULLET::::- Mask\n\nBULLET::::- Chest protector\n\nBULLET::::- Lamé\n\nBULLET::::- Sleeve\n", "Conventional agricultural fencing of any type may be strengthened by the addition of a single electric line mounted on insulators attached to the top or front of the fence. A similar wire mounted close to the ground may be used to prevent pigs from excavating beneath other fencing. Substandard conventional fencing can also be made temporarily usable until proper repairs are made by the addition of a single electric line set on a \"stand-off\" insulator.\n", "Heras fencing\n\nHeras fencing is a brand of temporary fencing intended for use on construction sites. It consists of individual panels approximately wide and tall. Each panel consists of a metal tubing frame, with feet slotted into concrete or synthetic blocks. In the middle of the panel is a metal mesh. Heras Fencing is produced by Heras Mobile Fencing & Security, who first developed their temporary fencing in 1966.\n", "Portable fence energizers are made for temporary fencing, powered solely by batteries, or by a battery kept charged by a small solar panel. 
Rapid laying-out and removal of multiple-strand temporary electric fencing over a large area may be done using a set of reels mounted on a tractor or all-terrain vehicle.\n\nFor sheep, poultry, and other smaller animals, plastic electric netting may be mounted on insulating stakes – this is also effective at keeping out some predators such as foxes.\n", "Temporary fencing is an alternative to its permanent counterpart when a fence is required on an interim basis when needed for storage, public safety or security, crowd control, or theft deterrence. It is also known as construction hoarding when used at construction sites. Other uses for temporary fencing include venue division at large events and public restriction on industrial construction sites, when guardrails are often used. Temporary fencing is also often seen at special outdoor events, parking lots, and emergency/disaster relief sites. It offers the benefits of affordability and flexibility.\n", "Depending on the area to be fenced and remoteness of its location, fence energizers may be hooked into a permanent electrical circuit, they may be run by lead-acid or dry cell batteries, or a smaller battery kept charged by a solar panel. The power consumption of a fence in good condition is low, and so a lead-acid battery powering several hundred metres of fence may last for several weeks on a single charge. For shorter periods dry cell batteries may be used. Some energizers can be powered by more than one source.\n\nSection::::Design and function.:Fencing materials.\n", "Piste (fencing)\n\nIn modern fencing, the piste or strip is the playing area. Regulations require the piste to be 14 metres long and between 1.5 and 2 metres wide. The last two metres on each end are hash-marked to warn a fencer before he/she backs off the end of the strip, after which is a 1.5 to 2 metre runoff. The piste is also marked at the centre and at the \"\"en garde\"\" lines, located two metres either side of the center line.\n", "Recently, reel-less gear has been adopted for all weapons at top competitions. In this system, which eliminates the spool (by using the fencer's own body as a grounding point), the lights and detectors are mounted directly on the fencers' masks. For the sake of the audience, clearly visible peripheral lights triggered by wireless transmission may be used. However, the mask lights must remain as the official indicators because FIE regulations prohibit the use of wireless transmitters in official scoring equipment in order to prevent cheating. The development of reel-less scoring apparatus in épée and foil has been much slower due to technical complications. The first international competitions to use the reel-less versions of these weapons were held in 2006.\n", "Due to the high levels of crime in South Africa, it is common for residential houses to have perimeter defences. The City of Johannesburg promotes the use of palisade fencing over opaque, usually brick, walls as criminals cannot hide as easily behind the fence. In the City of Johannesburg manual on safety one can read about best practices and maintenance of palisade fencing, such as not growing vegetation in front of palisades as this allows criminals to make an unseen breach.\n\nSection::::History.\n", "Section::::Rules.:Target area.\n\nIn foil the valid target area includes the torso (including the lower part of the bib of the mask) and the groin. The head (except the lower part of the bib of the mask), arms, and legs are considered off target. 
Touches made off target do not count for points, but do stop play. The target area has been changed multiple times, with the latest change consisting of adding the bottom half of the bib to the target zone.\n\nSection::::Rules.:Priority (right of way).\n", "Although called \"fences\", these fence less boundary systems are more accurately termed electronic pet containment systems. In cost analysis they have shown to be much cheaper and more aesthetically pleasing than physical fences. However, an electronic fence may not be effective if an animal crosses a boundary while in a state of excitement. Pet fences are also used sometimes to contain livestock in circumstances where ordinary agricultural fencing is not convenient or legal, such as on British common land.\n\nSection::::Technology.:Variants.\n", "Synthetic fences encompass a wide range of products. Vinyl-coated wire fence is usually based on high-tensile wire with a vinyl coating. Some forms are non-electric, others embed layers of graphite to carry a current from the wire to the outside of the coated product so that it can be electrified. It can be of any color, with white particularly common in the United States so that the fencing is visible to livestock. Most forms can be installed on either wood posts or steel t-posts.\n", "Beginning with the 1956 Olympics, scoring in foil has been accomplished by means of registering the touch with an electric circuit. A switch at the tip of the foil registers the touch, and a metallic foil vest, or \"lamé\", verifies that the touch is on valid target. \n\nSection::::Non-electric and electric foils.:Electric foils.:Socket.\n", "Section::::Uses.:Wild animals.\n\nElectric fences are useful for controlling the movements of wild animals. Examples include deterring deer from entering private property, keeping animals off airport runways, keeping wild boar from raiding crops, and preventing geese from soiling areas used by people. Electric fencing has been extensively used in environmental situations reducing the conflict between elephants or other animals and humans in Africa and Asia.\n\nSection::::Uses.:Security.\n\nSection::::Uses.:Security.:Non-lethal fence.\n" ]
[ "Fencing masks should use a clear plastic.", "Transparent plastic would be a better alternative protection for fencers than a mesh helmet. " ]
[ "Clear plastic would fog up and was not available when fencing started. ", "The mesh helmet provides ventilation to the fencers, making it the better alternative." ]
[ "false presupposition" ]
[ "Fencing masks should use a clear plastic.", "Transparent plastic would be a better alternative protection for fencers than a mesh helmet. " ]
[ "false presupposition", "false presupposition" ]
[ "Clear plastic would fog up and was not available when fencing started. ", "The mesh helmet provides ventilation to the fencers, making it the better alternative." ]
2018-16847
How is electricity turned into a mechanical action?
Electrical current induces a magnetic field. If a ferrous metal is in that field, a force will be exerted on it. For example, place a ferrous core in a coil of current-carrying wire and the core will experience that force and thus move the "robot's arm".
[ "Section::::Modern practice.\n\nToday, electromechanical processes are mainly used by power companies. All fuel based generators convert mechanical movement to electrical power. Some renewable energies such as wind and hydroelectric are powered by mechanical systems that also convert movement to electricity.\n", "Electrical analogies of mechanical systems can be used just as a teaching aid, to help understand the behaviour of the mechanical system. In former times, up to about the early 20th century, it was more likely that the reverse analogy would be used; mechanical analogies were formed of the then little understood electrical phenomena.\n\nSection::::Forming an analogy.\n", "Physiology and electricity share a common history, with some of the pioneering work in each field being done in the late 18th century by Count Alessandro Giuseppe Antonio Anastasio Volta and Luigi Galvani. Count Volta invented the battery and had a unit of electrical measurement named in his honor (the Volt). These early researchers studied \"animal electricity\" and were among the first to realize that applying an electrical signal to an isolated animal muscle caused it to twitch. The Biopac Student Lab uses procedures similar to Count Volta’s to demonstrate how muscles can be electrically stimulated.\n\nSection::::Concept.\n", "Power flow through a machine provides a way to understand the performance of devices ranging from levers and gear trains to automobiles and robotic systems. The German mechanician Franz Reuleaux wrote, \"a machine is a combination of resistant bodies so arranged that by their means the mechanical forces of nature can be compelled to do work accompanied by certain determinate motion.\" Notice that forces and motion combine to define power.\n", "Many devices are used to convert mechanical energy to or from other forms of energy, e.g. an electric motor converts electrical energy to mechanical energy, an electric generator converts mechanical energy into electrical energy and a heat engine converts heat energy to mechanical energy.\n\nSection::::General.\n\nEnergy is a scalar quantity and the mechanical energy of a system is the sum of the potential energy (which is measured by the position of the parts of the system) and the kinetic energy (which is also called the energy of motion):\n", "The assemblies that control movement are also called \"mechanisms.\" Mechanisms are generally classified as gears and gear trains, which includes belt drives and chain drives, cam and follower mechanisms, and linkages, though there are other special mechanisms such as clamping linkages, indexing mechanisms, escapements and friction devices such as brakes and clutches.\n", "BULLET::::- Passive devices or loads: When electric charges move through a potential difference from a higher to a lower voltage, that is when conventional current (positive charge) moves from the positive (+) terminal to the negative (−) terminal, work is done by the charges on the device. The potential energy of the charges due to the voltage between the terminals is converted to kinetic energy in the device. These devices are called \"passive\" components or \"loads\"; they 'consume' electric power from the circuit, converting it to other forms of energy such as mechanical work, heat, light, etc. Examples are electrical appliances, such as light bulbs, electric motors, and electric heaters. 
In alternating current (AC) circuits the direction of the voltage periodically reverses, but the current always flows from the higher potential to the lower potential side.\n", "Despite the gain in knowledge of electrical properties and the building of generators, it wasn't until the late 18th century that Italian physician and anatomist Luigi Galvani marked the birth of electrochemistry by establishing a bridge between muscular contractions and electricity with his 1791 essay \"De Viribus Electricitatis in Motu Musculari Commentarius\" (Commentary on the Effect of Electricity on Muscular Motion), where he proposed a \"nerveo-electrical substance\" in life forms.\n", "Mechanotransduction\n\nMechanotransduction (\"mechano\" + \"transduction\") is any of various mechanisms by which cells convert mechanical stimulus into electrochemical activity. This form of sensory transduction is responsible for a number of senses and physiological processes in the body, including proprioception, touch, balance, and hearing. The basic mechanism of mechanotransduction involves converting mechanical signals into electrical or chemical signals.\n", "Today, many technological devices convert mechanical energy into other forms of energy or vice versa. These devices can be placed in these categories:\n\nBULLET::::- An electric motor converts electrical energy into mechanical energy.\n\nBULLET::::- A generator converts mechanical energy into electrical energy.\n\nBULLET::::- A hydroelectric powerplant converts the mechanical energy of water in a storage dam into electrical energy.\n\nBULLET::::- An internal combustion engine is a heat engine that obtains mechanical energy from chemical energy by burning fuel. From this mechanical energy, the internal combustion engine often generates electricity.\n", "Section::::Classes of analogy.:Mobility analogies.\n\nMobility analogies, also called the Firestone analogy, are the electrical duals of impedance analogies. That is, the effort variable in the mechanical domain is analogous to current (the flow variable) in the electrical domain, and the flow variable in the mechanical domain is analogous to voltage (the effort variable) in the electrical domain. The electrical network representing the mechanical system is the dual network of that in the impedance analogy.\n", "Electric machine\n\nIn electrical engineering, electric machine is a general term for machines using electromagnetic forces, such as electric motors, electric generators, and others. They are electromechanical energy converters: an electric motor converts electricity to mechanical power while an electric generator converts mechanical power to electricity. The moving parts in a machine can be rotating (\"rotating machines\") or linear (\"linear machines\"). Besides motors and generators, a third category often included is transformers, which although they do not have any moving parts are also energy converters, changing the voltage level of an alternating current.\n", "In an electrical network diagram, limited to linear systems, there are three passive elements: resistance, inductance, and capacitance; and two active elements: the voltage generator, and the current generator. The mechanical analogs of these elements can be used to construct a mechanical network diagram. What the mechanical analogs of these elements are depends on what variables are chosen to be the fundamental variables. 
There is a wide choice of variables that can be used, but most commonly used are a power conjugate pair of variables (described below) and the pair of Hamiltonian variables derived from these.\n", "Section::::Kinetic energy of the moving parts of a machine.\n\nThe kinetic energy of a machine is the sum of the kinetic energies of its individual moving parts. A machine with moving parts can, mathematically, be treated as a connected system of bodies, whose kinetic energies are simply summed. The individual kinetic energies are determined from the kinetic energies of the moving parts' translations and rotations about their axes.\n", "This realization shows that it is the joints, or the connections that provide movement, that are the primary elements of a machine. Starting with four types of joints, the rotary joint, sliding joint, cam joint and gear joint, and related connections such as cables and belts, it is possible to understand a machine as an assembly of solid parts that connect these joints called a mechanism .\n", "All muscle fibres in a motor unit are of the same fibre type. When a motor unit is activated, all of its fibres contract. In vertebrates, the force of a muscle contraction is controlled by the number of activated motor units.\n", "Using his newly developed electromagnetic principle, in 1831, Henry created one of the first machines to use electromagnetism for motion. This was the earliest ancestor of modern DC motor. It did not make use of rotating motion, but was merely an electromagnet perched on a pole, rocking back and forth. The rocking motion was caused by one of the two leads on both ends of the magnet rocker touching one of the two battery cells, causing a polarity change, and rocking the opposite direction until the other two leads hit the other battery.\n", "Electric generators transform kinetic energy into electricity. This is the most used form for generating electricity and is based on Faraday's law. It can be seen experimentally by rotating a magnet within closed loops of conducting material (e.g. copper wire). Almost all commercial electrical generation is done using electromagnetic induction, in which mechanical energy forces a generator to rotate:\n\nSection::::Methods of generating electricity.:Electrochemistry.\n", "Section::::History.\n\nIn 1780, Luigi Galvani discovered that the muscles of dead frogs' legs twitched when struck by an electrical spark. This was one of the first forays into the study of bioelectricity, a field that still studies the electrical patterns and signals in tissues such as nerves and muscles.\n", "Mechanical PVs or mechanical \"mods\", often called \"mechs\", are devices without integrated circuits, electronic battery protection, or voltage regulation. They are activated by a switch. They rely on the natural voltage output of the battery and the metal that the mod is made of often is used as part of the circuit itself.\n", "BULLET::::- Active devices or power sources: If the charges are moved by an 'exterior force' through the device in the direction from the lower electric potential to the higher, (so positive charge moves from the negative to the positive terminal), work will be done \"on\" the charges, and energy is being converted to electric potential energy from some other type of energy, such as mechanical energy or chemical energy. 
Devices in which this occurs are called \"active\" devices or \"power sources\"; such as electric generators and batteries.\n", "Section::::Electrostatic machines.\n\nIn \"electrostatic machines\", torque is created by attraction or repulsion of electric charge in rotor and stator.\n\nElectrostatic generators generate electricity by building up electric charge. Early types were friction machines, later ones were influence machines that worked by electrostatic induction. The Van de Graaff generator is an electrostatic generator still used in research today.\n\nSection::::Homopolar machines.\n", "BULLET::::- Based on Langbein's \"Handbuch der Galvanischen Metall-Metallniederschläge\". Langbein published six editions of this handbook in German, as well as cooperating with versions in English such as this one; see (in German). This \"American edition\" has numerous figures illustrating technical procedures for electrodeposition.\n\nBULLET::::- Based on \"Manipulations Hydroplastique\". Chapter LIX has a very complete description of the steps in electrotyping for printing, with figures.\n", "Section::::Applications.\n", "Section::::History.:Industrial era.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
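The comment above sketches the motor principle: current produces a magnetic field, and that field exerts a force on nearby ferromagnetic material or on another current. A back-of-the-envelope Python sketch of the simplest textbook case, the force on a straight current-carrying wire perpendicular to a uniform field, F = B·I·L (the field strength, current and length below are invented example values):

    def wire_force_newtons(b_tesla, current_amps, length_m):
        # F = B * I * L for a straight wire perpendicular to a uniform field.
        return b_tesla * current_amps * length_m

    # Invented example values: 0.5 T field, 10 A current, 0.2 m of wire.
    force = wire_force_newtons(0.5, 10.0, 0.2)
    print(f"force on the wire: {force:.2f} N")   # 1.00 N

In a motor this force shows up as torque on the rotor windings, and reversing the current direction reverses the force, which is how continuous rotation is sustained.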
2018-20174
why does your windshield fog up and what does the defrost do to help that?
Humidity builds up inside the car; the moist cabin air (usually warmer than the outside) hits the cold windshield, and a dew-like film of condensation forms over the glass. The defroster counters this by blowing warm, and often dehumidified, air across the glass so that the condensation evaporates.
[ "First seen on the Rolls Royce in 1969 then the 1985 Ford Scorpio/Granada Mk. III in Europe and the 1986 Ford Taurus/Mercury Sable in the U.S., the system uses a mesh of very thin heating wires, or a silver/zinc oxide coated film embedded between two layers of windscreen glass. The overall effect when operative was defogging and defrosting of the windscreen at a very high rate. Landrover (UK) also fitted a similar screen to their Discovery range in the early 1990s, some of which were imported to Australia undetected by authorities, because at that stage they were not legal in any state. Owing to the high current draw, the system is engineered to operate only when the engine is running, and normally switches off after 10 minutes of operation. The metallic content of the glass has been shown to degrade the performance of certain windshield-mounted accessories, such as GPS navigators, telephone antennas and radar detectors.\n", "Washer fluid may sometimes be preheated before being delivered onto the windshield. This is especially desirable in colder climates when a thin layer of ice or frost accumulates on the windshield's surface, because it eliminates the need to manually scrape the windshield or pour warm water on the glass. Although there are a few aftermarket preheat devices available, many automobile makers offer this feature factory installed on at least some of their vehicles. For example, General Motors had begun equipping vehicles with heated washer fluid systems from the factory beginning in 2006 with the Buick Lucerne sedan. The system emits a fine mist of heated water that clears frost without damaging the windshield itself. GM also claims heated washer fluid helps in removing bug splatters and other road accumulation. The company halted the production of these mechanisms after they found that it was prone to start engine fires. A different system patented by BMW first sprays \"intensive\" washer fluid and then standard washer fluid on to the windscreen.\n", "Far enough below the freezing point, a thin layer of ice crystals can form on the inside surface of windows. This usually happens when a vehicle has been left alone after being driven for a while, but can happen while driving, if the outside temperature is low enough. Moisture from the driver's breath is the source of water for the crystals. It is troublesome to remove this form of ice, so people often open their windows slightly when the vehicle is parked in order to let the moisture dissipate, and it is now common for cars to have rear-window defrosters to solve the problem. A similar problem can happen in homes, which is one reason why many colder regions require double-pane windows for insulation.\n", "Defogger\n\nA defogger, demister, or defroster is a system to clear condensation and thaw frost from the windshield, backglass, or side windows of a motor vehicle. It was invented by German automobile engineer Heinz Kunert. \n\nSection::::Types.\n\nSection::::Types.:Primary defogger.\n", "For primary defogging, heat is generally provided by the vehicle's engine coolant via the heater core; fresh air is blown through the heater core and then ducted to and distributed over the interior surface of the windshield by a blower. This air is in many cases first dehumidified by passing it through the vehicle's operating air conditioning evaporator. 
Such dehumidification makes the defogger more effective and faster, for the dried air has a greater capacity to absorb water from the glass at which it is directed.\n\nSection::::Types.:Secondary defogger.\n", "BULLET::::- The dashboard gets a major overhaul. It now used completely different materials, and the right side is redesigned to house a passenger airbag\n\nBULLET::::- Ford introduces the Probe in the European market\n\nBULLET::::- Foglights are slightly redesigned and now made by a different company\n\nBULLET::::- The stripe on the dashboard is gone, but remains on the interior panels\n\nBULLET::::- The stripe color on Probe GT's door panels are changed from red to the color of the car's interior\n\nBULLET::::- New Ford CD4E automatic transaxle for base and SE models\n\nBULLET::::- Air conditioning units are switched from R-12 to R-134A\n", "In aircraft windshields, an electric current is applied through a conducting layer of tin(IV) oxide to generate heat to prevent icing. A similar system for automobile windshields, introduced on Ford vehicles as \"Quickclear\" in Europe (\"InstaClear\" in North America) in the 1980s and through the early 1990s, used this conductive metallic coating applied to the inboard side of the outer layer of glass. Other glass manufacturers utilize a grid of micro-thin wires to conduct the heat especially on the later European Ford Transit vans. These systems are more typically utilized by European auto manufacturers such as Jaguar and Porsche.\n", "New for this generation, cabin air filters were installed, and the filters can be accessed from behind an access panel easily accessed from inside the glove compartment.\n", "Secondary defoggers, such as those used on a vehicle's backglass and/or side view mirrors, often consist of a series of parallel linear resistive conductors in or on the glass. When power is applied, these conductors heat up, thawing ice and evaporating condensation from the glass. These conductors may be composed of a silver-ceramic material printed and baked onto the interior surface of the glass, or may be a series of very fine wires embedded within the glass. The surface-printed variety is prone to damage by abrasion, but can be repaired easily with a conductive paint material.\n\nSection::::Automation.\n", "A \"wiperless windshield\" is a windshield that uses a mechanism other than wipers to remove snow and rain from the windshield. The concept car Acura TL features a wiperless windshield using a series of jet nozzles in the cowl to blow pressurized air onto the windshield. Also several glass manufacturers have experimented with nano type coatings designed to repel external contaminants with varying degrees of success but to date none of these have made it to commercial applications.\n\nSection::::Repair of stone-chip and crack damage.\n", "In the United States, the Energy Code sets certain standards for performance of products installed in homes. These codes now require Low-E Glass in all residential homes.\n\nLow-E is a film that is several layers of metal poured microscopically thin over the surface of newly poured glass. This heat reflective film is transparent but can be darker or lighter depending on the type and manufacturer. This data is rated in Visible Light Transmission. Darker glass with heavier Low–E will have less VT. The NFRC rates most energy star rated window manufacturers.\n", "Thus the application of carb heat is manifested as a reduction in engine power, up to 15 percent. 
If ice has built up, there will then be a gradual increase in power as the air passage is freed up by the melting ice. The amount of power regained is an indication of the severity of ice buildup.\n", "The term \"windshield\" is used generally throughout North America. The term \"windscreen\" is the usual term in the British Isles and Australasia for all vehicles. In the US \"windscreen\" refers to the mesh or foam placed over a microphone to minimize wind noise, while a \"windshield\" refers to the front window of a car. \n\nIn the UK, the terms are reversed, although generally, the foam screen is referred to as a microphone shield, and not a windshield.\n", "Today’s windshields are a safety device just like seatbelts and airbags. The urethane sealant is protected from UV in sunlight by a band of dark dots around the edge of the windshield. The darkened edge transitions to the clear windshield with smaller dots to minimize thermal stress in manufacturing. The same band of darkened dots is often expanded around the rearview mirror to act as a sunshade.\n\nSection::::Other aspects.\n", "Ice forming on roads is a dangerous winter hazard. Black ice is very difficult to see, because it lacks the expected frosty surface. Whenever there is freezing rain or snow which occurs at a temperature near the melting point, it is common for ice to build up on the windows of vehicles. Driving safely requires the removal of the ice build-up. Ice scrapers are tools designed to break the ice free and clear the windows, though removing the ice can be a long and laborious process.\n", "One problem with the system is that the heating elements can sometimes stop working, leaving one side of the screen uncleared. If this is the result of burn out, total replacement of the screen is the only remedy as the wires are actually embedded in the glass, (as opposed to a rear defogger, which can usually be repaired with conductive paint). The problem is sometimes caused by the power cable coming loose from its mounting near the base of the screen. The loose cable then catches on the windscreen wiper mechanism and fatigues over time. The remedy is then to reattach the wire to the foil at the base of the screen, but this can be problematic since the system requires such high current (~30 amps). Some owners have been known to smash the screen and submit a fraudulent insurance claim for stone damage, as Quickclear screens are expensive replacement parts and many insurance policies offer a low excess (deductible) for windscreen damage. This type of screen is also known to cause serious problems with tollway recording tags unless the tag is placed in the correct area behind the rearview mirror.\n", "In February 2012, Nissan recalled 2,983 MY 2012 versions of the Murano and Rogue, because the tire pressure monitoring system was not activated when the cars were assembled.\n\nSection::::First generation (2007–2015).:Rogue Select.\n", "Starting with this generation, cabin air filters (also known as pollen filters) were installed as standard equipment and are located behind the glove compartment internationally.\n", "Section::::In-flight aircraft de-icing.:Electric systems.\n", "An improvement on the T-top cars was introduced mid year on all F-bodies. 
T-top cars now came with new seals which greatly reduced leaks into the passenger compartment.\n", "Poorly performing weatherstripping should be reported to the car dealership if the vehicle is under warranty, as fixes may be known.\n\nSection::::Automotive weatherstripping.:Materials.\n\nAutomotive weatherstripping is commonly made of EPDM rubber, a thermoplastic elastomer (TPE) mix of plastic and rubber, and a thermoplastic olefin (TPO) polymer/filler blend. Sunroof weatherstripping can also be made from silicone due to the extreme heat encountered by automobile roofs.\n", "A minimum of one operational windshield wiper, and also a windshield defogging/demisting system (or anti-fog films) that can keep the windshield clear during wet sessions must be installed on all cars, and used when necessary. The wiper blade(s) and arm(s) may be removed for dry sessions.\n\nSection::::Current Series Format.:Event Protocol.:- Multiple Class Race / Grid.\n", "Section::::Artificial sources of interference.\n\nIn automotive GPS receivers, metallic features in windshields, such as defrosters, or car window tinting films can act as a Faraday cage, degrading reception just inside the car.\n", "Another possible problem is a leak in one of the connections to the heater core. This may first be noticeable by smell (ethylene glycol is widely used as coolant and has a sweet smell); it may also cause (somewhat greasy) fogging of the windshield above the windshield heater vent. Glycol may also leak directly into the car, causing wet upholstery or carpeting.\n\nElectrolysis can cause excessive corrosion leading to the heater core rupturing. Coolant will spray directly into the passenger compartment followed with white colored smoke, a significant driving hazard.\n", "The new Mustang’s interior body style resembles that of an airplane cockpit boasting an increased body width, and a larger cabin similar to the Ford GT. This gives more room in the back of the vehicle for rear passengers. “The changeable ambient lighting continues, but it will spread beyond the dials, cup holders, and speakers to other points within the cabin, something also found in European luxury cars like the new S-class.” \n\nA metal tag on the dashboard bears the Ford Mustang \"Running Horse\" insignia.\n\nSection::::Body.:Technology.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
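The comment above is, at bottom, a dew-point statement: the glass fogs when its surface temperature falls below the dew point of the cabin air, and the defroster helps by warming the glass and drying the air. A rough Python sketch using the Magnus approximation for dew point (the coefficients are a commonly published approximation; the cabin readings are invented examples):

    import math

    def dew_point_c(air_temp_c, relative_humidity_pct):
        # Magnus-formula approximation of the dew point in degrees C.
        a, b = 17.62, 243.12
        gamma = a * air_temp_c / (b + air_temp_c) + math.log(relative_humidity_pct / 100.0)
        return b * gamma / (a - gamma)

    # Invented cabin conditions: 20 C air at 70% RH, glass chilled to 5 C.
    dp = dew_point_c(20.0, 70.0)          # ~14.4 C
    glass_temp = 5.0
    print(f"cabin dew point: {dp:.1f} C")
    print("glass fogs" if glass_temp < dp else "glass stays clear")

Warming the glass raises it above the dew point; running the air conditioning lowers the cabin air's humidity, and with it the dew point itself. Both attack the same inequality.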
2018-14299
How does one actually make money off "Stocks" and investments?
There are a lot of ways to make money (or lose money) by investing in stocks. What he is talking about is called a 'short,' and he is almost definitely making it up. To make money on an equity like a company stock that falls in value, you pay a small fee to 'borrow' someone else's stock and you sell it. Then, when the price falls, you buy it back at that lower price and return it to the guy you borrowed it from. You sold it at a high value and bought it back when the value fell; you keep the difference. Tesla stock is the most shorted stock in the world right now because the company has never made a profit, repeatedly misses production deadlines, and its CEO has behaved erratically over the last year and probably committed a crime last week. I hope that helps.
[ "Section::::Trading.:Buying.\n\nThere are various methods of buying and financing stocks, the most common being through a stockbroker. Brokerage firms, whether they are a full-service or discount broker, arrange the transfer of stock from a seller to a buyer. Most trades are actually done through brokers listed with a stock exchange.\n", "BULLET::::- \"Business Leaders and Success, 55 Top Business Leaders and How They Achieved Greatness\", 2004, McGraw-Hill\n\nBULLET::::- \"How to Make Money in Stocks: Desk Diary 2005\", Wiley; Spiral edition (September 6, 2004),\n\nBULLET::::- \"Reminiscences of a Stock Operator\" (by Edwin Lefèvre), William J. O'Neil (Foreword), Wiley; Illustrate edition (September 2004),\n\nBULLET::::- \"How to Make Money in Stocks – A Winning System in Good Times Or Bad\", McGraw-Hill, (4th ed., May 18, 2009)\n", "As Money Observer explains, an investor has two options ‘a fixed income-type product promising to pay between 7.5 and 10 per cent per year, [or] an equity set-up whereby investors take three-quarters of net profits.’\n\nSection::::Dragon's Den.\n", "Via Trading began purchasing surplus inventory from retailers and wholesalers, but after realizing a void of good wholesale suppliers, the Stambouli brothers made a shift to the wholesale end of the business. Via Trading purchases large quantities of items at discount prices, similar to the business owners featured on A&E's Storage Wars, who purchase trailers of merchandise, and then resell the items at discount prices. Brandon Bernier of Storage Hunters purchased approximately $4,000 of merchandise from Via Trading in under 3 weeks in 2012.\n", "The DIY investing process is similar to many other DIY processes and it mirrors steps found in the financial planning process used by the Certified Financial Planning Board. Whether a DIY investor or a certified professional, investing in the stock market involves risk and unpredictable fluctuations. It has been said, \"a blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts.\" \n\nBULLET::::1. Develop financial and investment literacy.\n\nBULLET::::2. Outline objectives, desires, needs and priorities.\n\nBULLET::::3. Gather, analyze and consider relevant information.\n", "For-profit financial education companies exist that offer programs of study (also referred to as \"systems\" or \"courses\" – the terminology varies) on stock market education. Unlike colleges that prepare students for working in the financial arena, these companies educate students with a more narrow focus – how to trade derivatives for the purpose of personal investing. Examples of such companies are thinkorswim (formerly Investools), Invested Central, Trading Advantage, Global Finance School, and Rich Dad's Education (based on the \"Rich Dad, Poor Dad\" book by Robert Kiyosaki). These types of companies offer both classroom settings for learning and distance education programs.\n", "Hall started buying shares a few at a time. His ability to translate complex financial concepts into simple English through the use of slices of cake led to him becoming director of course development at Leo Fleur, a company selling training materials for Wall Street examinations. 
In 1986 he became director of marketing for the Chicago-based Longman Financial Services Institute in creating and implementing marketing campaigns for training programmes and products and consequently was executive director of the New York Institute of Finance.\n", "A common misconception regarding DIY investing is that it mainly involves the ongoing research, selection and management of individual securities such as stocks and/or bonds. However, a managed fund, a group of securities packaged together as one investment product or “fund” and managed by a portfolio manager is available to simplify the investing process. Mutual funds, exchange-traded funds (ETFs), fund of funds (FoFs) and target date funds (TDFs) are examples of managed funds. Therefore, given the generous investment product landscape, DIY investors have various portfolio management options ranging from simple to complex.\n\nDIY Investor Types \n", "Additionally, many choose to invest via the index method. In this method, one holds a weighted or unweighted portfolio consisting of the entire stock market or some segment of the stock market (such as the S&P 500 or Wilshire 5000). The principal aim of this strategy is to maximize diversification, minimize taxes from too frequent trading, and ride the general trend of the stock market (which, in the U.S., has averaged nearly 10% per year, compounded annually, since World War II).\n", "The Other Side of Wall Street: In Business it Pays to Be an Animal, In Life It Pays to Be Yourself (2011: FT Press) was written by Harrison to relate the story of how and why he left a lucrative career in trading and finance to create a company dedicated to informing both Wall Street professionals and the general public about the complexities, trends and harsh realities of the stock market.\n", "Section::::Overseas Investments.\n", "Baird’s Fixed Income Sales & Trading team is active in the primary and secondary markets for taxable and tax-exempt securities. The firm’s trading desks traded $233 billion in face value of taxable and tax-exempt bonds for institutional and individual clients in 2010.\n\nSection::::Businesses.:Private Equity.\n", "Hutson originally planned to work full-time at Boeing and part-time at the magazine as a supplement to his trading. In a year, \"Technical Analysis of Stocks & Commodities\" had 1,500 subscribers and cost $250 for an annual subscription. In 1984, its annual price dropped and subscribers increased to over 10,000. The magazine started occupying 60 to 70 hours of Hutson's time every week, so he resigned from Boeing in 1984. By 1988, it had 12 employees and was headquartered in the building of the no longer existing Fauntleroy Elementary School.\n", "There are other ways of buying stock besides through a broker. One way is directly from the company itself. If at least one share is owned, most companies will allow the purchase of shares directly from the company through their investor relations departments. However, the initial share of stock in the company will have to be obtained through a regular stock broker. Another way to buy stock in companies is through Direct Public Offerings which are usually sold by the company itself. A direct public offering is an initial public offering in which the stock is purchased directly from the company, usually without the aid of brokers.\n", "Section::::Subsidiaries.:StockPickr.\n\nStockPickr is a financial services site. It is one of the first sites to incorporate both investment ideas as well as social networking. 
This community, known as the Stock Idea Network, combines insight from professional investors as well as community members. Users are able to share, debate, and otherwise discuss information and ideas related to finance on the site's message boards, which are frequented by financial professionals.\n\nStockPickr was named one of Time.com's 50 best websites in 2007. There are an estimated 150,000-plus user-generated portfolios, as well as portfolios of professional finance professionals on the site.\n\nSection::::Subsidiaries.:BankingMyWay.com.\n", "He opened his first store, Swell, with money he made by borrowing $90 from his bookkeeper and playing craps for 24 hours straight in Las Vegas, and with contributions from several investors. He opened a second store, Ether, in 1994.\n\nSection::::Hush Puppies.\n", "Section::::Other mentions.\n\n\"Fool.com: Drip Portfolio\" cites that through the Temper of the Times service, \"anyone can buy initial shares of more than 1,100 companies in order to be enrolled in their Drips.\"\n\n\"How a Fool can invest in Drips\" again cites Temper of the Times as an easy way for new investors to enroll in DRIPs.\n\nThe \"Wall Street Journal\" mentioned Temper of the Times as an organization that will \"help you enroll in DRIPs by buying the necessary shares and then getting you signed up.\"\n", "BULLET::::- Start by identifying a return driver.\n\nBULLET::::- Develop the concept into a trading strategy.\n\nBULLET::::- Test the trading strategy across many markets and time frames.\n\nBULLET::::- Combine hundreds of different strategy-market combinations into a diversified portfolio.\n", "Daily Graphs was launched by William O'Neil to produce Daily Graphs, a printed book of stock charts delivered weekly to subscribers in 1972. In 1998, O'Neil launched Daily Graphs Online as a comprehensive online equity research tool and an extension of the Daily Graphs business he launched in 1972. In 2010, Daily Graphs Inc. and its service was re-branded as MarketSmith.\n", "M1 Finance’s founder, Brian Barnes, said on the company blog that he has been investing since he was in fifth grade. When his teacher assigned his class a mock-investing project, his parents offered to give him a small sum to invest in real markets.\n", "Section::::George Feick & Company.\n", "Mehta gradually bought stakes in these companies over a period of 18 months, buying out the last investor in 2006. He states that he purchased the business from 30–40 owners in 2005.\n", "In 2007, Sykes launched TimothySykes.com. It serves as his own personal blog and a website dedicated to teaching penny stock trading.\n\nIn 2009, Sykes launched Investimonials.com, a website devoted to collecting user reviews of financial services, videos, and books, as well as financial brokers.\n\nSykes co-founded Profit.ly in 2011, a social service with about 20,000 users that provides stock trade information online. Sykes said the service serves two purposes: \"creating public track records for gurus, newsletter writers and students and allowing everyone to learn from both the wins and losses of other traders to benefit the entire industry.\"\n", "Monetizing a lifestream was first introduced by author Tim Ferriss. In his books he presents instructions for designing a business that can self-develop, being convinced that one should live the life he wants the moment he wants instead of waiting for something to happen. 
With this belief, he proposes selling digital information products that can be automated and turned into profit.\n\nSection::::Lifecasting.\n", "DiPascali, under the direction of Madoff, created a purported investment strategy that was referred to as a \"split strike conversion\" (\"Split Strike\") strategy, and marketed this to clients beginning in or around the early 1990s. Clients whose funds were to be managed within the strategy were promised that:\n\nBULLET::::1. Their funds would be invested in a pool of around 35–50 common stocks from the Standard & Poor's 100 Index (S&P 100)\n\nBULLET::::2. The collection of stocks would mimic the price movements of the Index\n" ]
[]
[]
[ "normal" ]
[ "Stocks in and of themselves make money." ]
[ "false presupposition", "normal" ]
[ "Stocks are bought at a low value and sold at a higher value in order to gain money." ]
2018-02761
How do identical cells in a fertilized egg differentiate to produce different body parts?
All cells have the same genome, but not all genes are activated. External chemical cues determine which genes are turned on and off to specialize cells.
[ "As an embryo develops from a fertilized egg, the single egg cell splits into many cells, which grow in number and migrate to the appropriate locations inside the embryo at appropriate times during development. As the embryo's cells grow in number and migrate, they also differentiate into an increasing number of different cell types, ultimately turning into the stable, specialized cell types characteristic of the adult organism. Each of the cells in an embryo contains the same genome, characteristic of the species, but the level of activity of each of the many thousands of genes that make up the complete genome varies with, and determines, a particular cell's type (e.g. neuron, bone cell, skin cell, muscle cell, etc.).\n", "Although all the cells of an organism contain the same DNA, there can be hundreds of different types of cells in a single organism. These diverse cell shapes, behaviors and functions are created and maintained by tissue-specific gene expression patterns and these can be modified by internal and external environmental conditions.\n\nSection::::Mechanisms of constructive development.:Physical properties of cells and tissues.\n", "Section::::Mechanisms.:Structural inheritance.\n\nIn ciliates such as \"Tetrahymena\" and \"Paramecium\", genetically identical cells show heritable differences in the patterns of ciliary rows on their cell surface. Experimentally altered patterns can be transmitted to daughter cells. It seems existing structures act as templates for new structures. The mechanisms of such inheritance are unclear, but reasons exist to assume that multicellular organisms also use existing cell structures to assemble new ones.\n\nSection::::Mechanisms.:Nucleosome positioning.\n", "During organogenesis, molecular and cellular interactions between germ layers, combined with the cells' developmental potential, or competence to respond, prompt the further differentiation of organ-specific cell types. For example, in neurogenesis, a subpopulation of ectoderm cells is set aside to become the brain, spinal cord, and peripheral nerves. Modern developmental biology is extensively probing the molecular basis for every type of organogenesis, including angiogenesis (formation of new blood vessels from pre-existing ones), chondrogenesis (cartilage), myogenesis (muscle), osteogenesis (bone), and many others.\n\nSection::::Development.:Plant embryos.\n", "Cells, embryonic cells in particular, are sensitive to the presence or absence of specific chemical molecules in their surroundings. This is the basis for cell signaling, and during embryogenesis cells “talk to each other” by emitting and receiving signalling molecules. This is how development of the embryo's structure is organized and controlled. If cells of a particular line have been removed from the embryo and are growing alone in a Petri dish in the lab, and some cell signaling chemicals are put in the growth medium bathing the cells, this can induce the cells to differentiate into a different, “daughter” cell type, mimicking the differentiation process that occurs naturally in the developing embryo. Artificially inducing differentiation in this way can yield clues to the correct placement of a particular cell line in the embryogenic tree, by observing what kind of cell results from inducing the differentiation.\n", "Fetomaternal microchimerism has been shown in experimental investigations of whether fetal cells can cross the blood brain barrier in mice. 
The properties of these cells allow them to cross the blood brain barrier and target injured brain tissue. This mechanism is possible because umbilical cord blood cells express some proteins similar to neurons. When these umbilical cord blood cells are injected in rats with brain injury or stroke, they enter the brain and express certain nerve cell markers. Due to this process, fetal cells could enter the brain during pregnancy and become differentiated into neural cells. Fetal microchimerism can occur in the maternal mouse brain, responding to certain cues in the maternal body.\n", "Section::::Multicellularity.\n\nSection::::Multicellularity.:Cell specialization.\n\nMulticellular organisms are organisms that consist of more than one cell, in contrast to single-celled organisms.\n\nIn complex multicellular organisms, cells specialize into different cell types that are adapted to particular functions. In mammals, major cell types include skin cells, muscle cells, neurons, blood cells, fibroblasts, stem cells, and others. Cell types differ both in appearance and function, yet are genetically identical. Cells are able to be of the same genotype but of different cell type due to the differential expression of the genes they contain.\n", "\"Syngenic\" or \"isogenic\" cells are isolated from genetically identical organisms, such as twins, clones, or highly inbred research animal models.\n\n\"Primary\" cells are from an organism.\n\n\"Secondary\" cells are from a cell bank.\n", "One example of an epigenetic change in eukaryotic biology is the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. In other words, as a single fertilized egg cell – the zygote – continues to divide, the resulting daughter cells change into all the different cell types in an organism, including neurons, muscle cells, epithelium, endothelium of blood vessels, etc., by activating some genes while inhibiting the expression of others.\n", "Development begins when a sperm fertilizes an egg and creates a single cell that has the potential to form an entire organism. In the first hours after fertilization, this cell divides into identical cells. In humans, approximately four days after fertilization and after several cycles of cell division, these cells begin to specialize, forming a hollow sphere of cells, called a blastocyst. The blastocyst has an outer layer of cells, and inside this hollow sphere, there is a cluster of cells called the inner cell mass. The cells of the inner cell mass go on to form virtually all of the tissues of the human body. Although the cells of the inner cell mass can form virtually every type of cell found in the human body, they cannot form an organism. These cells are referred to as pluripotent.\n", "For multicellular organisms that develop in a womb, the physical interference or presence of other similarly developing organisms such as twins can result in the two cellular masses being integrated into a larger whole, with the combined cells attempting to continue to develop in a manner that satisfies the intended growth patterns of both cell masses. The two cellular masses can compete with each other, and may either duplicate or merge various structures. 
This results in conditions such as conjoined twins, and the resulting merged organism may die at birth when it must leave the life-sustaining environment of the womb and must attempt to sustain its biological processes independently.\n", "For a number of cell cleavages (the specific number depends on the type of organism) all the cells of an embryo will be morphologically and developmentally equivalent. This means, each cell has the same development potential and all cells are essentially interchangeable, thus establishing an equivalence group. The developmental equivalence of these cells is usually established via transplantation and cell ablation experiments.\n", "Each of the two daughter cells resulting from that mitosis has one replica of each chromatid that was replicated in the previous stage. Thus, they are genetically identical.\n\nSection::::Fertilization age.\n\nFertilization is the event most commonly used to mark the zero point in descriptions of prenatal development of the embryo or fetus. The resultant age is known as \"fertilization age\", \"fertilizational age\", \"embryonic age\", \"fetal age\" or \"(intrauterine) developmental (IUD) age\".\n", "Exploratory processes are selective processes that operate within individual organisms during their lifetimes. In many animals, the vascular, immune and nervous systems develop by producing a variety of forms, and the most functional solutions are selected for and retained, while others are lost. For example, the ‘shape’ of the circulatory system is constructed according to the oxygen and nutrient needs of tissues, rather than being genetically predetermined. Likewise, the nervous system develops through axonal exploration. Initially muscle fibers are connected to multiple neurons but synaptic competition selects certain connections over others to define the mature pattern of muscle innervation. The shape of a cell is determined by the structure of its cytoskeleton. A major element of the cytoskeleton are microtubules, which can grow in random directions from their origin. Microtubule-associated proteins can aid or inhibit microtubule growth, guide microtubules to specific cellular locations and mediate interactions with other proteins. Therefore, microtubules can be stabilized in new configurations that give rise to new cell shapes (and potentially new behaviors or functions) without changes to the microtubule system itself.\n", "In the laboratory, human embryonic stem cells growing in culture can be induced to differentiate into progenitor cells by exposing the hESCs to chemicals (e.g. protein growth and differentiation factors) present in the developing embryo. The progenitor cells so produced may then be isolated into pure colonies, grown in culture, and then classified according to type and assigned positions in the embryogenic tree. Such purified cultures of progenitor cells may be used in research to study disease processes in vitro, as diagnostic tools, or potentially developed for use in regenerative medicine therapies.\n\nSection::::Regenerative medicine.\n", "Cellular differentiation is the process where a cell changes from one cell type to another. Usually, the cell changes to a more specialized type. Differentiation occurs numerous times during the development of a multicellular organism as it changes from a simple zygote to a complex system of tissues and cell types. 
Differentiation continues in adulthood as adult stem cells divide and create fully differentiated daughter cells during tissue repair and during normal cell turnover. Some differentiation occurs in response to antigen exposure. Differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals. These changes are largely due to highly controlled modifications in gene expression and are the study of epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. Thus, different cells can have very different physical characteristics despite having the same genome.\n", "Section::::Method.:Embryo selection.\n", "Cells in an otherwise similar population can vary in their size and morphology due to differences in function, changes in metabolism, or simply being in different phases of the cell cycle or some other factor. For example, stem cells can divide asymmetrically, which means the two resultant daughter cells may have different fates (specialized functions), and can differ from each other in size or shape. Researchers who study development may be interested in tracking the physical characteristics of the individual progeny in a growing population in order to understand how stem cells differentiate into a complex tissue or organism over time.\n", "Cell differentiation is usually the final stage of development, preceded by several states of commitment which are not visibly differentiated. A single tissue, formed from a single type of progenitor cell or stem cell, often consists of several differentiated cell types. Control of their formation involves a process of lateral inhibition, based on the properties of the Notch signaling pathway. For example, in the neural plate of the embryo this system operates to generate a population of neuronal precursor cells in which NeuroD is highly expressed.\n\nSection::::Developmental processes.:Regeneration.\n", "During fertilization, the sperm adds either an X (female) or a Y (male) chromosome to the X in the ovum. This determines the genetic sex of the embryo. During the first weeks of development, genetic male and female fetuses are \"anatomically indistinguishable\", with primitive gonads beginning to develop during approximately the sixth week of gestation. The gonads, in a \"bipotential state\", may develop into either testes (the male gonads) or ovaries (the female gonads), depending on the consequent events. Through the seventh week, genetically female and genetically male fetuses appear identical.\n", "This process continues, so that each generation is half (or hemi-) clonal on the mother's side and has half new genetic material from the father's side.\n\nThis form of reproduction is seen in some live-bearing fish of the genus \"Poeciliopsis\" as well as in some of the \"Pelophylax\" spp. (\"green frogs\" or \"waterfrogs\"):\n\nBULLET::::- \"P. kl. esculentus\" (edible frog): \"P. lessonae\" × \"P. ridibundus\",\n\nBULLET::::- \"P. kl. grafi\" (Graf's hybrid frog): \"P. perezi\" × \"P. ridibundus\"\n\nBULLET::::- \"P. kl. hispanicus\" (Italian edible frog) – unknown origin: \"P. bergeri\" × \"P. ridibundus\" or \"P. kl. esculentus\"\n\nand perhaps in \"P. 
demarchii\".\n", "The gastrula with its blastopore soon develops three distinct layers of cells (the germ layers) from which all the bodily organs and tissues then develop:\n\nBULLET::::- The innermost layer, or endoderm, give rise to the digestive organs, the gills, lungs or swim bladder if present, and kidneys or nephrites.\n\nBULLET::::- The middle layer, or mesoderm, gives rise to the muscles, skeleton if any, and blood system.\n\nBULLET::::- The outer layer of cells, or ectoderm, gives rise to the nervous system, including the brain, and skin or carapace and hair, bristles, or scales.\n", "The second stage of differentiation involves the alignment of the myoblasts with one another. Studies have shown that even rat and chick myoblasts can recognise and align with one another, suggesting evolutionary conservation of the mechanisms involved.\n\nThe third stage is the actual cell fusion itself. In this stage, the presence of calcium ions is critical. In mice, fusion is aided by a set of metalloproteinases called meltrins and a variety of other proteins still under investigation. Fusion involves recruitment of actin to the plasma membrane, followed by close apposition and creation of a pore that subsequently rapidly widens.\n", "Section::::Methods and mechanisms of transformation in laboratory.\n\nSection::::Methods and mechanisms of transformation in laboratory.:Bacterial.\n\nArtificial competence can be induced in laboratory procedures that involve making the cell passively permeable to DNA by exposing it to conditions that do not normally occur in nature. Typically the cells are incubated in a solution containing divalent cations (often calcium chloride) under cold conditions, before being exposed to a heat pulse (heat shock). Calcium chloride partially disrupts the cell membrane, which allows the recombinant DNA enter the host cell. Cells that are able to take up the DNA are called competent cells. \n", "BULLET::::3. Three of these micronuclei disintegrate. The fourth undergoes mitosis.\n\nBULLET::::4. The two cells exchange a micronucleus.\n\nBULLET::::5. The cells then separate.\n\nBULLET::::6. The micronuclei in each cell fuse, forming a diploid micronucleus.\n\nBULLET::::7. Mitosis occurs three times, giving rise to eight micronuclei.\n\nBULLET::::8. Four of the new micronuclei transform into macronuclei, and the old macronucleus disintegrates.\n\nBULLET::::9. Binary fission occurs twice, yielding four identical daughter cells.\n\nSection::::DNA rearrangements (gene scrambling).\n" ]
[ "Cells determine what the cell will produce. " ]
[ "What a cell produces is determined by chemical cues that turn genes on and off. " ]
[ "false presupposition" ]
[ "Cells determine what the cell will produce. ", "Cells determine what the cell will produce. " ]
[ "normal", "false presupposition" ]
[ "What a cell produces is determined by chemical cues that turn genes on and off. ", "What a cell produces is determined by chemical cues that turn genes on and off. " ]
2018-06676
Once a cyst or abscess is removed from the body, if left alone does it continue to grow?
Only if it is kept in an appropriate nutrient solution. Otherwise no: once removed, it has no access to the energy the cells need to do anything, and the cells making it up will die.
[ "The more common course for surgical treatment is for the cyst to be surgically excised (along with pilonidal sinus tracts). Post-surgical wound packing may be necessary, and packing typically must be replaced daily for 4 to 8 weeks. In some cases, two years may be required for complete granulation to occur. Sometimes the cyst is resolved via surgical marsupialization.\n", "Patients with third-ventricular colloid cysts become symptomatic when the tumor enlarges rapidly, causing cerebrospinal fluid (CSF) obstruction, ventriculomegaly, and increased intracranial pressure. Some cysts enlarge more gradually, however, allowing the patient to accommodate the enlarging mass without disruption of CSF flow, and the patient remains asymptomatic. In these cases, if the cyst stops growing, the patient can maintain a steady state between CSF production and absorption and may not require neurosurgical intervention.\n\nSection::::Diagnosis.\n", "Asymptomatic cysts, such as those discovered incidentally on neuroimaging done for another reason, may never lead to symptomatic disease and in many cases do not require therapy. Calcified cysts have already died and involuted. Further antiparasitic therapy will be of no benefit.\n", "Section::::Treatment.\n\nTreatment is often largely dependent on the type of cyst. Asymptomatic cysts, termed pseudocysts, normally require active monitoring with periodic scans for future growth. Symptomatic (producing or showing symptoms) cysts may require surgical removal if they are present in areas where brain damage is unavoidable, or if they produce chronic symptoms disruptive to the quality of life of the patient. Some examples of cyst removal procedures include: permanent drainage, fenestration, and endoscopic cyst fenestration. \n\nSection::::Treatment.:Permanent drainage.\n", "While Bartholin cysts can be quite painful, they are not life-threatening. New cysts cannot absolutely be prevented from forming, but surgical or laser removal of a cyst makes it less likely that a new one will form at the same site. Those with a cyst are more likely than those without a cyst to get one in the future. They can recur every few years or more frequently. Many women who have marsupialization done find that the recurrences may slow, but do not actually stop.\n\nSection::::Epidemiology.\n", "Marsupialization could also be performed, which involves suturing the edges of the gingiva surrounding the cyst to remain open. The cyst then drains its contents and heal without being prematurely closed. The end result is the same as the cystostomy, bone regeneration. For both a cystostomy and marsupialization, root resectioning may also be required in cases where root resorption has occurred.\n\nSection::::Epidemiology.\n", "Some unicameral bone cysts may spontaneously resolve without medical intervention. Specific treatments are determined based on size of the cyst, strength of the bone, medical history, extent of the disease, activity level, symptoms an individual is experiencing, and tolerance for specific medications, procedures, or therapies. The types of methods used to treat this type of cyst are curettage and bone grafting, aspiration, steroid injections, and bone marrow injections. 
Watchful waiting and activity modifications are the most common nonsurgical treatments that will help resolve and help prevent unicameral bone cysts from occurring and reoccurring.\n\nAneurysmal Bone Cyst \n", "The overall prevalence of cystinuria is approximately 1 in 7,000 neonates (from 1 in 2,500 neonates in Libyan Jews to 1 in 100,000 among Swedes).\n\nSection::::Pathophysiology.\n", "Tissue cysts can be maintained in host tissue for the lifetime of the animal. However, the perpetual presence of cysts appears to be due to a periodic process of cyst rupturing and re-encysting, rather than a perpetual lifespan of individual cysts or bradyzoites. At any given time in a chronically infected host, a very small percentage of cysts are rupturing, although the exact cause of this tissue cysts rupture is, as of 2010, not yet known.\n", "If there are no signs of infection, a cyst of Montgomery can be observed, because more than 80% resolve spontaneously, over only a few months. However, in some cases, spontaneous resolution may take up two years. In such cases, a repeat ultrasonography may become necessary. If, however, the patient has signs of an infection, for example reddening (erythema), warmth, pain and tenderness, a treatment for mastitis can be initiated, which may include antibiotics and non-steroidal anti-inflammatory drugs (NSAIDs). With treatment, inflammatory changes usually disappear quickly. In rare cases, drainage may become necessary. A surgical treatment of a cyst of Montgomery, i.e. a resection, may become necessary only if a cyst of Montgomery persists, or the diagnosis is questioned clinically.\n", "Cancer-related cysts are formed as a defense mechanism for the body, following the development of mutations that lead to an uncontrolled cellular division. Once that mutation has occurred, the affected cells divide incessantly (and become known as cancerous), forming a tumour. The body encapsulates those cells to try to prevent them from continuing their division and to try to contain the tumour, which becomes known as a cyst. That said, the cancerous cells still may mutate further and gain the ability to form their own blood vessels, from which they receive nourishment before being contained. Once that happens, the capsule becomes useless and the tumour may advance from benign to a cancer.\n", "If there are no symptoms, no treatment is typically needed. In those with symptoms, drainage is recommended. The preferred method is the insertion of a Word catheter for four weeks, as recurrence following simple incision and drainage is common. A surgical procedure known as marsupialization may be used or, if the problems persist, the entire gland may be removed. Removal is sometimes recommended in those older than 40 to ensure cancer is not present. Antibiotics are not generally needed.\n", "The severity of the symptoms associated with porencephaly varies significantly across the population of those affected, depending on the location of the cyst and damage of the brain. For some patients with porencephaly, only minor neurological problems may develop, and those patients can live normal lives. Therefore, based on the level of severity, self-care is possible, but for the more serious cases lifelong care will be necessary. 
For those that have severe disability, early diagnosis, medication, participation in rehabilitation related to fine-motor control skills, and communication therapies can significantly improve the symptoms and ability of the patient with porencephaly to live a normal life. Infants with porencephaly that survive, with proper treatment, can display proper communication skills, movement, and live a normal life.\n", "BULLET::::2. Cyst development stage: Epithelial cells form strands and are attracted to the area which contains exposed connective tissue and foreign substances. Several strands from each rest converge and surround the abscess or foreign body.\n\nBULLET::::3. Cyst growth stage: Fluid flows into the cavity where the forming cyst is growing due to the increased osmolality of the cavity in relation to surrounding serum in capillaries. Pressure and size increase.\n\nThe definitive mechanism by which cysts grow is under debate; several theories exist.\n\nSection::::Mechanisms.:Biomechanical theory.\n", "When treatment is required, this is usually by surgical removal of the cyst. There are four ways in which cysts are managed:\n\nBULLET::::- Enucleation—removal of the entire cyst\n\nBULLET::::- Marsupialization—the creation of a window into the wall of a cyst, allowing the contents to be drained. The window is left open, and the lack of pressure within the cyst causes the lesion to shrink, as the surrounding bone starts to fill in again.\n", "Cysts can present in humans anywhere from a few months to a few years after ingestion. Once the cyst develops, symptoms associated with the cyst develop rapidly.\n\nSection::::Morphology.\n\nThe following are pictures of coenurosis cysts, some which have been surgically removed from humans and others that have been removed from animals after death.\n\nSection::::Life cycle.\n", "Treatment may not be necessary when Bartholin's cysts cause no symptoms. Small, asymptomatic cysts can be observed over time to assess their development. In cases that require intervention, a catheter may be placed to drain the cyst, or the cyst may be surgically opened to create a permanent pouch (marsupialization). Intervention has a success rate of 85%, regardless of the method used, for the achievement of absence of swelling and discomfort and the appearance of a freely draining duct.\n", "Cystic hygromas that develop in the third trimester, after thirty weeks gestation, or in the postnatal period are usually not associated with chromosome abnormalities. There is a chance of recurrence after surgical removal of the cystic hygroma. The chance of recurrence depends on the extent of the cystic hygroma and whether its wall was able to be completely removed.\n\nTreatments for removal of cystic hygroma are surgery or sclerosing agents which include:\n\nBULLET::::- Bleomycin\n\nBULLET::::- Doxycycline\n\nBULLET::::- Ethanol (pure)\n\nBULLET::::- Picibanil (OK-432)\n\nBULLET::::- Sodium tetradecyl sulfate\n\nSection::::See also.\n\nBULLET::::- Branchial cleft cyst\n\nBULLET::::- Ranula\n\nBULLET::::- Thyroglossal duct cyst\n\nBULLET::::- Lymphangioma\n", "Various hypotheses have been advanced to explain the pathogenesis of spinal dermoids, the origin of which may be acquired or congenital.\n\nBULLET::::- Acquired or iatrogenic dermoids may arise from the implantation of epidermal tissue into the subdural space i.e. spinal cutaneous inclusion, during needle puncture (e.g. 
lumbar puncture) or during surgical procedures on closure of a dysraphic malformation.\n", "The diagnosis can be confirmed with ultrasonography, frequently showing a simple cyst in the retroareolar area. In some patients, multiple cysts or bilateral cysts may exist. Cysts of Montgomery may have liquid content with an echogenic or calcific sediment.\n\nSection::::Treatment and prognosis.\n\nThe clinical management of a cyst of Montgomery depends upon the symptoms of the patient.\n", "Typically, the cyst will move upwards on protrusion of the tongue, given its attachment to the embryonic duct, as well as on swallowing, due to attachment of the tract to the foramen caecum.\n\nSection::::Treatment.\n\nAlthough generally benign, the cyst must be removed if the patient exhibits difficulty in breathing or swallowing, or if the cyst is infected. Even if these symptoms are not present, the cyst may be removed to eliminate the chance of infection or development of a carcinoma, or for cosmetic reasons if there is unsightly protrusion from the neck.\n", "The use of this technique is done in the U.S. and is spreading in Europe but recovery is generally extensive. Microfenestration alone has been done with some success in Asia. \n\nA biopolymer plate is also being used experimentally to strengthen a sacrum thinned by cystic erosion. \n\nThe risks of CSF leakage are higher on patients that have bilateral cysts on the same spinal level or clusters of cysts along multiple vertebrae, but immediate recognition of the leakage and repair can mitigate that risk.\n", "A cystostomy is recommended for larger cysts that compromise important adjacent anatomy. The cyst is tamponaded to allow for the cyst contents to escape the bone. Over time, the cyst decreases in size and bone regenerates in the cavity space.\n", "If there is a high probability of a fracture resulting from the unicameral bone cyst, then surgical treatment is necessary. Specific methods can be determined by the physician based upon the patient’s age, medical history, tolerance for certain medical procedures or medicine, health, and extremity of the disease. The treatment can involve or incorporate one or more of the following surgical methods, which are performed by a pediatric orthopedic surgeon:\n\nBULLET::::- Curettage:\n\nBULLET::::- Bone Grafting:\n\nBULLET::::- Steroid injection:\n", "Section::::Treatment options.:Surgical.\n\nSurgery is considered to be the last resort because surgery has the highest chance at complications. The surgical approach depends on where the cyst is located, how big the cyst is and the number of cysts.\n\nSection::::Treatment options.:The stepwise minimally invasive strategy.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-18212
How do our ears determine the location of a noise?
Sounds take time to travel from one place to another. Our ears are always in two places with a fixed distance between them, and the shape of the outer ear also slightly modifies a sound based on the direction it is coming from (the same sound will seem different if it is in front of you, beside you, or behind you). Your brain can measure both the difference in arrival time between the two ears and the difference in pitch, timbre, and amplitude at each ear. It then uses that information to calculate (triangulate) where the sound came from. Your brain has other tricks it uses, too, based on your learning and experience over time, such as learning whether a sound is passing through (or bouncing off) one material or another... Metallic sounds are different from wooden sounds, etc. So your brain can make assumptions about where in the environment a sound might have come from based on cues like that... That's how you know that one bang came from the basement when the other one came from the attic.
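A rough sketch of the time-difference idea in Python. The numbers (ear spacing ~0.2 m, speed of sound ~343 m/s) are typical ballpark values, and the simple arcsin model is an idealization of the geometry, not how the brain literally computes direction.

import math

SPEED_OF_SOUND = 343.0   # m/s in air, approximate
EAR_SPACING    = 0.20    # m, rough distance between human ears (assumed)

def azimuth_from_itd(itd_seconds: float) -> float:
    """Estimate the horizontal angle of a far sound source (degrees)
    from the interaural time difference, using the simplified model
    ITD = d * sin(theta) / c."""
    s = itd_seconds * SPEED_OF_SOUND / EAR_SPACING
    s = max(-1.0, min(1.0, s))  # clamp: ITDs beyond d/c are unphysical
    return math.degrees(math.asin(s))

# A sound arriving 0.3 ms earlier at one ear:
print(round(azimuth_from_itd(0.0003), 1), "degrees to that side")  # ~31.0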
[ "The most prominent figure in the creation of the place theory of hearing is Hermann von Helmholtz, who published his finished theory in 1885. Helmholtz claimed that the cochlea contained individual fibers for analyzing each pitch and delivering that information to the brain. Many followers revised and added to Helmholtz's theory and the consensus soon became that high frequency sounds were encoded near the base of the cochlea and that middle frequency sounds were encoded near the apex. Georg von Békésy developed a novel method of dissecting the inner ear and using stroboscopic illumination to observe the basilar membrane move, adding evidence to support the theory.\n", "Acoustic location\n\nAcoustic location is the use of sound to determine the distance and direction of its source or reflector. Location can be done actively or passively, and can take place in gases (such as the atmosphere), liquids (such as water), and in solids (such as in the earth).\n\nBULLET::::- \"Active\" acoustic location involves the creation of sound in order to produce an echo, which is then analyzed to determine the location of the object in question.\n", "BULLET::::- Level Difference: Very close sound sources cause a different level between the ears.\n\nSection::::Sound localization by the human auditory system.:Signal processing.\n\nSound processing of the human auditory system is performed in so-called critical bands. The hearing range is segmented into 24 critical bands, each with a width of 1 Bark or 100 Mel. For a directional analysis the signals inside the critical band are analyzed together.\n", "The main alternative to the place theory is the temporal theory, also known as timing theory. These theories are closely linked with the volley principle or volley theory, a mechanism by which groups of neurons can encode the timing of a sound waveform. In all cases, neural firing patterns in time determine the perception of pitch. The combination known as the place–volley theory uses both mechanisms in combination, primarily coding low pitches by temporal pattern and high pitches by rate–place patterns. It is now generally believed that there is good evidence for both mechanisms.\n", "Sound does not usually come from a single source: in real situations, sounds from multiple sources and directions are superimposed as they arrive at the ears. Hearing involves the computationally complex task of separating out the sources of interest, often estimating their distance and direction as well as identifying them.\n\nSection::::Types.:Touch.\n", "These cues are also used by other animals, but there may be differences in usage, and there are also localization cues which are absent in the human auditory system, such as the effects of ear movements. Animals with the ability to localize sound have a clear evolutionary advantage.\n\nSection::::How sound reaches the brain.\n", "For sound localization in the median plane (elevation of the sound) also two detectors can be used, which are positioned at different heights. In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement. This explains the innate behavior of cocking the head to one side when trying to localize a sound precisely. 
To get instantaneous localization in more than two dimensions from time-difference or amplitude-difference cues requires more than two detectors.\n\nSection::::Animals.:Localization with coupled ears (flies).\n", "Section::::Perception of sound.:Spatial location.\n\nSpatial location (see: Sound localization) represents the cognitive placement of a sound in an environmental context; including the placement of a sound on both the horizontal and vertical plane, the distance from the sound source and the characteristics of the sonic environment. In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification. This is the main reason why we can pick the sound of an oboe in an orchestra and the words of a single person at a cocktail party.\n\nSection::::Sound pressure level.\n", "As with other sensory stimuli, perceptual disambiguation is also accomplished through integration of multiple sensory inputs, especially visual cues. Having localized a sound within the circumference of a circle at some perceived distance, visual cues serve to fix the location of the sound. Moreover, prior knowledge of the location of the sound generating agent will assist in resolving its current location.\n\nSection::::Sound localization by the human auditory system.\n", "Section::::The cone of confusion.\n\nMost mammals are adept at resolving the location of a sound source using interaural time differences and interaural level differences. However, no such time or level differences exist for sounds originating along the circumference of circular conical slices, where the cone's axis lies along the line between the two ears.\n", "Section::::Animals.:Bi-coordinate sound localization (owls).\n\nMost owls are nocturnal or crepuscular birds of prey. Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne have shown that owls are sensitive to the sounds made by their prey, not the heat or the smell. In fact, the sound cues are both necessary and sufficient for localization of mice from a distant location where they are perched. For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.\n\nSection::::History.\n", "Assessing the variation through changes between the person's ear, we can limit our perspective with the degrees of freedom of the head and its relation with the spatial domain. Through this, we eliminate the tilt and other co-ordinate parameters that add complexity. For the purpose of calibration we are only concerned with the direction level to our ears, ergo a specific degree of freedom. Some of the ways in which we can deduce an expression to calibrate the HRTF are:\n\nBULLET::::1. Localization of sound in Virtual Auditory space\n\nBULLET::::2. HRTF Phase synthesis\n\nBULLET::::3. HRTF Magnitude synthesis\n", "Consequently, sound waves originating at any point along a given circumference slant height will have ambiguous perceptual coordinates. That is to say, the listener will be incapable of determining whether the sound originated from the back, front, top, bottom or anywhere else along the circumference at the base of a cone at any given distance from the ear. 
Of course, the importance of these ambiguities is vanishingly small for sound sources very close to or very far away from the subject, but it is these intermediate distances that are most important in terms of fitness.\n", "Section::::Animals.:In the median plane (front, above, back, below).\n\nFor many mammals there are also pronounced structures in the pinna near the entry of the ear canal. As a consequence, direction-dependent resonances can appear, which could be used as an additional localization cue, similar to the localization in the median plane in the human auditory system.\n\nThere are additional localization cues which are also used by animals.\n\nSection::::Animals.:Head tilting.\n", "Section::::Examples.:In bat auditory cortex.\n", "Section::::Sound localization by the human auditory system.:Duplex Theory.\n\nTo determine the lateral input direction (left, front, right), the auditory system analyzes the following ear signal information:\n\nSection::::Sound localization by the human auditory system.:Duplex Theory.:Duplex Theory.\n", "Sensory systems code for four aspects of a stimulus; type (modality), intensity, location, and duration. Arrival time of a sound pulse and phase differences of continuous sound are used for sound localization. Certain receptors are sensitive to certain types of stimuli (for example, different mechanoreceptors respond best to different kinds of touch stimuli, like sharp or blunt objects). Receptors send impulses in certain patterns to send information about the intensity of a stimulus (for example, how loud a sound is). The location of the receptor that is stimulated gives the brain information about the location of the stimulus (for example, stimulating a mechanoreceptor in a finger will send information to the brain about that finger). The duration of the stimulus (how long it lasts) is conveyed by firing patterns of receptors. These impulses are transmitted to the brain through afferent neurons.\n", "Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in loudness, tone and timing between the two ears to allow us to localize sound sources. Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the zenith or vertical angle, and the distance (for static sounds) or velocity (for moving sounds). Humans, as most four-legged animals, are adept at detecting direction in the horizontal, but less so in the vertical due to the ears being placed symmetrically. Some species of owls have their ears placed asymmetrically, and can detect sound in all three planes, an adaption to hunt small mammals in the dark.\n", "Section::::Auditory Cues.:Cues for sound locating.:Spectral cue.\n\nA spectral cue is a monaural (single ear) cue for locating incoming sounds based on the distribution of the incoming signal. The differences in distribution (or spectrum) of the sound waves are caused by interactions of the sounds with the head and the outer ear before entering the ear canal.\n\nSection::::Auditory Cues.:Principles of auditory cue grouping.\n", "Section::::Sound localization by the human auditory system.:Duplex Theory.:Evaluation for high frequencies.\n", "The auditory system also works in tandem with the neural system so that the listener is capable of spatially locating the direction from which a sound source originated. This is known as the Haas or Precedence effect and is possible due to the nature of having two ears, or auditory receptors. 
The difference in time it takes for a sound to reach both ears provides the necessary information for the brain to calculate the spatial positioning of the source.\n\nSection::::Signal Analysis.\n\nAudio signals can be analyzed in several different ways, depending on the kind of information desired from the signal.\n", "Section::::Function.:Sensory input.:Visuospatial cues.\n", "Section::::Sound modality.\n\nSection::::Sound modality.:Description.\n\nThe stimulus modality for hearing is sound. Sound is created through changes in the pressure of the air. As an object vibrates, it compresses the surrounding molecules of air as it moves towards a given point and expands the molecules as it moves away from the point. Periodicity in sound waves is measured in hertz. Humans, on average, are able to detect sounds as pitched when they contain periodic or quasi-periodic variations that fall between the range of 30 to 20000 hertz.\n\nSection::::Sound modality.:Perception.\n", "Historically, there have been many models of pitch perception. (Terhardt, 1974; Goldstein, 1973; Wightman, 1973). Many consisted of a peripheral spectral-analysis stage and a central periodicity-analysis stage. In his model, Terhardt claims that the spectral-analysis output of complex sounds, specifically low frequency ones, is a learned entity which eventually allows easy identification of the virtual pitch. The volley principle is predominantly seen during the pitch perception of lower frequencies where sounds are often resolved. Goldstein proposed that through phase-locking and temporal frequencies encoded in neuron firing rates, the brain has the itemization of frequencies that can then be used to estimate pitch.\n", "Section::::Sound localization by the human auditory system.:Distance of the sound source.\n\nThe human auditory system has only limited possibilities to determine the distance of a sound source. In the close-up-range there are some indications for distance determination, such as extreme level differences (e.g. when whispering into one ear) or specific pinna (the visible part of the ear) resonances in the close-up range.\n\nThe auditory system uses these clues to estimate the distance to a sound source:\n" ]
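A two-line check of the "cone of confusion" point in the passages above: under the same simplified model as before, a source 30° in front and one 30° behind the interaural axis's perpendicular produce identical time differences, which is why timing alone cannot separate front from back. The spacing and speed-of-sound figures are the same illustrative assumptions as earlier.

import math

def itd(azimuth_deg: float, spacing: float = 0.20, c: float = 343.0) -> float:
    """Interaural time difference (seconds) for a far source,
    using the simplified model ITD = d * sin(theta) / c."""
    return spacing * math.sin(math.radians(azimuth_deg)) / c

# 30 degrees in front vs. 150 degrees (30 degrees behind) give the
# same ITD -- the front/back ambiguity of the cone of confusion:
print(itd(30.0))   # ~0.000292 s
print(itd(150.0))  # identical value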
[]
[]
[ "normal" ]
[ "Ears determine the location of a noise." ]
[ "false presupposition", "normal" ]
[ "The brain determines the location of a noise, by processing various measureable qualities of sound that has entered the ear. " ]
2018-07761
How does a new computer know the date when you turn it on, even if not connected to the internet?
There is a small battery on the motherboard which runs a variety of things, including a real-time clock that keeps ticking while the machine is powered off (the clock is first set at the factory or during setup, and the battery preserves it from then on). If your computer forgets the time when it turns off, this battery is likely dead. They typically last 3 to 5 years.
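On Linux you can look at this battery-backed hardware clock directly; a minimal sketch, assuming a typical system that exposes the RTC under /sys/class/rtc/rtc0 with a since_epoch attribute (the path, the attribute, and the clock being kept in UTC are platform assumptions, not guarantees).

import datetime
import pathlib

# Kernel sysfs entry for the first real-time clock; assumed path.
RTC_SINCE_EPOCH = pathlib.Path("/sys/class/rtc/rtc0/since_epoch")

def read_hardware_clock() -> datetime.datetime:
    """Read the RTC directly, independent of any network time source.
    Assumes the RTC is kept in UTC, which is common but not universal."""
    seconds = int(RTC_SINCE_EPOCH.read_text().strip())
    return datetime.datetime.fromtimestamp(seconds, tz=datetime.timezone.utc)

if RTC_SINCE_EPOCH.exists():
    print("Hardware clock says:", read_hardware_clock())
else:
    print("No RTC exposed at", RTC_SINCE_EPOCH)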
[ "The Model 100 ROM has a Y2K bug; the century displayed on the main menu was hard-coded as \"19XX\". Workarounds exist for this problem. Since the century of the date is not important for any of the software functions, and the real-time clock hardware in the Model 100 does not have a calendar and requires the day of the week to be set independently of the date, the flaw does not at all impair the usability of the computer; it is cosmetic.\n\nSection::::Applications.\n", "Some motorized telescopes sold during the mid 80s to early 90s, including the Celestron Compustar® which used a form of GoTo technology, were not programmed to allow for dates after 2000; making some Celestron products susceptible to the Y2K bug. However, a third party chip to update the computer is available for some products.\n\nSection::::Competition with Meade.\n", "Testing was running around the clock during December. Technicians were testing the CSM's fuel systems during the day and the testing was running on the rocket at night.\n\nThere was even an instance of a variant of the Y2K bug in the computer. As it ran past midnight, when the time changed from 2400 to 0001 the computer could not handle it and \"turned into a pumpkin\" according to an interview with Frank Bryan, a Kennedy Space Center Launch Vehicle Operations Engineering staff member.\n", "The phone can also be used as a modem via Bluetooth or USB . According to the o2 website, the K770 has a talk time battery life of 16 hours and a standby time battery life of 384 hours.\n\nSection::::Date bug.\n\nOn some versions there is a bug in the internal clock, which does not recognise leap days from 2016 onwards. Attempting to set the date to 29 February 2016, for example, results in the time and date becoming corrupted, and it becomes impossible to reset the date, until the phone is shut down and restarted.\n", "None of the current Raspberry Pi models have a built-in real-time clock, so they are unable to keep track of the time of day independently. Instead, a program running on the Pi can retrieve the time from a network time server or from user input at boot time, thus knowing the time while powered on. To provide consistency of time for the file system, the Pi automatically saves the current system time on shutdown, and re-loads that time at boot.\n", "Another major use of embedded systems is in communications devices, including cell phones and Internet appliances (routers, wireless access points, etc.) which rely on storing an accurate time and date and are increasingly based on UNIX-like operating systems. For example, the bug makes some devices running 32-bit Android crash and not restart when the time is changed to that date.\n", "This means that for NTP the rollover will be invisible for most running systems, since they will have the correct time to within a very small tolerance. However, systems that are starting up need to know the date within no more than 68 years. Given the large allowed error, it is not expected that this is too onerous a requirement. One suggested method is to set the clock to no earlier than the system build date or the release date of the current version of the NTP software. Many systems use a battery-powered hardware clock to avoid this problem.\n", "Some computers adapt to hardware changes completely automatically. In most cases, there is a special optional procedure for accessing BIOS parameters, to view and potentially make changes in settings. 
It may be possible to control how the computer uses the memory SPD data—to choose settings, selectively modify memory timings, or possibly to completely over-ride the SPD data (see overclocking).\n\nSection::::Stored information.\n", "Date windowing\n\nDate windowing is the system by which full year numbers are converted to and from two-digit years. The year at which the century changes is called the pivot year of the date window. Date windowing was one of several techniques used to resolve the year 2000 problem in legacy computer systems.\n\nThere are three major methods used to determine the date window:\n\nBULLET::::- Fixed pivot year: There is a fixed pivot year\n\nBULLET::::- Sliding pivot year: The pivot year is determined by subtracting some constant from the current year\n", "The following table lists epoch dates used by popular software and other computer-related systems. The time in these systems is stored as the quantity of a particular time unit (days, seconds, nanoseconds, etc.) that has elapsed since a stated time (usually midnight UTC at the beginning of the given date).\n\nSection::::See also.\n\nBULLET::::- System time\n\nBULLET::::- Unix epoch\n\nSection::::External links.\n\nBULLET::::- Critical and Significant Dates (J. R. Stockton), an extensive list of dates that are problematic for various operating systems and computing devices\n", "The date (Unix) command—internally using the C date and time functions—can be used to convert that internal representation of a point in time\n\nto most of the date representations shown here.\n\nThe current date in the Gregorian calendar is . If this is not really the current date, then to update it.\n\nSection::::Date format.\n", "Beginning with version 9.0, an undocumented feature would allow the system manager to change the display of the system date. RSTS now became the first operating system that would display the system date as a set of numbers representing a stardate as commonly known from the TV series .\n\nSection::::Add-ons by other companies.\n", "Section::::Intrusion detection.\n\nSome computer cases include a biased switch (push-button) which connects to the motherboard. When the case is opened, the switch position changes and the system records this change. The system's firmware or BIOS may be configured to report this event the next time it is powered on.\n\nThis physical intrusion detection system may help computer owners detect tampering with their computer. However, most such systems are quite simple in construction; a knowledgeable intruder can open the case or modify its contents without triggering the switch.\n", "BULLET::::- Given the extended recording times of data loggers, they typically feature a mechanism to record the date and time in a timestamp to ensure that each recorded data value is associated with a date and time of acquisition in order to produce a sequence of events. 
As such, data loggers typically employ built-in real-time clocks whose published drift can be an important consideration when choosing between data loggers.\n", "Some minor problems have been reported due to improper handling of the era transition.\n\nBULLET::::- ATMs placed inside the Lawson chain of \"konbini\" reported that due to a banking holiday funds deposited would not be available until May 7, 1989, due to a date conversion improperly using Heisei 1 (1989) instead of Reiwa 1 (2019).\n\nSection::::Planned fixes.\n\nBULLET::::- Windows 10 Spring Release includes a registry entry with placeholder information for the expected era transition, intended to help users discover any software limitations around the expected change to the new era.\n", "Most first-generation personal computers did not keep track of dates and times. These included systems that ran the CP/M operating system, as well as early models of the Apple II, the BBC Micro, and the Commodore PET, among others. Add-on peripheral boards that included real-time clock chips with on-board battery back-up were available for the IBM PC and XT, but the IBM AT was the first widely available PC that came equipped with date/time hardware built into the motherboard. Prior to the widespread availability of computer networks, most personal computer systems that did track system time did so only with respect to local time and did not make allowances for different time zones.\n", "Beginning with the IBM AT, PCs supported a hardware clock settable through BIOS. It had a century bit which allowed for manually changing the century when the year 2000 happened. Most BIOS revisions created in 1995 and nearly all BIOS revisions in 1997 supported the year 2000 by setting the century bit automatically when the clock rolled past midnight, December 31, 1999.\n", "Many Apple iOS devices with the Clock application running crashed at 00:00:00 on 1 January 2014.\n\nSection::::Year 2015.\n\nSeveral older Samsung mobile phones with Agere chipsets (such as Samsung SGH-C170) would refuse to change dates beyond 31 December 2014; the date would automatically change to 2015, but would revert to the base date in the event of a power cycle (loss of battery power). The workaround is to use the year 1987 in lieu of 2015 as compatible with the leap year cycle to display the correct day of the week, date and month on the main screen.\n\nSection::::Year 2019.\n", "Older operating systems that do not support a hardware HPET device can only use older timing facilities, such as the programmable interval timer (PIT) or the real-time clock (RTC). Windows XP, when fitted with the latest hardware abstraction layer (HAL), can also use the processor's Time Stamp Counter (TSC) or Power Management Timer (PMTIMER), together with the RTC to provide operating system features that would, in later Windows versions, be provided by the HPET hardware. Confusingly, such Windows XP systems quote \"HPET\" connectivity in the device driver manager even though the Intel HPET device is not being used.\n\nSection::::Features.\n", "Sony confirmed that there was an error and stated that it was narrowing down the issue and were continuing to work to restore service. By March 2 (UTC), 2010, owners of original PS3 models could connect to PSN successfully and the clock no longer showed December 31, 1999. Sony stated that the affected models incorrectly identified 2010 as a leap year, because of a bug in the BCD method of storing the date. 
However, for some users, the hardware's operating system clock (mainly updated from the internet and not associated with the internal clock) needed to be updated manually or by re-syncing it via the internet.\n", "The Domain/OS clock, which is based on the number of 4-microsecond units that has occurred since 1 January 1980, rolled past 47 bits on 2 November 1997, rendering unpatched systems unusable.\n\nSection::::Year 1999.\n\nIn the last few months before the year 2000, two other date-related milestones occurred that received less publicity than the then-impending Y2K problem.\n\nSection::::Year 1999.:First GPS rollover.\n", "2042\n\nSection::::Predicted and scheduled events.\n\nBULLET::::- April 30 – A Nickelodeon time capsule, sealed in April 1992, will be opened.\n\nBULLET::::- September 17 – A common computing representation of date and time on IBM mainframe systems will overflow with potential results similar to the year 2000 problem.\n\nSection::::Predicted and scheduled events.:Date unknown.\n\nBULLET::::- The Trident D5 submarine-launched nuclear missile will be phased out.\n", "A GPS receiver can shorten its startup time by comparing the current time, according to its RTC, with the time at which it last had a valid signal. If it has been less than a few hours, then the previous ephemeris is still usable.\n\nSection::::Power source.\n", "Finally, some software must maintain compatibility with older software that does not keep time in strict accordance with traditional timekeeping systems. For example, Microsoft Excel observes the fictional date of February 29, 1900 in order to maintain compatibility with older versions of Lotus 1-2-3. Lotus 1-2-3 observed the date due to an error; by the time the error was discovered, it was too late to fix it—\"a change now would disrupt formulas which were written to accommodate this anomaly\".\n\nSection::::Notable epoch dates in computing.\n", "BULLET::::- In 2012, Gmail's chat history showed a date of 12/31/69 for all chats saved on February 29, 2012.\n\nBULLET::::- Sony's PlayStation 3 incorrectly treated 2010 as a leap year, so the non-existent February 29, 2010 was shown on March 1, 2010, and caused a program error.\n" ]
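One of the passages above describes date windowing, i.e. expanding two-digit years around a pivot year. A minimal sketch of the fixed-pivot variant; the pivot value 1970 is an arbitrary choice for illustration.

def expand_two_digit_year(yy: int, pivot: int = 1970) -> int:
    """Fixed-pivot date windowing: two-digit years at or after the
    pivot's last two digits map into the 1900s, the rest into the 2000s."""
    assert 0 <= yy <= 99
    century = 1900 if yy >= pivot % 100 else 2000
    return century + yy

print(expand_two_digit_year(85))  # 1985
print(expand_two_digit_year(3))   # 2003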
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04288
How is the ratio between men and women in the world almost exactly 50% to 50%? What makes it not 58% to 42%? What causes this almost perfect mathematical relationship in something that is supposed to be completely random?
Sample size matters. Flip a coin 10,000 times and you will get very, very close to a 50/50 split, but flip it only twice and there is a 50% chance you won't get one of each.
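A quick simulation of that point, as a sketch: small samples wander far from 50/50, while large samples hug it.

import random

def heads_fraction(n_flips: int) -> float:
    """Fraction of heads in n fair coin flips."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

random.seed(42)  # fixed seed so the illustration is reproducible
for n in (2, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips -> {heads_fraction(n):.4f} heads")
# Typical output: tiny samples swing wildly; by a million flips the
# fraction sits within a small fraction of a percent of 0.5, which is
# why a population of billions of births ends up so close to 50/50.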
[ "Suppose that in a sample of 100 men, 90 drank wine in the previous week, while in a sample of 100 women only 20 drank wine in the same period. The odds of a man drinking wine are 90 to 10, or 9:1, while the odds of a woman drinking wine are only 20 to 80, or 1:4 = 0.25:1. The odds ratio is thus 9/0.25, or 36, showing that men are much more likely to drink wine than women. The detailed calculation is:\n", "The five measures used to evaluate the accuracy of different forecasts were: symmetric mean absolute percentage error (also known as symmetric MAPE), average ranking, median symmetric absolute percentage error (also known as median symmetric APE), percentage better, and median RAE.\n", "Suppose a presidential election is taking place in a democracy. A random sample of 400 eligible voters in the democracy's voter population shows that 272 voters support candidate B. A political scientist wants to determine what percentage of the voter population support candidate B.\n\nTo answer the political scientist's question, a one-sample proportion in the Z-interval with a confidence level of 95% can be constructed in order to determine the population proportion of eligible voters in this democracy that support candidate B.\n\nSection::::Estimation.:Example.:Solution.\n\nIt is known from the random sample that formula_40 with sample size, formula_41\n", "For example, to test the hypothesis that a random sample of 100 people has been drawn from a population in which men and women are equal in frequency, the observed number of men and women would be compared to the theoretical frequencies of 50 men and 50 women. If there were 44 men in the sample and 56 women, then\n", "To understand why this is, imagine Marilyn vos Savant's poll of readers had asked which day of the week boys in the family were born. If Marilyn then divided the whole data set into seven groups - one for each day of the week a son was born - six out of seven families with two boys would be counted in two groups (the group for the day of the week of birth boy 1, and the group of the day of the week of birth for boy 2), doubling, in every group, the probability of a boy-boy combination.\n", "Say we are told that a woman has two children. If we ask whether either of them is a girl, and are told yes, what is the probability that the other child is also a girl? Considering this new child independently, one might expect the probability that the other child is female is ½ (50%). But by building a probability space (illustrating all possible outcomes), we see that the probability is actually only ⅓ (33%). This is because the possibility space illustrates 4 ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But we were given more information. Once we are told that one of the children is a female, we use this new information to eliminate the boy-boy scenario. Thus the probability space reveals that there are still 3 ways to have two children where one is a female: boy-girl, girl-boy, girl-girl. Only ⅓ of these scenarios would have the other child also be a girl. Using a probability space, we are less likely to miss one of the possible scenarios, or to neglect the importance of new information. For further information, see Boy or girl paradox.\n", "BULLET::::- Lauren (Hilary Shepard) is female, almost 40. She is extremely intellectually developed. 
She is prone to inappropriate sexual behavior, which often manifests itself as her being sure almost all men are madly in love with her.\n", "Utilising this method, experts make their estimates individually without actually meeting or discussing the task. The estimates are then aggregated by taking the geometric mean of the individual experts' estimates for each task. The major drawback to this method is that there is no shared expertise through the group; however, a positive of this is that due to the individuality of the process, any conflict such as dominating personalities or conflicting personalities is avoided and the results are therefore free of any bias.\n\nSection::::Methodologies.:Delphi method.\n", "BULLET::::- Suppose the marginal distribution of one variable, say \"X\", is very skewed. For example, if we are studying the relationship between high alcohol consumption and pancreatic cancer in the general population, the incidence of pancreatic cancer would be very low, so it would require a very large population sample to get a modest number of pancreatic cancer cases. However we could use data from hospitals to contact most or all of their pancreatic cancer patients, and then randomly sample an equal number of subjects without pancreatic cancer (this is called a \"case-control study\").\n", "The participants are the individuals who sign up for the speed dating event and interact with each of the 9 individuals of the opposite sex. There are 10 male and 10 female participants. After each date, they rate on a scale of 0 to 100 how much they would like to have a date with that person, with a zero indicating \"not at all\" and 100 indicating \"very much\".\n", "A newlywed couple plans to have children, and will continue until the first girl. What is the probability that there are zero boys before the first girl, one boy before the first girl, two boys before the first girl, and so on?\n", "BULLET::::3. The second most accurate method was a combination of seven statistical methods and one ML one, with the weights for the averaging being calculated by a ML algorithm, trained to minimize forecasting error through holdout tests. This method was jointly submitted by Spain’s University of A Coruña and Australia’s Monash University.\n\nBULLET::::4. The first and the second most accurate methods also achieved an amazing success in specifying correctly the 95% PIs. These are the first methods we know that have done so and do not underestimate uncertainty considerably.\n", "A selection problem requires choosing a sample of \"k\" elements out of a set of \"n\" elements. It is needed to know if the order in which the objects are selected matters and whether an object can be selected more than once or not. This table shows the operations that the model provides to get the number of different samples for each of the selections:\n\nSection::::Implicit combinatorial models.:Selection.:Examples.\n\n\"1.- At a party there are 50 people. Everybody shakes everybody’s hand once. How often are hands shaken in total?\"\n", "In cases where the sampling fraction exceeds 5%, analysts can adjust the margin of error using a \"finite population correction\" (FPC), to account for the added precision gained by sampling close to a larger percentage of the population. FPC can be calculated using the formula:\n\nFPC = √((N − n)/(N − 1))\n\nwhere N is the population size and n is the sample size.\n", "which, after taking the logarithm of both sides, leads to\n", "A closely related concept is the Lexis variation. Let \"k\" samples each of size \"n\" be drawn at random. 
Let the probability of success (\"p\") be constant and let the actual probabilities of success in the \"k\" samples be \"p\"_1, \"p\"_2, ..., \"p\"_k. \n\nThe average probability of success (\"p\") is the mean of the \"p\"_i.\n\nThe variance in the number of successes is\n\nwhere var(\"p\") is the variance of the \"p\"_i. \n", "A doctor is seeking an anti-depressant for a newly diagnosed patient. Suppose that, of the available anti-depressant drugs, the probability that any particular drug will be effective for a particular patient is p=0.6. What is the probability that the first drug found to be effective for this patient is the first drug tried, the second drug tried, and so on? What is the expected number of drugs that will be tried to find one that is effective?\n", "The question we ask about these data is: knowing that 10 of these 24 teenagers are studiers, and that 12 of the 24 are female, and assuming the null hypothesis that men and women are equally likely to study, what is the probability that these 10 studiers would be so unevenly distributed between the women and the men? If we were to choose 10 of the teenagers at random, what is the probability that 9 or more of them would be among the 12 women, and only 1 or fewer from among the 12 men?\n", "For example, to test the hypothesis that a random sample of 100 people has been drawn from a population in which men and women are equal in frequency, the observed number of men and women would be compared to the theoretical frequencies of 50 men and 50 women. If there were 44 men in the sample and 56 women, then the test statistic would be (44 - 50)^2/50 + (56 - 50)^2/50 = 1.44.\n", "We are asked to compute the ratio of female computer science majors to all computer science majors. We know that 60% of all students are female, and among these 5% are computer science majors, so we conclude that 60% × 5% = 3% of all students are female computer science majors. Dividing this by the 10% of all students that are computer science majors, we arrive at the answer: 3%/10% = 30% of all computer science majors are female.\n\nThis example is closely related to the concept of conditional probability.\n\nSection::::Percentage increase and decrease.\n", "The following data gives the number of male children among the first 12 children of family size 13 in 6115 families taken from hospital records in 19th century Saxony (Sokal and Rohlf, p. 59 from Lindsey). The 13th child is ignored to assuage the effect of families non-randomly stopping when a desired gender is reached.\n\nWe note the first two sample moments are\n\nand therefore the method of moments estimates are\n\nThe maximum likelihood estimates can be found numerically\n\nand the maximized log-likelihood is\n\nfrom which we find the AIC\n", "Statistician William S. Gosset in 1914 developed methods of eliminating spurious correlation due to how position in time or space affects similarities. Today's election polls have a similar problem: the closer the poll is to the election, the less independently individuals make up their minds, and the greater the unreliability of the polling results, especially the margin of error or confidence limits. The effective \"n\" of independent cases from their sample drops as the election nears. Statistical significance falls with lower effective sample size.\n", "This example also shows how odds ratios are sometimes sensitive in stating relative positions: in this sample men are (90/100)/(20/100) = 4.5 times as likely to have drunk wine as women, but have 36 times the odds. 
The logarithm of the odds ratio, the difference of the logits of the probabilities, tempers this effect, and also makes the measure symmetric with respect to the ordering of groups. For example, using natural logarithms, an odds ratio of 36/1 maps to 3.584, and an odds ratio of 1/36 maps to −3.584.\n\nSection::::Statistical inference.\n", "Section::::Stratified sampling strategies.\n\nBULLET::::1. \"Proportionate allocation\" uses a sampling fraction in each of the strata that is proportional to that of the total population. For instance, if the population consists of \"X\" total individuals, \"m\" of which are male and \"f\" female (and where \"m\" + \"f\" = \"X\"), then the relative size of the two samples (\"x1\" = \"m/X\" males, \"x2\" = \"f/X\" females) should reflect this proportion.\n", "BULLET::::- Reader A said \"Yes\" to 25 applicants and \"No\" to 25 applicants. Thus reader A said \"Yes\" 50% of the time.\n\nBULLET::::- Reader B said \"Yes\" to 30 applicants and \"No\" to 20 applicants. Thus reader B said \"Yes\" 60% of the time.\n\nSo the expected probability that both would say yes at random is 0.50 × 0.60 = 0.30.\n\nSimilarly, the expected probability that both would say no at random is 0.50 × 0.40 = 0.20.\n\nOverall random agreement probability is the probability that they agreed on either Yes or No, i.e. 0.30 + 0.20 = 0.50.\n\nSo now applying our formula for Cohen's Kappa we get:\n\nSection::::Same percentages but different numbers.\n" ]
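The odds-ratio passage above does all of its arithmetic by hand; here is a minimal Python sketch that reproduces it with the standard library only. Every number comes straight from the wine example in the passages (90 of 100 men, 20 of 100 women); nothing else is assumed.

    import math

    # Wine example: 90 of 100 men and 20 of 100 women drank wine last week.
    odds_men = 90 / 10            # 9.0, i.e. 9:1
    odds_women = 20 / 80          # 0.25, i.e. 1:4
    odds_ratio = odds_men / odds_women
    print(odds_ratio)             # 36.0

    # Risk ratio, for contrast: men are only 4.5 times as likely to drink wine.
    print((90 / 100) / (20 / 100))  # 4.5

    # The log odds ratio is symmetric in the ordering of the groups.
    print(round(math.log(odds_ratio), 3))      # 3.584
    print(round(math.log(1 / odds_ratio), 3))  # -3.584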
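The election passage sets up a one-sample proportion Z-interval but the passage's own solution is cut off. The sketch below checks the interval under one assumption beyond the passage: the conventional critical value z = 1.96 for 95% confidence.

    import math

    successes, n = 272, 400       # voters supporting candidate B (from the passage)
    p_hat = successes / n         # sample proportion, 0.68
    z = 1.96                      # conventional 95% critical value (assumption)

    se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the proportion
    lower, upper = p_hat - z * se, p_hat + z * se
    print(f"({lower:.3f}, {upper:.3f})")      # roughly (0.634, 0.726)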
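The 44-men/56-women passage is the classic Pearson chi-squared setup; a short sketch of the statistic it leads to, using the expected counts of 50 and 50 given in the passage:

    observed = {"men": 44, "women": 56}
    expected = {"men": 50.0, "women": 50.0}

    # Pearson's chi-squared statistic: sum of (O - E)^2 / E over the cells.
    chi_sq = sum((observed[g] - expected[g]) ** 2 / expected[g] for g in observed)
    print(chi_sq)   # 0.72 + 0.72 = 1.44

With one degree of freedom, 1.44 falls below the usual 5% critical value of 3.84, so this sample alone would not reject the 50/50 hypothesis.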
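Both the newlywed example and the anti-depressant example are geometric-distribution questions. A small sketch for the drug case, using p = 0.6 from the passage: P(first success on trial k) = (1 - p)^(k - 1) * p, and the expected number of trials is 1/p.

    p = 0.6   # probability that any particular drug works (from the passage)

    # Probability the first effective drug is the k-th one tried.
    for k in range(1, 5):
        print(k, round((1 - p) ** (k - 1) * p, 4))
    # 1 0.6
    # 2 0.24
    # 3 0.096
    # 4 0.0384

    print(1 / p)   # expected number of drugs tried: ~1.67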
[ "Gender ratios should be a totally random ratio.", "The ratio between men and women should be less equal than what it currently is." ]
[ "There are only two possible genders at birth and both are equally likely so 50/50 will be the normal outcome for a large sample size.", "Sample size creates a big difference when considering the ratio of men and women on Earth. " ]
[ "false presupposition" ]
[ "Gender ratios should be a totally random ratio.", "The ratio between men and women should be less equal than what it currently is." ]
[ "false presupposition", "false presupposition" ]
[ "There are only two possible genders at birth and both are equally likely so 50/50 will be the normal outcome for a large sample size.", "Sample size creates a big difference when considering the ratio of men and women on Earth. " ]
2018-01697
When coming to a stop in a vehicle while listening to FM radio, why is it that if the reception isn’t clear, pulling forward about a foot fixes it?
FM radio suffers from *multipath interference* in which the signal bounces off of buildings and other objects, reflecting back and interfering with itself. These spots of interference can be quite localized, so a small motion can move you in and out of them.
[ "Actually, the term path loss is something of a misnomer because no energy is lost on a free-space path. Rather, it is merely not received by the receiving antenna. The apparent reduction in transmission, as frequency is increased, is an artifact of the change in the aperture of a given type of antenna.\n", "If unobstructed and in a perfect environment, radio waves will travel in a relatively straight line from the transmitter to the receiver. But if there are reflective surfaces that interact with a stray transmitted wave, such as bodies of water, smooth terrain, roof tops, sides of buildings, etc., the radio waves deflecting off those surfaces may arrive either out-of-phase or in-phase with the signals that travel directly to the receiver. Sometimes this results in the counter-intuitive finding that reducing the height of an antenna increases the signal-to-noise ratio at the receiver. \n", "Section::::Electric current.\n\nElectric currents that oscillate at radio frequencies (\"RF currents\") have special properties not shared by direct current or alternating current of lower frequencies.\n\nBULLET::::- Energy from RF currents in conductors can radiate into space as electromagnetic waves (radio waves). This is the basis of radio technology.\n\nBULLET::::- RF current does not penetrate deeply into electrical conductors but tends to flow along their surfaces; this is known as the skin effect.\n", "There are several common types of FM demodulators:\n\nBULLET::::- The quadrature detector, which phase shifts the signal by 90 degrees and multiplies it with the unshifted version. One of the terms that drops out from this operation is the original information signal, which is selected and amplified.\n\nBULLET::::- The signal is fed into a PLL and the error signal is used as the demodulated signal.\n", "VHF/UHF television and radio signals are normally limited to a maximum \"deep fringe\" reception service area of approximately in areas where the broadcast spectrum is congested, and about 50 percent farther in the absence of interference. However, providing favourable atmospheric conditions are present, television and radio signals sometimes can be received hundreds or even thousands of miles outside their intended coverage area. These signals are often received using a large outdoor antenna system connected to a sensitive TV or FM receiver, although this may not always be the case. Many times smaller antennas and receivers such as those in vehicles will receive stations farther than normal depending on how favourable conditions are.\n", "Multipath effects are much less severe in moving vehicles. When the GPS antenna is moving, the false solutions using reflected signals quickly fail to converge and only the direct signals result in stable solutions.\n\nSection::::Ephemeris and clock errors.\n", "Diffraction depends on the relationship between the wavelength and the size of the obstacle. In other words, the size of the obstacle in wavelengths. Lower frequencies diffract around large smooth obstacles such as hills more easily. For example, in many cases where VHF (or higher frequency) communication is not possible due to shadowing by a hill, it is still possible to communicate using the upper part of the HF band where the surface wave is of little use.\n", "In domestic dogs (\"Canis familiaris\"), there is a correlation between motor laterality and noise sensitivity - a lack of paw preference is associated with noise-related fearfulness. 
(Branson and Rogers, 2006) Fearfulness is an undesirable trait in guide dogs, therefore, testing for laterality can be a useful predictor of a successful guide dog. Knowing a guide dog's laterality can also be useful for training because the dog may be better at walking to the left or the right of their blind owner.\n", "Section::::Characteristics and measurements.:Focus.\n\nAHDs are lastly characterized by directionality. To ensure messages are broadcast to the target, AHDs shape sound into a 30–60° audio beam. This shaping is accomplished through the design of the transducers as well as various reflective horns. \n", "TV modulators generally feature analog passthrough, meaning that they take input both from the device and from the usual antenna input, and the antenna input \"passes through\" to the TV, with minor insertion loss due to the added device. In some cases the antenna input is always passed through, while in other cases the antenna input is turned off when the device is outputting a signal, and only the device signal is sent onward, to reduce interference.\n", "Acceptance Range Threshold: This threshold is set by the maximum radio range of the observer vehicle. The vehicle will be able to receive messages successfully only from the vehicles within this radio range. Therefore, if it receives a message directly from a vehicle that is claiming to be further away than this threshold, that vehicle has to be lying about its position.\n", "GPS signals can also be affected by multipath issues, where the radio signals reflect off surrounding terrain; buildings, canyon walls, hard ground, etc. These delayed signals cause measurement errors that are different for each type of GPS signal due to its dependency on the wavelength.\n", "Section::::Surface wave.\n\nThe radio signal spreads out from the transmitter along the surface of the Earth. Instead of just travelling in a straight line the radio signals tend to follow the curvature of the Earth. This is because currents are induced in the surface of the earth and this action slows down the wave-front in this region, causing the wave-front of the radio communications signal to tilt downwards towards the Earth. With the wave-front tilted in this direction it is able to curve around the Earth and be received well beyond the horizon.\n\nSection::::Effect of frequency on ground wave propagation.\n", "In an ideal communication scenario, there is a line-of-sight path between the transmitter and receiver that represents clear spatial channel characteristics. In urban cellular systems, this is seldom the case as base stations are located on rooftops while many users are located either indoors or at streets far from base stations. Thus, there is a non-line-of-sight multipath propagation channel between base stations and users, describing how the signal is reflected at different obstacles on its way from the transmitter to the receiver. However, the received signal may still have a strong spatial signature in the sense that stronger average signal gains are received from certain spatial directions.\n", "On the other hand, vertically polarized radiation is not well reflected by the ground except at grazing incidence or over very highly conducting surfaces such as sea water. 
However the grazing angle reflection important for ground wave propagation, using vertical polarization, is \"in phase\" with the direct wave, providing a boost of up to 6 db, as is detailed below.\n", "Ground reflections is one of the common types of multipath.\n", "Through the early to mid 1980s, a number of agencies developed a solution to the SA \"problem\". Since the SA signal was changed slowly, the effect of its offset on positioning was relatively fixed – that is, if the offset was \"100 meters to the east\", that offset would be true over a relatively wide area. This suggested that broadcasting this offset to local GPS receivers could eliminate the effects of SA, resulting in measurements closer to GPS's theoretical performance, around 15 meters. Additionally, another major source of errors in a GPS fix is due to transmission delays in the ionosphere, which could also be measured and corrected for in the broadcast. This offered an improvement to about 5 meters accuracy, more than enough for most civilian needs.\n", "Also, because of the high frequency, a high data transfer rate may be available. However, in practical last mile environments, obstructions and de-steering of these beams, and absorption by elements of the atmosphere including fog and rain, particularly over longer paths, can greatly restrict their use for last-mile wireless communications. Longer (redder) waves suffer less obstruction but may carry lower data rates. See RONJA.\n\nSection::::Existing last mile delivery systems.:Wireless delivery systems.:Radio waves.\n", "The LF antenna is also often used by the TPM for configuration and to force transmission so that localisation can be re-learned by the vehicle if a sensor is changed or the wheels rotated to even up tread wear.\n\nA third method uses the UHF signal strength which is proportional to the distance of the TPM from the receiver. If the receiver is located towards the front of the vehicle, the signal from the front wheel TPM's will be stronger than that from the wheels at the rear.\n", "BULLET::::2. If the received signal is strong enough, it may cause the TMA to create its own interference which is passed on to the receiver.\n\nBULLET::::3. In some mobile networks (e.g. IS-95 or WCDMA - aka European 3G -), it is not simple to detect and correct unbalanced links since the link balance is not constant; link balance changes with traffic load. However, other mobile networks (e.g. GSM) have a constant link, therefore it is possible analyse call records and establish where TMAs are needed.\n", "U.S. Occupational Safety and Health Administration guidelines for non-ionizing radio energy generally say the radio antenna must be two feet from any vehicle occupants. (Read the OSHA guidelines before attempting to install an antenna.) This rule of thumb is intended to result in passengers being exposed to safe levels of radio frequency energy in the event the radio transmits.\n\nSection::::Multiple radio sets.\n", "In order to reorient this magnetic steering mechanism, a certain amount of time is required due to the inductance of the magnets; the greater the change, the greater the time it takes for the electron beam to settle in the new spot.\n", "BULLET::::- Warren Burton as Radio Preacher (voice)\n\nSection::::Reception.\n", "Although radio waves generally travel in a straight line, fog and even humidity can cause some of the signal in certain frequencies to scatter or bend before reaching the receiver. 
This means that objects that are clear of the line of sight path will still potentially block parts of the signal. To maximize signal strength, one needs to minimize the effect of obstruction loss by removing obstacles from both the direct radio frequency line of sight (RF LoS) line and also the area around it within the primary Fresnel zone. The strongest signals are on the direct line between transmitter and receiver and always lie in the first Fresnel zone.\n", "In this reactive region, not only is an electromagnetic wave being radiated outward into far space but there is a \"reactive\" component to the electromagnetic field, meaning that the nature of the field around the antenna is sensitive to EM absorption in this region, and reacts to it. In contrast, this is not true for absorption far from the antenna, which has no effect on the transmitter or antenna near field.\n" ]
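The multipath passages above can be made concrete with a two-ray toy model: a direct wave plus an equal-strength reflected wave that travels a little further. This is an illustrative sketch only; the 100 MHz carrier (a typical FM frequency, wavelength about 3 m) and the path-length differences are assumptions, not taken from any source.

    import cmath
    import math

    f = 100e6            # assumed FM carrier frequency, Hz
    c = 3e8              # speed of light, m/s
    wavelength = c / f   # about 3 m

    def received_amplitude(extra_path_m):
        """Magnitude of direct + reflected unit-amplitude rays when the
        reflected ray travels extra_path_m further than the direct one."""
        phase = 2 * math.pi * extra_path_m / wavelength
        return abs(1 + cmath.exp(1j * phase))

    # A half-wavelength difference (1.5 m) gives a deep null; changing the
    # difference by a fraction of a metre climbs back out of the null.
    for extra in (1.5, 1.6, 1.8, 2.1, 2.4):   # metres, assumed geometry
        print(extra, round(received_amplitude(extra), 2))

That is the effect in the question above: at a roughly 3 m wavelength, rolling the car forward a foot or so can change the direct/reflected path difference by a good fraction of a wavelength, enough to move the antenna out of an interference null.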
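The Fresnel-zone passage also lends itself to a one-liner. A standard approximation for the radius of the first Fresnel zone at a point d1 from the transmitter and d2 from the receiver is r = sqrt(wavelength * d1 * d2 / (d1 + d2)); the link geometry below is hypothetical.

    import math

    def first_fresnel_radius(wavelength_m, d1_m, d2_m):
        """First Fresnel zone radius (standard approximation)."""
        return math.sqrt(wavelength_m * d1_m * d2_m / (d1_m + d2_m))

    # Midpoint of an assumed 10 km link at a 3 m wavelength (~100 MHz):
    print(round(first_fresnel_radius(3.0, 5000, 5000), 1))   # about 86.6 m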
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01826
In ancient and medieval times, how did soldiers distinguish friend from foe in battle?
Sometimes they didn't. For the most part, though, it involved big flags and designs on shields. Some armies were equipped totally differently from each other, but where both sides were similarly equipped there were special people called heralds who could tell whose design a flag or shield carried and which side its owner was supposed to be on.
[ "IFF is a tool within the broader military action of Combat Identification (CID), \"the process of attaining an accurate characterization of detected objects in the operational environment sufficient to support an engagement decision.\" The broadest characterization is that of friend, enemy, neutral, or unknown. CID not only can reduce friendly fire incidents, but also contributes to overall tactical decision-making.\n\nSection::::History.\n", "Some researchers have staged three-way contests between male convict cichlids (\"Cichlasoma nigrofasciatum\") to examine the dear enemy effect. When faced with a familiar neighbour and an unfamiliar intruder simultaneously, residents preferentially confronted the unfamiliar opponent. That is, the establishment of dear enemy recognition between a resident and a neighbour allowed the resident to direct his aggression to the greater competitive threat, i.e. the intruder.\n", "On Christmas Eve and Christmas Day (24 and 25 December) 1914, Alfred Anderson's unit of the 1st/5th Battalion of the Black Watch was billeted in a farmhouse away from the front line. In a later interview (2003), Anderson, the last known surviving Scottish veteran of the war, vividly recalled Christmas Day and said:\n\nNor were the observations confined to the British. German Lieutenant Johannes Niemann wrote: \"grabbed my binoculars and looking cautiously over the parapet saw the incredible sight of our soldiers exchanging cigarettes, schnapps and chocolate with the enemy.\"\n", "\"Kormakssaga\" states that the holmgang was fought on an ox hide or cloak with sides that were three meters long. It was staked on the ground with stakes used just for that purpose and placed in a specific manner now unknown. After that the area was marked by drawing three borders around the square hide, each about one foot from the previous one. Corners of the outermost border were marked with hazel staves. Combatants had to fight inside these borders. Stepping out of borders meant forfeiture, running away meant cowardice.\n", "BULLET::::- Bilibin (only appearance) – Russian diplomat to Austria, \"about thirty-five, a bachelor, of the same society as Prince Andrei ... His thin, drawn, yellowish face was all covered with deep wrinkles ... The movements of these wrinkles constituted the main play of his physiognomy.\"\n\nXII\n", "So this system was constructed so that for all positions, we could monitor the enemy wherever he is and he was so identified. Even if there were an enemy to the aplomb of a wall, rather than risking to look to achieve, you could touch from another post. In fact, all the shots and angles of view were studied to better defend the defensive system.\n", "FMRI results show activation of the fusiform cortex, posterior cingulate gyrus, and amygdala when individuals are asked to identify previously seen faces that were encoded as either “friends” or “foes.” Additionally, the caudate and anterior cingulate cortex are more activated when looking at faces of “foes” versus “friends.\" This research suggests that quick first impressions of hostility or support from unknown people can lead to long-term effects on memory that will later be associated with that person.\n\nSection::::Neuroscience.:Alcohol and impressions.\n", "The Tang army also made use of scouts on campaign. A pair of scouts were sent out for each of the four directions at different distances. 
Two at five \"li\", another two at ten \"li\", and so on until they reached 30 \"li\".\n\nSection::::Organization.:Military examination.\n", "Formal arrangements by armies to meet one another on a certain day and date were a feature of Western Medieval warfare, often related to the conventions of siege warfare. This arrangement was known as a \"journée\". Conventionally, the battlefield had to be considered a fair one, not greatly advantaging one side or the other. Arrangements could be very specific about where the battle should take place. For example, at the siege of Grancey in 1434, it was agreed that the armies would meet at \"the place above Guiot Rigoigne's house on the right side towards Sentenorges, where there are two trees\".\n", "During the breeding season of the skylark (\"Alauda arvensis\"), particular common sequences of syllables (phrases) are produced by all males established in the same location (neighbours), whereas males of different locations (strangers) share only few syllables. Playback experiments provided evidence for neighbour–stranger discrimination consistent with the dear enemy effect, indicating that shared sequences were recognized and identified as markers of the group identity. Studies have shown that the dear enemy effect changes during the breeding season of the skylark. Playbacks of neighbour and stranger songs at three periods of the breeding season show that neighbours are dear enemies in the middle of the season, when territories are stable, but not at the beginning of the breeding season, during settlement and pair formation, nor at the end, when bird density increases due to the presence of young birds becoming independent. In song sparrows, where neighbours are most often the sires of extra-pair offspring, males will alter their aggression toward neighbouring males with their female's fertility status. When presented with simulated stranger and neighbour intruders during their female's pre-fertile and post-fertile periods, males displayed the dear enemy effect. However, when presented with simulated stranger and neighbour intruders during their female's fertile period, males exhibited an equal response to both stimuli, likely in order to protect their paternity. Thus, the dear enemy relationship is not a fixed pattern but a flexible one likely to evolve with social and ecological circumstances.\n", "At the field of honor, each side would bring a doctor and seconds. The seconds would try to reconcile the parties by acting as go-betweens to attempt to settle the dispute with an apology or restitution. If reconciliation succeeded, all parties considered the dispute to be honorably settled, and went home.\n\nEach side would have at least one second; three was the traditional number.\n\nIf one party failed to appear, he was accounted a coward. The appearing party would win by default. The seconds and sometimes the doctor would bear witness of the cowardice.\n", "Section::::History.:Postwar systems.:IFF Mark XII.\n\nThe current IFF system is the Mark XII. 
This works on the same frequencies as Mark X, and supports all of its military and civilian modes.\n", "Section::::History.:Britain.:IFF Mark II.\n", "Even though there is no physical connection between the commander and his troops, other than conduits for discursive information such as radio signals, it is \"as if\" the commander had their own sensitive presence in each spot.\n", "BULLET::::- 1471 – During the Battle of Barnet a Lancastrian force under the Earl of Oxford was fired on by the Lancastrian centre while returning from a pursuit; their banner, Oxford's “star with rays” had been mistaken for the Yorkist “sun in splendour”. This gave rise to cries of treachery, (always a possibility in that chaotic period), Lancastrian morale collapsed, and the battle was lost.\n\nSection::::English Civil War.\n", "Even though today's MASINT is often on the edge of technologies, many of them under high security classification, the techniques have a long history. Captains of warships, in the age of sail, used his eyes, and his ears, and sense of touch (a wetted finger raised to the breeze) to measure the characteristics of wind and wave. He used a mental library of signatures to decide what tactical course to follow based on weather. Medieval fortification engineers would put their ear to the ground to obtain acoustic measurements of possible digging to undermine their walls.\n", "Within the Storyteller system, the Mental attributes included Perception instead of Resolve, while the social Attributes of Charisma and Appearance were replaced on the sheet by Presence and Composure, respectively. Unlike all other attributes in the Storyteller system–and unlike all attributes in the Storytelling System–Appearance could have zero dots in it, although this was only to reflect particularly hideous or monstrous characters.\n", "From \"Outwitting The Hun\" (1918), by Pat O'Brien \n\n\"From my hospital bed as prisoner in Germany, I was musing over the melancholy phase of the scout's life when an orderly told me there was a beautiful battle going on in the air, and he volunteered to help me outside the hospital that I might witness it, and I readily accepted his assistance. That afternoon I saw one of the gamest flights I ever expect to witness. \n", "Section::::Herodotus on the battle.\n\nThe fields around were strewn with the bones of the combatants when Herodotus visited. He noted that the skulls of the Egyptians were distinguishable from those of the Persians by their superior hardness, a fact confirmed he said by the mummies, and which he ascribed to the Egyptians' shaving their heads from infancy, and to the Persians covering them up with folds of cloth or linen.\n", "Another tale describes how \"a small mounted party, led by an officer wearing a white shirt\" ventured outside the rebel fortifications. Joe commented that he was \"best at a white mark\". He quickly aimed and fired, and the man in the saddle fell to the ground, apparently dead.\n", "Alauddin made arrangements for rapid communication of news about the expedition by establishing \"thana\"s (posts) all along the route from Tilpat near Delhi to the army's current position. At every post and town along the route, fast-running horses and news-writers were stationed. Along the roads, foot-runners were stationed at regular distances to carry messages. This arrangement ensured that Alauddin was updated about the army's situation in every 2–3 days. 
It also ensured that the army was immune from any false rumors about the happenings in Delhi.\n", "An unnamed British hunter set up headquarters in a large village, where he compiled information on the leopard’s depredations with the local \"thana\". Ten days later, a man entered the hunter’s camp one morning, and claimed that the leopard had entered a hut in a village a mile from the camp, and had unsuccessfully attempted to carry off a small girl the previous night. The hunter dressed the girl’s wounds and she recovered. The leopard struck again two days later in another village. The hunter searched for the leopard from his camp for three weeks without success. \n", "BULLET::::- Degree of Friendship - In the Degree of Friendship, the Commander takes the part of Mattathias, the Lt. Commander that of Judas, the Past Commander that of John (son of Mattathias), and the Chaplain that of Eleazar (son of Mattathias). The candidate received instruction in the nature of friendship.\n", "In the 6th century BCE the Greek Bias of Priene successfully resisted the Lydian king Alyattes by fattening up a pair of mules and driving them out of the besieged city. When Alyattes' envoy was then sent to Priene, Bias had piles of sand covered with corn to give the impression of plentiful resources.\n", "Defensively, the Muslim spearmen with their two-and-a-half-meter-long spears would close ranks, forming a protective wall (\"Tabi'a\") for archers to continue their fire. This close formation stood its ground remarkably well in the first four days of defence in the Battle of Yarmouk.\n\nSection::::Army.:Cavalry.\n" ]
[ "In medieval and/or ancient times, soldiers were always able to distinguish friend from foe in battle. " ]
[ "Many times soldiers were unable to distinguish friend from foe in battle. " ]
[ "false presupposition" ]
[ "In medieval and/or ancient times, soldiers were always able to distinguish friend from foe in battle. ", "In medieval and/or ancient times, soldiers were always able to distinguish friend from foe in battle. " ]
[ "normal", "false presupposition" ]
[ "Many times soldiers were unable to distinguish friend from foe in battle. ", "Many times soldiers were unable to distinguish friend from foe in battle. " ]
2018-19278
Why do drinking fountains have two separate jets of water that combine to form one arc?
Two separate small jets running parallel with each other produce a less chaotic flow of water than a single large jet. You can notice this in those elaborate fountain shows: each jet is actually a bundle of smaller jets that combine to form the big "ribbon" of water.
[ "The book describes the construction of various automatic fountains, an aspect that was largely neglected in earlier Greek treatises on technology. In one of these fountains, the \"water issues from the fountainhead in the shape of a shield, or like a lily-of-the-valley,\" i.e. \"the shapes are discharged alternately—either a sheet of water concave downwards, or a spray.\" Another fountain \"discharges a shield or a single jet,\" while a variation of this features double-action alternation, i.e. has two fountainheads, with one discharging a single jet and the other a shield, and the two alternating repeatedly. Another variation features one main fountainhead and two or more subsidiary ones, such that when the main one ejects a single jet, the subsidiaries eject shields, with the two alternating.\n", "From the Middle Ages onwards, fountains in villages or towns were connected to springs, or to channels which brought water from lakes or rivers. In Provence, a typical village fountain consisted of a pipe or underground duct from a spring at a higher elevation than the fountain. The water from the spring flowed down to the fountain, then up a tube into a bulb-shaped stone vessel, like a large vase with a cover on top. The inside of the vase, called the \"bassin de répartition\", was filled with water up to a level just above the mouths of the canons, or spouts, which slanted downwards. The water poured down through the canons, creating a siphon, so that the fountain ran continually.\n", "Section::::Drinking fountain.\n", "BULLET::::- The Pont d'eau was made by jets of water from both sides of Lake Daumesnil, which formed an illuminated water \"bridge\" forty meters long and six meters wide.\n", "The fountain was described this way by English artist, E. Adveno Brooke, who visited and made a chronolithograph of the fountain in 1857: \"As we stood admiring the beauty and tranquility of the scene, a bubbling sound of water, at first gentle and gathering force by degrees, broke out and we beheld the commencement of one of the most beautiful aquatic displays it is possible to conceive. This, the large fountain, is on a level with the surface of the lake, and composed of five jets, the central one throwing a column of water 150 feet high; the supply being obtained from a large reservoir on the hill, to which it is first pumped by the united action of two engines, each of thirty horsepower.\".\n", "BULLET::::- The center ring is in diameter and contains nine nozzles operating vertically, rising to a height of , driven by a motor with a pump capacity of 5,500 GPM.\n", "This musical fountain is unique and has 150 channels available for water and light effects – old fountain had 20 channels for water. The concept of three-tier fountain pool, with musical fountain in the upper pool surrounded by architectural and dynamic fountains in the intermediate and lower pools, is quite unique in this subcontinent.\n", "Both fountains have the same form: a stone basin; six figures of tritons or naiades holding fish spouting water; six seated allegorical figures, their feet on the prows of ships, supporting the piedouche, or pedestal, of the circular vasque; four statues of different forms of genius, arts or crafts supporting the upper inverted upper vasque; whose water shoots up and then cascades down to the lower vasque and then the basin.\n", "Fountains in Portland, Oregon\n\nSection::::Benson Bubblers.\n\nMore than fifty drinking fountains called Benson Bubblers, named after Simon Benson and designed by A. E. 
Doyle, are located in and around downtown Portland.\n\nSection::::Portland Parks & Recreation.\n", "Section::::Features.\n\nBULLET::::- The fountain’s basin is in diameter holding of water which is treated according to swimming pool sanitation standards.\n\nBULLET::::- The spray component consists of three rings.\n\nBULLET::::- The outer ring contains 36 nozzles, equally spaced around the perimeter, projecting a stream of water inward at a 45° angle, rising , driven by a motor with a pump capacity of 6,750 GPM\n\nBULLET::::- The middle ring is in diameter and consists of 18 nozzles directed vertically, rising to a height of , driven by a motor with a pump capacity of 4,500 GPM.\n", "Great Fountain, Enville\n\nThe Great Fountain, Enville, was a fountain created in the mid-19th century by the Earl of Stamford in the middle of a lake on his estate, Enville Hall, in Enville, Staffordshire, England.\n", "Section::::General information.\n", "In the nineteenth century, the development of steam engines allowed the construction of more dramatic fountains. In the middle of the century the Earl of Stamford built the Great Fountain, Enville, which jetted water 150 feet above the surface of a lake on his estate. He used two steam engines to pump water to a reservoir at the top of the hill above his estate. The fountain could spout water for several minutes, until the reservoir was empty.\n", "A third common type of village fountain was the \"fontaine à bulbe\", or bulb fountain. The bulb fountain operated in the same way as the barrel fountain, except that the bassin de repartition was located in a separate stone container, usually bulb shape, placed on top of column. The canons of the fountain were attached to the bulb rather than the column. The masks through which the water flowed were frequently carved directly onto the stone of the bulb.\n\nSection::::French Renaissance.\n", "The CESC Fountain of Joy has a centre-fed circular water screen of 6 metre height and 18 metre width. In the upper pool, the CESC Fountain of Joy will have 99 water effects, while the intermediate pool will have 20 water effects and another 30 special water effects in the lower pool. There will be a large water cascading area – more than 80 metre long from upper pool to the intermediate pool.\n", "One myth claims that drinking fountains were first built in the United States in 1888 by the then-small Kohler Water Works (now Kohler Company) in Kohler, Wisconsin. However, no company by that name existed at the time. The original 'Bubbler' shot water one inch straight into the air, creating a bubbling texture, and the excess water ran back down over the sides of the nozzle. Several years later the Bubbler adopted the more sanitary arc projection, which also allowed the user to drink more easily from it. At the start of the 20th century, it was discovered that the original vertical design was related to the spread of many contagious diseases.\n", "F.W. Darlington was a pioneer in electrical fountain control as well as water design. \n\n\"Darlington had several signature water feature elements in his fountain designs. The multiple spray rings with \"basket-weave\" nozzle placement is one that shows up in photographs of several fountains, including some not yet credited to Darlington. 
The \"fan\" effect, a complicated triple spray ring with multiple nozzle sizes and angles is yet another water effect seen in several \"Electric Fountains.\" \n\nPrismatic Fountain, New Orleans, Louisiana - 1915\n", "BULLET::::- The pont d'eau was made by jets of water from both sides of Lake Daumesnil, which formed an illuminated water \"bridge\" forty meters long and six meters wide. This was the first fountain made entirely of water, with no architectural element; the ancestor of the Jet d'eau in Lake Geneva, created twenty years later.\n", "BULLET::::- design piping so that, once C is full and B empty, they can be switched\n\nBULLET::::- add valves to empty C and replenish B (in effect transferring water from C to B)\n\nBULLET::::- instead of using valves, transfer water up from C to B through boiling and condensing\n\nBULLET::::- make a 4 container A-B-C-D fountain, which can be turned upside down so that the full and empty container switch place\n\nThere also exist fountains with two liquids of different colors and density. \n\nSection::::Geological phenomena.\n", "Use of the words \"water fountain\", \"drinking fountain\", and/or \"bubbler\" vary across regional dialects of English.\n\nSection::::History.\n\n\"See also: Drinking fountains in the United States, Temperance fountain\"\n\nBefore potable water was provided in private homes, water for drinking was made available to citizens of cities through access to public fountains. Many of these early public drinking fountains can still be seen (and used) in cities such as Rome, with its many \"fontanelle\" and \"nasoni\" (big noses).\n", "BULLET::::- 'The Paris Colonial Exposition of 1931 introduced neon lights and the indirect outdoor lighting of Paris buildings, and featured eight different illuminated fountains.\n\nBULLET::::- The Théâtre d'eau, or water theater, located on one side of the lake, covering an arc of a circle of about 80 meters, created a performance of dancing water, forming changing bouquets, arches, and curtains of water from its jets and nozzles. It was the ancestor of the modern musical fountain.\n", "BULLET::::- (C) Bottom: air supply\n\nAnd three pipes: \n\nBULLET::::- P1, (on the left in the picture), from a hole in the bottom of basin (A) to bottom of air supply container (C)\n\nBULLET::::- P2, (on the right in the picture), from the top of the air supply container (C) to the top of the water supply container (B)\n", "Later, in the 16th century, a more sophisticated version of the wall fountain appeared, based on the design of ancient Roman street fountains. These new fountains showed the wealth and prestige of the town or village. and were usually placed in the central square, near the church. Inside the \"buffet\", or the wall behind the fountain, was a hollow chamber called the \"bassin de repartition\", connected by a pipe to the source of water below. The canon of the fountain was connected with the basin, with its entrance just below the water level in the basin. 
As the water poured out the canon, it would create a siphon, drawing water up the pipe and keeping the bassin de repartition filled.\n", "In use, a fluid amplifier would typically be connected to, and receive the high-energy power stream from, a separate fluid supply manifold that had been previously installed.\n\nTypical Installation for Fountains\n\nTypical Installation for Playdecks\n\nSection::::Where used.\n\nBULLET::::- White Square, Moscow, Russia\n\nBULLET::::- Oasis and Allure of the Seas, AquaTheatre\n\nBULLET::::- Town Lake, Austin, USA\n\nBULLET::::- Easton Town Center, Columbus, USA\n\nBULLET::::- Gateway Theatre of Shopping Boulevard Water Feature, Durban, South Africa\n\nBULLET::::- Metropolitan Warsaw, Poland\n\nBULLET::::- Kuala Lumpur City Centre Fountain, Kuala Lumpur, Malaysia\n\nBULLET::::- Starlight Spectacular / Royal Fountain , Canada's Wonderland, Vaughan, Ontario, Canada\n\nSection::::External links.\n", "BULLET::::- Ewing & Muriel Kauffman Memorial Fountain \n\nSection::::Description.:F–K.\n\nBULLET::::- Federal Building \n\nBULLET::::- Firefighters Fountain \n\nBULLET::::- Fountain of Bacchus \n\nBULLET::::- Four Fauns Fountain \n\nBULLET::::- Frank S. Land Memorial Fountain \n\nBULLET::::- Grandview City Hall Veterans Memorial \n\nBULLET::::- H & R Bloch Courtyard Fountain \n\nBULLET::::- Hallmark Corporate Entrance \n\nBULLET::::- Harold D. Rice Fountain \n\nBULLET::::- Harry Evans Minty Memorial Fountain \n\nBULLET::::- Harvester KC \n\nBULLET::::- Helen Cuddy Memorial Rose Garden Fountain \n\nBULLET::::- Helen Spradling Boylan Memorial \n\nBULLET::::- Henry Wollman Bloch Memorial Fountain \n\nBULLET::::- Hillside Fountain \n" ]
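Several of the fountains in the passages above (the Provençal siphon fountains, the Enville fountain fed from a hilltop reservoir) run purely on the pressure head of water stored higher than the nozzle. A hedged back-of-the-envelope sketch using Torricelli's law: the ideal spout speed is v = sqrt(2 * g * h), and a loss-free vertical jet would rise back to the full head height h; real jets rise less because of drag and pipe losses. The 50 m head below is an assumed figure, not one from the passages.

    import math

    g = 9.81   # gravitational acceleration, m/s^2

    def ideal_jet_speed(head_m):
        """Torricelli's law: spout speed for a head of head_m metres.
        An ideal (loss-free) vertical jet would rise back to head_m."""
        return math.sqrt(2 * g * head_m)

    head = 50.0   # assumed reservoir height above the nozzle, metres
    print(round(ideal_jet_speed(head), 1), "m/s spout speed; ideal rise:", head, "m")

This is consistent with the Enville description above: a 150-foot (roughly 46 m) jet needed steam engines to pump water to a reservoir well above the lake, because jet height is bounded by the available head.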
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-22258
How does drinkable dietary fiber become something solid?
The fiber that dissolves in water is called soluble fiber. It dissolves at first, but if you let it sit long enough you'll see that the fiber absorbs the water and becomes a gelatinous mass. This helps to bulk up your stool because it creates a blob that your body can't digest.
[ "BULLET::::2. Immobilizing of nutrients and other chemicals within complex polysaccharide molecules affects their release and subsequent absorption from the small intestine, an effect influential on the glycemic index.\n\nBULLET::::3. Molecules begin to interact as their concentration increases. During absorption, water must be absorbed at a rate commensurate with the absorption of solutes. The transport of actively and passively absorbed nutrients across epithelium is affected by the unstirred water layer covering the microvillus membrane.\n\nBULLET::::4. The presence of mucus or fiber, e.g., pectin or guar, in the unstirred layer may alter the viscosity and solute diffusion coefficient.\n", "BULLET::::3. There may also be an added osmotic effect of products of bacterial fermentation on fecal mass.\n", "Section::::Applications.:Fuel cell electrolyte.\n", "Bulking fibers can be soluble (e.g. psyllium) or insoluble (e.g. cellulose and hemicellulose). They absorb water and can significantly increase stool weight and regularity. Most bulking fibers are not fermented or are minimally fermented throughout the intestinal tract.\n", "Section::::Hierarchical structure.:Nacre.:The nanoscale.\n\nThe 30 nm thick interface between the tablets that connects them together and the aragonite grains detected by scanning electron microscopy from which the tablets themselves are made of together represent another structural level. The organic material “gluing” the tablets together is made of proteins and chitin.\n", "BULLET::::- Cooking and chewing food alters these physicochemical properties and hence absorption and movement through the stomach and along the intestine\n\nSection::::Activity in the gut.:Dietary fiber in the upper gastrointestinal tract.\n\nFollowing a meal, the stomach and upper gastrointestinal contents consist of\n\nBULLET::::- food compounds\n\nBULLET::::- complex lipids/micellar/aqueous/hydrocolloid and hydrophobic phases\n\nBULLET::::- hydrophilic phases\n\nBULLET::::- solid, liquid, colloidal and gas bubble phases.\n\nMicelles are colloid-sized clusters of molecules which form in conditions as those above, similar to the critical micelle concentration of detergents.\n", "BULLET::::- Particle size and interfacial interactions with adjacent matrices affect the mechanical properties of food composites.\n\nBULLET::::- Food polymers may be soluble in and/or plasticized by water. Water is the most important plasticizer, particularly in biological systems thereby changing mechanical properties.\n\nBULLET::::- The variables include chemical structure, polymer concentration, molecular weight, degree of chain branching, the extent of ionization (for electrolytes), solution pH, ionic strength and temperature.\n\nBULLET::::- Cross-linking of different polymers, protein and polysaccharides, either through chemical covalent bonds or cross-links through molecular entanglement or hydrogen or ionic bond cross-linking.\n", "Dietary fibers can change the nature of the contents of the gastrointestinal tract and can change how other nutrients and chemicals are absorbed through bulking and viscosity. 
Some types of soluble fibers bind to bile acids in the small intestine, making them less likely to re-enter the body; this in turn lowers cholesterol levels in the blood from the actions of cytochrome P450-mediated oxidation of cholesterol.\n", "Dietary fiber\n\nDietary fiber (British spelling fibre) or roughage is the portion of plant-derived food that cannot be completely broken down by human digestive enzymes. It has two main components:\n\nBULLET::::- Soluble fiber – which dissolves in water – is readily fermented in the colon into gases and physiologically active by-products, such as short-chain fatty acids produced in the colon by gut bacteria; it is viscous, may be called prebiotic fiber, and delays gastric emptying which, in humans, can result in an extended feeling of fullness.\n", "BULLET::::2. Lignin in fiber adsorbs bile acids, but the unconjugated form of the bile acids are adsorbed more than the conjugated form. In the ileum where bile acids are primarily absorbed the bile acids are predominantly conjugated.\n\nBULLET::::3. The enterohepatic circulation of bile acids may be altered and there is an increased flow of bile acids to the cecum, where they are deconjugated and 7alpha-dehydroxylated.\n\nBULLET::::4. These water-soluble form, bile acids e.g., deoxycholic and lithocholic are adsorbed to dietary fiber and an increased fecal loss of sterols, dependent in part on the amount and type of fiber.\n", "BULLET::::- inulins, a group of polysaccharides\n\nBULLET::::- oligosaccharides\n\nSection::::Short-chain fatty acids.\n\nWhen fermentable fiber is fermented, short-chain fatty acids (SCFA) are produced. SCFAs are involved in numerous physiological processes promoting health, including:\n\nBULLET::::- stabilize blood glucose levels by acting on pancreatic insulin release and liver control of glycogen breakdown\n\nBULLET::::- stimulate gene expression of glucose transporters in the intestinal mucosa, regulating glucose absorption\n\nBULLET::::- provide nourishment of colonocytes, particularly by the SCFA butyrate\n\nBULLET::::- suppress cholesterol synthesis by the liver and reduce blood levels of LDL cholesterol and triglycerides responsible for atherosclerosis\n", "In the upper gastrointestinal tract, these compounds consist of bile acids and di- and monoacyl glycerols which solubilize triacylglycerols and cholesterol.\n\nTwo mechanisms bring nutrients into contact with the epithelium:\n\nBULLET::::1. intestinal contractions create turbulence; and\n\nBULLET::::2. convection currents direct contents from the lumen to the epithelial surface.\n\nThe multiple physical phases in the intestinal tract slow the rate of absorption compared to that of the suspension solvent alone.\n\nBULLET::::1. Nutrients diffuse through the thin, relatively unstirred layer of fluid adjacent to the epithelium.\n", "Dietary fiber has distinct physicochemical properties. Most semi-solid foods, fiber and fat are a combination of gel matrices which are hydrated or collapsed with microstructural elements, globules, solutions or encapsulating walls. Fresh fruit and vegetables are cellular materials.\n\nBULLET::::- The cells of cooked potatoes and legumes are gels filled with gelatinized starch granules. 
The cellular structures of fruits and vegetables are foams with a closed cell geometry filled with a gel, surrounded by cell walls which are composites with an amorphous matrix strengthened by complex carbohydrate fibers.\n", "As a dietary supplement, whey and other protein powders can be reconstituted at the time of usage by the addition of a solvent such as water, juice, milk, or other liquid. As a food ingredient, whey powders are easily mixed or dissolved into a formulated food.\n\nSection::::Function.\n", "Viscous fiber sources gaining FDA approval are:\n\nBULLET::::- Psyllium seed husk (7 grams per day)\n\nBULLET::::- Beta-glucan from oat bran, whole oats, oatrim, or rolled oats (3 grams per day)\n\nBULLET::::- Beta-glucan from whole grain or dry-milled barley (3 grams per day)\n\nOther examples of bulking fiber sources used in functional foods and supplements include cellulose, guar gum and xanthan gum. Other examples of fermentable fiber sources (from plant foods or biotechnology) used in functional foods and supplements include resistant starch, inulin, fructans, fructooligosaccharides, oligo- or polysaccharides, and resistant dextrins, which may be partially or fully fermented.\n", "Dietary fibers from fruits, vegetables, and whole grains, consumed, increase the speed of transit of intestinal chyme into the ileum, to raise PYY, and induce satiety. Peptide YY can be produced as the result of enzymatic breakdown of crude fish proteins and ingested as a food product. \n\nSection::::Structure.\n", "Fibre, especially cellulose and hemi-cellulose, is primarily broken down into the volatile fatty acids, acetic acid, propionic acid and butyric acid in these chambers (the reticulo-rumen) by microbes: (bacteria, protozoa, and fungi). In the omasum, water and many of the inorganic mineral elements are absorbed into the blood stream.\n", "Section::::Examples of adsorption.\n\nSection::::Examples of adsorption.:Beer stone.\n", "BULLET::::- Insoluble fiber – which does not dissolve in water – is inert to digestive enzymes in the upper gastrointestinal tract and provides bulking. Some forms of insoluble fiber, such as resistant starches, can be fermented in the colon. Bulking fibers absorb water as they move through the digestive system, easing defecation.\n\nDietary fiber consists of non-starch polysaccharides and other plant components such as cellulose, resistant starch, resistant dextrins, inulin, lignins, chitins, pectins, beta-glucans, and oligosaccharides.\n", "The effects of dietary fiber in the colon are on\n\nBULLET::::1. bacterial fermentation of some dietary fibers\n\nBULLET::::2. thereby an increase in bacterial mass\n\nBULLET::::3. an increase in bacterial enzyme activity\n\nBULLET::::4. changes in the water-holding capacity of the fiber residue after fermentation\n\nEnlargement of the cecum is a common finding when some dietary fibers are fed and this is now believed to be normal physiological adjustment. Such an increase may be due to a number of factors, prolonged cecal residence of the fiber, increased bacterial mass, or increased bacterial end-products.\n", "Fermentable fibers are consumed by the microbiota within the large intestines, mildly increasing fecal bulk and producing short-chain fatty acids as byproducts with wide-ranging physiological activities (discussion below). Resistant starch, inulin, fructooligosaccharide and galactooligosaccharide are dietary fibers which are fully fermented. These include insoluble as well as soluble fibers. 
This fermentation influences the expression of many genes within the large intestine, which affect digestive function and lipid and glucose metabolism, as well as the immune system, inflammation and more.\n", "Section::::Human-made fibers.:Synthetic fibers.:Metallic fibers.\n\nMetallic fibers can be drawn from ductile metals such as copper, gold or silver and extruded or deposited from more brittle ones, such as nickel, aluminum or iron.\n\nSee also Stainless steel fibers.\n\nSection::::Human-made fibers.:Synthetic fibers.:Carbon fiber.\n\nCarbon fibers are often based on oxidized and via pyrolysis carbonized polymers like PAN, but the end product is almost pure carbon.\n\nSection::::Human-made fibers.:Synthetic fibers.:Silicon carbide fiber.\n", "There are several processes which can be used for manufacturing metallic fibers. \n", "As an example of fermentation, shorter-chain carbohydrates (a type of fiber found in legumes) cannot be digested, but are changed via fermentation in the colon into short-chain fatty acids and gases (which are typically expelled as flatulence).\n\nAccording to a 2002 journal article,\n\nfiber compounds with partial or low fermentability include:\n\nBULLET::::- cellulose, a polysaccharide\n\nBULLET::::- hemicellulose, a polysaccharide\n\nBULLET::::- lignans, a group of phytoestrogens\n\nBULLET::::- plant waxes\n\nfiber compounds with high fermentability include:\n\nBULLET::::- resistant starches\n\nBULLET::::- beta-glucans, a group of polysaccharides\n\nBULLET::::- pectins, a group of heteropolysaccharides\n\nBULLET::::- natural gums, a group of polysaccharides\n", "When exposed to physiological conditions, polyglycolide is degraded by random hydrolysis, and apparently it is also broken down by certain enzymes, especially those with esterase activity. The degradation product, glycolic acid, is nontoxic, and it can enter the tricarboxylic acid cycle, after which it is excreted as water and carbon dioxide. A part of the glycolic acid is also excreted by urine.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-20816
Why do extreme cold temperatures damage our tissue and give us frostbite, but storing meat in the freezer keeps it good?
Storing meat in the freezer damages it. Since it's already dead, it doesn't particularly care. If anything, damaging meat is a good thing; it helps make it tender. Of course, if you take meat out of the freezer, heat it to body temperature, and then leave it like that, it'll go all gross and discoloured.
[ "BULLET::::- Vitamin B (Thiamin): A vitamin loss of 25 percent is normal. Thiamin is easily soluble in water and is destroyed by heat.\n\nBULLET::::- Vitamin B (Riboflavin): Not much research has been done to see how much freezing affects Riboflavin levels. Studies that have been performed are inconclusive; one study found an 18 percent vitamin loss in green vegetables, while another determined a 4 percent loss. It is commonly accepted that the loss of Riboflavin has to do with the preparation for freezing rather than the actual freezing process itself.\n", "Under hygienic conditions and without other treatment, meat can be stored at above its freezing point (–1.5 °C) for about six weeks without spoilage, during which time it undergoes an aging process that increases its tenderness and flavor.\n", "BULLET::::- Brian Lassen, \"Is livestock production prepared for an electrically paralysed world?\" J. Sci. Food Agric. 2013;93(1):2-4, Explains the vulnerability of the cold chain from electricity dependence.\n\nBULLET::::- \"Manual on the Management, Maintenance and Use of Blood Cold Chain Equipment\", World Health Organization, 2005,\n\nBULLET::::- Pawanexh Kohli, \"Fruits and Vegetables Post-Harvest Care: The Basics\", Explains why the cold chain is required for fruits and vegetables.\n\nBULLET::::- Clive, D., \"Cold and Chilled Storage Technology\", 1997,\n\nBULLET::::- EN 12830:1999 Temperature recorders for the transport, storage and distribution of chilled, frozen and deep-frozen/quick-frozen food and ice cream\n", "BULLET::::- at room temperature; this is dangerous since the outside may be defrosted while the inside remains frozen\n\nBULLET::::- in a refrigerator\n\nBULLET::::- in a microwave oven\n\nBULLET::::- wrapped in plastic and placed in cold water or under cold running water\n\nPeople sometimes defrost frozen foods at room temperature because of time constraints or ignorance; such foods should be promptly consumed after cooking or discarded and never be refrozen or refrigerated since pathogens are not killed by the freezing process.\n\nSection::::Quality.\n", "BULLET::::- [∗∗] : min temperature = . Maximum storage time for (pre-frozen) food is 1 month\n\nBULLET::::- [∗∗∗] : min temperature = . Maximum storage time for (pre-frozen) food is between 3 and 12 months depending on type (meat, vegetables, fish, etc.)\n\nBULLET::::- [∗∗∗∗] : min temperature = . Maximum storage time for pre-frozen or frozen-from-fresh food is between 3 and 12 months\n", "Section::::Food safety.:Mechanisms.:Low-Temperature Process.\n\nLow-temperature processing also plays an essential role in food processing and storage. During this process, microorganisms and enzymes are subjected to low temperatures. Unlike heating, chilling does not destroy the enzymes and microorganisms but simply reduces their activity, which is effective as long as the temperature is maintained. As the temperature is raised, activity will rise again accordingly. It follows that, unlike heating, the effect of preservation by cold is not permanent; hence the importance of maintaining the \"cold chain\" throughout the shelf life of the food product. (Chapter 16 pg, 396) \n", "Raw chicken maintains its quality longer in the freezer, as moisture is lost during cooking. There is little change in nutrient value of chicken during freezer storage. 
For optimal quality, however, a maximum storage time in the freezer of 12 months is recommended for uncooked whole chicken, 9 months for uncooked chicken parts, 3 to 4 months for uncooked chicken giblets, and 4 months for cooked chicken. Freezing doesn't usually cause color changes in poultry, but the bones and the meat near them can become dark. This bone darkening results when pigment seeps through the porous bones of young poultry into the surrounding tissues when the poultry meat is frozen and thawed.\n", "BULLET::::- Facilities for breeding laboratory animals. Since many animals normally reproduce only in spring, holding them in rooms in which conditions mirror those of spring all year can cause them to reproduce year-round.\n\nBULLET::::- Food cooking and processing areas\n\nBULLET::::- Hospital operating theatres, in which air is filtered to high levels to reduce infection risk and the humidity controlled to limit patient dehydration. Although temperatures are often in the comfort range, some specialist procedures, such as open heart surgery, require low temperatures (about 18 °C, 64 °F) and others, such as neonatal care, relatively high temperatures (about 28 °C, 82 °F).\n\nBULLET::::- Industrial environments\n", "For example, the appearance in the 1980s of preservation techniques under controlled atmosphere sparked a small revolution in the world's market for sheep meat: the lamb of New Zealand, one of the world's largest exporters of lamb, could henceforth be sold as fresh meat, since it could be preserved for 12 to 16 weeks, a sufficient duration for it to reach Europe by boat. Before, meat from New Zealand was frozen and thus had a much lower value on European shelves. With the arrival of the new \"chilled\" meats, New Zealand could compete even more strongly with local producers of fresh meat. The use of controlled atmosphere to avoid the depreciation which affects frozen meat is equally useful in other meat markets, such as that for pork, which now also enjoys an international trade.\n", "Chicken can be cooked or reheated from the frozen state, but it will take approximately one and a half times as long to cook, and any wrapping or absorbent paper should be discarded. There are three generally accepted safe methods of reheating frozen chicken: in the refrigerator, in cold water, or using a microwave oven. These methods are endorsed by the FDA as safe, as they minimize the risk of bacterial growth. Bacteria survive but do not grow at freezing temperatures. However, if frozen cooked foods are not defrosted properly and are not reheated to temperatures that kill bacteria, the chances of getting a foodborne illness greatly increase.\n", "One common way to eat hunted meat is frozen. Many hunters will eat the food that they hunt at the location where they found it. This keeps their blood flowing and their bodies warm. One custom of eating meat at the hunting site pertains to fish. In \"Overland to Starvation Cove: A History\", Heinrich Klutschak explains the custom: \"...no fish could be eaten in a cooked state on the spot where caught but could only be enjoyed raw; only when one is a day's march away from the fishing site is it permitted to cook the fish over the flame of a blubber lamp.\"\n", "In contrast to short-term sample storage at +4 °C or -20 °C using standard refrigerators or freezers, many molecular biology or life science laboratories need long-term storage for biological samples like DNA, RNA, proteins, cell extracts, or reagents. 
To reduce the risk of sample damage, these types of samples need extremely low temperatures of -80 to -85 °C. Cells are stored in tanks of liquid nitrogen at -196 °C.\n", "In the United States, livestock is usually transported live, slaughtered at a major distribution point, hung and transported for two days to a week in refrigerated rail cars, and then butchered and sold locally. Before refrigerated rail cars, meat had to be transported live, and this placed its cost so high that only farmers and the wealthy could afford it every day. In Europe much meat is transported live and slaughtered close to the point of sale. In much of Africa and Asia most meat for local populations is raised, slaughtered and eaten locally, which is believed to be less stressful for the animals involved and minimizes meat storage needs. In Australia and New Zealand, where a large proportion of meat production is for export, meat enters the cold chain early, being stored in large freezer plants before being shipped overseas in freezer ships.\n", "Freeze branding is used as an alternative to the more traditional hot branding. This process involves the use of a hot iron to scar an animal's skin, which can be painful and traumatizing to the animal. Freeze branding has been gaining in popularity as a less painful way to permanently mark and identify animals. There has been debate over whether freeze branding truly is less painful than hot branding, but studies conducted to compare the pain of the two methods have concluded that freeze branding is indeed less painful.\n", "BULLET::::- After long-term Jjimjil, the body fat percentage of older women decreased, but in older men there was no significant change.\n\nBULLET::::- At every age, heat tolerance increased (a beneficial effect).\n\nBULLET::::- Cold tolerance, however, decreased; if the duration, intensity, and number of repetitions are controlled well, especially by resting in the ice room during Jjimjil, the decrease in cold tolerance can be reduced or even reversed.\n\nSection::::Food.\n\nBULLET::::- Iced Sikhye is a sweet rice beverage.\n", "The frequencies used in diagnostic ultrasound are typically between 2 and 18 MHz, and uncertainty remains about the extent of cellular damage or long-term effects of fetal scans. (see Medical ultrasonography)\n\nSection::::Low temperatures.\n\nFreezing food to preserve its quality has been used since time immemorial. Freezing temperatures curb the spoiling effect of microorganisms in food, but can also preserve some pathogens unharmed for long periods of time. Freezing kills some microorganisms by physical trauma; others are sublethally injured by freezing and may recover to become infectious.\n\nSection::::High osmotic gradients.\n", "Section::::Potential solutions.:Abattoir chilling conditions.\n\nQuickly chilling pork and poultry meat, in order to bring the muscle temperature down to an acceptable level, will reduce myofibril glycolysis and stop muscle metabolism. Slower chilling results in a lower pH, lighter colored meat, and greater yield losses after cooking.\n", "Hibernating Arctic ground squirrels may have abdominal temperatures as low as −2.9 °C (26.8 °F), maintaining subzero abdominal temperatures for more than three weeks at a time, although the temperatures at the head and neck remain at 0 °C or above.\n\nSection::::Applied cryobiology.\n\nSection::::Applied cryobiology.:Historical background.\n\nCryobiology history can be traced back to antiquity. 
As early as 2500 BC, low temperatures were used in Egypt in medicine. The use of cold was recommended by Hippocrates to stop bleeding and swelling. With the emergence of modern science, Robert Boyle studied the effects of low temperatures on animals.\n", "The freezing technique is the most commonly used sectioning method. This method can preserve the immune activity of various antigens well. Both fresh tissue and fixed tissue can be frozen. Moreover, it is also a technique used for freezing sections of either fresh or fixed plant tissues.\n", "Foods that spoil easily, such as meats, dairy, and seafood, must be prepared a certain way to avoid contaminating the people for whom they are prepared. As such, the rule of thumb is that cold foods (such as dairy products) should be kept cold and hot foods (such as soup) should be kept hot until storage. Cold meats, such as chicken, that are to be cooked should not be left at room temperature for thawing, at the risk of growth of dangerous bacteria such as \"Salmonella\" or \"E. coli\".\n\nSection::::Safety.:Allergies.\n", "Cellars, caves, and cool streams were used for freezing. American estates had ice houses built to store ice and food on the ice. The icehouse later gave way to the \"icebox\", which in turn was replaced by mechanical refrigeration, developed in the 1800s. Clarence Birdseye found in the early 20th century that freezing meats and vegetables at a low temperature made them taste better.\n\nSection::::History and methods.:Fermenting.\n", "Another experiment monitored the calves' escape-avoidance reaction. The vertical movement of a calf during branding was used as an operational definition for avoidance of the brand. It was determined that H calves tried to escape the branding iron more than the F and S calves.\n\nThese two experiments determined that the hot-iron branded calves experienced increased plasma epinephrine concentration, heart rate, plasma cortisol, and escape-avoidance reactions and therefore experienced more pain than the freeze-branded and sham-branded calves.\n\nSection::::Common usage.\n", "It has been suggested that energy intake also increases during conditions of extreme or prolonged cold temperatures. Relatedly, researchers have posited that reduced variability of ambient temperature indoors could be a mechanism driving obesity, as the percentage of US homes with air conditioning increased from 23 to 47 percent in recent decades. In addition, several human and animal studies have shown that temperatures above the thermoneutral zone significantly reduce food intake. However, overall there are few studies indicating altered energy intake in response to extreme ambient temperatures, and the evidence is primarily anecdotal.\n\nSection::::Environmental influences.:Ambient characteristics.:Lighting.\n", "This verse has caused some to ask if meat should be eaten in the summer. Meat has more calories than fruits and vegetables, and some individuals may need fewer calories in summer than in winter. Also, before fruits and vegetables could be preserved, people often did not have enough other food to eat in winter. Spoiled meat can be fatal if eaten, and in former times meat spoiled more readily in summer than in winter. Modern methods of refrigeration now make it possible to preserve meat in any season. The key word with respect to the use of meat is \"sparingly\".\n", "Steak and other meat products can be frozen and exported, but before the invention of commercial refrigeration, transporting meat over long distances was impossible. 
Communities had to rely on what was locally available, which determined the forms and traditions of meat consumption. Hunter-gatherer peoples cut steaks from local indigenous animals. For example, Sami cuisine relies partly on the meat of the reindeer; the Inuit diet uses locally caught sea-mammal meat from whales; Indigenous Australians ate kangaroo; and indigenous North American food included bison steak. In the Middle East, meat recipes from medieval times onwards simply state \"meat\" without specifying the kind or cut; \"apart from an occasional gazelle, kid or camel\", only lamb and mutton were eaten because cattle were seldom bred.\n" ]
[ "Cold temperatures should act the same on human tissue as on meat in the freezer.", "Meat stored in freezers is not damaged from the extreme temperatures. " ]
[ "Meat in the freezer is dead so being in the freezer doesn't particularly matter.", "Storing meat in a freezer damages the meat inn the same way frost bite damages human tissue. " ]
[ "false presupposition" ]
[ "Cold temperatures should act the same on human tissue as on meat in the freezer.", "Meat stored in freezers is not damaged from the extreme temperatures. " ]
[ "false presupposition", "false presupposition" ]
[ "Meat in the freezer is dead so being in the freezer doesn't particularly matter.", "Storing meat in a freezer damages the meat inn the same way frost bite damages human tissue. " ]
2018-17479
Why does metal start a fire in a microwave?
The microwave induces a "voltage differential" in metal; that is to say, it affects the metal so that one area of it has a higher voltage than another. This causes electricity to flow from the area of high voltage to the area of lower voltage. Moving electricity generates heat; enough heat starts a fire.
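To put rough numbers on that, here is a toy Python sketch of the Joule-heating step. Both input values are illustrative assumptions, not measurements of any real oven or utensil; the point is only that a modest voltage across a very low resistance dissipates a lot of power.

```python
# Toy sketch: power dissipated in a low-resistance piece of metal.
# The voltage and resistance below are assumed values for illustration.
volts = 1.0    # assumed induced voltage across a thin strip of foil
ohms = 0.001   # assumed resistance of the strip

amps = volts / ohms        # Ohm's law: I = V / R
watts = volts**2 / ohms    # Joule heating: P = V^2 / R
print(f"{amps:.0f} A flowing, {watts:.0f} W dissipated")  # 1000 A, 1000 W
```

A kilowatt concentrated in a few milligrams of thin foil heats it toward glowing almost instantly, which is the spark-and-fire mechanism the answer describes.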
[ "Section::::Four-Step Model.\n", "BULLET::::- 1893: physicist W.B. Croft exhibits Branly's experiments at a meeting of the Physical Society in London. It is unclear to Croft and others whether the filings in the Branly filing tube are reacting to sparks or the light from the sparks. George Minchin notices the [Branly] tube may be reacting to Hertzian waves the same way his solar cell does and writes the paper \"\"The Action of Electromagnetic Radiation on Films containing Metallic Powders\"\". These papers are read by Lodge who sees a way to build a much improved Herzian wave detector.\n", "In recent years, infrared images have revealed dozens of examples of \"infrared HH objects\". Most look like bow waves (similar to the waves at the head of a ship), and so are usually referred to as molecular \"bow shocks\". The physics of infrared bow shocks can be understood in much the same way as that of HH objects, since these objects are essentially the same – supersonic shocks driven by collimated jets from the opposite poles of a protostar. It is only the conditions in the jet and surrounding cloud that are different, causing infrared emission from molecules rather than optical emission from atoms and ions.\n", "They operate very well in oxidizing atmospheres. If, however, a mostly reducing atmosphere (such as hydrogen with a small amount of oxygen) comes into contact with the wires, the chromium in the chromel alloy oxidizes. This reduces the emf output, and the thermocouple reads low. This phenomenon is known as \"green rot\", due to the color of the affected alloy. Although not always distinctively green, the chromel wire will develop a mottled silvery skin and become magnetic. An easy way to check for this problem is to see whether the two wires are magnetic (normally, chromel is non-magnetic).\n", "Section::::Experiments.:Test devices.:Resonant cavity thruster.\n\nAnother type of claimed propellantless thruster, a resonant cavity thruster, has been proposed to work due to a Mach effect:\n\nAn asymmetric resonant microwave cavity could act as a capacitor where:\n\nBULLET::::- surface currents propagate inside the cavity on the conic wall between the two end plates,\n\nBULLET::::- electromagnetic resonant modes create electric charges on each end plate,\n\nBULLET::::- a Mach effect is triggered by Lorentz forces from surface currents on the conic wall,\n\nBULLET::::- a thrust force arises in the cavity, due to variation in electromagnetic density from evanescent waves inside the skin layer.\n", "BULLET::::- Percussion caps, as used in muzzleloader firearms, and primers used in rifle and shotgun shells create a stream of sparks when rapidly struck.\n\nSection::::Methods.:Electrical.\n", "One experimental study has suggested that the effect is caused by the ionization process occurring mostly at the base of the flame, making it more difficult for the electrode further from the base of the flame to attract positive ions from the burner, yet leaving the electron current largely unchanged with distance because of the greater mobility of the electron charge carriers.\n\nSection::::See also.\n\nBULLET::::- Flame detection\n\nBULLET::::- Flame supervision device\n\nSection::::External links.\n\nBULLET::::- A video of a flame being used as a rectifier in a simple AM radio\n\nBULLET::::- Using a flame as a triode amplifier\n", "Section::::Resonant HHG.\n\nIn some plasma plumes, it was observed that the intensity of a certain harmonic order was exceptionally high as compared to its neighboring 
harmonics. For example, using 800 nm femtosecond laser pulses with tin plasma, the intensity of the 17th harmonic was observed to be an order of magnitude higher than that of its neighboring harmonics.\n", "Early plasma-speaker designs ionized ambient air containing the gases nitrogen and oxygen. In an intense electrical field these gases can produce reactive by-products, and in closed rooms these can reach a hazardous level.\n", "With liquids containing solids, similar phenomena may occur with exposure to ultrasound. Once cavitation occurs near an extended solid surface, cavity collapse is nonspherical and drives high-speed jets of liquid to the surface. These jets and associated shock waves can damage the now highly heated surface. Liquid-powder suspensions produce high-velocity interparticle collisions. These collisions can change the surface morphology, composition, and reactivity.\n\nSection::::Sonochemical reactions.\n", "Reports on MgO-based memristive switching within MgO-based MTJs appeared starting in 2008 and 2009. While the drift of oxygen vacancies within the insulating MgO layer has been proposed to describe the observed memristive effects, another explanation could be charge trapping/detrapping on the localized states of oxygen vacancies and its impact on spintronics. This highlights the importance of understanding what role oxygen vacancies play in the memristive operation of devices that deploy complex oxides with an intrinsic property such as ferroelectricity or multiferroicity.\n\nSection::::Implementations.:Spin memristive systems.:Memristance in a magnetic tunnel junction.:Intrinsic mechanism.\n", "BULLET::::- Ulrich L. Rohde, \"Microwave and Wireless Synthesizers: Theory and Design\", John Wiley & Sons, August 1997.\n\nSection::::External links.\n\nBULLET::::- Hewlett-Packard 5100A (tunable, 0.01 Hz-resolution \"Direct Frequency Synthesizer\" introduced in 1964; to HP, direct synthesis meant PLL not used, while indirect meant a PLL was used)\n\nBULLET::::- Frequency Synthesizer U.S. Patent 3,555,446, Braymer, N. B. (1971, January 12)\n", "Section::::Vibrational hot bands.\n", "One such source, capable of vibrating at audible frequencies (45 to 20,000 vibrations per second), is plasma. Plasma is a collection of charged particles, such as free electrons or ionized gas atoms. Examples of plasma are solar flares, solar wind, neon signs and fluorescent lamps. Plasma interacts with electrical and magnetic fields in ways that can result in vibrations at many frequencies, including the audible range.\n", "BULLET::::- Presence of \"new gas phase species\". In a plasma discharge a wide range of new species is produced, allowing the catalyst to be exposed to them. Ions and vibrationally and rotationally excited species do not affect the catalyst, since they lose the charge and the additional energy they possess when they reach a solid surface. Radicals, instead, show high sticking coefficients for chemisorption, increasing the catalytic activity.\n\nBULLET::::- Catalyst effects on plasma:\n", "In the transient flow step the polymer's surface begins to melt. The melt layer thickness quickly grows, causing the frictional forces to decrease. 
This drop in friction reduces the heat input to the system, and a lateral flow of molten material begins to occur.\n\nSection::::Vibration welding process.:Steady state flow.\n", "At Rainbow, phase separation is a suggested cause for particularly high concentrations of chloride, trace elements, and hydronium, as they differ greatly from similar MAR vents like Logatchev. Furthermore, Rainbow vent fluids have the highest concentrations of many elements found at the Azores vents, such as hydrogen, transition metals, and rare earth elements (REE). Due to the extreme endmember pH, chloride is hypothesized to act as a dominant anion and therefore forms many weak complexes with other elements at high temperatures. These complexes become unstable when pH rises or temperature decreases, therefore releasing many transition metals and REEs.\n", "BULLET::::- 6581 R3 - Will say \"6581\" only, \"6581 R3\" or \"6581 CBM\" on the package. Had a minor change to the protection/buffering of the input pins. No changes were made to the filter section. Made from before 1983 until 1986 or so. From around week 47 of 1985, 6581 R3 chips made in the Philippines used the HMOS HC-30 silicon, though the manufacturing process remained NMOS.\n", "Surface termination is often an issue with both solid state and vacuum devices, and the details of the final surface band structure have been compared with alternatives in various device structures.\n\nSection::::Applications.\n\nWhile the original effort failed to produce useful products, follow-on work in Europe did produce usable astronomical detectors.\n", "In addition, the Xe plasma FIB enables Ga-free fabrication and sample preparation. This feature is crucial for completing fabrication tasks and sample preparation without altering the physical and electrical properties of the modified specimens. This is the case for sample preparation for failure analysis and electrical nanoprobing of semiconductor devices, and for the preparation of high-quality TEM specimens.\n", "A similar effect is occasionally observed in the vicinity of high-power amplitude-modulated radio transmitters when a corona discharge (inadvertently) occurs from the transmitting antenna, where voltages of tens of thousands of volts are involved. The ionized air is heated in direct relationship to the modulating signal with surprisingly high fidelity over a wide area. Due to the destructive effects of the (self-sustaining) discharge this cannot be permitted to persist, and automatic systems momentarily shut down transmission within a few seconds to quench the \"flame\".\n", "Apart from magnesium ignition, some amateurs also choose to use sparklers to ignite the thermite mixture. These reach the necessary temperatures and provide enough time before the burning point reaches the sample. This can be a dangerous method, as the iron sparks, like the magnesium strips, burn at thousands of degrees and can ignite the thermite even though the sparkler itself is not in contact with it. This is especially dangerous with finely powdered thermite.\n", "Teclu burner\n\nThe Teclu burner is a laboratory gas burner, a variant of the Bunsen burner, named after the Romanian chemist Nicolae Teclu. It can produce a hotter flame than a Bunsen burner.\n", "The NRC fined Thermal Science Inc. $900,000. 
TSI rejected NRC's claims but wound up settling out of court for $300,000.\n\nThe latest iteration of Thermo-Lag-related issues, involving a Thermo-Lag overlay on top of existing Thermo-Lag, was USNRC Information Notice 2018-09. It indicated that on 18 March 2017, during fabrication of an overlay of the existing fireproofing, a fabric containing elemental carbon that was part of the overlay was being cut and fabricated; as a result, debris from this fabric entered the electrical cabinet and caused the arc flash.\n", "BULLET::::- When using a microwave oven to cook food, the microwaves travel through the food, causing the water molecules to vibrate at the same frequency, which is similar to resonance, so that the food as a whole gets hot fast.\n\nBULLET::::- Some helicopter crashes are caused by resonance too. The eyeballs of the pilot resonate because of excessive pressure in the upper air, making the pilot unable to see overhead power lines. As a result, the helicopter goes out of control.\n\nBULLET::::- Resonance of two identical tuning forks\n\nSee video: http://video.mit.edu/embed/11447/\n\nSection::::Different Types of Complex Harmonic Motion.:Double Pendulum.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-02285
How is it that EMPs destroy electrical components but do not affect our own bodies' electrical impulses?
EMPs work primarily by generating a huge magnetic pulse. This pulse induces an electrical pulse in just about any electrical conductor. This mainly affects metals, most of which are decent conductors. They have electrons that are poorly bound to their parent atoms, so a strong electric or magnetic field can move them about. Your nerves, however, don't conduct electricity the way metals do. Even a very strong magnetic pulse won't induce much of a current in your nerves.
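A rough way to see the asymmetry is Faraday's law: the voltage induced around a conducting loop is the rate of change of magnetic flux through it, and the current that voltage can drive scales inversely with the material's resistivity. The sketch below uses order-of-magnitude assumptions for the pulse parameters and the tissue resistivity; none of these figures come from the passages.

```python
# Sketch of Faraday induction, with assumed (illustrative) parameters.
import math

B = 0.01          # assumed peak magnetic field of the pulse, tesla
rise_time = 1e-8  # assumed rise time of the fast pulse component, s
radius = 0.1      # radius of a conducting loop, metres

emf = B * math.pi * radius**2 / rise_time  # |EMF| = d(flux)/dt
print(f"EMF induced around the loop: ~{emf:,.0f} V")  # ~31,416 V

rho_copper = 1.7e-8  # resistivity of copper, ohm-metres
rho_tissue = 2.0     # rough assumed resistivity of body tissue, ohm-metres
ratio = rho_tissue / rho_copper
print(f"same EMF drives ~{ratio:.0e}x more current in copper")  # ~1e+08
```

Under these assumptions, the same pulse that pushes a destructive current through a metal cable drives a current roughly eight orders of magnitude smaller through tissue of the same geometry, which is consistent with the answer above.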
[ "The range of NNEMP weapons is much less than nuclear EMP. Nearly all NNEMP devices used as weapons require chemical explosives as their initial energy source, producing only 10 (one millionth) the energy of nuclear explosives of similar weight. The electromagnetic pulse from NNEMP weapons must come from within the weapon, while nuclear weapons generate EMP as a secondary effect. These facts limit the range of NNEMP weapons, but allow finer target discrimination. The effect of small e-bombs has proven to be sufficient for certain terrorist or military operations. Examples of such operations include the destruction of electronic control systems critical to the operation of many ground vehicles and aircraft.\n", "EMP has been used at high doses of as much as 1,260 mg/day by the oral route and 240 to 450 mg/day by intravenous injection.\n\nSection::::Interactions.\n", "High-level EMP signals can pose a threat to human safety. In such circumstances, direct contact with a live electrical conductor should be avoided. Where this occurs, such as when touching a Van de Graaf generator or other highly-charged object, care must be taken to release the object and then discharge the body through a high resistance, in order to avoid the risk of a harmful shock pulse when stepping away.\n", "An EMP has a smaller effect the shorter the length of an electrical conductor; though other factors affect the vulnerability of electronics as well, so no cutoff length determines whether some piece of equipment will survive. However, small electronic devices, such as wristwatches and cell phones, would most likely withstand an EMP.\n\nSection::::Effects.:On humans and animals.\n\nThough voltages can accumulate in electrical conductors after an EMP, it will generally not flow out into human or animal bodies, and thus contact is safe.\n\nSection::::Post-Cold War attack scenarios.\n", "At a high voltage level an EMP can induce a spark, for example from an electrostatic discharge when fuelling a gasoline-engined vehicle. Such sparks have been known to cause fuel-air explosions and precautions must be taken to prevent them.\n\nA large and energetic EMP can induce high currents and voltages in the victim unit, temporarily disrupting its function or even permanently damaging it.\n", "The damaging effects of high-energy EMP have led to the introduction of EMP weapons, from tactical missiles with a small radius of effect to nuclear bombs tailored for maximum EMP effect over a wide area.\n\nSection::::Control.\n\nLike any electromagnetic interference, the threat from EMP is subject to control measures. This is true whether the threat is natural or man-made.\n\nTherefore, most control measures focus on the susceptibility of equipment to EMP effects, and hardening or protecting it from harm. Man-made sources, other than weapons, are also subject to control measures in order to limit the amount of pulse energy emitted.\n", "EMP has been used at doses of 140 to 1,400 mg/day orally. Low doses, such as 280 mg/day, have been found to have comparable effectiveness as higher doses but with improved tolerability and reduced toxicity. Doses of 140 mg/day have been described as a very low dosage. EMP has been used at doses of 240 to 450 mg/day intravenously.\n", "A very large EMP event such as a lightning strike is also capable of damaging objects such as trees, buildings and aircraft directly, either through heating effects or the disruptive effects of the very large magnetic field generated by the current. 
An indirect effect can be electrical fires caused by heating. Most engineered structures and systems require some form of built-in protection against lightning.\n", "Equipment that is running at the time of an EMP is more vulnerable. Even a low-energy pulse has access to the power source, and all parts of the system are illuminated by the pulse. For example, a high-current arcing path may be created across the power supply, burning out some device along that path. Such effects are hard to predict, and require testing to assess potential vulnerabilities.\n\nSection::::Effects.:On aircraft.\n", "The potential health effects of the very low frequency EMFs surrounding power lines and electrical devices are the subject of ongoing research and a significant amount of public debate. The US National Institute for Occupational Safety and Health (NIOSH) and other US government agencies do not consider EMFs a proven health hazard. NIOSH has issued some cautionary advisories but stresses that the data are currently too limited to draw good conclusions.\n", "BULLET::::- In \"Battlefield 4\", during a mission in Shanghai, upon exiting the city, an EMP blast occurs, darkening the city and causing a nearby helicopter to crash. It is later remarked by some fellow Marines on board a US Navy vessel that the blast also disabled some of the ships in the U.S. Naval fleet engaged around the area. Some are still able to function because they operate on diesel fuel and have no spark plugs in their engines.\n", "At the top end of the scale, large outdoor test facilities incorporating high-energy EMP simulators have been built by several countries. The largest facilities are able to test whole vehicles, including ships and aircraft, for their susceptibility to EMP. Nearly all of these large EMP simulators used a specialized version of a Marx generator.\n", "After the collapse of the Soviet Union, the level of this damage was communicated informally to US scientists. For a few years US and Russian scientists collaborated on the HEMP phenomenon. Funding was secured to enable Russian scientists to report on some of the Soviet EMP results in international scientific journals. As a result, formal documentation of some of the EMP damage in Kazakhstan exists but is still sparse in the open scientific literature.\n", "While EMP is often assumed to be a characteristic of nuclear weapons alone, such is not the case. Several open-literature techniques, requiring only conventional explosives or, in the case of high power microwave, a large electrical power supply, perhaps one-shot as with capacitors, can generate a significant EMP:\n\nBULLET::::- Explosively pumped flux compression generators (FCG)\n\nBULLET::::- Explosive and Propellant Driven MHD Generators\n\nBULLET::::- High Power Microwave Sources - Spark gaps or the Vircator\n", "Minor EMP events, and especially pulse trains, cause low levels of electrical noise or interference which can affect the operation of susceptible devices. For example, a common problem in the mid-twentieth century was interference emitted by the ignition systems of gasoline engines, which caused radio sets to crackle and TV sets to show stripes on the screen. Laws were introduced to make vehicle manufacturers fit interference suppressors.\n", "The pulse is powerful enough to cause moderately long metal objects (such as cables) to act as antennas and generate high voltages due to interactions with the electromagnetic pulse. These voltages can destroy unshielded electronics. 
There are no known biological effects of EMP. The ionized air also disrupts radio traffic that would normally bounce off the ionosphere.\n", "A nuclear electromagnetic pulse is the abrupt pulse of electromagnetic radiation resulting from a nuclear explosion. The resulting rapidly changing electric fields and magnetic fields may couple with electrical/electronic systems to produce damaging current and voltage surges.\n\nThe intense gamma radiation emitted can also ionize the surrounding air, creating a secondary EMP as the atoms of air first lose their electrons and then regain them.\n\nNEMP weapons are designed to maximize such EMP effects as the primary damage mechanism, and some are capable of destroying susceptible electronic equipment over a wide area.\n", "EMP events usually induce a corresponding signal in the surrounding environment or material. Coupling usually occurs most strongly over a relatively narrow frequency band, leading to a characteristic damped sine wave. Visually it is shown as a high-frequency sine wave growing and decaying within the longer-lived envelope of the double-exponential curve. A damped sine wave typically has much lower energy and a narrower frequency spread than the original pulse, due to the transfer characteristic of the coupling mode. In practice, EMP test equipment often injects these damped sine waves directly rather than attempting to recreate the high-energy threat pulses.\n", "Motion picture and electronic entertainment quite often depict electromagnetic pulse effects incorrectly. This problem has become so bad that it was addressed in a report for Oak Ridge National Laboratory by Metatech Corporation.\n\nIn addition, the United States Air Force Space Command commissioned science educator Bill Nye to make a video for the Air Force called \"Hollywood vs. EMP\" so that people who must deal with real EMP would not be confused by motion picture fiction. That U.S. Space Command video is not available to the general public.\n\nSection::::Films.\n", "An energetic EMP can temporarily upset or permanently damage electronic equipment by generating high-voltage and high-current surges; semiconductor components are particularly at risk. The effects of damage can range from imperceptible to the eye to devices literally blowing apart. Cables, even if short, can act as antennas to transmit pulse energy to equipment.\n\nSection::::Effects.:Vacuum tube vs. solid state electronics.\n", "A powerful EMP can also directly affect magnetic materials and corrupt the data stored on media such as magnetic tape and computer hard drives. Hard drives are usually shielded by heavy metal casings. Some IT asset disposition service providers and computer recyclers use a controlled EMP to wipe such magnetic media.\n", "Other components in vacuum tube circuitry can be damaged by EMP. Vacuum tube equipment was damaged in the 1962 testing. The solid state PRC-77 VHF manpackable two-way radio survived extensive EMP testing. The earlier PRC-25, nearly identical except for a vacuum tube final amplification stage, was tested in EMP simulators, but was not certified to remain fully functional.\n\nSection::::Effects.:Electronics in operation vs. 
inactive.\n", "Section::::General characteristics.:Types of energy.\n\nEMP energy may be transferred in any of four forms:\n\nBULLET::::- Electric field\n\nBULLET::::- Magnetic field\n\nBULLET::::- Electromagnetic radiation\n\nBULLET::::- Electrical conduction\n\nDue to Maxwell's equations, a pulse of any one form of electromagnetic energy will always be accompanied by the other forms; however, in a typical pulse one form will dominate.\n\nIn general, only radiation acts over long distances, with the others acting over short distances. There are a few exceptions, such as a solar magnetic flare.\n\nSection::::General characteristics.:Frequency ranges.\n", "Electronics can be shielded by wrapping them completely in conductive material such as metal foil; the effectiveness of the shielding may be less than perfect. Proper shielding is a complex subject due to the large number of variables involved. Semiconductors, especially integrated circuits, are extremely susceptible to the effects of EMP due to the close proximity of the PN junctions, but this is not the case with thermionic tubes (or valves), which are relatively immune to EMP. A Faraday cage does not offer protection from the effects of EMP unless the mesh is designed to have holes no bigger than the smallest wavelength emitted from a nuclear explosion.\n", "Nuclear and large conventional explosions produce radio frequency energy. The characteristics of the EMP will vary with altitude and burst size. EMP-like effects are not always from open-air or space explosions; there has been work with controlled explosions for generating electrical pulses to drive lasers and railguns.\n\nFor example, in Operation BURNING LIGHT, KC-135R tankers, temporarily modified to carry MASINT sensors, would fly around the test area. One sensor system measured the electromagnetic pulse of the detonation.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02156
why aren’t decks of cards the gold standard for cryptography?
While you might think this, there are not that many possible orderings of a deck of cards. Only 52!, about 8.0658 x 10^67. OK, that seems like a lot, but 2^4096, the number of 4096-bit encryption keys, is about 1.044 x 10^1233, a vastly larger number. Even 256-bit keys have about 1.157 x 10^77 values, more than your decks of cards. A perfectly shuffled deck carries only about 226 bits of randomness, so it already falls short of a 256-bit key. It is also awkward in practice: you would have to shuffle truly randomly, then record, store, and share the exact ordering without a single error, all of which digital keys do far more reliably. That matters when serious money is involved.
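The comparison is easy to check with exact integer arithmetic; here is a minimal Python sketch using only the standard library:

```python
import math

deck = math.factorial(52)  # distinct orderings of a 52-card deck
print(deck)                # the full 68-digit integer, ~8.0658 x 10^67
print(math.log2(deck))     # ~225.58: a perfect shuffle ~ a 226-bit key

for bits in (256, 4096):
    digits = len(str(2**bits))            # exact decimal digit count
    print(f"2^{bits} ~ 10^{digits - 1}")  # 10^77 and 10^1233
```

So a deck sits between a 128-bit and a 256-bit key space, and the gap to 4096-bit keys is over a thousand orders of magnitude.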
[ "In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and \"dual-use\" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users don't realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally don't find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.\n", "In October 2014, the Electronic Frontier Foundation (EFF) included TextSecure, RedPhone, and Signal in their updated Surveillance Self-Defense (SSD) guide. In November 2014, all three received top scores on the EFF's Secure Messaging Scorecard, along with Cryptocat, Silent Phone, and Silent Text. They received points for having communications encrypted in transit, having communications encrypted with keys the providers don't have access to (end-to-end encryption), making it possible for users to independently verify their correspondent's identities, having past communications secure if the keys are stolen (forward secrecy), having their code open to independent review (open source), having their security designs well-documented, and having recent independent security audits.\n", "Since use of strong cryptography makes the job of intelligence agencies more difficult, many countries have enacted law or regulation restricting or simply banning the non-official use of strong cryptography. For instance, the United States has defined cryptographic products as munitions since World War II and has prohibited export of cryptography beyond a certain 'strength' (measured in part by key size), and Russia banned its use by private individuals in 1995. It is not clear if the Russian ban is still in effect. France had quite strict regulations in this field, but has relaxed them in recent years.\n\nSection::::Examples.\n", "In the ensuing debate, many advantages and disadvantages of the different candidates were investigated by cryptographers; they were assessed not only on security, but also on performance in a variety of settings (PCs of various architectures, smart cards, hardware implementations) and on their feasibility in limited environments (smart cards with very limited memory, low gate count implementations, FPGAs).\n", "The success of digital signatures as a replacement for paper based signatures has lagged behind expectations. On the other hand, many unexpected uses of digital signatures were discovered by recent cryptographic research. 
A related insight that can be learned from digital signatures is that the cryptographic mechanism must not be confused with the overall process that turns a digital signature into something that has more or less the same properties as a paper-based signature. Electronic signatures such as paper signatures sent by fax may have legal meaning, while secure cryptographic signatures may serve completely different purposes. We need to distinguish the algorithm from the process.\n", "BULLET::::- The secrecy of any algorithm is only as trustworthy as the people with access to the algorithm; if any of them were to divulge any of the design secrets, every card with the compromised algorithm may need to be replaced for security to be restored. In some cases, outside personnel (such as those employed by lawyers in the NDS vs. DirecTV intellectual property lawsuit over the P4 card design) may obtain access to key and very sensitive information, increasing the risk of the information being leaked for potential use by pirates.\n", "BULLET::::- The Elliptic Curve Digital Signature Algorithm (ECDSA) is based on the Digital Signature Algorithm,\n\nBULLET::::- The deformation scheme using Harrison's p-adic Manhattan metric,\n\nBULLET::::- The Edwards-curve Digital Signature Algorithm (EdDSA) is based on the Schnorr signature and uses twisted Edwards curves,\n\nBULLET::::- The ECMQV key agreement scheme is based on the MQV key agreement scheme,\n\nBULLET::::- The ECQV implicit certificate scheme.\n\nAt the RSA Conference 2005, the National Security Agency (NSA) announced Suite B, which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information.\n", "While PGP can protect messages, it can also be hard to use in the correct way. Researchers at Carnegie Mellon University published a paper in 1999 showing that most people couldn't figure out how to sign and encrypt messages using the current version of PGP. Eight years later, another group of Carnegie Mellon researchers published a follow-up paper saying that, although a newer version of PGP made it easy to decrypt messages, most people still struggled with encrypting and signing messages, finding and verifying other people's public encryption keys, and sharing their own keys.\n", "Section::::Lightweight encryption.\n\nIn 2018, the NSA promoted the use of \"lightweight encryption\", in particular its ciphers Simon and Speck, for Internet of Things devices. However, the attempt to have those ciphers standardized by ISO failed because of severe criticism raised by the board of cryptography experts, which provoked fears that the NSA had non-public knowledge of how to break them.\n", "Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high quality cryptography possible.\n", "US export regulations regarding cryptography remain in force, but were liberalized substantially throughout the late 1990s. 
Since 2000, compliance with the regulations is also much easier. PGP encryption no longer meets the definition of a non-exportable weapon, and can be exported internationally except to seven specific countries and a list of named groups and individuals (with whom substantially all US trade is prohibited under various US export controls).\n\nSection::::History.:PGP 3 and founding of PGP Inc..\n", "BULLET::::- The FIDO alliance standardized on an asymmetric cryptographic scheme called ECDAA. This is a version of direct anonymous attestation based on elliptic curves and in the case of WebAuthn is meant to be used to verify the integrity of authenticators, while also preserving the privacy of users, as it does not allow for global correlation of handles. However, ECDAA does not incorporate some of the lessons that were learned in the last decades of research in the area of elliptic curve cryptography, as the chosen curve has some security deficits inherent to this type of curve, which reduces the security guarantees quite substantially. Furthermore, the ECDAA standard involves random, non-deterministic signatures, which have already been a problem in the past.\n", "Tempest standards continued to evolve in the 1970s and later, with newer testing methods and more nuanced guidelines that took account of the risks in specific locations and situations. But then as now, security needs often met with resistance. According to NSA's David G. Boak, \"Some of what we still hear today in our own circles when rigorous technical standards are whittled down in the interest of money and time are frighteningly reminiscent of the arrogant Third Reich with their Enigma cryptomachine.\"\n", "In August 2015, NSA announced that it is planning to transition \"in the not too distant future\" to a new cipher suite that is resistant to quantum attacks. \"Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy.\" NSA advised: \"For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition.\" New standards are estimated to be published around 2024.\n", "According to the \"New York Times\": \"But by 2006, an N.S.A. document notes, the agency had broken into communications for three foreign airlines, one travel reservation system, one foreign government's nuclear department and another's Internet service by cracking the virtual private networks that protected them. By 2010, the Edgehill program, the British counterencryption effort, was unscrambling VPN traffic for 30 targets and had set a goal of an additional 300.\"\n", "The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments.\n", "In August 2015, NSA announced that it is planning to transition \"in the not distant future\" to a new cipher suite that is resistant to quantum attacks. 
\"Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy.\" NSA advised: \"For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition.\"\n\nSection::::See also.\n", "In the 19th century, the general standard improved somewhat (e.g., works by Auguste Kerckhoffs, Friedrich Kasiski, and Étienne Bazeries). Colonel Parker Hitt and William Friedman in the early 20th century also wrote books on cryptography. These authors, and others, mostly abandoned any mystical or magical tone.\n\nSection::::Open literature versus classified literature.\n", "Section::::Background.\n\nDevelopments in quantum computing over the past decade and the optimistic prospects for real quantum computers within 20 years have begun to threaten the basic cryptography that secures the internet. A relatively small quantum computer capable of processing only ten thousand of bits of information would easily break all of the widely used public key cryptography algorithms used to protect privacy and digitally sign information on the internet.\n", "Premiere in Germany has replaced all of its smartcards with the Nagravision Aladin card; the US DirecTV system has replaced its three compromised card types (\"F\" had no encryption chip, \"H\" was vulnerable to being reprogrammed by pirates and \"HU\" were vulnerable to a \"glitch\" which could be used to make them skip an instruction). Both providers have been able to eliminate their problems with signal piracy by replacing the compromised smartcards after all other approaches had proved to provide at best limited results.\n", "In October 2014, the Electronic Frontier Foundation (EFF) included TextSecure in their updated Surveillance Self-Defense guide. In November 2014, TextSecure received a perfect score on the EFF's Secure Messaging Scorecard. TextSecure received points for having communications encrypted in transit, having communications encrypted with keys the providers don't have access to (end-to-end encryption), making it possible for users to independently verify their correspondent's identities, having past communications secure if the keys are stolen (forward secrecy), having their code open to independent review (open-source), having their security designs well-documented, and having recent independent security audits. At the time, \"ChatSecure + Orbot\", Cryptocat, \"Signal / RedPhone\", Pidgin (with OTR), Silent Phone, Silent Text, and Telegram's optional secret chats also received seven out of seven points on the scorecard.\n", "Encryption export controls became a matter of public concern with the introduction of the personal computer. Phil Zimmermann's PGP cryptosystem and its distribution on the Internet in 1991 was the first major 'individual level' challenge to controls on export of cryptography. The growth of electronic commerce in the 1990s created additional pressure for reduced restrictions. 
Shortly afterward, Netscape's SSL technology was widely adopted as a method for protecting credit card transactions using public key cryptography.\n", "As of October 2012, CNSSP-15 stated that the 256-bit elliptic curve (specified in FIPS 186-2), SHA-256, and AES with 128-bit keys are sufficient for protecting classified information up to the Secret level, while the 384-bit elliptic curve (specified in FIPS 186-2), SHA-384, and AES with 256-bit keys are necessary for the protection of Top Secret information.\n\nHowever, as of August 2015, NSA indicated that only the Top Secret algorithm strengths should be used to protect all levels of classified information.\n\nIn 2018, NSA withdrew Suite B in favor of the CNSA.\n\nSection::::Quantum resistant suite.\n", "Section::::Limitations.\n\nBULLET::::- Only addresses confidentiality, control of writing (one form of integrity), *-property and discretionary access control\n\nBULLET::::- Covert channels are mentioned but are not addressed comprehensively\n\nBULLET::::- The tranquility principle limits its applicability to systems where security levels do not change dynamically. It allows controlled copying from high to low via trusted subjects. [Ed. Not many systems using BLP include dynamic changes to object security levels.]\n\nSection::::See also.\n\nBULLET::::- Biba Integrity Model\n\nBULLET::::- The Clark-Wilson Integrity Model\n\nBULLET::::- Discretionary Access Control - DAC\n\nBULLET::::- Graham-Denning Model\n\nBULLET::::- Mandatory Access Control - MAC\n\nBULLET::::- Multilevel security - MLS\n", "NSA effectively orchestrated a kleptographic attack on users of the Dual EC DRBG pseudorandom number generation algorithm; although security professionals and developers have been testing and implementing kleptographic attacks since 1996, \"you would be hard-pressed to find one in actual use until now\". Due to the public outcry over this cryptovirology attack, NIST rescinded the EC-DRBG algorithm from the NIST SP 800-90 standard.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-00114
why we can't (easily) convert ocean water to fresh water?
We have multiple methods for purifying water (known in this case as desalination). However, they are moderately expensive to set up and require a large amount of energy.
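For a sense of scale, the passages that follow quote roughly 3 kWh per cubic metre for seawater reverse osmosis versus 0.1 to 1 kWh per cubic metre for treating conventional sources. This Python sketch applies those figures to a hypothetical city; the population and per-person use are assumptions chosen only for illustration.

```python
# Back-of-the-envelope energy comparison, using figures quoted below.
people = 1_000_000           # assumed city population
litres_per_person_day = 300  # assumed daily water use per person
m3_per_day = people * litres_per_person_day / 1000  # cubic metres/day

for source, kwh_per_m3 in [("seawater reverse osmosis", 3.0),
                           ("conventional treatment", 0.5)]:
    mwh_per_day = m3_per_day * kwh_per_m3 / 1000
    print(f"{source}: ~{mwh_per_day:,.0f} MWh per day")
# seawater reverse osmosis: ~900 MWh/day; conventional: ~150 MWh/day
```

Under these assumptions, desalinating the city's supply takes roughly six times the energy of treating a conventional source, every single day, which is why it is used mainly where fresh water is genuinely scarce.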
[ "The Earth has a limited though renewable supply of fresh water, stored in aquifers, surface waters and the atmosphere. Oceans are a good source of usable water, but the amount of energy needed to convert saline water to potable water is prohibitive with conventional approaches, explaining why only a very small fraction of the world's water supply is derived from desalination. However, modern technologies, such as the Seawater Greenhouse, use solar energy to desalinate seawater for agriculture and drinking uses in an extremely cost-effective manner.\n\nSection::::Most affected countries.\n", "Section::::Experimental techniques.:Other approaches.:Passarell.\n", "The impact of brackish water on ecosystems can be minimized by pumping it out to sea and releasing it into the mid-layer, away from the surface and bottom ecosystems.\n\nImpingement and entrainment at intake structures are a concern due to large volumes of both river and sea water utilized in both PRO and RED schemes. Intake construction permits must meet strict environmental regulations and desalination plants and power plants that utilize surface water are sometimes involved with various local, state and federal agencies to obtain permission that can take upwards to 18 months.\n\nSection::::External links.\n", "Seawater desalination plants have produced potable water for many years. However, until recently desalination had been used only in special circumstances because of the high energy consumption of the process. \n", "The energy source is very local and fully renewable, provided that the water and heat rejected into the environment (often the same lake or a nearby river) does not disturb the natural cycles. It does not use any ozone depleting refrigerant.\n\nDepending on the needs and on the water temperature, couple heating and cooling can be considered. For example, heat could first be extracted from the water (making it colder); and, secondly, that same water could cycle to a refrigerating unit to be used for even more effective cold production.\n\nSection::::Disadvantages.\n", "Section::::Disadvantages.:Waste-stream considerations.\n", "In 1954, Pattle suggested that there was an untapped source of power when a river mixes with the sea, in terms of the lost osmotic pressure, however it was not until the mid ‘70s where a practical method of exploiting it using selectively permeable membranes by Loeb was outlined.\n", "Saline water can be treated to yield fresh water. Two main processes are used, reverse osmosis or distillation. Both methods require more energy than water treatment of local surface waters, and are usually only used in coastal areas or where water such as groundwater has high salinity.\n\nSection::::Portable water purification.\n\nLiving away from drinking water supplies often requires some form of portable water treatment process. These can vary in complexity from the simple addition of a disinfectant tablet in a hiker's water bottle through to complex multi-stage processes carried by boat or plane to disaster areas.\n\nSection::::Ultra pure water production.\n", "The technology related to this type of power is still in its infant stages, even though the principle was discovered in the 1950s. 
Standards and a complete understanding of all the ways salinity gradients can be utilized are important goals to strive for in order to make this clean energy source more viable in the future.\n\nSection::::Methods.:Capacitive method.\n", "Desalination plants may be required in the future for those regions hardest hit by water scarcity. Desalination is a process of cleaning water by means of evaporation. Water is evaporated and passes through membranes. The water is then cooled and condenses, allowing it to flow either back into the main water line or out to sea.\n\nSection::::Modern challenges.:Climate change.\n", "Sea-water reverse-osmosis (SWRO) desalination, a membrane process, has been commercially used since the early 1970s. Its first practical use was demonstrated by Sidney Loeb from the University of California at Los Angeles in Coalinga, California, and Srinivasa Sourirajan of the National Research Council, Canada. Because no heating or phase changes are needed, energy requirements are low, around 3 kWh/m³, in comparison to other processes of desalination, but are still much higher than those required for other forms of water supply, including reverse osmosis treatment of wastewater, at 0.1 to 1 kWh/m³. Up to 50% of the seawater input can be recovered as fresh water, though lower recoveries may reduce membrane fouling and energy consumption.\n", "A proposed alternative to desalination in the American Southwest is the commercial importation of bulk water from water-rich areas, either by oil tankers converted to water carriers or by pipelines. The idea is politically unpopular in Canada, where governments imposed trade barriers to bulk water exports as a result of a North American Free Trade Agreement (NAFTA) claim.\n\nSection::::Considerations and criticism.:Environmental.:Public health concerns.\n", "As the volumes of water being produced are never sufficient to replace all the production volumes (oil and gas, in addition to water), additional \"make-up\" water must be provided. Mixing waters from different sources exacerbates the risk of scaling.\n\nSeawater is obviously the most convenient source for offshore production facilities, and it may be pumped inshore for use in land fields. Where possible, the water intake is placed at sufficient depth to reduce the concentration of algae; however, filtering, deoxygenation and biociding are generally required.\n", "Desalination appeared during the late 20th century, and is still limited to a few areas.\n", "A reverse osmosis plant is a manufacturing plant where the process of reverse osmosis takes place. An average modern reverse osmosis plant needs six kilowatt-hours of electricity to desalinate one cubic metre of water. The process also results in a quantity of briny waste. The challenge for these plants is to find ways to reduce energy consumption, use sustainable energy sources, improve the process of desalination and innovate in the area of waste management to deal with the waste. Self-contained water treatment plants using reverse osmosis, called reverse osmosis water purification units, are normally used in a military context.\n", "A second method being developed and studied is reversed electrodialysis or reverse dialysis, which is essentially the creation of a salt battery. 
This method was described by Weinstein and Leitz as “an array of alternating anion and cation exchange membranes can be used to generate electric power from the free energy of river and sea water.”\n", "Supplying all US domestic water by desalination would increase domestic energy consumption by around 10%, about the amount of energy used by domestic refrigerators. Domestic consumption is a relatively small fraction of the total water usage.\n\nNote: \"Electrical equivalent\" refers to the amount of electrical energy that could be generated using a given quantity of thermal energy and appropriate turbine generator. These calculations do not include the energy required to construct or refurbish items consumed in the process.\n\nSection::::Considerations and criticism.:Cogeneration.\n", "A process of osmosis through semipermeable membranes was first observed in 1748 by Jean-Antoine Nollet. For the following 200 years, osmosis was only a phenomenon observed in the laboratory. In 1950, the University of California at Los Angeles first investigated desalination of seawater using semipermeable membranes. Researchers from both University of California at Los Angeles and the University of Florida successfully produced fresh water from seawater in the mid-1950s, but the flux was too low to be commercially viable until the discovery at University of California at Los Angeles by Sidney Loeb and Srinivasa Sourirajan at the National Research Council of Canada, Ottawa, of techniques for making asymmetric membranes characterized by an effectively thin \"skin\" layer supported atop a highly porous and much thicker substrate region of the membrane. John Cadotte, of FilmTec Corporation, discovered that membranes with particularly high flux and low salt passage could be made by interfacial polymerization of \"m\"-phenylene diamine and trimesoyl chloride. Cadotte's patent on this process was the subject of litigation and has since expired. Almost all commercial reverse-osmosis membrane is now made by this method. By the end of 2001, about 15,200 desalination plants were in operation or in the planning stages, worldwide.\n", "BULLET::::- Rainwater harvesting and stormwater recovery- Urban design systems which incorporate rainwater harvesting and reduce runoff are known as Water Sensitive Urban Design (WSUD) in Australia, Low Impact Development (LID) in the United States and Sustainable urban drainage systems (SUDS) in the United Kingdom.\n\nBULLET::::- Seawater desalination - an energy-intensive process where salt and other minerals are removed from seawater to produce potable water for drinking and irrigation, typically through membrane filtration (reverse-osmosis), and steam-distillation.\n\nSection::::Design considerations.:Costs.\n", "Fresh water is a renewable and variable, but finite natural resource. Fresh water can only be replenished through the process of the water cycle,in which water from seas, lakes, forests, land, rivers, and reservoirs evaporates, forms clouds, and returns as precipitation. Locally, however, if more fresh water is consumed through human activities than is naturally restored, this may result in reduced fresh water availability from surface and underground sources and can cause serious damage to surrounding and associated environments.\n\nFresh and unpolluted water accounts for 0.003% of total water available globally.\n", "Fresh water is not the same as potable water (or drinking water). 
Much of the earth's fresh water (on the surface and groundwater) is unsuitable for drinking without some treatment. Fresh water can easily become polluted by human activities or due to naturally occurring processes, such as erosion.\n\nWater is critical to the survival of all living organisms. Some organisms can thrive on salt water, but the great majority of higher plants and most mammals need fresh water to live.\n\nSection::::Definitions.\n\nSection::::Definitions.:Numerical definition.\n", "Section::::Experimental techniques.:Other approaches.:Small-scale solar.\n", "Section::::New developments.\n\nSince the 1970s, prefiltration of high-fouling waters with another larger-pore membrane, with less hydraulic energy requirement, has been evaluated and sometimes used. However, this means that the water passes through two membranes and is often repressurized, which requires more energy to be put into the system, and thus increases the cost.\n\nOther recent developmental work has focused on integrating reverse osmosis with electrodialysis to improve recovery of valuable deionized products, or to minimize the volume of concentrate requiring discharge or disposal.\n\nIn the production of drinking water, the latest developments include nanoscale and graphene membranes.\n", "In response to these problems, the NSW Government's \"2006 Metropolitan Water Plan\" identified desalination as a way of securing Sydney's water supply needs in the case of a severe, prolonged drought:\n", "They also tend to be less expensive than advanced wastewater treatment plants, using the natural assimilative capacity of the sea instead of energy-intensive treatment processes in a plant. For example, preliminary treatment of wastewater is sufficient with an effective outfall and diffuser. The costs of preliminary treatment are about one tenth that of secondary treatment. Preliminary treatment also requires much less land than advanced wastewater treatment.\n\nSection::::Disadvantages.\n" ]
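The passages above quote concrete energy figures: roughly 3 kWh per cubic metre for modern seawater reverse osmosis, about 6 kWh/m³ for an average plant, versus 0.1 to 1 kWh/m³ for reverse-osmosis treatment of wastewater. A minimal Python sketch of what those figures imply for operating cost; the electricity price is an illustrative assumption, not a figure from the passages:

```python
# Energy-driven cost of producing fresh water, using the kWh-per-cubic-metre
# figures quoted in the passages above. The electricity price is an assumed,
# illustrative value.
ENERGY_KWH_PER_M3 = {
    "SWRO (modern, low end)": 3.0,    # SWRO passage
    "RO plant (average)": 6.0,        # reverse-osmosis plant passage
    "wastewater reuse (low)": 0.1,    # comparison figure in the SWRO passage
    "wastewater reuse (high)": 1.0,
}
ELECTRICITY_USD_PER_KWH = 0.12  # assumption for illustration

for process, kwh in ENERGY_KWH_PER_M3.items():
    cost = kwh * ELECTRICITY_USD_PER_KWH
    print(f"{process:25s} {kwh:4.1f} kWh/m3 -> ${cost:.2f} per m3 of fresh water")
```

Run against the entry's correction below, the arithmetic makes the point directly: desalination is entirely possible, just several times more energy-hungry than conventional supply.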
[ "We can't convert ocean water to fresh water." ]
[ "We can do this conversion it is just expensive to set up and requires a large amount of input energy." ]
[ "false presupposition" ]
[ "We can't convert ocean water to fresh water." ]
[ "false presupposition" ]
[ "We can do this conversion it is just expensive to set up and requires a large amount of input energy." ]
2018-04969
How does the endothelial glycocalyx modify transvascular fluid exchange?
I think this title needs its own ELI5. The **endothelium** is the inner lining of our blood vessels. In arteries and veins, it is surrounded by smooth muscles and other cells, but in capillaries it's the only thing making up the blood vessel wall. When we get an infection and something gets swollen, it's the endothelium getting "leaky" and letting fluid and white blood cells out into the tissue. The **glycocalyx** is a bunch of proteins and lipids (fats) sticking out of a cell. These allow cells to stick together in various ways, and it's the main reason your internal organs don't spontaneously rip open. The **subglycocalyx space** is the little gaps between cells that fluid can escape through, plus pockets of space where there are no proteins. When our heart beats, it creates a lot of pressure. When the pressure approaches the capillaries, it tends to force things out of our blood, like nutrients, amino acids, proteins, etc. This is called *hydrostatic pressure*, and the fluid pushed out is called *interstitial fluid*. As we go along the capillaries, this pressure decreases (because we're farther from the heart and the capillaries are so thin) and is dominated by *oncotic pressure* (a form of *osmotic pressure*). Oncotic pressure wants to draw stuff back into our bloodstream. This whole cycle of forcing-out and drawing back in is called **transvascular fluid exchange.** [This site]( URL_0 ) has very straightforward diagrams and explanations that may help you. **TL;DR**: Fluid is pushed OUT of our capillaries by our heart, and it's reabsorbed later. The glycocalyx is a bunch of proteins and fats outside the cell and the subglycocalyx space lacks those proteins.
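The push-out/pull-back balance the comment describes is the classic Starling relationship, NFP = (Pc − Pi) − σ(πc − πi); the glycocalyx model discussed in the passages below mainly changes which oncotic pressure enters the bracket (the subglycocalyx value rather than the bulk interstitial one), but the arithmetic has the same shape. A minimal sketch with illustrative, textbook-style pressures; the numbers are assumptions for demonstration, not measurements:

```python
# Net filtration pressure (NFP) from the classic Starling equation:
#   NFP = (Pc - Pi) - sigma * (pi_c - pi_i)
# Pc/Pi: capillary/interstitial hydrostatic pressure (mmHg)
# pi_c/pi_i: capillary/interstitial oncotic pressure (mmHg)
# sigma: reflection coefficient (how well the barrier excludes protein)
# Positive NFP means filtration (fluid pushed out); negative means absorption.

def net_filtration_pressure(p_cap, p_int, pi_cap, pi_int, sigma=0.9):
    return (p_cap - p_int) - sigma * (pi_cap - pi_int)

# Illustrative values only: hydrostatic pressure falls along the capillary.
for label, p_cap in [("arteriolar end", 35.0), ("venular end", 15.0)]:
    nfp = net_filtration_pressure(p_cap, p_int=0.0, pi_cap=25.0, pi_int=3.0)
    verdict = "filtration (out)" if nfp > 0 else "absorption (in)"
    print(f"{label}: NFP = {nfp:+.1f} mmHg -> {verdict}")
```

The sign flip between the two ends is exactly the comment's "pushed out, then reabsorbed" cycle.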
[ "Coagulation or blood clotting relies on, in addition to the production of fibrin, interactions between platelets. When the endothelium or the lining of a blood vessel is damaged, connective tissue including collagen fibers is locally exposed. Initially, platelets stick to the exposed connective tissue through specific cell-surface receptors. This is followed by platelet activation and aggregation in which platelets become firmly attached and release chemicals that recruit neighboring platelets to the site of vascular injury. A meshwork of fibrin then forms around this aggregation of platelets to increase the strength of the clot.\n\nSection::::Transient interactions.:Cell interactions between bacteria.\n", "It has been suggested that VEGFC can form Turing patterns to regulate lymphangiogenesis in the zebrafish embryo by interacting with collagen I and MMP2 .\n\nSection::::Biosynthesis.\n", "Higher levels of circulating \"endothelial progenitor cells\" were detected in the bloodstream of patients, predicted better outcomes, and patients experienced fewer repeat heart attacks, though statistical correlations between these outcomes and circulating endothelial progenitor cell numbers were scant in the original research. Endothelial progenitor cells are mobilized after a myocardial infarction, and that they function to restore the lining of blood vessels that are damaged during the heart attack.\n", "Elevating shear stress induces a vascular response by triggering nitric oxide synthesis and mechanotransduction pathways of endothelial cells. The synthesis of nitric oxide facilitate shear stress mediated dilation in blood vessels and maintains a homeostatic status. Additionally, physiologic shear stress levels at the vessel wall upregulate the presence of antithrombotic agents through the mechano-signal transduction of mechano-recepting transmembrane proteins, junctional proteins, and subendothelial mechanosensors. Shear stress causes endothelial cell deformation which activates transmembrane ion channels Elevated wall shear stress caused by exercise is understood to promote mitochondrial biogenesis in the vascular endothelium indicating the benefits regular exercise may have on vascular function. Alignment is recognized as an important mechanism and determinant of shear-stress induced vascular response; in vivo testing of endothelial cells has demonstrated that their mechanotransductive response is direction dependent as endothelial nitric oxide synthesis is preferentially activated under parallel flow while perpendicular flows activates inflammatory pathways like reactive oxygen species production and nuclear factor-κB. Therefore, disturbed/oscillating flow and low flow conditions, which create an irregular and passive shear stress environment, result in inflammatory activation due to a limited alignment capability of the endothelial cells. Regions in the vasculature with low shear stress are vulnerable to elevated monocyte adhesion and endothelial cell apoptosis. However, unlike oscillatory flow, both laminar(steady) and pulsatile flow and shear stress environments are often considered together as mechanisms of maintaining vascular homeostasis and preventing inflammation, reactive oxygen species formation, and coagulatory pathways. High, uniform laminar shear stress is known to promote a quiescent endothelial cell state, provide anti-thrombotic effects, prevent proliferation, and decrease inflammation and apoptosis. 
At high shear stress levels (10 Pa), the endothelial cell response is distinct from upper normal/physiological values; high wall shear stress causes a promatrix remodeling, proliferative, anticoagulant, and anti-inflammatory state. Yet, very high wall shear stress values (28.4 Pa) prevent endothelial cell alignment and stimulate proliferation and apoptosis although the endothelial response to shear stress environments was determined to be dependent on the local wall shear stress gradient.\n", "Transendothelial fluid exchange occurs predominantly in the capillaries, and is a process of plasma ultrafiltration across a semi-permeable membrane. It is now appreciated that the ultrafilter is the endothelial glycocalyx layer whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies an inter endothelial cell cleft, the plasma ultrafiltrate may pass to the interstitial space. Some continuous capillaries may feature fenestrations that provide an additional subglycocalyx pathway for solvent and small solutes. Discontinuous capillaries as found in sinusoidal tissues of bone marrow, liver and spleen have little or no filter function.\n", "Several possible hypotheses have been advanced to explain vasomotion. Increased flow is one possibility; mathematical modeling has shown a vessel with an oscillating diameter to conduct more flow than a vessel with a static diameter. Vasomotion could also be a mechanism of increasing the reactivity of a blood vessel by avoiding the \"latch state\", a low ATP cycling state of prolonged force generation common in vascular smooth muscle. Finally, vasomotion has been shown to be altered in a variety of pathological situations, with vessels from both hypertensive and diabetic patients displaying altered flow patterns as compared to normotensive vessels.\n", "Experiments have been performed to test precisely how the glycocalyx can be altered or damaged. One particular study used an isolated perfused heart model designed to facilitate detection of the state of the vascular barrier portion, and sought to cause insult-induced shedding of the glycocalyx to ascertain the cause-and-effect relationship between glycocalyx shedding and vascular permeability. Hypoxic perfusion of the glycocalyx was thought to be sufficient to initiate a degradation mechanism of the endothelial barrier. The study found that flow of oxygen throughout the blood vessels did not have to be completely absent (ischemic hypoxia), but that minimal levels of oxygen were sufficient to cause the degradation. Shedding of the glycocalyx can be triggered by inflammatory stimuli, such as tumor necrosis factor-alpha. Whatever the stimulus is, however, shedding of the glycocalyx leads to a drastic increase in vascular permeability. Vascular walls being permeable is disadvantageous, since that would enable passage of some macromolecules or other harmful antigens.\n", "Endothelial cells, which line the blood vessels in the SGZ, are a critical component in the regulation of stem cell self-renewal and neurogenesis. These cells, which reside in close proximity to clusters of proliferating neurogenic cells, provide attachment points for neurogenic cells and release diffusible signals such as vascular endothelial growth factor (VEGF) that help induce both angiogenesis and neurogenesis. 
In fact, studies have shown that neurogenesis and angiogenesis share several common signaling pathways, implying that neurogenic cells and endothelial cells in the SGZ have a reciprocal effect on one another. Blood vessels carry hormones and other molecules that act on the cells in the SGZ to regulate neurogenesis and angiogenesis.\n", "Vasculogenesis is the initial establishment of the components of the blood vessel network, or vascular tree. This is dictated by genetic factors and has no inherent function other than to lay down the preliminary outline of the circulatory system. Once fluid flow begins, biomechanical and hemodynamic inputs are applied to the system set up by vasculogenesis, and the active remodelling process can begin.\n", "In a healthy vascular system the endothelium lines all blood-contacting surfaces, including arteries, arterioles, veins, venules, capillaries, and heart chambers. This healthy condition is promoted by the ample production of nitric oxide by the endothelium, which requires a biochemical reaction regulated by a complex balance of polyphenols, various nitric oxide synthase enzymes and L-arginine. In addition there is direct electrical and chemical communication via gap junctions between the endothelial cells and the vascular smooth muscle.\n\nSection::::Physiology.\n\nSection::::Physiology.:Blood pressure.\n", "BULLET::::- Hill, J. M., Zalos, G., Halcox, J. P., Schenke, W. H., Waclawiw, M. A., Quyyumi, A. A., & Finkel, T. (2003). Circulating endothelial progenitor cells, vascular function, and cardiovascular risk. New England Journal of Medicine. 348: 593-600.\n\nBULLET::::- Werner, N., Kosiol, S., Schiegl, T., Ahlers, P., Walenta, K., Link, A., ... & Nickenig, G. (2005). Circulating endothelial progenitor cells and cardiovascular outcomes. New England Journal of Medicine. 353: 999-1007.\n", "BULLET::::2. Ito WD, Arrasi M, Winkler B, Scholz D, Schaper J, and Schaper W. Monocytochemotactic protein-1 increases collateral and peripheral conductance after femoral artery occlusion. \"Circ Res\" 80: 829–837, 1997.\n\nBULLET::::3. Prior, B. M., Yang, H. T., & Terjung, R. L. What makes vessels grow with exercise training? \"J App Physiol\" 97: 1119-28, 2004.\n\nBULLET::::4. Tronc F, Wassef M, Exposito B, Henrion D, Glagov S, and Tedgui A. Role of NO in flow-induced remodeling of the rabbit common carotid artery. \"Arterioscler Thromb Vasc Biol\" 16: 1256–1262, 1996.\n", "In a second approach, more realistic of the vascular resistance and coming from experimental observations on blood flows, according to Thurston, there is a plasma release-cell layering at the walls surrounding a plugged flow. It is a fluid layer in which at a distance δ, viscosity η is a function of δ written as η(δ), and these surrounding layers do not meet at the vessel centre in real blood flow. Instead, there is the plugged flow which is hyperviscous because holding high concentration of RBCs. Thurston assembled this layer to the flow resistance to describe blood flow by means of a viscosity η(δ) and thickness δ from the wall layer.\n", "Intussusception was first observed in neonatal rats. In this type of vessel formation, the capillary wall extends into the lumen to split a single vessel in two. There are four phases of intussusceptive angiogenesis. First, the two opposing capillary walls establish a zone of contact. Second, the endothelial cell junctions are reorganized and the vessel bilayer is perforated to allow growth factors and cells to penetrate into the lumen. 
Third, a core is formed between the 2 new vessels at the zone of contact that is filled with pericytes and myofibroblasts. These cells begin laying collagen fibers into the core to provide an extracellular matrix for growth of the vessel lumen. Finally, the core is fleshed out with no alterations to the basic structure. Intussusception is important because it is a reorganization of existing cells. It allows a vast increase in the number of capillaries without a corresponding increase in the number of endothelial cells. This is especially important in embryonic development as there are not enough resources to create a rich microvasculature with new cells every time a new vessel develops.\n", "Vascular permeability\n\nVascular permeability, often in the form of capillary permeability or microvascular permeability, characterizes the capacity of a blood vessel wall to allow for the flow of small molecules (drugs, nutrients, water, ions) or even whole cells (lymphocytes on their way to the site of inflammation) in and out of the vessel. Blood vessel walls are lined by a single layer of endothelial cells. The gaps between endothelial cells (cell junctions) are strictly regulated depending on the type and physiological state of the tissue.\n", "Section::::Mechanism.\n", "Endothelial dysfunction\n\nIn vascular diseases, endothelial dysfunction is a systemic pathological state of the endothelium. Along with acting as a semi-permeable membrane, the endothelium is responsible for maintaining vascular tone and regulating oxidative stress by releasing mediators, such as nitric oxide, prostacyclin and endothelin, and controlling local angiotensin-II activity.\n\nSection::::Research.\n\nSection::::Research.:Atherosclerosis.\n\nEndothelial dysfunction may be involved in the development of atherosclerosis and may predate vascular pathology. \n\nSection::::Research.:Nitric oxide.\n", "BULLET::::- Blood clotting (thrombosis & fibrinolysis). The endothelium normally provides a non-thrombogenic surface because it contains, for example, heparan sulfate which acts as a cofactor for activating antithrombin, a protease that inactivates several factors in the coagulation cascade.\n\nBULLET::::- Inflammation. Endothelial cells actively signal to immune cells during inflammation\n\nBULLET::::- Formation of new blood vessels (angiogenesis)\n\nBULLET::::- Vasoconstriction and vasodilation, and hence the control of blood pressure\n\nBULLET::::- Repair of damaged or diseased organs via an injection of blood vessel cells\n\nBULLET::::- Angiopoietin-2 works with VEGF to facilitate cell proliferation and migration of endothelial cells\n", "There is ongoing research to make bio-engineered blood vessels, which may be of immense importance in creating AV fistulas for patients on hemodialysis, who do not have good blood vessels for creation of one. It involves growing cells which produce collagen and other proteins on a biodegradable micromesh tube followed by removal of those cells to make the 'blood vessels' storable in refrigerators.\n", "Section::::Function.:Role in wound healing.\n\nThe role of endothelial progenitor cells in wound healing remains unclear. Blood vessels have been seen entering ischemic tissue in a process driven by mechanically forced ingress of existing capillaries into the avascular region, and importantly, instead of through sprouting angiogenesis. These observations contradict sprouting angiogenesis driven by EPCs. 
Taken together with the inability to find bone-marrow derived endothelium in new vasculature, there is now little material support for postnatal vasculogenesis. Instead, angiogenesis is likely driven by a process of physical force.\n\nSection::::Function.:Role in endometriosis.\n", "Circulating endothelial cell\n\nCirculating endothelial cells (CECs) are endothelial cells that have been shed from the lining of the vascular wall into the blood stream. Endothelial cells normally line blood vessels to maintain vascular integrity and permeability, but when these cells enter into the circulation, this could be a reflection of vascular dysfunction and damage. There are many factors involved in the process of creating CECs, including: reduced interaction between the endothelial cells and basement membrane proteins, damaged endothelial cellular adhesion molecules, mechanical injury, decreased survival of cytoskeletal proteins, and inflammation.\n", "Schaper summarizes the status-2009 knowledge of coronary collateral transformation in a recent review: \"Following an arterial occlusion outward remodeling of pre-existent inter-connecting arterioles occurs by proliferation of vascular smooth muscle and endothelial cells. This is initiated by deformation of the endothelial cells through increased pulsatile fluid shear stress (FSS) caused by the steep pressure gradient between the high pre-occlusive and the very low post-occlusive pressure regions that are interconnected by collateral vessels. Shear stress leads to the activation and expression of all nitric oxide synthetase (NOS) isoforms and nitric oxide production, followed by vascular endothelial growth factor (VEGF) secretion, which induces monocyte chemoattractant protein-1 (MCP-1) synthesis in the endothelium and in the smooth muscle of the media. This leads to attraction and activation of monocytes and T-cells into the adventitial space (peripheral collateral vessels) or attachment of these cells to the endothelium (coronary collaterals). Mononuclear cells produce proteases and growth factors to digest the extra-cellular scaffold and allow motility and provide space for the new cells. They also produce NO from inducible nitric oxide synthetase (iNOS), which is essential for arteriogenesis. The bulk of new tissue production is carried by the smooth muscles of the media, which transform their phenotype from a contractile into a synthetic and proliferative one. Important roles are played by actin binding proteins like actin-binding Rho-activating protein (ABRA), cofilin, and thymosin beta 4 which determine actin polymerization and maturation. Integrins and connexins are markedly up-regulated. A key role in this concerted action, which leads to a 2-to-20 fold increase in vascular diameter, depending on species size (mouse versus human), are the transcription factors AP-1, egr-1, carp, ets, by the Rho pathway and by the mitogen activated kinases ERK-1 and -2. In spite of the enormous increase in tissue mass (up to 50-fold), the degree of functional restoration of blood flow capacity is incomplete and ends at 30% of maximal coronary conductance and 40% in the vascular periphery. The process of arteriogenesis can be drastically stimulated by increases in FSS (arterio-venous fistulas) and can be completely blocked by inhibition of NO production, by pharmacological blockade of VEGF-A, and by the inhibition of the Rho-pathway. 
Pharmacological stimulation of arteriogenesis, important for the treatment of arterial occlusive diseases, seems feasible with NO donors.\"\n", "In large vessels with low hematocrit, viscosity dramatically drops and red cells take in a lot of energy, while in smaller vessels at the micro-circulation scale, viscosity is very high. With the increase in shear stress at the wall, a lot of energy is used to move cells.\n\nSection::::Shear rate relations.\n", "A recent study published in The American Journal of Pathology provides information about the mechanisms underlying failure of the most common type of hemodialysis vascular access, the arteriovenous fistula. In spite of the AV fistula being one of the most preferred methods of vascular access, the researchers observed that up to 60% of newly created fistulas never become usable for dialysis because they fail to mature (meaning the vessels do not enlarge enough to support the dialysis blood circuit). This study suggests that the impairment in responsiveness to nitric oxide that occurs in some patients with end-stage renal disease may result in hyperplasia (excessive growth) of the innermost layer of the blood vessels or reduced ability of the vessels to dilate. Either abnormality can limit the maturation and viability of the arteriovenous fistula. This research raises the possibility that therapeutic restoration of nitric oxide responsiveness through manipulation of local mediators may prevent fistula maturation failure in patients and potentially contribute to their ability to remain on hemodialysis.\n" ]
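The passages above quote wall shear stress magnitudes (physiological levels, 10 Pa, 28.4 Pa) without showing where such numbers come from. For steady Poiseuille flow in a cylindrical vessel, wall shear stress is τ = 4μQ/(πR³). A minimal sketch; the viscosity, radius, and flow rate are illustrative assumptions:

```python
import math

# Wall shear stress for steady Poiseuille flow in a cylindrical vessel:
#   tau = 4 * mu * Q / (pi * R**3)
# mu: dynamic viscosity (Pa*s), Q: volumetric flow (m^3/s), R: radius (m).

def wall_shear_stress(mu, q, r):
    return 4.0 * mu * q / (math.pi * r ** 3)

mu_blood = 3.5e-3     # Pa*s, a commonly assumed effective blood viscosity
radius = 2.0e-3       # m, a mid-sized artery (assumed)
flow = 100e-6 / 60.0  # m^3/s, i.e. 100 mL/min (assumed)

tau = wall_shear_stress(mu_blood, flow, radius)
print(f"wall shear stress ~ {tau:.2f} Pa")  # ~0.93 Pa, around the physiological range
```

Doubling the flow doubles τ, while halving the radius multiplies it by eight, which is why small changes in vessel geometry can swing the endothelium between the quiescent and inflammatory regimes described above.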
[ "Endolethial glycocalyx is the correct term for the process that modifies transvascular fluid exchange." ]
[ "Endolethium glycocalyx is the correct term for the process. " ]
[ "false presupposition" ]
[ "Endolethial glycocalyx is the correct term for the process that modifies transvascular fluid exchange.", "Endolethial glycocalyx is the correct term for the process that modifies transvascular fluid exchange." ]
[ "normal", "false presupposition" ]
[ "Endolethium glycocalyx is the correct term for the process. ", "Endolethium glycocalyx is the correct term for the process. " ]
2018-15308
Why is a car’s usage measured in miles traveled? Why not engine hours as well?
Because normally they're pretty much the same thing. Most people don't idle their cars for hours on end, and though I have heard tell of people using their vehicle engines as some kind of power take-off for tools/generators, it's pretty rare.
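The "pretty much the same thing" claim is just a proportionality: at a typical average speed, engine hours and odometer miles carry the same information. A minimal sketch; the average speed and idle fraction are illustrative assumptions, not industry standards:

```python
# Engine hours vs. odometer miles. For most passenger cars the two are roughly
# proportional, which is why an odometer alone suffices. Constants are assumed.

def equivalent_miles(engine_hours, avg_speed_mph=35.0, idle_fraction=0.10):
    """Estimate odometer miles from an hour-meter reading.

    avg_speed_mph: assumed average moving speed
    idle_fraction: assumed share of engine hours spent idling (zero miles)
    """
    return engine_hours * (1.0 - idle_fraction) * avg_speed_mph

hours = 2000.0  # an assumed hour-meter reading
print(f"{hours:.0f} engine hours ~ {equivalent_miles(hours):,.0f} miles")
```

Heavy idling or power take-off work breaks the proportionality, which is exactly the niche served by the Hobbs hour meter described in the passages below.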
[ "poemThis is what I told them in California. When I hit the road with hundreds of pounds of baggage, typewriters and testing equipment, I’m not out there just to have fun. I want to get from here to there, which may be thousands of miles away, with as much comfort as possible. Besides, Boji [his dog] now demands comfort. So does my wife.\n", "BULLET::::- Engine Usage Indicator: Hobbs Hour Meter Engine Usage indicator records actual engine hours of operation-whether idling or on road. Essential for construction contractors or fleet operators who base maintenance intervals on running time rather than mileage.\n", "BULLET::::- Units of distance per fixed fuel unit: Miles per gallon (mpg) is commonly used in the United States, the United Kingdom, and Canada (alongside L/100 km). Kilometers per liter (km/L) is more commonly used elsewhere in the Americas, Asia, parts of Africa and Oceania. In Arab countries km/20 L, which is known as kilometers per \"tanaka\" (or \"Tanakeh\") is used, where \"tanaka\" is a metal container which has a volume of twenty liters. When the mpg unit is used, it is necessary to identify the type of gallon used: the imperial gallon is 4.54609 liters, and the U.S. gallon is 3.785 liters. When using a measure expressed as distance per fuel unit, a higher number means more efficient, while a lower number means less efficient.\n", "In the UK the ASA (Advertising standards agency) have claimed that fuel consumption figures are misleading. Often the case with European vehicles as the MPG (miles per gallon) figures that can be advertised are often not the same as 'real world' driving.\n", "The FMCSA is a division of the United States Department of Transportation (DOT), which is generally responsible for enforcement of FMCSA regulations. The driver of a CMV is required to keep a record of working hours using a log book, outlining the total number of hours spent driving and resting, as well as the time at which the change of duty status occurred. In lieu of a log book, a motor carrier may keep track of a driver's hours using Electronic Logging Devices (ELDs), which automatically record the amount of time spent driving the vehicle.\n", "Since the total force opposing the vehicle's motion (at constant speed) multiplied by the distance through which the vehicle travels represents the work that the vehicle's engine must perform, the study of fuel economy (the amount of energy consumed per unit of distance traveled) requires a detailed analysis of the forces that oppose a vehicle's motion. In terms of physics, Force = rate at which the amount of work generated (energy delivered) varies with the distance traveled, or:\n", "While the thermal efficiency (mechanical output to chemical energy in fuel) of petroleum engines has increased since the beginning of the automotive era, this is not the only factor in fuel economy. The design of automobile as a whole and usage pattern affects the fuel economy. Published fuel economy is subject to variation between jurisdiction due to variations in testing protocols.\n", "Household goods (HHG) miles, from the \"Household Goods Mileage Guide\" (aka \"short miles\") was the first attempt at standardizing motor carrier freight rates for movers of household goods, some say at the behest of the Department of Defense for moving soldiers around the country, long a major source of steady and reliable revenue. 
Rand McNally, in conjunction with the precursor of the National Moving & Storage Association, developed the first Guide, published in 1936, at which point it contained only about 300 point-to-point mileages.\n", "Drivers' working hours\n\nDrivers' working hours is the commonly used term for regulations that govern the activities of the drivers of commercial goods vehicles and passenger-carrying vehicles. In the United States, they are known as hours of service.\n", "However, total mileage over time must be taken into consideration when considering total emissions volume, as was investigated in the \"Dust to Dust\" environmental impact study, which considers factors other than fuel economy.\n", "In the United States, it is computed per 100 million miles traveled, while internationally it is computed in 100 million or 1 billion kilometers traveled.\n\nAccording to the Minnesota Department of Public Safety, Office of Traffic Safety\n\nSection::::Energy efficiency.\n\nEnergy efficiency in transport can be measured in L/100 km or miles per gallon (mpg). This can be normalized per vehicle, as in fuel economy in automobiles, or per seat, as for example in fuel economy in aircraft.\n\nSection::::See also.\n\nBULLET::::- Rail usage statistics by country\n\nSection::::External links.\n", "Note: The amount of work generated by the vehicle's power source (energy delivered by the engine) would be exactly proportional to the amount of fuel energy consumed by the engine if the engine's efficiency is the same regardless of power output, but this is not necessarily the case due to the operating characteristics of the internal combustion engine.\n\nFor a vehicle whose source of power is a heat engine (an engine that uses heat to perform useful work), the amount of fuel energy that a vehicle consumes per unit of distance (level road) depends upon:\n", "Manufacturers such as BMW, Honda, Toyota and Mercedes-Benz have included fuel-economy gauges in some instrument clusters, showing fuel mileage in real time, though these were limited mainly to luxury vehicles and, later, hybrids. Following a focus on increasing fuel economy in the late 2000s along with increased technology, most vehicles in the 2010s now come with either real-time or average mileage readouts on their dashboards. The ammeter was the gauge of choice for monitoring the state of the charging system until the 1970s. Later it was replaced by the voltmeter. Today most family vehicles have warning lights instead of voltmeters or oil pressure gauges in their dashboard instrument clusters, though sports cars often have proper gauges for performance purposes and driver appeasement, along with larger trucks, mainly to monitor system function during heavy usage such as towing or off-road usage.\n", "Electronic Logging Devices can be thought of as an automated electronic log book. An ELD records the same information as a manual paper log book, and requires less input from the driver. The ELD automatically records driving time and location, leaving the driver responsible only for reporting on-duty and off-duty time. In these respects, the ELD is less susceptible to forgery than a paper log book.\n", "Although originally created as a reference point for fossil-fueled vehicles, driving cycles have been used for estimating how many miles an electric vehicle will get on a single charge.\n\nSection::::Programs.:National Pollutant Discharge Elimination System.\n", "BULLET::::- Electrical systems. 
Headlights, battery charging, active suspension, circulating fans, defrosters, media systems, speakers, and other electronics can also significantly increase fuel consumption, as the energy to power these devices causes increased load on the alternator. Since alternators are commonly only 40–60% efficient, the added load from electronics on the engine can be as high as at any speed including idle. In the FTP 75 cycle test, a 200 watt load on the alternator reduces fuel efficiency by 1.7 MPG. Headlights, for example, consume 110 watts on low and up to 240 watts on high. These electrical loads can cause much of the discrepancy between real world and EPA tests, which only include the electrical loads required to run the engine and basic climate control.\n", "For example, the 2016 Range Rover Autobiography V8 diesel has an official CO figure of 219g/km. Under the previous rates, VED was £620 for the first year and then £280 for each subsequent year. For the same model registered after 1 April 2017, VED is £1,200 in year one, £450 in years 2 to 6 and then £140 from year 7. Assuming ten years of ownership, pre-2017 rates totalled £3,140. New rates total £4,010.\n\nIn addition, cars with a list value of over £40,000 pay £310 supplement for five years of the standard rate.\n", "Using GGE to compare fuels for use in an internal combustion engine is only the first part of the equation whose bottom line is useful work. In the context of GGE, a real world kind of \"useful work\" is miles per gallon (MPG) as advertised by motor vehicle manufacturers.\n", "On November 24, 2008 the Internal Revenue Service issued the 2009 Optional Standard Mileage Rates used to calculate the deductible costs of operating an automobile for business, charitable, medical or moving purposes.\n\nBeginning January 1, 2009, the standard mileage rates for the use of a car (also vans, pickups or panel trucks) follow :\n\nBULLET::::- 55 cents per mile for business miles driven\n\nBULLET::::- 24 cents per mile driven for medical or moving purposes\n\nBULLET::::- 14 cents per mile driven in service of charitable organizations\n", "Section::::Clocking/busting miles and legality.\n", "The proposed design and final content for two options of the new sticker label that would be introduced in 2013 model year cars and trucks were consulted for 60 days with the public in 2010, and both include miles per gallon equivalent and kW-h per 100 miles as the fuel economy metrics for plug-in cars, but in one option MPGe and annual electricity cost are the two most prominent metrics. In November 2010, EPA introduced MPGe as comparison metric on its new sticker for fuel economy for the Nissan Leaf and the Chevrolet Volt.\n", "BULLET::::- vintage heavy vehicle\n\nBULLET::::- special-type vehicle such as a forklift truck, road roller or roadside maintenance vehicle\n\nBULLET::::- vehicle recovery service vehicle\n\nBULLET::::- urban bus\n\nSection::::Logbooks and time-keeping.\n\nAll drivers of vehicles subject to work-time rules must keep one current logbook in the vehicle while driving. The logbook must be up-to-date to the most recent period of rest time. Drivers complete a logbook course as part of class 2 heavy vehicle licence training.\n\nPaper logbooks are available in two variants:\n\nBULLET::::- Heavy vehicle (vehicles over 3500kg)\n\nBULLET::::- Small passenger service vehicle (e.g. taxi)\n", "There are three main reasons commercial VIO data differs from data from the US government. 
The first is due to variation when data is reported by states to the US government. States are required to report registrations using form FHWA-561 once per calendar year or fiscal year. Forty-six states end their fiscal year on June 30 and four end in March, August or September. This data is due to the FHWA by January 1 of the following year, creating a lag time of about six months and thereby not accounting for half a year of changes. Second, the government's definitions of vehicle classifications change over time. A footnote added to FHWA datafiles states, \"...Data for 2007-10 were calculated using a new methodology developed by FHWA. Data for these years are based on new categories and \"are not comparable to previous years\"\". Third, the government can include vehicles not in use, or double-count vehicles that have been transferred across two states. According to the FHWA Office of Highway Policy Information, \"Although many States continue to register specific vehicle types on a calendar year basis, all States use some form of the \"staggered\" system to register motor vehicles. \"Registration practices for commercial vehicles differ greatly among States\". The FHWA data include all vehicles which have been registered at any time throughout the calendar year. Data include vehicles \"which were retired during the year and vehicles that were registered in more than one State\". In some States, it is also possible that contrary to the FHWA reporting instructions, vehicles \"which have been registered twice in the same State may be reported as two vehicles\"\". (All italics added for emphasis.)\n", "In 2006, AAA worked with the EPA to improve the fuel economy information provided to new car buyers by vehicle manufacturers. Using several different types of tests, AAA recreated real-world driving conditions to illustrate the difference in fuel economy, and the EPA incorporated AAA’s testing into their new procedures. The more accurate testing resulted in a reduction of miles per gallon claims between 5 and 25 percent, beginning with 2008 model year vehicles.\n", "Environmental management systems such as EMAS, as well as good fleet management, include record-keeping of fleet fuel consumption. Quality management uses those figures to steer the measures acting on the fleets. This is a way to check whether procurement, driving, and maintenance in total have contributed to changes in the fleet's overall consumption.\n\nSection::::Fuel economy standards and testing procedures.\n\n(table omitted; * highway, ** combined)\n\nSection::::Fuel economy standards and testing procedures.:Australia.\n" ]
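The units passage above gives the exact constants needed to move between mpg and L/100 km (imperial gallon = 4.54609 L, US gallon = 3.785 L); the mile-to-kilometre factor is the standard 1.609344. A short sketch of the conversion arithmetic:

```python
# Fuel-economy unit conversion using the gallon sizes quoted in the passages.
L_PER_US_GAL = 3.785
L_PER_IMP_GAL = 4.54609
KM_PER_MILE = 1.609344

def mpg_to_l_per_100km(mpg, litres_per_gallon=L_PER_US_GAL):
    km_per_litre = mpg * KM_PER_MILE / litres_per_gallon
    return 100.0 / km_per_litre

for mpg in (20, 30, 40):
    us = mpg_to_l_per_100km(mpg)
    imp = mpg_to_l_per_100km(mpg, L_PER_IMP_GAL)
    print(f"{mpg} mpg = {us:.1f} L/100 km (US gal) or {imp:.1f} L/100 km (imp gal)")
```

Note the inversion the passage warns about: a higher mpg figure and a lower L/100 km figure both mean a more efficient vehicle.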
[ "A car's engine hours should also be used to determine a car's usage, not just measured in miles traveled." ]
[ "Normally, miles traveled and engine hours are the same thing." ]
[ "false presupposition" ]
[ "A car's engine hours should also be used to determine a car's usage, not just measured in miles traveled." ]
[ "false presupposition" ]
[ "Normally, miles traveled and engine hours are the same thing." ]
2018-22891
How were/are bounty amounts determined?
So when you get arrested, you may be able to post bail. That means you can pay the court to let you stay out of jail before your trial, and you get the money back if you show up to court. Usually, you couldn't pay the bail yourself if it's really high, so a bail bond company may lend you the money. If you don't show up to court, the bail bond company is out that money. The bail bond company then puts a bounty on your head, worth part of your bail, so someone will find you and get you arrested, letting the company collect the bail back. The hunter gets some of it and so does the bail bond company.
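The incentive chain in the comment is simple percentage arithmetic. A minimal sketch; the premium and recovery-fee rates are assumptions chosen for illustration, since actual rates vary by state and by contract:

```python
# Rough economics of a bail-bond recovery. All rates are illustrative
# assumptions, not legal or industry constants.
bail_amount = 50_000.00
premium_rate = 0.10       # assumed non-refundable fee the defendant pays
recovery_fee_rate = 0.10  # assumed share of bail offered as the bounty

premium = bail_amount * premium_rate          # bondsman's up-front revenue
hunter_fee = bail_amount * recovery_fee_rate  # bounty for returning the skip

print(f"premium collected: ${premium:,.2f}")
print(f"bond at risk:      ${bail_amount:,.2f} if the no-show stands")
print(f"bounty to hunter:  ${hunter_fee:,.2f}")
print(f"bondsman saves:    ${bail_amount - hunter_fee:,.2f} by paying the bounty")
```

Whatever the exact rates, the structure explains the sizing: the bounty must be a fraction of the bail at risk, or recovery wouldn't be worth funding.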
[ "A bounty (from Latin \"bonitās\", goodness) is a payment or reward often offered by a group as an incentive for the accomplishment of a task by someone usually not associated with the group. Bounties are most commonly issued for the capture or retrieval of a person or object. They are typically in the form of money. By definition, they can be retracted at any time by whomever issued them. Two modern examples of bounties are the ones placed for the capture of Saddam Hussein and his sons by the United States government and Microsoft's bounty for computer virus creators. Those who make a living by pursuing bounties are known as bounty hunters.\n", "Section::::Examples.\n\nSection::::Examples.:Historical examples.\n\nWritten promises of reward for the capture of or information regarding criminals go back to at least the first-century Roman Empire. Graffiti from Pompeii, a Roman city destroyed by a volcanic eruption in 79 AD, contained this message:\n\nA copper pot went missing from my shop. Anyone who returns it to me will be given 65 bronze coins (\"sestertii\"). Twenty more will be given for information leading to the capture of the thief.\n", "A bounty system was used in the American Civil War as an incentive to increase enlistments. Another bounty system was used in New South Wales to increase the number of immigrants from 1832.\n", "Section::::Examples.:Fictional representations.\n", "BULLET::::- \"The Bounty Hunter\" (\"K-9\"), an episode of \"K-9\"\n\nBULLET::::- \"The Bounty Hunter\" (\"My Name Is Earl\"), a first season episode of \"My Name Is Earl\"\n\nBULLET::::- , a second season episode of \"Star Wars: The Clone Wars\"\n\nBULLET::::- \"Bounty Hunters\" (1996 film), an American/Canadian film\n\nBULLET::::- Bounty Hunters (2005 film), an American TV movie\n\nBULLET::::- \"Bounty Hunters\" (2016 film), a Chinese-South Korean-Hong Kong film\n\nSection::::Gaming.\n\nBULLET::::- \"\", a 2003 first person shooter on the PlayStation 2, Xbox, and PC\n\nBULLET::::- \"\", a 2002 \"Star Wars\" video game developed and published by LucasArts\n", "Bounty (stylised in lowercase letters) is the second studio album by Swedish audiovisual project iamamiwhoami, led by singer and songwriter Jonna Lee. Originally produced and released as a series of singles throughout 2010 and 2011, it was later released as an album on 3 June 2013 on Lee's label To whom it may concern. and distributed by Cooperative Music. The first music video from \"Bounty\", titled \"B\", was released on 14 March 2010 on iamamiwhoami's YouTube channel. After which followed \"O\", \"U-1\", \"U-2\", \"N\", \"T\" and \"Y\". Digital singles are released shortly after each music video is uploaded to YouTube. The titles collectively formed the word \"bounty\". 
While it was assumed that these songs solely consisted of \"Bounty\"s track listing, in 2011 two more singles and music videos, \"; John\" and \"Clump\", were released and were not confirmed as belonging to \"Bounty\" until June 2012 when iamamiwhoami's YouTube channel grouped them into a playlist named \"Bounty\" along with the previous tracks mentioned.\n", "Section::::Examples.:21st-century examples.\n\nThe majority of prisoners held in Guantánamo Bay detainment camp were handed over by bounty hunters.\n\nThe Isabella Stewart Gardner Museum in Boston offered a $5 million reward for the return, in good condition, of the 13 works of art taken from its galleries in March 1990.\n\nSection::::Other uses.\n\nSection::::Other uses.:Mathematics.\n", "Section::::Retribution.:Aftermath.\n", "Being a bounty jumper was more profitable in the North. A month after the Battle of Fort Sumter the United States Congress passed a law allowing for bounties up to $300. The Confederate government did likewise, starting at $50 and then later in the war increased the bounty to $100. As the US dollar was worth more than the Confederate dollar ever was, regardless of the $200 disparity, the Northern government had greater luck with bounties, and was more likely to have to deal with bounty jumpers. With state and local governments also adding to bounties, the total could amount to $1000, a considerable amount. As the typical Northern private was paid $13 a month, the bonuses were considerable.\n", "Tory, a teacher at an exclusive Manhattan private school is a former bounty hunter who is forced out of retirement for one final capture. With her current fiancée in the fold, she gets help from her former bad boy boyfriend to get one last capture of a former capture seeking revenge, while trying to keep her fiancée from finding out her dangerous past.\n\nSection::::Cast.\n\nBULLET::::- Francia Raisa as Tory Bell\n\nBULLET::::- Mike \"The Miz\" Mizanin as Mike\n\nBULLET::::- Will Greenberg as James\n\nBULLET::::- Chelan Simmons as Liz\n\nBULLET::::- April Telek as Gale Bell\n\nBULLET::::- Michael Hanus as Hawk Bell\n", "Bounties were sometimes paid as rewards for killing Native Americans. In 1862, a farmer received a bounty for shooting Taoyateduta (Little Crow). In 1856, Governor Isaac Stevens put a bounty on the head of Indians from eastern Washington, for ordinary Indians and for a \"chief\". A western Washington Indian, Patkanim, chief of the Snohomish, obligingly provided a great many heads, until the territorial auditor put a stop to the practice due to the dubious origins of the deceased.\n", "Bounties have been offered on animals deemed undesirable by particular governments or corporations. In Tasmania, the thylacine was relentlessly hunted to extinction based on such schemes. Gray wolves, too, were extirpated from much of the present United States by bounty hunters. 
An example of the legal sanction granted can be found in a Massachusetts Bay Colony law dated May 7, 1662: \"This Court doth Order, \"as an encouragement to persons to destroy Woolves\", That henceforth every person killing any Woolf, shall be allowed out of the Treasury of that County where such woolf was slain, Twenty shillings, and by the Town Ten shillings, and by the County Treasurer Ten shillings: which the Constable of each Town (on the sight of the ears of such Woolves being cut off) shall pay out of the next County rate, which the Treasurer shall allow.\"\n", "There have been some states that have rolled out specific laws that govern bounty hunting. For example, Minnesota laws provide that a bounty hunter cannot drive a white, black, maroon, or dark green vehicle, or wear any colors that are reserved for the police in the state (e.g. maroon, which is worn by the Minnesota Highway Patrol).\n\nSection::::United States.:Laws and regulation.:Connecticut.\n", "As of 2008, four states, Illinois, Kentucky, Oregon, and Wisconsin prohibited the practice, as they have abolished commercial bail bonds and banned the commercial bail bonds industry within their borders. As of 2012, Nebraska and Maine similarly prohibit surety bail bonds. Some states such as Texas and California require a license to engage in bounty hunting while others may have no restrictions.\n", "In Australia in 1824, a bounty of 500 acres of land was offered for capturing alive the Wiradjuri warrior Windradyne, the leader of the Aboriginal resistance movement in the Bathurst Wars. A week after the bounty was offered, the word \"alive\" was dropped from the reward notices, but he was neither captured nor betrayed by his people.\n", "Section::::Mutiny.:\"Bounty\" under Christian.\n", "Bounty\n\nBounty or bounties commonly refers to:\n\nBULLET::::- Bounty (reward), an amount of money or other reward offered by an organization for a specific task done with a person or thing\n\nBounty or bounties may also refer to:\n\nSection::::Geography.\n\nBULLET::::- Bounty, Saskatchewan, a ghost town located in Saskatchewan, Canada\n\nBULLET::::- Bounty Bay, an embayment of the Pacific Ocean into Pitcairn Island, named for the ship\n\nBULLET::::- Bounty Islands, a small group of 13 islets and numerous rocks in the south Pacific Ocean which are territorially part of New Zealand\n\nSection::::Arts, entertainment, and media.\n\nSection::::Arts, entertainment, and media.:Fictional entities.\n", "Section::::Examples.:Rewards and thief-takers.\n", "Bounty Hunters (American TV series)\n\nBounty Hunters is an American adult animated situation comedy series. The series originally aired on CMT from July 13 to September 28, 2013. The series shows how Jeff, Larry, and Bill are bounty hunters when they being got \"bounty\" or \"assignment\" from Lisa (who runs Lisa's Bail Bonds).\n\nSection::::Plot.\n", "Section::::Bounty.\n\nThe U.S. Department of State was offering a reward of USD $5 million for information leading to the arrest and/or conviction of Héctor Beltrán Leyva, while the Mexican government offered a US$2.1 million bounty reward.\n\nSection::::Bounty.:Kingpin Act sanction.\n", "Section::::Other uses.:American football.\n\nBounties, referring to bonuses for in-game performance, are officially banned by the National Football League, the sport's dominant professional league. Despite this, bounties have had a significant history within the sport. 
Notable examples include a 1989 game between the Dallas Cowboys and Philadelphia Eagles that became known as the Bounty Bowl, and a bounty scheme organized by players and coaches with the New Orleans Saints that was uncovered in 2012, leading to substantial penalties.\n\nSection::::Other uses.:Recruitment.\n", "Section::::Cast.\n\nBULLET::::- Christian Pitre as Mary Death\n\nBULLET::::- Matthew Marsden as Francis Gorman/Drifter\n\nBULLET::::- Kristanna Loken as Catherine\n\nBULLET::::- Barak Hardley as Jack LeMans\n\nBULLET::::- Abraham Benrubi as Jimbo\n\nBULLET::::- Eve Jeffers as Mocha Sujata\n\nBULLET::::- Beverly D'Angelo as Lucille\n\nBULLET::::- Kevin McNally as Daft Willy\n\nBULLET::::- Mindy Robinson as Estelle\n\nBULLET::::- Gary Busey as Van Sterling\n\nBULLET::::- Jeff Meacham as Greg Gunney\n\nBULLET::::- Will Collyer as Billy Boom\n\nBULLET::::- Soon Hee Newbold as Vio Lin\n\nSection::::Production.\n", "Section::::Examples.:18th-century examples.\n", "Slave Ship is the second book in The Bounty Hunter Wars trilogy of books in the \"Star Wars\" expanded universe. It was written by K. W. Jeter.\n\nSection::::Entries (4 ABY).:\"Hard Merchandise\" by K.W. Jeter.\n\n\"Hard Merchandise\", 1st edition paperback, 1999. K. W. Jeter, \n\nHard Merchandise is the final book in The Bounty Hunter Wars trilogy of books in the Universe. It was written by K. W. Jeter.\n", "Section::::Arts, entertainment, and media.:Television episodes.\n\nBULLET::::- \"Bounty\" (\"The A-Team\")\n\nBULLET::::- \"Bounty\" (\"Blake's 7\")\n\nBULLET::::- \"Bounty\" (\"Stargate SG-1\")\n\nBULLET::::- \"Bounty\" (\"The Walking Dead\")\n\nSection::::Arts, entertainment, and media.:Other arts, entertainment, and media.\n\nBULLET::::- \"Bounty\" (Doctor Who audio), a Doctor Who audio production based on the television series\n\nBULLET::::- Bounty (poker), a feature in some poker tournaments that rewards a player for eliminating another player\n\nSection::::Brands and enterprises.\n\nBULLET::::- Bounty (brand), a brand of paper towel manufactured by Procter & Gamble\n\nBULLET::::- Bounty (chocolate bar), a brand of coconut-filled chocolate bar\n\nSection::::Ships.\n" ]
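The Civil War passage above gives enough figures to see why bounty jumping paid: a federal bounty of up to $300 (stacking to around $1,000 with state and local add-ons) against a Union private's $13 monthly pay. A quick check of that ratio, using only numbers from the passage:

```python
# Bounty vs. soldier's pay, figures taken from the bounty-jumper passage above.
federal_bounty = 300      # USD, federal maximum cited
stacked_bounty = 1_000    # USD, with state/local bounties added
private_monthly_pay = 13  # USD, typical Northern private

print(f"federal bounty = {federal_bounty / private_monthly_pay:.1f} months of pay")
print(f"stacked bounty = {stacked_bounty / private_monthly_pay:.1f} months of pay")
```

One successful jump could be worth more than six years of a private's wages, which is the whole economics of the fraud in two lines.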
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04287
How do people who never lift, work out, or otherwise strengthen themselves experience muscle hypertrophy and pack on mass/definition without stimulating the muscles?
Genetics and especially your own day-to-day life can also play a big part in muscle development. Anything can be a workout, from walking at a fast pace to your workplace to lifting boxes, office supplies, etc.
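The comment's point that ordinary activity is a real stimulus can be made concrete with basic physics: lifting is mechanical work, W = m·g·h per repetition. A minimal sketch; the box mass, lift height, and repetition count are illustrative assumptions:

```python
# Mechanical work done by everyday lifting (the comment's "lifting boxes").
# All task parameters are assumed for illustration.
G = 9.81            # m/s^2, gravitational acceleration
box_mass = 15.0     # kg (assumed)
lift_height = 1.2   # m, floor to shelf (assumed)
lifts_per_day = 40  # repetitions (assumed)

work_per_lift = box_mass * G * lift_height  # joules per repetition
daily_work = work_per_lift * lifts_per_day

print(f"work per lift: {work_per_lift:.0f} J")
print(f"daily lifting work: {daily_work / 1000:.1f} kJ")
```

The absolute numbers are small next to a gym session, but repeated daily for years they are a recurring loading stimulus, which is the comment's point about day-to-day life.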
[ "BULLET::::- Muscle tone—The tone of muscles is controlled by the nervous system, and influences range of movement. Special techniques can change muscle tone and increase flexibility. Yoga, for example, can help to relax muscles and make the joints more supple. However, please note that Yoga is not recommended by most medical professionals for people with Joint Hypermobility Syndrome, due to the likelihood of damage to the joints. Gymnasts and athletes can sometimes acquire hypermobility in some joints through activity.\n\nBULLET::::- Proprioception—Compromised ability to detect exact joint/body position with closed eyes, may lead to overstretching and hypermobile joints.\n", "Being diagnosed with hypermobility syndrome can be a difficult task. There is a lack of wide understanding of the condition and it can be considered a zebra condition. As Hypermobility Syndrome can be easily mistaken for being double-jointed or categorized as nothing more than perhaps an achy body from lack of exercise, medical professionals may diagnose those affected incorrectly and not adequately investigate the symptoms. Due to these circumstances many sufferers can live not knowing they have it. As a result those affected without a proper diagnosis can easily injure themselves and not take proper care to ensure they go about working safely. \n", "Section::::In sports.\n", "Examples of increased muscle hypertrophy are seen in various professional sports, mainly strength related sports such as boxing, olympic weightlifting, mixed martial arts, rugby, professional wrestling and various forms of gymnastics. Athletes in other more skill-based sports such as basketball, baseball, ice hockey, and soccer may also train for increased muscle hypertrophy to better suit their position of play. For example, a center (basketball) may want to be bigger and more muscular to better overpower his or her opponents in the low post. Athletes training for these sports train extensively not only in strength but also in cardiovascular and muscular endurance training.\n", "Symptoms may last for days, weeks, or months until the injury is healed. The most apparent sign of hypermetabolism is an abnormally high intake of calories followed by continuous weight loss. Internal symptoms of hypermetabolism include but are not limited to: peripheral insulin resistance, elevated catabolism of protein, carbohydrates and triglycerides, and a negative nitrogen balance in the body. \n\nOutward symptoms of hypermetabolism may include:\n\nBULLET::::- Sudden weight loss\n\nBULLET::::- Anemia\n\nBULLET::::- Fatigue\n\nBULLET::::- Elevated heart rate\n\nBULLET::::- Irregular heartbeat\n\nBULLET::::- Insomnia\n\nBULLET::::- Dysautonomia\n\nBULLET::::- Shortness of breath\n\nBULLET::::- Muscle weakness\n\nBULLET::::- Excessive sweating\n\nSection::::Detection.\n", "Psychologists have identified further clinical features of muscle dysmorphia, such as excess engagement in activities to increase muscularity, activities such as dietary restriction, over-exercise, and injection of growth-enhancing drugs. Persons experiencing muscle dysmorphia generally spend over three hours daily pondering increased muscularity, and feel unable to limit their weightlifting activities. As in anorexia nervosa, the reverse quest in muscle dysmorphia can be insatiable. They closely monitor their bodies and camouflage by wearing multiple clothing layers to appear larger.\n", "Other health conditions and disorders can cause hyperlordosis. 
Achondroplasia (a disorder where bones grow abnormally which can result in short stature as in dwarfism), Spondylolisthesis (a condition in which vertebrae slip forward) and osteoporosis (the most common bone disease in which bone density is lost resulting in bone weakness and increased likelihood of fracture) are some of the most common causes of hyperlordosis. Other causes include obesity, hyperkyphosis (spine curvature disorder in which the thoracic curvature is abnormally rounded), discitits (an inflammation of the intervertebral disc space caused by infection) and benign juvenile lordosis. Other factors may also include those with rare diseases, as is the case with Ehlers Danlos Syndrome (EDS), where hyper-extensive and usually unstable joints (e.g. joints that are problematically much more flexible, frequently to the point of partial or full dislocation) are quite common throughout the body. With such hyper-extensibility, it is also quite common (if not the norm) to find the muscles surrounding the joints to be a major source of compensation when such instability exists.\n", "Section::::Hypertrophy stimulation.:Anaerobic training.\n\nThe best approach to specifically achieve muscle growth remains controversial (as opposed to focusing on gaining strength, power, or endurance); it was generally considered that consistent anaerobic strength training will produce hypertrophy over the long term, in addition to its effects on muscular strength and endurance. Muscular hypertrophy can be increased through strength training and other short-duration, high-intensity anaerobic exercises. Lower-intensity, longer-duration aerobic exercise generally does not result in very effective tissue hypertrophy; instead, endurance athletes enhance storage of fats and carbohydrates within the muscles, as well as neovascularization.\n\nSection::::Temporary swelling.\n", "Some common symptoms of hypermobility syndrome include:\n\nBULLET::::- Joint pain around the affected area;\n\nBULLET::::- Exhaustion (typically when affected area is the legs);\n\nBULLET::::- Swelling around the joint when joint is being exhorted;\n\nBULLET::::- Depression;\n\nBULLET::::- Weaker immune system;\n\nBULLET::::- Sensitive skin around the affected area;\n\nBULLET::::- Varying pain levels around the affected area;\n\nBULLET::::- Muscle spasms\n\nOther symptoms can appear and not everyone affected experiences the same symptoms.\n\nSection::::Diagnosis.\n", "Training to failure\n\nIn weight training, training to failure is repeating an exercise (such as the bench press) to the point of momentary muscular failure, i.e. the point where the neuromuscular system can no longer produce adequate force to overcome a specific workload.\n\nThe Current Medical Diagnosis and Treatment states that training to failure is necessary for maximal hypertrophic response.\n\nSection::::Heavy or light weights?\n", "Microtrauma, which is tiny damage to the fibers, may play a significant role in muscle growth. When microtrauma occurs (from weight training or other strenuous activities), the body responds by overcompensating, replacing the damaged tissue and adding more, so that the risk of repeat damage is reduced. Damage to these fibers has been theorized as the possible cause for the symptoms of delayed onset muscle soreness (DOMS), and is why progressive overload is essential to continued improvement, as the body adapts and becomes more resistant to stress. 
However, work examining the time course of changes in muscle protein synthesis and their relationship to hypertrophy showed that damage was unrelated to hypertrophy. In fact, in that study the authors showed that it was not until the damage subsided that protein synthesis was directed to muscle growth.\n", "Hypertrophy serves to maintain muscle mass, for an elevated basal metabolic rate, which has the potential to burn more calories in a given period compared to aerobics. This helps to maintain a higher metabolic rate which would otherwise diminish after metabolic adaption to dieting, or upon completion of an aerobic routine.\n", "BULLET::::- Denying the over exercising is a problem\n\nSection::::Causes.\n", "Section::::Signs and symptoms.\n\nThe majority of people with DISH are not symptomatic, and the findings are an incidental imaging abnormality.\n", "Section::::Muscle injury.\n\nEccentric contractions are a frequent cause of muscle injury when engaging in unaccustomed exercise. But a single bout of such eccentric exercise leads to adaptation which will make the muscle less vulnerable to injury on subsequent performance of the eccentric exercise.\n\nSection::::Findings.\n\nSeveral key findings have been researched regarding the benefits of eccentric training:\n\nBULLET::::- Eccentric training creates greater force owing to the \"decreased rate of cross-bridge muscle detachments.\" Patients and athletes will have more muscle force for bigger weights when eccentric training.\n", "Section::::Risk factors.\n\nAlthough muscle dysmorphia's development is unclear, several risk factors have been identified.\n\nSection::::Risk factors.:Trauma and bullying.\n\nVersus the general population, persons manifesting muscle dysmorphia are more likely to have experienced or observed traumatic events like sexual assault or domestic violence, or to have sustained adolescent bullying and ridicule for perceived deficiencies such as smallness, weakness, poor athleticism, or intellectual inferiority. Increased body mass may seem to reduce the threat of further mistreatment.\n\nSection::::Risk factors.:Sociopsychological traits.\n", "It is important that hypermobile individuals remain fit - even more so than the average individual - to prevent recurrent injuries. Regular exercise and exercise that is supervised by a physician and physical therapist can reduce symptoms because strong muscles increase dynamic joint stability. Low-impact exercise such as closed chain kinetic exercises are usually recommended as they are less likely to cause injury when compared to high-impact exercise or contact sports.\n\nHeat and cold treatment can help temporarily to relieve the pain of aching joints and muscles but does not address the underlying problems.\n\nSection::::Treatments.:Medication.\n", "Back hyper-extensions on a Roman chair or inflatable ball will strengthen all the posterior chain and will treat hyperlordosis. So too will stiff legged deadlifts and supine hip lifts and any other similar movement strengthening the posterior chain \"without involving the hip flexors\" in the front of the thighs. Abdominal exercises could be avoided altogether if they stimulate too much the psoas and the other hip flexors.\n", "It is also noteworthy that the presence of athetosis in cerebral palsy (as well as other conditions) causes a significant increase in a person’s basal resting metabolic rate. 
It has been observed that those who have cerebral palsy with athetosis require approximately 500 more Calories per day than their non-cerebral palsy non-athetoid counterpart.\n\nSection::::Related disorders.:Pseudoathetosis.\n", "Section::::Signs and symptoms.\n\nPeople with Joint Hypermobility Syndrome may develop other conditions caused by their unstable joints. These conditions include:\n\nBULLET::::- Joint instability causing frequent sprains, tendinitis, or bursitis when doing activities that would not affect others\n\nBULLET::::- Joint pain\n\nBULLET::::- Early-onset osteoarthritis (as early as during teen years)\n\nBULLET::::- Subluxations or dislocations, especially in the shoulder (severe limits on one’s ability to push, pull, grasp, finger, reach, etc., is considered a disability by the US Social Security Administration)\n\nBULLET::::- Knee pain\n\nBULLET::::- Fatigue, even after short periods of exercise\n\nBULLET::::- Back pain, prolapsed discs or spondylolisthesis\n", "Many trainees like to cycle between the two methods in order to prevent the body from adapting (maintaining a progressive overload), possibly emphasizing whichever method more suits their goals; typically, a bodybuilder will aim at sarcoplasmic hypertrophy most of the time but may change to a myofibrillar hypertrophy kind of training temporarily in order to move past a plateau. However, no real evidence has been provided to show that trainees ever reach this plateau, and rather was more of a hype created from \"muscular confusion\".\n\nSection::::Muscle growth.:Nutrition.\n", "Eccentric contraction and oxygen consumption:\n", "Section::::Presentation.\n\nSymptoms associated with central nervous systems disorders are classified into positive and negative categories. Positive symptoms include those that increase muscle activity through hyper-excitability of the stretch reflex (i.e., rigidity and spasticity) where negative symptoms include those of insufficient muscle activity (i.e. weakness) and reduced motor function. Often the two classifications are thought to be separate entities of a disorder; however, some authors propose that they may be closely related.\n\nSection::::Pathophysiology.\n", "Treatment should be based on assessment by the relevant health professionals. For muscles with mild-to-moderate impairment, exercise should be the mainstay of management, and is likely to need to be prescribed by a physical therapist or other health professional skilled in neurological rehabilitation.\n\nMuscles with severe impairment are likely to be more limited in their ability to exercise, and may require help to do this. They may require additional interventions, to manage the greater neurological impairment and also greater secondary complications. These interventions may include serial casting, flexibility exercise such as sustained positioning programs, and medical interventions.\n", "Research has clearly shown that exercise is beneficial for impaired muscles, even though it was previously believed that strength exercise would \"increase\" muscle tone and impair muscle performance further. Also, in previous decades there has been a strong focus on other interventions for impaired muscles, particularly stretching and splinting, but the evidence does not support these as effective. One of the challenges for health professionals working with UMNS movement disorders is that the degree of muscle weakness makes developing an exercise programme difficult. 
For muscles that lack any volitional control, such as after complete spinal cord injury, exercise may be assisted, and may require equipment, such as using a standing frame to sustain a standing position. Often, muscles require specific stimulation to achieve small amounts of activity, which is most often achieved by weight-bearing (e.g. positioning and supporting a limb such that it supports body weight) or by stimulation to the muscle belly (such as electrical stimulation or vibration).\n" ]
[ "People who don't work out should not pack on mass/definition." ]
[ "Muscle can be built through walking, lifting boxes, etc." ]
[ "false presupposition" ]
[ "People who don't work out should not pack on mass/definition." ]
[ "false presupposition" ]
[ "Muscle can be built through walking, lifting boxes, etc." ]
2018-12658
Why can't we turn DisplayOut ports on a laptop into DisplayIn ports?
Those exact ports are wired for one-way use because that's what the market is most willing to pay for: the transmitter hardware behind a video-out port can only send, not receive. But you *certainly can* have a port on a laptop that lets other devices share their display. This can be done in software over a network, or with software support over a USB port. Here's one example: URL_0
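To make the "share a display in software over a network" point concrete, here is a minimal sketch of the sending side of such a setup: it grabs the local screen and streams JPEG frames over TCP, so a receiver on another machine can decode and show them. This is a toy illustration only; Pillow's ImageGrab is assumed to be available (it works on Windows/macOS), the receiver address and frame rate are made-up values, and a matching receiver program is assumed to exist.

```python
# Toy "use another machine's screen as a display" sender: streams the
# local screen as length-prefixed JPEG frames over TCP.
import io
import socket
import struct
import time

from PIL import ImageGrab  # screen capture (Windows/macOS)

RECEIVER = ("192.168.1.50", 5900)  # hypothetical receiver address


def stream_screen(fps: int = 10) -> None:
    with socket.create_connection(RECEIVER) as sock:
        while True:
            frame = ImageGrab.grab().convert("RGB")  # capture whole screen
            buf = io.BytesIO()
            frame.save(buf, format="JPEG", quality=70)
            data = buf.getvalue()
            # Length-prefix each frame so the receiver can split the stream.
            sock.sendall(struct.pack("!I", len(data)) + data)
            time.sleep(1 / fps)


if __name__ == "__main__":
    stream_screen()
```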
[ "Docking connectors for laptop computers are usually embedded into a mechanical device that supports and aligns the laptop and sports various single-function ports and a power source that are aggregated into the docking connector. Docking connectors would carry interfaces such as keyboard, serial, parallel, and video ports from the laptop and supply power to it.\n\nSection::::Mobile devices.\n\nMany mobile devices feature a dock connector.\n", "However, in 2013 VESA announced that after investigating reports of malfunctioning DisplayPort devices, it had discovered that a large number of non-certified vendors were manufacturing their DisplayPort cables with the DP_PWR pin connected:\n\nThe stipulation that the DP_PWR wire be omitted from standard DisplayPort cables was not present in the DisplayPort1.0 standard. However, DisplayPort products (and cables) did not begin to appear on the market until 2008, long after version 1.0 had been replaced by version 1.1. The DisplayPort1.0 standard was never implemented in commercial products.\n\nSection::::Resolution and refresh frequency limits.\n", "Some KVM switches (keyboard-video-mouse) and video extenders handle DDC traffic incorrectly, making it necessary to disable monitor plug and play features in the operating system, and maybe even physically remove pin 12 (serial data pin) from the analog VGA cables that connect such device to multiple PCs.\n", "Standard DisplayPort cable connections do not use the DP_PWR pin. Connecting the DP_PWR pins of two devices directly together through a cable can create a short circuit which can potentially damage devices, since the DP_PWR pins on two devices are unlikely to have exactly the same voltage (especially with a ±10% tolerance). For this reason, the DisplayPort1.1 and later standards specify that passive DisplayPort-to-DisplayPort cables must leave pin 20 unconnected.\n", "On a typical laptop there are several USB ports, an external monitor port (VGA, DVI, HDMI or Mini DisplayPort), an audio in/out port (often in form of a single socket) is common. It is possible to connect up to three external displays to a 2014-era laptop via a single Mini DisplayPort, utilizing multi-stream transport technology. Apple, in a 2015 version of its MacBook, transitioned from a number of different I/O ports to a single USB-C port. This port can be used both for charging and connecting a variety of devices through the use of aftermarket adapters. Google, with its updated version of Chromebook Pixel, shows a similar transition trend towards USB-C, although keeping older USB Type-A ports for a better compatibility with older devices. Although being common until the end of the 2000s decade, Ethernet network port are rarely found on modern laptops, due to widespread use of wireless networking, such as Wi-Fi. Legacy ports such as a PS/2 keyboard/mouse port, serial port, parallel port, or Firewire are provided on some models, but they are increasingly rare. On Apple's systems, and on a handful of other laptops, there are also Thunderbolt ports, but Thunderbolt 3 uses USB-C. 
Laptops typically have a headphone jack, so that the user can connect external headphones or amplified speaker systems for listening to music or other audio.\n", "Most companies that produce laptops with such breakout ports also offer simpler adapters that grant access to one or two of the buses consolidated in them at a time.\n\nSection::::Types.:OEM/Proprietary Dock.\n", "Since the introduction of Windows 10 April 2018 Update, and due to changes in the WDDM, it became possible to use the same dual graphics in laptops. For example, it allows you to run programs / games on a more powerful video card, and display the image via the built-in graphics directly on the internal (PCI-E) or external bus, without having to connect the monitor to a powerful video card. It can also act as a solution to the problem if there is no VGA video output on the video card, and it is present on the motherboard.\n\nSection::::History.:WDDM 2.5.\n", "A special \"PRINT\" command also existed to achieve the same effect. Microsoft Windows still refers to the ports in this manner in many cases, though this is often fairly hidden.\n\nIn SCO UNIX and Linux, the first parallel port is available via the filesystem as /dev/lp0. Linux IDE devices can use a \"paride\" (parallel port IDE) driver.\n\nSection::::Notable consumer products.\n\nBULLET::::- The Iomega ZIP drive\n\nBULLET::::- The Snappy Video SnapShot video capture device\n\nBULLET::::- MS-DOS 6.22's INTERLNK and INTERSRV drive sharing utility\n\nBULLET::::- The Covox Speech Thing audio device\n\nSection::::Current use.\n", "BULLET::::- Apple's Dual-Link DVI or VGA adapters are relatively large and expensive compared to past adapters, and customers have reported problems with them, such as being unable to connect to an external display. Monitors connected to a Mini DisplayPort via these adaptors may have resolution problems or not \"wake up\" from sleep.\n", "Section::::Companion standards.:iDP.\n", ", SCSI interfaces had become impossible to find for laptop computers. Adaptec had years before produced PCMCIA parallel SCSI interfaces, but when PCMCIA was superseded by the ExpressCard Adaptec discontinued their PCMCIA line without supporting ExpressCard. Ratoc produced USB and Firewire to parallel SCSI adaptors, but ceased production when the integrated circuits required were discontinued. Drivers for existing PCMCIA interfaces were not produced for newer operating systems.\n", "A separate piece of hardware, called a programmer is required to connect to an I/O port of a PC on one side and to the PIC on the other side. A list of the features for each major programming type are:\n\nBULLET::::1. Parallel port - large bulky cable, most computers have only one port and it may be inconvenient to swap the programming cable with an attached printer. Most laptops newer than 2010 do not support this port. Parallel port programming is very fast.\n", "Section::::Companion standards.:DockPort.\n\n\"DockPort\", formerly known as \"Lightning Bolt\", is an extension to DisplayPort to include USB 3.0 data as well as power for charging portable devices from attached external displays. Originally developed by AMD and Texas Instruments, it has been announced as a VESA specification in 2014.\n\nSection::::Companion standards.:USB-C.\n", "BULLET::::- In Smart Display OS 1.0, the display would lock the host PC to it while in use. 
Microsoft variously attributed this to licensing issues (that Windows XP Professional was licensed for one user per running copy ) and resource management problems. The requirements of licensing — not to allow the devices to work standalone, not to allow the device to connect to the host PC while the PC's main screen was active and not to allow multiple Smart Displays to control one PC — were widely derided in the press.\n", "BULLET::::- Unavailable on USB-C – The \"DisplayPort Alternate Mode\" specification for sending DisplayPort signals over a USB-C cable does not include support for the dual-mode protocol. As a result, DP-to-DVI and DP-to-HDMI passive adapters do not function when chained from a USB-C to DP adapter.\n\nSection::::Features.:Multi-Stream Transport (MST).\n", "An early consumer WQXGA monitor was the 30-inch Apple Cinema Display, unveiled by Apple in June 2004. At the time, dual-link DVI was uncommon on consumer hardware, so Apple partnered with Nvidia to develop a special graphics card that had two dual-link DVI ports, allowing simultaneous use of two 30-inch Apple Cinema Displays. The nature of this graphics card, being an add-in AGP card, meant that the monitors could only be used in a desktop computer, like the Power Mac G5, that could have the add-in card installed, and could not be immediately used with laptop computers that lacked this expansion capability.\n", "\"Micro DisplayPort\" would have targeted systems that need ultra-compact connectors, such as phones, tablets and ultra-portable notebook computers. This standard would have been physically smaller than the currently available Mini DisplayPort connectors. The standard was expected to be released by Q2 2014.\n\nThis project seems aborted to be replaced by DisplayPort Alt Mode for USB Type-C Standard.\n\nSection::::Companion standards.:DDM.\n\n\"Direct Drive Monitor\" (DDM) 1.0 standard was approved in December 2008. It allows for controller-less monitors where the display panel is directly driven by the DisplayPort signal, although the available resolutions and color depth are limited to two-lane operation.\n", "Intel and AMD published a press release in December 2010 stating they would no longer support the LVDS LCD-panel interface in their product lines by 2013. They are promoting Embedded DisplayPort and Internal DisplayPort as their preferred solution. However, the LVDS LCD-panel interface has proven to be the lowest cost method for moving streaming video from a video processing unit to a LCD-panel timing controller within a TV or notebook, and in February 2018 LCD TV and notebook manufacturers continue to introduce new products using the LVDS interface.\n\nSection::::Comparing serial and parallel data transmission.\n", "The initial implementation of ADC on some models of Power Mac G4s involved the removal of DVI connectors from these computers. This change necessitated a passive ADC to DVI adapter to use a DVI monitor.\n", "Docking station\n\nIn computing a docking station or port replicator (hub) or dock provides a simplified way of \"plugging-in\" a laptop computer to common peripherals. Because a wide range of dockable devices—from mobile telephones to wireless mice—have different connectors, power signaling, and uses, docks are not standardized and are therefore often designed with a specific type of device in mind. This technology is also used on the Nintendo Switch hybrid video game console. 
\n", "This type of dock was manufactured by both Apple and many third parties, and gave the PowerBook Duo up to three extra ports in a minimal configuration. Examples include floppy, SCSI, video and Ethernet docks, each typically included one ADB port as well. This was the least expensive, and most basic of the docks. This type of dock allowed the Duo's internal LCD to be used as well, and could run on the Duo's internal battery for a reduced amount of time. Popular due to the minimal impact in accessories that must be carried with the Duo, they offered a practical alternative to emergency hard disk and software situations and task-specific needs.\n", "LCDs, different from cathode ray tube (CRT) displays, have to use digital signaling to show each pixel. While notebook PCs started replacing CRT displays to LCDs, pixel data were transmitted as parallel data, interface systems found the problem that more than 20 cables were required to transmit data with 18 bits color depth for each 6-bit RGB color as well as lack of space for cables and difficulty of adjusting skews.\n", "There are PCI (and PCI-express) cards that provide parallel ports. There are also some print servers that provide interface to parallel port through network. USB-to-EPP chips can also allow other non-printer devices to continue to work on modern computers without a parallel port.\n", "HPU parallelism\n\nSince many (possibly hundreds) HPUs would be required to drive a single light-field display, it is important that the HPU be an independent processor, requiring minimal support logic and interconnect. The HPU interconnect framework should provide scene, command and sync buffering and relay throughout the topology. Ideally, neither the host system nor the individual HPUs would have knowledge of the interconnect topology or even the depth and breadth of the system.\n\nHogel parallelism (multivew point rendering)\n", "Since 2013, with the release of various ExpressCard and Thunderbolt-to-PCI Express adapters, it is again possible to use SCSI devices on laptops, by installing PCI Express SCSI host adapters using a laptop's ExpressCard or Thunderbolt port.\n\nSection::::Standards.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00359
Why do the colors on your screen invert when you look at it from an off angle?
Good question, but it needs a long answer. You must be using an LCD screen (CRTs are, of course, essentially extinct). The important thing to realize about LCDs is that light passes through several layers before it hits your eyes. There are different designs, but the simplest is: backlight, a polarizer, the liquid-crystal layer, then a second polarizer rotated 90 degrees from the first. Something not many people know is that light has a polarization angle associated with it. Light from the backlight hits the first polarizer, which only passes light polarized at one angle. The light then hits the liquid crystal and gets "twisted" by an amount that depends on the voltage applied to that cell. The second polarizer then passes light according to how much it was twisted: no twist and it's all blocked, a little twist and a little gets through, a full ninety-degree twist and it all gets through. In a color LCD this happens three times per pixel, behind red, green, and blue color filters. Now, the backlight is always on, and not all of its light is absorbed in the screen. So LCDs are designed to focus the light you want into a useful viewing cone and divert the light you don't want off to the sides. That's why the color is inverted off-angle: it's specifically the opposite of the RGB value the pixel wanted. If a pixel wants 90% red, 10% green, and 20% blue, that is what goes into the viewing cone, while the leftover 10% red, 90% green, and 80% blue gets thrown out to the sides.
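The "opposite RGB code" at the end can be shown with a few lines of arithmetic. This is a minimal sketch of that idea only, not a physical model of the panel: it takes the intended sub-pixel intensities and returns their complements, which is what the answer says gets diverted off-axis.

```python
# Complement of an RGB pixel: the light "thrown out to the side" in the
# answer's example is whatever fraction the on-axis viewer doesn't get.
def off_axis_color(rgb):
    """rgb: (red, green, blue) intensities in [0, 1]."""
    return tuple(round(1.0 - channel, 2) for channel in rgb)


# The answer's example: 90% red, 10% green, 20% blue wanted on-axis.
wanted = (0.90, 0.10, 0.20)    # (R, G, B)
print(off_axis_color(wanted))  # -> (0.1, 0.9, 0.8): 10% R, 90% G, 80% B
```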
[ "TN displays suffer from limited viewing angles, especially in the vertical direction. Colors will shift when viewed off-perpendicular. In the vertical direction, colors will shift so much that they will invert past a certain angle.\n", "This way of proceeding is suitable only when the display device does not exhibit \"loading effects\", which means that the luminance of the test pattern is varying with the size of the test pattern. Such loading effects can be found in CRT-displays and in PDPs. A small test pattern (e.g. 4% window pattern) displayed on these devices can have significantly higher luminance than the corresponding full-screen pattern because the supply current may be limited by special electronic circuits.\n\nSection::::Full-swing contrast.\n", "A fortunate side-effect of inversion (see above) is that, for most display material, what little cross-talk there is largely cancelled out. For most practical purposes, the level of crosstalk in modern LCDs is negligible.\n\nCertain patterns, particularly those involving fine dots, can interact with the inversion and reveal visible cross-talk. If you try moving a small Window in front of the inversion pattern (above) which makes your screen flicker the most, you may well see cross-talk in the surrounding pattern.\n\nDifferent patterns are required to reveal cross-talk on different displays (depending on their inversion scheme).\n", "BULLET::::2. The LCD moves around two axes which are at a right angle to each other, so that the screen both tilts and swivels. This type is called \"swivel screen\". Other names for this type are \"vari-angle screen\", \"fully articulated screen\", \"fully articulating screen\", \"rotating screen\", \"multi-angle screen\", \"variable angle screen\", \"flip-out-and-twist screen\", \"twist-and-tilt screen\" and \"swing-and-tilt screen\".\n", "When different screens are combined, a number of distracting visual effects can occur, including the edges being overly emphasized, as well as a moiré pattern. This problem can be reduced by rotating the screens in relation to each other. This screen angle is another common measurement used in printing, measured in degrees clockwise from a line running to the left (9 o'clock is zero degrees).\n", "Screen angle\n\nIn offset printing, the screen angle is the angle at which the halftones of a separated color is made output to a lithographic film, hence, printed on final product media.\n\nSection::::Why screen angles should differ.\n", "This control adjusts CRT focus to obtain the sharpest, most-detailed trace. In practice, focus must be adjusted slightly when observing very different signals, so it must be an external control. The control varies the voltage applied to a focusing anode within the CRT. Flat-panel displays do not need this control. \n\nSection::::Features and uses.:Front panel controls.:Intensity control.\n", "Pixel shift for displays is a method to prevent static images (such as station bugs and video game HUD elements) from causing image retention and screen burn-in in susceptible display types such as plasma and OLED. The entire video frame is moved periodically (vertically and/or horizontally) so there are effectively no static images. One definition reads: \"the image rotates in a circle in a way imperceptible to the viewer with a defined rhythm and pixel interval.\"\n", "In offset printing, colors are output on separate lithographic plates. 
Failing to use the correct set of angles to output every color may lead to a sort of optical noise called a moiré pattern which may appear as bands or waves in the final print. There is another disadvantage associated with incorrect sets of angle values, as the colors will look dimmer due to overlapping.\n\nWhile the angles depend on how many colors are used and the preference of the press operator, typical CMYK process printing uses any of the following screen angles:\n", "This adjusts trace brightness. Slow traces on CRT oscilloscopes need less, and fast ones, especially if not often repeated, require more brightness. On flat panels, however, trace brightness is essentially independent of sweep speed, because the internal signal processing effectively synthesizes the display from the digitized data, or from the kitty picture.\n\nSection::::Features and uses.:Front panel controls.:Astigmatism.\n", "If the reflective properties of the projection screen (usually depending on direction) are included in the measurement, the luminance reflected from the centers of the rectangles has to be measured for a (set of) specific directions of observation.\n\nLuminance, contrast and chromaticity of LCD-screens is usually varying with the direction of observation (i.e. viewing direction). The variation of electro-optical characteristics with viewing direction can be measured sequentially by mechanical scanning of the viewing cone (\"gonioscopic\" approach) or by simultaneous measurements based on conoscopy.\n\nSection::::See also.\n\nBULLET::::- Contrast (vision)\n\nSection::::External links.\n\nBULLET::::- Charles Poynton:\" Reducing eyestrain from video and computer monitors\"\n", "In a cathode-ray tube (CRT), such as that for an oscilloscope, a beam of electrons is accelerated by an electromagnet coil around the neck of the tube. The electrons' speed (and therefore energy, and therefore illuminating effect) is proportional to the current in the coil at the time the electrons pass through it. Hence, to implement an anti-aliasing scheme one controls pixel intensity by varying CRT neck coil current in accordance with the scheme. The result is to provide variable illumination intensity for each pixel, so that the pixels closest to the trajectory of the data points on the screen are made brighter, and those farther away, dimmer. The procedure improves the appearance of the display by providing a continuous-appearing and non-jumping waveform.\n", "Today's displays, being driven by digital signals (such as DVI, HDMI and DisplayPort), and based on newer fixed-pixel digital flat panel technology (such as liquid crystal displays), can safely assume that all pixels are visible to the viewer. On digital displays driven from a digital signal, therefore, no adjustment is necessary because all pixels in the signal are unequivocally mapped to physical pixels on the display. As overscan reduces picture quality, it is undesirable for digital flat panels; therefore, is preferred. When driven by analog video signals such as VGA, however, displays are subject to timing variations and cannot achieve this level of precision.\n", "This control may instead be called \"shape\" or \"spot shape\". It adjusts the relative voltages on two of the CRT anodes, changing a displayed spot from elliptical in one plane through a circular spot to an ellipse at 90 degrees to the first. This control may be absent from simpler oscilloscope designs or may even be an internal control. 
It is not necessary with flat panel displays.\n\nSection::::Features and uses.:Front panel controls.:Beam finder.\n", "When projecting images onto a completely flat screen, the distance light has to travel from its point of origin (i.e., the projector) increases the farther away the destination point is from the screen's center. This variance in the distance traveled results in a distortion phenomenon known as the pincushion effect, where the image at the left and right edges of the screen becomes bowed inwards and stretched vertically, making the entire image appear blurry.\n", "Section::::Causes.:Pixel vignetting.\n\nPixel vignetting only affects digital cameras and is caused by angle-dependence of the digital sensors. Light incident on the sensor at normal incident produces a stronger signal than light hitting it at an oblique angle. Most digital cameras use built-in image processing to compensate for optical vignetting and pixel vignetting when converting raw sensor data to standard image formats such as JPEG or TIFF. The use of offset microlenses over the image sensor can also reduce the effect of pixel vignetting.\n\nSection::::Post-shoot.\n", "BULLET::::5. Calculate the resulting \"static contrast\" for the two test patterns using one of the metrics listed above (CR,C or K).\n\nWhen luminance and/or chromaticity are measured before the optical response has settled to a stable steady state, some kind of \"transient contrast\" has been measured instead of the \"static contrast\".\n\nSection::::Transient contrast.\n\nWhen the image content is changing rapidly, e.g. during the display of video or movie content, the optical state of the display may not reach the intended stable steady state because of slow response and thus the apparent contrast is reduced if compared to the static contrast.\n", "An example of pixel shape affecting \"resolution\" or perceived sharpness: displaying more information in a smaller area using a higher resolution makes the image much clearer or \"sharper\". However, most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to \"fix\" the non-native resolution input into the display's native resolution output.\n", "Some LCDs may use temporal dithering to achieve a similar effect. By alternating each pixel's color value rapidly between two approximate colors in the panel's color space (also known as Frame Rate Control), a display panel which natively supports only 18-bit color (6 bits per channel) can represent a 24-bit \"true\" color image (8 bits per channel).\n", "BULLET::::1. Apply the first test pattern to the electrical interface of the display under test and wait until the optical response has settled to a stable steady state,\n\nBULLET::::2. Measure the luminance and/or the chromaticity of the first test pattern and record the result,\n\nBULLET::::3. Apply the second test pattern to the electrical interface of the display under test and wait until the optical response has settled to a stable steady state,\n\nBULLET::::4. 
Measure the luminance and/or the chromaticity of the second test pattern and record the result,\n", "A metric for \"color contrast\" often used in the electronic displays field is the color difference ΔE*uv or ΔE*ab.\n\nSection::::Full-screen contrast.\n\nDuring measurement of the luminance values used for evaluation of the contrast, the active area of the display screen is often completely set to one of the optical states for which the contrast is to be determined, e.g. completely white (R=G=B=100%) and completely black (R=G=B=0%) and the luminance is measured one after the other (time sequential).\n", "It is important to note that when there is a repeated number in the data (such as two 72s) then the plot must reflect such (so the plot would look like 7 | 2 2 5 6 7 when it has the numbers 72 72 75 76 77).\n\nRounding may be needed to create a stem-and-leaf display. Based on the following set of data, the stem plot below would be created:\n", "Modern arcade emulators are able to handle this difference in screen orientation by dynamically changing the screen resolution to allow the portrait oriented game to resize and fit a landscape display, showing wide empty black bars on the sides of the portrait-on-landscape screen.\n\nPortrait orientation is still used occasionally within some arcade and home titles (either giving the option of using black bars or rotating the display), primarily in the vertical shoot 'em up genre due to considerations of aesthetics, tradition and gameplay.\n\nSection::::Modern display rotation methods.\n", "In LCD screens, the LCD itself does not flicker, it preserves its opacity unchanged until updated for the next frame. However, in order to prevent accumulated damage LCD displays quickly alternate the voltage between positive and negative for each pixel, which is called 'polarity inversion'. Ideally, this wouldn't be noticeable because every pixel has the same brightness whether a positive or a negative voltage is applied. In practice, there is a small difference, which means that every pixel flickers at about 30 Hz. Screens that use opposite polarity per-line or per-pixel can reduce this effect compared to when the entire screen is at the same polarity, sometimes the type of screen is detectable by using patterns designed to maximize the effect.\n", "Given a desired display-system gamma, if the observer sees the same brightness in the checkered part and in the homogeneous part of every colored area, then the gamma correction is approximately correct. In many cases the gamma correction values for the primary colors are slightly different.\n\nSetting the color temperature or white point is the next step in monitor adjustment.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-06153
Why are some memories easily forgettable while others are impossible to forget?
Several factors play a role: the age you were when you had the experience, the intensity and duration of the emotions you felt during the events, what was going on in the rest of your environment, and whether certain objects, people, smells, etc. were around or involved, since those then become associated with that experience.
[ "Section::::In popular culture.\n\nMemory phenomena are rich sources of storylines and novel situations in popular media. Two phenomena that appear regularly are total recall abilities and amnesia.\n\nSection::::In popular culture.:Total recall.\n", "Section::::Applications.:Source confusion in later life.\n", "Topographic memory involves the ability to orient oneself in space, to recognize and follow an itinerary, or to recognize familiar places. Getting lost when traveling alone is an example of the failure of topographic memory.\n\nFlashbulb memories are clear episodic memories of unique and highly emotional events. People remembering where they were or what they were doing when they first heard the news of President Kennedy's assassination, the Sydney Siege or of 9/11 are examples of flashbulb memories.\n\nAnderson (1976) divides long-term memory into \"declarative (explicit)\" and \"procedural (implicit)\" memories.\n\nSection::::Types.:By information type.:Declarative.\n", "Section::::Causes.:Neurological causes.:False memories and PET scans.\n", "Section::::Theory.\n", "Remember and know responses are quite often differentiated by their functional correlates in specific areas in the brain. For instance, during \"remember\" situations it is found that there is greater EEG activity than \"knowing\", specifically, due to an interaction between frontal and posterior regions of the brain. It is also found that the hippocampus is differently activated during recall of \"remembered\" (vs. familiar) stimuli. On the other hand, items that are only \"known\", or seem familiar, are associated with activity in the rhinal cortex.\n\nSection::::Origins.\n", "Section::::Functional sensitivity.:False memories.\n", "Section::::Important variables.:Confidence (or lack thereof).\n", "Section::::Components of misattribution.:Source confusion.\n", "Section::::Causes.:Neurological causes.\n\nSection::::Causes.:Neurological causes.:Neurological basis of false recognition.\n", "Section::::Phenomena.:The Face Advantage.\n", "The remember-know paradigm began its journey in 1985 from the mind of Endel Tulving. He suggested that there are only two ways in which an individual can access their past. For instance, we can recall what we did last night by simply traveling back in time through memory and episodically imagining what we did (remember) or we can know something about our past such as a phone number, but have no specific memory of where the specific memory came from (know). Recollection is based on the episodic memory system, and familiarity is based on the semantic memory system. Tulving argued that the remember-know paradigm could be applied to all aspects of recollection.\n", "Section::::Factors that affect recall.:Interference.\n", "Activation levels to secondary nodes is also determined by the strength of the association to the primary node. Some connections have greater association with the primary node (e.g. fire truck and fire or red, versus fire truck and hose or Dalmatian), and thus will receive a greater portion of the divided activation than less associated connections. Thus, associations that receive less activation will be out-competed by associations with stronger connections, and may fail to be brought into awareness, again causing a memory error.\n\nSection::::Causes.:Cognitive factors.:Connection density.\n", "Lateral parietal cortex damage (either dextral or sinistral) impairs performance on recognition memory tasks, but does not affect source memories. 
What is remembered is more likely to be of the 'familiar', or 'know' type, rather than 'recollect' or 'remember', indicating that damage to the parietal cortex impairs the conscious experience of memory.\n", "As items co-reside in the short-term store, their associations are constantly being updated in the long-term store matrix. The strength of association between two items depends on the amount of time the two memory items spend together within the short-term store, known as the contiguity effect. Two items that are contiguous have greater associative strength and are often recalled together from long-term storage.\n", "Section::::Causes.:Physiological factors.:Emotion.\n", "Section::::Factors affecting retrospective memory.:Disease.:Korsakoff's syndrome.\n", "Misattribution is likely to occur when individuals are unable to monitor and control the influence of their attitudes, toward their judgments, at the time of retrieval. Thus, memory is adapted to retain information that is most likely to be needed in the environment in which it operates. Therefore, any misattribution observed is likely to be a reflection of current attitudes.\n", "An experience must be very arousing to an individual for it to be consolidated as an emotional memory, and this arousal can be negative, thus causing a negative memory to be strongly retained. Having a long-lasting extremely vivid and detailed memory for negative events can cause a great deal of anxiety, as seen in post traumatic stress disorders. Individuals with PTSD endure flashbacks to traumatic events, with much clarity. Many forms of psychopathology show a tendency to maintain emotional experiences, especially negative emotional experiences, such as depression and generalized anxiety disorder. Patients with phobias are unable to cognitively control their emotional response to the feared stimuli.\n", "Section::::Experimental research.:Cryptomnesia.\n", "The remember-know paradigm has been used in studies that focus on the idea of a reminiscence bump and the age effects on autobiographical memory. Previous studies suggested old people had more \"know\" than \"remember\" and it was also found that younger individuals often excelled in the \"remember\" category but lacked in the \"know\".\n", "These two terms are commonly used together but it is also stated that a solid review of the findings for this link has not yet been completed.\n\nFragmentation of memory is common in two dissociative disorders.\n", "Remembering (recollection) accesses memory for separate contextual details (e.g. screen location and font size); i.e. involves retrieval of a particular context configuration.\n\nSection::::Testing methods and models.:Signal detection model.\n", "For this type of measurement, a participant has to identify material that was previously learned. The participant is asked to remember a list of material. Later on they are shown the same list of material with additional information and they are asked to identify the material that was on the original list. The more they recognize, the less information is forgotten.\n\nSection::::Theories.\n\nThe four main theories of forgetting apparent in the study of psychology are as follows:\n\nSection::::Theories.:Cue-dependent forgetting.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-11676
Why can't we create a machine that uses carbohydrates as a source of energy instead of electricity, oil/gas, or steam?
We absolutely can. One way to do this is a *fuel cell* and another way is just to burn plant material.
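For a rough sense of scale, here is a back-of-the-envelope comparison of the chemical energy in a carbohydrate versus gasoline. The heat of combustion of glucose (~2805 kJ/mol) and the energy density of gasoline (~46 MJ/kg) are approximate textbook values, so treat the output as order-of-magnitude only.

```python
# Back-of-the-envelope: combustion energy of glucose vs. gasoline.
GLUCOSE_KJ_PER_MOL = 2805.0  # approx. heat of combustion of glucose
GLUCOSE_G_PER_MOL = 180.16   # molar mass of C6H12O6

glucose_mj_per_kg = GLUCOSE_KJ_PER_MOL / GLUCOSE_G_PER_MOL  # kJ/g == MJ/kg
gasoline_mj_per_kg = 46.0                                   # approx. MJ/kg

print(f"glucose : {glucose_mj_per_kg:.1f} MJ/kg")   # ~15.6 MJ/kg
print(f"gasoline: {gasoline_mj_per_kg:.1f} MJ/kg")
print(f"gasoline/glucose: {gasoline_mj_per_kg / glucose_mj_per_kg:.1f}x")
```

So carbohydrates carry roughly a third of gasoline's energy per kilogram, which is why they are workable as fuel (in fuel cells or by burning) but less energy-dense than petroleum.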
[ "Early work with biofuel cells, which began in the early 20th century, was purely of the microbial variety. Research on using enzymes directly for oxidation in biofuel cells began in the early 1960s, with the first enzymatic biofuel cell being produced in 1964. This research began as a product of NASA's interest in finding ways to recycle human waste into usable energy on board spacecraft, as well as a component of the quest for an artificial heart, specifically as a power source that could be put directly into the human body. These two applications – use of animal or vegetable products as fuel and development of a power source that can be directly implanted into the human body without external refueling – remain the primary goals for developing these biofuel cells. Initial results, however, were disappointing. While the early cells did successfully produce electricity, there was difficulty in transporting the electrons liberated from the glucose fuel to the fuel cell's electrode and further difficulties in keeping the system stable enough to produce electricity at all due to the enzymes’ tendency to move away from where they needed to be in order for the fuel cell to function. These difficulties led to an abandonment by biofuel cell researchers of the enzyme-catalyst model for nearly three decades in favor of the more conventional metal catalysts (principally platinum), which are used in most fuel cells. Research on the subject did not begin again until the 1980s after it was realized that the metallic-catalyst method was not going to be able to deliver the qualities desired in a biofuel cell, and since then work on enzymatic biofuel cells has revolved around the resolution of the various problems that plagued earlier efforts at producing a successful enzymatic biofuel cell.\n", "Kenya may be a good candidate for testing out these systems because of its progressive and relatively well-funded department of agriculture, including the Kenya Agricultural Research Center, which provides funding and oversight to many projects investigating experimental methods and technologies.\n", "Another way of energy harvesting is through the oxidation of blood sugars. These energy harvesters are called biobatteries. They could be used to power implanted electronic devices (e.g., pacemakers, implanted biosensors for diabetics, implanted active RFID devices, etc.). At present, the Minteer Group of Saint Louis University has created enzymes that could be used to generate power from blood sugars. However, the enzymes would still need to be replaced after a few years. In 2012, a pacemaker was powered by implantable biofuel cells at Clarkson University under the leadership of Dr. Evgeny Katz.\n\nSection::::Devices.:Tree-based.\n", "Microbial fuel cells can create energy when bacteria breaks down organic material, this process a charge that is transferred to the anode. Taking something like human saliva, which has lots of organic material, can be used to power a micro-sized microbial fuel cell. This can produce a small amount of energy to run on-chip applications. This application can be used in things like biomedical devices and cell phones.\n", "Section::::Feasibility of enzymes as catalysts.\n\nWith respect to fuel cells, enzymes have several advantages to their incorporation. An important enzymatic property to consider is the driving force or potential necessary for successful reaction catalysis. 
Many enzymes operate at potentials close to their substrates which is most suitable for fuel cell applications.\n", "BULLET::::- Gastrobotics energy sources mainly focuses on the use of a microbial fuel cell. Microbial fuel cells require an oxidation reduction reaction to generate electricity. A microbial fuel cell uses bacteria, which must be fed. The fuel cell typically contains two compartments, the anode and cathode terminals which are separated by an ion exchange membrane.\n", "A drawback with the use of enzymes is size; given the large size of enzymes, they yield a low current density per unit electrode area due to the limited space. Since it is not possible to reduce enzyme size, it has been argued that these types of cells will be lower in activity. One solution has been to use three-dimensional electrodes or immobilization on conducting carbon supports which provide high surface area. These electrodes are extended into three-dimensional space which greatly increases the surface area for enzymes to bind thus increasing the current.\n\nSection::::Hydrogenase-based biofuel cells.\n", "There are several difficulties to consider associated with the incorporation of hydrogenase in biofuel cells. These factors must be taken into account to produce an efficient fuel cell.\n\nSection::::Hydrogenase-based biofuel cells.:Challenges.:Enzyme immobilization.\n", "The emerging technology was demonstrated in July 2009 by CEO Eric Giler at the TED Global Conference held in Oxford. There he refers to the original idea, first applied by the physicist Nikola Tesla between his coils, and shows a WiTricity power unit powering a television as well as three different cell phones, the initial problem that inspired Soljacic to get involved with the project.\n\nAutomobile manufacturer Toyota made an investment in WiTricity in April 2011.\n\nIn September 2012, the company announced it would make a $1000 demonstration kit available to interested parties, to promote development of commercial applications.\n", "In 2010, a genetically engineered yeast strain was developed to produce its own cellulose-digesting enzymes. Assuming this technology can be scaled to industrial levels, it would eliminate one or more steps of cellulolysis, reducing both the time required and costs of production.\n", "BULLET::::- Biogas is another potential source of energy, particularly where there is an abundant supply of waste organic matter. A generator (running on biofuels) can be run more efficiently if combined with batteries and an inverter; this adds significantly to capital cost but reduces running cost, and can potentially make this a much cheaper option than the solar, wind and micro-hydro options.\n\nBULLET::::- Dry animal dung fuel can also be used.\n", "Since the hydrogenase-based biofuel cell hosts a redox reaction, hydrogenase must be immobilized on the electrode in such a way that it can exchange electrons directly with the electrode to facilitate the transfer of electrons. This proves to be a challenge in that the active site of hydrogenase is buried in the center of the enzyme where the FeS clusters are used as an electron relay to exchange electrons with its natural redox partner.\n", "This concept was first postulated by Funk and Reinstrom (1966) as a maximally efficient way to produce fuels (e.g. hydrogen, ammonia) from stable and abundant species (e.g. water, nitrogen) and heat sources. 
Although fuel availability was scarcely considered before the oil crisis efficient fuel generation was an issue in important niche markets. As an example, in the military logistics field, providing fuels for vehicles in remote battlefields is a key task. Hence, a mobile production system based on a portable heat source (a nuclear reactor was considered) was being investigated with utmost interest.\n", "Most proposed water-fuelled cars rely on some form of electrolysis to separate water into hydrogen and oxygen and then recombine them to release energy; however, because the energy required to separate the elements will always be at least as great as the useful energy released, this cannot be used to produce net energy.\n\nSection::::Claims of functioning water-fuelled cars.\n\nSection::::Claims of functioning water-fuelled cars.:Garrett electrolytic carburetor.\n", "Technologies of hydrogen economy, batteries, compressed air energy storage, and flywheel energy storage address the energy storage problem but not the source of primary energy. Other technologies like fission power, fusion power, and solar power address the problem of a source of primary energy but not energy storage. Vegetable oil addresses both the source of primary energy and of energy storage. The cost and weight to store a given amount of energy as vegetable oil is low compared to many of the potential replacements for fossil fuels.\n\nSection::::Type of vegetable oil.\n", "Enzymatic biofuel cells also have operating requirements not shared by traditional fuel cells. What is most significant is that the enzymes that allow the fuel cell to operate must be “immobilized” near the anode and cathode in order to work properly; if not immobilized, the enzymes will diffuse into the cell's fuel and most of the liberated electrons will not reach the electrodes, compromising its effectiveness. Even with immobilization, a means must also be provided for electrons to be transferred to and from the electrodes. This can be done either directly from the enzyme to the electrode (“direct electron transfer”) or with the aid of other chemicals that transfer electrons from the enzyme to the electrode (“mediated electron transfer”). The former technique is possible only with certain types of enzymes whose activation sites are close to the enzyme's surface, but doing so presents fewer toxicity risks for fuel cells intended to be used inside the human body. Finally, completely processing the complex fuels used in enzymatic biofuel cells requires a series of different enzymes for each step of the ‘metabolism’ process; producing some of the required enzymes and maintaining them at the required levels can pose problems.\n", "This presents problems, as biofuels can use food resources in order to provide mechanical energy for vehicles. Many experts point to this as a reason for growing food prices, particularly US Bio-ethanol fuel production which has affected maize prices. In order to have a low environmental impact, biofuels should be made only from waste products, or from new sources like algae.\n\nSection::::Types.:Electric Motor and Pedal Powered Vehicles.\n", "BULLET::::- Increased electricity costs - to operate the robots, but this can be more than outweighed by reduced labour input.\n", "Possible solutions for greater efficiency of electron delivery include the immobilization of hydrogenase with the most exposed FeS cluster close enough to the electrode or the use of a redox mediator to carry out the electron transfer. 
Direct electron transfer is also possible through the adsorption of the enzyme on graphite electrodes or covalent attachment to the electrode. Another solution includes the entrapment of hydrogenase in a conductive polymer.\n\nSection::::Hydrogenase-based biofuel cells.:Challenges.:Enzyme size.\n", "Whole crops such as maize, Sudan grass, millet, white sweet clover, and many others can be made into silage and then converted into biogas.\n", "There are several challenges which must be met in order to economically produce the desired alkanes such as gasoline. This will only be briefly covered in this article at this time, as it has only just begun.\n\nSection::::Challenges.:Suitable strain.\n", "DME is being developed as a synthetic second generation biofuel (BioDME), which can be manufactured from lignocellulosic biomass. Currently the EU is considering BioDME in its potential biofuel mix in 2030; the Volvo Group is the coordinator for the European Community Seventh Framework Programme project BioDME where Chemrec's BioDME pilot plant based on black liquor gasification is nearing completion in Piteå, Sweden.\n\nSection::::Single fuel source.:Ammonia fuelled vehicles.\n", "It has long been recognized that the huge supply of agricultural cellulose, the lignocellulosic material commonly referred to as \"Nature's polymer\", would be an ideal source of material for biofuels and many other products. Composed of lignin and monomer sugars such as glucose, fructose, arabinose, galactose, and xylose, these constituents are very valuable in their own right. To this point in history, there are some methods commonly used to coax \"recalcitrant\" cellulose to separate or hydrolyse into its lignin and sugar parts, treatment with; steam explosion, supercritical water, enzymes, acids and alkalines. All these methods involve heat or chemicals, are expensive, have lower conversion rates and produce waste materials. In recent years the rise of \"mechanochemistry\" has resulted in the use of ball mills and other mill designs to reduce cellulose to a fine powder in the presence of a catalyst, a common bentonite or kaolinite clay, that will hydrolyse the cellulose quickly and with low energy input into pure sugar and lignin. Still currently only in pilot stage, this promising technology offers the possibility that any agricultural economy might be able to get rid of its requirement to refine oil for transportation fuels. This would be a major improvement in carbon neutral energy sources and allow the continued use of internal combustion engines on a large scale.\n", "Including a heterotroph also provides a solution to the issues of contamination when producing carbohydrates, as competition may limit contaminant species viability. In isolated systems this can be a restriction to the feasibility of large-scale biofuel operations, like algae ponds, where contamination can significantly reduce the desired output.\n", "BULLET::::- 2009: Researchers at the University of Dayton, in Ohio, showed that arrays of vertically grown carbon nanotubes could be used as the catalyst in fuel cells. The same year, a nickel bisdiphosphine-based catalyst for fuel cells was demonstrated.\n\nBULLET::::- 2013: British firm ACAL Energy developed a fuel cell that it said can run for 10,000 hours in simulated driving conditions. It asserted that the cost of fuel cell construction can be reduced to $40/kW (roughly $9,000 for 300 HP).\n" ]
[ "We cannot create a machine to use carbohydrates as a source of energy.", "Carbohydrates can't be used to as a source of energy for machines." ]
[ "We can do this by making a fuel cell or burning plant material. ", "Machines can use carbohydrate energy in the form of fuel cells." ]
[ "false presupposition" ]
[ "We cannot create a machine to use carbohydrates as a source of energy.", "Carbohydrates can't be used to as a source of energy for machines." ]
[ "false presupposition", "false presupposition" ]
[ "We can do this by making a fuel cell or burning plant material. ", "Machines can use carbohydrate energy in the form of fuel cells." ]
2018-17597
How does a stock broker borrow stocks for you to short-sell?
A broker like a person? That’s old school. Basically they work for a bank or an investment house (e.g. Investors Group). Those banks or investment houses have millions of shares of thousands of companies held in trust. That is, they usually have investment management arms that run mutual funds or ETFs, so they hold billions of dollars’ worth of stock. But they don’t actually need to keep all that stock on hand, since there’s an extremely small chance of it all needing to be sold at the same time. One great way to profit from that idle stock is by lending it to people who want to short it. So let’s say you bank with Wells Fargo and trade through a WF broker. If you want to short a stock, he has a system that will find that stock being held by the investment management arm of Wells, and they will transfer it to the broker so he can give it to you to sell on the market. You pay interest on the value of the short sale, which goes to Wells so they can profit. Otherwise, there are also entities called Prime Brokers, which are basically a set of services offered, usually by an investment bank, whose sole purpose is to lend out securities and cash. So if you’re a stock broker who is independent (not affiliated with a bank), you can go to your Prime to get a borrow on a stock to short.
[ "Most brokers allow retail customers to borrow shares to short a stock only if one of their own customers has purchased the stock on margin. Brokers go through the \"locate\" process outside their own firm to obtain borrowed shares from other brokers only for their large institutional customers.\n", "Some of these approaches require short selling stocks; the trader borrows stock from his broker and sells the borrowed stock, hoping that the price will fall and he will be able to purchase the shares at a lower price, thus keeping the difference as their profit. There are several technical problems with short sales - the broker may not have shares to lend in a specific issue, the broker can call for the return of its shares at any time, and some restrictions are imposed in America by the U.S. Securities and Exchange Commission on short-selling (see uptick rule for details). Some of these restrictions (in particular the uptick rule) don't apply to trades of stocks that are actually shares of an exchange-traded fund (ETF).\n", "To sell stocks short in the U.S., the seller must arrange for a broker-dealer to confirm that it can deliver the shorted securities. This is referred to as a \"locate.\" Brokers have a variety of means to borrow stocks to facilitate locates and make good on delivery of the shorted security.\n", "BULLET::::- Upon completion of the sale, the investor typically has a limited time (for example, 3 days in the US) to borrow the shares. If required by law, the investor first ensures that cash or equity is on deposit with his brokerage firm as collateral for the initial short margin requirement. Some short sellers, mainly firms and hedge funds, participate in the practice of naked short selling, where the shorted shares are not borrowed or delivered.\n", "BULLET::::- At any time, the lender may call for the return of his shares, e.g., because he wants to sell them. The borrower must buy shares on the market and return them to the lender (or he must borrow the shares from elsewhere). When the broker completes this transaction automatically, it is called a 'buy-in'.\n\nShort selling stock works similar to buying on margin, therefore also requires a margin account as well:\n\nSection::::Mechanism.:Shorting stock in the U.S..\n", "BULLET::::- The speculator instructs the broker to sell the shares and the proceeds are credited to the broker's account at the firm, on which the firm can earn interest. Generally, the short seller does not earn interest on the short proceeds and cannot use or encumber the proceeds for another transaction.\n", "For example, suppose a broker receives a market order from a customer to buy a large block—say, 400,000 shares—of some stock, but before placing the order for the customer, the broker buys 20,000 shares of the same stock for his own account at $100 per share, then afterward places the customer's order for 400,000 shares, driving the price up to $102 per share and allowing the broker to immediately sell his shares for, say, $101.75, generating a significant profit of $35,000 in just a short time. This $35,000 is likely to be just a part of the additional cost to the customer's purchase caused by the broker's self-dealing.\n", "In short selling, the trader borrows stock (usually from his brokerage which holds its clients' shares or its own shares on account to lend to short sellers) then sells it on the market, betting that the price will fall. 
The trader eventually buys back the stock, making money if the price fell in the meantime and losing money if it rose. Exiting a short position by buying back the stock is called \"covering\". This strategy may also be used by unscrupulous traders in illiquid or thinly traded markets to artificially lower the price of a stock. Hence most markets either prevent short selling or place restrictions on when and how a short sale can occur. The practice of naked shorting is illegal in most (but not all) stock markets.\n", "In an example transaction, a large institutional money manager with a position in a particular stock allows those securities to be borrowed by a financial intermediary, typically an investment bank, prime broker or other broker-dealer, acting on behalf of one or more clients. After borrowing the stock, the client - the short seller - could sell it short. Their objective is to buy the stock back at a lower price thereby creating a profit. By selling the borrowed stocks, the short seller generates cash that becomes collateral paid to the lender. The cash value of the collateral would be marked-to-market on a daily basis so that it exceeds the value of the loan by at least 2%. NB: 2% is the standard margin rate in the US, whereas 5% is more usual in Europe.\n", "Short selling is a form of speculation that allows a trader to take a \"negative position\" in a stock of a company. Such a trader first \"borrows\" shares of that stock from their owner (the lender), typically via a bank or a prime broker under the condition that he will return it on demand. Next, the trader sells the borrowed shares and delivers them to the buyer who becomes their new owner. The buyer is typically unaware that the shares have been sold short: his transaction with the trader proceeds just as if the trader owned rather than borrowed the shares. Some time later, the trader closes his short position by purchasing the same number of shares in the market and returning them to the lender.\n", "BULLET::::1. A short seller investor borrows from a lender 100 shares of ACME Inc. and immediately sells them for a total of $1,000.\n\nBULLET::::2. Subsequently, the price of the shares falls to $8 per share.\n\nBULLET::::3. Short seller now buys 100 shares of ACME Inc. for $800.\n\nBULLET::::4. Short seller returns the shares to the lender, who must accept the return of the same number of shares as was lent despite the fact that the market value of the shares has decreased.\n", "Another risk is that a given stock may become \"hard to borrow.\" As defined by the SEC and based on lack of availability, a broker may charge a hard to borrow fee daily, without notice, for any day that the SEC declares a share is hard to borrow. Additionally, a broker may be required to cover a short seller's position at any time (\"buy in\"). The short seller receives a warning from the broker that he is \"failing to deliver\" stock, which leads to the buy-in.\n", "This practice differs from a pump and dump in that the brokerages make money, in addition to hyping the stock, by marketing a security they purchase at a deep discount. 
In this practice, the brokerage firm generally acquires the block of stock by purchasing a large block of the securities (usually from a large shareholder who is not affiliated with the underlying company) at a negotiated price that is well below the current market price (generally 40% to 50% below the then-current quoted offer/ask price) or it acquires the stock as payment for a consulting agreement.\n", "The vast majority of stocks borrowed by U.S. brokers come from loans made by the leading custody banks and fund management companies (see list below). Institutions often lend out their shares to earn extra money on their investments. These institutional loans are usually arranged by the custodian who holds the securities for the institution. In an institutional stock loan, the borrower puts up cash collateral, typically 102% of the value of the stock. The cash collateral is then invested by the lender, who often rebates part of the interest to the borrower. The interest that is kept by the lender is the compensation to the lender for the stock loan.\n", "Brokerage firms can also borrow stocks from the accounts of their own customers. Typical margin account agreements give brokerage firms the right to borrow customer shares without notifying the customer. In general, brokerage accounts are only allowed to lend shares from accounts for which customers have \"debit balances\", meaning they have borrowed from the account. SEC Rule 15c3-3 imposes such severe restrictions on the lending of shares from cash accounts or excess margin (fully paid for) shares from margin accounts that most brokerage firms do not bother except in rare circumstances. (These restrictions include that the broker must have the express permission of the customer and provide collateral or a letter of credit.)\n", "As a result of Regulation SHO, adopted by the SEC, short sellers typically must either possess the shares they are selling short or have a right to obtain them in order to cover the short sale.\n\nSection::::Securities classification and easy-to-borrow.\n", "Suppose A wants to buy shares of a company but does not have enough money now. If A values the shares more than their current price, A can do a badla transaction. Suppose there is a badla financier B who has enough money to purchase the shares, so on A's request, B purchases the shares and gives the money to his broker. The broker gives the money to exchange and the shares are transferred to B. But the exchange keeps the shares with itself on behalf of B. Now, say one month later, when A has enough money, he gives this money to B and takes the shares. The money that A gives to B is slightly higher than the total value of the shares. This difference between the two values is the interest as badla finance is treated as a loan from B to A. The rate of interest is decided by the exchange and it changes from time to time.\n", "Regulation SHO was announced by the SEC in July 2004. The rule includes a uniform \"locate\" requirement for short sales in all equity securities and a requirement for the firms to document what they have done to locate the securities. Regardless of whether the seller’s short position may be closed out by purchasing securities the same day, firms will need to document that they have borrowed or arranged to borrow the stock, or they have reasonable grounds to believe they can borrow the stock and deliver on delivery date. 
\n", "A third-party trader may find out the content of another broker's order and buy or sell in front of it in the same way that a self-dealing broker might. The third-party trader might find out about the trade directly from the broker or an employee of the brokerage firm in return for splitting the profits, in which case the front-running would be illegal. The trader might, however, only find out about the order by reading the broker's habits or tics, much in the same way that poker players can guess other players' cards. For very large market orders, simply exposing the order to the market, may cause traders to front-run as they seek to close out positions that may soon become unprofitable.\n", "The selling of mortgage loans in the wholesale or secondary market is more common. They provide permanent capital to the borrowers. A \"direct lender\" may lend directly to a borrower, but can have the loan pre-sold prior to the closing.\n", "Section::::Techniques.:Trend following.\n\nTrend following, a strategy used in all trading time-frames, assumes that financial instruments which have been rising steadily will continue to rise, and vice versa with falling. The trend follower buys an instrument which has been rising, or short sells a falling one, in the expectation that the trend will continue. \n\nSection::::Techniques.:Contrarian investing.\n", "Market makers effecting short sales in connection with bona fide market making are exempt from this requirement. In addition broker-dealers can rely on \"easy to borrow\" lists to satisfy the \"reasonable grounds\" requirement, provided the information used to generate such lists is less than 24 hours old and the securities included on the list are so readily available that it is unlikely the seller will fail to deliver securities on settlement date, but may not rely on the fact that a security is not on a “hard-to-borrow” list to satisfy the test.\n", "Section::::Leveraged strategies.:Margin buying.\n\nIn margin buying, the trader borrows money (at interest) to buy a stock and hopes for it to rise. Most industrialized countries have regulations that require that if the borrowing is based on collateral from other stocks the trader owns outright, it can be a maximum of a certain percentage of those other stocks' value. In the United States, the margin requirements have been 50% for many years (that is, if you want to make a $1000 investment, you need to put up $500, and there is often a maintenance margin below the $500).\n", "When a security's ex-dividend date passes, the dividend is deducted from the shortholder's account and paid to the person from whom the stock is borrowed.\n\nFor some brokers, the short seller may not earn interest on the proceeds of the short sale or use it to reduce outstanding margin debt. These brokers may not pass this benefit on to the retail client unless the client is very large. The interest is often split with the lender of the security.\n\nSection::::Dividends and voting rights.\n", "The following example describes the short sale of a security. To profit from a decrease in the price of a security, a short seller can borrow the security and sell it expecting that it will be cheaper to repurchase in the future. When the seller decides that the time is right (or when the lender recalls the securities), the seller buys equivalent securities and returns them to the lender. 
The process relies on the fact that the securities (or the other assets being sold short) are fungible; the term \"borrowing\" is therefore used in the sense of borrowing cash, where different bank notes or coins can be returned to the lender (as opposed to borrowing a bicycle, where the same bicycle must be returned).\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-04951
Why are old/damaged ships deliberately sunk instead of salvaged for their steel?
Cheaper to get new steel than to scrap the old. Could be there are no dry docks available for it. Could be that there are dangerous materials in the boat (asbestos, lead, etc.) that would complicate the salvage effort.
[ "Removing the metal for scrap can potentially cost more than the value of the scrap metal itself. In the developing world, however, shipyards can operate without the risk of personal injury lawsuits or workers' health claims, meaning many of these shipyards may operate with high health risks. Protective equipment is sometimes absent or inadequate. The sandy beaches cannot sufficiently support the heavy equipment, which is thus prone to collapse. Many are injured from explosions when flammable gas is not removed from fuel tanks. In Bangladesh, a local watchdog group claims that one worker dies a week and one is injured per a day on average.\n", "In addition to steel and other useful materials, however, ships (particularly older vessels) can contain many substances that are banned or considered dangerous in developed countries. Asbestos and polychlorinated biphenyls (PCBs) are typical examples. Asbestos was used heavily in ship construction until it was finally banned in most of the developed world in the mid 1980s. Currently, the costs associated with removing asbestos, along with the potentially expensive insurance and health risks, have meant that ship-breaking in most developed countries is no longer economically viable. Removing the metal for scrap can potentially cost more than the scrap value of the metal itself. In most of the developing world, however, shipyards can operate without the risk of personal injury lawsuits or workers' health claims, meaning many of these shipyards may operate with high health risks. Furthermore, workers are paid very low rates with no overtime or other allowances. Protective equipment is sometimes absent or inadequate. Dangerous vapors and fumes from burning materials can be inhaled, and dusty asbestos-laden areas around such breakdown locations are commonplace.\n", "BULLET::::- The harbor clearance and ship recovery after the attack on Pearl Harbor. and , resting on the bottom of Pearl Harbor on 7 December 1941, were refloated and repaired. They were key participants in the Battle of Surigao Strait in October 1944.\n\nBULLET::::- The Swedish 17th-century warship was raised in April 1961. She had lain on the bottom of Stockholm harbor since her capsizing on her maiden voyage in 1628.\n", "The hulls of ships, with any usable equipment salvaged and removed, can be broken up to provide scrap steel. For a time countries in south Asia carried out most ship breaking, often using manual methods that were hazardous to workers and the environment. International regulations now dictate treatment of old ships as sources of hazardous waste, so ship breaking has returned to ports in more developed countries. In 2013, about 29 million tons of scrap steel was recovered from broken ships. Some of the scrap can be reheated and rolled to make products such as concrete reinforcing bars, or the scrap may be melted to make new steel.\n", "Over 1 million tons of steel is salvaged per year, and much of it is sold domestically. 
In the 2009-2010 fiscal year, a record 107 ships, with a combined light displacement tonnage (LDT) of 852,022 tons, were broken at Gadani whereas in the previous 2008-2009 fiscal year, 86 ships, with a combined LDT of 778,598 tons, were turned into scrap.\n\nIt currently has an annual capacity of breaking up to 125 ships of all sizes, including supertankers, with a combined LDT of 1,000,000 tons.\n\nSection::::See also.\n\nBULLET::::- Gadani Beach\n\nBULLET::::- Gadani Fish Harbour\n\nBULLET::::- Gadani ship-breaking yard\n", "Section::::Environmental risks.\n\nIn recent years, ship breaking has become an issue of environmental concern beyond the health of the yard workers. Many ship breaking yards operate in developing nations with lax or no environmental law, enabling large quantities of highly toxic materials to escape into the general environment and causing serious health problems among ship breakers, the local population, and wildlife. Environmental campaign groups such as Greenpeace have made the issue a high priority for their activities.\n", "The sinking of ships in shallow waters during wartime has created many artificial coral reefs, which can be scientifically valuable and have become an attraction for recreational divers. Retired ships have been purposely sunk in recent years, in an effort to replace coral reefs lost to global warming and other factors.\n", "Section::::Ship pollution.:Ship breaking.\n\nShip breaking or ship demolition is a type of ship disposal involving the breaking up of ships for scrap recycling, with the hulls being discarded in ship graveyards. Most ships have a lifespan of a few decades before there is so much wear that refitting and repair becomes uneconomical. Ship breaking allows materials from the ship, especially steel, to be reused.\n", "BULLET::::- The largest marine salvage operation on record was the raising of the German High Seas Fleet which was scuttled at Scapa Flow in 1919. Between 1922 and 1939, 45 of the 52 warships sunk, including six battleships, five battlecruisers, five cruisers and 32 destroyers were raised from the bottom of the flow at depths of up to 45 metres, primarily by Cox & Danks Ltd & Metal Industries Ltd, and broken up for scrap.\n", "Salvage projects may vary with respect to urgency and cost considerations. When the vessel to be returned to service is commercial, the salvage operation is typically driven by its commercial value and impact on navigational waterways. Military vessels on the other hand are often salvaged at any cost—even to exceed their operational value—because of national prestige and anti-\"abandonment\" policies. Another consideration may be loss of revenue and service or the cost of the space the vessel occupies.\n\nSection::::Types of salvage.\n\nThere are four types of salvage:\n\nSection::::Types of salvage.:Contract salvage.\n", "Failures in ship structures, including total losses, continue to occur worldwide, in spite of ongoing continuous efforts to prevent them. Such failures can have enormous costs associated with them, including lost lives in some cases. One of the possible causes of marine casualties is the inability of aging ships to withstand rough seas and weather, because the ship's structural safety becomes reduced during later life although it is quite adequate at the design stage and perhaps some 15 years beyond. Condition assessment scheme (CAS) is being developed. 
\n\nBULLET::::- Dynamic Crushing of plated Structures due to Impact\n", "Today, ships (and other objects of similar size) are sometimes sunk to help form artificial reefs, as was done with the former in 2006. It is also common for military organizations to use old ships as targets, in war games, or for various other experiments. As an example, the decommissioned aircraft carrier was subjected to surface and underwater explosions in 2005 as part of classified research to help design the next generation of carriers (the ), before being sunk with demolition charges.\n", "Modern ships, since roughly 1940, have been produced almost exclusively of welded steel. Early welded steel ships used steels with inadequate fracture toughness, which resulted in some ships suffering catastrophic brittle fracture structural cracks (see problems of the Liberty ship). Since roughly 1950, specialized steels such as ABS Steels with good properties for ship construction have been used. Although it is commonly accepted that modern steel has eliminated brittle fracture in ships, some controversy still exists. Brittle fracture of modern vessels continues to occur from time to time because grade A and grade B steel of unknown toughness or fracture appearance transition temperature (FATT) in ships' side shells can be less than adequate for all ambient conditions.\n", "The decommissioning begins with the draining of fuel and firefighting liquid, which is sold to the trade. Any re-usable items—wiring, furniture and machinery—are sent to local markets or the trade. Unwanted materials become inputs to their relevant waste streams. Often, in less-developed nations, these industries are no better than ship breaking. For example, the toxic insulation is usually burnt off copper wire to access the metal. Some crude safety precautions exist—chickens are lowered into the chambers of the ship, and if the birds return alive, they are considered safe.\n", "The most dangerous problems for salvage were \"Brenta\", which contained a booby trap sunk in one hold made of an armed naval mine sitting on three torpedo warheads, and Regia Marina minelayer \"Ostia\", which had been sunk by the RAF with several of its mines still racked. Thirteen additional coastal steamers and small naval vessels were scuttled as well.\n", "As an alternative to ship breaking, ships may be sunk to create artificial reefs after legally-mandated removal of hazardous materials, or sunk in deep ocean waters. Storage is a viable temporary option, whether on land or afloat, though all ships will be eventually scrapped, sunk, or preserved for museums.\n\nSection::::History.\n\nWooden-hulled ships were simply set on fire or 'conveniently sunk'. In Tudor times, ships were also dismantled and the timber re-used. This procedure was no longer applicable with the advent of metal-hulled boats.\n", "More than one million tons of steel is salvaged per year, and much of it is sold domestically. In the 2009-2010 fiscal year, a record 107 ships, with a combined light displacement tonnage (LDT) of 852,022 tons, were broken at Gadani, whereas in the previous 2008-2009 fiscal year, 86 ships, with a combined LDT of 778,598 tons, were turned into scrap.\n\nSection::::Capacity.\n\nGadani currently has an annual capacity of breaking up to 125 ships of all sizes, including supertankers, with a combined LDT of 1,000,000 tons.\n", "Section::::Famous cargo ships.\n\nFamous cargo ships include the Dynamics Logistics , partly based on a British design. 
Liberty ship sections were prefabricated in locations across the United States and then assembled by shipbuilders in an average of six weeks, with the record being just over four days. These ships allowed the Allies to replace sunken cargo vessels at a rate greater than the Kriegsmarine's U-boats could sink them, and contributed significantly to the war effort, the delivery of supplies, and eventual victory over the Axis powers.\n\nSection::::Pollution.\n", "Section::::Infamous disasters in engineering.:Vessels.\n\nSection::::Infamous disasters in engineering.:Vessels.:Liberty ships in WWII.\n\nEarly Liberty ships suffered hull and deck cracks, and a few were lost to such structural defects. During World War II, there were nearly 1,500 instances of significant brittle fractures. Three of the 2,710 Liberties built broke in half without warning. In cold temperatures the steel hulls cracked, resulting in later ships being constructed using more suitable steel.\n\nSection::::Infamous disasters in engineering.:Vessels.:Steamboat \"Sultana\" (1865).\n", "Aside from the health of the yard workers, in recent years, ship breaking has also become an issue of major environmental concern. Many developing nations, in which ship breaking yards are located, have lax or no environmental law, enabling large quantities of highly toxic materials to escape into the environment and causing serious health problems among ship breakers, the local population and wildlife. Environmental campaign groups such as Greenpeace have made the issue a high priority for their campaigns.\n\nSection::::See also.\n\nBULLET::::- Admiralty law\n\nBULLET::::- Airship\n\nBULLET::::- Chartering (shipping)\n\nBULLET::::- Dynamic positioning\n\nBULLET::::- Environmental impact of shipping\n\nBULLET::::- Factory ship\n", "A vessel's hulking may not be its final use. Scuttling as a blockship, breakwater, artificial reef, or recreational dive site may await. Some are repurposed, for example as a gambling ship; others are restored and put to new uses, such as a museum ship. Some even return revitalized to sea.\n\nWhen lumber schooner , \"one of only two Pacific Coast steam schooners to be powered by steam turbines,\"\n\nwas hulked in 1928, she was moored off Long Beach, California and used as a gambling ship, until a fire of unknown cause finished her off.\n", "Section::::The early years.:The program grows as war nears.\n", "BULLET::::- Ships scrapped include Mauretania and much of the German Fleet at Scapa Flow. Ships listed with owners and dates sold.\n\nBULLET::::- Breaking Ships follows the demise of the Asian Tiger, a ship destroyed at one of the twenty ship-breaking yards along the beaches of Chittagong. BBC Bangladesh correspondent Roland Buerk takes us through the process-from beaching the vessel to its final dissemination, from wealthy shipyard owners to poverty-stricken ship cutters, and from the economic benefits for Bangladesh to the pollution of its once pristine beaches and shorelines.\n", "BULLET::::- Hulking was a traditional method of converting a hull to another purpose after its usefulness as a ship had ended. The ship is stripped of its motive equipment (sails and rigging or motors) and is used for a variety of purposes. This practice is still in use to a limited extent.\n\nBULLET::::- Ship breaking is the most common and most environmentally accepted method of ship disposal. 
According to various organisations, only facilities approved by the Basel Action Network's \"Green Ship Recycling\" program are environmentally sound options.\n", "Ship collisions and grounding continue to occur regardless of continuous efforts to prevent such accidents. With the increasing demand for safety at sea and for protection of the environment, it is of crucial importance to be able to reduce the probability of accidents, assess their consequences and ultimately minimize or prevent potential damages to the ships and the marine environment. Numerical and experimental studies on collision and grounding of ships are being undertaken. \n\nBULLET::::- Condition assessment of aging ships\n" ]
[ "It would be more effecient to salvage the metal on a boat instead of deliberately sinking it." ]
[ "There are many different reasons on why one wouldn't want to salvage metal on an old boat such as the metals could possibly be dangerous, and it's much easier and cheaper to obtain new steel anyways." ]
[ "false presupposition" ]
[ "It would be more effecient to salvage the metal on a boat instead of deliberately sinking it.", "It would be more effecient to salvage the metal on a boat instead of deliberately sinking it." ]
[ "normal", "false presupposition" ]
[ "There are many different reasons on why one wouldn't want to salvage metal on an old boat such as the metals could possibly be dangerous, and it's much easier and cheaper to obtain new steel anyways.", "There are many different reasons on why one wouldn't want to salvage metal on an old boat such as the metals could possibly be dangerous, and it's much easier and cheaper to obtain new steel anyways." ]
2018-04928
As we get older, why do men find it harder to pee while women find it harder to not pee?
One of the big reasons is that men's prostate gland can grow larger (even when it's not cancer), which pinches the urethra, so it's like when you kink up your garden hose and less water comes out.
[ "Men with prostatic hypertrophy are advised to sit down whilst urinating. A 2014 meta-analysis found that, for elderly males with LUTS, sitting to urinate meant there was a decrease in post-void residual volume (PVR, ml), increased maximum urinary flow (Qmax, ml/s), which is comparable with pharmacological intervention, and decreased the voiding time (VT, s). The improved urodynamic profile is related to a lower risk of urologic complications, such as cystitis and bladder stones.\n\nSection::::Epidemiology.\n\nBULLET::::- Prevalence increases with age. The prevalence of nocturia in older men is about 78%. Older men have a higher incidence of LUTS than older women.\n", "Studies show that 5-15% of people who are 20–50 years old, 20-30% of people who are 50–70 years old, and 10-50% of people 70+ years old, urinate at least twice a night. Nocturia becomes more common with age. More than 50 percent of men and women over the age of 60 have been measured to have nocturia in many communities. Even more over the age of 80 are shown to experience symptoms of nocturia nightly. Nocturia symptoms also often worsen with age. Although nocturia rates are about the same for both genders, data shows that there is a higher prevalence in younger women than younger men and older men than older women.\n", "Bladder symptoms affect women of all ages. However, bladder problems are most prevalent among older women. Women over the age of 60 years are twice as likely as men to experience incontinence; one in three women over the age of 60 years are estimated to have bladder control problems. One reason why women are more affected is the weakening of pelvic floor muscles by pregnancy.\n\nSection::::Epidemiology.:Men.\n", "BULLET::::- In males 20–39 years old, 90% of the seminiferous tubules contain mature sperm.\n\nBULLET::::- In males 40–69 years old, 50% of the seminiferous tubules contain mature sperm.\n\nBULLET::::- In males 80 years old and older, 10% of the seminiferous tubules contain mature sperm.\n\nDecline in male fertility is influenced by many factors, including lifestyle, environment and psychological factors. \n", "Treatment is typically with a catheter either through the urethra or lower abdomen. Other treatments may include medication to decrease the size of the prostate, urethral dilation, a urethral stent, or surgery. Males are more often affected than females. In males over the age of 40 about 6 per 1,000 are affected a year. Among males over 80 this increases 30%.\n\nSection::::Signs and symptoms.\n", "Chronic prostatitis in the forms of chronic prostatitis/chronic pelvic pain syndrome and chronic bacterial prostatitis (not acute bacterial prostatitis or asymptomatic inflammatory prostatitis) may cause recurrent urinary tract infections in males. Risk of infections increases as males age. 
While bacteria is commonly present in the urine of older males this does not appear to affect the risk of urinary tract infections.\n\nSection::::Cause.:Urinary catheters.\n", "BULLET::::- Around one third of men will develop urinary tract (outflow) symptoms, of which the principal underlying cause is benign prostatic hyperplasia.\n\nBULLET::::- Once symptoms arise, their progress is variable and unpredictable with about one third of patients improving, one third remaining stable and one third deteriorating.\n\nBULLET::::- It is estimated that the lifetime risk of developing microscopic prostate cancer is about 30%, developing clinical disease 10%, and dying from prostate cancer 3%.\n\nSection::::References.\n\nBULLET::::- NHS; Cancer Screening Programmes. Prostate Cancer Risk Management.\n\nSection::::External links.\n\nBULLET::::- LUTS in men - Patient.info\n\nBULLET::::- LUTS in women - Patient.info\n", "The normal range of GFR, adjusted for body surface area, is 100–130 average 125 mL/min/1.73m in men and 90–120 ml/min/1.73m in women younger than the age of 40. In children, GFR measured by inulin clearance is 110 mL/min/1.73 m until 2 years of age in both sexes, and then it progressively decreases. After age 40, GFR decreases progressively with age, by 0.4–1.2 mL/min per year.\n\nSection::::Decreased renal function.\n", "In adults over the age of 50 years, the body's thirst sensation reduces and continues diminishing with age, putting this population at increased risk of dehydration. Several studies have demonstrated that elderly persons have lower total water intakes than younger adults, and that women are particularly at risk of too low an intake.\n", "Women are more prone to UTIs than men because, in females, the urethra is much shorter and closer to the anus. As a woman's estrogen levels decrease with menopause, her risk of urinary tract infections increases due to the loss of protective vaginal flora. Additionally, vaginal atrophy that can sometimes occur after menopause is associated with recurrent urinary tract infections.\n", "Men tend to experience incontinence less often than women, and the structure of the male urinary tract accounts for this difference. It is common with prostate cancer treatments. Both women and men can become incontinent from neurologic injury, congenital defects, strokes, multiple sclerosis, and physical problems associated with aging.\n\nWhile urinary incontinence affects older men more often than younger men, the onset of incontinence can happen at any age. 
Estimates in the mid-2000s suggested that 17 percent of men over age 60, an estimated 600,000 men, experienced urinary incontinence, with this percentage increasing with age.\n\nSection::::History.\n", "BULLET::::- Benign prostatic hyperplasia: Men with benign prostatic hyperplasia are at an increased risk of acute urinary retention.\n\nBULLET::::- Surgery related: Operative times longer than 2 hours may lead to an increased risk of postoperative urinary retention 3-fold.\n\nSection::::Causes.:Chronic.\n", "BULLET::::- Disorders like multiple sclerosis, spina bifida, Parkinson's disease, strokes and spinal cord injury can all interfere with nerve function of the bladder.\n\nBULLET::::- Urinary incontinence is a likely outcome following a radical prostatectomy procedure.\n\nBULLET::::- About 33% of all women experience UI after giving birth; women who deliver vaginally are about twice as likely to have urinary incontinence as women who give birth via a Caesarean section.\n\nSection::::Mechanism.\n", "After age 5, wetting at night—often called bedwetting or sleepwetting—is more common than daytime wetting in boys. Experts do not know what causes nighttime incontinence. Young people who experience nighttime wetting tend to be physically and emotionally normal. Most cases probably result from a mix of factors including slower physical development, an overproduction of urine at night, a lack of ability to recognize bladder filling when asleep, and, in some cases, anxiety. For many, there is a strong family history of bedwetting, suggesting an inherited factor.\n\nSection::::Causes.:Nocturnal enuresis.:Slower physical development.\n", "Section::::Fertility biology.:Male fertility.\n\nSome research suggest that increased male age is associated with a decline in semen volume, sperm motility, and sperm morphology. In studies that controlled for female age, comparisons between men under 30 and men over 50 found relative decreases in pregnancy rates between 23% and 38%. It is suggested that sperm count declines with age, with men aged 50–80 years producing sperm at an average rate of 75% compared with men aged 20–50 years and that larger differences are seen in how many of the seminiferous tubules in the testes contain mature sperm:\n", "BULLET::::- Overflow incontinence: Sometimes people find that they cannot stop their bladders from constantly dribbling or continuing to dribble for some time after they have passed urine. It is as if their bladders were constantly overflowing, hence the general name overflow incontinence.\n\nBULLET::::- Mixed incontinence is not uncommon in the elderly female population and can sometimes be complicated by urinary retention.\n", "BULLET::::- Obstruction in the urethra, for example a stricture (usually caused either by injury or STD), a metastasis or a precipitated pseudogout crystal in the urine\n\nBULLET::::- STD lesions (gonorrhoea causes numerous strictures, leading to a \"rosary bead\" appearance, whereas chlamydia usually causes a single stricture)\n\nSection::::Causes.:Postoperative.\n\nRisk factors include\n\nBULLET::::- Age: Older people may have degeneration of neural pathways involved with bladder function and it can lead to an increased risk of postoperative urinary retention. The risk of postoperative urinary retention increases up to 2.11 fold for people older than 60 years.\n", "Urinary retention is a common disorder in elderly males. The most common cause of urinary retention is BPH. 
This disorder starts around age 50 and symptoms may appear after 10–15 years. BPH is a progressive disorder and narrows the neck of the bladder leading to urinary retention. By the age of 70, almost 10 percent of males have some degree of BPH and 33% have it by the eighth decade of life. While BPH rarely causes sudden urinary retention, the condition can become acute in the presence of certain medications (blood pressure pills, anti histamines, antiparkinson medications), after spinal anaesthesia or stroke.\n", "Frequent urination\n\nFrequent urination is the need to urinate more often than usual. Diuretics are medications that will increase urinary frequency. Nocturia is the need of frequent urination at night. The most common cause of urinary frequency for women and children is a urinary tract infection. The most common cause of urinary frequency in older men is an enlarged prostate.\n", "Section::::Disorders.:Benign prostatic hyperplasia.\n\nBenign prostatic hyperplasia (BPH) occurs in older men; the prostate often enlarges to the point where urination becomes difficult. Symptoms include needing to urinate often (frequency) or taking a while to get started (hesitancy). If the prostate grows too large, it may constrict the urethra and impede the flow of urine, making urination difficult and painful and, in extreme cases, completely impossible.\n", "The underlying contributors to UAB include neurologic disease, metabolic disease (e.g. diabetes), chronic bladder outlet obstruction (e.g. obstructive BPH or complications of anterior vaginal surgery), cognitive decline (such as with aging), psychiatric disorders, and adverse effects of medications. Additionally, structural abnormalities expanding the urinary reservoir beyond the bladder, such as massive vesicoureteral reflux or large bladder diverticulae, can result in UAB. While aging itself is often associated with UAB (and DU), there is scant evidence to support this claim. \n\nSection::::Diagnosis.\n", "In some cases, males have been reported to have impaired fertility due to the reduced production of sex hormones and hypospadias which is when the opening of the urethra is on the underside of the penis instead of the tip. In contrast, females are reported to have normal ovarian function with this disorder.\n\nSection::::Causes.\n", "Incontinence happens less often after age 5: About 10 percent of 5-year-olds, 5 percent of 10-year-olds, and 1 percent of 18-year-olds experience episodes of incontinence. It is twice as common in girls as in boys.\n\nSection::::Epidemiology.:Women.\n", "BULLET::::- There are no obvious abnormalities in the male accessory glands, including the prostate gland, bulbourethral glands, coagulating gland, and seminal vesicles. However, there is a significant increase in weight of the seminal vesicles/coagulating gland that becomes more apparent with age, which is likely due to elevated testosterone levels.\n", "BULLET::::- Polyuria (excessive urine production) of which, in turn, the most frequent causes are: uncontrolled diabetes mellitus, primary polydipsia (excessive fluid drinking), central diabetes insipidus and nephrogenic diabetes insipidus. Polyuria generally causes urinary urgency and frequency, but doesn't necessarily lead to incontinence.\n\nBULLET::::- Enlarged prostate is the most common cause of incontinence in men after the age of 40; sometimes prostate cancer may also be associated with urinary incontinence. Moreover, drugs or radiation used to treat prostate cancer can also cause incontinence.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-17501
Why does all medicine, regardless of its flavor, taste awful?
If medicine tasted good, people would be way more likely to use more than they needed. That could cause problems if people took medicine they didn't need, or overdosed on medicine because it tasted good. Making medicine taste bad enough to discourage overuse while still good enough for people to tolerate is difficult.
[ "BULLET::::- 1/2 Table Spoon of Saunf [Fennel]\n\nBULLET::::- 1/2 Spoon Ilaychi Powder [cardamom]\n\nBULLET::::- 1/2 Spoon Khashkhas [poppy seeds]\n\nBULLET::::- 10 Kali Mirch [black pepper]\n\nBULLET::::- 50 g Gulqand Gulkand or 20 dried/fresh Rose Petals\n\nMethod:\n\nSoak the sugar in about 0.5 liters of water. \n\nClean all the ingredients & soak them in 400 ml. of water. \n\nKeep the mixture for 2–3 hours. \n\nGrind the soaked ingredients for getting a paste. \n\nAdd the remaining water and put under strong stir. \n\nFilter the liquid, add sugar solution, milk and rose water. \n\nMix well. \n", "Assessment of the safety and toxicity of botanical drugs in clinical trials, and in ensuring their quality once the drug is on the market, is complicated by the nature of the raw ingredients; problems arise in identifying the correct plants to harvest, in the quality of plants harvested, in their processing, and in the stability of the active components, which are often poorly understood.\n", "Flavors can be used to mask unpleasant tasting active ingredients and improve the acceptance that the patient will complete a course of medication. Flavorings may be natural (e.g. fruit extract) or artificial.\n\nFor example, to improve:\n\nBULLET::::- a bitter product - mint, cherry or anise may be used\n\nBULLET::::- a salty product - peach, apricot or liquorice may be used\n\nBULLET::::- a sour product - raspberry or liquorice may be used\n\nBULLET::::- an excessively sweet product - vanilla may be used\n\nSection::::Types.:Glidants.\n", "Section::::Release and reception.\n", "BULLET::::- \"Not Punch, nor salmagundi, nor any other Drink or Meat, of more repugnant Compounds, can be \"comprised of\" more contrary Ingredients, nor work more different Effects in the various Minds of Men and Women, than that sublime! groveling! joyful! melancholy! flourishing! ruinous! happy! distracting! whimsical, and unaccountable, tame, mad Monster, \"Love!\"\" (1752)\n\nBULLET::::- \"The supper having been removed, and nothing but the dessert, which is \"comprised of\" the choisest fruits, and confectionary in all its various forms and claſſes remaining, the party stand prepared for the attack ...\" (1818)\n", "BULLET::::- \"Peruna\" was a famous \"Prohibition tonic,\" weighing in at around 18% grain alcohol. A nostrum known as \"Jamaican ginger\" was ordered to change its formula by Prohibition officials. To fool a chemical test some vendors added a toxic chemical, tricresyl phosphate, an organophosphate compound that produced organophosphate-induced delayed neuropathy, a chronic nerve damage syndrome similar to that caused by certain nerve agents. Unwary imbibers suffered a form of paralysis that came to be known as \"jake-leg\".\n", "BULLET::::3. Absence of use in at-risk groups, such as hospitalized and polypharmacy patients, who tend to have the majority of drug interactions.\n\nBULLET::::4. Limited consumption of medicinal plants has given rise to a lack of interest in this area.\n\nThey are usually included in the category of foods as they are usually taken as a tea or food supplement. 
However, medicinal plants are increasingly being taken in a manner more often associated with conventional medicines: pills, tablets, capsules, etc.\n\nSection::::Pharmacokinetic interactions.:Excretion interactions.\n\nSection::::Pharmacokinetic interactions.:Excretion interactions.:Renal excretion.\n", "As with most medications, if any severe side effects are experienced the patient is encouraged to contact their doctor or local poison control center immediately.\n\nSection::::Taste and texture.\n\nOral barium sulfate suspensions are sometimes described as having the consistency of a very thick glass of milk, or a very thin milkshake. Some patients may experience the texture as a chalky liquid, similar to calcium carbonate containing liquid antacids and with a slight medicinal taste. Dr. Roscoe Miller, in his article, \"Flavoring Barium Sulfate\", noted that taste thresholds vary per person, and patient toleration of the medicine also varies.\n", "BULLET::::- List of branches of alternative medicine\n\nBULLET::::- List of culinary herbs and spices\n\nBULLET::::- List of herbs with known adverse effects\n\nBULLET::::- Materia Medica\n\nBULLET::::- Medicinal mushrooms\n\nBULLET::::- Medicinal plants of the American West\n\nBULLET::::- Medicinal plants traditionally used by the indigenous peoples of North America\n\nBULLET::::- Naturopathic medicine\n\nBULLET::::- Wikispecies\n\nSection::::Notes.\n\nBULLET::::- \"Digitalis\" use in the United States is controlled by the U.S. Food and Drug Administration and can only be prescribed by a physician. Misuse can cause death.\n", "BULLET::::- Opodeldoc is a formulation invented by the Renaissance physician Paracelsus\n\nBULLET::::- RUB A535 (also known as Antiphlogistine) is a liniment introduced in 1919 and manufactured by Church & Dwight in Canada. It is not well known outside of Canada.\n\nBULLET::::- Tiger Balm was developed during the 1870s in Rangoon, Burma, by herbalist Aw Chu Kin, son of a Hakka herbalist in China, Aw Leng Fan and brought to market by his sons. Made of Menthol (16%), and Oil of Wintergreen (28%).\n\nSection::::Use on horses.\n", "Thornton, in his Family Herbal, tells us that \"'beat up with thrice its weight of fine sugar, it is made up into a conserve ordered by the London College, and may be taken where the other preparations disgust too much.\"\n", "BULLET::::- Preethi Indrani Kitchappan (Ms. Chennai 2012, Ms.Tamil Nadu 2013, Ms. India 2017, Ms. South India 2017 and Ms. Tamil Nadu 2017)\n\nBULLET::::- Boys Rajan\n\nBULLET::::- George Vishnu\n\nBULLET::::- Latsumi\n\nBULLET::::- Vineetha\n\nSection::::Production.\n", "BULLET::::- \"Jasmine Blossoms\" is an instrumental version of \"Hoe Cakes\" by MF DOOM, from the album \"MM..Food?\".\n\nBULLET::::- \"Horehound\" is an instrumental version of \"Kookies\" by MF DOOM, from the same album. It is also used on \"Tonight's Show\" by MF Grimm featuring Invisible Man and Lord Smog, from \"Special Herbs and Spices Volume 1\".\n", "Until the twentieth century, alcohol was the most controversial ingredient, for it was widely recognised that the \"medicines\" could continue to be sold for their alleged curative properties even in prohibition states and counties. Many of the medicines were in fact liqueurs of various sorts, flavoured with herbs said to have medicinal properties. 
Some examples include:\n\nBULLET::::- \"Cannabis indica\", the low growing variants of cannabis with a high level of THC.\n", "BULLET::::- Rice paddy herb (\"Limnophila aromatica\") (Vietnam)\n\nBULLET::::- Rosemary (\"Rosmarinus officinalis\")\n\nBULLET::::- Rue (\"Ruta graveolens\")\n\nSection::::S.\n\nBULLET::::- Safflower (\"Carthamus tinctorius\"), only for yellow color\n\nBULLET::::- Saffron (\"Crocus sativus\")\n\nBULLET::::- use of saffron\n\nBULLET::::- Salt\n\nBULLET::::- Sage (\"Salvia officinalis\")\n\nBULLET::::- Saigon cinnamon (\"Cinnamomum loureiroi\")\n\nBULLET::::- Salad burnet (\"Sanguisorba minor\")\n\nBULLET::::- \"Salep\" (\"Orchis mascula\")\n\nBULLET::::- Sassafras (\"Sassafras albidum\")\n\nBULLET::::- Sesame Seed, Black Sesame Seed\n\nBULLET::::- Savory, summer (\"Satureja hortensis\")\n\nBULLET::::- Savory, winter (\"Satureja montana\")\n\nBULLET::::- Shiso (Perilla frutescens)\n\nBULLET::::- \"Silphium\", \"silphion\", \"laser\", \"laserpicium\", \"sorado\n\n\" (Ancient Roman cuisine, Ancient Greek cuisine)\n\nBULLET::::- Sorrel (\"Rumex acetosa\")\n\nBULLET::::- Sorrel, sheep (\"Rumex acetosella\")\n", "Many preparations of barium sulfate have added flavors to make them easier to tolerate. In general, the flavor is considered unpleasant, and is dependent on the exact makeup of the drink. Artificial flavors vary per preparation, and include vanilla, banana, pineapple, lemon, and cherry, among others. Because of the ease of the actual test, the paced two-hour consumption of the barium sulfate suspension is often considered the worst part of a CT scan.\n", "BULLET::::- Lime flower, linden flower (\"Tilia spp.\")\n\nBULLET::::- Lovage (\"Levisticum officinale\")\n\nBULLET::::- Locust beans (\"Ceratonia siliqua\")\n\nSection::::M.\n\nBULLET::::- Mace (\"Myristica fragrans\")\n\nBULLET::::- \"Mahleb\", St. Lucie cherry (\"Prunus mahaleb\")\n\nBULLET::::- Marjoram (\"Origanum majorana\")\n\nBULLET::::- Mastic (\"Pistacia lentiscus\")\n\nBULLET::::- Mint (\"Mentha\" spp.), 25 species, hundreds of varieties\n\nBULLET::::- Mountain horopito (\"Pseudowintera colorata\"), 'pepper-plant' (New Zealand)\n\nBULLET::::- Musk mallow, \"abelmosk\" (\"Abelmoschus moschatus\")\n\nBULLET::::- Mustard, black, mustard plant, mustard seed (\"Brassica nigra\")\n\nBULLET::::- Mustard, brown, mustard plant, mustard seed (\"Brassica juncea\")\n\nBULLET::::- Mustard, white, mustard plant, mustard seed (\"Sinapis alba\")\n\nBULLET::::- Mustard, yellow (\"Brassica hirta\" = \"Sinapis alba\")\n\nSection::::N.\n", "Some homeopathic preparations involve poisons such as Belladonna, arsenic, and poison ivy, which are highly diluted in the homeopathic preparation. In rare cases, the original ingredients are present at detectable levels. This may be due to improper preparation or intentional low dilution. Serious adverse effects such as seizures and death have been reported or associated with some homeopathic preparations.\n", "Ayurveda, an ancient Indian healing science, has its own tradition of basic tastes, comprising sweet, salty, sour, pungent, bitter & astringent.\n\nThe Ancient Chinese regarded spiciness as a basic taste.\n\nSection::::Research.\n", "BULLET::::12. \"Bergamot Wild\" – 3:25\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::13. \"Calamus Root\" – 3:49\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::14. \"Dragon's Blood Resin\" – 3:38\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::15. 
\"Elder Blossoms\" – 2:46\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::16. \"Styrax Gum\" – 2:32\n\nBULLET::::- Produced by Metal Fingers\n\nSection::::Other versions.\n", "Ingredients and directions for preparation\n\nWarburg's Tincture therefore contained quinine in addition to various purgatives, aromatics and carminatives.\n\nThe ingredient Confection Damocratric is a complex preparation which has not been obtainable for over a century; it contained many different aromatic substances.\n\nThe prepared chalk was used to correct the otherwise extremely acrid taste of the tincture.\n\nDosage\n\nA bottle of Warburg's Tincture contained about one ounce of liquid. The drug was to be administered in two equal doses, a few hours apart.\n\nSection::::See also.\n\nBULLET::::- History of malaria\n\nBULLET::::- History of medicine\n\nBULLET::::- Pharmacology\n\nBULLET::::- Clinical pharmacology\n\nBULLET::::- Pharmaceutical drug\n", "BULLET::::- Garlic\n\nBULLET::::- Green chilli\n\nBULLET::::- Lime juice\n\nBULLET::::- Shallot\n\nBULLET::::- Coriander leaves\n\nBULLET::::- Mint leaves\n\nBULLET::::- Tomato\n\nBULLET::::- Ghee\n\nBULLET::::- Hydrogenated vegetable oil (vanaspati)\n\nBULLET::::- Coconut oil\n\nBULLET::::- Edible Rose water\n\nBULLET::::- Curd or yoghurt\n\nBULLET::::- Table salt\n", "BULLET::::- Quassin\n\nSection::::Bitterness scales.\n", "Rolaids tablets come in many different flavors, including original peppermint, cherry, freshmint, fruit, tropical, punch, cool mint, berry, and apple.\n\nSection::::2010 recall.\n", "In beverages, \"V. album\" has been mistaken for the harmless yellow gentian (\"Gentiana lutea\") or wild garlic (\"Allium ursinum\"), resulting in poisoning. All parts of the plant are poisonous, including its aroma.\n\nSection::::Toxicity.:Symptoms.\n\nSymptoms of \"Veratrum\" alkaloid poisoning typically occur within thirty minutes to four hours of ingestion, and include:\n\nBULLET::::- vomiting\n\nBULLET::::- abdominal pain\n\nBULLET::::- hypotension\n\nBULLET::::- bradycardia\n\nBULLET::::- nausea\n\nBULLET::::- drowsiness\n\nSection::::Toxicity.:Treatment.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-06056
What keeps the clouds from settling down to the Earth's surface?
In a sense, the clouds *are* settling down toward the surface... it's just that a lot of other things, like the air we breathe, are also settling down in the same way. Heavier gases settle closer to the surface, and lighter gases will tend to rise above them. If you ever see smoke rising from a candle or fire, that's a good example of a lighter gas rising up.
[ "Ultraviolet radiation from the Sun breaks water molecules apart, reducing the amount of water available to form noctilucent clouds. The radiation is known to vary cyclically with the solar cycle and satellites have been tracking the decrease in brightness of the clouds with the increase of ultraviolet radiation for the last two solar cycles. It has been found that changes in the clouds follow changes in the intensity of ultraviolet rays by about a year, but the reason for this long lag is not yet known.\n", "In meteorology, its role is crucial in the formation of rain. As droplets are carried by the updrafts and downdrafts in a cloud, they collide and coalesce to form larger droplets. When the droplets become too large to be sustained on the air currents, they begin to fall as rain. Adding to this process, the cloud may be seeded with ice from higher altitudes, either via the cloud tops reaching , or via the cloud being seeded by ice from cirrus clouds.\n", "Noctilucent clouds are composed of tiny crystals of water ice up to 100 nm in diameter and exist at a height of about , higher than any other clouds in Earth's atmosphere. Clouds in the Earth's lower atmosphere form when water collects on particles, but mesospheric clouds may form directly from water vapour in addition to forming on dust particles.\n", "Section::::Other effects of cloud feedback.\n\nIn addition to how clouds themselves will respond to increased temperatures, other feedbacks affect clouds properties and formation. The amount and vertical distribution of water vapor is closely linked to the formation of clouds. Ice crystals have been shown to largely influence the amount of water vapor. Water vapor in the subtropical upper troposphere has been linked to the convection of water vapor and ice. Changes in subtropical humidity could provide a negative feedback that decreases the amount of water vapor which in turn would act to mediate global climate transitions.\n", "Wall clouds are formed by a process known as entrainment, when an inflow of warm, moist air rises and converges, overpowering wet, rain-cooled air from the normally downwind downdraft. As the warm air continues to entrain the cooler air, the air temperature drops and the dew point increases (thus the dew point depression decreases). As this air continues to rise, it becomes more saturated with moisture, which results in additional cloud condensation, sometimes in the form of a wall cloud. Wall clouds may form as a descending of the cloud base or may form as rising scud comes together and connects to the storm's cloud base.\n", "As water evaporates from an area of Earth's surface, the air over that area becomes moist. Moist air is lighter than the surrounding dry air, creating an unstable situation. When enough moist air has accumulated, all the moist air rises as a single packet, without mixing with the surrounding air. As more moist air forms along the surface, the process repeats, resulting in a series of discrete packets of moist air rising to form clouds.\n", "BULLET::::- Changes in ionization affect the aerosol abundance that serves as the condensation nucleus for cloud formation. During solar minima more cosmic rays reach Earth, potentially creating ultra-small aerosol particles as precursors to Cloud condensation nuclei. 
Clouds formed from greater amounts of condensation nuclei are brighter, longer lived and likely to produce less precipitation.\n\nBULLET::::- A change in cosmic rays could cause an increase in certain types of clouds, affecting Earth's albedo.\n", "The exact process is described: Clouds are water condensation. This cannot occur without Cloud condensation nuclei in the atmosphere. The emission laws have removed most of this, reducing cloud cover, meaning the ground loses heat faster. This in combination with the drop in greenhouse gases has resulted in the return and exacerbation of the Little Ice Age; now self-perpetuating as glaciers have a much higher albedo.\n", "Distribution in the mesosphere is similar to the stratosphere except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to polar regions of Earth. A major seasonal difference is that convective lift from below the mesosphere pushes very scarce water vapor to higher colder altitudes required for cloud formation during the respective summer seasons in the northern and southern hemispheres. Sightings are rare more than 45 degrees south of the north pole or north of the south pole.\n\nSection::::Extraterrestrial.\n", "There are forces throughout the homosphere (which includes the troposphere, stratosphere, and mesosphere) that can impact the structural integrity of a cloud. However, as long as the air remains saturated, the natural force of cohesion that hold the molecules of a substance together acts to keep the cloud from breaking up. Dissolution of the cloud can occur when the process of adiabatic cooling ceases and upward lift of the air is replaced by subsidence. This leads to at least some degree of adiabatic warming of the air which can result in the cloud droplets or crystals turning back into invisible water vapor. Stronger forces such as wind shear and downdrafts can impact a cloud, but these are largely confined to the troposphere where nearly all the Earth's weather takes place. A typical cumulus cloud weighs about 500 metric tons, or 1.1 million pounds, the weight of 100 elephants.\n", "It is less clear how cloudiness would respond to a warming climate; depending on the nature of the response, clouds could either further amplify or partly mitigate warming from long-lived greenhouse gases.\n", "There are five main ways water vapor can be added to the air. Increased vapor content can result from wind convergence over water or moist ground into areas of upward motion. Precipitation or virga falling from above also enhances moisture content. Daytime heating causes water to evaporate from the surface of oceans, water bodies or wet land. Transpiration from plants is another typical source of water vapor. Lastly, cool or dry air moving over warmer water will become more humid. As with daytime heating, the addition of moisture to the air increases its heat content and instability and helps set into motion those processes that lead to the formation of cloud or fog.\n", "The complexity and diversity of clouds in the troposphere is a major reason for difficulty in quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the sun, diminishing the amount of solar radiation that is absorbed at the surface, enhancing the Earth's albedo. 
Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The water reacts by radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.\n", "Water vapor accounts for the largest percentage of the greenhouse effect, between 36% and 66% for clear sky conditions and between 66% and 85% when including clouds. Water vapor concentrations fluctuate regionally, but human activity does not directly affect water vapor concentrations except at local scales, such as near irrigated fields. Indirectly, human activity that increases global temperatures will increase water vapor concentrations, a process known as water vapor feedback. The atmospheric concentration of vapor is highly variable and depends largely on temperature, from less than 0.01% in extremely cold regions up to 3% by mass in saturated air at about 32 °C. (See Relative humidity#other important facts.)\n", "A sea of clouds forms generally in valleys or over seas in very stable air mass conditions such as in a temperature inversion. Humidity can then reach saturation and condensation leads to a very uniform stratocumulus cloud, stratus cloud or fog. Above this layer, the air must be dry. This is a common situation in a high-pressure area with cooling at the surface by radiative cooling at night in summer, or advection of cold air in winter or in a marine layer.\n\nSection::::Artistic uses.\n", "Section::::Historical background.\n", "Many storms contain shelf clouds, which are often mistaken for wall clouds, since an approaching shelf cloud appears to form a wall made of cloud and may contain turbulent motions. Wall clouds are inflow clouds and tend to slope inward, or toward the precipitation area of a storm. Shelf clouds, on the other hand, are outflow clouds that jut outward from the storm, often as gust fronts. Also, shelf clouds tend to move outward away from the precipitation area of a storm.\n", "The intermediate layers of the troposphere are the regions with less influence of human activity. This region is far enough from the surface for not being affected by the surface emissions. Additionally, commercial and military flights only cross this region during ascending or descending maneuvers. Moreover, in this region there exist two types of clouds with a large horizontal extension: \"Nimbostratus\" and \"Altostratus\", which cannot originate from human activity. Consequently, it is assumed that there are no anthropic clouds of these two \"genera\". However, what can occur is enhancing existing \"Nimbostratus\" or \"Altostratus\" due to the additional water vapor or condensation nuclei emitted by a thermal power plant, for instance. \n", "Sometimes the inversion layer is at a high enough altitude that cumulus clouds can condense but can only spread out under the inversion layer. This decreases the amount of sunlight reaching the ground and prevents new thermals from forming. As the clouds disperse, sunny weather replaces cloudiness in a cycle that can occur more than once a day.\n", "The lowest part of the atmosphere is the region with the largest influence of human activity emission of water vapor, warm air, and condensation nuclei. 
When the atmosphere is stable, the additional contribution of warm and moist air from emissions enhances fog formation or layers of \"Stratus homoogenitus\" (\"Sth\"). If the air is not stable, this warm and moist air emitted by human activities creates a convective movement that can reach the lifted condensation level producing an anthropic cumulus cloud, or \"Cumulus homogenitus\" (\"Cuh\"). This type of clouds may be also observed over the polluted air covering some cities or industrial areas in high-pressure conditions.\n", "BULLET::::3. The air must contain condensation nuclei, small solid particles, where condensation/sublimation starts.\n\nThe current use of fossil fuels enhances any of these three conditions. First, fossil fuel combustion generates water vapor. Additionally, this combustion also generates the formation of small solid particles that can act as condensation nuclei. Finally, all the combustion processes emit energy that enhance vertical upward movements. \n", "Section::::Distribution: Where tropospheric clouds are most and least prevalent.:Divergence along high pressure zones.\n", "Section::::History.:Europe.:Germany.\n\nIn Germany civic engagement societies organize cloud seeding on a region level. A registered society maintains aircraft for cloud seeding to protect agricultural areas from hail in the district Rosenheim, the district Miesbach, the district Traunstein (all located in southern Bavaria, Germany) and the district Kufstein (located in Tyrol, Austria).\n", "Section::::Climate effects.:Aerosol radiative effects.:Indirect effect.\n\nThe Indirect aerosol effect consists of any change to the earth's radiative budget due to the modification of clouds by atmospheric aerosols, and consists of several distinct effects. Cloud droplets form onto pre-existing aerosol particles, known as cloud condensation nuclei (CCN).\n", "Careful study of MISR images of the western coast of Peru revealed that actinoform-like clouds showed up roughly a quarter of the time as distinct formations within the more common, stratocumulus clouds in that region. Closer examination showed that actinoform clouds occur worldwide in nearly every region where marine stratus or stratocumulus clouds are common, particularly off the western coasts of continents—especially off Peru, Namibia in Africa, Western Australia, and Southern California. Such cloud systems are persistent year-round off the coast, yet in certain seasons they blow ashore and create the \"June Gloom\" effect on land. These cloud systems rarely form near the equator.\n" ]
[ "Clouds do not settle on the Earth's surface.", "Clouds do not setttle down to the surface." ]
[ "Clouds do settle on the Earth's surface.", "Clouds are settling down, they are just very light so that is a low as they sink." ]
[ "false presupposition" ]
[ "Clouds do not settle on the Earth's surface.", "Clouds do not setttle down to the surface." ]
[ "false presupposition", "false presupposition" ]
[ "Clouds do settle on the Earth's surface.", "Clouds are settling down, they are just very light so that is a low as they sink." ]
2018-01324
When viewing a lengthy video on YouTube, how does YouTube know where to place the ads?
If you are talking about the banner ads which appear over your video, then the position of those is set by YouTube and you cannot change them. What you can do, though, is set ad breaks in your video where your footage will stop and an ad will appear. It's very much like TV, when a presenter says "And we will be back after this break". If your videos are not too short (YouTube has rules about how many ad breaks can go in compared to the length of a video), then setting an ad break like this can be the most profitable way to place ads in your footage. If you access the video editor and click on the 'Monetisation' tab, you will see at the bottom of the page a slider for 'Ad breaks'. You need to have the option for Skippable video ads above it clicked, but once you have, you can choose whether an ad will appear at the start, the end or somewhere in the middle, and the longer your video is, the more ad breaks you can put in. TLDR: The creator can set where the ads go.
[ "Uploaded videos were saved as a .gvi files under the \"Google Videos\" folder in \"My Videos\" and reports of the video(s) details were logged and stored in the user account. The report sorted and listed the number of times that each of the user's videos had been viewed and downloaded within a specific time frame. These ranged from the previous day, week, month or the entire time the videos have been there. Totals were calculated and displayed and the information could be downloaded into a spreadsheet format or printed out.\n\nSection::::Video distribution methods.:Website.\n", "Section::::South America.\n\nBULLET::::- Argentina: Two Argentine analogues also exist, one is called Mapplo, which is claimed to be the first street view in Latin America. Fotocalle, another Argentine project, is claimed to be the first street view service in the world to provide HD pictures.\n", "This customer information is combined and returned to the supply side platform, which can now package up the offer of ad space along with information about the user who will view it. The supply side platform sends that offer to an ad exchange.\n", "Section::::Revenue.:Partnership with video creators.\n\nIn May 2007, YouTube launched its Partner Program (YPP), a system based on AdSense which allows the uploader of the video to share the revenue produced by advertising on the site. YouTube typically takes 45 percent of the advertising revenue from videos in the Partner Program, with 55 percent going to the uploader.\n", "In January 2007, Bismarck Lepe was working for Google. While developing new monetization techniques for YouTube, he came up with the idea of using computer vision techniques to deliver targeted advertising on TV shows recorded on TiVo. He contacted his brother Belasar and a friend, Sean Knapp, to discuss his idea.\n", "The ad exchange then passes the link to the ad back through the supply side platform and the publisher's ad server to the user's browser, which then requests the ad content from the agency's ad server. The ad agency can thus confirm that the ad was delivered to the browser.\n", "Users can select different video streams pulled from the WAMI system's vast field of view and, with the help of advanced data compression techniques, watch them live on their computer screens or handheld devices. In some systems, users can also designate \"watchboxes\" within the sensor's field of view to provide automated alerts should the system detect movement in the area.\n", "BULLET::::- 2004: Signing of a contract with the NSPO for the distribution of data from the Formosat-2 satellite\n\nBULLET::::- 2005: Signing of a contract with KARI for the distribution of data from the Kompsat-2 satellite\n\nBULLET::::- 2006: Opening of an office in Peru\n\nBULLET::::- 2008: Spot Image is 81% owned by Astrium Services, an EADS company\n\nBULLET::::- 2008: Spot Image Brasil replaces the office opened in this country\n\nBULLET::::- 2012 : Launch of SPOT 6\n\nBULLET::::- 2014 : Launch of SPOT 7\n\nSection::::See also.\n\nBULLET::::- SPOT (satellites)\n\nBULLET::::- CNES\n\nBULLET::::- EADS\n\nSection::::External links.\n\nBULLET::::- Spot Image Web Site\n", "They work by a company setting up an account with YouTube CMS (the system used for ContentID), the company adds anyone who signs a contract with them to their CMS, allowing users (and the CMS account owner) to use monetization, block and track policies. 
Monetization allows for videos to generate revenue, Block prevents access to videos and Track allows content owners to see the analytics of 'reuploads' and copyright infringing content. Some MCN partners can block videos by country (e.g., if a video is uploaded with a banned or unlicensed logo).\n", "\"Reach Planner\" is a tool that allows users to forecast the reach and extent of their video ads across YouTube and Google video partners. The tool allows users to choose their audience. The tool then recommends a combination of video ads that help reach the users objectives. The tool also allows users to see the outcomes of the reach of their adds on a reach curve.\n\nSection::::Features and services.:IP address exclusion.\n", "In addition to controlling ad placements through targeting audiences based on location and language usage, ad placements can be refined with Internet Protocol (IP) address exclusion. This feature enables advertisers to exclude specified IP address ranges if they do not want their ads to appear there. Advertisers can exclude up to 500 IP address ranges per campaign.\n\nSection::::Features and services.:Google Partners.\n", "Spot Image works with a network of more than 30 direct receiving stations handling images acquired by the SPOT satellites.\n\nSpot Image collaborates with ESA's GMES programme, shares geographic information with the OGC and contributes to the interoperability of web services; with Infoterra Global it continues to offer services for precision-agriculture.\n\nSection::::Satellites.\n", "On May 18, 2014, \"Variety\" first reported that Google had reached a preliminary deal to acquire Twitch through its YouTube subsidiary for approximately .\n\nSection::::History.:August 2014 changes.\n", "The basic way to watch the videos was through the Google Video website, video.google.com. Each video had a unique web address in the format of codice_1, and that page contained an embedded Flash Video file which could be viewed in any Flash-enabled browser.\n\nPermalinks to a certain point in a video were also possible, in the format of codice_2 (that is, with a fragment identifier containing a timestamp).\n\nSection::::Video distribution methods.:Flash video.\n", "Section::::Search criterion.:Text recognition.\n\nThe text recognition can be very useful to recognize characters in the videos through \"chyrons\". As with speech recognizers, there are search engines that allow (through character recognition) to play a video from a particular point.\n", "In 2002, a dedicated business line, Philips Content Identification, was formed to further develop the technology, particularly in the domains of monitoring and forensic tracking applications using Philips audio and video digital watermarking technology, and the recently developed audio and video digital fingerprinting technology. The same year, Philips Content Identification signed a joint venture agreement with corporate news and multimedia services provider Medialink, to launch Teletrax, the world's first global broadcast intelligence service. \n", "In 2008, Google developed a partnership with GeoEye to launch a satellite providing Google with high-resolution (0.41 m monochrome, 1.65 m color) imagery for Google Earth. The satellite was launched from Vandenberg Air Force Base on September 6, 2008. 
Google also announced in 2008 that it was hosting an archive of \"Life Magazine\"s photographs.\n\nIn January 2009, Google announced a partnership with the Pontifical Council for Social Communications, allowing the Pope to have his own channel on YouTube.\n", "BULLET::::- Austria: Google Street View was banned in Austria because Google was found to collect Wifi data without authorization in 2010. After the ban was lifted rules were set up for how Street View can operate legally in Austria. Google resumed collecting imagery in 2017. As of 2018 Google Street View is available in select areas of Austria.\n", "Marketing videos are made on the basis of campaign target. Explainer videos are used for explaining a product, commercial videos for introducing a company, sales videos for selling a product, and social media videos for brand awareness.\n\nIndividual Internet marketing videos are primarily produced in-house and by small media agencies, while a large volume of videos are produced by big media companies, crowdsourced production marketplaces, or in scalable video production platforms.\n", "BULLET::::- Apple's Look Around will be released by the end of 2019, filming started in June.\n\nSection::::Africa.\n\nBULLET::::- Nigeria: \"Moriwo\" offers panoramic street view of Lagos.\n\nSection::::Asia.\n\nBULLET::::- Armenia: Russian company Yandex, offers street panoramas for Yerevan.\n\nBULLET::::- Bangladesh: Bangladeshi Company Barikoi, offers street360 for Dhaka.\n", "Videology\n\nVideology is an advertising software company based in New York City. It was founded in 2007 as Tidal TV and launched a Hulu competitor in 2008. In 2012, it was rebranded as Videology and now develops software that sends ads to specific demographics within an audience of video viewers, performs analytics, and other functions.\n\nSection::::History.\n", "Section::::Features.:Localization.\n\nOn June 19, 2007, Google CEO Eric Schmidt traveled to Paris to launch the new localization system. The interface of the website is available with localized versions in 102 countries, one territory (Hong Kong) and a worldwide version.\n", "Google Contributor\n\nGoogle Contributor is a program run by Google that allows users in the Google Network of content sites to view the websites without any advertisements that are administered, sorted, and maintained by Google.\n\nThe program started with prominent websites, like The Onion and Mashable among others, to test this service. After November 2015, the program opened up to any publisher who displayed ads on their websites through Google AdSense without requiring any sign-on from publishers.\n", "AdSense for video allows publishers with video content (e.g., video hosting websites) to generate revenue using ad placements from Google's extensive advertising network. The publisher is able to decide what type of ads are shown with their video inventory. Formats available include linear video ads (pre-roll or post-roll), overlay ads that display AdSense text and display ads over the video content, and the TrueView format. Publishers can also display companion ads - display ads that run alongside video content outside the player. AdSense for video is for publishers running video content within a player and not for YouTube publishers.\n", "Audio \"beacons\" can be embedded into television advertisements. In a similar manner to radio beacons, these can be picked up by mobile apps. 
This allows the behavior of users to be tracked, including which ads were seen by the user and how long they watched an ad before changing the channel.\n" ]
[]
[]
[ "normal" ]
[ "YouTube chooses where to put ads in the video." ]
[ "false presupposition", "normal" ]
[ "The content creator chooses where to put the ads in the video. " ]
2018-04790
Why are buses not designed in a more aerodynamic way?
Aerodynamics only matters at high speeds. Drag is proportional to velocity squared. If you're travelling at low speeds (like city buses typically do), it's better to optimize for carrying capacity than for aerodynamics. If a bus was designed to be more aerodynamically efficient (rounded front, tapered back), it wouldn't have as large an interior, and thus couldn't carry as many people. Even for a Greyhound bus, which DOES travel at highway speeds, more people is better. The equation for Greyhound's income for one bus trip is this: Income = (Ticket Price * Number of Passengers) - Fuel Costs - Operating Costs. Making the bus more aerodynamic lowers fuel costs, but also reduces the capacity of the bus. The extra $$$ from additional passengers outweighs the slight fuel savings from a more aerodynamic bus.
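To make the "drag is proportional to velocity squared" point concrete, here is a minimal sketch using the standard drag equation F = ½ρC_dAv². The drag coefficient and frontal-area values are rough assumptions for a boxy city bus, not figures from the answer above.

```python
# Illustrative sketch; Cd and frontal area below are assumed, not sourced.
RHO_AIR = 1.2       # kg/m^3, sea-level air density
CD_BUS = 0.8        # assumed drag coefficient for a boxy bus
FRONTAL_AREA = 7.5  # m^2, assumed ~2.5 m wide x 3 m tall

def drag_force(speed_ms, cd=CD_BUS, area=FRONTAL_AREA):
    """Aerodynamic drag force in newtons at a given speed in m/s."""
    return 0.5 * RHO_AIR * cd * area * speed_ms**2

city = 30 / 3.6      # ~30 km/h city speed, in m/s
highway = 100 / 3.6  # ~100 km/h highway speed, in m/s

for label, v in [("city", city), ("highway", highway)]:
    f = drag_force(v)
    print(f"{label}: {f:.0f} N drag, {f * v / 1000:.1f} kW to overcome it")
```

Since the power needed to overcome drag grows as v³, streamlining pays back on a highway coach but buys very little in stop-and-go city service, which is why capacity wins the design trade-off for city buses.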
[ "Section::::History.:1960s.\n\nIn 1961, a new bus design, the Hamburg, was unveiled at the Geneva Motor Show. At a time when most coaches were rounded, bulbous or streamlined, the new design had clear-cut lines with edges and large windows. Developed by the founder's eldest son, Albrecht Auwärter, and another student, Swiss national Bob Lee, as part of their dissertation at Hamburg University. The design also allowed every passenger to regulate their fresh air supply through a nozzle from two air ducts, commonly seen today.\n", "Bus manufacturers have to have consideration for some general issues common to body, chassis or integral builders.\n\nBULLET::::- Maximum weight (laden and unladen)\n\nBULLET::::- Stability - often a tilt test pass is required\n\nBULLET::::- Maximum dimensions - length and width restrictions may apply\n\nBULLET::::- Fuel consumption\n\nBULLET::::- Emissions standards\n\nBULLET::::- Accessibility\n", "New Looks were available in both Transit and Suburban versions. Transits were traditional city buses with two doors; Suburbans had forward-facing seats (four-abreast), underfloor luggage bays, and had only one door. The floor beneath the seats was higher than the center aisle to accommodate the luggage bays. There were also \"Suburban-style\" transits which had forward-facing seats on slightly raised platforms that gave the appearance of a dropped center aisle. GM refused to install lavatories on its buses; at least one transit authority (Sacramento Transit Authority in Sacramento, California) added its own.\n", "In several parts of the world, the bus is still a basic chassis, front-engined bonneted vehicle; however, where manufacturers have sought to maximise the seating capacity within legal size constraints, the trend is towards rear- and mid-engined designs.\n", "The New Look was built in , and lengths and widths. buses had different-length side windows, so the profiles of both buses looked very similar, but not the same.\n\nSection::::Description.:Variants based on the New Look.\n\nIn 1981–82, Brown Boveri & Company constructed 100 model HR150G trolley buses from New Look bus shells for the Edmonton Transit System. Some of these buses remained in use for 27 years, until the Edmonton trolley bus system was shut down in 2009.\n", "Bodywork is built for three general uses:\n\nBULLET::::- Bus\n\nBULLET::::- Dual Purpose\n\nBULLET::::- Coach\n\nBus bodywork is usually geared to short trips, with many transit bus features. Coach bodywork is for longer distance trips, with luggage racks and under-floor lockers. Other facilities may include toilets and televisions.\n\nA dual purpose design is usually a bus body with upgraded coach style seating, for longer distance travel. Some exclusive coach body designs can also be available to a basic dual purpose fitment.\n\nIn past double-deck designs, buses were built to a low bridge design, due to overall height restrictions.\n\nSection::::General design issues.\n", "The buses to be found in countries around the world often reflect the quality of the local road network, with high floor resilient truck-based designs prevalent in several less developed countries where buses are subject to tough operating conditions. 
Population density also has a major impact, where dense urbanisation such as in Japan and the far east has led to the adoption of high capacity long multi-axle buses, often double-deckers while South America and China are implementing large numbers of articulated buses for bus rapid transit schemes.\n\nSection::::Around the world.:Bus expositions.\n", "Further accessibility is being achieved for high-floor coaches, whereby new designs are featuring built-in wheelchair lifts.\n\nWhile the overwhelming majority of bus designs have been geared to internal combustion engine propulsion, accommodation has also been made for a variety of alternative drivelines and fuels, as in electric, fuel cell and hybrid bus technologies. Some bus designs have also incorporated guidance technology.\n\nSection::::Types of construction.\n\nThere are three basic types of bus manufacturer:\n\nBULLET::::- Chassis manufacturer - builds the underframe for body-on-frame construction\n\nBULLET::::- Body manufacturer - builds the coachwork for body-on-frame construction\n", "Large users of transit buses, such as public transport authorities, may order special features. This practice was notable in the Transport for London bus specification, and predecessors. The Association of German Transport Companies was defining a VöV-Standard-Bus concept that was followed between 1968 and 2000.\n\nSection::::Chassis.\n\nThe chassis combines:\n\nBULLET::::- A structural underframe\n\nBULLET::::- Engine and radiator\n\nBULLET::::- Gearbox and transmission\n\nBULLET::::- Wheels, axles, and suspension\n\nBULLET::::- Dashboard, steering wheel, and driver's seat\n", "The latest bus advertising campaign by Adidas for the Brazil World Cup 2014 made use of full wrap and window coverage techniques. Transport for London launched the new formats as part of its ‘year of the bus’ celebrations, which commemorates the 60th anniversary of the Routemaster bus and the 100th anniversary of the first mass-produced motorbus.\n\nSection::::Campaign and promotion buses.\n", "Section::::Additional features.:In tunnels or subterranean structures.\n\nA special issue arises in the use of buses in metro transit structures. Since the areas where the demand for an exclusive bus right-of-way are apt to be in dense downtown areas where an above-ground structure may be unacceptable on historic, logistic, or environmental grounds, use of BRT in tunnels may not be avoidable.\n", "The six main proposals in the plan were:\n\nBULLET::::1. Principal trunk routes from the suburbs to the centre were to remain operated by rear entrance double deck buses with open platforms and conductors in the medium term. A number of routes were to be shortened to counteract the effects of congestion, and thus improve control over running times.\n\nBULLET::::2. Extension of the Red Arrow system of standee buses in the West End and City. A map of possible routes was produced.\n", "Since buses are usually powered by internal combustion engines, bus metros raise ventilation issues similar to those of motor vehicle tunnels. 
Powerful fans typically exchange air through ventilation shafts to the surface; these are usually as remote as possible from occupied areas, to minimize the effects of noise and concentrated pollution.\n", "The invention of see-through graphics, most commonly applied as a self-adhesive perforated window film, allowed the creation of more elaborate designs that could be applied over windows (although for safety reasons not the front window), moving away from the traditional square box design approach to adverts.\n\nWith the advent of partially transparent window coverage techniques, all over adverts have been applied as a full vehicle advertising wrap windows and all. The transition from screen printing to digital printing has seen an increase in the color range and complexity of advert designs.\n", "BULLET::::- Büssing AG: Präfekt 11 Standard, Präfekt 13 Standard, Präfekt 14D Standard, BS 100 V, BS 110 V, BS 117 SÜ\n\nBULLET::::- Magirus-Deutz: M 170 S 11 H, M 170 S 10 H, M 170 SH 110, M 200 SH 110, M 230 SH 110, M 260 SH 110, M 260 SH 170, M 230 L 117 and M 260 L 117\n\nBULLET::::- MAN: 750 HO-SL / SL192, SL200, 890SG / SG192, SG220, SG240H, SG280H, GG280H, SD200, SÜ240, MAN / Gräf & Stift SG\n\nBULLET::::- Mercedes-Benz: O305, O305G, O305OE, O307\n\nBULLET::::- Ikarus: Ikarus 190\n\nBULLET::::- Heuliez: O305, O305G (based on Mercedes-Benz models)\n\nBULLET::::- Berliet: PR100, ER100, PR180 (with changes to front and rear)\n\nSection::::Bus models.:Unsuccessful semi low-floor prototype.\n", "Historically, the types and features of buses have developed according to local needs. Buses were fitted with technology appropriate to the local climate or passenger needs, such as air conditioning in Asia, or cycle mounts on North American buses. The bus types in use around the world where there was little mass production were often sourced second hand from other countries, such as the Malta bus, and buses in use in Africa. Other countries such as Cuba required novel solutions to import restrictions, with the creation of the \"camellos\" (camel bus), a specially manufactured trailer bus.\n", "BULLET::::- Alexander Dennis Enviro350H\n\nBULLET::::- Daewoo BC212MA\n\nBULLET::::- Daewoo BS105\n\nBULLET::::- DAF SB220\n\nBULLET::::- Dennis Falcon\n\nBULLET::::- Dennis Lance/Lance SLF\n\nBULLET::::- Irisbus Agoraline\n\nBULLET::::- MAN NL262\n\nBULLET::::- MAN NLxx3F\n\nBULLET::::- Mercedes-Benz Citaro\n\nBULLET::::- Mercedes-Benz O305\n\nBULLET::::- Mercedes-Benz O405\n\nBULLET::::- Mercedes-Benz O500M/U\n\nBULLET::::- Mercedes-Benz OC 500 LE\n\nBULLET::::- Mercedes-Benz OF-OH\n\nBULLET::::- Optare Excel\n\nBULLET::::- Optare Tempo\n\nBULLET::::- Scania Citywide\n\nBULLET::::- Scania K UB\n\nBULLET::::- Scania L113\n\nBULLET::::- Scania L94UB\n\nBULLET::::- Scania N UB\n\nBULLET::::- Scania N113\n\nBULLET::::- Scania N94UB\n\nBULLET::::- Scania OmniCity\n\nBULLET::::- Thaco TB120CT\n\nBULLET::::- VDL SB200\n\nBULLET::::- VDL SB250\n\nBULLET::::- Volvo B7RLE\n\nBULLET::::- Volvo B10B\n\nBULLET::::- Volvo B10BLE\n", "BULLET::::3. About forty suburban centres were to have local flat fare networks of short-distance routes. These \"satellite\" routes were also to act as feeders to the Underground stations and trunk routes. They were to be operated by single deck buses.\n\nBULLET::::4. Suburban routes not suitable for flat fare networks were to retain a \"graduated\" fare system. They would gradually be converted to one man operated single-deck buses.\n\nBULLET::::5. 
Services in the country (green) area were to be largely unaffected, although OMO was to be introduced where practical.\n", "The development of the midibus has also given many operators a low-cost way of operating a transit bus service, with some midibuses such as the Plaxton SPD \"Super Pointer Dart\" resembling full size transit type vehicles.\n\nSection::::Developments.\n\nDue to their public transport role, transit buses were the first type of bus to benefit from low-floor technology, in response to a demand for equal access public service provision. Transit buses are also now subject to various disability discrimination acts in several jurisdictions which dictate various design features also applied to other vehicles in some cases.\n", "In the 1990s, bus manufacture underwent major change with the push toward low-floor designs, for improved accessibility. Some smaller designs achieved this by moving the door behind the front wheels. On most larger buses, it was achieved with various independent front suspension arrangements, and kneeling technology, to allow an unobstructed path into the door and between the front wheel arches. Accordingly, these 'extreme front entrance' designs cannot feature a front-mounted-engined or mid-engined layout, and all use a rear-engined arrangement. Some designs also incorporate extendable ramps for wheelchair access.\n", "Bus manufacturing may be by a single company (an integral manufacturer), or by one manufacturer's building a bus body over a chassis produced by another manufacturer.\n\nSection::::Design.\n\nSection::::Design.:Accessibility.\n\nTransit buses used to be mainly high-floor vehicles. However, they are now increasingly of low-floor design and optionally also 'kneel' air suspension and have electrically or hydraulically extended under-floor ramps to provide level access for wheelchair users and people with baby carriages. Prior to more general use of such technology, these wheelchair users could only use specialist paratransit mobility buses.\n", "Some transit agencies refused to order low-floor buses altogether, such as New Jersey Transit and MUNI owing to terrain conditions in the service area. DART still has a preference for high floor buses. Although New York City Transit runs some 40-foot low-floors, it originally refused to order low-floor buses, namely D60LFs from New Flyer, after the D60HF, a high floor model, was discontinued mid-delivery. However, they have demonstrated both the D60LF and NovaBus LFSA, the latter of which they have decided to order.\n\nSection::::Asia.\n\nSection::::Asia.:India.\n\nSection::::Asia.:India.:Bengaluru.\n", "In Jasper Fforde's 2007 comic novel First Among Sequels. 
The heroine of the novel, Thursday Next, finds herself in the laboratories of her arch-enemy, the industrial monolith Goliath Corporation who decided they would develop a vehicle to transport coach parties of rich tourists into novels:\n\n\"In the centre of the room and looking resplendent in the blue and yellow livery of some forgotten bus company was a flat-fronted single deck coach that to my mind dated from the fifties…\n\n\"Why base it on an old coach?\" I asked…\n", "BULLET::::- SG 220, articulated bus, underfloor engine (1978–1983)\n\nBULLET::::- SG 240/280 H, articulated bus, rear engine (1980–1986)\n\nBULLET::::- North-American models:\n\nBULLET::::- SG 220, articulated bus, underfloor engine (1978–1983)\n\nBULLET::::- SG 310, articulated bus, underfloor engine (1981–1988)\n\nBULLET::::- VöV-Standard buses, 2nd generation\n\nBULLET::::- SL 202, city bus (1984–1993)\n\nBULLET::::- SG 242/282 H, \"puller\" articulated bus (1985–1990)\n\nBULLET::::- SG 242/262/292/312/322, \"pusher\" articulated bus (1986–1999)\n\nBULLET::::- SD 202, double-decker bus (1986–1992)\n\nBULLET::::- SÜ 242/272/292/312/322, regional bus (1987–1998)\n\nBULLET::::- SM 152/182, midibus (1989–1992)\n\nBULLET::::- NL 202, low-floor bus with podium-mounted seats (1989–1992)\n\nBULLET::::- NG 272, low-floor articulated bus with podium-mounted seats (1990–1992)\n", "BULLET::::- Kaye, Buses and Trolleybuses Since 1945, London 1968\n\nBULLET::::- Hillditch, A Further Look At Buses, Shepperton 1981\n\nBULLET::::- Booth, The British Bus, Today and Tomorrow, Shepperton 1983\n\nBULLET::::- Curtis, Bus Monographs:5 Bristol RE, Shepperton, 1987\n\nBULLET::::- Townsin, Duple, 75 Years Of Coachbuilding, Glossop, 1998\n\nBULLET::::- Brown, Buses in Britain: the 1970s, Harrow Weald 1999\n\nBULLET::::- Brown, Plaxton A Century of Innovation, Hersham, 2007\n\nBULLET::::- Curtis, Bristol Lodekka, Hersham 2009\n\nSection::::References.:Magazines.\n\nBULLET::::- Parke(ed), Buses, Shepperton passim 1969-78\n\nBULLET::::- Parke(ed), Buses Extra, Shepperton, passim 1977-81\n\nBULLET::::- Morris (ed) Buses Extra, Shepperton, passim 1982-92\n\nBULLET::::- Booth (ed) Classic Bus, Edinburgh, passim 1992-2005\n" ]
[ "Busses should be more aerodynamic." ]
[ "Busses have a more economimc incentive to make a larger bus not a more aerodynamic bus. " ]
[ "false presupposition" ]
[ "Busses should be more aerodynamic." ]
[ "false presupposition" ]
[ "Busses have a more economimc incentive to make a larger bus not a more aerodynamic bus. " ]
2018-08644
Why do most companies prefer or are more likely to hire female candidates for clerical positions?
I used to work on equality-related employment data for a national government. There are lots of reasons, which vary by country. But one major reason is that women are more likely to apply for clerical positions because the role is slightly gendered, and advertisements for these roles reflect that in their language. This tendency is strengthened when we talk about unqualified workers, because women are more likely to apply for clerical work than (for example) manual labour positions. But as with any gender-related issue, the real picture is very complex.
[ "BULLET::::- In March 1992 the first female priests in Australia were appointed; they were priests of the Anglican Church in Australia.\n\nBULLET::::- Maria Jepsen became the world's first woman to be elected a Lutheran bishop when she was elected bishop of the North Elbian Evangelical Lutheran Church in Germany, but she resigned in 2010 after allegations that she failed to properly investigate cases of sexual abuse.\n\nBULLET::::- In November 1992 the General Synod of the Church of England approved the ordination of women as priests.\n\nBULLET::::- The Anglican Church of South Africa started to ordain women.\n", "Traditionally clerical positions have been held almost exclusively by women. Even today, the vast majority of clerical workers in the US continue to be female. As with other predominantly female positions, clerical occupations were, and to some extent continue to be, assigned relatively low prestige on a sexist basis. The term pink-collar worker is often used to describe predominantly female white collar positions.\n\nSection::::United States.:Clerical workers and unions.\n", "Sex discrimination has been outlawed in non-ministerial employment in the United States since 1964 nationwide; however, under a judicially created doctrine called the \"ministerial exemption,\" religious organizations are immune from sex discrimination suits brought by \"ministerial employees,\" a category that includes such religious roles as priests, imams or kosher supervisors. \n", "BULLET::::- 2016: Lila Kagedan became the first female clergy member hired by an Orthodox synagogue while using the title \"rabbi.\" This occurred when Mount Freedom Jewish Center in New Jersey, which is Open Orthodox, hired Kagedan to join their \"spiritual leadership team.\"\n\nBULLET::::- 2017: The Orthodox Union adopted a policy banning women from serving as clergy, from holding titles such as \"rabbi\", or from doing common clergy functions even without a title, in its congregations in the United States.\n\nBULLET::::- 2017: Ruti Regan became the first openly autistic person to be ordained by the Jewish Theological Seminary of America.\n", "BULLET::::- Heather Cook was the first woman elected as a bishop in the Episcopal Diocese of Maryland.\n\nBULLET::::- The Bishop of Basel, Felix Gmür, allowed the Basel Catholic church corporations, which are officially only responsible for church finances, to formulate an initiative appealing for equality between men and women in ordination to the priesthood.\n\nBULLET::::- The Association of Catholic Priests in Ireland stated that the Catholic church must ordain women and allow priests to marry in order to survive.\n", "Among the most significant examples of resistance to female clergy has been in the position of senior pastor in large church settings. For example, in the United Methodist Church only two female ministers have ever led churches with membership numbers within the top 100 of United Methodist churches in the U.S. The most recent national data (2005) indicates that there are no female ministers currently leading top 100 membership UMCs. Resistance has eased more rapidly for the position of bishop in the United Methodist Church. For 2004–2008, 15 of the 50 (30%) United Methodist bishops serving the U.S. 
are women.\n", "BULLET::::- The Church in Wales elected Joanna Penberthy as its first female bishop.\n\nBULLET::::- 2017:\n\nBULLET::::- The Orthodox Union adopted a policy banning women from serving as clergy, from holding titles such as \"rabbi\", or from doing common clergy functions even without a title, in its congregations in the United States.\n\nBULLET::::- Susan Frederick-Gray was elected as the first female president of the Unitarian Universalist Association.\n\nBULLET::::- Keshira haLev Fife was ordained by the Kohenet Hebrew Priestess Institute, thus becoming Australia's first Hebrew Priestess.\n", "BULLET::::- Pérsida Gudiel became the first woman ordained by the Lutheran Church in Guatemala.\n\nBULLET::::- Mimi Kanku Mukendi became the first female pastor ordained by the Communauté Evangélique Mennonite au Congo (Mennonite Evangelical Community of Congo), although they voted to ordain women as pastors in 1993.\n\nBULLET::::- The Mennonite Church of Congo approved women's ordination.\n\nBULLET::::- Christine Lee was ordained as the Episcopal Church's first female Korean-American priest.\n\nBULLET::::- Alma Louise De bode-Olton became the first female priest ordained in the Anglican Episcopal Church in Curaçao.\n", "BULLET::::- With the October 16, 2010, ordination of Margaret Lee, in the Peoria-based Diocese of Quincy, Illinois, women have been ordained as priests in all 110 dioceses of the Episcopal Church in the United States.\n\nBULLET::::- 2011:\n\nBULLET::::- Kirsten Eistrup, 55, became the first female priest in the Danish Seamen's Church in Singapore. She was also the Lutheran Protestant Church's first female pastor in Asia.\n\nBULLET::::- Kirsten Fehrs became the first woman to be a bishop in the North Elbian Evangelical Lutheran Church.\n\nBULLET::::- Annette Kurschus became the first woman to be a praeses of the Evangelical Church of Westphalia.\n", "BULLET::::- Penny Jamieson became the first female Anglican diocesan bishop in the world. She was ordained a bishop of the Anglican Church in New Zealand in June 1990.\n\nBULLET::::- Anglican women were ordained in Ireland. Janet Catterall became the first woman ordained an Anglican priest in Ireland.\n\nBULLET::::- Sister Cora Billings was installed as a pastor in Richmond, VA, becoming the first black nun to head a parish in the U.S.\n\nBULLET::::- The Cantors Assembly, an international professional organization of cantors associated with Conservative Judaism, began allowing women to join.\n\nBULLET::::- 1991:\n", "BULLET::::- Nancy Abramson became the first female president of the Cantors Assembly, an international professional organization of cantors associated with Conservative Judaism.\n\nBULLET::::- 2014:\n\nBULLET::::- Fanny Sohet Belanger, born in France, was ordained in America and thus became the first French female priest in the Episcopal Church.\n\nBULLET::::- Dr. Sarah Macneil was consecrated and installed as the first female diocesan bishop in Australia (for the Diocese of Grafton in New South Wales).\n\nBULLET::::- The Lutheran Church in Chile ordained Rev. 
Hanna Schramm, born in Germany, as its first female pastor.\n", "BULLET::::- For the first time in the history of the Church of England, more women than men were ordained as priests (290 women and 273 men).\n\nBULLET::::- The first American women to be ordained as cantors in Jewish Renewal after Susan Wehle's ordination were Michal Rubin and Abbe Lyons, both ordained on January 10, 2010.\n", "BULLET::::- 1994: The first women priests were ordained by the Scottish Episcopal Church.\n\nBULLET::::- 1994: Indrani Rampersad was ordained as the first female Hindu priest in Trinidad.\n\nBULLET::::- 1994: On March 12, 1994, the Church of England ordained 32 women as its first female priests.\n\nBULLET::::- 1995: The Sligo Seventh-day Adventist Church in Takoma Park, Maryland, ordained three women in violation of the denomination's rules - Kendra Haloviak, Norma Osborn, and Penny Shell.\n\nBULLET::::- 1995: The Evangelical Lutheran Church in Denmark ordained its first woman as a bishop.\n", "BULLET::::- Leontine Kelly, the first black woman to become a bishop of a major religious denomination in the United States, is elected head of the United Methodist Church in the San Francisco area.\n\nBULLET::::- Dr. Deborah Cohen became the first certified Reform mohelet (female mohel); she was certified by the Berit Mila program of Reform Judaism.\n\nBULLET::::- From 1984 to 1990 Barbara Borts, born in America, was a rabbi at Radlett Reform Synagogue, making her the first woman rabbi to have a pulpit of her own in a UK Reform Judaism synagogue.\n\nBULLET::::- 1985:\n", "BULLET::::- 1998: Some Orthodox Jewish congregations started to employ women as congregational interns, a job created for learned Orthodox Jewish women. Although these interns do not lead worship services, they perform some tasks usually reserved for rabbis, such as preaching, teaching, and consulting on Jewish legal matters. The first woman hired as a congregational intern was Julie Stern Joseph, hired in 1998 by the Lincoln Square Synagogue of the Upper West Side.\n\nBULLET::::- 1999: Beth Lockard was ordained as the first Deaf pastor in the Evangelical Lutheran Church in America.\n", "Asked in 1997 to comment on the decline of female enrollment in divinity schools in the United States, following their increased presence in the 1970s, Farley said that It's hard for them to have all that education and to know they can't be ordained. It challenges their faith and commitment. The possibility of ordination is looking dimmer, but I'm still optimistic that someday it may be possible or even needed. Catholicism is the only denomination with a shortage of clergy.\n", "BULLET::::- Pat Storey was the first woman to be appointed as a bishop in the Church of Ireland, and the first in all Ireland and the United Kingdom. 
The Church of Ireland has permitted the ordination of women as bishops since 1990.\n\nBULLET::::- The Church of Sweden elected Antje Jackelen as Sweden's first female archbishop.\n\nBULLET::::- The Anglican Synod of Ballarat voted to allow the ordination of women as priests.\n\nBULLET::::- Mary Froiland was the first woman elected as a bishop in the South-Central Synod of Wisconsin of the Evangelical Lutheran Church of America.\n", "BULLET::::- Linda Rich became the first female cantor to sing in a Conservative synagogue, specifically Temple Beth Zion in Los Angeles, although she was not ordained.\n\nBULLET::::- Mindy Jacobsen became the first blind woman to be ordained as a cantor in the history of Judaism.\n\nBULLET::::- Lauma Lagzdins Zusevics was ordained as the first woman to serve as a full-time minister for the Latvian Evangelical Lutheran Church in America.\n\nBULLET::::- 1979:\n\nBULLET::::- The Reformed Church in America started ordaining women as ministers. Women had been admitted to the offices of deacon and elder in 1972.\n", "BULLET::::- 2014: Felix Gmür, Bishop of Basel, allowed the Basel Catholic church corporations, which are officially only responsible for church finances, to formulate an initiative appealing for equality between men and women in ordination to the priesthood.\n\nBULLET::::- 2014: The Association of Catholic Priests in Ireland stated that the Catholic church must ordain women and allow priests to marry in order to survive.\n", "Since the ordination of women as priests began in 1994, dioceses generally have on the Bishop's senior staff a Dean of Women's Ministry (or Bishop's Adviser in Women's Ministry or similar), whose role it is to advocate for clergy who are women and to ensure the Bishop is appraised of issues peculiar to their ministry. These Advisers meet together in a National Association (NADAWM).\n", "BULLET::::- The Baptist Faith and Message was amended in 2000 to state, \"While both men and women are gifted for service in the church, the office of pastor is limited to men as qualified by Scripture.\"\n\nBULLET::::- The Mennonite Brethren Church of Congo ordained its first female pastor in 2000.\n\nBULLET::::- Helga Newmark, born in Germany, became the first female Holocaust survivor ordained as a rabbi. She was ordained in America.\n\nBULLET::::- In July 2000 Vashti McKenzie was the first woman elected as a bishop in the African Methodist Episcopal (AME) Church.\n", "BULLET::::- Maria Pap was elected to the position of district dean in the Unitarian Church of Transylvania, the highest post ever held by a woman in that Church.\n\nBULLET::::- 2005:\n\nBULLET::::- The Lutheran Evangelical Protestant Church, (LEPC) (GCEPC) in the USA elected Nancy Kinard Drew as its first female Presiding Bishop.\n\nBULLET::::- Annalu Waller, who had cerebral palsy, was ordained as the first disabled female priest in the Scottish Episcopal Church.\n\nBULLET::::- Floriane Chinsky, born in Paris and ordained in Jerusalem, became Belgium's first female rabbi.\n", "BULLET::::- Auður Eir Vilhjálmsdóttir became the first woman to be ordained into the Evangelical Lutheran Church of Iceland.\n\nBULLET::::- 1975\n\nBULLET::::- The Evangelical Lutheran Church of Latvia decided to ordain women as pastors, although since 1993, under the leadership of Archbishop Janis Vanags, it no longer does so.\n\nBULLET::::- Dorothea W. 
Harvey became the first woman to be ordained by the Swedenborgian Church.\n\nBULLET::::- Barbara Ostfeld-Horowitz became the first female cantor ordained in Reform Judaism.\n\nBULLET::::- Mary Matz became the first female minister in the Moravian Church.\n", "BULLET::::- 1911: St. Joan's International Alliance, founded in 1911, was the first Catholic group to work for women being ordained as priests.\n\nBULLET::::- 1912: Olive Winchester, born in America, became the first woman ordained by any trinitarian Christian denomination in the United Kingdom when she was ordained by the Church of the Nazarene.\n\nBULLET::::- 1914: The Assemblies of God was founded and ordained its first woman pastors in 1914.\n", "BULLET::::- 1975: Jackie Tabick, born in Dublin, became the first female rabbi ordained in England.\n\nBULLET::::- 1976: The Anglican Church in Canada ordained six female priests.\n\nBULLET::::- 1976: The Revd Pamela McGee was the first female ordained to the Lutheran ministry in Canada.\n\nBULLET::::- 1976: Venerable Karuna Dharma became the first fully ordained female member of the Buddhist monastic community in the U.S.\n\nBULLET::::- 1977: The Anglican Church in New Zealand ordained five female priests.\n\nBULLET::::- 1977: Pauli Murray became the first African American woman to be ordained as an Episcopal priest in 1977.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-03511
How is a nuclear submarine lost at sea not a danger?
Water absorbs nuclear radiation really well; that's why we use it in our reactors (well, that and the whole steam thing). The radiation won't penetrate more than like 20 meters even if there was a catastrophic containment failure, but normally it's in a giant steel box. So no, it's fine. Not ideal, but not a danger.
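The "water absorbs radiation really well" claim follows the usual exponential attenuation law I = I₀e^(−μx). The sketch below is illustrative only; the linear attenuation coefficient (about 0.07 per cm for roughly 1 MeV gamma rays in water) is an assumed reference-style value, not a figure from the answer above.

```python
# Illustrative sketch: exponential attenuation of gamma rays in water.
# MU_WATER below is an assumed value for ~1 MeV photons, not sourced.
import math

MU_WATER = 0.07  # 1/cm, assumed linear attenuation coefficient

def surviving_fraction(depth_m, mu=MU_WATER):
    """Fraction of gamma rays passing straight through depth_m metres of water."""
    return math.exp(-mu * depth_m * 100)  # convert metres to centimetres

for depth in (0.5, 1, 5, 10):
    print(f"{depth:>4} m of water: {surviving_fraction(depth):.3e} of photons remain")
# Even a few metres of water cuts the direct gamma flux by many orders of
# magnitude, which is why a reactor resting in deep water poses little
# radiation hazard at a distance.
```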
[ "BULLET::::- K-192 (1989; Echo II-class submarine; loss of coolant)\n\nBULLET::::- K-141 \"Kursk (2000; Oscar II-class submarine; sank, 118 killed)\n\nBULLET::::- K-159 (2003; November-class submarine; sank under tow, 9 killed)\n\nSection::::See also.\n\nBULLET::::- JASON reactor\n\nBULLET::::- List of United States Naval reactors\n\nBULLET::::- Naval Reactors\n\nBULLET::::- Decommissioning of Russian nuclear-powered vessels\n\nSection::::External links.\n\nBULLET::::- http://www.nukestrat.com/pubs/nep7.pdf - 1994 paper highlighting limited, public-relations only value of all-nuclear task groups given continued dependence on conventionally fuelled escorts and continuous replenishment of supplies\n", "A 2009 safety assessment by the Defence Nuclear Safety Regulator concluded that PWR2 reactor safety was significantly short of good practice in comparable navies in two important areas: loss-of-coolant accident and control of submarine depth following emergency reactor shutdown. The regulator concluded that PWR2 was \"potentially vulnerable to a structural failure of the primary circuit\", which is a failure mode with significant safety hazards to crew and the public. Operational procedures have been amended to minimise these risks.\n", "BULLET::::- SN Álvaro Alberto (SN-10) – (Under Construction)\n\nSection::::Accidents.\n\nSection::::Accidents.:Reactor accidents.\n\nSome of the most serious nuclear and radiation accidents by death toll in the world have involved nuclear submarine mishaps. To date, all of these were units of the former Soviet Union. Reactor accidents that resulted in core damage and release of radioactivity from nuclear-powered submarines include:\n\nBULLET::::- \"K-8\", 1960: suffered a loss-of-coolant accident; substantial radioactivity released.\n\nBULLET::::- \"K-14\", 1961: the reactor compartment was replaced due to unspecified \"breakdown of reactor protection systems\".\n", "In 2013 the Defence Nuclear Safety Regulator reported that the reactor systems were suffering increasing technical problems due to ageing, requiring effective management. An example was that \"Tireless\" had had a small radioactive coolant leak for eight days in February 2013.\n\nSection::::Characteristics.\n", "Current generations of nuclear submarines never need to be refueled throughout their 25-year lifespans. Conversely, the limited power stored in electric batteries means that even the most advanced conventional submarine can only remain submerged for a few days at slow speed, and only a few hours at top speed, though recent advances in air-independent propulsion have somewhat ameliorated this disadvantage. The high cost of nuclear technology means that relatively few states have fielded nuclear submarines. Some of the most serious nuclear and radiation accidents ever to occur have involved Soviet nuclear submarine mishaps.\n\nSection::::History.\n", "The main disadvantages of an SSN are the technological challenges and expenses of building and maintaining a nuclear power plant. Nuclear submarines can have political downsides, as some countries refuse to accept nuclear-powered vessels as a matter of policy. 
Furthermore, decommissioned nuclear submarines require costly dismantling and long term storage of the radioactive waste.\n\nThe following navies currently operate SSNs:\n\nBULLET::::- People's Liberation Army Navy of China\n\nBULLET::::- French Navy\n\nBULLET::::- Indian Navy\n\nBULLET::::- Russian Navy\n\nBULLET::::- Royal Navy of the United Kingdom\n\nBULLET::::- United States Navy\n\nSection::::Active and future SSN classes.\n\nBULLET::::- Brazilian Navy\n\nBULLET::::- - 1 planned\n", "Several serious nuclear and radiation accidents have involved nuclear submarine mishaps. The reactor accident in 1961 resulted in 8 deaths and more than 30 other people were over-exposed to radiation. The reactor accident in 1968 resulted in 9 fatalities and 83 other injuries. The accident in 1985 resulted in 10 fatalities and 49 other radiation injuries.\n\nSection::::Technology.:Propulsion.:Alternative.\n\nOil-fired steam turbines powered the British K-class submarines, built during World War I and later, to give them the surface speed to keep up with the battle fleet. The K-class subs were not very successful, however.\n", "On 20 July 2016, while operating at periscope depth on a training exercise in the Strait of Gibraltar, collided with a merchant ship, sustaining significant damage to the top of her conning tower. The merchant vessel did not sustain any damage. It was reported that no crew members were injured during the collision and that the submarine's nuclear reactor section remained completely undamaged.\n\nSection::::2017.\n\nSection::::2017.:UC3 \"Nautilus\" sinking.\n", "BULLET::::- Radioactive resin contaminates the American \"Sturgeon\"-class submarine USS \"Guardfish\" after wind unexpectedly blows the powder back towards the ship. The resin is used to remove dissolved radioactive minerals and particles from the primary coolant loops of submarines. This type of accident was fairly common; however, U.S. Navy nuclear vessels no longer discharge resin at sea.\n\nBULLET::::- October 1975 – Apra Harbor, Guam – Spill of irradiated water\n\nBULLET::::- While disabled, the submarine tender USS \"Proteus\" discharged radioactive coolant water. A Geiger counter at two of the harbor's public beaches showed 100 millirems/hour, fifty times the allowable dose.\n", "Three were lost with all hands - the two from the United States Navy (129 and 99 lives lost) and one from the Russian Navy (118 lives lost), and these are also the three largest losses of life in a submarine. All sank as a result of accident except for , which was scuttled in the Kara Sea when proper decommissioning was considered too expensive. The Soviet submarine carried nuclear ballistic missiles when it was lost with all hands, but as it was a diesel-electric submarine, it is not included in the list.\n", "BULLET::::- The only Project 645 submarine (a variant of the Project 627 , with liquid metal cooled reactors), \"K-27\" was decommissioned in 1979 after many years of difficulty with its reactor. On September 6, 1982, the Soviet Navy scuttled it in shallow water () in the Kara Sea after sealing the reactor compartment. This sinking in shallow water was contrary to the recommendation of the International Atomic Energy Agency (IAEA).\n", "A marine nuclear propulsion plant must be designed to be highly reliable and self-sufficient, requiring minimal maintenance and repairs, which might have to be undertaken many thousands of miles from its home port. 
One of the technical difficulties in designing fuel elements for a seagoing nuclear reactor is the creation of fuel elements which will withstand a large amount of radiation damage. Fuel elements may crack over time and gas bubbles may form. The fuel used in marine reactors is a metal-zirconium alloy rather than the ceramic UO (uranium dioxide) often used in land-based reactors. Marine reactors are designed for long core life, enabled by the relatively high enrichment of the uranium and by incorporating a \"burnable poison\" in the fuel elements, which is slowly depleted as the fuel elements age and become less reactive. The gradual dissipation of the \"nuclear poison\" increases the reactivity of the core to compensate for the lessening reactivity of the aging fuel elements, thereby lengthening the usable life of the fuel. The life of the compact reactor pressure vessel is extended by providing an internal neutron shield, which reduces the damage to the steel from constant bombardment by neutrons.\n", "BULLET::::- Left to rust for 14 years after being decommissioned, this Soviet-era November-class submarine sank in the Barents Sea on August 28, 2003, when a storm ripped away the pontoons necessary to keep it afloat under tow. Nine of the 10 salvage men on board were killed.\n\nSection::::See also.\n\nBULLET::::- Nuclear submarine accidents\n\nBULLET::::- List of sunken aircraft carriers\n\nBULLET::::- List of sunken battlecruisers\n\nBULLET::::- List of sunken battleships\n\nBULLET::::- List of military nuclear accidents\n\nBULLET::::- Lists of nuclear disasters and radioactive incidents\n\nSection::::References.\n\nSection::::References.:General.\n", "Insurance of nuclear vessels is not like the insurance of conventional ships. The consequences of an accident could span national boundaries, and the magnitude of possible damage is beyond the capacity of private insurers. A special international agreement, the \"Brussels Convention on the Liability of Operators of Nuclear Ships\", developed in 1962, would have made signatory national governments liable for accidents caused by nuclear vessels under their flag but was never ratified owing to disagreement on the inclusion of warships under the convention. Nuclear reactors under United States jurisdiction are insured by the provisions of the Price Anderson Act.\n", "SUBSAFE addresses only flooding; mission assurance is not a concern, simply a side benefit. Other safety programs and organizations regulate such things as fire safety, weapons systems safety, and nuclear reactor systems safety.\n\nFrom 1915 to 1963, the United States Navy lost 16 submarines to non-combat related causes. Since SUBSAFE began in 1963, only one submarine, , has been lost, and that boat was not yet SUBSAFE certified.\n\nSection::::History.\n", "While land-based reactors in nuclear power plants produce up to around 1600 megawatts of electrical power, a typical marine propulsion reactor produces no more than a few hundred megawatts. Space considerations dictate that a marine reactor must be physically small, so it must generate higher power per unit of space. This means its components are subject to greater stresses than those of a land-based reactor. Its mechanical systems must operate flawlessly under the adverse conditions encountered at sea, including vibration and the pitching and rolling of a ship operating in rough seas. Reactor shutdown mechanisms cannot rely on gravity to drop control rods into place as in a land-based reactor that always remains upright. 
Salt water corrosion is an additional problem that complicates maintenance.\n", "Section::::Current status.:Sinking of the \"Wenonah\".\n", "Note that all nine of the U.S. Navy nuclear-powered cruisers (CGN) have now been stricken from the Naval Vessel Register, and those not already scrapped by recycling are scheduled to be recycled. While reactor accidents have not sunk any U.S. Navy ships or submarines, two nuclear-powered submarines, and were lost at sea. The condition of these reactors has not been publicly released, although both wrecks have been investigated by Robert Ballard on behalf of the Navy using remotely operated vehicles (ROVs).\n", "List of sunken nuclear submarines\n\nNine nuclear submarines have sunk, either by accident or scuttling. The Soviet Navy has lost five (one of which sank twice), the Russian Navy two, and the United States Navy (USN) two.\n", "Section::::Contamination and health effects.:Nuclear target ship wreck.\n", "Storage batteries provided as a reserve source of energy must be installed in accordance with applicable electrical codes and good engineering practice. They must be protected from adverse weather and physical damage. They must be readily accessible for maintenance and replacement.\n\nSection::::GMDSS sea areas.\n", "Of the nine sinkings, two were caused by fires, two by weapon explosions, two by flooding, one by bad weather, and one by scuttling due to a damaged nuclear reactor. Only 's reason for sinking is unknown. Eight of the submarines are underwater wrecks in the Northern Hemisphere, five in the Atlantic Ocean and three in the Arctic Ocean. The ninth submarine, \"K-429\", was raised and returned to active duty after both of her sinkings.\n\nSection::::United States.\n", "Section::::Crew.\n\nA typical nuclear submarine has a crew of over 80; conventional boats typically have fewer than 40. The conditions on a submarine can be difficult because crew members must work in isolation for long periods of time, without family contact. Submarines normally maintain radio silence to avoid detection. Operating a submarine is dangerous, even in peacetime, and many submarines have been lost in accidents.\n\nSection::::Crew.:Women.\n", "Decommissioning nuclear-powered submarines has become a major task for American and Russian navies. After defuelling, U.S. practice is to cut the reactor section from the vessel for disposal in shallow land burial as low-level waste (see the Ship-Submarine recycling program).\n\nSection::::See also.\n\nBULLET::::- List of United States Naval reactors\n\nBULLET::::- Naval Reactors\n\nBULLET::::- Nuclear marine propulsion\n\nBULLET::::- Naval Nuclear Power School\n\nBULLET::::- Radioisotope thermoelectric generator\n\nBULLET::::- Nuclear powered cruisers of the United States Navy\n\nBULLET::::- Nuclear powered submarines of the United States Navy\n\nSection::::External links.\n\nBULLET::::- The Uranium Information Centre provided some of the original material in this article.\n", "BULLET::::- May 22, 1968 – 740 km (400 nmi) southwest of the Azores – Loss of nuclear reactor and two W34 nuclear warheads\n\nBULLET::::- The U.S. submarine USS \"Scorpion\" (SSN-589) sank while en route from Rota, Spain, to Norfolk, Virginia, USA. The cause of sinking remains unknown; all 99 officers and men on board were killed. The wreckage of the submarine, its S5W nuclear reactor, and its two Mark 45 torpedoes with W34 nuclear warheads, remain on the sea floor in more than 3,000 m (9,800 ft) of water.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-16995
How exactly is inflation over time determined? How can someone say X dollars today was Y dollars in 1978?
By comparing the prices of similar items over time and building an index: a gallon of milk, a pound of potatoes, a gallon of gas, etc. Using a large enough sample of items, you can build a year-by-year picture of what the purchasing power of $1 was.
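As a minimal sketch of that index arithmetic in Python — the CPI figures are the 1982–84 = 100 values quoted in the passages below, and the function name `adjust` is just illustrative:

```python
# CPI-U values (1982-84 = 100) as quoted in the passage below.
CPI = {1970: 38.8, 1980: 82.4, 1990: 130.7, 2000: 172.2, 2010: 219.2, 2018: 251.1}

def adjust(amount, year_from, year_to):
    """Restate a dollar amount from one year's prices in another year's prices."""
    return amount * CPI[year_to] / CPI[year_from]

print(round(adjust(1.00, 1970, 2018), 2))  # $1 in 1970 ~ $6.47 in 2018 dollars
```

The same ratio works for any pair of years in the table; statements like "X dollars today was Y dollars in 1978" are just this division, using whichever price index the source chose.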
[ "BULLET::::- Historical inflation Before collecting consistent econometric data became standard for governments, and for the purpose of comparing absolute, rather than relative standards of living, various economists have calculated imputed inflation figures. Most inflation data before the early 20th century is imputed based on the known costs of goods, rather than compiled at the time. It is also used to adjust for the differences in real standard of living for the presence of technology.\n", "Alternatively, the CPI can be performed as formula_3. The \"updated cost\" (i.e. the price of an item at a given year, e.g.: the price of bread in 2018) is divided by that of the initial year (the price of bread in 1970), then multiplied by one hundred.\n\nSection::::Calculating the CPI for multiple items.\n\nMany but not all price indices are weighted averages using weights that sum to 1 or 100.\n", "One of the basic principles in historical cost accounting is \"The Measuring Unit principle\" (or stable measuring unit assumption): The unit of measure in accounting shall be the base money unit of the most relevant currency.\n\nThis principle also assumes the unit of measure is stable; that is, changes in its general purchasing power are not considered sufficiently important to require adjustments to the basic financial statements.The inflation which occurs over the passage of time is not considered.\"\n", "Accountants in the United Kingdom and the United States have discussed the effect of inflation on financial statements since the early 1900s, beginning with index number theory and purchasing power. Irving Fisher's 1911 book \"The Purchasing Power of Money\" was used as a source by Henry W. Sweeney in his 1936 book \"Stabilized Accounting\", which was about Constant Purchasing Power Accounting. This model by Sweeney was used by The American Institute of Certified Public Accountants for their 1963 research study (ARS6) \"Reporting the Financial Effects of Price-Level Changes\", and later used by the Accounting Principles Board (USA), the Financial Standards Board (USA), and the Accounting Standards Steering Committee (UK). Sweeney advocated using a price index that covers everything in the gross national product. In March 1979, the Financial Accounting Standards Board (FASB) wrote \"Constant Dollar Accounting\", which advocated using the Consumer Price Index for All Urban Consumers (CPI-U) to adjust accounts because it is calculated every month.\n", "Other common measures of inflation are:\n\nBULLET::::- GDP deflator is a measure of the price of all the goods and services included in gross domestic product (GDP). The US Commerce Department publishes a deflator series for US GDP, defined as its nominal GDP measure divided by its real GDP measure.\n\n∴ formula_2\n\nBULLET::::- Regional inflation The Bureau of Labor Statistics breaks down CPI-U calculations down to different regions of the US.\n", "People often seek a comparison between the price of an item in the past, and the price of an item today. Over short periods of time, like months, inflation may measure the role an object and its cost played in an economy: the price of fuel may rise or fall over a month. The price of money itself changes over time, as does the availability of goods and services as they move into or out of production. What people choose to consume changes over time. Finally, concepts such as cash money economies may not exist in past periods, nor ideas like wage labour or capital investment. 
Comparing what someone paid for a good, how much they had to work for that money, what the money was worth, how scarce a particular good was, what role it played in someone's standard of living, what its proportion was as part of social income, and what proportion it was as part of possible social production is a difficult task. This task is made more difficult by conflicting theoretical concepts of worth.\n", "BULLET::::- Dollar Value LIFO. Under this variation of LIFO, increases or decreases in the LIFO reserve are determined based on dollar values rather than quantities.\n", "National accounts can be presented in nominal or real amounts, with real amounts adjusted to remove the effects of price changes over time. A corresponding price index can also be derived from national output. Rates of change of the price level and output may also be of interest. An inflation rate (growth rate of the price level) may be calculated for national output or its expenditure components. Economic growth rates (most commonly the growth rate of GDP) are generally measured in real (constant-price) terms. One use of economic-growth data from the national accounts is in growth accounting across longer periods of time for a country or across to estimate different sources of growth, whether from growth of factor inputs or technological change.\n", "BULLET::::- Consumer Price Indexes and Wage-Price series : Used to compare the price of a basket of standard consumer goods for an \"average\" individual (often defined as non-agricultural workers, based on survey data), or to assess the ability of individuals to acquire these baskets. For example, used to answer the question, \"Has the money price of goods purchased by a typical household risen over time?\" or used to make adjustments in international comparisons of standards of living.\n", "The decline in the value of the U.S. dollar corresponds to price inflation, which is a rise in the general level of prices of goods and services in an economy over a period of time. A consumer price index (CPI) is a measure estimating the average price of consumer goods and services purchased by households. The United States Consumer Price Index, published by the Bureau of Labor Statistics, is a measure estimating the average price of consumer goods and services in the United States. It reflects inflation as experienced by consumers in their day-to-day living expenses. A graph showing the U.S. CPI relative to 1982–1984 and the annual year-over-year change in CPI is shown at right.\n", "In the United States, CPI has been rapidly increasing, especially recently. The following data that is presented in this section is using the base year 1982 with having a base of 100. This can be interpreted as a CPI of 150 means that there was 50% increase in inflation since 1982. CPI's for various years are listed below with 1982 as the base year \n\n1920: 20.0, \n\n1930: 16.7, \n\n1940: 14.0, \n\n1950: 24.1, \n\n1960: 29.6, \n\n1970: 38.8, \n\n1980: 82.4, \n\n1990: 130.7, \n\n2000: 172.2, \n\n2010: 219.2, \n\n2018: 251.1\n", "These data from the Consumer Expenditure Survey are used in a number of different ways by a variety of users. One important use of the survey is for the periodic revision of the Bureau of Labor Statistics’ Consumer Price Index (CPI). The Bureau uses survey results to select new market baskets of goods and services for the CPI every two years, to determine the relative importance of CPI components, and to derive new cost weights for the market baskets. 
\n", "Historical series computed from statistical data sets, or estimated from archival records have a number of other problems, including changing consumption bundles, consumption bundles not representing standard measures, and changes to the structure of social worth itself such as the move to wage labour and market economies.\n\nSection::::Different series and their use.\n\nA different time series should be used depending on what kind of economic object is being compared over time:\n", "Ideally, in computing an index, the weights would represent current annual expenditure patterns. In practice they necessarily reflect past using the most recent data available or, if they are not of high quality, some average of the data for more than one previous year. Some countries have used a three-year average in recognition of the fact that household survey estimates are of poor quality. In some cases some of the data sources used may not be available annually, in which case some of the weights for lower level aggregates within higher level aggregates are based on older data than the higher level weights.\n", "Substitution bias can cause inflation rates to be over-estimated. Data collected for a price index, if from an earlier period, may poorly correspond to the prices and consumer-expenditure-shares going to goods whose prices later changed. To reduce this problem, several steps can be taken by makers of price indexes:\n\nBULLET::::- Collect price data and expenditure data frequently to capture recent changes, and incorporate both into the indexes quickly\n\nBULLET::::- Adopt superlative index formulas for price indexes, usually Tornqvist indexes or Fisher indexes\n", "Inflation numbers are often seasonally adjusted to differentiate expected cyclical cost shifts. For example, home heating costs are expected to rise in colder months, and seasonal adjustments are often used when measuring for inflation to compensate for cyclical spikes in energy or fuel demand. Inflation numbers may be averaged or otherwise subjected to statistical techniques to remove statistical noise and volatility of individual prices.\n\nWhen looking at inflation, economic institutions may focus only on certain kinds of prices, or \"special indices\", such as the core inflation index which is used by central banks to formulate monetary policy.\n", "Inflation measures are often modified over time, either for the relative weight of goods in the basket, or in the way in which goods and services from the present are compared with goods and services from the past. Over time, adjustments are made to the type of goods and services selected to reflect changes in the sorts of goods and services purchased by 'typical consumers'. New products may be introduced, older products disappear, the quality of existing products may change, and consumer preferences can shift. Both the sorts of goods and services which are included in the \"basket\" and the weighted price used in inflation measures will be changed over time to keep pace with the changing marketplace.\n", "Because people's buying habits had changed substantially, a new study was made covering expenditures in the years 1934–1936, which provided the basis for a comprehensively revised index introduced in 1940. During World War II, when many commodities were scarce and goods were rationed, the index weights were adjusted temporarily to reflect these shortages. 
In 1951, the BLS again made interim adjustments, based on surveys of consumer expenditures in seven cities between 1947 and 1949, to reflect the most important effects of immediate postwar changes in buying patterns. The index was again revised in 1953 and 1964.\n", "Since February 2000, the Federal Reserve Board’s semiannual monetary policy reports to Congress have described the Board’s outlook for inflation in terms of the PCE. Prior to that, the inflation outlook was presented in terms of the CPI. In explaining its preference for the PCE, the Board stated:\n", "To illustrate the method of calculation, in January 2007, the U.S. Consumer Price Index was 202.416, and in January 2008 it was 211.080. The formula for calculating the annual percentage rate inflation in the CPI over the course of the year is: formula_1\n\nThe resulting inflation rate for the CPI in this one-year period is 4.28%, meaning the general level of prices for typical U.S. consumers rose by approximately four percent in 2007.\n\nOther widely used price indices for calculating price inflation include the following:\n", "The inflation rate is most widely calculated by calculating the movement or change in a price index, typically the consumer price index.\n\nThe inflation rate is the percentage change of a price index over time. The Retail Prices Index is also a measure of inflation that is commonly used in the United Kingdom. It is broader than the CPI and contains a larger basket of goods and services.\n", "In 1978, the index was revised to reflect the spending patterns based upon the surveys of consumer expenditures conducted in 1972–1974. A new and expanded 85-area sample was selected based on the 1970 Census of Population. The Point-of-Purchase Survey (POPS) was also introduced. POPS eliminated reliance on outdated secondary sources for screening samples of establishments or outlets where prices are collected. A second, more broadly based CPI for All Urban Consumers, the CPI-U was also introduced. The CPI-U took into account the buying patterns of professional and salaried workers, part-time workers, the self-employed, the unemployed, and retired people, in addition to wage earners and clerical workers.\n", "The specific choice of measuring financial capital maintenance in units of constant purchasing power (the CMUCPP model) at all levels of inflation and deflation as contained in the Framework for the Preparation and Presentation of Financial Statements, was approved by the International Accounting Standards Board's predecessor body, the International Accounting Standards Committee Board, in April 1989 for publication in July 1989 and adopted by the IASB in April 2001.\n", "The monetary value of assets, goods, and services sold during the year could be grossly estimated using nominal GDP back in the 1960s. This is not the case anymore because of the dramatic rise of the number of financial transactions relative to that of real transactions up until 2008. That is, the total value of transactions (including purchases of paper assets) rose relative to nominal GDP (which excludes those purchases).\n", "The weight (or quantities, to use the above terminology) of an item in the CPI is derived from the expenditure on that item as estimated by the Consumer Expenditure Survey. This survey provides data on the average expenditure on selected items, such as white bread, gasoline and so on, that were purchased by the index population during the survey period. 
In a fixed-weight index such as CPI-U, the implicit quantity of any item used in calculating the index remains the same from month to month.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-14808
In a region of the USA where there is ample fresh water supply, does it really matter how much water I use or even “waste” at home? If so, why? Is it just a matter of power consumption related to purification? Other than that, why does it matter?
It still costs money and energy to pump and treat the water. Also, just because water is plentiful doesn't mean it's infinite: the huge Ogallala aquifer that most of the Plains states draw their water from is being drawn down far faster than it replenishes.
[ "Section::::Indoor use and end uses.\n", "Metering of water supplied by utilities to residential, commercial and industrial users is common in most developed countries, except for the United Kingdom where only about 38% of users are metered. In some developing countries metering is very common, such as in Chile where it stands at 96%, while in others it still remains low, such as in Argentina.\n\nThe percentage of residential water metering in selected cities in developing countries is as follows:\n\nBULLET::::- 99% in Santiago de Chile (1998)\n\nBULLET::::- 96% in Abidjan, Ivory Coast (1987)\n\nBULLET::::- 62% in cities in Guatemala (2000)\n", "Section::::Water quality.:Water treatment.\n\nMost water requires some treatment before use; even water from deep wells or springs. The extent of treatment depends on the source of the water. Appropriate technology options in water treatment include both community-scale and household-scale point-of-use (POU) designs. Only a few a large urban areas such as Christchurch, New Zealand have access to sufficiently pure water of sufficient volume that no treatment of the raw water is required.\n", "BULLET::::- Public order and safety facilities: police stations, fire stations, prison, reformatory, or penitentiary, courthouse or probation office\n\nBULLET::::- Religious buildings: churches, synagogues, monasteries, mosques, sanctuaries.\n\nBULLET::::- Elderly care facilities: retirement homes, residential care/assisted living, hospices.\n\nBULLET::::- Manufacturing: heavy industry, light industry, food and beverage processing, machine shops.\n", "The approximate average quantities of water applied toward specific purposes have to be estimated because only total use of residential customers is metered and recorded for time periods of one month or longer (although the AMR and advance metering infrastructure (AMI) technologies allow for more frequent readings). In the United States, a nationwide compilation of these metered quantities by the United States Geological Survey (USGS) shows average domestic water deliveries (for both indoor and outdoor purposes) by public water suppliers to single-family and multifamily dwellings were about 89 gallons (337 liters) per person per day in 2010 and 83 gallons (314 liters) in 2015. Since early 1980s, the increasing public interest in water conservation prompted questions about consumers’ water-using behaviors and measurement of average quantities of water applied to each domestic purpose. In mid-1990s, the first national study of residential end uses of water was conducted in the U.S. acquiring high resolution data directly from the customer’s water meter and analyzing flow traces to assign each measured water-using event to a specific end use. Several detailed studies of domestic end uses of water in North America and elsewhere followed. In 2016, an update study of residential end uses of water, sponsored by the Water Research Foundation (WRF) was completed and is the most current source of data on the various purposes of residential water use described here.\n", "Section::::Outdoor use.\n", "Residential water use in the U.S. and Canada\n\nResidential water use (also called domestic use, household use, or tap water use) includes all indoor and outdoor uses of drinking quality water at single-family and multifamily dwellings. 
These uses include a number of defined purposes (or water end uses) such as flushing toilets, washing clothes and dishes, showering and bathing, drinking, food preparation, watering lawns and gardens, and maintaining swimming pools. Some of these end uses are detectable (and measurable) while others are more difficult to gauge.\n\nSection::::Water use measurement.\n", "The total quantity of water available at any given time is an important consideration. Some human water users have an intermittent need for water. For example, many farms require large quantities of water in the spring, and no water at all in the winter. To supply such a farm with water, a surface water system may require a large storage capacity to collect water throughout the year and release it in a short period of time. Other users have a continuous need for water, such as a power plant that requires water for cooling. To supply such a power plant with water, a surface water system only needs enough storage capacity to fill in when average stream flow is below the power plant's need.\n", "Water utilities often adopt water efficiency programs that are directed specifically to CII customers. The programs often target the largest water users as well as specific categories of CII customers including government and municipal buildings, large landscape areas, schools and colleges, office buildings, restaurants and hotels. Water use information in these and other readily recognizable functional classes of CII users from several studies of the CII sector are briefly characterized below.\n\nSection::::Water use in major CII categories.\n\nSection::::Water use in major CII categories.:Office buildings.\n", "Section::::Water use in major CII categories.:Elderly care facilities.\n\nNursing homes and assisted living communities account for 3.2 percent of CII use in the State of Florida and 5.4 percent in the urban area served by Tampa Bay Water. The reported estimates of WUI are 232 g/ksf/d in eight utilities from Florida and Texas and a range from 170 to 277 g/ksf/d in Colorado.\n", "Nearly a hundred specific end uses within seven major groupings (i.e., washing and sanitation, domestic-type uses, landscape irrigation, outdoor and indoor water features, cooling and heating, food service, and process water) were compiled in the WRF study. Typically, the total CII use in an urban area or a region is broken down by categories of commercial establishments (and types of institutions or industrial plants) based on the kinds of goods and services provided, or their function. The category's water use is then separated into several end uses (or purposes). \n", "Section::::Indoor use and end uses.:Showering.\n", "Section::::Water use in major CII categories.:Hospitals.\n", "Section::::Indoor use and end uses.:Indoor leaks.\n\nLeaks, or flows of water without a discernible purpose, were observed in nearly 90 percent of monitored homes. The loss of water through leaks accounted for 12 percent of average indoor water use. Estimated loss of water in average household is 6200 gallons (23,500 liters) per year. Common types of leaks include running toilets, slow-leaking toilet flappers, partially opened or dripping faucets, and other cracked or open supply lines. While all observed leaks are included in indoor use, some leaks could occur on outdoor bibs or water features.\n\nSection::::Indoor use and end uses.:Dish washing.\n", "Water treatment plants can be significant consumers of energy. 
In California, more than 4% of the state's electricity consumption goes towards transporting moderate quality water over long distances, treating that water to a high standard. In areas with high quality water sources which flow by gravity to the point of consumption, costs will be much lower.\n\nMuch of the energy requirements are in pumping. Processes that avoid the need for pumping tend to have overall low energy demands. Those water treatment technologies that have very low energy requirements including trickling filters, slow sand filters, gravity aqueducts.\n\nSection::::Regulation.\n\nSection::::Regulation.:United States.\n", "From a public health and drinking water quality point of view it is being argued that the level of real water losses should be as low as possible, independently of economic or financial considerations, in order to minimize the risk of drinking water contamination in the distribution network.\n\nThe World Bank recommends that NRW should be \"less than 25%\", while the Chilean water regulator SISS has determined a NRW level of 15% as optimal in its model of an efficient water company that it uses to benchmark service providers. In England and Wales NRW stands at 19% or 149 liter/property/day.\n", "Section::::Indoor use and end uses.:Toilet flushing.\n", "Section::::Indoor use and end uses.:Baths.\n\nIn addition to showering, baths were recorded in 47 percent of the sampled households in which 2.7 baths were taken each week (or, on average, 1.3 per week across all sampled households). Each bath uses on average 20.2 gallons (or 76.5 liters) of water. \n\nSection::::Indoor use and end uses.:Faucet flows.\n", "Residential indoor water use can vary considerably across households depending on the number of residents (or more specifically, on the size and family composition of each household) and other circumstances (both systematic and random). It also depends on the contribution of the various domestic purposes of water use to the variability of total indoor use. The distributions of the observed average daily volumes for eight major end uses of water also shows considerable variability and a skew toward the right hand tails of the distributions (the data on the figure with distributions of end use volumes are truncated at 120 gpd to enhance the separation of the distribution graphs; in order to include all observations within the right tail of the distributions would require extending the horizontal scale to 560 gpd (to capture the maximum observed volumes of 553 gphd for leaks, 345 gphd for faucets and 223 gphd for toilets). Among the eight indoor end uses, five (i.e., leaks, toilet flushing, showering, clothes washing and faucet use) show pronounced right skew in their distributions that contributes to the “fatter” and longer right-hand tail in total indoor use. Significant reductions in some end uses of water could be achieved not only through the adoption of efficient technologies (i.e., fixtures and appliances) but also through consumers' small behavioral changes to reduce water use and wastage and by eliminating customer side leakage through automated metering and leak alert programs.\n", "Section::::Water uses.:Domestic use[house hold].\n\nIt is estimated that 8% of worldwide water use is for domestic purposes. These include drinking water, bathing, cooking, toilet flushing, cleaning, laundry and gardening. 
Basic domestic water requirements have been estimated by Peter Gleick at around 50 liters per person per day, excluding water for gardens.\n", "on the industry standard of 120 hundred cubic feet (HCF), or approximately . Actual\n\nusage per household will vary. The principal goal of the survey is to track retail rate increases from year to year using a consistent standard.\"\n\nCombined Annual Water & Sewer Charges in MWRA Communities\n\nMWRA SYSTEMWIDE SUMMARY DATA 2007\n\nWATER BILLING FREQUENCY\n\nWATER RATE STRUCTURE\n\nSection::::Rates.:Combined annual water and sewer charges in MWRA municipalities.\n\n(Charges include MWRA, community and alternatively supplied services;\n\nRates based on average annual household use of 120 hundred cubic feet (HCF), or approximately )\n", "Depending on the design, watermakers can be powered by electricity from the battery bank, an engine, an AC generator or hand operated. There is a portable, towed, water-powered watermaker available which converts to hand operation in an emergency.\n\nSection::::Water requirement.\n\nThere is great variation in the amount of water consumed.\n\nAt home in the United States, each person uses about 55 gallons (208 liters) of water per day on average. Where supplies are limited, and in emergencies, much less may be used.\n", "Water management device\n\nWater management devices are water meters which, at once provide accurate data on water flow and water consumption levels, and can be programmed to control water use at household- or business-level. This is valuable for consumer who can ensure that they stay within a certain level of consumption, allowing savings on water costs, or for water suppliers who wish to reduce overall water consumption due to lack of water supply or increased demand.\n\nSection::::Utilization.\n\nSection::::Utilization.:South Africa.\n", "One of the reasons for the high domestic water use in the U.S. is the high share of outdoor water use. For example, the arid West has some of the highest per capita domestic water use, largely because of landscape irrigation. Per capita domestic water use varied from per day in Maine to per day in Arizona and per day in Utah. According to a 1999 study, on average all over the U.S. 58% of domestic water use is outdoors for gardening, swimming pools etc. and 42% is used indoors. A 2016 update of the 1999 study measured the average quantities and percent shares of seven indoor end uses of water:\n", "There are approximately 15,600 nursing homes (with 1,663,300 beds) and 30,200 residential care communities (with 1,000,000 beds) in the U.S. Based on the total floor space of 1,275 million square feet and an assumed average use of 232 g/ksf/d, the total water use by elderly care buildings in the U.S. would be 296 mgd or 2.5 percent of CII use.\n\nSection::::Water use in major CII categories.:Car washes.\n" ]
[ "It is the water that is wasted when being used." ]
[ "The water is re used but the energy used to treat and pump it is wasted." ]
[ "false presupposition" ]
[ "It is the water that is wasted when being used.", "It is the water that is wasted when being used." ]
[ "normal", "false presupposition" ]
[ "The water is re used but the energy used to treat and pump it is wasted.", "The water is re used but the energy used to treat and pump it is wasted. " ]
2018-23117
Why do water pipes sound like they're going to explode when turned back on after a couple hours of being completely shut off?
It's usually the hot water pipe, and what you're hearing is thermal expansion at work. The pipes are held in the walls or under the subfloor with clamps, and as the pipes expand from the newly hot water rushing into them, they slip in those clamps and make noise.
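For a sense of scale, linear thermal expansion gives the size of the movement. This is a hedged illustration: the coefficient is the standard handbook value for copper, and the pipe length and temperature rise are assumed, not taken from the question.

$$\Delta L = \alpha\, L_0\, \Delta T \approx (1.7 \times 10^{-5}\ \text{K}^{-1}) \times (10\ \text{m}) \times (50\ \text{K}) \approx 8.5\ \text{mm}$$

Nearly a centimeter of growth along a 10 m run is plenty to make the pipe tick, creak, and bang as it drags through its clamps.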
[ "Cold water pitting of copper tube\n\nCold water pitting of copper tube occurs in only a minority of installations. Copper water tubes are usually guaranteed by the manufacturer against manufacturing defects for a period of 50 years. The vast majority of copper systems far exceed this time period but a small minority may fail after a comparatively short time.\n", "Air causes irritating system noises, as well as interrupting proper heat transfer to and from the circulating fluids. In addition, unless reduced below an acceptable level, the oxygen dissolved in water causes corrosion. This corrosion can cause rust and scale to build up on the piping. Over time these particles can become loose and travel around the pipes, reducing or even blocking the flow as well as damaging pump seals and other components.\n\nSection::::Air elimination.:Water-loop system.\n\nWater-loop systems can also experience air problems. Air found within hydronic water-loop systems may be classified into three forms:\n\nSection::::Air elimination.:Water-loop system.:Free air.\n", "The majority of failures seen are the result of poor installation or operation of the water system. The most common failure seen in the last 20 years is pitting corrosion in cold water tubes, also known as Type 1 pitting. These failures are usually the result of poor commissioning practice although a significant number are initiated by flux left in the bore after assembly of soldered joints. Prior to about 1970 the most common cause of Type 1 pitting was carbon films left in the bore by the manufacturing process. \n", "Once a system has been commissioned it should be either put immediately into service or drained down and dried. If either of these options is not possible then the system should be flushed though regularly until it is put into use. It should not be left to stand for more than a week. At present stagnation is the most common cause of Type 1 pitting.\n", "A water main break happens when a hole or crack develops in a main and causes it to rupture. They typically result from the external corrosion of the pipe. The water typically finds its way to the surface due to the extreme amount of pressure the water is under. Millions of gallons of water can flow from a single break. In order to get the break under control, the water is shut off and the section of pipe that ruptured is replaced.\n", "Section::::Cause and effect.:Related phenomena.\n\nSteam distribution systems may also be vulnerable to a situation similar to water hammer, known as \"steam hammer\". In a steam system, a water hammer most often occurs when some of the steam condenses into water in a horizontal section of the piping. Steam picks up the water, forming a \"slug\", and hurls this at high velocity into a pipe fitting, creating a loud hammering noise and greatly stressing the pipe. This condition is usually caused by a poor condensate drainage strategy.\n", "Cold water impinging on the cast iron pipe may have also put excess stress on the metal, causing it to fail. The age of the pipe, and the difficulty of inspecting underground infrastructure, made corrosion a possible factor as well, but a New York State safety official reported at a Public Service Commission meeting in September 2007 that there was \"no indication that the pipe was deteriorated or weakened by corrosion.\"\n\nSection::::Cause.:Leaks.\n", "Heating elements of the tubular form are filled with a very fine powder that can absorb moisture if the element has not be used for some time. 
In the tropics, this may occur, for example if a clothes drier has not been used for a year or a large water boiler used for coffee, etc. has been in storage. In such cases, if the unit is allowed to power up without RCD protection then it will normally dry out and successfully pass inspection. This type of problem can be seen even with brand new equipment.\n\nSection::::Types.:Failure to respond.\n", "Other causes of water hammer are pump failure and check valve slam (due to sudden deceleration, a check valve may slam shut rapidly, depending on the dynamic characteristic of the check valve and the mass of the water between a check valve and tank). To alleviate this situation, it is recommended to install non-slam check valves as they do not rely on gravity or fluid flow for their closure. For vertical pipes, other suggestions include installing new piping that can be designed to include air chambers to alleviate the possible shockwave of water due to excess water flow.\n", "Mechanical seals of boiler feedwater pumps often show signs of electrical corrosion. The relative movement between the sliding ring and the stationary ring provokes static charging which is not diverted due to the very low conductivity of the boiler water below one micro-Siemens per cm [μS/cm]. Within short periods of operation – in some cases, only a few hundred operational hours – pieces having the size of fingertips break off from the sliding and/or the stationary ring and cause rapid increases in leakage current. Diamond-coated (DLC) mechanical seals avoid this problem and extend durability remarkably.\n\nSection::::Steam-powered pumps.\n", "BULLET::::- Steam hammer : Steam hammer, the pressure surge generated by transient flow of super-heated or saturated steam in a steam-line due to sudden stop valve closures is considered as an occasional load. Though the flow is transient, for the purpose of piping stress analysis, only the unbalanced force along the pipe segment tending to induce piping vibration is calculated and applied on the piping model as static equivalent force.\n", "BULLET::::- Preventing cavitation: When a machine is in contact with a fluid, it may be susceptible to cavitation. The sounds of gas bubbles imploding is the source of the noise. Ships and submarines which have screws that \"cavitate\" are more vulnerable to detection by sonar.\n\nBULLET::::- Preventing water hammer: In hydraulics and plumbing, water hammer is a known cause for the failure of piping systems. It also generates considerable noise. A valve that abruptly opens or shuts is the most common cause for water hammer.\n", "BULLET::::- BS EN 13192:2002 Non-destructive testing. Leak testing. Calibration of reference leaks for gases\n\nIn shell and tube heat exchangers, Eddy current testing is sometimes done in the tubes to find locations on tubes where there may be leaks or damage which may eventually develop into a leak.\n\nSection::::Corrective action.\n\nIn complex plants with multiple fluid systems, many interconnecting units holding fluids have isolation valves between them. If there is a leak in a unit, its isolation valves can be shut to \"isolate\" the unit from the rest of the plant.\n", "This is a list of major hydroelectric power station failures due to damage to a hydroelectric power station or its connections. Every generating station trips from time to time due to minor defects and can usually be restarted when the defect has been remedied. 
Various protections are built into the stations to cause shutdown before major damage is caused. Some hydroelectric power station failures may go beyond the immediate loss of generation capacity, including destruction of the turbine itself, reservoir breach and significant destruction of national grid infrastructure downstream. These can take years to remedy in some cases.\n", "Testing is done manually using a portable vapor analyzer that read in parts per million (ppm). Monitoring frequency, and the leak threshold, is determined by various factors such as the type of component being tested and the chemical running through the line. Moving components such as pumps and agitators are monitored more frequently than non-moving components such as flanges and screwed connectors. The regulations require that when a leak is detected the component be repaired within a set number of days. Most facilities get 5 days for an initial repair attempt with no more than 15 days for a complete repair. Allowances for delaying the repairs beyond the allowed time are made for some components where repairing the component requires shutting process equipment down.\n", "Section::::Corrosion.\n\nCopper water tubes are susceptible to cold water pitting caused by contamination of the pipe interior, typically with soldering flux; erosion corrosion caused by high speed or turbulent flow; and stray current corrosion, caused by poor electrical wiring technique, such as improper grounding and bonding.\n\nSection::::Pinholes.\n", "Leakage may also mean an unwanted transfer of energy from one circuit to another. For example, magnetic lines of flux will not be entirely confined within the core of a power transformer; another circuit may couple to the transformer and receive some leaked energy at the frequency of the electric mains, which will cause audible hum in an audio application.\n", "In reinforced concretes intact regions will sound solid whereas delaminated areas will sound hollow. Tap testing large concrete structures is carried about either with a hammer or with a chain dragging device for horizontal surfaces like bridge decks. Bridge decks in cold climate countries which use de-icing salts and chemicals are commonly subject to delamination and as such are typically scheduled for annual inspection by chain-dragging as well as subsequent patch repairs of the surface.\n\nSection::::Delamination Resistance Testing Methods.\n\nSection::::Delamination Resistance Testing Methods.:Coating Delamination Tests.\n", "As water passes through the distribution system, the water quality can degrade by chemical reactions and biological processes. Corrosion of metal pipe materials in the distribution system can cause the release of metals into the water with undesirable aesthetic and health effects. Release of iron from unlined iron pipes can result in customer reports of \"red water\" at the tap. Release of copper from copper pipes can result in customer reports of \"blue water\" and/or a metallic taste. Release of lead can occur from the solder used to join copper pipe together or from brass fixtures. Copper and lead levels at the consumer's tap are regulated to protect consumer health.\n", "Section::::Dissolved Gas Analysis.\n", "Due to the always-on design, in the event of a short circuit, either a fuse would blow, or a switched-mode supply would repeatedly cut the power, wait a brief period of time, and attempt to restart. 
For some power supplies the repeated restarting is audible as a quiet rapid chirping or ticking emitted from the device.\n\nSection::::Development.:ATX standard.\n", "In each of the areas that the scale has been disrupted there is the possibility of the initiation of Type 1 pitting. Once pitting has initiated, then even after the tube has been put back into service, the pit will continue to develop until the wall has perforated. This form of attack is often associated with the commissioning of a system. Once a system has been commissioned it should be either put immediately into service or drained down and dried by flushing with compressed air otherwise pitting may initiate. If either of these options is not possible then the system should be flushed through regularly until it is put into use.\n", "BULLET::::- 1981 – A gas leak on Cable 1 occurred at Oteranga Bay. It was repaired in the 1982/83 summer.\n\nBULLET::::- 1988 – Cable 2's Oteranga Bay end joint exploded, spilling insulating oil into the switchyard.\n", "The stability of the voltage and frequency supplied to customers varies among countries and regions. \"Power quality\" is a term describing the degree of deviation from the nominal supply voltage and frequency. Short-term surges and drop-outs affect sensitive electronic equipment such as computers and flat panel displays. Longer-term power outages, brown-outs and black outs and low reliability of supply generally increase costs to customers, who may have to invest in uninterruptible power supply or stand-by generator sets to provide power when the utility supply is unavailable or unusable. Erratic power supply may be a severe economic handicap to businesses and public services which rely on electrical machinery, illumination, climate control and computers. Even the best quality power system may have breakdowns or require servicing. As such, companies, governments and other organizations sometimes have backup generators at sensitive facilities, to ensure that power will be available even in the event of a power outage or black out.\n", "As the tube ends get corroded there is the possibility of cooling water leakage to the steam side contaminating the condensed steam or condensate, which is harmful to steam generators. The other parts of water boxes may also get affected in the long run requiring repairs or replacements involving long duration shut-downs.\n\nSection::::Corrosion.:Protection from corrosion.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-15475
What are the actual physical changes that happen inside a computer while it's running?
Permanent storage in computers is generally done with one of two technologies. Magnetic hard drives have a spinning platter coated with magnetic material and a read/write mechanism that can detect and flip the "direction" of the magnetism in very, very small segments of that platter; writing a file means that mechanism changes the magnetic pattern on the disk. Solid state drives (SSDs) and the common USB flash memory sticks instead use a silicon chip with lots and lots of very small isolated conducting pockets that can trap electric charge (extra electrons), called NAND cells, plus transistors to control them; writing to that memory means storing charge in some cells and draining it from others. Both technologies are non-volatile: the stored pattern persists after the power is turned off.
[ "Software monitors occur more commonly, sometimes as a part of a widget engine. These monitoring systems are often used to keep track of system resources, such as CPU usage and frequency, or the amount of free RAM. They are also used to display items such as free space on one or more hard drives, the temperature of the CPU and other important components, and networking information including the system IP address and current rates of upload and download. Other possible displays may include the date and time, system uptime, computer name, username, hard drive S.M.A.R.T. data, fan speeds, and the voltages being provided by the power supply.\n", "A few very high-end models of hardware system monitor are designed to interface with only a specific model of motherboard. These systems directly utilize the sensors built into the system, providing more detailed and accurate information than less-expensive monitoring systems customarily provide.\n\nSection::::Software monitoring.\n\nSoftware monitoring tools operate within the device they're monitoring.\n\nSection::::Hardware monitoring.\n\nUnlike software monitoring tools, hardware measurement tools can either located within the device being measure, or they can be attached and operate from an external location.\n", "BULLET::::- BSOD (Blue screen of death)\n\nBULLET::::- Corrupt data\n\nTo address them, BIOS updates were released for some NVIDIA nForce 680i SLI based motherboards that eliminate those symptoms. Affected motherboards include: \n\nBULLET::::- EVGA nForce 680i SLI\n\nBULLET::::- BFG nForce 680i SLI\n\nBULLET::::- Biostar TF680i SLI Deluxe\n\nBULLET::::- ECS PN2-SLI2+\n", "Section::::Types of computer systems.:Personal computer.:Power supply.\n\nA power supply unit (PSU) converts alternating current (AC) electric power to low-voltage DC power for the internal components of the computer. Laptops are capable of running from a built-in battery, normally for a period of hours.\n\nSection::::Types of computer systems.:Personal computer.:Motherboard.\n\nThe motherboard is the main component of a computer. It is a board with integrated circuitry that connects the other parts of the computer including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others) as well as any peripherals connected via the ports or the expansion slots.\n", "After each test, the computer is shut down and restarted to release all memory and ready the system for the next test. The computer is also restarted after the initial system setup and final clean-up processes, which prepare it for testing and restore it to its original state.\n\nWorldbench is no longer being sold, and support for existing customers ends mid-2012.\n\nSection::::External links.\n\nBULLET::::- \"Official WorldBench\" \n\nBULLET::::- \"WorldBench 2000\" \n\nBULLET::::- \"PC World's World Bench 6: Behind The Scenes\" \n\nBULLET::::- \"WorldBench 5 Frequently Asked Questions\" \n\nBULLET::::- \"PC World How We Test Laptops\" \n", "This is the scan run that the user will run to generate the data on the initial system. This data is then compared with the product scan. After running the baseline scan, the product whose effect on the attack surface of the Operating System is to be checked is installed. The installation changes the system configuration (possibly) by installing services, changing firewall rules, installing new .NET assemblies and so on. 
Baseline scan is a logical scan run by the user using Attack Surface Analyzer that generates the file containing the configuration of the system before this software is installed.\n", "BULLET::::- Perform a hardware inventory by uploading the remote PC's hardware asset list (platform, baseboard management controller, BIOS, processor, memory, disks, portable batteries, field replaceable units, and other information). Hardware asset information is updated every time the system runs through power-on self-test (POST).\n", "After the board has been populated it may be tested in a variety of ways:\n\nBULLET::::- While the power is off, visual inspection, automated optical inspection. JEDEC guidelines for PCB component placement, soldering, and inspection are commonly used to maintain quality control in this stage of PCB manufacturing.\n\nBULLET::::- While the power is off, analog signature analysis, power-off testing.\n\nBULLET::::- While the power is on, in-circuit test, where physical measurements (for example, voltage) can be done.\n\nBULLET::::- While the power is on, functional test, just checking if the PCB does what it had been designed to do.\n", "UCFF motherboard (NUC5i3RYB, NUC5i5RYB and NUC5i7RYB) and system kit (NUC5i5RYK/NUC5i3RYH, NUC5i5RYK/NUC5i5RYH and NUC5i7RYH) models were designated \"Rock Canyon\". UCFF motherboard (NUC5i3MYBE and NUC5i5MYBE) and system kit (NUC5i3MYHE and NUC5i5MYHE) models were codenamed \"Maple Canyon\".\n\nIn Q4 2018 and Q1 2019 two new SKUs of \"Rock Canyon\" - Refresh have been launched and become two of a few available models still supporting Windows* 7 (NUC5i3RYHSN and NUC5i5RYHS). These modes have updated CPU revisions and other minor changes.\n\nAll models include:\n\nBULLET::::- Dual-channel DDR3L SO-DIMM, 1.35 V, 1333/1600 MHz, 16 GB maximum\n\nBULLET::::- One Gigabit Ethernet port\n", " A monitor displays information in visual form, using text and graphics. The portion of the monitor that displays the information is called the screen. Like a television screen, a computer screen can show still or moving pictures and It’s a part of Output Devices.\n\nSection::::Components.:Mouse.\n", "A hardware monitor is a common component of modern motherboards, which can either come as a separate chip, often interfaced through I²C or SMBus, or as part of a Super I/O solution, often interfaced through Low Pin Count (LPC). These devices make it possible to monitor temperature in the chassis, voltage supplied to the motherboard by the power supply unit and the speed of the computer fans that are connected directly to one of the fan headers on the motherboard. Many of these hardware monitors also have fan controlling capabilities. System monitoring software like SpeedFan on Windows, lm_sensors on GNU/Linux, envstat on NetBSD, and sysctl hw.sensors on OpenBSD and DragonFly can interface with these chips to relay this environmental sensor information to the user.\n", "BULLET::::6. \"Hardware\": View technical info about the PC's hardware, including available memory and battery health. Defrag the hard drives.\n\nSection::::Features.:Downloaded Agent.\n\nSoluto uses a downloaded agent to transmit data to, and receive data from, Soluto's back-end servers. Data transmitted to Soluto includes the apps running during boot up, enabled browser toolbars and add-ons, hardware specs, and a specialized form of crash reports. 
The servers send back to the agent information such as solutions for recent crashes and the remote actions that were initiated by the user and need to occur.\n", "CPU power dissipation\n\nCentral processing unit power dissipation or CPU power dissipation is the process in which central processing units (CPUs) consume electrical energy, and dissipate this energy in the form of heat due to the resistance in the electronic circuits.\n\nSection::::Power management.\n", "On recent motherboards, the BIOS may also patch the central processor microcode if the BIOS detects that the installed CPU is one for which errata have been published.\n\nMany motherboards now use an update to BIOS called UEFI.\n\nSection::::See also.\n\nBULLET::::- Accelerated Graphics Port\n\nBULLET::::- Computer case screws\n\nBULLET::::- CMOS battery\n\nBULLET::::- Daughterboard\n\nBULLET::::- List of computer hardware manufacturers\n\nBULLET::::- Memory Reference Code – the part of the BIOS which handles memory timings on Intel motherboards\n\nBULLET::::- Overclocking\n\nBULLET::::- Single-board computer\n\nBULLET::::- Switched-mode power supply applications\n\nBULLET::::- Symmetric multiprocessing\n\nSection::::External links.\n\nBULLET::::- Motherboard Form Factors - Silverstone Article\n", "Typically a DSP in the electronics inside the drive takes the raw analog voltages from the read head and uses PRML and Reed–Solomon error correction to decode the data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, and performs bad sector remapping, data collection for Self-Monitoring, Analysis, and Reporting Technology, and other internal tasks.\n", "Once the system is booted, hardware monitoring and computer fan control is normally done directly by the Hardware Monitor chip itself, which can be a separate chip, interfaced through I²C or SMBus, or come as a part of a Super I/O solution, interfaced through Low Pin Count (LPC). Some operating systems, like NetBSD with envsys and OpenBSD with sysctl hw.sensors, feature integrated interfacing with hardware monitors, which is normally done without any interaction with the BIOS.\n", "Apart from that, there are different stages of a bricked device. There are different steps to resolve this, such as analyzing the problem, analyzing the boot process, finding at which stage the hard bricked device is, and making changes with the help of the PC.\n\nSection::::Types.:Soft brick.\n", "A service, named \"Security Center\", determines the current state of the settings. The service, by default, starts when the computer starts; it continually monitors the system for changes, and notifies the user if it detects a problem. In versions of Windows prior to Windows 10, it adds a notification icon into the Windows Taskbar.\n", "Data is stored by a computer using a variety of media. Hard disk drives are found in virtually all older computers, due to their high capacity and low cost, but solid-state drives are faster and more power efficient, although currently more expensive than hard drives in terms of dollar per gigabyte, so are often found in personal computers built post-2007. 
Some systems may use a disk array controller for greater performance or reliability.\n\nSection::::Types of computer systems.:Personal computer.:Storage devices.:Removable media.\n", "BULLET::::- Full On: The computer is powered on, and no devices are in a power saving mode.\n\nBULLET::::- APM Enabled: The computer is powered on, and APM is controlling device power management as needed.\n\nBULLET::::- APM Standby: Most devices are in their low-power state, the CPU is slowed or stopped, and the system state is saved. The computer can be returned to its former state quickly (in response to activity such as the user pressing a key on the keyboard).\n", "The principal duties of the main BIOS during POST are as follows:\n\nBULLET::::- verify CPU registers\n\nBULLET::::- verify the integrity of the BIOS code itself\n\nBULLET::::- verify some basic components like DMA, timer, interrupt controller\n\nBULLET::::- find, size, and verify system main memory\n\nBULLET::::- initialize BIOS\n\nBULLET::::- pass control to other specialized extension BIOSes (if installed)\n\nBULLET::::- identify, organize, and select which devices are available for booting\n\nThe functions above are served by the POST in all BIOS versions back to the very first. In later BIOS versions, POST will also:\n", "Section::::Hardware.\n\nComputer hardware is a comprehensive term for all physical parts of a computer, as distinguished from the data it contains or operates on, and the software that provides instructions for the hardware to accomplish tasks. \n\nSome sub-systems of a personal computer may contain processors that run a fixed program, or firmware, such as a keyboard controller. Firmware usually is not changed by the end user of the personal computer. \n", "Section::::Parts.:Video BIOS.\n\nThe video BIOS or firmware contains a minimal program for initial set up and control of the video card. It may contain information on the memory timing, operating speeds and voltages of the graphics processor, RAM, and other details which can sometimes be changed.\n", "At the same time, we need to know at what point the IC stops responding, these data are important for calculating price reliability indices and for facilitating the FA. This is done by monitoring the device via one or more vital IC parameters signals communicated and logged by the HTOL machine and providing continuous indication about the IC's functionality throughout the HTOL run time. Examples of commonly used monitors include the BIST \"done\" flag signal, the SCAN output chain or the analog module output.\n\nThere are three types of monitoring: \n", "BULLET::::- Supercomputer\n\nBULLET::::- Tablet computer\n\nSection::::Hardware.\n\nThe term \"hardware\" covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and \"mice\" input devices are all hardware.\n\nSection::::Hardware.:Other hardware topics.\n\nA general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-06403
Why do sedans typically seat 5 people nowadays when they used to seat 6?
Sedans used to put a bench seat in front. That is now known to be wildly unsafe, and the middle position was rarely used anyway, so manufacturers went to bucket seats up front with a center console, losing the middle seat.
[ "The traditional sedans with full-width bench seating offered nearly the same passenger capacity as the newer three-row SUV or minivan. Some models, such as the Chrysler Pacifica, feature a center console in the second row, rather than room for a passenger in the middle.\n\nSection::::Decline.\n", "Moreover, research shows that one-child family is a growing trend all over the world and also the average occupancy in cars has gone down (The avg. occupancy per car in UK has gone down from 1.64 to 1.59 between 1985/86 and 2002/03). Looking at the rate at which our natural resources are getting depleted, it is ironical to build cars that over satiate our demands rather than meet our needs. This clearly shows the huge potential for the 3 seat cars in the near future.\n\nSection::::Development.\n", "In Australia, the Holden Kingswood, Ford Falcon and Chrysler Valiant were fitted with bench seats for many years. To this day, the Falcon Ute is still offered with a bench seat and column shift in the front and availability of a front bench seat in the Falcon sedan and wagon lineups was only discontinued with the introduction of the FG Falcon in 2008. Holden offered a bench seat on the Commodore in the 1990s, originating with the VG Ute and ending with the end of VS series III production in 2000.\n", "Where a third row of seats is present, the seats are often smaller and intended for children or short distance travel only. In some cars, these seats can only carry a limited weight (less than an adult's weight). The third row of seating is usually optional, and is not not available on all models of compact MPVs.\n\nSection::::History.\n\nPredecessors to the compact MPV segment are the 1977 AMC Concept 80 AM Van, the 1978 Lancia Megagamma and the 1982 Lada X-1 concept cars.\n", "The first generation Premacy was a two- or three-row, five- or seven-passenger vehicle, while the second generation adds a third row of seats for up to six passengers in American form, and seven passengers outside the United States. Both generations feature near-flat floors, folding or removable second row, and fold-flat rear seats.\n\nSection::::First generation (1999–2004).\n", "In North America, due to safety regulations, the Mazda5 fits six passengers using three rows of seats, with two seats per row. Elsewhere, it is sold as a seven-seater using Mazda's \"Karakuri Seating System\", which means the car has three rows of two seats, with the seventh seat a fold away jump seat in the centre of the middle row. The Mazda5 has three-point seat belts on all seven seats.\n\nThe middle row of seats recline and slide front-to-rear, and fold flat. The rear row also folds flat.\n\nSection::::Second generation (2004–2010).:Initial release.:Specifications.\n", "However, a sedan is typically considered to be a fixed roof car with at least 4 seats. Based on this definition, the earliest sedan was the 1911 Speedwell, which was manufactured in the United States.\n\nSection::::International terminology.\n\nIn American English and Latin American Spanish, the term sedan is used (accented as sedán in Spanish).\n", "A slightly longer, four-door saloon version of the same car, the SEAT 800, was launched in September 1963 and built up to 1968. It was also known as a four-door 600, although the official designation was 800. The SEAT 800 is the four-door derivative of the 600; it had front \"suicide doors\" and rear conventional doors. This car was only built in Spain. 
\n\nSection::::Tunings.\n", "The interior space is based on enhancing social interaction and focuses mainly on the third seat. From Shashwath's MA Automotive Design (Coventry University, 2006 - 2007) research and three seat user tests, it has been found that the following parameters affect social interaction directly and indirectly and these have been considered while designing the interiors:\n\nBULLET::::- Social space\n\nBULLET::::- Physical distance\n\nBULLET::::- Eye contact\n\nBULLET::::- Effort required to hear / be heard\n\nBULLET::::- Sense of importance\n\nBULLET::::- Perceived safety\n\nBULLET::::- Openness to communication\n\nBULLET::::- Comfort\n\nSection::::Exterior.\n", "In 2012, Hemmings Motor News wrote \"the three-box sedan design is seen as traditional or--worse--conventional.\" \n\nBy 2016 In the United States, the three-box sedan began to wane in popularity. In 2018, the Wall Street Journal wrote: \"from gangster getaway cars and the Batmobile to the humble family sedan, the basic three-box configuration of a passenger car—low engine compartment, higher cabin, low trunk in the rear—has endured for decades as the standard shape of the automobile. Until now.\"\n", "The estate model was launched in 1982, and was available with seven seats, just like the Peugeot 504 estate.\n", "In 1981, the XEF model was developed, having only three front seats, an unusual solution at the time. The car was an urban model, with small dimensions for the passengers and for the luggage.\n", "BMW's first large luxury car was the 1936-1941 BMW 326. After a hiatus of 21 years, BMW's next executive car models were the 1962 New Class Sedans. In 1972, the New Class was replaced by the BMW 5 Series, which remains in production today. Over the seven generations of 5 Series, it has been produced in sedan, wagon and four-door fastback body styles.\n", "The trend since the 1980s for smaller station wagon bodies has limited the seating to two rows, resulting in a total capacity of five people, or six people if a bench front seat is used. Since the 1990s, full-size station wagons have been largely replaced by SUVs with three-row seating, such as the Chevrolet Suburban, Ford Expedition, and Mercedes-Benz GL-Class.\n\nSection::::History.:United States.:Two-door wagons.\n", "Typically, in 2nd class the seat serves as the lowest bunk, and the back of the seat is turned into a horizontal position and serves as the middle bunk. There are two types of couchette car in countries of the former USSR: \"coupé\" and \"platzkart\". \"Coupé\" cars are more expensive and comfortable with 4-bunk compartments fully separated from each other and the corridor. The cheaper \"Platzkart\" cars, use a somewhat different layout, with no wall between compartment and corridor, only four bunks along the long sides of the compartment, and two more mounted on the corridor wall, the lower bunk folding in the daytime to become two seats.\n", "When configured as two and three person benches (available through Generation IV), the Easy Out Roller Seats could be unwieldy. Beginning in 2000, second and third row seats became available in a 'quad' configuration – bucket or captain chairs in the second row and a third row three-person 50/50 split \"bench\" – with each section weighing under . 
The Easy-out system remained in use through Generation V – where certain models featured a two-person bench \"and\" the under-floor compartments from the \"Stow'n Go\" system.\n", "In automotive use, manufacturers in the United Kingdom used the term for a development of the chummy body where passengers were forced to be friendly because they were tightly packed. They provided weather protection for extra passengers in what would otherwise be a two-seater car. Two-door versions would be described in the US and France as coach bodies. A postwar example is the Rover 3 Litre Coupé.\n\nSection::::Mid-20th century variations.:Club sedans.\n", "The popularity of the personal luxury car was greatly increased by the sales success of the 1958 Ford Thunderbird (second generation), due to being lengthened from a two-seat car to a four-seat car. The Thunderbird was sold for eleven generations up until the 2005 model year. The longest running model of personal luxury car was the Cadillac Eldorado, which was produced for 50 years, beginning with the 1953 model year.\n\nBy the 21st century, the personal luxury market had largely disappeared as consumers migrated to other market segments.\n\nSection::::Characteristics.\n", "From the 1980s to the 1990s, the market share of full-size cars began to decline; along with the increased use of mid-size cars, vans and SUVs grew in use as family vehicles. From 1960 to 1994, the market share of full-size cars declined from 65 percent to 8.3 percent. From 1990 to 1992, both GM and Ford redesigned its full-size car lines for the first time since the late 1970s. \n", "During the 1960s, compacts were the smallest class of North American cars, but they had evolved into only slightly smaller versions of the 6-cylinder or V8-powered six-passenger sedan. They were much larger than compacts by European manufacturers, which were typically five-passenger 4-cylinder engine cars. Nevertheless, advertising and road tests for the Ford Maverick and the Rambler American made comparisons with the popular Volkswagen Beetle.\n", "By the mid-17th century, sedans for hire had become a common mode of transportation. London had \"chairs\" available for hire in 1634, each assigned a number and the chairmen licensed because the operation was a monopoly of a courtier of King Charles I. Sedan chairs could pass in streets too narrow for a carriage and were meant to alleviate the crush of coaches in London streets, an early instance of traffic congestion. A similar system later operated in Scotland. In 1738 a fare system was established for Scottish sedans, and the regulations covering chairmen in Bath are reminiscent of the modern Taxi Commission's rules. A trip within a city cost six pence and a day's rental was four shillings. A sedan was even used as an ambulance in Scotland's Royal Infirmary.\n", "Traditionally, the most common layout for sports cars was a roadster (a two-seat car without a fixed roof), however there are also several examples of early sports cars with four seats.\n\nSports cars are not usually intended to regularly transport more than two adult occupants, so most modern sports cars are usually two-seat layout or 2+2 layout (two smaller rear seats for children or occasional adult use). 
Larger cars with more spacious rear-seat accommodation are usually considered sports sedans rather than sports cars.\n", "Even in the United States, the bucket seat has largely replaced the bench seat; the bucket is viewed as \"sportier\", and smaller cars have made the middle position less viable. For high performance cars, bucket seats help keep the driver in place during cornering. Some pickup and larger trucks are still available with bench seats which would only be able to seat two if bucket seats were fitted, though some extended and crew cabs retain them to keep costs down since separate availability of bucket seats (captain's chairs) adds to the parts cost.\n", "The Familiale (family estate), with its third row of bench seats (giving a total of eight forward-facing seats), was popular with larger families and as a taxi. The two rows of rear seats could be folded to give a completely flat load area, with 1.94 cubic metres of load capacity. The total load carrying capacity is . When released, it was hailed as a luxury touring wagon. The Familiale was marketed as the \"SW8\" in the United States, for \"station wagon, eight seats.\"\n", "The unique 3+1 interior layout is designed to provide comfortable seating for one or two tandem passengers to the right of the driver, with only occasional use of the fourth \"jump-seat\" behind the driver as needed.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-21421
What causes various 'solid' objects to go soggy when immersed in water, while others stay hard? How are they differently composed?
Density, mostly. Objects made of more loosely bound molecules absorb moisture more easily, which lets liquid fill the inside of the solid. Super ELI5: a piece of cereal is hard when it's dry, but it's also covered with holes and air pockets, inside and out. When milk begins to fill those holes, the internal moisture increases, making it softer. A rock, by contrast, has been solidified by enormous heat and pressure over vast stretches of time, so there are no holes for water to get into and make it soggy.
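A rough back-of-the-envelope sketch in Python of the idea above (illustrative only; the volumes are invented numbers, not measurements): the more of an object's volume is open pore space, the more water it can soak up.

```python
def porosity(void_volume_cm3: float, total_volume_cm3: float) -> float:
    """Fraction of the object's volume that is open pore space."""
    return void_volume_cm3 / total_volume_cm3

def max_water_uptake_g(total_volume_cm3: float, pore_fraction: float) -> float:
    """Water mass (density ~1 g/cm^3) needed to fill every pore."""
    return total_volume_cm3 * pore_fraction * 1.0

# Invented example values for two 10 cm^3 objects:
cereal = porosity(void_volume_cm3=7.0, total_volume_cm3=10.0)    # mostly air
granite = porosity(void_volume_cm3=0.01, total_volume_cm3=10.0)  # nearly solid

print(f"cereal:  {cereal:.0%} pores -> up to {max_water_uptake_g(10.0, cereal):.1f} g of milk")
print(f"granite: {granite:.2%} pores -> up to {max_water_uptake_g(10.0, granite):.2f} g of water")
```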
[ "It refers to the tendency of liquids — and of unbound aggregates of small solid objects, like seeds, gravel, or crushed ore, whose behavior approximates that of liquids — to move in response to changes in the attitude of a craft's cargo holds, decks, or liquid tanks in reaction to operator-induced motions (or sea states caused by waves and wind acting upon the craft). When referring to the free surface effect, the condition of a tank that is not full is described as a \"slack tank\", while a full tank is \"pressed up\".\n\nSection::::Stability and equilibrium.\n", "Major characteristics of bulk materials, so far as their handling is concerned, are: lump size, bulk weight (density), moisture content, flowability (particle mobility), angle of repose, abrasiveness and corrosivity, among others.\n", "Section::::Applications.:Denudation.\n", "Section::::Measurement.:Hard/soft classification.\n\nBecause it is the precise mixture of minerals dissolved in the water, together with the water's pH and temperature, that determine the behavior of the hardness, a single-number scale does not adequately describe hardness. However, the United States Geological Survey uses the following classification into hard and soft water,\n\nSeawater is considered to be very hard due to various dissolved salts. Typically seawater's hardness is in the range of 6630 ppm. In contrast, freshwater has hardness in the range of 15 - 375 ppm.\n\nSection::::Measurement.:Indices.\n", "Another sort of risk that can affect dry cargoes is absorption of ambient moisture. When very fine concretes and aggregates mix with water, the mud created at the bottom of the hold shifts easily and can produce a free surface effect. The only way to control these risks is by good ventilation practices and careful monitoring for the presence of water.\n\nSection::::Safety.:Structural problems.\n", "Section::::Sources of hardness.:Permanent hardness.\n\nPermanent hardness (mineral content) are generally difficult to remove by boiling. If this occurs, it is usually caused by the presence of calcium sulfate/calcium chloride and/or magnesium sulfate/magnesium chloride in the water, which do not precipitate out as the temperature increases. Ions causing permanent hardness of water can be removed using a water softener, or ion exchange column.\n\nPermanent Hardness = Permanent Calcium Hardness + Permanent Magnesium Hardness.\n\nSection::::Effects.\n", "Section::::Non-chemical devices.\n", "Section::::Mathematical Model.\n\nBy mathematically modeling the rate of diffusion of water in the material and the rate of degradation of the material, it is possible to predict whether a certain material will undergo surface or bulk erosion by looking at the ratio between the two rates.\n\nThe rate of diffusion of water is modeled by the equation\n\nformula_1\n\nWhere is the mean length of the material and D is the diffusion coefficient of water inside the polymer.\n\nThe rate of degradation is modeled by the following equation\n\nformula_2\n", "BULLET::::- \"Chemical decomposition\" and \"structural changes\" result when minerals are made soluble by water or are changed in structure. The first three of the following list are solubility changes and the last three are structural changes.\n\nBULLET::::1. 
The \"solution\" of salts in water results from the action of bipolar water molecules on ionic salt compounds producing a solution of ions and water, removing those minerals and reducing the rock's integrity, at a rate depending on water flow and pore channels.\n", "If a compartment or tank is either empty or full, there is no change in the craft's center of mass as it rolls from side to side (in strong winds, heavy seas, or on sharp motions or turns). However, if the compartment is only partially full, the liquid in the compartment will respond to the vessel's heave, pitch, roll, surge, sway or yaw. For example, as a vessel rolls to port, liquid will displace to the port side of a compartment, and this will move the vessel's center of mass to port. This has the effect of slowing the vessel's return to vertical.\n", "Section::::Disadvantages.:Chemistry.:Solids.\n\nTotal dissolved solids or TDS (sometimes called filtrable residue) is measured as the mass of residue remaining when a measured volume of filtered water is evaporated. Salinity measures water density or conductivity changes caused by dissolved materials. Probability of scale formation increases with increasing total dissolved solids. Solids commonly associated with scale formation are calcium and magnesium carbonate and sulfate. Corrosion rates initially increase with salinity in response to increasing electrical conductivity, but then decrease after reaching a peak as higher levels of salinity decrease dissolved oxygen levels.\n\nSection::::Disadvantages.:Chemistry.:Hydrogen.\n", "The handbooks does indicate that above the midpoint of the ranges defined as \"Moderately Hard\", effects are seen increasingly: \"The chief disadvantages of hard waters are that they neutralise the lathering power of soap... and, more important, that they can cause blockage of pipes and severely reduced boiler efficiency because of scale formation. These effects will increase as the hardness rises to and beyond 200 mg/l CaCO3.\"\n\nSection::::Regional information.:In the United States.\n", "Marine evaporites tend to have thicker deposits and are usually the focus of more extensive research. They also have a system of evaporation. When scientists evaporate ocean water in a laboratory, the minerals are deposited in a defined order that was first demonstrated by Usiglio in 1884. The first phase of the experiment begins when about 50% of the original water depth remains. At this point, minor carbonates begin to form. The next phase in the sequence comes when the experiment is left with about 20% of its original level. At this point, the mineral gypsum begins to form, which is then followed by halite at 10%, excluding carbonate minerals that tend not to be evaporites. The most common minerals that are generally considered to be the most representative of marine evaporites are calcite, gypsum and anhydrite, halite, sylvite, carnallite, langbeinite, polyhalite, and kainite. Kieserite (MgSO) may also be included, which often will make up less than four percent of the overall content. However, there are approximately 80 different minerals that have been reported found in evaporite deposits (Stewart,1963;Warren,1999), though only about a dozen are common enough to be considered important rock formers.\n", "In practice, water with an LSI between -0.5 and +0.5 will not display enhanced mineral dissolving or scale forming properties. 
Water with an LSI below -0.5 tends to exhibit noticeably increased dissolving abilities while water with an LSI above +0.5 tends to exhibit noticeably increased scale forming properties.\n", "Free surface effect\n\nThe free surface effect is a mechanism which can cause a watercraft to become unstable and capsize.\n", "Section::::Exsolution.\n\nWhen a solid solution becomes unstable—due to a lower temperature, for example—exsolution occurs and the two phases separate into distinct microscopic to megascopic lamellae. This is mainly caused by difference in cation size. Cations which have a large difference in radii are not likely to readily substitute.\n", "Section::::Origin.:Longitudinals.\n", "Section::::Methods.:Quantitative Methods.:Volumetric Analysis.\n", "BULLET::::- Damp (or wet) is defined as the condition of an aggregate in which water is fully permeated the aggregate through the pores in it, and there is free water in excess of the SSD condition on its surfaces which will become part of the mixing water.\n\nSection::::In aggregates.:Application.\n", "Stability conditions\n\nThe Stability conditions of watercraft are the various standard loading configurations to which a ship, boat, or offshore platform may be subjected. They are recognized by classification societies such as Det Norske Veritas, Lloyd's Register and American Bureau of Shipping (ABS). Classification societies follow rules and guidelines laid down by International Convention for the Safety of Life at Sea (SOLAS) conventions, the International Maritime Organization and laws of the country under which the vessel is flagged, such as the Code of Federal Regulations.\n\nStability is normally broken into two distinct types: Intact and Damaged\n\nSection::::Intact stability.\n", "The table shows some calculated values of this effect for water at different drop sizes:\n\nThe effect becomes clear for very small drop sizes, as a drop of 1 nm radius has about 100 molecules inside, which is a quantity small enough to require a quantum mechanics analysis.\n\nSection::::Surface tension of water and of seawater.\n\nThe two most abundant liquids on the Earth are fresh water and seawater. This section gives correlations of reference data for the surface tension of both.\n\nSection::::Surface tension of water and of seawater.:Surface tension of water.\n", "Physical processes that affect the sediment-water interface include, but are not limited to:\n\nBULLET::::- Resuspension\n\nBULLET::::- Deposition\n\nBULLET::::- Rippling (either small wave ripples or giant current ripples)\n\nBULLET::::- Turbidity currents\n\nBULLET::::- Bed consolidation\n\nSection::::Biological processes.\n", "Section::::Mathematical model: flotation of hinged plates.:Analytical results for maximum load.:Case 1: Small scale formula_58.\n", "In the equation, formula_1 the thickness of the freshwater zone above sea level is represented as formula_2 and that below sea level is represented as formula_3. The two thicknesses formula_2 and formula_3, are related by formula_6 and formula_7 where formula_6 is the density of freshwater and formula_9 is the density of saltwater. Freshwater has a density of about 1.000 grams per cubic centimeter (g/cm) at 20 °C, whereas that of seawater is about 1.025 g/cm. The equation can be simplified to formula_10.\n", "Surface states originating from clean and well ordered surfaces are usually called \"intrinsic\". 
These states include states originating from reconstructed surfaces, where the two-dimensional translational symmetry gives rise to the band structure in the k space of the surface.\n\n\"Extrinsic\" surface states are usually defined as states not originating from a clean and well ordered surface. Surfaces that fit into the category \"extrinsic\" are:\n\nBULLET::::1. Surfaces with defects, where the translational symmetry of the surface is broken.\n\nBULLET::::2. Surfaces with adsorbates\n\nBULLET::::3. Interfaces between two materials, such as a semiconductor-oxide or semiconductor-metal interface\n\nBULLET::::4. Interfaces between solid and liquid phases.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01786
What are the laws surrounding entering sewer systems in cities? Since there are no 'no trespassing' signs, are we free to open manholes and enter sewer systems?
Like any other laws, these are subject to local differences. But the absence of a sign doesn't make trespassing legal: you can't walk into someone else's home uninvited just because there's no "no trespassing" sign, and you can't hop a fence at a military installation just because that particular fence lacks a "Do not enter" sign. If you have to open a manhole cover or bypass a fence to access something, common sense says you probably shouldn't be there.
[ "In March or April 1977, Mark had a conversation with Kenny Near, a past president of the Sewer Board. During their talk Mark offered Near about $2,000 for testimony that the Sewer Board was acting in a vindictive manner towards the Hopkinsons. However, Near refused the offer.\n", "The player must prevent pedestrians from falling into one of four sewers by temporarily bridging the open gaps with a manhole cover.\n", "The criminal penalties for violating FACE vary according to the severity of the offense and the defendant's prior record of similar violations. Typically, a first time offender is sentenced to at most one year in prison and fined at most $100,000. For a second violation, the violator may be imprisoned for at most three years and fined at most $250,000. These are maximum sentences; lesser penalties are permitted at the judge's discretion.\n\nIf the offense causes injury to a person, the maximum sentence is 10 years, regardless of whether or not it is a first offense.\n\nSection::::Effectiveness of the Act.\n", "In most jurisdictions, if a person were to accidentally enter onto private property, there would be no trespass, because the person did not intend any violation. However, in Australia, negligence may substitute the requirement for intent.\n\nIf a trespass is actionable and no action is taken within reasonable or prescribed time limits, the land owner may forever lose the right to seek a remedy, and may even forfeit certain property rights. \"See Adverse possession\" and Easement by prescription.\n", "In the American television show \"The Honeymooners\" episode \"The Man from Space\", broadcast 31 December 1955, sewer worker Ed Norton comes in dressed as an 18th-century fop, and announces that he will win the Raccoon lodge costume ball because he is dressed as \"Pierre Francois Brioschi, designer of the Paris sewers.\"\n\nSection::::Museum.\n", "“If any area is delimited on each side of mains, conduits, arches, pipes of any size, reservoirs, or cisterns of the public water supply, which is or in the future shall be furnished to the city of Rome: after the passage of this law no one shall obstruct, construct, fence, fix, establish, set up, locate, plow, or sow anything therein; nor shall anyone introduce anything into that area, except what is permitted or ordered by this law for construction or repairs. Whoever does anything contrary to these regulations shall be subjected in every particular to the same law, statute, and procedure as he would be and properly should be, if he pierced or broke a main or a conduit contrary to this law.\n", "The theme of traveling through, hiding, or even residing in combined sewers is a common plot device in media. Famous examples of sewer dwelling are the Teenage Mutant Ninja Turtles, Stephen King's \"It\", \"Les Miserables\", \"The Third Man\", \"Ladyhawke,\" \"Mimic\", \"The Phantom of the Opera\", \"Beauty and the Beast\", and \"Jet Set Radio Future\". The Todd Strasser novel \"\" is centered on a dog thwarting terroristic threats to electronically sabotage American sewage treatment plants.\n\nSection::::Society and culture.:Sewer alligators.\n", "In general under the Act, owners must give notice either orally or in writing except where fencing is applied around gardens or areas under cultivation, or the keeping of animals. Access to the door of a building on premises for lawful purposes is protected except where specifically prohibited. A sign showing a graphic representation or wording of prohibited access is sufficient. 
Red markings indicate no trespassing, while yellow markings indicate limited access for certain activities. Trespassers can be fined not more than and may be levied costs or damages.\n\nSimilar laws exist in Prince Edward Island and Saskatchewan.\n", "Section::::Cast.\n\nBULLET::::- Michael Fassbender as Chad Cutler - Son\n\nBULLET::::- Brendan Gleeson as Colby Cutler - Father\n\nBULLET::::- Lyndsey Marshal as Kelly Cutler - Wife\n\nBULLET::::- Georgie Smith as Tyson - Grandson\n\nBULLET::::- Rory Kinnear as P.C. Lovage\n\nBULLET::::- Killian Scott as Kenny\n\nBULLET::::- Sean Harris as Gordon Bennett\n\nBULLET::::- Kingsley Ben-Adir as Sampson\n\nBULLET::::- Kacie Anderson as Mini Cutler - Granddaughter\n\nBULLET::::- Gerard Kearns as Lester\n\nBULLET::::- Tony Way as Norman\n\nBULLET::::- Barry Keoghan as Windows\n\nBULLET::::- Ezra Khan as Jamail\n\nBULLET::::- Alan Williams as Noah\n\nBULLET::::- Anastasia Hille as Mrs. Crawley\n\nBULLET::::- Peter Wight as Dog owner\n", "The trespasser need not enter the land in person. Indeed, if A and B are standing next to C's land, and A pushes B onto the land without entering it himself, it is A (and not B, who did not intend to enter that space) who is liable for the trespass to C's land. There must be some physical entry, however. Causing noise, light, odors, or smoke to enter the land of another is not a trespass, but is instead a different tort, nuisance.\n", "The Howard Jarvis Taxpayers Association argued that the court should look beyond mere dictionary definitions of \"sewer\" to examine the legal meaning of the term in the specific context of how that term is used in Proposition 218. The Association also observed that numerous California statutes differentiated between storm drainage and sewerage systems, including a specific statute that legally authorizes many local governments to levy fees and charges for storm drainage or sewerage systems.\n", "Mr Marcic’s property in Stanmore, 92 Old Church Lane, was repeatedly flooded with \"foul water\" and sewage after Thames Water plc failed to maintain the sewers: \"two such incidents in 1992, one in each year from 1993 to 1996, two in 1997, none in 1998, four in 1999 and four or five in 2000.\" Repairs had not been made because they were not profitable, rather than a lack of resources. It was argued that the right to enjoyment of property in ECHR Protocol 1, article 1, was breached. 
\n\nSection::::Judgment.\n", "Under section 34(2) an occupier of domestic property must, \"as respects the household waste produced on the property, take reasonable steps to secure that any transfer of waste is only to an authorised person or to a person for authorised transport purposes\" but has none of the other section 34(1) duties.\n\nAuthorised persons include local authorities who have responsibility for waste collection, persons licensed to manage or registered to transport waste or otherwise exempt persons (s.34(3)).\n", "Section::::Mitigation of CSOs.:Screening and disinfection facilities.\n", "a) A person who owns, possesses or controls a dog, cat or other animal shall not permit the animal to commit a nuisance on a sidewalk of any public place, on a floor, wall, stairway or roof of any public or private premises used in common by the public, or on a fence, wall or stairway of a building abutting on a public .\n\nAuthorized employees of New York City Departments of Health (including Animal Care & Control), of Sanitation, or of Parks and Recreation can issue tickets.\n", "Section::::Safety issues.\n\nThe Cave Clan does not advocate entering drains when it is raining, exploring alone, or removing a manhole from beneath if the above location is unknown. This is due to the potential hazard of the exit being on a road and thus has the risk of being struck by a vehicle. The golden rule of the Cave Clan is, \"When it rains, no drains!\".\n\nSection::::Controversy.\n", "Section 404 of the 1972 Clean Water Act works to protect wetlands directly by mandating that in order to negatively impact a wetland, a person must first receive a permit from the US Army Corps of Engineers (the Corps). In this process, the developer submits an application, called a Public Notice, to their respective district of the USACE requesting to carry out the project and associated ecological impacts. The Corps evaluates the probable impacts and solicits comments on public notices to use when making the decision whether to issue, modify, or deny a permit.\n", "Section::::Modus Operandi.\n", "To be found guilty of computer trespass in New York one must knowingly use a computer, computer service, or computer network without authorization \"and\" commit (or attempt) some further crime.\n\nSection::::Examples of State Legislation.:Ohio.\n\n(A) No person shall knowingly use or operate the property of another without the consent of the owner or person authorized to give consent.\n", "These are classified as people who intrude onto property without permission. The degree of care owed to trespassers, although slight, nevertheless exists particularly in situations where a source of danger is deliberately created or where small children are involved. An example would be where live wires were left exposed after the centre had closed. If some children entered the premises for some reason, despite that reason, if they were injured the owner of the centre would be liable.\n", "In several large American cities, homeless people live in storm drains. At least 300 people live in the 200 miles of underground storm drains of Las Vegas, many of them making a living finding unclaimed winnings in the gambling machines. An organization called Shine a Light was founded in 2009 to help the drain residents after over 20 drowning deaths occurred in the preceding years. 
A man in San Diego was evicted from a storm drain after living there for nine months in 1986.\n\nSection::::History.\n", "Section::::Actions of the ADA.\n", "A robotics research paper in 2011 suggested that robots could examine the shapes of specific manhole covers and use them to calculate their geographic position, as a double-check on GPS data.\n\nSection::::Security and safety.\n\nIn urban areas, stray voltage issues have become a significant concern for utilities. In 2004, Jodie S. Lane was electrocuted after stepping on a metal manhole cover, while walking her dog in New York City. As result of this and other incidents, increased attention has been focused on these hazards, including technical conferences on stray voltage detection and prevention.\n", "In 2002, a California appellate court held that an in-lieu franchise fee for water, sewer, and refuse collection services was a \"property-related\" fee subject to Article XIII D. Also in 2002, another California appellate court held that a stormwater drainage fee imposed on developed parcels was a \"property-related\" fee subject Article XIII D.\n", "In the United Kingdom the Land Drainage Act 1991 decrees drainage of land in England and Wales, but does not cover sewerage and water supplies but the actual process of draining land itself. The act defines who is responsible for various aspects of land drainage and the different areas in which the law applies.\n\nSection::::See also.\n\nBULLET::::- Best management practice for water pollution\n\nBULLET::::- New Jersey stormwater management rules\n\nBULLET::::- United States groundwater law\n\nBULLET::::- Water politics\n\nSection::::External links.\n\nBULLET::::- Discussion of Connecticut surface water law from the Connecticut State Law Library\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04976
Is there a reason why pay is quoted in annual terms in the West, but often quoted monthly in Asia?
It's usually monthly in a number of European countries as well. It's not an "east vs west" thing; different countries have just settled on different conventions and habits. There's no grand plan behind it.
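Since the two quoting conventions describe the same pay, converting between them is simple arithmetic — a trivial Python sketch (the salary figures are arbitrary examples, not data from the source):

```python
def annual_to_monthly(annual_pay: float) -> float:
    return annual_pay / 12

def monthly_to_annual(monthly_pay: float) -> float:
    return monthly_pay * 12

print(annual_to_monthly(60_000))  # 5000.0 -> an annual quote, expressed monthly
print(monthly_to_annual(5_000))   # 60000  -> a monthly quote, annualized
```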
[ "In Botswana, salaries are almost entirely paid on a monthly basis with pay dates falling on different dates of the second half of the month. Pay day usually ranges from the 15th of the month to the last day. The date of disbursement of the salary is usually determined by the company and in some cases in conjunction with the recognized Workers Union.\n", "WPI numbers were typically measured weekly by the Ministry of Commerce and Industry. This makes it more timely thanlagging and infrequent CPI statistic. However, since 2009 it has been measured monthly instead of weekly.\n\nSection::::Issues.\n", "Section::::History.\n\nThe \"Tenpō Tsūhō\" came around a century after the introduction of the \"Hōei Tsūhō\" (Kyūjitai: 寳永通寳 ; Shinjitai: 宝永通宝) during the 5th year of the Hōei era (1708), which had a face value of 10 mon (while only containing 3 times as much copper as a 1 mon \"Kan'ei Tsūhō\" coin), but was discontinued shortly after it started circulating as it wasn't accepted for its nominal value.\n", "Section::::History.:Inflation during the Bakumatsu.\n\nIn 1708 the Tokugawa shogunate introduced the \"Hōei Tsūhō\" (Kyūjitai: 寳永通寳 ; Shinjitai: 宝永通宝) which had a face value of 10 mon (but contained 3 times as much copper as a 1 mon \"Kan’ei Tsūhō\" coin), which lead to the coin being discontinued very shortly after it started circulating as it wasn't accepted for its nominal value.\n", "The following are circulation figures for the coins that were minted between the 4th, and the 43rd year of Meiji's reign. Coins for this period all begin with the Japanese symbol 明治 (Meiji). Patterns that include the rare 1870 dated coin are not included here.\n\nBULLET::::- Inscriptions on Japanese coins from this period are read clockwise from right to left:\n\n\"Year\" ← \"Number representing year of reign\" ← \"Emperor's name\" (Ex: 年 ← 五十三 ← 治明)\n\nSection::::Circulation figures.:Shōwa.\n", "Section::::Coinage of East Asia.:Japan.\n", "Section::::Circulation figures.\n\nSection::::Circulation figures.:Meiji.\n\nThe following are circulation figures for ten sen coins that were minted between the 3rd, and the 45th year of Meiji's reign. The dates all begin with the Japanese symbol 明治 (Meiji), followed by the year of his reign the coin was minted. Each coin is read clockwise from right to left, so in the example used below \"二十三\" would read as \"year 32\" or 1899. Some of the mintages included cover more than one variety of a given coin.\n", "The above-mentioned \"sixty-day rule\" shall not be applied to seamen and aircrew. Instead, they are bound by stricter conditions for exemption. To be exempt from liability to Salaries tax, they shall be present in Hong Kong not more than 60 days in the year and not more than 120 days in the two consecutive years, one of them being the current tax year. \n\nA pension will be considered to be sourced in Hong Kong if it is managed and controlled in Hong Kong.\n\nSection::::Liability to tax.\n", "The following are circulation figures for the \"twenty sen coin\", all of which were minted between the 3rd, and 44th year of Meiji's reign. The dates all begin with the Japanese symbol 明治 (Meiji), followed by the year of his reign the coin was minted. 
Each coin is read clockwise from right to left, so in the example used below \"一十二\" would read as \"year 21\" or 1888.\n\nBULLET::::- \"Year\" ← \"Number representing year of reign\" ← \"Emperor's name\" (Ex: 年 ← 一十二 ← 治明)\n", "Section::::Circulation figures.\n\nSection::::Circulation figures.:Meiji.\n\nThe following are circulation figures for the coins that were minted between the 3rd, and the 45th and last year of Meiji's reign. Coins for this period all begin with the Japanese symbol 明治 (Meiji). Fifty sen pieces that were minted between 1874 and 1877, and in 1880, are considered key date coins with a value in the thousands of US dollars. Early silver fifty sen coins have often been counterfeited, so grading by an expert is recommended for collectors.\n\nBULLET::::- Inscriptions on Japanese coins from this period are read clockwise from right to left:\n", "BULLET::::- Progress billing used to obtain partial payment on extended contracts, particularly in the construction industry (see Schedule of values)\n\nBULLET::::- Collective Invoicing is also known as monthly invoicing in Japan. Japanese businesses tend to have many orders with small amounts because of the outsourcing system (Keiretsu), or of demands for less inventory control (Kanban). To save the administration work, invoicing is normally processed on monthly basis.\n", "BULLET::::- Staff received pay increments as often as four times a year. Former director Matilda Chua's salary rose from S$1,300 to S$12,500 over nine years.\n\nBULLET::::- Staff were given exit payments of up to 10 months' salary.\n\nBULLET::::- Durai used NKF funds to pay bills relating to his wife's Mercedes, including paying for petrol and repairing the car.\n", "The English-language publication's name changed almost as often as its editorial office relocated, taking on the new name \"Far Eastern Monthly\" in April 1928 before becoming assuming its ultimate name, \"Pacific Monthly,\" in San Francisco in April 1929.\n", "Some people are also eligible for corporate retirement allowances. About 90% of firms with thirty or more employees gave retirement allowances in the late 1980s, frequently as lump sum payments but increasingly in the form of annuities.\n", "Section::::Circulation figures.\n\nMeiji\n", "From 1738 government authorised the manufacture of iron \"Kan'ei Tsūhō\" 1 mon coins, and in 1866 (just before the end of the Edo period) iron 4 mon \"Kan'ei Tsūhō\" were authorised. While iron coins were being minted the quality of copper coins would decrease due to frequent debasements.\n\nSection::::History.:Export.\n", "The following are circulation figures for the \"two sen coin\", all of which were minted between the 6th, and 25th year of Meiji's reign. The dates all begin with the Japanese symbol 明治 (Meiji), followed by the year of his reign the coin was minted. Each coin is read clockwise from right to left, so in the example used below \"九\" would read as \"year 9\" or 1876. Two sen coins were struck in 1892, but none were released for circulation.\n\nBULLET::::- \"Year\" ← \"Number representing year of reign\" ← \"Emperors name\" (Ex: 年 ← 九 ← 治明)\n\nSection::::Post-1892.\n", "Section::::Fifth Pay Commission.\n\nThe notification for setting up the Fifth CPC was issued on 9 April 1994, but started functioning only on 2 May 1994, with the assumption of charge by the Member Secretary.\n", "But two expanded forms are used in India. 
The DD MMMM YYYY usage is more prevalent over the MMMM DD, YYYY usage except the latter is more used by media publications, such as the print version of the \"Times of India\" and \"The Hindu\".\n\nMany government websites, including Prime Minister's official website, retain the historical format use by Britain (MMMM DD, YYYY) during the colonial era until sometimes 20th century.\n", "Unlike the UN salaries, the UN pension to former officials or to their survivors are not exempt from national income taxation. Applied tax rate depends exclusively on national legislation. Most of the countries tax the pension, but many grant exemptions for the lump sum pension payment. Countries which grant tax exemption for the UN pensions whether it is paid as a lump sum or as a monthly income are: Austria, Bahrain, Chile, India, Kuwait, Malaysia, Malta, Singapore, Saudi Arabia, UAE and Thailand. \n\nHowever, a different rule may apply to lump sum pension.\n\nSection::::Taxation.:India.\n", "Section::::Terms of service.:Allowance.\n\nService personnel are paid monthly allowances. The amount paid is determined by the Ministry of Finance. The allowance that is approved is what the ministry would pay the personnel throughout the service year. Payment is calculated from the date the service personnel reports for duty at his/her designated post. Personnel posted to statutory boards, corporations and churches or quasi-church organizations are paid by those establishments and not the secretariat.\n\nSection::::Terms of service.:Annual leave.\n", "As the controversy blew up, Victor Mallet was Asia news editor of the \"Financial Times\" based in Hong Kong where he lived with his family. He was vice-president of the FCC. Mallet and his employer attempted to renew his working visa in the normal way but were notified late on 2 October that his visa, which expired on 3 October, would not be renewed. No reasons were given. Returning from a visit to Bangkok, Mallet found out that he had been denied a working visa by the Hong Kong government. He was subsequently allowed to return on a seven-day visit after being interrogated by Hong Kong immigration.\n", "During the co-existence of the mon with the sen between 1870 and 1891, the metal content of the old currency became important. Official exchange for coins from 1871.6.27: 4 copper mon = 2 rin, 1 bronze mon = 1 rin (1 rin = 1/10th of a sen). So while not all mon were valued equally, their metal kind counted after the transition to decimal sen: bronze was valued more highly than copper. The first physical rin denomination was introduced 1873 with the 1 rin coin (with the 5 rin coin introduced in 1916), as until that time the rin had existed only as an accounting unit (10 rin = 1 sen). The most current coin, the \"Tempō Tsūhō\" (, a coin with a face value of 100 mon) was valued at only 8 rin (0.8 sen) in that sen period.\n", "The following are circulation figures for the \"two yen coin\", all of which were minted between the 3rd, and 25th year of Meiji's reign. The dates all begin with the Japanese symbol 明治 (Meiji), followed by the year of his reign the coin was minted. Each coin is read clockwise from right to left, so in the example used below \"九\" would read as \"year 9\" or 1876. It is unknown if genuine coins dated 1874 even exist as they remain \"unverified\". 
Two yen coins were struck in 1892, but none were released for circulation.\n", "In 2010, the prime minister's office reported that he/she does not receive a formal salary, but was only entitled to monthly allowances. That same year \"The Economist\" reported that, on a purchasing power parity basis, the prime minister received an equivalent of $4106 per year. As a percentage of the country's per-capita GDP (gross domestic product), this is the lowest of all countries \"The Economist\" surveyed.\n" ]
[ "West countries use annual terms and Asia uses monthly. ", "Pay is quoted annually in the west but monthly in Asia." ]
[ "Lots of countries use different methods of quoting rates. ", "Pay is mainly quoted monthly in the west as well, and it's not exclusive to Asia." ]
[ "false presupposition" ]
[ "West countries use annual terms and Asia uses monthly. ", "Pay is quoted annually in the west but monthly in Asia." ]
[ "false presupposition", "false presupposition" ]
[ "Lots of countries use different methods of quoting rates. ", "Pay is mainly quoted monthly in the west as well, and it's not exclusive to Asia." ]
2018-18140
How do housebuilders ensure new builds don't rot in the rain or snow before they're completed?
Wood doesn’t just rot the instant it gets wet. It can get rained on, dry out, and be fine. In general, though, builders finish framing most homes pretty quickly, and the very next thing they do is put the roof on and install the windows and Tyvek house wrap on the exterior. This is enough to keep everything dry while they finish everything else.
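One way to quantify "it will dry out and be fine": the standard oven-dry formula for wood moisture content, plus a commonly cited rule of thumb that decay fungi need the wood to stay above roughly 20% moisture content for extended periods. A hedged Python sketch — the 20% threshold and the sample weights are illustrative assumptions, not figures from this answer:

```python
def moisture_content_pct(wet_weight_g: float, oven_dry_weight_g: float) -> float:
    """Oven-dry method: MC% = (wet - dry) / dry * 100."""
    return (wet_weight_g - oven_dry_weight_g) / oven_dry_weight_g * 100

# Rule-of-thumb threshold: sustained moisture above this invites rot.
ROT_RISK_THRESHOLD_PCT = 20.0

# Invented sample: framing lumber weighed after rain, then oven-dried.
mc = moisture_content_pct(wet_weight_g=118.0, oven_dry_weight_g=100.0)
print(f"moisture content: {mc:.0f}% -> rot risk: {mc > ROT_RISK_THRESHOLD_PCT}")
```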
[ "The quality of a steel-framed prefab house, which can be suffering from rust, or a wooden house from rot, can be found in the footings of the structure where it meets the foundation slab. With checks undertaken by a qualified building surveyor, the structural integrity of the house can be quickly ascertained through exposure of the footings: if they are not rusty or rotted, the house is normally structurally sound.\n", "BULLET::::2. Identify, with the assistance of a structural engineer where required, any timber that requires replacement or strengthening due to loss of structural strength and carry out these works. Retain as much original fabric as possible, especially in historic buildings.\n\nBULLET::::3. Isolate timbers from other materials that will take a long time to dry out.\n\nBULLET::::4. Increase the ventilation of the area if this is insufficient, by introducing extra air bricks etc.\n\nBULLET::::5. Implement a regular schedule of inspection and maintenance for the building to tackle future problems early on and/or install monitoring equipment.\n", "Daily builds typically include a set of tests, sometimes called a \"smoke test.\" These tests are included to assist in determining what may have been broken by the changes included in the latest build. The critical piece of this process is to include new and revised tests as the project progresses. \n\nSection::::Continuous integration builds.\n", "Dr. Ridout quotes a case study where an initial quote for orthodox treatment of a building was £23,000 but subsequent treatment by environmental methods resulted in a saving of one third in remedial works and timber replacement. Where it is decided to install moisture monitoring equipment, this will represent an additional capital outlay.\n\nSection::::Effectiveness of Treatments.:Guarantees.\n", "NHBC inspectors visit building sites at key stages to check compliance with its Technical Standards. The stages are usually (but can sometimes be more): foundations, drainage, superstructure (e.g. brickwork), pre-plaster, and pre-handover to the buyer. For flats, they also inspect roof construction. The inspection process is not designed to check every detail of the build, but if NHBC is satisfied with the overall build quality they will issue the warranty for the new home/premises.\n", "BULLET::::4. Apply fungicide to all such masonry, concrete and earth surfaces at the specified rate. Apply two generous coats of fungicide to all timber surfaces to a distance of 1.5 metres from the cutting away. (Allow first coat to be absorbed before applying second coat)\n\nBULLET::::5. Use only fully preservative-treated timber for replacement.\n\nBULLET::::6. Replaster with zinc oxychloride (ZOC) plaster or, for areas not to be replastered, apply two coats of ZOC paint.\n\nAs can be seen from stages 1 and 2, this involves the removal of a considerable quantity of building fabric.\n", "'Buildmark', the NHBC warranty for private housing is split into two parts. In the first two years, the builder is responsible for fixing any defects caused by its failure to build to NHBC Technical Standards. If the builder fails to do this, or has gone out of business, NHBC will take responsibility to fix the defect. 
From the start of the third year, until the home is ten years old, NHBC is responsible for putting right defects to the structural and weather-proofing parts of the home caused by breaches of its Technical Standards.\n\nSection::::Building Control.\n", "Restoration work began with the removal of seven white ant nests, then a total re-roofing. The restorers kept the underlying shingles but replaced the rusted Corrugated iron covering those shingles. They also repaired the timberwork and carefully and painstakingly in-filled damage to the mud walls and brickwork, which they then whitewashed with a special formula.\n", "BULLET::::6. \"A River Ran Through It\" - Homeowners attempt to solve the problem of their mid-century home's leaky roof by adding a second story addition. However, the contractor does little to fix the lingering damage from the roof leaks and eventually skips out with the work only half-finished. Mike and the crew assist in correcting old mistakes and finishing the work left undone.\n\nBULLET::::7. \"Best Laid Plans\" - A wooden kitchen floor was replaced several times due to water damage. Mike calls in several experts to determine the source.\n", "The methods and approaches used to assess and repair timbers in historic buildings have changed considerably in recent years, with a move away from wall irrigation, damage to decorative features during invasive survey work, and unnecessary cutting out or chemical treatment of timbers.\n", "With all the building site prep work completed, the building will be delivered by trucks towing the individual sections on their permanent chassis. The sections will be joined together securely, and all final plumbing and electrical connections are made before a decorative skirt or facade is applied to the bottom exterior of the house, hiding the chassis and finishing off the look of the home.\n\nInside, paint and carpet are finished to design specifications, then the home is cleaned thoroughly.\n\nSection::::See also.\n\nBULLET::::- Modular home\n\nBULLET::::- Prefabrication\n\nBULLET::::- Prefabricated home\n\nBULLET::::- British post-war temporary prefab houses\n\nBULLET::::- HUD USER\n", "BULLET::::2. Hack off all plaster/render and remove any skirtings, panelling, linings and ceilings necessary to trace the fullest extent of the growth over or through adjacent masonry, concrete or timber surfaces.\n\nBULLET::::3. Clean off with a wire brush all surfaces and any steel and pipe work within the area up to a radius of 1.5 metres from the furthest extent of suspected infection. Remove from the building all dust and debris ensuing from the work.\n", "An Atrac in Townsville Australia, uses follow the sun technology in which the samples are rotated so that they always face the sun. In 17 months this produced the equivalent of 2 years of weathering.\n\nA variety of environmental chambers are also used in conjunction with industry standards.\n\nSection::::Types of weather testing.:Artificial weathering.\n", "Once the logs have dried for the desired length of time, they are profiled prior to shipping. Profiling usually does not take place until shortly before shipment, to ensure that the logs stay as uniform as possible. It is uncertain whether this process is advantageous; it depends on many factors such as local climate, wood species, its size, and the location of the log structure.\n\nSection::::Components.:Kiln-dried logs.\n", "BULLET::::- Cure time: There is only a 24-hour wait time to decorate over \"one coat\" veneer. 
Cure time applies only to two- or three-coat plastering that may have lime in the finish. It is recommended to wait a few days or weeks before painting a lime finish wall. Plastering walls with one coat is actually faster than taping, since you don't have to wait 24 hours for each of three separate coats to dry, as in taping. If heavy patchwork is involved, then yes, there is also a wait time. However, that is typically drying time, not curing.\n", "The company managed to reduce the drying periods of sub-floor smoothing compounds. This invention enabled the company to manufacture \"quick-setting\" screeds, which become fully usable after 24 hours, i.e. installation of the finish top cover is possible after 24 hours. For conventional screeds, the drying period required is about 28 days even today. Using the \"quick-setting smoothing compounds\", the top floor cover can be laid as soon as one hour later.\n", "Alternatively, the use of pastes and boron rods would be justified in this instance. \"Preservative treatments may be essential in some situations if the spread of the fungus is to be restricted and critical timbers are to be protected while the structure dries.\" \n", "The first step in any course of treatment is to make the necessary repairs to the building defects (overflowing gutters, blocked airbricks, missing slates, etc.) that allowed the ingress of dampness. The treatment methods described below assume that the dry rot has been positively identified, the full extent of the rot ascertained, and that the building is now water tight.\n\nA number of methods of attacking dry rot have been developed which can be classified as follows:\n\nBULLET::::- Orthodox – emphasis on the use of chemical fungicides\n\nBULLET::::- Environmental – emphasis on controlling the fungus by controlling environmental conditions\n", "BULLET::::2. \"Let's Rejoist\" - In this episode, featuring guest crew member Jordan MacNab, the winner of the first \"Handyman Superstar Challenge\" (for which Mike was a judge), Mike comes to the assistance of a homeowner who had discovered a small water stain in their ceiling below an upstairs balcony. After the homeowner hired a roofer to investigate, the roofer discovered that the joists holding the balcony up had completely rotted through, and was forced to abort any roof repair. Finding no other local contractors willing to undertake the challenge of replacing these joists in addition to fixing the original problem, Mike and his crew step up to the plate and make things right.\n", "The fast-track nature of the design and construction process (experience in 2011) often leads to missed planning, design, and even construction items. Items missed during the design and construction process can often be identified by the CxA during development of the functional and performance test procedures or during functional and performance tests.\n", "Other materials are avoided by practitioners of this building approach, due to their major negative environmental or health impacts. 
These include unsustainably harvested wood, toxic wood-preservatives, portland cement-based mixes and derived products such as Autoclaved aerated concrete, paints and other coatings that off-gas volatile organic compounds (VOCs), steel, waste materials such as rubber tires in regions where they are recycled, and some plastics; particularly polyvinyl chloride (PVC or \"vinyl\") and those containing harmful plasticizers or hormone-mimicking formulations.\n\nSection::::Techniques.\n", "The environmental approach emphasises the need for continued monitoring to ensure that future building defects do not start a new outbreak of dry rot or reactivate a dormant one. While in a simple small building this may be accomplished by regular maintenance inspections, systems are available that can monitor a large building with readings from moisture sensors being remotely monitored by a computer.\n\nSection::::Treatment Methods.:Heat treatment.\n", "In Denmark, a procedure has been developed whereby the building, or the affected part thereof, is tented and heated by hot air to kill dry rot. A temperature of is achieved at the centre of masonry and timbers and maintained for twenty-four hours. However, the question could be asked as to why someone should expend large amounts of energy heating the entire building to a high temperature when all that is needed to kill the rot is to dry it out.\n", "Section::::Maintenance.\n\nHomemakers that follow predictive maintenance techniques determine the condition of in-service equipment in order to predict when maintenance should be performed. This approach offers cost savings over routine or time-based maintenance, because tasks are performed only when warranted. Homemakers that follow preventive maintenance methods ensure that household equipment and the house are in satisfactory operating condition by providing for inspection, detection, and correction of incipient failures either before they occur or before they develop into major defects.\n\nSection::::Maintenance.:Home maintenance.\n", "A case study of successful environmental control of dry rot in a large building is included as an appendix in Historic Scotland’s \"Technical Advice Note 24\". Case studies are also quoted in Dr. Brian Ridout’s book \"Timber Decay in Buildings, The Conservation Approach to Treatment\".\n\nSection::::Effectiveness of Treatments.:Costs.\n\nWith all treatment methods, the costs of the repairs to rectify the building defects that permitted the ingress of moisture will be the same. The overall cost of using the environmental approach to the treatment of dry rot is likely to be less than the orthodox approach.\n" ]
[ "When wood gets wet it instantly starts to rot. ", "Wood rots after rain." ]
[ "When wood gets wet it does not instantly start to rot. ", "Wood dries out after rain and doesn't rot." ]
[ "false presupposition" ]
[ "When wood gets wet it instantly starts to rot. ", "Wood rots after rain." ]
[ "false presupposition", "false presupposition" ]
[ "When wood gets wet it does not instantly start to rot. ", "Wood dries out after rain and doesn't rot." ]
2018-06432
Why does LSD make you hallucinate?
It causes a chemical change in the way your brain processes sensations and experiences. That's more of a how than a why, I guess. It wasn't intentionally designed with those effects in mind.
[ "Section::::Critical reception.\n", "According to a 2009 study published in the \"Journal of Nervous and Mental Disease,\" the hallucinations are caused by the brain misidentifying the source of what it is currently experiencing, a phenomenon called faulty source monitoring.\n\nA study conducted on individuals who underwent REST while under the effects of Phencyclidine (PCP) showed a lower incidence of hallucination in comparison to participants who did not take PCP. The effects of PCP also appeared to be reduced while undergoing REST. The effects PCP has on reducing occurrences of hallucinatory events provides a potential insight into the mechanisms behind these events. \n", "One explanatory model for the experiences provoked by psychedelics is the \"reducing valve\" concept, first articulated in Aldous Huxley's book \"The Doors of Perception\". In this view, the drugs disable the brain's \"filtering\" ability to selectively prevent certain perceptions, emotions, memories and thoughts from ever reaching the conscious mind. This effect has been described as \"mind expanding\", or \"consciousness expanding\", for the drug \"expands\" the realm of experience available to conscious awareness.\n\nWhile possessing a unique mechanism of action, cannabis or marijuana has historically been regarded alongside the classic psychedelics.\n\nSection::::Research chemicals and designer drugs.\n", "BULLET::::- Synopsis\n\nA call comes into the Los Angeles Police Department juvenile narcotics division with a complaint of a person painted like an Indian and chewing the bark off a tree.\n\nBULLET::::- Closing narration\n\n\"The story you have just seen is true. The names were changed to protect the innocent. On December 15, a Coroner's inquest was held at the County Morgue, Hall of Justice, City and County of Los Angeles. In a moment, the results of that inquest.\"\n\nSection::::Reception.\n", "Lysergic acid diethylamide, or LSD, activates serotonin receptors (the amine transmitter of nerve urges) in brain matter. LSD acts on certain serotonin receptors, and its effects are most prominent in the cerebral cortex, an area involved in attitude, thought, and insight, which obtains sensory signs from all parts of the body. LSD's main effects are emotional and psychological. The ingester's feelings may alter quickly through a range from fear to ecstasy. (Humphrey, N. 2001) This may cause one to experience many levels of altered consciousness.\n", "While publicly available documents indicate that the CIA and Department of Defense have discontinued research into the use of LSD as a means of mind control, research from the 1960s suggests that both mentally ill and healthy people are more suggestible while under its influence.\n\nSection::::Adverse effects.:Flashbacks.\n\n\"Flashbacks\" are a reported psychological phenomenon in which an individual experiences an episode of some of LSD's subjective effects after the drug has worn off, \"persisting for months or years after hallucinogen use\".\n", "Outside Wikieup, Arizona they got stuck in the sand by a pond, and had an intense LSD party while they waited for a tractor to pull them out. In Phoenix they confounded the Barry Goldwater presidential headquarters by painting \"A VOTE FOR BARRY IS A VOTE FOR FUN!\" above the bus windows on the left side, and driving backwards through the downtown. 
Casamo had apparently taken too much LSD in Wikieup, and spent much of the drive from Phoenix to Houston standing naked on the rear platform, confounding the truckers who followed \"Further\" down the highway.\n", "BULLET::::- Michel Foucault had an LSD experience with Simeon Wade in Death Valley and later wrote that it was \"the greatest experience of his life\", and that it profoundly changed his life and his work.\n\nBULLET::::- Kary Mullis is reported to credit LSD with helping him develop DNA amplification technology, for which he received the Nobel Prize in Chemistry in 1993.\n\nBULLET::::- Oliver Sacks, a neurologist famous for writing best-selling case histories about his patients' disorders and unusual experiences, talks about his own experiences with LSD and other perception-altering chemicals in his book \"Hallucinations\".\n\nSection::::See also.\n", "Hallucinogens cause perceptual and cognitive distortions without delirium. The state of intoxication is often called a \"trip\". Onset is the first stage after an individual ingests (LSD, psilocybin, or mescaline) or smokes (dimethyltryptamine) the substance. This stage may consist of visual effects, with an intensification of colors and the appearance of geometric patterns that can be seen with one's eyes closed. This is followed by a plateau phase, where the subjective sense of time begins to slow and the visual effects increase in intensity. The user may experience synesthesia, a crossing-over of sensations (for example, one may \"see\" sounds and \"hear\" colors). In addition to the sensory-perceptual effects, hallucinogenic substances may induce feelings of depersonalization, emotional shifts to a euphoric or anxious/fearful state, and a disruption of logical thought. Hallucinogens are classified chemically as either indolamines (specifically tryptamines), sharing a common structure with serotonin, or as phenethylamines, which share a common structure with norepinephrine. Both classes of these drugs are agonists at the 5-HT receptors; this is thought to be the central component of their hallucinogenic properties. Activation of 5-HT may be particularly important for hallucinogenic activity. However, repeated exposure to hallucinogens leads to rapid tolerance, likely through down-regulation of these receptors in specific target cells.\n", "Whilst promoting the 2016 superhero film \"Deadpool\", Ryan Reynolds did his take on the PSA in character. First he says \"Hi. Deadpool here, with a very important announcement.\" He holds up a chimichanga, saying \"This is your brain. Actually, it's a chimichanga. But, I'm making a point, because...\" He then points to a giant chimichanga on the table, saying \"This... is your brain on IMAX. Bigger is better, right?\" This is followed by clips from the film.\n\nWhen the children's television series \"SpongeBob SquarePants\" aired on MTV in 2008, a promo was made to play before the show began that parodied this ad.\n", "BULLET::::- While the episode centers on the dangers of LSD, the climax shows that Benjie died not of an LSD overdose (which is nearly physically impossible), but rather a barbiturate overdose, which another character says was brought on by Benjie's desire to get \"farther out.\"\n\nBULLET::::- The plot in this episode was inspired by a real-life acid test in Watts. That event was chronicled by Tom Wolfe in his book \"The Electric Kool-Aid Acid Test\". The band was the newly formed Grateful Dead. 
Merry Prankster Paul Foster, face painted half silver and half black, was arrested.\n\nSection::::See also.\n", "LSD can cause pupil dilation, reduced appetite, and wakefulness. Other physical reactions to LSD are highly variable and nonspecific, some of which may be secondary to the psychological effects of LSD. Among the reported symptoms are numbness, weakness, nausea, hypothermia or hyperthermia, elevated blood sugar, goose bumps, heart rate increase, jaw clenching, perspiration, saliva production, mucus production, hyperreflexia, and tremors.\n\nSection::::Effects.:Psychological.\n", "Most psychedelics are not known to have long-term physical toxicity. However, entactogens such as MDMA that release neurotransmitters may stimulate increased formation of free radicals possibly formed from neurotransmitters released from the synaptic vesicle. Free radicals are associated with cell damage in other contexts, and have been suggested to be involved in many types of mental conditions including Parkinson's disease, senility, schizophrenia, and Alzheimer's. Research on this question has not reached a firm conclusion. The same concerns do not apply to psychedelics that do not release neurotransmitters, such as LSD, nor to dissociatives or deliriants.\n", "Section::::LSD-related death of an elephant.\n", "No clear connection has been made between psychedelic drugs and organic brain damage. However, hallucinogen persisting perception disorder (HPPD) is a diagnosed condition wherein certain visual effects of drugs persist for a long time, sometimes permanently, although science and medicine have yet to determine what causes the condition.\n\nA large epidemiological study in the U.S. found that other than personality disorders and other substance use disorders, lifetime hallucinogen use was not associated with other mental disorders, and that risk of developing a hallucinogen use disorder was very low.\n\nSection::::How hallucinogens affect the brain.\n", "LSD may trigger panic attacks or feelings of extreme anxiety, known familiarly as a \"bad trip.\" Review studies suggest that LSD likely plays a role in precipitating the onset of acute psychosis in previously healthy individuals with an increased likelihood in individuals who have a family history of schizophrenia. There is evidence that people with severe mental illnesses like schizophrenia have a higher likelihood of experiencing adverse effects from taking LSD.\n\nSection::::Adverse effects.:Suggestibility.\n", "A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA (\"Ecstasy\"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, Dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role.\n", "As early as the 1960s, research into the medicinal properties of LSD was being conducted. 
It has been found that LSD is a fairly effective treatment for mental disorders such as obsessive compulsive disorder (OCD). \"Savage et al. (1962) provided the earliest report of efficacy for a hallucinogen in OCD, where after two doses of LSD, a patient who suffered from depression and violent obsessive sexual thoughts experienced dramatic and permanent improvement (Nichols 2004: 164).\" \n", "BULLET::::- Aldous Huxley, author of \"Brave New World\", became a user of psychedelics after moving to Hollywood. He was at the forefront of the counterculture's experimentation with psychedelic drugs, which led to his 1954 work \"The Doors of Perception\". Dying from cancer, he asked his wife on 22 November 1963 to inject him with 100 µg of LSD. He died later that day.\n\nBULLET::::- Steve Jobs, co-founder and former CEO of Apple Inc., said, \"Taking LSD was a profound experience, one of the most important things in my life.\"\n", "Drug-induced hallucinations are caused by hallucinogens, dissociatives, deliriants including many drugs with anticholinergic actions and certain stimulants, which are known to cause visual and auditory hallucinations. Some psychedelics such as lysergic acid diethylamide (LSD) and psilocybin can cause hallucinations that range from a spectrum of mild to severe.\n", "A diagnosable condition called hallucinogen persisting perception disorder has been defined to describe intermittent or chronic flashbacks that cause distress or impairment in life and work, and are caused only by prior hallucinogen use and not some other condition.\n\nSection::::Adverse effects.:Cancer and pregnancy.\n\nThe mutagenic potential of LSD is unclear. Overall, the evidence seems to point to limited or no effect at commonly used doses. Empirical studies showed no evidence of teratogenic or mutagenic effects from use of LSD in man.\n\nSection::::Adverse effects.:Tolerance.\n\nTolerance to LSD builds up with consistent use and cross-tolerance has been demonstrated between LSD, mescaline\n", "In 1992, Mike Dirnt of Green Day wrote the famous \"Longview\" bass line while under the influence of LSD. In an interview, Green Day lead singer and guitarist Billie Joe Armstrong recalled that he arrived at their house and saw Mike sitting on the floor with highly dilated pupils, holding his bass guitar. Mike looked up at Billie and exclaimed, \"Listen to this!\"\n\nSection::::Recreational use.:From 1960 to 1980.:LSD in Australia.\n", "In 1951, De Giacomo confirmed that when schizophrenic subjects were given large dosages of LSD by mouth, they experienced a state of catatonia. He also stated that psychotic patients were more tolerant than healthy patients to LSD, and required higher dosages to produce responses.\n", "Whenever I put the headset on now,\" he'd continued, \"I really do understand what I find there. When those kids sing about 'She loves you,' yeah well, you know, she does, she's any number of people, all over the world, back through time, different colors, sizes, ages, shapes, distances from death, but she loves. And the 'you' is everybody. And herself. Oedipa, the human voice, you know, it's a flipping miracle.\" His eyes brimming, reflecting the color of beer. \"Baby,\" she said, helpless, knowing of nothing she could do for this, and afraid for him. He put a little clear plastic bottle on the table between them. She stared at the pills in it, and then understood. 
\"That's LSD?\" she said.\n", "The psychoactive properties of LSD were discovered in 1943 by Swiss chemist Albert Hofmann when he accidentally ingested a small dose through the skin while studying the compound. Controlled research on human subjects began soon after and Hofmann's colleague Werner Stoll published his findings about the basic effects of LSD on human subjects in 1947.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03469
Why do jpgs lose quality/ get "deep fried" when they are downloaded too often?
Transferring a JPEG about the place doesn't affect the quality in itself, but websites will often compress images to save space, reduce download times, etc. So, if someone uploads an image to Facebook, then it goes to Twitter, then Reddit, then Facebook, then MySpace, then Wordpress, then back to Facebook, it's likely many of the websites in the chain will have applied their own compression to the image, slowly turning it into a murky mess (and ironically sometimes resulting in a larger file size than the original).
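A quick way to see this "generation loss" for yourself is to re-encode the same picture over and over. The sketch below is illustrative only: it assumes the Pillow library is available and that a file named original.png exists, neither of which comes from the answer or the passages.

```python
# Minimal sketch of JPEG generation loss: every save/reload cycle re-runs
# the lossy quantization step, so artifacts accumulate with each pass.
from io import BytesIO

from PIL import Image  # assumed dependency: pip install Pillow

def recompress(img: Image.Image, quality: int) -> Image.Image:
    """Encode img as a JPEG at the given quality, then decode it again."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

img = Image.open("original.png").convert("RGB")  # hypothetical input file
for generation in range(50):
    # Alternating quality settings mimics different sites each applying
    # their own compression as the image is re-uploaded around the web.
    img = recompress(img, quality=85 if generation % 2 else 70)
img.save("generation_50.jpg")  # visibly murkier than the original
```

After a few dozen generations, the blocking and ringing compound into the murky, "deep fried" look the question describes.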
[ "Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG images, which may take the form of noise around contrasting edges (especially curves and corners), or \"blocky\" images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around sharp corners between contrasting colors (text is a good example, as it contains many such corners). The analogous artifacts in MPEG video are referred to as \"mosquito noise,\" as the resulting \"edge busyness\" and spurious dots, which change over time, resemble mosquitoes swarming around the object.\n", "BULLET::::- low mean square error over each 8×8-pixel block\n\nBULLET::::- very low mean error over each 8×8-pixel block\n\nBULLET::::- very low mean square error over the whole image\n\nBULLET::::- extremely low mean error over the whole image\n", "BULLET::::- The use of obsolete formats or poorly-supported extensions which break commonly used tools.\n\nIt is a cryptographic requirement that the carrier (e.g. photo) is original, not a copy of something publicly available (e.g., downloaded). This is because the publicly available source data could be compared against the version with a hidden message embedded.\n", "fail in the presence of file system fragmentation. Simson Garfinkel showed that on average 16% of JPEGs are fragmented, which\n\nmeans on average 16% of jpegs are recovered partially or appear corrupt when recovered using techniques that\n\ncan't handle fragmented photos.\n\nSection::::Recovering data after logical failure.:Photo Recovery Using File Carving.:Header-Footer Carving.\n\nIn Header-Footer Carving, a recovery program attempts to recover photos based on the standard starting and ending byte\n\nsignature of the photo format. To take an example, all JPEGs always begin with the hex sequence \"FFD8\" and they must\n\nend with the hex sequence \"FFD9\".\n", "BULLET::::- On newer firmware version of the 6230i, the camera is unable to process images that are bigger than 786 KB. This can be experienced when taking pictures at top quality setting in 1280x1024 mode of scenes with lots of fine details, like the leaves on a tree. You will sometimes get a message like \"Unable to save\" because the image processor in the camera module does not have enough memory to create the compressed JPEG file from the image sensor data.\n", "Although Exif data (in UTC rather than the local time zone) is recorded into each picture, the FAT32 filesystem's creation time is displayed as the file info. This information (along with the file-modified time) is destroyed if the file is moved between the phone and the card, causing pictures to appear out of order, as if taken at the time they were moved. Like other Verizon phones, filenames are recorded as MMDDYYhhmm\"x\".jpg or .3g2, where \"x\" is the letter a, b, c, etc. Because the year is in the wrong place (the middle instead of the beginning where the most significant digits belong), the sort order appears wrong when viewed on a computer via a card reader, if there are pictures from a previous calendar year.\n", "These artifacts can be reduced by choosing a lower level of compression; they may be completely avoided by saving an image using a lossless file format, though this will result in a larger file size. The images created with ray-tracing programs have noticeable blocky shapes on the terrain. 
Certain low-intensity compression artifacts might be acceptable when simply viewing the images, but can be emphasized if the image is subsequently processed, usually resulting in unacceptable quality. Consider the example below, demonstrating the effect of lossy compression on an edge detection processing step.\n", "BULLET::::- G 448 00001 P - G 576 00000 P Jul 1993\n\nBULLET::::- G 448 00001 Q - G 512 00000 Q Aug 1993\n\nBULLET::::- F 512 00001 U - F 576 00000 U Oct 1993\n\nBULLET::::- F 640 00001 U - F 704 00000 U Oct 1993\n\nBULLET::::- F 896 00001 U - F 960 00000 U Oct 1993\n\nBULLET::::- F 064 00001 V - F 128 00000 V Oct 1993\n\nBULLET::::- F 192 00001 V - F 256 00000 V Oct 1993\n\nBULLET::::- F 384 00001 V - F 448 00000 V Oct 1993\n", "Section::::JPEG-LS.:LOCO-I algorithm.\n\nPrior to encoding, there are two essential steps to be done in the modeling stage: decorrelation (prediction) and error modeling.\n\nSection::::JPEG-LS.:LOCO-I algorithm.:Decorrelation/prediction.\n", "Rotations where the image is not a multiple of 8 or 16, which value depends upon the chroma subsampling, are not lossless. Rotating such an image causes the blocks to be recomputed which results in loss of quality.\n", "There is also an interlaced \"progressive\" JPEG format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, support for progressive JPEGs is not universal. When progressive JPEGs are received by programs that do not support them (such as versions of Internet Explorer before Windows 7) the software displays the image only after it has been completely downloaded.\n", "Section::::Implementations.\n\nA very important implementation of a JPEG codec is the free programming library \"libjpeg\" of the Independent JPEG Group. It was first published in 1991 and was key for the success of the standard. This library or a direct derivative of it is used in countless applications. Recent versions introduce proprietary extensions which broke ABI compatibility with previous versions.\n\nIn March 2017, Google released the open source project Guetzli, which trades off a much longer encoding time for smaller file size (similar to what Zopfli does for PNG and other lossless data formats).\n", "The three simple predictors are selected according to the following conditions: (1) it tends to pick B in cases where a vertical edge exists left of the X, (2) A in cases of an horizontal edge above X, or (3) A + B – C if no edge is detected.\n\nSection::::JPEG-LS.:LOCO-I algorithm.:Context modeling.\n", "With this in mind, the sequence from earlier becomes:\n\nFrom here, frequency calculations are made based on occurrences of the coefficients. In our example block, most of the quantized coefficients are small numbers that are not preceded immediately by a zero coefficient. 
These more-frequent cases will be represented by shorter code words.\n\nSection::::JPEG codec example.:Compression ratio and artifacts.\n", "Section::::Methods.\n\nSection::::Methods.:Graphics.\n\nSection::::Methods.:Graphics.:Image.\n\nBULLET::::- Better Portable Graphics, also known as BPG (lossless or lossy compression)\n\nBULLET::::- Cartesian Perceptual Compression, also known as CPC\n\nBULLET::::- DjVu\n\nBULLET::::- Fractal compression\n\nBULLET::::- ICER, used by the Mars Rovers, related to JPEG 2000 in its use of wavelets\n\nBULLET::::- JBIG2 (lossless or lossy compression)\n\nBULLET::::- JPEG\n\nBULLET::::- JPEG 2000, JPEG's successor format that uses wavelets (lossless or lossy compression)\n\nBULLET::::- JPEG XR, another successor of JPEG with support for high dynamic range, wide gamut pixel formats (lossless or lossy compression)\n\nBULLET::::- PGF, Progressive Graphics File (lossless or lossy compression)\n", "Below are several digital images illustrating data degradation, all consist of 326,272 bits. The original photo is displayed on the left. In the next image to the right, a single bit was changed from 0 to 1. In the next two images, two and three bits were flipped. On Linux systems, the binary difference between files can be revealed using command (e.g. ).\n\nSection::::In RAM.\n", "JPEGmini\n\nJPEGmini is a Photo Optimization software product created in 2011 that is said to compress the file size of JPEG photos by approximately a factor of two to three, sometimes more, with minimal or imperceptible degradation in image quality.\n\nSection::::Technology.\n", "Early in 2005, a new JPEG compression system was released that regularly obtained compression in the order of 25% (meaning a compressed file size 75% of the original file size) without any further loss of image quality and with the ability to rebuild the original file, not just the original image. (ZIP-like programs typically achieve JPEG compression rates in the order of 1 to 3%. Programs that optimise JPEGs without regard for the original file, only the original image, obtain compression rates from 3 to 10% (depending on the efficiency of the original JPEG). Programs that use the rarely implemented arithmetic coding option available to the JPEG standard typically achieve rates around 12%.)\n", "Although users can create secure web albums, Google refuses to fix an error that started with their 'upgrade' to Google+: an Unlisted Gallery link does not display correctly unless the viewer is logged into a google account. Previously, users could share their Unlisted Gallery link with anyone, Google user or not.\n", "As the typical use of JPEG is a lossy compression method, which reduces the image fidelity, it is inappropriate for exact reproduction of imaging data (such as some scientific and medical imaging applications and certain technical image processing work).\n", "JPEG JFIF images are widely used on the Web. 
The amount of compression can be adjusted to achieve the desired trade-off between file size and visual quality.\n\nSection::::Utilities.\n\nThe following utility programs are shipped together with libjpeg:\n\nBULLET::::- cjpeg and djpeg: for performing conversions between JPEG and some other popular image file formats.\n\nBULLET::::- rdjpgcom and wrjpgcom: for inserting and extracting textual comments in JPEG files.\n\nBULLET::::- jpegtran: for transformation of existing JPEG files.\n\nSection::::Utilities.:jpegtran.\n", "Digital resampling such as image scaling, and other DSP techniques can also introduce artifacts or degrade signal-to-noise ratio (S/N ratio) each time they are used, even if the underlying storage is lossless.\n", "Other lossy algorithms, which use pattern matching to deduplicate similar symbols, are prone to introducing hard to detect errors in printed text. For example, the numbers \"6\" and \"8\" may get replaced. This has been observed to happen with JBIG2 in certain photocopier machines.\n\nSection::::Images.:Block boundary artefacts.\n", "The problem is now to find the optimal packet length for all code blocks which minimizes the overall distortion in a way that the generated target bitrate equals the demanded bit rate.\n", "A number of alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image size is a multiple of 1 MCU block (Minimum Coded Unit) (usually 16 pixels in both directions, for 4:2:0 chroma subsampling). Utilities that implement this include:\n\nBULLET::::- jpegtran and its GUI, Jpegcrop.\n\nBULLET::::- IrfanView using \"JPG Lossless Crop (PlugIn)\" and \"JPG Lossless Rotation (PlugIn)\", which require installing the JPG_TRANSFORM plugin.\n\nBULLET::::- FastStone Image Viewer using \"Lossless Crop to File\" and \"JPEG Lossless Rotate\".\n\nBULLET::::- XnViewMP using \"JPEG lossless transformations\".\n" ]
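The header-footer carving description in the passages above maps directly to a few lines of code. This is a toy sketch under strong assumptions (a contiguous, unfragmented file system and a hypothetical disk.img input); as the passage notes, carvers that rely only on the FFD8/FFD9 signatures recover fragmented JPEGs only partially.

```python
# Toy header-footer carver: scan raw bytes for the JPEG start-of-image
# marker (FFD8) and end-of-image marker (FFD9) quoted in the passage above.
def carve_jpegs(raw: bytes):
    """Yield byte spans that look like whole, contiguous JPEG files."""
    pos = 0
    while True:
        header = raw.find(b"\xff\xd8", pos)
        if header == -1:
            return  # no more start markers
        footer = raw.find(b"\xff\xd9", header + 2)
        if footer == -1:
            return  # start marker without a matching end marker
        yield raw[header:footer + 2]
        pos = footer + 2

with open("disk.img", "rb") as f:  # hypothetical raw image of the media
    for i, blob in enumerate(carve_jpegs(f.read())):
        with open(f"carved_{i}.jpg", "wb") as out:
            out.write(blob)
```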
[ "JPGs lose quality when they are downloaded." ]
[ "Downloading doesn't reduce quality, compression tactics used by different websites reduce the quality when it is uploaded. " ]
[ "false presupposition" ]
[ "JPGs lose quality when they are downloaded." ]
[ "false presupposition" ]
[ "Downloading doesn't reduce quality, compression tactics used by different websites reduce the quality when it is uploaded. " ]
2018-05953
Why have I never heard of someone getting heart cancer?
It's just a very, very rare form of cancer, so it's not heavily talked about. URL_0 Apparently the drummer of Kiss died from it, though.
[ "In most cases, patients with heart metastases have advanced tumour disease, with the heart being only one of the many places involved in the generalised tumour spread. At that stage of the disease, the patients will have already undergone extensive chemotherapy, radiation therapy or surgical procedures. Cardiac treatment is usually confined to palliative measures.\n\nSection::::Notable cases.\n\nBULLET::::- Henry VIII's first wife Catherine of Aragon's death is believed to have resulted from heart cancer.\n\nBULLET::::- Eric Carr, American musician; drummer of the rock band Kiss, died of heart cancer.\n", "Primary malignant cardiac tumors (PMCTs) are even rarer. The most recent published study about PMCTs used the Surveillance, Epidemiology and End-Results (SEER) Cancer Registry to study 497 patients with PMCTs who were diagnosed during 2000-2001 in the United States. Most cases were angiosarcomas (27.3%) with an incidence of 0.107 per 1,000,000 person-years and Non- Hodgkin's lymphomas [NHL] (26.9%), with an incidence of 0.108 per 1,000,000 person-years. The incidence rate of NHL increased significantly over the study period, but the incidence of cardiac angiosarcomas did not. The overall survival of NHL was found to be significantly better than angiosarcomas.\n", "Another previous study using the Surveillance, Epidemiology and End-Results (SEER) Cancer Registry from 1973–2011 found 551 cases of PMCTs, with an incidence of 34 cases per million persons. The study also found that the incidence has doubled over the past four decades. The associated mortality was very high, with only 46% of patients alive after one year. Sarcomas and mesotheliomas had the worst survival, while lymphomas had better survival. When compared with extracardiac tumors, PMCTs had worse survival.\n\nSection::::Types.:Secondary.\n", "Patients with heart tumours usually have non-specific symptoms, such as dyspnea (in particular shortness of breath when lying down), thoracoabdominal pain, fatigue, hemoptysis, nausea and vomiting, fever, weight loss, and night sweats. These symptoms mimic symptoms of other heart diseases, which can make diagnosis difficult.\n\nSection::::Diagnosis.\n\nIn most cases, the diagnosis is based on clinical history, echocardiography, a CT scan or an MRI scan. Cardiac tumours are often first diagnosed after the patient has had a stroke, an embolism caused by detached tumour tissue.\n\nSection::::Treatment.\n\nSection::::Treatment.:Secondary Tumours.\n", "Most heart tumors begin with myxomas, fibromas, rhabdomyomas, and hamartomas, although malignant sarcomas (such as angiosarcoma or cardiac sarcoma) have been known to occur. In a study of 12,487 autopsies performed in Hong Kong seven cardiac tumors were found, most of which were benign. According to Mayo Clinic: \"At Mayo Clinic, on average only one case of heart cancer is seen each year.\" In a study conducted in the Hospital of the Medical University of Vienna 113 primary cardiac tumour cases were identified in a time period of 15 years with 11 being malignant. The mean survival in the latter group of patients was found to be .\n", "Two women are playing tennis when one of them collapses, clutching at her chest. On a construction site, a crane worker dies in his seat. A fighter dies in the middle of a MMA match. A tuba instructor starts coughing up blood in the middle of a lesson and expires. 
Thirteen interrupts a college class to confirm the math instructor, Apple, had a corneal transplant five years ago, then informs her that four other people who received a transplant from the same donor died.\n", "Heart cancer\n\nHeart cancer is an extremely rare form of cancer that is divided into primary tumors of the heart and secondary tumors of the heart.\n\nSection::::Types.\n\nSection::::Types.:Primary.\n", "A subset of the primary tumors of the heart are tumors that are found on the valves of the heart. Tumors that affect the valves of the heart are found in an equal distribution among the four heart valves. The vast majority of these are papillary fibroelastomas. Primary tumors of the valves of the heart are more likely to occur in males. While most primary tumors of the valves of the heart are not malignant, they are more likely to have symptoms related to the valve, including neurologic symptoms and (in a few cases) sudden cardiac death.\n\nSection::::Prognosis.\n", "BULLET::::- 4 people received different organ transplants (liver, both lungs and kidneys) in 2007 from a 53-year-old woman who had recently died from intracranial bleeding. Before transplantation, the organ donor was deemed to have no signs of cancer upon medical examination. Later, the organ recipients developed metastatic breast cancer from the organs and 3 of them died from the cancer between 2009–2017.\n\nBULLET::::- A case of parasite-to-host cancer transmission occurred in a 41-year-old man in Colombia with a compromised immune system due to HIV. The man's tumor cells were shown to have originated from the dwarf tapeworm, \"Hymenolepis nana\".\n", "After another 12 years however, 'inoperable (terminal) secondary cancers' were found in my lymph glands, lung etc. I was given a few months to live; a maximum of 2 years. That was nearly 4 years ago.\n", "Because of the dangers inherent in an overlooked diagnosis of heart attack, cardiac disease should be considered first in people with unexplained chest pain. People with chest pain related to GERD are difficult to distinguish from those with chest pain due to cardiac conditions. Each condition can mimic the signs and symptomatic findings of the other. Further medical investigation, such as imaging, is often necessary.\n\nSection::::Differential diagnosis.:Heart.\n", "Premature heart disease is a major long-term complication in adult survivors of childhood cancer. Adult survivors are eight times more likely to die of heart disease than other people, and more than half of children treated for cancer develop some type of cardiac abnormality, although this may be asymptomatic or too mild to qualify for a clinical diagnosis of heart disease.\n\nSection::::Epidemiology.\n", "The most common primary tumor of the heart is the myxoma. In surgical series, the myxoma makes up as much as 77% of all primary tumors of the heart. Less common tumors of the heart include lipoma and cystic tumor of the atrioventricular nodal region.\n\nSection::::Types.:Malignant.\n", "BULLET::::- Nancy Reagan, former U.S. First Lady (died from congestive heart failure in 2016, aged 94)\n\nBULLET::::- Rita Reys, Dutch jazz singer who received the title \"Europe's First Lady of Jazz\" at the 1960 French jazz festival of Juan-les-Pins (died from a stroke in 2013, aged 88)\n", "Some slow-growing cancers are particularly common, but often are not fatal. 
Autopsy studies in Europe and Asia showed that up to 36% of people have undiagnosed and apparently harmless thyroid cancer at the time of their deaths and that 80% of men develop prostate cancer by age 80. As these cancers do not cause the patient's death, identifying them would have represented overdiagnosis rather than useful medical care.\n", "BULLET::::- Arlene Martel, American actress and dancer, primary cause of death given as complications from cardiac bypass surgery but she had also been battling breast cancer over the last five years of her life (died at age 78)\n\nBULLET::::- Jan Maxwell, American actress and singer, died from meningitis complicated by breast cancer (died at age 61)\n\nBULLET::::- Rue McClanahan, American actress; survived breast cancer, but died in 2010 following a stroke (died at age 76)\n", "BULLET::::- France Gall, French singer (died at age 70 from an infection complicated by cancer of undisclosed nature)\n\nBULLET::::- Greta Garbo, Swedish-American actress; \"apparently\" survived breast cancer following a double mastectomy; causes of death per one of her biographies were kidney and stomach failure and pneumonia; died in 1990 (died at age 84)\n\nBULLET::::- Paulette Goddard, American actress; apparently survived breast cancer, but died at her villa in Porto Ronco, Switzerland from heart failure under respiratory support due to emphysema (died at age 79)\n", "Aside from the great heterogeneity seen in lung cancers (especially those occurring among tobacco smokers), the considerable variability in diagnostic and sampling techniques used in medical practice, the high relative proportion of individuals with suspected GCCL who do not undergo complete surgical resection, and the near-universal lack of complete sectioning and pathological examination of resected tumor specimens prevent high levels of quantitative accuracy.\n\nSection::::Classification.\n", "Frequently seen cancers include lymphoma, melanoma, mast cell tumors (which are considered to be potentially malignant, even though they may have benign behavior), and osteosarcoma (bone cancer).\n\nCertain breeds are more likely to develop particular tumors, larger ones especially. The Golden Retriever is especially susceptible to lymphoma, with a lifetime risk of 1 in 8. Boxers and Pugs are prone to multiple mast cell tumors. Scottish Terriers have eighteen times the risk of mixed breed dogs to develop transitional cell carcinoma, a type of urinary bladder cancer.\n\nSection::::Diseases.:Gastrointestinal diseases.\n", "Higher physical activity is recommended. Physical exercise is associated with a modest reduction in colon but not rectal cancer risk. High levels of physical activity reduce the risk of colon cancer by about 21%. Sitting regularly for prolonged periods is associated with higher mortality from colon cancer. The risk is not negated by regular exercise, though it is lowered.\n\nSection::::Prevention.:Medication and supplements.\n", "Primary tumors of the heart\n\nPrimary tumors of the heart are extremely rare tumors that arise from the normal tissues that make up the heart. 
This is in contrast to secondary tumors of the heart, which are typically either metastatic from another part of the body, or infiltrate the heart via direct extension from the surrounding tissues.\n\nSection::::Types.\n\nSection::::Types.:Benign.\n", "Right atrial myxomas rarely produce symptoms until they have grown to be at least 13 cm (about 5 inches) wide.\n\nTests may include:\n\nBULLET::::- Echocardiogram and Doppler study\n\nBULLET::::- Chest x-ray\n\nBULLET::::- CT scan of chest\n\nBULLET::::- Heart MRI\n\nBULLET::::- Left heart angiography\n\nBULLET::::- Right heart angiography\n\nBULLET::::- ECG—may show atrial fibrillation\n\nBlood tests:\n\nA FBC may show anemia and increased WBCs (white blood cells). The erythrocyte sedimentation rate (ESR) is usually increased.\n\nSection::::Treatment.\n\nThe tumor must be surgically removed. Some patients will also need their mitral valve replaced. This can be done during the same surgery.\n", "Section::::Diagnosis.\n\nA diagnosis of lung cancer may be suspected on the basis of typical symptoms, particularly in a person with smoking history. Symptoms such as coughing up blood and unintentional weight loss may prompt further investigation, such as medical imaging.\n\nSection::::Diagnosis.:Classification.\n\nThe majority of lung cancers can be characterized as either small cell lung cancer (SCLC) or non-small cell lung cancer (NSCLC). Lung adenocarcinoma is one of the three major subtypes of NSCLC, which also include squamous carcinoma and large cell carcinoma.\n", "Cancer prevalence in dogs increases with age and certain breeds are more susceptible to specific kinds of cancers. Millions of dogs develop spontaneous tumors each year. Boxers, Boston Terriers and Golden Retrievers are among the breeds that most commonly develop mast cell tumors. Large and giant breeds, like Great Danes, Rottweilers, Greyhound and Saint Bernards, are much more likely to develop bone cancer than smaller breeds. Lymphoma occurs at increased rates in Bernese Mountain dogs, bulldogs, and boxers. It is important for the owner to be familiar with the diseases to which their specific breed of dog might have a breed predisposition.\n", "When BAC recurs after surgery, the recurrences are local in about three-quarters of cases, a rate higher than other forms of NSCLC, which tends to recur distantly.\n\nSection::::Epidemiology.\n\nInformation about the epidemiology of AIS is limited, due to changes in definition of this disease and separation from BAC category.\n\nUnder the new, more restrictive WHO criteria for lung cancer classification, AIS is now diagnosed much less frequently than it was in the past. Recent studies suggest that AIS comprises between 3% and 5% of all lung carcinomas in the U.S.\n\nSection::::Epidemiology.:Incidence.\n" ]
[ "Never heard of someone getting heart cancer." ]
[ "Drummer of Kiss dies from it. " ]
[ "false presupposition" ]
[ "Never heard of someone getting heart cancer." ]
[ "false presupposition" ]
[ "Drummer of Kiss dies from it. " ]
2018-16087
What's happening on some nights when you try to wipe the thick layer of moisture off your windshield but it reappears almost immediately?
Cold glass meets humid air. The water vapor in the air condenses on the glass. Like a cold drink on a hot day. As you drive along, the air hits your car at the speed you're driving into it, not letting the vapor have enough time to condense into water again. Your windscreen also heats up as the inside of the car heats up, preventing condensation. Try warming up your car before driving. Leave the AC on heat for a bit before you hit the road.
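To put a number on "cold glass meets humid air": the film keeps re-forming whenever the glass surface sits below the dew point of the air next to it. The sketch below uses the Magnus approximation; the constants are standard for that formula, but the example inputs are made up for illustration.

```python
# Minimal dew-point sketch (Magnus approximation, constants b and c in deg C).
# Condensation persists while the windshield is colder than this dew point.
import math

def dew_point_c(air_temp_c: float, rel_humidity: float) -> float:
    """Dew point in deg C; rel_humidity is a fraction in (0, 1]."""
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity) + b * air_temp_c / (c + air_temp_c)
    return c * gamma / (b - gamma)

air_temp, humidity = 12.0, 0.90  # a damp evening, made-up numbers
print(f"dew point: {dew_point_c(air_temp, humidity):.1f} C")
# Prints roughly 10.4 C: if the glass is at or below that, wiping is futile;
# vapor keeps condensing until the glass warms above the dew point.
```

That is why warming the cabin works: it lifts the glass above the dew point instead of fighting the condensation wipe by wipe.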
[ "BULLET::::- Emulsified - At the point of saturation, a cloudy appearance will be the tell-tale sign that the water/oil mixture has become emulsified.\n\nBULLET::::- Free - The most developed stage of fuel contamination is when free flowing puddles of water appear within stored oil. At this point, bacterial contamination and growth is boosted.\n\nSection::::Microbial contamination.\n\nThe contaminants found in fuel can be made up of a number of things, predominantly \"Hormoconis Resinae\", which is typically the main contaminant when microbial contamination is present, along with bacteria \"Pseudomonas Aeruginosa\", and fungi such as yeasts and moulds like \"Yarrowia tropicalis\".\n", "Rain-X Online Protectant was introduced to carwashes in 2005. It is a water-based compound that is applied to the entire car's surface, working much like consumer grade Rain-X products. \n\nCompeting products include Pittsburgh Glass Works' (formerly of PPG) Aquapel.\n\nSection::::Uses.\n", "Windshield washer fluid is sold in many formulations, and some may require dilution before being applied, although most solutions available in North America come premixed with no diluting required. The most common washer fluid solutions are given labels such as \"All-Season\", \"Bug Remover\", or \"De-icer\", and usually are a combination of solvents with a detergent. Dilution factors will vary depending on season, for example in winter the dilution factor may be 1:1, whereas during summer the dilution factor may be 1:10. It is sometimes sold as sachet of crystals, which is also diluted with water. Distilled or deionised water is the preferred diluent, since it will not leave trace mineral deposits on the glass.\n", "BULLET::::- In higher rainfall areas, the increased camber required to drain water, and open drainage ditches at the sides of the road, often cause vehicles with a high centre of gravity, such as trucks and off-road vehicles, to overturn if they do not keep close to the crown of the road\n\nBULLET::::- Excess dust permeates door-opening rubber moulding breaking the seal\n\nBULLET::::- Lost binder in the form of road dust, when mixed with rain, will wear away the painted surfaces of vehicles\n", "Section::::Parallel fail-over links.\n", "Diesel fuel is prone to \"waxing\" or \"gelling\" in cold weather; both are terms for the solidification of diesel oil into a partially crystalline state. Below the Cloud Point the fuel begins to develop solid wax particles giving it a cloudy appearance. The presence of solidified waxes thickens the oil and clogs fuel filters and injectors in engines. The crystals build up in the fuel line (especially in fuel filters) until the engine is starved of fuel, causing it to stop running.\n", "A \"wiperless windshield\" is a windshield that uses a mechanism other than wipers to remove snow and rain from the windshield. The concept car Acura TL features a wiperless windshield using a series of jet nozzles in the cowl to blow pressurized air onto the windshield. Also several glass manufacturers have experimented with nano type coatings designed to repel external contaminants with varying degrees of success but to date none of these have made it to commercial applications.\n\nSection::::Repair of stone-chip and crack damage.\n", "Section::::Suspended deposits.\n", "During the lawsuit, Petrolati portrayed himself as disengaged from the redistricting process. 
He did not take any notes at public hearings, did not communicate with advocates, and did not know how to use the computer software that drew the redistricting maps.\n\nSection::::State Representative.:Sale of insurance policies.\n", "Using a squeegee for window cleaning may sometimes produce run lines. These are caused by cleaning fluid being pushed up into the top edge of the window, or by fluid flowing from under the rubber blade into the dry area of the glass. The latter of these cases may be prevented by holding the squeegee at a slight angle relative to the direction in which it is being moved, directing fluid flow towards the wet area of the glass.\n", "First seen on the Rolls Royce in 1969 then the 1985 Ford Scorpio/Granada Mk. III in Europe and the 1986 Ford Taurus/Mercury Sable in the U.S., the system uses a mesh of very thin heating wires, or a silver/zinc oxide coated film embedded between two layers of windscreen glass. The overall effect when operative was defogging and defrosting of the windscreen at a very high rate. Landrover (UK) also fitted a similar screen to their Discovery range in the early 1990s, some of which were imported to Australia undetected by authorities, because at that stage they were not legal in any state. Owing to the high current draw, the system is engineered to operate only when the engine is running, and normally switches off after 10 minutes of operation. The metallic content of the glass has been shown to degrade the performance of certain windshield-mounted accessories, such as GPS navigators, telephone antennas and radar detectors.\n", "Today’s windshields are a safety device just like seatbelts and airbags. The urethane sealant is protected from UV in sunlight by a band of dark dots around the edge of the windshield. The darkened edge transitions to the clear windshield with smaller dots to minimize thermal stress in manufacturing. The same band of darkened dots is often expanded around the rearview mirror to act as a sunshade.\n\nSection::::Other aspects.\n", "By December 18, Gacy was beginning to show visible signs of strain as a result of the constant surveillance: he was unshaven, looked tired, appeared anxious and was drinking heavily. That afternoon, he drove to his lawyers' office to prepare a $750,000 civil suit against the Des Plaines police, demanding that they cease their surveillance. The same day, the serial number of the Nisson Pharmacy photo receipt found in Gacy's kitchen was traced to 17-year-old Kim Byers, a colleague of Piest at Nisson Pharmacy, who admitted when contacted in person the following day that she had worn the jacket and had placed the receipt in the parka pocket just before she gave the parka to Piest as he left the store to talk with a contractor. This revelation contradicted Gacy's previous statements that he had had no contact with Robert Piest on the evening of December 11: the presence of the receipt indicated that Gacy must have been in contact with Robert Piest after the youth had left the Nisson Pharmacy on December 11.\n", "Antecedent moisture\n\nIn hydrology and sewage collection and disposal, antecedent moisture is the relative wetness or dryness of a watershed or sanitary sewershed. Antecedent moisture conditions change continuously and can have a very significant effect on the flow responses in these systems during wet weather. The effect is evident in most hydrologic systems including stormwater runoff and sanitary sewers with inflow and infiltration. 
Many modeling and analysis challenges that are created by antecedent moisture conditions are evident within combined sewers and separate sanitary sewer systems. \n\nSection::::Definition.\n", "Section::::Chickasaw monetary removal.\n", "In April 2017, Representative Bagley proposed legislation which would halt most automobile inspection stickers required annually since 1961 on all vehicles in Louisiana. Bagley's bill would limit inspections to student transportation and commercial vehicles and would not impact the parishes of Ascension, East Baton Rouge, Iberville, Livingston, and West Baton Rouge, which are required under the Clean Air Act of 1963 to conduct specialized inspections for vehicle emissions. Displayed on windshields, the stickers are considered proof that the inspection was conducted.\n", "BULLET::::- ASTM International Standards include a standard test for drainage plane systems in EIFS Systems under code ASTM E2273 and the International Code Council features a more general \"Evaluation guideline for a moisture drainage system used with exterior wall veneers\" under code ICC-ES EG356.\n\nBULLET::::- Inappropriate rain screen materials may also introduce a risk of fast-spreading external fires.\n", "Due to its general water repellent properties, the original Rain-X formulation is used in a wide variety of consumer, commercial and industrial settings. The primary use of Rain-X is for automotive applications. Commercially sold \"Original Glass Treatment\" is the original and most well known Rain-X branded product. It is a hydrophobic silicone polymer that forces water to bead and roll off of the car, often without needing wipers. It is sold in 3.5 or 7 oz. bottles, or as wipes or towelettes.\n", "Almost everything on Earth contains elements of water; oil and fuel are no exceptions. While very small amounts exist in fuels to start with, stored fuel will become a breeding ground for the microbial bacteria and over time, the levels of damage change from dissolved to emulsified and finally free.\n\nBULLET::::- Dissolved - Dissolved water in oil is the presence of water that is unnoticeable to the eye. The water continues to develop until the saturation point, where water visibility begins.\n", "Rain-X\n\nRain-X is a synthetic hydrophobic surface-applied product that causes water to bead, most commonly used on glass automobile surfaces. It was introduced in 1972 by Howard G. Ohlhausen of the Unelko Corporation.\n\nThe brand has since been extended to a range of automotive and surface care products, including wiper blades. It is currently owned by ITW Global Brands.\n\nSection::::Products.\n\nThe Rain-X brand includes seven categories of products: wiper blades, glass and windshield treatments, plastic cleaners, windshield washer fluid, car washes, car wax, and bug and tar washes.\n", "After the main friction zone, some car washes have a dedicated care zone. Prior to entering the care zone, the car is rinsed with fresh water. This is immediately followed by a series of extra services. In many car washes, the first of these services is a polish wax. Polish waxes fill in microscopic imperfections in the vehicle's clear coat, thus improving shine. After the polish wax application is typically a retractable mitter or top brush and, in some cases, side brushes or wrap-around brushes. Next is a protectant, which creates a thin protective film over a vehicle's surface. 
Protectants generally repel water, which assists in drying the car and aiding in the driver's ability to see through their windshield during rain. A low-end wax or clear coat protectant follows the main protectant. A drying agent is typically applied at the end of the tunnel to assist in removing water from the vehicle's surface prior to forced air drying. After the drying agent, there may be a \"spot free\" rinse of soft water that has been filtered of the salts normally present, and sent through semi-permeable membranes to produce highly purified water that will not leave spots.\n", "In 2004 the Royal Society for the Protection of Birds asked British motorists to attach a PVC film to their number plate to measure the number of insects that collided with it during car journeys. This splat-o-meter estimated one insect squashed every five miles of driving, and represents one of the few pieces of data directly relevant to the windscreen phenomenon. Although the study did not have historical data for comparison, it was reported that many participants in the study were astounded by how few insects their traps collected.\n", "The formation behavior of condensation on the mirror's surface can be registered by either optical or visual means. In both cases, a light source is directed onto the mirror and changes in the reflection of this light due to the formation of condensation can be detected by a sensor or the human eye, respectively. The exact point at which condensation begins to occur is not discernible to the unaided eye, so modern manually operated instruments use a microscope to enhance the accuracy of measurements taken using this method.\n", "Exhaled air is almost fully at equilibrium with water vapor at the body temperature. In the cold air the exhaled vapor quickly condenses, thus showing up as a fog or mist of water droplets and as condensation or frost on surfaces. Forcibly condensing these water droplets from exhaled breath is the basis of exhaled breath condensate, an evolving medical diagnostic test.\n", "The symptoms of diesel bugs are easy to find. Important things to check over and look out for are:\n\nBULLET::::- Blocked Filters\n\nBULLET::::- Fuel System Failure\n\nBULLET::::- Worn Fuel Injectors\n\nBULLET::::- Corroded Tanks\n\nBULLET::::- Engine Failure\n\nMicrobial contamination is significantly accelerated when higher biodiesel content occurs along with lower sulphur content.\n\nSection::::Water in oil.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-19508
How do old songs like Message in a Bottle or TV shows like Friends get remastered when the equipment itself dates from the time they were recorded or filmed?
The studio recording equipment, e.g. reel-to-reel tape and studio cameras, makes a much higher quality first-generation copy of the source material. It's the mixing process and delivery formats that suck. Digitize the source material and re-edit using modern equipment, and you then get something that looks and sounds better on today's consumer equipment.
[ "For the 1963 re-release of the picture and subsequent re-release of the record, instead of going back to the actual soundtrack recordings recorded in Hollywood specifically for the film and remixing for Stereo, producers took the original monaural New York session tapes and electronically synthesized a stereo signal. Thirty years later, producers finally went back to the original pre-recorded and post-recorded music stems and remixed for true stereo from sources that will lock to picture.\n", "Additionally, from an artistic point of view, original mastering involved the original artist, remastering often not. Therefore, many times remasters result in a totally changed character to the music.\n\nSection::::Remastering.:Film and television.\n", "Section::::History.:The 1960s.:Converting mono.\n\nColumbia's engineering department developed a process for emulating stereo from a mono source. They called this process \"Electronically Rechanneled for Stereo.\" In the June 16, 1962, issue of \"Billboard\" magazine (page 5), Columbia announced it would issue \"rechanneled\" versions of greatest hits compilations that had been recorded in mono, including albums by Doris Day, Frankie Laine, Percy Faith, Mitch Miller, Marty Robbins, Dave Brubeck, Miles Davis, and Johnny Mathis.\n", "Remaster\n\nRemaster (also digital remastering and digitally remastered) refers to changing the quality of the sound or of the image, or both, of previously created recordings, either audiophonic, cinematic, or videographic.\n\nSection::::Mastering.\n\nOften a pyramid of copies would be made from a single original \"master\" recording, which might itself be based on previous recordings. For example, sound effects (a door opening, punching sounds, falling down the stairs, a bell ringing, etc.) might have been added from copies of sound effect tapes similar to modern sampling to make a radio play for broadcast.\n", "ReMastered: The Lion's Share\n\nReMastered: The Lion's Share is a 2019 documentary film about the search for the original writers of the legendary song \"The Lion Sleeps Tonight\" by Rian Malan, a South African journalist.\n\nSection::::Premise.\n\nThe documentary takes a look at the controversy and legal battles following the song \"The Lion Sleeps Tonight\", which is one of the most recognisable songs in our history. The search for the song's roots in this documentary is done by the South African journalist Rian Malan.\n\nSection::::Cast.\n\nBULLET::::- Rian Malan\n\nBULLET::::- Solomon Linda\n\nBULLET::::- Delphi Linda\n\nBULLET::::- Elizabeth Linda\n\nBULLET::::- Fildah Linda\n", "The February 1968 master was remixed again for inclusion on \"Let It Be... Naked\" in 2003, at the correct speed but stripped of most of the instrumentation and digitally processed to correct tuning issues.\n\nSection::::Critical reception and legacy.\n", "An example of a restored film is the 1939 film \"The Wizard of Oz\". The color portions of \"Oz\" were shot in the three-strip Technicolor process, which in the 1930s yielded three separate black and white negatives created from red, green, and blue light filters which were used to print the cyan, magenta, and yellow portions of the final printed color film answer print. 
These three negatives were scanned individually into a computer system, where the digital images were tinted and combined using proprietary software.\n", "The process of creating a digital transfer of an analogue tape remasters the material in the digital domain, even if no equalization, compression, or other processing is done to the material. Ideally, because of their higher resolution, a CD or DVD (or even higher quality like high-resolution audio or hi-def video) release should come from the best source possible, with the most care taken during its transfer.\n", "There are alternative recordings of the song, instrumental as well as vocal, reggae to classical crossover, from artists as diverse as American country music band Alabama, Chris De Burgh, West End theatre star Michael Ball, Marcia Hines, Engelbert Humperdinck, James Last, The London Symphony Orchestra, Christian artist Russ Lee, Rhydian, John Tesh, Russell Watson, the London Community Gospel Choir, the Newsboys, The Isaacs, The Katinas, Japanese singer Kaho Shimada, Italian band Dik Dik and Michael English.\n", "In May 2016, Judge Percy Anderson ruled in a lawsuit between ABS Entertainment and CBS Radio that \"remastered\" versions of pre-1972 recordings can receive a federal copyright as a distinct work due to the amount of creative effort expressed in the process.\n\nSection::::Copyright limitations, exceptions, and defenses.\n\nUnited States copyright law includes numerous defenses, exceptions, and limitations. Some of the most important include:\n", "Some of the recordings were repressed from the original metal parts, which the production located whilst researching the films. Peter Henderson explained “in some cases we were lucky enough to get some metal parts – that’s the originals where they were cut to wax and the metal was put into the grooves and the discs were printed from those back in the ‘20s. Some of those still exist – Sony had some of them in their vaults – [but it only amounted to] 15-20 discs out of the whole.”\n\nSection::::Design.\n", "With digital recording, masters could be created and duplicated without incurring the usual generational loss. As CDs were a digital format, digital masters created from original analog recordings became a necessity.\n\nSection::::Remastering.\n\nRemastering is the process of making a new master for an album, film, or any other creation. It tends to refer to the process of porting a recording from an analogue medium to a digital one, but this is not always the case.\n", "Problematically, several different levels of masters often exist for any one audio release. As an example, examine the way a typical music album from the 1960s was created. Musicians and vocalists were recorded on multi-track tape. This tape was mixed to create a stereo or mono master. A further master tape would likely be created from this original master recording consisting of equalization and other adjustments and improvements to the audio to make it sound better on record players for example.\n", "New sound restoration techniques developed for the American Epic film production were utilized to restore the songs on the albums. 
The 78rpm disc transfers were made by sound engineer Nicholas Bergh using reverse engineering techniques, garnered from working with the original 1920s recording equipment on \"The American Epic Sessions\", along with meticulous sound restoration undertaken by Peter Henderson and Joel Tefteller to reveal greater fidelity, presence, and clarity to these 1920s and 30s recordings than had ever been heard before. Nicholas Bergh commented, “the recordings in this set are special since they utilize the earliest and simplest type of electric recording equipment used for commercial studio work. As a result, they have an unrivaled immediacy to the sound.”\n", "Section::::About the box set.\n\n\"1970: The Complete Fun House Sessions\" was compiled from all thirteen reels of multi-track reel-to-reel tape that held every note and snippet of studio dialogue. Twelve reels of tape had been used during the sessions; the thirteenth reel was the one that held the takes that would be used for the album. In 1999, Rhino house engineers Bill Inglot and Dan Hersch mixed down every tape from end to end, placing the master takes that had been used for the master reel of the album back in their rightful position for the boxed set. \n", "In 1981 MFSL produced a box set of recordings by The Beatles. The box set comprised all 12 original British versions of their albums, mastered from the original Abbey Road Studios master tapes, plus \"Magical Mystery Tour\" (1967) which was sourced from US tape copies prepared by Capitol Records. An album-sized booklet displaying the original album covers was also included. This project was the first and only time The Beatles master tapes ever left Abbey Road Studios.\n", "American copyright laws about sound recordings are uniquely restrictive compared to American copyright laws for other formats and international copyright laws about sound recordings. In addition to preventing access, existing laws sometimes prohibit the preservation of deteriorating carrier objects until the object has audibly degraded.\n", "More recently, Moore has been noted as a Master at re-mastering analog media for the digital age. When a long-lost tape of Joni Mitchell's was found, her Amchitka benefit concert for the birth of Greenpeace, it was brought to Moore who \"painstakingly restored each minute of the 40-year-old tapes.\" \n", "Most of the albums were mastered from the original, first-generation master tapes. Only two albums were not: \"The Times They Are A-Changin'\" and \"Highway 61 Revisited\". The original master tape for the former could not be found so a new master was mixed from the original three-track tape, using the original vinyl pressing as a guide. \"Highway 61 Revisited\" was mastered from a second-generation overseas copy of the mono mix.\n", "They also started a cassette/CD reissue series called STACKS of 78s that contained recordings from their collection of 78's. Most of these had been out of print recordings that had been unavailable for many years. They made no effort at sound restoration so some of the releases had a lot of scratch and surface noise.\n", "During production or earlier parts of post-production, sound editors, sound designers, sound engineers, production sound mixers and/or music editors assemble the tracks that become raw materials for the re-recording mixer to work with. Those tracks in turn originate with sounds created by professional musicians, singers, actors, or foley artists. 
\n", "Section::::Hardware.\n", "Other classics mastered by Ray Staff include \"Physical Graffiti\" and \"Presence\" by Led Zeppelin, \"Crime of the Century\" by Supertramp, \"It's Only Rock 'n Roll\" by The Rolling Stones and \"Hemispheres\" by Rush.\n\nSection::::Recent years.\n", "As in the case of the non-musical films, Rhino Records, which obtained the rights to the MGM soundtracks (owned by Turner Entertainment) in the 1990s, issued longer versions of their movie musical albums, containing virtually all of the songs and music. Rhino's license expired at the end of 2011 and the albums Rhino issued are now out of print. Warner Bros. now owns the MGM soundtracks first issued by MGM Records and Warner Bros' WaterTower Music unit now has the rights to release the MGM soundtracks.\n\nSection::::History.:Record manufacturing.\n", "In 2008, soundtracks of the missing episode \"A Stripe for Frazer\" and the 1968 Christmas Special \"Present Arms\" were recovered. The soundtrack of \"A Stripe for Frazer\" has been mixed with animation to replace the missing images. The Audio soundtrack for the 1970 Christmas Special \"Cornish Floral Dance\" has also been recovered.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-07480
How come what we ingest (eat or drink) doesn’t necessarily have to be sterile, but any time we have some sort of surgery or procedure done, everything has to be completely sterile?
Your digestive system isn't really "inside" your body in the same way your organs are. It is fully enclosed, end to end, and designed to deal with all the food-like objects the human animal is going to cram into itself. Your gut does a fine and dandy job of handling lots of pathogens and potential parasites by just generally being a pretty unfriendly environment for life. Now obviously some nasties have evolved to survive through that and will make you pretty sick if you eat them. Your gooey innards though? No such defenses. They are warm and wet and a perfect place for even fairly weak bacteria and other nasties to grow and go hog wild.
[ "In general, surgical instruments and medications that enter an already aseptic part of the body (such as the bloodstream, or penetrating the skin) must be sterile. Examples of such instruments include scalpels, hypodermic needles, and artificial pacemakers. This is also essential in the manufacture of parenteral pharmaceuticals.\n\nPreparation of injectable medications and intravenous solutions for fluid replacement therapy requires not only sterility but also well-designed containers to prevent entry of adventitious agents after initial product sterilization.\n", "Equipment used in aseptic processing of food and beverages must be sterilized before processing and remain sterile during processing. When designing aseptic processing equipment there are six basic requirements to consider: the equipment must have the capability of being cleaned thoroughly, it must be able to be sterilized with steam, chemicals, or high-temperature water, sterilization media should be able to contact all surfaces of the equipment, meaning the equipment does not contain any cracks, crevices or dead spots, the equipment must be able to be kept in a sterile state, it must have the ability to be used continuously, and lastly, the equipment must comply with regulations.\n", "Semi-critical items are items that are expected to have contact with what an intact mucous membrane, and normally consists of endoscopes like those used in colonoscopies.\n\nThese items require high level disinfectants such as glutaraldehyde solution, peracetic acid, or hydrogen peroxide plasma.\n\nCritical items, which include any instrument which will be introduced into a patient blood stream or in a normally sterile area of the body, require sterilization. \n\nSection::::Sterilization methods in use.\n", "Section::::Human health.:Hygiene.\n\nHygiene is a set of practices to avoid infection or food spoilage by eliminating microorganisms from the surroundings. As microorganisms, in particular bacteria, are found virtually everywhere, harmful microorganisms may be reduced to acceptable levels rather than actually eliminated. In food preparation, microorganisms are reduced by preservation methods such as cooking, cleanliness of utensils, short storage periods, or by low temperatures. If complete sterility is needed, as with surgical equipment, an autoclave is used to kill microorganisms with heat and pressure.\n\nSection::::See also.\n\nBULLET::::- Catalogue of Life\n\nBULLET::::- Microbiological culture\n\nBULLET::::- Impedance microbiology\n\nBULLET::::- Microbial biogeography\n", "Section::::FDA inspection and regulation for aseptic processing.\n\nInspections of aseptic processing is one of the most complex inspection of food manufacturing operations. Process authorities are required to establish a process that ensures commercial sterility for the following:\n\nBULLET::::1. The product\n\nBULLET::::2. All equipment including the hold tube and any equipment downstream from the holding tube such as the filler\n\nBULLET::::3. The packaging equipment\n\nBULLET::::4. 
The packaging material.\n\nDocumentation of production operations must be maintained by the facility, showing an achievement of commercially sterile conditions in all areas of the facility.\n", "In order to isolate organisms in materials with high microbial content, such as sewage, soil or stool, serial dilutions will increase the chance of separating a mixture.\n\nIn a liquid medium with few or no expected organisms, from an area that is normally sterile (such as CSF, blood inside the circulatory system) centrifugation, decanting the supernatant and using only the sediment will increase the chance to grow and isolate bacteria or the usually cell-associated viruses.\n", "In the US, one of the cheapest and easiest methods is steam sterilization, where instrumentation trays and packages are placed in a chamber which is then filled with steam, killing all microorganisms.\n", "In order to isolate a microbe from a natural, mixed population of living microbes, as present in the environment, for example in water or soil flora, or from living beings with skin flora, oral flora or gut flora, one has to separate it from the mix.\n\nTraditionally microbes have been cultured in order to identify the microbe(s) of interest based on its growth characteristics. \n\nDepending on the expected density and viability of microbes present in a liquid sample, physical methods to increase the gradient as for example serial dilution or centrifugation may be chosen.\n", "There are times where a patient may require a blood culture collection. The culture will determine if the patient has pathogens in the blood. Normally blood is sterile. When drawing blood for cultures, use a sterile solution such as Betadine rather than alcohol. This is done using sterile gloves, while not wiping away the surgical solution, touching the puncture site, or in any way compromising the sterile process. It is vital that the procedure is performed in as sterile a manner as possible as the persistent presence of skin commensals in blood cultures could indicate endocarditis but they are most often found as contaminants. \n", "The animals can be born through a caesarian section, then special care taken so the newborn does not acquire infections, such as use of sterile isolation units with a positive pressure differential to keep all outside air and pathogens from entering. Everything that needs to be inserted into the isolator, such as food, water and equipment needs to be completely sterilized and disinfected, and inserted through an airlock that can be disinfected before opening from the inside.\n", "Processing authorities responsible for aseptic systems must be aware of certain factors unique to aseptic processing and packaging operations, therefore specific knowledge in this area is essential. Neither the FDA nor other regulatory agency maintains a list of recognized processing authorities, however, certain organizations are widely recognized within government agencies and the industry as having the experience and expertise. The FDA regulations rely upon aseptic processing and packaging authorities to establish parameters for sterilization of product, packages, and equipment so that commercial sterility of the end product is assured.\n", "Serial passage can either be performed in vitro or in vivo. In the in vitro method, a virus or a strain of bacteria will be isolated and allowed to grow for a period of time. 
After the sample has grown for some time, part of it will be transferred to a new environment and allowed to grow for the same period of time. This process will be repeated as many times as desired.\n", "The FDA does exert authority over the types of aseptic processing and packaging systems that can be utilized to produce foods for distribution in U.S. commerce by reviewing and either accepting or rejecting process filing forms from individual processing firms. The FDA may request sufficient technical information from the processor to evaluate adequacy of the equipment and the procedures used to produce a commercially sterile product. Until the FDA finds no further objections to a process filing, the company is prevented from distributing product produced on that system in interstate commerce.\n", "If irradiation were to become common in the food handling process there would be a reduction of the prevalence of foodborne illness and potentially the eradication of specific pathogens. However, multiple studies suggest that an increased rate of pathogen growth may occur when irradiated food is cross-contaminated with a pathogen, as the competing spoilage organisms are no longer present. This being said, cross contamination itself becomes less prevalent with an increase in usage of irradiated foods.\n", "The English microbiologist Professor Harry Smith and his colleagues in the mid-1950s found that sterile filtrates of serum from animals infected with \"Bacillus anthracis\" were lethal for other animals, whereas extracts of culture fluid from the same organism grown \"in vitro\" were not. This discovery of anthrax toxin through the use of \"in vivo\" experiments had a major impact on studies of the pathogenesis of infectious disease.\n", "Section::::In medicine.\n", "If the pathogen is cultured in a lab, it can grow on Miller and Schroth media, can use sucrose to make reducing sugars, and can use either lactose, methyl alpha-glucoside, inulin or raffinose to make acids. It is also capable of surviving in culture medium sodium levels of up to 7–9%, and in temperatures as high as 39 °C.\n\nSection::::Management.\n", "A medical autoclave is a device that uses steam to sterilize equipment and other objects. This means that all bacteria, viruses, fungi, and spores are inactivated. However, prions, such as those associated with Creutzfeldt–Jakob disease, and some toxins released by certain bacteria, such as Cereulide, may not be destroyed by autoclaving at the typical 134 °C for three minutes or 121 °C for 15 minutes. Although a wide range of archaea species, including \"Geogemma barosii\", can survive and even reproduce at temperatures above 121 °C, none of them are known to be infectious or otherwise pose a health risk to humans; in fact, their biochemistry is so different from our own and their multiplication rate is so slow that microbiologists need not worry about them.\n", "It is important first to homogenize the milk, heating it in a water bath at 40 °C, since somatic cells float to the surface along with the fat. The laboratory apparatus must be clean but not necessarily sterile, since the method is based on cell count and asepsis is not accurate. If detailed microbiological analyses are later to be made on the same sample, then it must be obtained and handled with sterile material.\n", "Sterilization is the process of destroying all living organisms on an item and is the main task of most sterile services departments. 
Items to be sterilized must first be cleaned in a separate decontamination room and inspected for effectiveness, cleanliness and damage. There are multiple methods of sterilization, and which one is used is dependent on many factors including: operational cost, potential hazards to workers, efficacy, time, and composition of the materials being sterilized.\n", "In the United States of America, microbial food cultures are regulated under the Food, Drug and Cosmetic Act. Section 409 of the 1958 Food Additives Amendment of the Food, Drug and Cosmetic Act, exempts from the definition of food additives substances generally recognized by experts as safe (GRAS) under conditions of their intended use. These substances do not require premarket approval by the US Food and Drug Administration.\n\nBecause there are various ways to obtain GRAS status for microbial food cultures, there is no exhaustive list of microbial food cultures having GRAS status in the US.\n", "BULLET::::- The ability of the bacteria to assemble an intact needle complex. NCs can be isolated from manipulated bacteria and examined microscopically. Minor changes, however, cannot always be detected by microscopy.\n\nBULLET::::- The ability of bacteria to infect live animals or plants. Even if manipulated bacteria are shown \"in vitro\" to be able to infect host cells, their ability to sustain an infection in a live organism cannot be taken for granted.\n", "\"C. parvum\" oocysts are very difficult to detect; their small size means they are difficult to detect in fecal samples. A fecal ELISA could detect the presence of the parasite. A serological ELISA is unable to distinguish between past and present infections.\n", "Cell lines and microorganisms cannot be held in culture indefinitely due to the gradual rise in toxic metabolites, use of nutrients and increase in cell number due to growth. Subculture is therefore used to produce a new culture with a lower density of cells than the originating culture, fresh nutrients and no toxic metabolites allowing continued growth of the cells without risk of cell death. Subculture is important for both proliferating (e.g. a microorganism like \"E. coli\") and non-proliferating (e.g. terminally differentiated white blood cells) cells.\n", "Culture techniques are designed to promote the growth and identify particular bacteria, while restricting the growth of the other bacteria in the sample. Often these techniques are designed for specific specimens; for example, a sputum sample will be treated to identify organisms that cause pneumonia, while stool specimens are cultured on selective media to identify organisms that cause diarrhoea, while preventing growth of non-pathogenic bacteria. Specimens that are normally sterile, such as blood, urine or spinal fluid, are cultured under conditions designed to grow all possible organisms. Once a pathogenic organism has been isolated, it can be further characterised by its morphology, growth patterns (such as aerobic or anaerobic growth), patterns of hemolysis, and staining.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-15645
Why aren't there term limits on Justices for the Supreme Court of the US like every other level of government?
The idea was to isolate judges from political fights so as to ensure the court's legitimacy. The writers of the constitution did not consider how important the court would become, and thus how politically charged appointments could be. Many other countries have long, yet predictable, terms for judges that allow them to be much more isolated from political cycles.
[ "Federal judges have different terms in office. Article I judges; such as those that sit on the United States bankruptcy courts, United States Tax Court, and United States Court of Appeals for the Armed Forces, and certain other federal courts and other forms of adjudicative bodies serve limited terms: The Court of Appeals for the Armed Forces for 15 years, bankruptcy courts for 14. However, the majority of the federal judiciary, Article III judges (such as those of the Supreme Court, courts of appeal, and federal district courts), serve for life.\n\nSection::::United States.:State and territories.\n", "Some state lawmakers have officially expressed to Congress a desire for a federal constitutional amendment to limit terms of Supreme Court justices as well as of judges of federal courts below the Supreme Court level. While there might be others, below are three known examples:\n\nBULLET::::1. In 1957, the Alabama Legislature adopted Senate Joint Resolution No. 47 on the subject (appearing in the U.S. Senate's portion of the \"Congressional Record\" on July 3, 1957, at page 10863, with full text provided);\n", "Most states base their legal system on English common law (with substantial indigenous changes and incorporation of certain civil law innovations), with the notable exception of Louisiana, a former French colony, which draws large parts of its legal system from French civil law.\n\nOnly a few states choose to have the judges on the state's courts serve for life terms. In most of the states the judges, including the justices of the highest court in the state, are either elected or appointed for terms of a limited number of years, and are usually eligible for re-election or reappointment.\n", "As of 2013, term limits at the federal level are restricted to the executive branch and some agencies. Judicial appointments at the federal level are made for life, and are not subject to election or to term limits. The U.S. Congress remains (since the Thornton decision of 1995) without electoral limits.\n\nSection::::Federal term limits.:President.\n", "List of United States federal judges by longevity of service\n\nThis is a list of Article III United States federal judges by longevity of service. The judges on the lists below were presidential appointees who have been confirmed by the Senate, and who served on the federal bench for over 40 years. It includes neither Article I judges (e.g., U.S. Tax Court) nor Article IV judges.\n\nSection::::United States Supreme Court.\n", "New York Supreme Court justices are elected to 14-year terms. A Supreme Court Justice's term ends, even if the 14-year term has not yet expired, at the end of the calendar year in which he or she reaches the age of 70. However, an elected Supreme Court Justice may obtain certification to continue in office, without having to be re-elected, for three two-year periods, until final retirement at the end of the year in which the Justice turns 76. These additional six years of service are available only for elected Supreme Court Justices, not for \"Acting\" Justices whose election or appointments were to lower courts.\n", "Section::::Federal term limits.:Congress.\n\nReformers during the early 1990s used the initiative and referendum to put congressional term limits on the ballot in 24 states. Voters in eight of these states approved the congressional term limits by an average electoral margin of two to one. It was an open question whether states had the constitutional authority to enact these limits. In May 1995, the U.S. 
Supreme Court ruled 5–4 in \"U.S. Term Limits, Inc. v. Thornton\", that states cannot impose term limits upon their federal Representatives or Senators.\n", "Legal scholars have discussed whether or not to impose term limits on the Supreme Court of the United States. Currently, Supreme Court Justices are appointed for life \"during good behavior\". A sentiment has developed, among certain scholars, that the Supreme Court may not be accountable in a way that is most in line with the spirit of checks and balances. Equally, scholars have argued that life tenure has taken on a new meaning in a modern context. Changes in medical care have markedly raised life expectancy and therefore have allowed Justices to serve for longer than ever before. Steven G. Calabresi and James Lindgren, professors of law at Northwestern University, argued that because vacancies in the court are occurring with less frequency and justices served on average 26.1 years between 1971 and 2006, the \"efficacy of the democratic check that the appointment process provides on the Court's membership\" is reduced. There have been several similar proposals to implement term limits for the nation's highest court, including Professor of Law at Duke University Paul Carrington's \"Supreme Court Renewal Act of 2005\".\n", "BULLET::::- 9 Judges of the United States Court of International Trade (political balance required; life tenure)\n\nBULLET::::- 678 Judges of the United States district courts (Most are life tenure; in total there are 663 permanent judgeships, 11 temporary judgeships, and four territorial court judgeships. In the districts with the 11 temporary judgeships, the seat lapses with the departure of a judge from that district at some particular time specified in statute unless Congress enacts legislation to extend the temporary judgeship or convert it to a permanent judgeship.)\n", "The regular members were allowed to be reappointed without limit. The Secretary of Justice serves at the pleasure of the president, while the representative of Congress serves until they are recalled by their chamber, or until the term of Congress that named them expires. Finally, the Chief Justice serves until mandatory retirement at the age of 70. The regular members' terms start at July 9.\n", "Governors of 36 states and four territories are subject to various term limits, while the governors of 14 states, Puerto Rico, and the Mayor of Washington, D.C., may serve an unlimited number of terms. Each state's gubernatorial term limits are prescribed by its state constitution, with the exception of Wyoming, whose limits are found in its statutes. Territorial term limits are prescribed by its constitution in the Northern Mariana Islands, the Organic Acts in Guam and the U.S. Virgin Islands, and by statute in American Samoa.\n", "Once the court was abolished, the four remaining judges of the court served out their lifetime appointment as at-large appellate judges. (The fifth judge of the court, Robert Archbald, had been impeached and removed from office.)\n\nSection::::Judges.\n", "Judges of the United States courts, for example, serve for life, but a system of incentives to retire at full pay after a given age and disqualification from leadership has been instituted. 
The International Olympic Committee instituted a mandatory retirement age in 1965, and Pope Paul VI removed the right of cardinals to vote for a new pope once they reached the age of 80, which was to limit the number of cardinals that would vote for the new Pope, due to the proliferation of cardinals that was occurring at the time and is continuing to occur.\n", "Thirty-two states have mandatory retirement ages for justices, nearly all from 70 to 75 years old. It varies whether justices must retire on their birthday, at the end of that year, or the end of that term in which they reach the retirement age. Rhode Island is the only state with neither terms for reelection or retention nor a retirement age.\n\nSection::::Selection.\n", "The Constitution of 1902 staggered the five seats by requiring that at the next election of judges, one judge would be elected to a term of 4 years, one to a term of 6 years, one to a term of 8 years, one to a term of 10 years, and one to a term of 12 years. It perpetuated this staggering by providing that any new judge elected to fill a vacancy would serve only the unexpired portion of his predecessor's term.\n", "The Court is composed of the chief justice and six justices, who all serve six-year staggered terms. The justices elect the chief justice from amongst themselves. Justices must be an \"elector\" (a qualified, registered voter) of the state and must have been a member in good standing of the Florida Bar for at least ten years. The Court must have at least one justice who resided in each of Florida's five lower appellate districts on the date of their appointment. They must retire on their 70th birthday unless it falls within the second half of their six-year terms. In that event, they can remain in office until the end of the full term. Amendment 6 passed in 2018 raised the mandatory retirement age for justices to 75, effective July 1, 2019.\n", "Under the federal Judges Act, federally appointed judges (such as those on the Manitoba Court of Appeal) may, after being in judicial office for at least 15 years and whose combined age and number of years of judicial service is not less than 80 or after the age of 70 years and at least 10 years judicial service, elect to give up their regular judicial duties and hold office as a supernumerary judge.\n\nSupernumerary\n\nSection::::Trivia.\n", "List of Chief Justices of Australia by time in office\n\nThis is a list of Australian Chief Justices by time in office.\n\nSection::::Milestone illustrations.\n\nThe time frames involved in the length of service by previous holders of the office can be illustrated by comparing them to the length of time it would take for incumbent Chief Justice Robert French to reach them. However, due to the mandatory retirement age of 70 for Justices, it will be constitutionally impossible for French to reach any of these milestones.\n\nSection::::See also.\n", "Among other activities, USTL supports statewide ballot initiatives to impose term limits. In the early 1990s, USTL organized grassroots campaigns that placed term limits on the congressional delegations of 23 states. These were overturned in 1995 by the Supreme Court of the United States in a 5-4 decision in \"U.S. Term Limits v. Thornton.\"\n", "On occasion, a judge will leave office at the end of a term, in which case a general election determines their replacement. 
If the Supreme Court needs an additional judge on a temporary basis due to illness, an unfilled position, or a justice is disqualified from sitting on a case due to a conflict of interest, the court can appoint a senior judge to serve as a judge pro tempore. Senior judges are all former, qualified judges (a minimum of 12 years on the bench) that have retired from a state court. Only former Supreme Court justices, elected Oregon circuit court judges, or elected Oregon Court of Appeals judges can be assigned to temporary service on the Supreme Court.\n", "Many of the proposals center around a term limit for Justices that would be 18 years (Larry Sabato, Professor of Political Science at University of Virginia, suggested between 15 and 18 years). The staggered term limits of 18 years proposed by and would allow for a new appointment to the Court every two years, which in effect would allow every president at least two appointments. Carrington has argued that such a measure would not require a constitutional amendment as the \"Constitution doesn’t even mention life tenure; it merely requires that justices serve during ‘good behaviour’ \". The idea was not without support among Judges, as John Roberts supported term limits before he was appointed to the Supreme Court as Chief Justice. Calabresi, Lindgren, and Carrington have also proposed that when justices have served out their proposed 18-year term they should be able to sit on other Federal Courts until retirement, death, or removal.\n", "With respect to Judicial service, the tendency is toward higher office. Twelve members of the list served on the Supreme Court of the United States — three as chief justice. Of the other thirty, eight served on one of the federal courts of appeals (called federal circuit courts pre-1912), three went from a district court to a circuit court, and twenty-four garnered their judicial branch service in district court judgeships alone. Two of the Supreme Court Justices on the list had previously served on federal circuit courts. For thirty-three of the members of the list, their judicial appointment was also their final point of service. One Supreme Court justice, two Circuit Court judges and seven District Court judges resigned from the bench to take posts in the executive branch and one Circuit Court judge and four District Court judges resigned from the bench to join the United States Senate.\n", "Justices may be removed by one of two methods. They can be disciplined upon the recommendation of the Judicial Qualifications Commission, at which time the Court may remove a justice or impose a lesser penalty such as a fine or reprimand; and justices may be impeached by a two-thirds vote of the Florida House of Representatives and convicted by a two-thirds vote of the senate.\n\nSection::::Notable cases and precedent.\n\nSection::::Notable cases and precedent.:State and local cases.\n", "If all members currently sitting on the Court have already served as president, the rotation starts all over again; however, due to the existence of a compulsory retirement age, and the consequent appointment of new ministers to fill those vacancies, it is very rare for the cycle to be completed and restarted, and some ministers are forced to retire before their turn in the presidency arrives, as was expected to happen with Teori Zavascki.\n", "These rules have applied since October 1, 1982. 
The office of chief judge was created in 1948 and until August 6, 1959 was filled by the longest-serving judge who had not elected to retire on what has since 1958 been known as senior status or declined to serve as chief judge. From then until 1982 it was filled by the senior such judge who had not turned 70.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-13739
Why is it so easy to distinguish a video from a photo even when the video is a still image of a still object?
Because a video is **not** a still image of a still object. Most videos have some blur, which is fine when they are running at full speed because it helps preserve the illusion of motion. Blur is essentially the mixture of images over a short period of time, so you really aren't looking at a still image.
[ "When a camera creates an image, that image does not represent a single instant of time. Because of technological constraints or artistic requirements, the image may represent the scene over a period of time. Most often this exposure time is brief enough that the image captured by the camera appears to capture an instantaneous moment, but this is not always so, and a fast moving object or a longer exposure time may result in blurring artifacts which make this apparent. As objects in a scene move, an image of that scene must represent an integration of all positions of those objects, as well as the camera's viewpoint, over the period of exposure determined by the shutter speed. In such an image, any object moving with respect to the camera will look blurred or smeared along the direction of relative motion. This smearing may occur on an object that is moving or on a static background if the camera is moving. In a film or television image, this looks natural because the human eye behaves in much the same way.\n", "BULLET::::- The latter (e.g., Google Glass, GoPro), are commonly mounted on the head, and capture conventional video (around 35fps) that allows to capture fine temporal details of interactions. Consequently, they offer potential for in-depth analysis of daily or special activities. However, since the camera is moving with the wearer head, it becomes more difficult to estimate the global motion of the wearer and in the case of abrupt movements, the images can result blurred.\n\nIn both cases, since the camera is worn in a naturalistic setting, visual data present a huge variability in terms of illumination conditions and object appearance.\n", "When motion prediction is used, as in MPEG-1, MPEG-2 or MPEG-4, compression artifacts tend to remain on several generations of decompressed frames, and move with the optic flow of the image, leading to a peculiar effect, part way between a painting effect and \"grime\" that moves with objects in the scene.\n", "Section::::Applications.:McGurk effect.\n", "A movie camera or a video camera operates similarly to a still camera, except it records a series of static images in rapid succession, commonly at a rate of 24 frames per second. When the images are combined and displayed in order, the illusion of motion is achieved.\n", "Online services, including YouTube, are also beginning to provide \"'video stabilization\" as a post-processing step after content is uploaded. This has the disadvantage of not having access to the realtime gyroscopic data, but the advantage of more computing power and the ability to analyze images both before and after a particular frame.\n\nSection::::Techniques.:Orthogonal transfer CCD.\n", "Section::::Psychology of headroom.\n\nPerceptual psychological studies have been carried out with experimenters using a white dot placed in various positions within a frame to demonstrate that observers attribute potential motion to a static object within a frame, relative to its position. The unmoving object is described as 'pulling' toward the center or toward an edge or corner. Proper headroom is achieved when the object is no longer seen to be slipping out of the frame—when its potential for motion is seen to be neutral in all directions.\n", "BULLET::::- UltraPixel – HTC (Image Stabilization is only available for the 2013 HTC One & 2016 HTC 10 with UltraPixel. 
It is not available for the HTC One (M8) or HTC Butterfly S, which also have UltraPixel)\n\nMost high-end smartphones as of late 2014 use optical image stabilization for photos and videos.\n\nSection::::Techniques.:Optical image stabilization.:Lens-based.\n", "Multiview video contains a large amount of inter-view statistical dependencies, since all cameras capture the same scene from different viewpoints. Therefore, combined temporal and inter-view prediction is important for efficient MVC encoding. A frame from a certain camera can be predicted not only from temporally related frames from the same camera, but also from the frames of neighboring cameras. These interdependencies can be used for efficient prediction.\n", "Later, Oostveen et al. introduced the concept of a \"fingerprint\", or \"hash function\", that creates a unique signature of the video based on its contents. This fingerprint is based on the length of the video and the brightness, as determined by splitting it into a grid. The fingerprint cannot be used to recreate the original video because it describes only certain features of its respective video.\n\nSome time ago, B. Coskun et al. presented two robust algorithms based on discrete cosine transform.\n", "More complex algorithms are necessary to detect motion when the camera itself is moving, or when the motion of a specific object must be detected in a field containing other movement which can be ignored. An example might be a painting surrounded by visitors in an art gallery. For the case of a moving camera, models based on optical flow are used to distinguish between apparent background motion caused by the camera movement and that of independent objects moving in the scene.\n\nSection::::Devices.\n", "For example, in ĀTMAN the subject is a human being disguised with Japanese Noh-mask whereas in SPACY the subject is photograph itself. This, together with the last shot where the camera and the filmmaker (self-portrait) come into the same frame indicates the self-reflexive characteristic of his work. Compared to the two-dimensional \"flip-book\" style illusion of DUTCH PHOTOS, SPACY has vast spaciousness and dramatic sensation of movement and when it's accompanied by the sound, the film reveals the side of a fantastic film.\n", "Section::::MPEG.\n\nIn MPEG, images are predicted from previous frames (P frames) or bidirectionally from previous and future frames (B frames). B frames are more complex because the image sequence must be transmitted and stored out of order so that the future frame is available to generate the B frames.\n\nAfter predicting frames using motion compensation, the coder finds the residual, which is then compressed and transmitted.\n\nSection::::Global motion compensation.\n\nIn global motion compensation, the motion model basically reflects camera motions such as:\n\nBULLET::::- Dolly - moving the camera forward or backward\n\nBULLET::::- Track - moving the camera left or right\n", "A special case occurs when the camera is moved during exposure while keeping it pointed at a moving target, so as to hold its projection on the recording medium steady. The stationary environment (usually mainly background, but possibly also some foreground) then is subjected to ICM and appears streaked in the final image.\n", "Generally, it compensates for pan and tilt (angular movement, equivalent to yaw and pitch) of the imaging device, though electronic image stabilization can also compensate for rotation. 
It is used in image-stabilized binoculars, still and video cameras, astronomical telescopes, and also smartphones, mainly the high-end. With still cameras, camera shake is a particular problem at slow shutter speeds or with long focal length (telephoto or zoom) lenses. With video cameras, camera shake causes visible frame-to-frame jitter in the recorded video. In astronomy, the problem of lens-shake is amplified by variation in the atmosphere, which changes the apparent positions of objects over time.\n", "Section::::Relation to motion blur.\n\nIn a sense, ICM is the same effect as (intentional) single-exposition motion blur: in the former the camera moves during exposure, in the second the target moves, but they have in common that there is relative motion between camera and target, resulting in streaking in the image.\n", "This technique was proposed by L.Chen and F. Stentiford. A measurement of dissimilarity is made by combining the two aforementioned algorithms, Global temporal descriptors and Global ordinal measurement descriptors, in time and space.\n\nSection::::Algorithms.:Local Descriptors.\n\nSection::::Algorithms.:Local Descriptors.:AJ.\n\nDescribed by A. Joly et al., this algorithm is an improvement of Harris' Interest Points detector. This technique suggests that in many videos a significant number of frames are almost identical, so it is more efficient to test not every frame but just those depicting a significant amount of motion.\n\nSection::::Algorithms.:Local Descriptors.:ViCopT.\n", "Considering these principles, the audiovisual montage may refer to two aspects: the intrinsic one, which alludes to the aesthetic connection of a shot with the next one, thus obtaining different forms, such as the analogy, the contrast and others. The other aspect is the extrinsic montage, which is the method of association point between a shot and the next one and that may be, for example, by a cut or by the so-called dissolve effect or lap dissolve between a still image and the one after it.\n", "The Teknomo–Fernandez algorithm is also an automatic background generation algorithm. Its advantage, however, is its computational speed of only formula_1-time, depending on the resolution formula_3 of an image and its accuracy gained within a manageable number of frames. Only at least three frames from a video is needed to produce the background image assuming that for every pixel position, the background occurs in the majority of the videos. Furthermore, it can be performed for both grayscale and colored videos.\n\nSection::::Assumptions.\n\nBULLET::::- The camera is stationary.\n", "BULLET::::- The former (e.g., Narrative Clip and Microsoft SenseCam), are commonly worn on the chest, and are characterized by a very low frame rate (up to 2fpm) that allows to capture images over a long period of time without the need of recharging the battery. Consequently, they offer considerable potential for inferring knowledge about e.g. behaviour patterns, habits or lifestyle of the user. However, due the low frame-rate and the free motion of the camera, temporally adjacent images typically present abrupt appearance changes so that motion features cannot be reliably estimated.\n", "The capture process fixes the \"natural\" frame rate of the image sequence. Moving image sequence can be captured at the rate which is different from presentation rate, however this is usually only done for the sake of artistic effect, or for studying fast-pace or slow processes. 
In order to faithfully reproduce familiar movements of persons, animals, or natural processes, and to faithfully reproduce accompanying sound, the capture rate must be equal to, or at least very close to the presentation rate.\n", "Note that demosaicing is only performed for CFA sensors; it is not required for 3CCD or Foveon X3 sensors.\n\nCameras and image processing software may also perform additional processing to improve image quality, for example:\n\nBULLET::::- removal of systematic noise – bias frame subtraction and flat-field correction\n\nBULLET::::- dark frame subtraction\n\nBULLET::::- optical correction – lens distortion, vignetting, chromatic aberration and color fringing correction\n\nBULLET::::- contrast manipulation\n\nBULLET::::- increasing visual acuity by unsharp masking\n\nBULLET::::- dynamic range compression – lighten shadow regions without blowing out highlight regions\n", "Let's say we are in a situation where the features we are tracking are on the surface of a rigid object such as a building. Since we know that the real point \"xyz\" will remain in the same place in real space from one frame of the image to the next we can make the point a constant even though we do not know where it is. So:\n\nwhere the subscripts \"i\" and \"j\" refer to arbitrary frames in the shot we are analyzing. Since this is always true then we know that:\n", "This method has been accused by many of subliminal messaging. Since the subconscious mind is said to pick things up much faster than the rest of the brain, one could say that a hidden message in a still motion movie, it may allow the brain to receive the message without the human eye having time to register it.\n\nSection::::History.\n\nThe technique has been used in many forms of media. The most common form of still motion now exists as Adobe Flash or Gif animation banners on website advertisements.\n\nSection::::Examples.\n", "An alternative is given by so-called direct approaches, where geometric information (3D structure and camera motion) is directly estimated from the images, without intermediate abstraction to features or corners.\n\nThere are several approaches to structure from motion. In incremental SFM, camera poses are solved for and added one by one to the collection. In global SFM , the poses of all cameras are solved for at the same time. A somewhat intermediate approach is out-of-core SFM, where several partial reconstructions are computed that are then integrated into a global solution.\n\nSection::::Applications.\n\nSection::::Applications.:Geosciences.\n" ]
[ "Video is always a still image of a still object.", "A still image from a video is the same as a photo." ]
[ "Most video have some blur, which is essentially the mixture of images over a short period of time, rather than being a still image of a still object.", "A still image from a video has some blur, which photos do not." ]
[ "false presupposition" ]
[ "Video is always a still image of a still object.", "A still image from a video is the same as a photo." ]
[ "false presupposition", "false presupposition" ]
[ "Most video have some blur, which is essentially the mixture of images over a short period of time, rather than being a still image of a still object.", "A still image from a video has some blur, which photos do not." ]
2018-04130
Why does a song not sound good at first but after 2 or more listens it can become your favourite?
One of the reasons you find music enjoyable is that your brain likes to predict which notes are coming next, and a correct prediction feels like a reward. If a song has a confusing melody you won't get that same enjoyment from it; it may even seem unpleasant because it doesn't behave the way you expect it to. When you hear it repeatedly you become able to recall its melody from memory and anticipate what comes next, and that presumably makes it more enjoyable.
[ "Section::::Critical reception.:\"Pitchfork\" review and \"sad boy\" comment.\n", "At Metacritic, which assigns a weighted average score out of 100 to reviews from mainstream critics, the album received an average score of 73% based on 15 reviews, indicating \"generally favorable reviews\".\n", "Louder Than the Music's Jono Davies indicated \"the album for me seemed to get better the longer it went on. The tracks seem to get more creative and not as obvious in sound towards the latter part of the album. With any album people have there [sic] favourites, and I'm sure this will be no different on this album. For me the one thing I have taken from this album is that actually this debut is very strong and can only lead onto bigger and better releases.\"\n", "Section::::Critical reception.:Retrospective assessment.:Biographers' appraisal.\n", "Regarding the album's lo-fi aesthetic, Vile noted, \"[the album] has songs that maybe if you don't normally listen to that stuff, you'd think were a bit throwaway because of the recording quality.\"\n\nSection::::Release and reception.\n", "BULLET::::- Gabriella Cilmi – \"Got No Place to Go\", \"Sweet About Me\", \"Don't Wanna Go To Bed Now\", \"Save the Lies\", \"Whole Lotta Love\" (12:55 pm)\n\nBULLET::::- Kings of Leon – \"Crawl\", \"Revelry\", \"On Call\", \"Use Somebody\" (1:40 pm)\n\nBULLET::::- Paul Kelly – \"Dumb Things\", \"To Her Door\", \"God Told Me To\", \"Leaps and Bounds\", \"How to Make Gravy\", \"Meet me in the Middle of the Air\" (2:20 pm)\n\nBULLET::::- Augie March – \"Lupus\", \"Pennywhistle\", \"Brundisium\", \"There Is No Such Place\", \"This Train Will Be Taking No Passengers\", \"One Crowded Hour\" (3:05 pm)\n", "Section::::Shelved LP.\n\nMotown originally created an album to capitalize on the success of the single, but when the single failed to hit the top of the charts the album was scrapped, and the single was included rather on Diana Ross and the Supremes' \"Love Child\" LP. The shelved LP track list was intended as follows:\n\nSide One:\n\nBULLET::::1. Some Things You Never Get Used To\n\nBULLET::::2. Heaven Must Have Sent You\n\nBULLET::::3. He's My Sunny Boy\n\nBULLET::::4. Come On And See Me\n\nBULLET::::5. Can I Get A Witness\n\nBULLET::::6. You've Been So Wonderful To Me\n\nSide two:\n\nBULLET::::1. My Guy\n", "With the first album we were 16 years old when we wrote it. But now we’ve been on tour for four and a half years, we’ve experienced stuff. We’ve written about a lot of different things on this album because we’re older and we’ve experienced more of life, so we’ve got more to talk about.\n\nSection::::Artwork.\n", "Another study examined how openness to experience and frequency of listening are related and how they affect music preference. While listening to classical music excerpts, those rated high in openness tended to decrease in liking music faster during repeated listenings, as opposed to those scoring low in openness, who tended to like music more with repeated plays. This suggests novelty in music is an important quality for people high in openness to experience.\n", "BULLET::::3. \"Broadway\" – 3:22\n\nBULLET::::4. \"Salome\" – 4:07\n\nBULLET::::5. \"W. TX Teardrops\" (vocals by Murry Hammond) – 3:05\n\nBULLET::::6. \"Melt Show\" – 3:07\n\nBULLET::::7. \"Streets of Where I'm From\" – 3:15\n\nBULLET::::8. \"Big Brown Eyes\" – 4:23\n\nBULLET::::9. \"Just Like California\" – 2:33\n\nBULLET::::10. \"Curtain Calls\" – 4:18\n\nBULLET::::11. \"Niteclub\" – 3:49\n\nBULLET::::12. 
\"House That Used to Be\" – 4:08\n\nBULLET::::13. \"Four Leaf Clover\" (with Exene Cervenka) – 3:20\n\nSection::::\"Too Far to Care: Expanded Edition\".\n", "Section::::Critical reception.\n", "BULLET::::- Venessa Yeboah – background vocals\n\nBULLET::::- Ian Pitter – background vocals\n\nBULLET::::- Jodi Marr – brass arrangement, producer, background vocals\n\nBULLET::::- Lawrence Johnson – background vocals vocal arrangement\n\nBULLET::::- Rosie Langley – violin, background vocals, string quartet\n\nBULLET::::- Robin Bailey – reid background vocals\n\nBULLET::::- Seye Adelekan – guitar, background vocals\n\nBULLET::::- David Paul Campbell – brass arrangement\n\nBULLET::::- Sarah Tuke – violin\n\nBULLET::::- Sato Kotono – viola\n\nBULLET::::- Úna Palliser – viola\n\nBULLET::::- Laura Stanford – violin\n\nBULLET::::- Fiona Brice – violin\n\nBULLET::::- Jesse Murphy – violin\n\nBULLET::::- Luke Potashnick – guitar\n", "Section::::Critical reception.\n", "Peter Goddard of the \"Toronto Star\" wrote that the album is \"Sure not about making any musical breakthrough[s]\". AW for the \"Shields Gazette\" called the \" music formulated for minimal offence, watered down with minimal risk, and, as such, failing to provoke any form of emotion\", which is the \"embodiment of the colour beige; painfully safe, somewhat dull and seemingly dreamt up as an ideal kitchen decoration.\" At \"The Vancouver Sun\", Francois Marchand called the album \"exactly what it should be: A middle-of-the-road jazz-pop crooner record that sparkles in your ears like a million little fizzy sugar-pop bubbles and is just cookie-cutter enough to please the masses.\" Virgin Media's Matthew Horton told that \"the majority of To Be Loved boils down to straight retreads of admittedly great songs\", and \"that's just the way it goes. You don't come to Bublé for post-dubstep glitch experiments. You come for the polar opposite, delivered with style.\"\n", "Section::::Review and reception.\n", "A 1996 study by Hsee asked participants to evaluate two used music dictionaries, one of which contained 20,000 entries and had a torn cover, the other of which contained 10,000 entries and looked brand-new. When evaluated separately, the newer-looking book was preferred; when evaluated together, the older book was chosen.\n", "Section::::Chart performance.\n", "Section::::Critical reception.\n", "Section::::Critical reception.\n", "BULLET::::- Jim James – backing vocals on \"Night Still Comes\"\n\nBULLET::::- Bo Koster – piano, organ, Mellotron, bass organ, synths, keyboards, Melodeon, Wurlitzer, clavinet, vibraphone\n\nBULLET::::- A. C. Newman – backing vocals\n\nBULLET::::- Jon Rauhouse – trombone on \"Ragtime\"\n\nBULLET::::- Marc Ribot – piano on \"Afraid\"\n\nBULLET::::- Craig Schmaucher – chimes on \"Local Girl\"\n\nBULLET::::- Chris Schultz – sonar samples on \"Where Did I Leave That Fire?\"\n\nBULLET::::- Jacob Valenzuela – trumpets on \"Calling Cards\" and \"Ragtime\"\n\nBULLET::::- M. 
Ward – electric guitar on \"Night Still Comes\", \"Man\" and \"Local Girl\", vocals on \"Madonna of the Wasps\".\n", "Section::::Musical style and themes.\n", "BULLET::::- Lisa Lindley-Jones – choir (track 4)\n\nBULLET::::- Vicky Oag – choir (track 4)\n\nBULLET::::- BSP – string arrangements\n\nBULLET::::- Technical personnel\n\nBULLET::::- Graham Sutton – mixing (except track 12), production\n\nBULLET::::- BSP – recording (tracks 4, 9, 10), production, packaging and photos\n\nBULLET::::- Howard Bilerman – recording (except tracks 4, 9 and 10)\n\nBULLET::::- Efrim Menuck – recording (except tracks 4, 9 and 10)\n\nBULLET::::- Jan Scott Wilkinson (\"Yan\") – mixing (track 12)\n\nBULLET::::- Milos Hajicek – assistant engineering\n\nBULLET::::- Laurence Aldridge – assistant engineering\n\nBULLET::::- Luke Joyse – assistant engineering\n\nBULLET::::- Tim Young – mastering\n", "Section::::Critical reception.\n", "Louder Than The Music's Jono Davies said that \"from the moment you hear the sound of the opening track, you know what you are going to get in style and sound. There isn't necessarily anything new in style or music, but what you do get is a whole album full of great, strong, honest songs with solid sound quality and production that will be please all fans of this band. \"\n", "Section::::Critical reception.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00681
How do banks choose what interest rates to allow their customers to earn on their savings accounts?
When you hear the news talk about the Fed raising or lowering interest rates, that is just one benchmark rate it sets, but banks, mortgage lenders, and other institutions use it as a baseline to which their own rates are pegged. For example, a savings account might pay 0.5% above the baseline while a mortgage charges 3.5% above it, so when the Fed raises rates by 0.25%, those rates go up correspondingly.
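As a toy illustration of that pegging (the 300 basis-point prime spread and the "prime + 9.99%" card rate echo the passages that follow; the deposit and mortgage spreads are the made-up figures from the answer above):

```python
def pegged_rates(fed_funds):
    """Illustrative consumer rates pegged to a benchmark rate (% per year)."""
    prime = fed_funds + 3.00           # US prime runs ~300 bp over fed funds
    return {
        "savings": fed_funds + 0.50,   # hypothetical deposit spread
        "mortgage": fed_funds + 3.50,  # hypothetical lending spread
        "credit card": prime + 9.99,   # e.g. 16.99% when prime is 7.00%
    }

before = pegged_rates(2.00)
after = pegged_rates(2.25)             # the Fed hikes by 0.25%
for product in before:
    print(f"{product}: {before[product]:.2f}% -> {after[product]:.2f}%")
```

Every pegged rate moves by the same 0.25% as the benchmark, which is the whole point of the spread model.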
[ "See the (S)ensitivity section of the CAMELS rating system for a substantial list of links to documents and examiner manuals, issued by financial regulators, that cover many issues in the analysis of interest rate risk.\n\nIn addition to being subject to the CAMELS system, the largest banks are often subject to prescribed stress testing. The assessment of interest rate risk is typically informed by some type of stress testing. See: Stress test (financial), List of bank stress tests, List of systemically important banks.\n", "The interest rate that the borrowing bank pays to the lending bank to borrow the funds is negotiated between the two banks, and the weighted average of this rate across all such transactions is the federal funds effective rate.\n\nThe federal funds target rate is determined by a meeting of the members of the Federal Open Market Committee which normally occurs eight times a year about seven weeks apart. The committee may also hold additional meetings and implement target rate changes outside of its normal schedule.\n", "The target rates are generally short-term rates. The actual rate that borrowers and lenders receive on the market will depend on (perceived) credit risk, maturity and other factors. For example, a central bank might set a target rate for overnight lending of 4.5%, but rates for (equivalent risk) five-year bonds might be 5%, 4.75%, or, in cases of inverted yield curves, even below the short-term rate. Many central banks have one primary \"headline\" rate that is quoted as the \"central bank rate\". In practice, they will have other tools and rates that are used, but only one that is rigorously targeted and enforced.\n", "For example, assume a particular U.S. depository institution, in the normal course of business, issues a loan. This dispenses money and decreases the ratio of bank reserves to money loaned. If its reserve ratio drops below the legally required minimum, it must add to its reserves to remain compliant with Federal Reserve regulations. The bank can borrow the requisite funds from another bank that has a surplus in its account with the Fed. The interest rate that the borrowing bank pays to the lending bank to borrow the funds is negotiated between the two banks, and the weighted average of this rate across all such transactions is the federal funds \"effective\" rate.\n", "Section::::By country.:Eurozone.\n\nIn the eurozone the bank rate managed by the European Central Bank is called Standing Facilities, which are used to manage overnight liquidity. Qualifying counterparties can use the Standing Facilities to increase the amount of cash they have available for overnight settlements using the \"Marginal Lending Facility\". Conversely, excess funds can be deposited within the European Central Bank System and earn interest using the \"Deposit facility\".\n\nSection::::By country.:India.\n\nIn India, the Reserve Bank of India determine the bank rate, which is the standard rate at which it is\n", "Many credit card issuers give a rate that is based upon an economic indicator published by a respected journal. For example, most banks in the U.S. offer credit cards based upon the lowest U.S. prime rate as published in the \"Wall Street Journal\" on the previous business day to the start of the calendar month. For example, a rate given as 9.99% plus the prime rate will be 16.99% when the prime rate is 7.00% (such as the end of 2005). 
These rates usually also have contractual minimums and maximums to protect the consumer (or the bank, as it may be) from wild fluctuations of the prime rate. While these accounts are harder to budget for, they can theoretically be a little less expensive since the bank does not have to accept the risk of fluctuation of the market (since the prime rate follows inflation rates, which affect the profitability of loans). A fixed rate can be better for consumers who have fixed incomes or need control over their payments budgets. These rates can be varied upon depending upon the policies of different organisations.\n", "By far the most visible and obvious power of many modern central banks is to influence market interest rates; contrary to popular belief, they rarely \"set\" rates to a fixed number. Although the mechanism differs from country to country, most use a similar mechanism based on a central bank's ability to create as much fiat money as required.\n", "The Federal Reserve System implements monetary policy largely by targeting the federal funds rate. This is the interest rate that banks charge each other for overnight loans of federal funds, which are the reserves held by banks at the Fed. This rate is actually determined by the market and is not explicitly mandated by the Fed. The Fed therefore tries to align the effective federal funds rate with the targeted rate by adding or subtracting from the money supply through open market operations. The Federal Reserve System usually adjusts the federal funds rate target by 0.25% or 0.50% at a time.\n", "A bank will use the capital deposited by individuals to make loans to their clients. In return, the bank should pay individuals who have deposited their capital interest. The amount of interest payment depends on the interest rate and the amount of capital they deposited.\n\nSection::::Related terms.\n\n\"Base rate\" usually refers to the annualized rate offered on overnight deposits by the central bank or other monetary authority.\n\n\"Annual percentage rate\" (APR) and \"effective annual rate\" or \"annual equivalent rate\" (AER) are used to help consumers compare products with different payment structures on a common basis.\n", "The \"federal funds target rate\" is set by the governors of the Federal Reserve, which they enforce by open market operations and adjustments in the interest rate on reserves. The target rate is almost always what is meant by the media referring to the Federal Reserve \"changing interest rates.\" The actual federal funds rate generally lies within a range of that target rate, as the Federal Reserve cannot set an exact value through open market operations.\n", "The Federal Reserve (Fed) implements monetary policy largely by targeting the federal funds rate. This is the rate that banks charge each other for overnight loans of federal funds. Federal funds are the reserves held by banks at the Fed.\n", "Based on the banking business, there are deposit interest rate and loan interest rate.\n\nBased on the relationship between supply and demand of market interest rate, there are fixed interest rate and floating interest rate.\n\nBased on the changes between different interest rates, there are base interest rate and cash interest rate.\n\nSection::::Monetary policy.\n", "Interest rates are generally determined by the market, but government intervention - usually by a central bank - may strongly influence short-term interest rates, and is one of the main tools of monetary policy. 
The central bank offers to borrow (or lend) large quantities of money at a rate which they determine (sometimes this is money that they have created \"ex nihilo\", that is, printed) which has a major influence on supply and demand and hence on market interest rates.\n\nSection::::Market interest rates.:Open market operations in the United States.\n", "Another possibility used to estimate the risk-free rate is the inter-bank lending rate. This appears to be premised on the basis that these institutions benefit from an implicit guarantee, underpinned by the role of the monetary authorities as 'the lendor of last resort.' (In a system with an endogenous money supply the 'monetary authorities' may be private agents as well as the central bank - refer to Graziani 'The Theory of Monetary Production'.) Again, the same observation applies to banks as a proxy for the risk-free rate – if there is any perceived risk of default implicit in the interbank lending rate, it is not appropriate to use this rate as a proxy for the risk-free rate.\n", "The interest rate target is maintained for a specific duration using open market operations. Typically the duration that the interest rate target is kept constant will vary between months and years. This interest rate target is usually reviewed on a monthly or quarterly basis by a policy committee.\n", "U.S. prime rate\n\nIn general, the United States prime rate runs approximately 300 basis points (or 3 percent) above the federal funds rate. The Federal Open Market Committee (FOMC) meets eight times per year wherein they set a target for the federal funds rate. Other rates, including the prime rate, derive from this base rate.\n\nWhen 23 out of 30 largest US banks change their prime rate, the \"WSJ\" prints a composite prime rate change.\n\nSection::::Uses.\n", "In the United States, the prime rate runs approximately 300 basis points (or 3 percentage points) above the federal funds rate, which is the interest rate that banks charge each other for overnight loans made to fulfill reserve funding requirements. The Federal funds rate plus a much smaller increment is frequently used for lending to the most creditworthy borrowers, as is LIBOR, the London Interbank Offered Rate. The Federal Open Market Committee (FOMC) meets eight times per year to set a target for the federal funds rate.\n", "The mechanism to move the market towards a 'target rate' (whichever specific rate is used) is generally to lend money or borrow money in theoretically unlimited quantities, until the targeted market rate is sufficiently close to the target. Central banks may do so by lending money to and borrowing money from (taking deposits from) a limited number of qualified banks, or by purchasing and selling bonds. As an example of how this functions, the Bank of Canada sets a target overnight rate, and a band of plus or minus 0.25%. Qualified banks borrow from each other within this band, but never above or below, because the central bank will always lend to them at the top of the band, and take deposits at the bottom of the band; in principle, the capacity to borrow and lend at the extremes of the band are unlimited. Other central banks use similar mechanisms.\n", "The Federal Reserve uses open market operations to make the federal funds effective rate follow the federal funds target rate. The target rate is chosen in part to influence the money supply in the U.S. 
economy\n\nSection::::Mechanism.\n", "The central bank influences interest rates by expanding or contracting the monetary base, which consists of currency in circulation and banks' reserves on deposit at the central bank. \n\nCentral banks have three main tools of monetary policy: open market operations, the discount rate and the reserve requirements.\n", "If the total income for a year does not fall within the overall taxable limits, customers can submit a Form 15 G (below 60 years of age) or Form 15 H (above 60 years of age) to the bank when starting the FD and at the start of every financial year to avoid TDS.\n\nSection::::How bank FD rates of interest vary with Central Bank policy.\n", "Prior to December 17, 2008, the \"Wall Street Journal\" followed a policy of changing its published prime rate when 23 out of 30 of the United States' largest banks changed their prime rates. Recognizing that fewer, larger banks now control most banking assets—i.e., it is more concentrated—the \"Journal\" now publishes a rate reflecting the base rate posted by at least 70% of the top ten banks by assets.\n\nSection::::Use in different banking systems.:Malaysia.\n", "The Federal Reserve sets monetary policy by influencing the federal funds rate, which is the rate of interbank lending of excess reserves. The rate that banks charge each other for these loans is determined in the interbank market and the Federal Reserve influences this rate through the three \"tools\" of monetary policy described in the \"Tools\" section below. The federal funds rate is a short-term interest rate that the FOMC focuses on, which affects the longer-term interest rates throughout the economy. The Federal Reserve summarized its monetary policy in 2005:\n", "To compensate for the low liquidity, FDs offer higher rates of interest than saving accounts. The longest permissible term for FDs is 10 years. Generally, the longer the term of deposit, higher is the rate of interest but a bank may offer lower rate of interest for a longer period if it expects interest rates, at which the Central Bank of a nation lends to banks (\"repo rates\"), will dip in the future.\n", "The assessment of interest rate risk is a very large topic at banks, thrifts, saving and loans, credit unions, and other finance companies, and among their regulators. The widely deployed CAMELS rating system assesses a financial institution's: (C)apital adequacy, (A)ssets, (M)anagement Capability, (E)arnings, (L)iquidity, and (S)ensitivity to market risk. A large portion of the (S)ensitivity in CAMELS is \"interest rate risk\". Much of what is known about assessing interest rate risk has been developed by the interaction of financial institutions with their regulators since the 1990s. Interest rate risk is unquestionably the largest part of the (S)ensitivity analysis in the CAMELS system for most banking institutions. When a bank receives a bad CAMELS rating equity holders, bond holders and creditors are at risk of loss, senior managers can lose their jobs and the firms are put on the FDIC problem bank list. \n" ]
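The Bank of Canada example in those passages is a "corridor" mechanism: the overnight market rate cannot drift outside a band around the target, because the central bank lends without limit at the top of the band and accepts deposits at the bottom. A sketch of the resulting clamp (the half-band width comes from the passage; the quoted market rates are invented):

```python
def corridor(market_rate, target, half_band=0.25):
    """Clamp an overnight rate into the central bank's corridor."""
    return min(max(market_rate, target - half_band), target + half_band)

for quote in (3.80, 4.50, 5.10):                  # invented market quotes
    print(f"{quote:.2f} -> {corridor(quote, target=4.50):.2f}")
# prints 4.25, 4.50 and 4.75: nothing trades outside the band
```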
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-20566
Why is it better to use smaller monitors (not 55" TVs) for playing video games, if it's true?
A monitor has a latency (delay) that is a lot lower than a TV's. Most normal TN monitor panels have a latency of around 5 ms, and an average IPS monitor or most other panel technologies about 8 ms. A TV isn't made for computer use or gaming, so its image processing often adds far more lag, sometimes 100 ms or more, and it mostly doesn't even get measured because it doesn't really matter for watching television. Second is the refresh rate: a lot of TVs advertise inflated refresh rates of sometimes up to 1000 Hz, but that isn't actually how fast the screen refreshes. Many TVs use motion interpolation to guess what the next frame will look like, and you can imagine that guessed frames aren't good when you're playing video games. Monitors don't have these tricks and display their true refresh rate, most commonly 60 Hz (the refresh rate is how many times per second the screen redraws). Refresh rates on monitors can also be a lot higher if you choose a higher-end model; some screens go up to a true 240 Hz depending on the resolution they're running at. Lastly is the color. Most TVs don't have accurate colors; they are tuned for watching movies, so if you game on one the colors can be really distorted. Monitors almost always come with acceptable color reproduction, and you can buy models with even better color accuracy if you're going to edit videos professionally or use Photoshop.
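A back-of-the-envelope way to see why those numbers matter: the delay you feel is roughly the display's processing lag plus the wait for the next refresh. The figures below are illustrative assumptions in the spirit of the answer, not measurements of any particular model:

```python
def worst_case_delay_ms(processing_ms, refresh_hz):
    """Processing lag plus one full refresh interval (worst case)."""
    return processing_ms + 1000.0 / refresh_hz

print(worst_case_delay_ms(5, 144))   # fast TN gaming monitor: ~11.9 ms
print(worst_case_delay_ms(8, 60))    # typical 60 Hz IPS monitor: ~24.7 ms
print(worst_case_delay_ms(100, 60))  # TV with interpolation on: ~116.7 ms
```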
[ "Section::::Comparison of television display technologies.\n\nSection::::Comparison of television display technologies.:CRT.\n\nThough large-screen CRT TVs/monitors exist, the screen size is limited by their impracticality. The bigger the screen, the higher the weight, and the deeper the CRT.\n\nA typical 32-inch television can weigh about 50 lbs or more. The biggest ever CRT was about 60 inches, weighed 250 lbs.\n\nSlimFit televisions (40 inch, 80 lbs) exist, but are not common.\n\nSection::::Comparison of television display technologies.:LCD.\n\nBULLET::::- Advantages:\n\nBULLET::::- Slim profile\n\nBULLET::::- Lighter and less bulky than rear-projection televisions\n", "BULLET::::- Display resolution: the number of pixels in each dimension on a display. In general a higher resolution will yield a clearer, sharper image.\n", "Some monitors are designed exclusively for gamers, featuring higher refresh rates and improved response times at the expense of a lower resolution. E-sports, or competitive gamers, often favor higher framerates at the expense of reduced color accuracy, preferring TN panels over IPS panels.\n\nSection::::Hardware description.:Audio.\n\nGaming PCs are usually equipped with a dedicated sound card and speakers in a 5.1 or 7.1 surround sound configuration. The speaker setup or a set of quality headphones is required to enjoy the advanced sound found in most modern computer games.\n", "The TV image is composed of many lines of pixels. Ideally, the TV watcher sits far enough away from the screen that the individual lines merge into one solid image. The watcher may sit even farther away and still see a good picture, but it will be a smaller portion of their visual field. However, note that viewing behavior is dependent on screen size. With increasing screen sizes, visual exploration is enhanced with an increased number of fixations of shorter duration and a tendency to view only the center of the display.\n\nSection::::Display sizes of common TVs and computer monitors.\n", "BULLET::::- Since many modern DVDs and some TV shows are in a widescreen format, widescreen displays are optimal for their playback on a computer. 16:9 material on a 16:10 display will be letterboxed. In data processing or viewing 4:3 entertainment material such as older films and digital photographs, the widescreen will be pillarboxed.\n\nBULLET::::- In the majority of games since 2005, you get wider field of view with a widescreen monitor.\n\nBULLET::::- Games prior to 2005 usually work better with a 4:3 than a widescreen monitor because of better compatibility.\n\nSection::::Computer displays.:Conversion.\n", "BULLET::::- Aspect ratio: The ratio of the display width to the display height. 
The aspect ratio of a traditional television is 4:3, which is being discontinued; the television industry is currently changing to the 16:9 ratio typically used by large-screen, high-definition televisions.\n", "BULLET::::- Rear-projection has smaller viewing angles than those of flat-panel displays\n\nSection::::Comparison of different types of rear-projection televisions.\n\nSection::::Comparison of different types of rear-projection televisions.:CRT projector.\n\nAdvantages:\n\nBULLET::::- Achieves excellent black level and contrast ratio\n\nBULLET::::- Achieves excellent color reproduction\n\nBULLET::::- CRTs have generally very long lifetimes\n\nBULLET::::- Greater viewing angles than those of LCDs\n\nDisadvantages:\n\nBULLET::::- Heavy and large, especially depth-wise\n\nBULLET::::- If one CRT fails the other two should be replaced for optimal color and brightness balance\n\nBULLET::::- Susceptible to burn-in because CRT is phosphor-based\n", "BULLET::::- Typically have slower response times than Plasmas, which can cause ghosting and blurring during the display of fast-moving images. This is also improving by increasing the refresh rate of LCDs\n\nSection::::Comparison of television display technologies.:Plasma display.\n\nBULLET::::- Advantages:\n\nBULLET::::- Slim cabinet profile\n\nBULLET::::- Can be wall-mounted\n\nBULLET::::- Lighter and less voluminous than rear-projection television sets\n\nBULLET::::- More accurate color reproduction than that of an LCD; 68 billion (2) colors vs. 16.7 million (2) colors\n\nBULLET::::- Produces deep, true blacks, allowing for superior contrast ratios (+ 1:1,000,000)\n", "BULLET::::- Brightness: The amount of light emitted from the display. It is sometimes synonymous with the term \"luminance\", which is defined as the amount of light per area and is measured in SI units as candela per square meter.\n", "The concept of \"multi-monitor\" games is not limited to games that can be played on personal computers. As arcade technology entered the 1990s, larger cabinets were being built which in turn also housed larger monitors such as the 3 28\" screen version of Namco's \"Ridge Racer\" from 1993. Although large screen technology such as CRT rear projection was beginning to be used more often, multi-monitor games were still occasionally released, such as Sega's \"F355 Challenge\" from 1999 which again used 3 28\" monitors for the sit-down cockpit version. The most recent use of a multi-monitor setup in arcades occurred with Taito's \"Dariusburst: Another Chronicle\" game, released in Japan in December 2010 and worldwide the following year. It uses 2 32\" LCD screens and an angled mirror to create a seamless widescreen. \n", "When monitors are sold, the quoted size is the diagonal measurement of the display area. 
Because of the different ratio, a 16:9 monitor will have a shorter height than a 4:3 monitor of the same advertised size.\n", "BULLET::::- Wider viewing angles (+178°) than those of an LCD; the image does not degrade (dim and distort) when viewed from a high angle, as occurs with an LCD\n\nBULLET::::- No motion blur; eliminated with higher refresh rates and faster response times (up to 1.0 microsecond), which make plasma TV technology ideal for viewing the fast-moving film and sport images\n\nBULLET::::- Disadvantages:\n\nBULLET::::- Susceptible to screen burn-in and image retention; late-model plasma TV sets feature corrective technology, such as pixel shifting\n", "Although the early Vectrex home console had a built-in, vertically-oriented screen, the majority of home games consoles were designed to interface with standard television sets, which use landscape orientation. As a consequence the conversion of early popular arcade games to home consoles was difficult, not only because the home computing capability was lower, but also the screen orientation was mismatched and the home user could not be expected to set their television on its side to show the game correctly. This is why most early home versions of arcade games have a wide, squashed appearance compared to the full-quality arcade versions.\n", "BULLET::::- Complexity of formal features - as mentioned previously, television has the opportunity to provide several audio and video features. Studies have shown that features such as fast program pacing may decrease processing capabilities.\n\nSection::::Governing principles.\n\nThis theory is governed by several broad principles detailing the allocation of resources among the two content types.\n\nSection::::Governing principles.:Narrative dominance.\n", "Section::::Viewing distances.\n\nBefore deciding on a particular display technology size, it is very important to determine from what distances it is going to be viewed. As the display size increases so does the ideal viewing distance. Bernard J. 
Lechner, while working for RCA, studied the best viewing distances for various conditions and derived the so-called Lechner distance.\n\nAs a rule of thumb, the viewing distance should be roughly two to three times the screen size for standard definition (SD) displays.\n\nSection::::Display specifications.\n\nThe following are important factors for evaluating television displays:\n\nBULLET::::- Display size: the diagonal length of the display.\n", "BULLET::::- Phosphor-luminosity diminishes over time, resulting in the gradual decline of absolute image-brightness; corrected with the 60,000-hour life-span of contemporary plasma TV technology (longer than that of CRT technology)\n\nBULLET::::- Not manufactured in sizes smaller than 37-inches diagonal\n\nBULLET::::- Susceptible to reflective glare in a brightly lighted room, which dims the image\n\nBULLET::::- High rate of electrical-power consumption\n\nBULLET::::- Heavier than the comparable LCD TV set, because of the glass screen that contains the gases\n", "BULLET::::- Slimmest of all types of projection televisions\n\nBULLET::::- Achieves excellent black level and contrast ratio\n\nBULLET::::- DMD chip can be easily repaired or replaced\n\nBULLET::::- Is not susceptible to burn-in\n\nBULLET::::- Better viewing angles than those of CRT projectors\n\nBULLET::::- Image brightness only decreases due to the age of the lamp\n\nBULLET::::- defective pixels are rare\n\nBULLET::::- Does not experience the screen-door effect\n\nDisadvantages:\n", "BULLET::::- Costlier screen repair; the glass screen of a plasma TV set can be damaged permanently, and is more difficult to repair than the plastic screen of an LCD TV set\n\nSection::::Comparison of television display technologies.:Projection television.\n\nSection::::Comparison of television display technologies.:Projection television.:Front-projection television.\n\nBULLET::::- Advantages:\n\nBULLET::::- Significantly cheaper than flat-panel counterparts\n\nBULLET::::- Front-projection picture quality approaches that of movie theater\n\nBULLET::::- Front-projection televisions take up very little space because a projector screen is extremely slim, and even a suitably prepared wall can be used\n\nBULLET::::- Display size can be extremely large, typically limited by room height.\n\nBULLET::::- Disadvantages:\n", "Modern arcade emulators are able to handle this difference in screen orientation by dynamically changing the screen resolution to allow the portrait oriented game to resize and fit a landscape display, showing wide empty black bars on the sides of the portrait-on-landscape screen.\n\nPortrait orientation is still used occasionally within some arcade and home titles (either giving the option of using black bars or rotating the display), primarily in the vertical shoot 'em up genre due to considerations of aesthetics, tradition and gameplay.\n\nSection::::Modern display rotation methods.\n", "The term \"10-foot\" is used to differentiate this user interface style from those used on desktop computers, which typically assume the user's eyes are only about two feet (60 cm) from the display. 
This difference in distance from the display has a huge impact on the interface design, requiring the use of extra large fonts on a television and allowing relatively few items to be shown on a television at once.\n", "Nintendo demonstrated the feasibility of playing multi-monitor games on handheld game consoles in designing the Nintendo DS and its successor, the Nintendo 3DS, which both became successful consoles in their own right. Games on these systems take advantage of the two screens available, typically by displaying gameplay on the upper screen, while showing useful information on the bottom screen. There are also a number of games, mostly for the Nintendo DS, whose gameplay spans across both screens, combining them into one tall screen for a more unique and larger view of the action.\n\nSection::::Developing software for multiple monitor workstations.\n", "Video games are often resolution-independent; an early example is \"Another World\" for DOS, which used polygons to draw its 2D content and was later remade using the same polygons at a much higher resolution. 3D games are resolution-independent since the perspective is calculated every frame and so it can vary its resolution.\n\nSection::::See also.\n\nBULLET::::- Adobe Illustrator\n\nBULLET::::- CorelDRAW\n\nBULLET::::- Direct2D\n\nBULLET::::- Display PostScript\n\nBULLET::::- Himetric\n\nBULLET::::- Inkscape\n\nBULLET::::- Page zooming\n\nBULLET::::- Responsive Web Design\n\nBULLET::::- Retina display\n\nBULLET::::- Scalable Vector Graphics\n\nBULLET::::- Synfig\n\nBULLET::::- Twips\n\nBULLET::::- Vector-based graphical user interface\n\nBULLET::::- Vector graphics\n\nSection::::External links.\n", "BULLET::::- Gamut is measured as coordinates in the CIE 1931 color space. The names sRGB or AdobeRGB are shorthand notations.\n\nBULLET::::- Aspect ratio is the ratio of the horizontal length to the vertical length. Monitors usually have the aspect ratio , , or .\n\nBULLET::::- Viewable image size is usually measured diagonally, but the actual widths and heights are more informative since they are not affected by the aspect ratio in the same way. For CRTs, the viewable size is typically smaller than the tube itself.\n", "The resolution of the human eye (with 20/20 vision) is about one minute of arc. For full HDTV resolution, this one minute of arc implies that the TV watcher should sit 4 times the height of the screen away. At this distance the individual pixels can not be resolved while simultaneously maximising the viewing area. So the ideal set size can be determined from the chart below by measuring the distance from where the watcher would sit to the screen in centimeters (or inches), dividing that by 4, and comparing with the screen heights below. At this distance, viewers with better than 20/20 vision will still be able to see the individual pixels. If the user is replacing a standard definition TV with an HDTV this implies that the best visual experience will be with a set that is twice as tall as the standard definition set. As the average size LCD TV being sold is now 38\", which is only about 15% taller than a 27\" standard definition TV, this means that most consumers buy HDTV sets that are smaller than what they could utilize. Cost and budget also limit screen size. \n", "BULLET::::- Patterned Vertical Alignment (PVA): This type of display is a variation of MVA and performs very similarly, but with much higher contrast ratios.\n\nSection::::Display technologies.:Plasma display.\n" ]
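The viewing-distance passages above reduce to simple geometry: the quoted rules of thumb are about four times the screen height for full HD and two to three times the screen size for SD. A small sketch of that arithmetic; the 16:9 geometry is standard, and the 55-inch example size is arbitrary:

```python
import math

def screen_height(diagonal, aspect=(16, 9)):
    """Screen height from its diagonal, in the same units as the input."""
    w, h = aspect
    return diagonal * h / math.hypot(w, h)

diag = 55.0                               # arbitrary example, in inches
height = screen_height(diag)              # ~27 inches for a 55" 16:9 panel
print(f"HD rule: sit ~{4 * height:.0f} in away")         # ~108 in (~2.7 m)
print(f"SD rule: {2 * diag:.0f}-{3 * diag:.0f} in away")  # 110-165 in
```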
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04823
How do apps keep up with updates in an OS (Android, iOS, etc.)?
Well, often they don't bother, but in general OS updates are announced in advance and are fairly well documented, so devs can get pre-release versions to test their app against and keep up. OS updates also aren't released that frequently: minor ones only a few times a year, and the big ones less often, usually about once per year. Good devs interested in maintaining their app pay attention to what's happening, make sure there aren't any show-stoppers, and can release their own updates any time they want.
[ "BULLET::::- Built-in update: Mechanisms for installing updates are built into some software systems (or, in the case of some operating systems such as Linux, Android and iOS, into the operating system itself). Automation of these update processes ranges from fully automatic to user initiated and controlled. Norton Internet Security is an example of a system with a semi-automatic method for retrieving and installing updates to both the antivirus definitions and other components of the system. Other software products provide query mechanisms for determining when updates are available.\n", "The App Store received several significant changes in iOS 7. Users can enable automatic app updates. Users can now view a history of updates to each installed app. With location services enabled, the App Store has a Near Me tab that recommends popular apps based on the user's geographic location. It also became possible to download older versions of apps, in case new iOS versions left older devices incompatible for system updates, allowing users to maintain a working copy of the last supported update of each app.\n\nSection::::App features.:Photos and Camera.\n", "Software updates are delivered to Windows Phone users via Microsoft Update, as is the case with other Windows operating systems. Microsoft initially had the intention to directly update any phone running Windows Phone instead of relying on OEMs or wireless carriers, but on January 6, 2012, Microsoft changed their policy to let carriers decide if an update will be delivered.\n", "Device management systems can benefit end-users by incorporating plug and play data services, supporting whatever device the end-user is using.. Such a platform can automatically detect devices in the network, sending them settings for immediate and continued usability. The process is fully automated, keeping the history of used devices and sending settings only to subscriber devices which were not previously set. One method of managing mobile updates is to filter IMEI/IMSI pairs. Some operators report activity of 50 over-the-air settings update files per second. Changed\n\nSection::::Mobile content provisioning.\n", "Section::::Windows Device Recovery Tool.\n\nIn February 2015, to coincide with the launch with the technical preview of Windows 10 Mobile, Microsoft launched a similar application for Windows Insiders known as the Windows Phone Recovery Tool. This application will remove Windows 10 from the device and restore the most current Windows Phone 8.1 software and the device's latest available firmware (i.e. Lumia Cyan or Lumia Denim).\n", "Those firmware packages are updated frequently, incorporate elements of Android functionality that haven't yet been officially released within a carrier-sanctioned firmware, and tend to have fewer limitations. CyanogenMod and OMFGB are examples of such firmware.\n", "On 13 May 2015 Microsoft added support for the HTC One (M8) for Windows devices in version 2.0.3 while earlier the Windows Phone Recovery Tool exclusively worked with Microsoft/Nokia Lumia devices. In September 2015 Microsoft updated the Windows Phone Recovery Tool which renamed it to the Windows Device Recovery Tool alongside several minor fixes such as improved support features, and some accessibility improvements. 
In November 2015 Microsoft added additional support for another non-Microsoft made Windows mobile device, the \"LG Lancet\".\n", "BULLET::::- Migration: Migration of languages (3GL or 4GL), databases (legacy to RDBMS, and one RDBMS to another), platform (from one OS to another OS), often using automated parsers and converters for high efficiency. This is a quick and cost-effective way of transforming legacy systems.\n\nBULLET::::- Cloud Migration: Migration of legacy applications to cloud platforms often using a methodology such as Gartner’s 5 Rs methodology to segment and prioritize apps into different models (Rehost, Refactor, Revise, Rebuild, Replace).\n", "Windows Update Agent on Windows 10 supports peer to peer distribution of updates; by default, systems' bandwidth is used to distribute previously downloaded updates to other users, in combination with Microsoft servers. Users may optionally change Windows Update to only perform peer to peer updates within their local area network.\n", "BULLET::::- Navigation Tabs Revamped – The iOS 11 App Store now includes three featured tabs (Today, Apps, and Games), as well as the Updates, and Search items. Additionally, the top charts have been moved to within the Apps and Games tabs.\n\nBULLET::::- Top Grossing chart removed – The top grossing chart is now removed from the App Store app.\n\nBULLET::::- All time-only ratings – Ratings and reviews will no longer reset after a new app update and there is no longer a current set of ratings/reviews. Developers can choose to manually reset their ratings and reviews.\n", "Any app that is ready for updating can be updated faster and more efficiently due to this new system. If, for example, a game that is 300 megabytes is updated with a new racetrack that adds an additional two megabytes to the application's size, only two megabytes will be downloaded instead of 302 megabytes.\n\nSection::::Uses.\n\nSection::::Uses.:Linux.\n", "Most mobile devices are sold with several apps bundled as pre-installed software, such as a web browser, email client, calendar, mapping program, and an app for buying music, other media, or more apps. Some pre-installed apps can be removed by an ordinary uninstall process, thus leaving more storage space for desired ones. Where the software does not allow this, some devices can be rooted to eliminate the undesired apps.\n", "German psychotherapist and online addiction expert Bert te Wildt recommends using apps such as Offtime and Menthal to help prevent mobile phone overuse. In fact, there are many apps available on Android and iOS stores which help track mobile usage. For example, in iOS 12 Apple added a function called \"Screen Time\" that allows users to see how much time they have spent on the phone. These apps usually work by doing one of two things: increasing awareness by sending user usage summaries, or notifying the user when he/ she has exceeded some user-defined time-limit for each app or app category.\n", "The pace at which feature updates are received by devices is dependent on which release channel is used. The default branch for all users of Windows10 Home and Pro is \"Semi-Annual Channel (Targeted)\" (formerly \"Current Branch\", or \"CB\"), which receives stable builds after they are publicly released by Microsoft. Each build of Windows 10 is supported for 18 months after its original release. 
In enterprise environments, Microsoft officially intends that this branch is used for \"targeted\" deployments of newly-released stable versions so that they can be evaluated and tested on a limited number of devices before a wider deployment. Once a stable build is certified by Microsoft and its partners as being suitable for broad deployment, the build is then released on the \"Semi-Annual Channel\" (formerly \"Current Branch for Business\", or \"CBB\"), which is supported by the Pro and Enterprise editions of Windows 10. Semi-Annual Channel receives stable builds on a four-month delay from their release on the Targeted channel, Administrators can also use the \"Windows Update for Business\" system, as well as existing tools such as WSUS and System Center Configuration Manager, to organize structured deployments of feature updates across their networks.\n", "While Windows Phone 7 users were required to attach their phones to a PC to install updates, starting with Windows Phone 8, all updates are done via over-the-air downloads. Since Windows Phone 8, Microsoft has also begun releasing minor updates that add features to a current OS release throughout the year. These updates were first labeled \"General Distribution releases\" (or GDRs), but were later rebranded simply as \"Updates\".\n\nAll third-party applications can be updated automatically from the Windows Phone Store.\n\nSection::::Features.:Advertising platform.\n", "Google's open source project Chromium requires frequent updates to narrow the window of vulnerability. It uses a more aggressive diffing algorithm called \"courgette\" to reduce diff size of two binary executable files, which reduces the diff patch from 6.7% to 0.76% for one version update. The technology helped Chrome to push its updates to 100% of users in less than 10 days.\n\nApp APK updates in Android's Play Store use bsdiff, a new efficient delta update algorithm introduced in 2016. \n\nSection::::Uses.:Apple iOS.\n", "Section::::Implementation.\n\nTypically solutions include a server component, which sends out the management commands to the mobile devices, and a client component, which runs on the managed device and receives and implements the management commands. In some cases, a single vendor provides both the client and the server, while in other cases the client and server come from different sources.\n\nThe management of mobile devices has evolved over time. At first it was necessary to either connect to the handset or install a SIM in order to make changes and updates; scalability was a problem.\n", "The Android operating system checks that updates are signed with the same key, preventing others from distributing updates that are signed by a different key. Originally, the Google Play store required applications to be signed by the developer of the application, while F-Droid only allowed its own signing keys. So apps previously installed from another source have to be reinstalled to receive updates.\n", "During the 2016 Build keynote, Microsoft announced an update to the WNS and the Windows 10 Operating System that will allow for Android and iOS devices to forward push notifications received to Windows 10 to be viewed and discarded.\n\nSection::::Technical details.:Architecture.\n", "The \"Today\" view of Notification Center has been replaced by widgets, and is accessible by swiping from left to right. 
On the iPad, widgets can be displayed in a two-column layout.\n\nSection::::System features.:Notification Center.\n\nThe Notification Center no longer has a \"Today\" view.\n\nNotifications, now larger, can expand to display more information and all unread notifications can be cleared at once, using 3D Touch.\n\nApps that need to be updated frequently can now have notifications that update live.\n\nThe Notification Center contains a Spotlight search bar.\n\nSection::::System features.:Settings.\n", "The Messages app allows users to see timestamps for every message they have sent or received.\n\nSection::::Reception.\n", "Apps for Windows Phone 8.1 can now be created using the same application model as Windows Store apps for Windows 8.1, based on the Windows Runtime, and the file extension for WP apps is now \".appx\" (which is used for Windows Store apps), instead of Windows Phone's traditional \".xap\" file format. Applications built for WP8.1 can invoke semantic zoom, as well as access to single sign-on with a Microsoft account. The Windows Phone Store now also updates apps automatically. The store can be manually checked for updates available for applications on a device. It also adds the option to update applications when on Wi-Fi only.\n", "Users can remove rarely-used apps without losing the app's data using the \"Offload App\" button. This allows for a later reinstallation of the app (if available on the App Store), in which data returns and usage can continue. Users can also have those apps removed automatically with the \"Offload Unused Apps\" setting. When an app is offloaded, the app appears on the home screen as a grayed-out icon.\n\nPersonalized suggestions will help the user free up storage space on their device, including emptying Photos trash, backing up messages, and enabling iCloud Photo Library for backing up photos and videos.\n", "BULLET::::- Version tracking: Version tracking systems help the user find and install updates to software systems. For example: Software Catalog stores version and other information for each software package installed on a local system. One click of a button launches a browser window to the upgrade web page for the application, including auto-filling of the user name and password for sites that require a login. On Linux, Android and iOS this process is even easier because a standardised process for version tracking (for software packages installed in the officially supported way) is built into the operating system, so no separate login, download and execute steps are required so the process can be configured to be fully automated. Some third-party software also supports automated version tracking and upgrading for certain Windows software packages.\n", "The service provides several kinds of updates. \"Security updates\" or \"critical updates\" mitigate vulnerabilities against security exploits against Microsoft Windows. \"Cumulative updates\" are updates that bundle previously released updates. Cumulative updates were introduced with Windows 10 and have been backported to Windows 7 and Windows 8.1.\n" ]
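Several passages above describe delta updates (bsdiff, Chromium's courgette, the "2 MB instead of 302 MB" example). The sketch below is a toy block-level version of the idea only; real tools add compression and executable-aware diffing, and every name here is made up:

```python
def make_delta(old: bytes, new: bytes, block: int = 4096):
    """Collect only the blocks of `new` that differ from `old`."""
    return [(i, new[i:i + block])
            for i in range(0, len(new), block)
            if old[i:i + block] != new[i:i + block]]

def apply_delta(old: bytes, delta, new_len: int) -> bytes:
    """Rebuild `new` by patching the changed blocks onto `old`."""
    data = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for offset, chunk in delta:
        data[offset:offset + len(chunk)] = chunk
    return bytes(data)

old = b"A" * 10_000
new = old[:5_000] + b"B" * 100 + old[5_100:]   # a small change in a big file
delta = make_delta(old, new)
assert apply_delta(old, delta, len(new)) == new
print(sum(len(c) for _, c in delta), "bytes shipped instead of", len(new))
```

Only the blocks touched by the change travel over the wire, which is why a 2 MB patch to a 300 MB game stays a 2 MB download.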
[ "Apps always keep up with OS updates." ]
[ "Some apps keep up to date but others do not and this can cause bugs over time." ]
[ "false presupposition" ]
[ "Apps always keep up with OS updates.", "Apps always keep up with OS updates." ]
[ "normal", "false presupposition" ]
[ "Some apps keep up to date but others do not and this can cause bugs over time.", "Some apps keep up to date but others do not and this can cause bugs over time." ]
2018-15173
If the atoms that make up our body aren’t living, what makes us living?
In this case, the whole is greater than the sum of its parts. Life arises from the organized interaction of non-living components: together they carry out the chemical processes, such as metabolism, growth, and reproduction, that make a system alive.
[ "Some of the energy thus captured produces biomass and energy that is available for growth and development of other life forms. The majority of the rest of this biomass and energy are lost as waste molecules and heat. The most important processes for converting the energy trapped in chemical substances into energy useful to sustain life are metabolism and cellular respiration.\n\nSection::::Study and research.\n\nSection::::Study and research.:Structural.\n", "The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body.\n", "The survival of a living organism depends on the continuous input of energy. Chemical reactions that are responsible for its structure and function are tuned to extract energy from substances that act as its food and transform them to help form new cells and sustain them. In this process, molecules of chemical substances that constitute food play two roles; first, they contain energy that can be transformed and reused in that organism's biological, chemical reactions; second, food can be transformed into new molecular structures (biomolecules) that are of use to that organism.\n", "Section::::Definitions.:Living systems theories.\n\nLiving systems are open self-organizing living things that interact with their environment. These systems are maintained by flows of information, energy, and matter.\n", "Section::::Definitions.:Biology.\n\nSince there is no unequivocal definition of life, most current definitions in biology are descriptive. Life is considered a characteristic of something that preserves, furthers or reinforces its existence in the given environment. This characteristic exhibits all or most of the following traits:\n\nBULLET::::1. Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature\n\nBULLET::::2. Organization: being structurally composed of one or more cells – the basic units of life\n", "\"What we normally think of as 'life' is based on chains of carbon atoms, with a few other atoms, such as nitrogen or phosphorus\", per Stephen Hawking in a 2008 lecture, \"carbon [...] has the richest chemistry.\" \n", "The introduction is an illustrated essay, \"What Is Life?\", by Albert Szent-Györgyi—a Nobel Prize-winning biochemist and, as the essay's biographical tag explains, a protester against \"the irrational pursuit of war and politics that characterizes our Western culture\" as well as an advocate of using technology \"to create a psychologically and socially progressive world where humanistic values are paramount\". The essay explains Szent-Györgyi's outlook on the fundamental problems of biology, the relationship between biological and physical sciences, and reasons for pursuing biology. Szent-Györgyi concludes that \"To express the marvels of nature in the language of science is one of man's noblest endeavors. I no reason to expect the completion of that task within the near future.\"\n", "Bones in addition to supporting the body also serve, at the cellular level, as calcium and phosphate storage.\n\nSection::::Organisms with skeletons.:Vertebrates.:Fish.\n", "Section::::Energy usage in the human body.\n\nThe human body uses the energy released by respiration for a wide range of purposes: about 20% of the energy is used for brain metabolism, and much of the rest is used for the basal metabolic requirements of other organs and tissues. 
In cold environments, metabolism may increase simply to produce heat to maintain body temperature. Among the diverse uses for energy, one is the production of mechanical energy by skeletal muscle to maintain posture and produce motion.\n", "Section::::Content.\n", "The most notable groups of chemicals used in the processes of living organisms include:\n\nBULLET::::- Proteins, which are the building blocks from which the structures of living organisms are constructed (this includes almost all enzymes, which catalyse organic chemical reactions)\n\nBULLET::::- Nucleic acids, which carry genetic information\n\nBULLET::::- Carbohydrates, which store energy in a form that can be used by living cells\n\nBULLET::::- Lipids, which also store energy, but in a more concentrated form, and which may be stored for extended periods in the bodies of animals\n\nSection::::Fiction.\n", "In cinematic and literary science fiction, at a moment when man-made machines cross from nonliving to living, it is often posited that this new form would be the first example of non-carbon-based life. Since the advent of the microprocessor in the late 1960s, these machines are often classed as computers (or computer-guided robots) and filed under \"silicon-based life\", even though the silicon backing matrix of these processors is not nearly as fundamental to their operation as carbon is for \"wet life\".\n\nSection::::Overview.:Non-carbon-based biochemistries.:Other exotic element-based biochemistries.\n", "Living organisms are extremely complex functional systems that are made up of, at a minimum, many tens of thousands of genes, protein molecules, RNA molecules, small organic compounds, inorganic ions, and complexes in an environment that is spatially organized by membranes, and in the case of multicellular organisms, organ systems. These myriad components interact with each other and with their environment in a way that processes food, removes waste, moves components to the correct location, and is responsive to signalling molecules, other organisms, light, sound, heat, taste, touch, and balance.\n", "Therefore, these cofactors are continuously recycled as part of metabolism. As an example, the total quantity of ATP in the human body is about 0.1 mole. This ATP is constantly being broken down into ADP, and then converted back into ATP. Thus, at any given time, the total amount of ATP + ADP remains fairly constant. The energy used by human cells requires the hydrolysis of 100 to 150 moles of ATP daily, which is around 50 to 75 kg. In typical situations, humans use up their body weight of ATP over the course of the day. This means that each ATP molecule is recycled 1000 to 1500 times daily.\n", "Steady state (biochemistry)\n\nIn biochemistry, steady state refers to the maintenance of constant internal concentrations of molecules and ions in the cells and organs of living systems. Living organisms remain at a dynamic steady state where their internal composition at both cellular and gross levels is relatively constant, but different from equilibrium concentrations. A continuous flux of mass and energy results in the constant synthesis and breakdown of molecules via chemical reactions of biochemical pathways. 
Essentially, steady state can be thought of as homeostasis at a cellular level.\n\nSection::::Maintenance of Steady State.\n", "BULLET::::- Bioelectronics – the electrical state of biological matter significantly affects its structure and function, compare for instance the membrane potential, the signal transduction by neurons, the isoelectric point (IEP) and so on. Micro- and nano-electronic components and devices have increasingly been combined with biological systems like medical implants, biosensors, lab-on-a-chip devices etc. causing the emergence of this new scientific field.\n", "The musculoskeletal system consists of the human skeleton (which includes bones, ligaments, tendons, and cartilage) and attached muscles. It gives the body basic structure and the ability for movement. In addition to their structural role, the larger bones in the body contain bone marrow, the site of production of blood cells. Also, all bones are major storage sites for calcium and phosphate. This system can be split up into the muscular system and the skeletal system.\n\nSection::::Composition.:Systems.:Nervous system.\n", "Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium, and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more.\n\nSection::::Biomolecules.\n", "There is currently no consensus regarding the definition of life. One popular definition is that organisms are open systems that maintain homeostasis, are composed of cells, have a life cycle, undergo metabolism, can grow, adapt to their environment, respond to stimuli, reproduce and evolve. However, several other definitions have been proposed, and there are some borderline cases of life, such as viruses or viroids.\n", "Microbes simply secrete digestive enzymes into their surroundings, while animals only secrete these enzymes from specialized cells in their guts, including the stomach and pancreas, and salivary glands. The amino acids or sugars released by these extracellular enzymes are then pumped into cells by active transport proteins.\n\nSection::::Catabolism.:Energy from organic compounds.\n", "ATP is the usable form of chemical energy for muscular activity. It is stored in most cells, particularly in muscle cells. Other forms of chemical energy, such as those available from food, must be transformed into ATP before they can be utilized by the muscle cells.\n\nSection::::Coupled reactions.\n", "Living systems\n\nLiving systems are open self-organizing life forms that interact with their environment. These systems are maintained by flows of information, energy and matter.\n", "Biology and its subfields of biochemistry and molecular biology study biomolecules and their reactions. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are present in small amounts.\n", "Section::::Definitions.:Living systems theories.:Life as a property of ecosystems.\n", "To reflect the minimum phenomena required, other biological definitions of life have been proposed, with many of these being based upon chemical systems. Biophysicists have commented that living things function on negative entropy. 
In other words, living processes can be viewed as a delay of the spontaneous diffusion or dispersion of the internal energy of biological molecules towards more potential microstates. In more detail, according to physicists such as John Bernal, Erwin Schrödinger, Eugene Wigner, and John Avery, life is a member of the class of phenomena that are open or continuous systems able to decrease their internal entropy at the expense of substances or free energy taken in from the environment and subsequently rejected in a degraded form.\n" ]
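The ATP turnover figures quoted in the passages above (a standing pool of about 0.1 mole, 100 to 150 moles hydrolysed per day, around 50 to 75 kg, 1000 to 1500 recyclings per molecule) are easy to check. The short Python sketch below is an editorial illustration, not part of the source passages; its only inputs are the quoted ranges plus the molar mass of ATP (roughly 507 g/mol).

# Back-of-the-envelope check of the ATP turnover figures quoted above.
# Illustrative only; inputs are the quoted ranges plus ATP's molar mass.
ATP_MOLAR_MASS = 507.18      # g/mol
POOL_MOL = 0.1               # standing ATP pool quoted in the passage

for daily_mol in (100, 150):
    mass_kg = daily_mol * ATP_MOLAR_MASS / 1000   # mass hydrolysed per day
    cycles = daily_mol / POOL_MOL                 # recyclings per molecule
    print(f"{daily_mol} mol/day -> {mass_kg:.0f} kg of ATP, "
          f"~{cycles:.0f} recyclings per molecule per day")

# Prints 51 kg / 1000 cycles and 76 kg / 1500 cycles, matching the
# passage's "around 50 to 75 kg" and "recycled 1000 to 1500 times daily".

The output reproduces the passage's claims: a human turns over roughly a body weight of ATP each day, so each molecule must be regenerated on the order of a thousand times.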
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02206
How did computer hackers do their hacking back in the 60s and 70s?
The vast majority of hacking in the pre-internet days relied on physical access to the system, as that was usually the only way to reach it, or on manipulating someone who had physical access into doing something to it (knowingly or unwittingly). Even today, physical access is how a lot of hacks take place; there's just no substitute for actually being able to touch the machine. Even some newer big-name hacks, such as at least one of the Sony hacks and Stuxnet, were done with physical access.
[ "In 1984, Gold and fellow journalist/hacker Robert Schifreen demonstrated an \"ad hoc penetration test\" of a Prestel network which, according to the writer Nick Barron, used \"a combination of clever shoulder surfing and good old-fashioned hacking skills\". An archive telling the story of how the 1980s hack of Prince Philip’s mailbox led to UK anti-hacking legislation is held at The National Museum of Computing in Bletchley.\n", "BULLET::::- Susan Headley (also known as Susan Thunder), was an American hacker active during the late 1970s and early 1980s widely respected for her expertise in social engineering, pretexting, and psychological subversion. She became heavily involved in phreaking with Kevin Mitnick and Lewis de Payne in Los Angeles, but later framed them for erasing the system files at US Leasing after a falling out, leading to Mitnick's first conviction.\n", "Apple cofounder Steve Wozniak makes an appearance as well to talk about his phone phreaking days, as does phreaking pioneer John \"Captain Crunch\" Draper. In the film, Wozniak explains that hackers and phreakers originally maintained a strong ethic, using their techniques not to \"rip people off or make money\" but \"to explore\". Draper observes that the imprisonment of hackers had unintended side-effects: \"Once I got busted, I had to tell everybody else in jail how to do it and then the cat's out of the bag. The next thing you know, the mob has this technology.\"\n", "Section::::1970s.\n\nSection::::1970s.:1971.\n\nBULLET::::- John T. Draper (later nicknamed Captain Crunch), his friend Joe Engressia (also known as Joybubbles), and blue box phone phreaking hit the news with an \"Esquire Magazine\" feature story.\n\nSection::::1970s.:1979.\n\nBULLET::::- Kevin Mitnick breaks into his first major computer system, the Ark, the computer system Digital Equipment Corporation (DEC) used for developing their RSTS/E operating system software.\n\nSection::::1980s.\n\nSection::::1980s.:1980.\n\nBULLET::::- The FBI investigates a breach of security at National CSS (NCSS). \"The New York Times\", reporting on the incident in 1981, describes hackers as\n\nSection::::1980s.:1981.\n\nBULLET::::- Chaos Computer Club forms in Germany.\n", "An encounter of the programmer and the computer security hacker subculture occurred at the end of the 1980s, when a group of computer security hackers, sympathizing with the Chaos Computer Club (which disclaimed any knowledge in these activities), broke into computers of American military organizations and academic institutions. They sold data from these machines to the Soviet secret service, one of them in order to fund his drug addiction. The case was solved when Clifford Stoll, a scientist working as a system administrator, found ways to log the attacks and to trace them back (with the help of many others). \"23\", a German film adaption with fictional elements, shows the events from the attackers' perspective. Stoll described the case in his book \"The Cuckoo's Egg\" and in the TV documentary \"The KGB, the Computer, and Me\" from the other perspective. According to Eric S. Raymond, it \"nicely illustrates the difference between 'hacker' and 'cracker'. 
Stoll's portrait of himself, his lady Martha, and his friends at Berkeley and on the Internet paints a marvelously vivid picture of how hackers and the people around them like to live and how they think.\"\n", "BULLET::::- The first known incidence of network penetration hacking took place when members of a computer club at a suburban Chicago area high school were provided access to IBM's APL network. In the Fall of 1967, IBM (through Science Research Associates) approached Evanston Township High School with the offer of four 2741 Selectric teletypewriter based terminals with dial-up modem connectivity to an experimental computer system which implemented an early version of the APL programming language. The APL network system was structured in Workspaces which were assigned to various clients using the system. Working independently, the students quickly learned the language and the system. They were free to explore the system, often using existing code available in public Workspaces as models for their own creations. Eventually, curiosity drove the students to explore the system's wider context. This first informal network penetration effort was later acknowledged as helping harden the security of one of the first publicly accessible networks:\n", "Section::::History.\n\nLevy explains that MIT housed an early IBM 704 computer inside the Electronic Accounting Machinery (EAM) room in 1959. This room became the staging grounds for early hackers, as MIT students from the Tech Model Railroad Club sneaked inside the EAM room after hours to attempt programming the 30-ton, computer.\n", "According to an unpublished study by Beverly E. Golemba of Langley's early computers, a number of other women did not know about the West Computers. That said, both the black and white women Golemba interviewed recalled that when computers from both groups were assigned to a project together, \"everyone worked well together.\"\n\nSection::::Notable members.\n", "BULLET::::- \"Die Hard\" (1988) — The computer room shot up by one of the terrorists contained a number of working Cyber 180 computers and a mock-up of an ETA-10 supercomputer, along with a number of other peripheral devices, all provided by CDC Demonstration Services/Benchmark Lab. This equipment was requested on short notice after another computer manufacturer backed out at the last minute. Paul Derby, manager of the Benchmark Lab, arranged to send two van-loads of equipment to Hollywood for the shoot, accompanied by Jerry Stearns of the Benchmark Lab who watched over this equipment. After the machines were returned to Minnesota, they were inspected and tested, and as each machine was sold, a notation was made in the corporate records that the machine had appeared in the film.\n", "By 1986, the team was hiring freelancers and developing many home computer licenses of arcade machines. \"Ghosts 'n Goblins\", for example, was converted to the Spectrum by freelance programmer Nigel Alderton and graphics designer Karen Trueman, plus Elite's regular team. The Aldridge-based headquarters housed a row of arcade cabinets for games that were being converted. Their hardware had been hacked so the team could analyse the games to ensure an accurate, licensed conversion.\n", "Jack Marshall (Hacker) is a freelance systems analyst from Raleigh, North Carolina. Fascinated with computers ever since he was a child, he grew up alongside the industry and eventually ended up working at Digitronix World Industries (DTX), a small company in Dallas, Texas. 
Maverick company president Donny Travis worked alongside Marshall to invent the Digitronix Desktop PC.\n", "Section::::1960s.:1967.\n", "1. The Tech Model Railroad Club (TMRC) is a club at MIT that built sophisticated railroad and train models. The members were among the first hackers. Key figures of the club were Peter Samson, Alan Kotok, Jack Dennis, and Bob Saunders. The club was composed of two groups, those who were interested in the modeling and landscaping, and those who comprised the Signals and Power Subcommittee and created the circuits that enable the trains to run. The latter would be among the ones who popularized the term hacker among many other slang terms, and who eventually moved on to computers and computer programming. They were initially drawn to the IBM 704, the multimillion-dollar mainframe that was operated at Building 26, but access and time to the mainframe was reserved for more important people. The group really began being involved with computers when Jack Dennis, a former member, introduced them to the TX-0, a three-million-dollar computer on long-term-loan from Lincoln Laboratory. They would usually stake out the place where the TX-0 was housed until late in the night in hopes that someone who had signed up for computer time was absent.\n", "In 1961, the University of Wisconsin developed a technology called FORGO for the IBM 1620 which combined some of the steps.\n\nSimilar experiments were carried out at Purdue University on the IBM 7090 in a system called PUFFT.\n\nSection::::History.:WATFOR 7040.\n\nIn summer 1965, four undergraduate students of the University of Waterloo, Gus German, James G. Mitchell\n", "Morningstar worked on \"The Palace\", the world's largest graphical chat system. He also worked on Project Xanadu, the first distributed hypertext system, which was initially started in 1960.\n", "In February 1975, to illustrate the lax security on campus, a photo essay showed editor Gary Curtis stealing typewriters, and even a photocopier, while guards watched.\n\nSection::::History.:Later events.\n", "Fast Hack'em\n\nFast Hack'em is a Commodore 64 nibbler and disk editor written by Mike J. Henry and released in 1985. It was distributed in the U.S. via Henry's \"Basement Boys Software\", and in the U.K. via Datel Electronics. In the U.S., it retailed for $29.95. ($70.84 in inflation-adjusted 2018 dollars)\n\nSection::::Features.\n", "Three former staff and a pupil at the Harvey worked at the once secret code breaking centre at Bletchley Park near Milton Keynes, which was recently made public and has become a tourist attraction. Their unique roles are honoured on a plaque in the school hall. The school's Headmaster Oliver Berthoud (1946–1952) was there, as was the school's long-serving secretary Miss Audrey Wind. Although they worked closely in the school it was not until a discussion one day in Mr Berthoud's office that he managed to get Miss Wind to admit to her involvement and they spoke at length about their time there.\n", "16. The Third Generation consisted of hackers who had much more access to computers than the former hardware hackers. Personal computers were becoming popular to the point where high school kids could afford them. Among this new generation was John Harris, an Atari assembly hacker, who later produced games for On-Line.\n\n17. Summer Camp was the nickname for On-Line's headquarters. The staff partied and had fun whenever they were not working. 
The atmosphere was very casual and free-spirited, but it could not last as the company was growing.\n", "They then met John Parker, an investment banker who had run Northwest Aeronautical Corporation (NAC), a glider subsidiary of Chase Aircraft, in St. Paul, Minnesota. NAC was in the process of shutting down as the war ended most contracts, and Parker was looking for new projects to keep the factory running. He was told nothing about the work the team would do, but after being visited by a series of increasingly high-ranking naval officers culminating with James Forrestal, he knew \"something\" was up and decided to give it a try. Norris, Engstrom, and their group incorporated ERA in January 1946, hired forty of their codebreaking colleagues, and moved to the NAC factory. \n", "While attending school in Vancouver, Richard Darling and his elder brother, David Darling, had learned programming with punch cards and had access to the school's computer room outside of hours through one of the school's janitors. Additionally, on weekends, they were allowed to use the Commodore PET computer owned by their father, James, to create a text version of \"Dungeons & Dragons\". Later on, the two brothers and school friend Michael Heibert, whose family possessed a VIC-20 computer, founded Darbert Computers and created video game clones of popular games, such as \"Galaxian\" and \"Defender\".\n", "Perhaps the leading computer penetration expert during these formative years was James P. Anderson, who had worked with the NSA, RAND, and other government agencies to study system security. In early 1971, the U.S. Air Force contracted Anderson's private company to study the security of its time-sharing system at the Pentagon. In his study, Anderson outlined a number of major factors involved in computer penetration. Anderson described a general attack sequence in steps:\n\nBULLET::::1. Find an exploitable vulnerability.\n\nBULLET::::2. Design an attack around it.\n\nBULLET::::3. Test the attack.\n\nBULLET::::4. Seize a line in use.\n\nBULLET::::5. Enter the attack.\n", "The Secret History of Hacking\n\nThe Secret History of Hacking is a 2001 documentary film that focuses on phreaking, computer hacking and social engineering occurring from the 1970s through to the 1990s. Archive footage concerning the subject matter and (computer generated) graphical imagery specifically created for the film are voiced over with narrative audio commentary, intermixed with commentary from people who in one way or another have been closely involved in these matters.\n", "BULLET::::- In the first encounter between a computer and a master-rated chess player in a tournament, the \"Mac Hack\" computer program designed by Richard Greenblatt of the Massachusetts Institute of Technology almost defeated another MIT student, Carl Wagner, who was rated at \"a little above master\" by the United States Chess Federation. Wagner was playing at the monthly chess club tournament at the YMCA building in Boylston, Massachusetts, while the Mac Hack (entered in the tournament as \"Robert Q. Computer\") remained at MIT while the moves and responses were relayed by teletype.\n", "BULLET::::- The FBI, Secret Service, Middlesex County NJ Prosecutor's Office and various local law enforcement agencies execute seven search warrants concurrently across New Jersey on July 12, 1985, seizing equipment from BBS operators and users alike for \"complicity in computer theft\", under a newly passed, and yet untested criminal statute. 
This is famously known as the Private Sector Bust, or the 2600 BBS Seizure, and implicated the Private Sector BBS sysop, Store Manager (also a BBS sysop), Beowulf, Red Barchetta, The Vampire, the NJ Hack Shack BBS sysop, and the Treasure Chest BBS sysop.\n\nSection::::1980s.:1986.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-12831
Why are tax havens mostly islands or small countries?
The whole point of a tax haven is that taxes are much lower than anywhere else. If you tried to run a large country like that, you wouldn’t be able to pay for a military, social services, or other essential stuff. That’s why tax havens are usually in very safe places (no need for a military) and have small populations with a lot of rich people (no need for social spending).
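A toy calculation makes the answer's arithmetic concrete. Every number below is hypothetical, invented purely for this illustration: the point is only that a fixed stream of registration and licensing fees can cover a microstate's public spending, while it is a rounding error against a large country's budget.

# Hypothetical illustration of why low-tax policy "pays" for a microstate
# but not for a large country. All figures are invented for the example.

def budget_gap(population, fee_income, per_capita_spend):
    # Offshore fee revenue minus the cost of providing public services
    # (defence, health, welfare) to the resident population.
    return fee_income - population * per_capita_spend

FEE_INCOME = 1e9           # hypothetical annual offshore-fee revenue
PER_CAPITA_SPEND = 10_000  # hypothetical public spending per resident

for name, population in [("microstate", 60_000), ("large country", 60_000_000)]:
    gap = budget_gap(population, FEE_INCOME, PER_CAPITA_SPEND)
    print(f"{name}: {gap / 1e9:+,.1f} bn")

# microstate: +0.4 bn   (fees alone can fund the state)
# large country: -599.0 bn   (low taxes cannot pay for services at scale)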
[ "Section::::History.\n\nSection::::History.:General phases.\n\nWhile areas of low taxation are recorded in Ancient Greece, tax academics identify what we know as tax havens as being a modern phenomenon, and note the following phases in their development:\n", "BULLET::::- *♣Luxembourg – one of the largest Sink OFCs in the world (a terminus for many corporate tax havens, especially Ireland and the Netherlands).\n\nBULLET::::- *♣Hong Kong – the \"Luxembourg of Asia\", and almost as large a Sink OFC as Luxembourg; tied to APAC's largest corporate tax haven, Singapore.\n\nSovereign or semi-sovereign states that feature mainly as traditional tax havens (but have non-zero tax rates), include:\n\nBULLET::::- ♣Cyprus – damaged its reputation during the financial crisis when the Cypriot banking system nearly collapsed, however reappearing in top 10 lists.\n", "BULLET::::- \"Emerging economy-based tax havens\". As well as the dramatic rise in OFCs, from the late 1960s onwards, new tax havens began to emerge to service developing and emerging markets, which became Palan's third group. The first Pacific tax haven was Norfolk Island (1966), a self-governing external territory of Australia. It was followed by Vanuatu (1970–71), Nauru (1972), the Cook Islands (1981), Tonga (1984), Samoa (1988), the Marshall Islands (1990), and Nauru (1994). All these havens introduced familiar legislation modeled on the successful British Empire and European tax havens, including near-zero taxation for exempt companies, and non-residential companies, Swiss-style bank secrecy laws, trust companies laws, offshore insurance laws, flags of convenience for shipping fleets and aircraft leasing, and beneficial regulations for new online services (e.g. gambling, pornography, etc.).\n", "As discussed in , most of these tax havens date from the late 1960s and effectively copied the structures and services of the above groups. Most of these tax havens are not OECD members, or in the case of British Empire-related tax havens, don't have a senior OECD member at their core. Some have suffered setbacks during various OECD initiatives to curb tax havens (e.g. Vanuatu and Samoa). However, others such as Taiwan (for AsiaPAC), and Mauritius (for Africa), have grown materially in the past decades. Taiwan has been described as \"Switzerland of Asia\", with a focus on secrecy. While no Emerging market-related tax haven ranks in the five major global Conduit OFCs or any lists, both Taiwan and Mauritius rank in the top ten global Sink OFCs.\n", "The post–2010 rise in quantitative techniques of identifying tax havens has resulted in a more stable list of the largest tax havens. Dharmapala notes that as corporate BEPS flows dominate tax haven activity, these are mostly corporate tax havens. Nine of the top ten tax havens in Gabriel Zucman's June 2018 study also appear in the top ten lists of the two other quantitative studies since 2010. Four of the top five Conduit OFCs are represented; however, the UK only transformed its tax code in 2009–2012. 
All five of the top 5 Sink OFCs are represented, although Jersey only appears in the Hines 2010 list.\n", "(♣) Appears on the James Hines 2010 list of 52 tax havens; seventeen of the twenty locations below are on the James Hines 2010 list.\n\n(Δ) Identified on the largest OECD 2000 list of 35 tax havens (the OECD list only contained Trinidad & Tobago by 2017); only four locations below were ever on an OECD list.\n\n(↕) Identified on the European Union's first 2017 list of 17 tax havens; only one location below is on the EU 2017 list.\n\nSovereign states that feature mainly as major corporate tax havens are:\n", "Estimating the financial scale of tax havens is complicated by their inherent lack of transparency. Even jurisdictions that comply with OECD–transparency requirements such as Ireland, Luxembourg, and the Netherlands provide alternate secrecy tools (e.g. Trusts, QIAIFs and ULLs). For example, when the EU Commission discovered Apple's tax rate in Ireland was 0.005%, they found Apple had used Irish ULLs to avoid filing Irish public accounts since the early 1990s.\n", "While tax havens are diverse and varied, tax academics sometimes recognise three major \"groupings\" of tax havens when discussing the history of their development:\n\nSection::::Groupings.:European-related tax havens.\n", "The longest list from \"Non–governmental, Quantitative\" research on tax havens is the University of Amsterdam CORPNET July 2017 Conduit and Sink OFCs study, at 29 (5 Conduit OFCs and 24 Sink OFCs). The following are the 20 largest (5 Conduit OFCs and 15 Sink OFCs), which reconcile with other main lists as follows:\n\n(*) Appears in as a in all three quantitative lists, Hines 2010, ITEP 2017 and Zucman 2018 (above); all nine such s are listed below.\n", "The expansion of the EU has motivated major improvements in tax-related transparency. The EU requires members to exchange tax information using automatic systems by the end of 2011, which assures that Austria, Belgium and Luxembourg are coming into compliance. Further, since January 2007 former tax havens (noted for their banking secrecy and low tax rates for foreign investments) in Bermuda, the Channel Islands (Jersey and Guernsey) and the Isle of Man have achieved “white list” status due to their EU ties.\n", "BULLET::::- 2017. The EU Commission produces its first formal list of tax havens with 17 countries on its 2017 blacklist and 47 on its 2017 greylist; however, as with the previous 2010 OECD list, none of the jurisdictions are OECD or EU–28 countries, nor are they in the list of .\n", "BULLET::::- 2010. James R. Hines Jr. publishes a list of 52 tax havens which, unlike all past tax haven lists, is scaled quantitatively by analysing corporate investment flows. The Hines 2010 list was the first to estimate the ten largest global tax havens, only two of which, Jersey and the British Virgin Isles, were on the OECD's 2000 list.\n", "In June 2018, another joint-IMF study showed that 8 \"pass-through economies\", namely the Netherlands, Luxembourg, Hong Kong SAR, the British Virgin Islands, Bermuda, the Cayman Islands, Ireland, and Singapore, host more than 85 per cent of the world's investment in special purpose entities, which are often set up for tax reasons.\n\n(*) One of the largest 10 tax havens by James R. Hines Jr. in 2010 (the Hines 2010 List).\n\n(†) Identified as one of the 5 Conduits (Ireland, Singapore, Switzerland, the Netherlands, and the United Kingdom), by CORPNET in 2017.\n", "In several research papers, James R. 
Hines Jr. showed that tax havens were typically small but well-governed nations and that being a tax haven had brought significant prosperity. In 2009, Hines and Dharmapala suggested that roughly 15% of countries are tax havens, but they wondered why more countries had not become tax havens given the observable economic prosperity it could bring.\n", "In November 2009, Michael Foot, a former Bank of England official and Bahamas bank inspector, delivered an integrated report on the three British Crown Dependencies (Guernsey, Isle of Man and Jersey), and the six Overseas Territories (Anguilla, Bermuda, British Virgin Islands, Cayman Islands, Gibraltar, Turks and Caicos Islands), \"to identify the opportunities and challenges as offshore financial centres\", for the HM Treasury.\n\nSection::::Groupings.:Emerging market-related tax havens.\n", "In December 2017, the EU Commission adopted a \"blacklist\" of territories to encourage compliance and cooperation: American Samoa, Bahrain, Barbados, Grenada, Guam, South Korea, Macau, The Marshall Islands, Mongolia, Namibia, Palau, Panama, Saint Lucia, Samoa, Trinidad and Tobago, Tunisia, United Arab Emirates. In addition, the Commission produced a \"greylist\" of 47 jurisdictions that had already committed to cooperate with the EU to change their rules on tax transparency and cooperation. Only one of the EU's 17 blacklisted tax havens, namely Samoa, was in above. The EU lists did not include any OECD or EU jurisdictions, or any of the . A few weeks later in January 2018, EU Taxation Commissioner Pierre Moscovici called Ireland and the Netherlands \"tax black holes\". After only a few months the EU reduced the blacklist further, and by November 2018, it contained only 5 jurisdictions: American Samoa, Guam, Samoa, Trinidad & Tobago, and the US Virgin Islands. However, by March 2019, the EU blacklist was expanded to 15 jurisdictions including Bermuda, a and the 5th largest Sink OFC.\n", "BULLET::::- *♣Cayman Islands, also features as a major U.S. corporate tax haven; 6th most popular destination for U.S. corporate tax inversions.\n\nBULLET::::- ♣ΔGibraltar – like the Isle of Man, has declined due to concerns, even by the U.K., over its practices.\n\nBULLET::::- ♣Mauritius – has become a major tax haven for both SE Asia (especially India) and African economies, and now ranks 8th overall.\n\nBULLET::::- Curacao – the Dutch dependency ranked 8th on Oxfam's tax haven list, and the 12th largest Sink OFC, and recently made the EU's greylist.\n", "Section::::Data leaks.\n\nBecause of their secrecy, some tax havens have been subject to public and non-public disclosures of client account data, the most notable being:\n\nSection::::Data leaks.:Liechtenstein tax affair (2008).\n", "The creation of IP-based BEPS tools requires advanced legal and tax structuring capabilities, as well as a regulatory regime willing to carefully encode the complex legislation into the jurisdiction's statute books (note that BEPS tools bring increased risks of tax abuse by the domestic tax base in the corporate tax haven's own jurisdiction, see for an example). Modern corporate tax havens, therefore, tend to have large global legal and accounting professional service firms in-situ (many classical tax havens lack this) who work with the government to build the legislation. In this regard, havens are accused of being captured states by their professional services firms. 
The close relationship between Ireland's International Financial Services Centre professional service firms and the State in Ireland is often described as the \"green jersey agenda\". The speed at which Ireland was able to replace its double Irish IP-based BEPS tool is a noted example.\n", "For example, all of the Top 10 tax havens, featured in the various post–2010 tax haven lists, bar the British Virgin Islands and Puerto Rico, appeared in the shorter of 22 OFCs. The British Virgin Islands and Puerto Rico were not included in the IMF 2007 study due to data issues.\n\nSection::::Definitions.:Conduit and Sink OFCs.\n", "(†) Identified as one of the 5 Conduits by CORPNET in 2017; the above list has 5 of the 5.\n\n(‡) Identified as one of the largest 24 Sinks by CORPNET in 2017; the above list has 23 of the 24 (Guyana missing).\n\n(↕) Identified on the European Union's first 2017 list of 17 tax havens; the above list contains 8 of the 17.\n", "Tax havens have high GDP-per-capita rankings, as their \"headline\" economic statistics are artificially inflated by the BEPS flows that add to the haven's GDP, but are not taxable in the haven. As the largest facilitators of BEPS flows, corporate-focused tax havens, in particular, make up most of the top 10-15 GDP-per-capita tables, excluding oil and gas nations (see table below). Research into tax havens suggests a high GDP-per-capita score, in the absence of material natural resources, as an important proxy indicator of a tax haven. At the core of the FSF-IMF definition of an offshore financial centre is a country where the financial BEPS flows are out of proportion to the size of the indigenous economy. Apple's Q1 2015 \"leprechaun economics\" BEPS transaction in Ireland was a dramatic example, which caused Ireland to abandon its GDP and GNP metrics in February 2017, in favour of a new metric, modified gross national income, or GNI*.\n", "Some notable authors on tax havens describe them as \"captured states\" by their offshore finance industry, where the legal, taxation and other requirements of the professional service firms operating from the tax haven are given higher priority over any conflicting State needs. The term is particularly used for smaller tax havens, with examples being Antigua, the Seychelles, and Jersey. However, the term \"captured state\" has also been used for larger and more established OECD and EU offshore financial centres or tax havens. Ronen Palan has noted that even where tax havens started out as \"trading centres\", they can eventually become \"captured\" by \"powerful foreign finance and legal firms who write the laws of these countries which they then exploit\". Tangible examples include the public disclosure in 2016 of Amazon Inc's Project Goldcrest tax structure, which showed how closely the State of Luxembourg worked with Amazon for over 2 years to help it avoid global taxes. Other examples include how the Dutch Government removed provisions to prevent corporate tax avoidance by creating the Dutch Sandwich BEPS tool, which Dutch law firms then marketed to US corporations:\n", "The studies capture the rise of Ireland and Singapore, both major regional headquarters for some of the largest BEPS tool users, Apple, Google and Facebook. In Q1 2015, Apple completed the largest BEPS action in history, when it shifted US$300 billion of IP to Ireland, which Nobel-prize economist Paul Krugman called \"leprechaun economics\". 
In September 2018, using TCJA repatriation tax data, the NBER listed the \"key tax havens\" as: \"Ireland, Luxembourg, Netherlands, Switzerland, Singapore, Bermuda and [the] Caribbean havens\".\n", "BULLET::::- The author estimates in the book that around $12 trillion, a quarter of the world's wealth, goes untaxed in tax havens. If banks and companies were included, the amount would be at least twice that. Every FTSE 100 company has subsidiaries or partners in tax havens to avoid tax.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-08353
Why do massive updates to games give barely any extra space to the game when people download the base game (after the update is released)?
Because most of the update is actually overwriting files that were already there. So instead of adding to the size of the game, it stays roughly the same, because existing files are simply being replaced with new ones of similar size.
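Mechanically, this is what a file-level patcher does: files whose contents changed are overwritten in place, and only genuinely new files add to the install size. The Python sketch below is a simplified illustration, not the actual updater of any platform; real launchers such as Steam or Battle.net work from signed manifests and byte-level binary diffs, and all names here are invented for the example.

import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path) -> str:
    # Content hash used to decide whether a file actually changed.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def apply_patch(install_dir: Path, patch_dir: Path) -> None:
    # Walk the update payload; overwrite changed files, add new ones.
    for new_file in patch_dir.rglob("*"):
        if not new_file.is_file():
            continue
        target = install_dir / new_file.relative_to(patch_dir)
        if target.exists() and file_hash(target) == file_hash(new_file):
            continue                      # unchanged: nothing to do
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(new_file, target)    # replace in place, or add if new

A multi-gigabyte download that mostly replaces existing archives of similar size leaves the installed footprint almost unchanged, which is exactly the effect the question describes.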
[ "Any app that is ready for updating can be updated faster and more efficiently due to this new system. If, for example, a game that is 300 megabytes is updated with a new racetrack that adds an additional two megabytes to the application's size, only two megabytes will be downloaded instead of 302 megabytes.\n\nSection::::Uses.\n\nSection::::Uses.:Linux.\n", "Cloud saving was introduced on a very limited, game-by-game basis in July 2019, though Epic plans to expand this out after validating the feature.\n", "Game updates are pushed through an update server with major improvements scheduled every six months. Using this method the developer can make rapid fixes or game changes such as the addition of auction houses to the game. Game developers have promised updates such as graphical improvements in the future.\n", "Shortly after the free version of the game comes out, the influx of players starts to affect the online component, which occasionally crashes and disconnects users, causing them to lose progress they have made in the game. The company redeploys its resources and tries to mitigate the situation by incrementally improving the online component. It is clear, however, that a complete rewrite of the online component is needed in order to eliminate the problem entirely. Therefore the company contacts its investors in order to raise additional funds to rebuild the online component.\n", "A phenomenon of additional game content at a later date, often for additional funds, began with digital video game distribution known as downloadable content (DLC). Developers can use digital distribution to issue new storylines after the main game is released, such as Rockstar Games with \"Grand Theft Auto IV\" (\"\" and \"\"), or Bethesda with \"Fallout 3\" and its expansions. New gameplay modes can also become available, for instance, \"Call of Duty\" and its zombie modes, a multiplayer mode for \"Mushroom Wars\" or a higher difficulty level for \"\". Smaller packages of DLC are also common, ranging from better in-game weapons (\"Dead Space\", \"Just Cause 2\"), character outfits (\"LittleBigPlanet\", \"Minecraft\"), or new songs to perform (\"SingStar\", \"Rock Band\", \"Guitar Hero\").\n", "Unlike Xbox 360's emulation of the original Xbox, games do not have to be specifically patched but need to be repackaged in the Xbox One format. Users' digitally-purchased games will automatically appear in their library for download once available. Games on physical media are not executed directly from disc; inserting the disc initiates a download of a repackaged version. As with Xbox One titles, the disc must be inserted during play for validation purposes.\n", "To continue having the ability to view new content, users were forced to apply the patches, which also hardened the security of player applications.\n\nOn 23 May 2007 the Processing Key for the next version of the Media Key Block was posted to the comments page of a Freedom to Tinker blog post.\n", "The option of vanilla servers has been a long-standing request in the \"World of Warcraft\" community. As the game's expansions typically superseded older content, many players felt that a rollback to an earlier version was the only way for them to re-experience old game content. This became especially true when the \"\" expansion reworked the entire game world of the launch version, making some content forever inaccessible. 
Blizzard was aware of this desire, but thought the development overhead of maintaining two divergent versions of the game was too great.\n", "After several weeks, the company comes to a major decision. It will re-release the game as free, instead focusing on selling additional content in the form of in-app purchases. The strategy works and many new users start playing the game. This has two effects:\n\nBULLET::::- The online component of the game becomes overloaded. It becomes clear that an investment is needed to rearchitect the component in order to scale it up without friction\n\nBULLET::::- The company is making little money from in-app purchases, since it doesn’t have a lot of premium content built for the game yet\n", "But even if the maximum level is not increased, the expansion does increase the characters' power in some other way: The rewards for playing the content of the expansion tend to be substantially better than rewards of content released before the expansion, even when the target audience is the same.\n", "During its supported lifetime, software is sometimes subjected to service releases, patches or service packs, sometimes also called \"interim releases\". For example, Microsoft released three major service packs for the 32-bit editions of Windows XP and two service packs for the 64-bit editions. Such service releases contain a collection of updates, fixes, and enhancements, delivered in the form of a single installable package. They may also implement new features. Some software is released with the expectation of regular support. Classes of software that generally involve protracted support as the norm include anti-virus suites and massively multiplayer online games. A good example of a game that utilizes this process is Minecraft, an indie game developed by Mojang, which receives regular \"updates\" featuring new content and bug fixes.\n", "Section::::In other media.\n\nWhile video games are the origin of downloadable content, with movies, books and music also becoming more popular in the digital sphere, experimental DLC has also been attempted. Amazon's Kindle service for example allows updating ebooks, which allows authors to not only update and correct work, but also add content.\n", "Sometimes the game development team creates new content or fixes previous bugs, which means they need every player's client to synchronize with the server. One way a game developer can fix bugs or add new content to a game is through patches. The digital distribution platform will alert the user that an update is available, and the client applies those update patches automatically to ensure every user has the same version of the game content when changes have been made. Some examples of digital distribution platforms include Steam, Origin and Battle.net, which provide the same services when it comes to game clients.\n", "The size of patches may vary from a few bytes to hundreds of megabytes; thus, more significant changes imply a larger size, though this also depends on whether the patch includes entire files or only the changed portion(s) of files. In particular, patches can become quite large when the changes add or replace non-program data, such as graphics and sound files. Such situations commonly occur in the patching of computer games. 
Compared with the initial installation of software, patches usually do not take long to apply.\n", "Expansion packs are most commonly released for PC games, but are becoming increasingly prevalent for video game consoles, particularly due to the popularity of downloadable content. The increasing number of multi-platform games has also led to the release of more expansion packs on consoles, especially stand-alone expansion packs (as described above). \"\", for example, requires the original \"\" to play on the PC, but Xbox 360 versions of both the original \"Tiberium Wars\" and \"Kane's Wrath\" are available, neither of which requires the other.\n", "BULLET::::- Instead of redeploying resources when the system was under stress, it might have introduced a stop-gap solution, such as restricting the number of players online in order to keep the standard for the users that got online. Meanwhile, it would produce new content to prove to its investors that the in-app purchase model will work in the longer term\n\nSection::::Modification with a Drifting Standard.\n\nThe Growth and Underinvestment with a Drifting Standard is a special case of the archetype.\n", "As of 2010 the sale of DLC makes up around 20% of video game sales, a substantial portion of a developer's profit margin. Developers are beginning to use the sale of DLC for an already successful game series to fund the development of new IPs or sequels to existing games.\n\nSection::::Availability.\n", "Traditionally, the service provided each patch in its own proprietary archive file. Occasionally, Microsoft released service packs which bundled all updates released over the course of years for a certain product. Starting with Windows 10, however, all patches are delivered in cumulative packages. On 15 August 2016, Microsoft announced that effective October 2016, all future patches to Windows 7 and 8.1 would become cumulative as with Windows 10. The ability to download and install individual updates would be removed as existing updates are transitioned to this model. This has resulted in increasing download sizes of each monthly update. An analysis done by Computerworld determined that the download size for Windows 7 x64 has increased from 119.4MB in October 2016 to 203MB in October 2017. Initially, Microsoft was very vague about specific changes within each cumulative update package. However, since early 2016, Microsoft has begun releasing more detailed information on the specific changes.\n", "As there is only a fixed amount of space on a console by default, developers do still have to be mindful of the amount of space they can take up, especially if the install is compulsory. Some consoles provide users the ability to expand their storage with larger storage media, provide access to removable storage and release versions of their console with more storage.\n\nSection::::Technology.:Storage.:Cloud gaming.\n", "Section::::Development.:Downloadable content.:Expansion packs.\n\nA variation of downloadable content is expansion packs. Unlike DLC, expansion packs add a whole section to the game that either already exists in the game's code or is developed after the game is released. Expansions add new maps, missions, weapons, and other things that weren't previously accessible in the original game. An example of an expansion is Bungie's \"Destiny\", which had the \"\" expansion. 
The expansion added new weapons, new maps, and higher levels, and remade old missions.\n", "The game also introduces the ability for players to use their armies to raze settlements once they have been conquered. This new feature allows the player to enact a \"Scorched Earth policy\" which destroys the land around the nearby settlement, crippling the enemy's food and money supply. \"Attila\" also lets a faction that did not originally begin the campaign as a horde abandon its settlements at the cost of burning those former settlements, or simply abandon a chosen number of cities which, before being destroyed, will provide a small amount of wealth to the treasury. However, it is advised to analyze which settlements players destroy; recolonizing them would cost a faction a hefty amount of gold, a separate cost from building expenses to reach its former state.\n", "The user-generation aspects for \"Volume\" were inspired by \"The Document of Metal Gear Solid 2\" that was on the \"\" disc, during which \"Metal Gear\" game designer, Hideo Kojima, designed prototypes of levels in real-life using Lego bricks. Bithell designed the in-game level editor to work similarly to Lego, allowing the player to snap predesigned elements into new or existing levels, including the game's core levels. Bithell hopes that \"Volume\" will have an active user-community that will continue to evolve the game over many years, similar to that of \"\" where the player community has continued to work on improving the game five years after release.\n", "Some games include some portions of planned DLC on-disc; a notable example is \"Mass Effect 3\" by BioWare and Electronic Arts. The game disc/distribution included portions of the content in the \"From Ashes\" DLC; when the user purchased the DLC, it unlocked this content and downloaded additional patches to complete the content. According to Electronic Arts, this on-disc DLC was done because they needed to have the appropriate hooks ready to go within the main game, comparing the content on-disc to scaffolding for constructing a building. Specifically, because the content was planned to add a new playable character which then could be played in the main campaign, they needed to have that content on-disc for all. From BioWare's perspective, having some of that support already in the game allowed them to finish off the main content of \"Mass Effect 3\" and turn it over to Electronic Arts to publish on schedule, thus allowing them to focus on completing the \"From Ashes\" content at their own pace, which would be downloaded once the DLC was purchased. However, players felt they were being cheated by Electronic Arts since the majority of content was already on disc, and believed Electronic Arts had simply locked the content away behind a microtransaction to get more money from players.\n", "Mods can extend the shelf life of games, such as \"Half-Life\" (1998), which increased its sales figures over the first three years of its release. According to the director of marketing at Valve, a typical shelf-life for a game would be 12 to 18 months, even if it was a \"mega-hit\". In early 2012, the \"DayZ\" modification for \"ARMA 2\" was released and caused a massive increase in sales for the three-year-old game, putting it in the top spot for online game sales for a number of months and selling over 300,000 units for the game. 
In some cases, modders who are against piracy have created mods that enforce the use of a legal game copy.\n", "Expansions are added to the base game to help prolong the life of the game itself until the company is able to produce a sequel or a new game altogether. Developers may plan out their game's life and already have the code for the expansion in the game, but inaccessible by players, who later unlock these expansions, sometimes for free and sometimes at an extra cost. Some developers make games and add expansions later, so that they could see what additions the players would like to have. There are also expansions that are set apart from the original game and are considered a stand-alone game, such as Ubisoft's expansion \" Freedom's Cry\", which features a different character than the original game.\n" ]
[ "Game updates add to the size of the game." ]
[ "Game updates stay roughly the same because the update overwrites files already there. " ]
[ "false presupposition" ]
[ "Game updates add to the size of the game.", "Game updates add to the size of the game." ]
[ "normal", "false presupposition" ]
[ "Game updates stay roughly the same because the update overwrites files already there. ", "Game updates mostly overwrite files that are already in the game." ]
2018-17870
If someone has a limb amputated, what causes them to feel like it's still there after it's gone? ie. ghost limbs?
The pathways in your brain are strengthened with use. That's why toddlers fall down so much: it's not just that their legs are tiny, it's that they've never done it before. After a year or two they can walk all the time. The neural connections for the muscles and nerves in their legs are reinforced. Now imagine all the things you can do with your arm. Eat food, catch a ball, shake hands, drive, type. A million little things you don't even have to think about anymore, because you've done them your whole life. That experience, that "muscle memory" and object permanence and fine motor control, none of that is actually your arm. It's all in your brain. Those neural connections are strong because they're constantly used. Now imagine you lose your arm. All those memories of everything that you do with your arm, all the sensations and experiences you normally don't even think about, are still there. They're all right there in your brain, even after your arm is gone. You reach out to pick up a glass of water and the pathways for reaching and grabbing activate, even if there's nothing for them to connect to. If you don't think about it, or even sometimes if you do, the motor and sensory pathways that you're expecting to use will fire, and your brain fills in the blanks of what it's expecting to experience.
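The "pathways strengthened with use" idea in this answer corresponds loosely to Hebbian learning ("neurons that fire together wire together"). The sketch below is only an editorial illustration of that one idea, not a model of phantom limbs; the learning rate, iteration count, and starting weight are arbitrary assumptions.

# Toy Hebbian update: a connection weight strengthened by years of
# co-activation does not decay just because the input disappears.
# Purely illustrative; all values are arbitrary.
rate = 0.01
weight = 0.1                 # limb-to-cortex connection strength

for _ in range(10_000):      # a lifetime of reaching, grabbing, typing
    pre, post = 1.0, 1.0     # limb signal and cortical response co-fire
    weight += rate * pre * post * (1.0 - weight)   # bounded growth

print(f"trained weight: {weight:.3f}")   # ~1.000 after heavy use

# Nothing in this rule weakens the weight when the presynaptic input
# stops arriving, which mirrors the answer's point: the learned pathway
# outlives the limb and can still be driven by whatever input reaches it.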
[ "A large proportion of amputees (50–80%) experience the phenomenon of phantom limbs; they feel body parts that are no longer there. These limbs can itch, ache, burn, feel tense, dry or wet, locked in or trapped or they can feel as if they are moving. Some scientists believe it has to do with a kind of neural map that the brain has of the body, which sends information to the rest of the brain about limbs regardless of their existence. Phantom sensations and phantom pain may also occur after the removal of body parts other than the limbs, e.g. after amputation of the breast, extraction of a tooth (phantom tooth pain) or removal of an eye (phantom eye syndrome).\n", "Section::::Applications and example.:Phantom limbs.\n\nIn the phenomenon of phantom limb sensation, a person continues to feel pain or sensation within a part of their body that has been amputated. This is strangely common, occurring in 60–80% of amputees. An explanation for this is based on the concept of neuroplasticity, as the cortical maps of the removed limbs are believed to have become engaged with the area around them in the postcentral gyrus. This results in activity within the surrounding area of the cortex being misinterpreted by the area of the cortex formerly responsible for the amputated limb.\n", "Phantom limbs are a phenomenon which occurs following amputation of a limb from an individual. In 90–98% of cases, amputees report feeling all or part of the limb or body part still there, taking up space. The amputee may perceive a limb under full control, or paralyzed. A common side effect of phantom limbs is phantom limb pain. The neurophysiological mechanisms by which phantom limbs occur is still under debate. A common theory posits that the afferent neurons, since deafferented due to amputation, typically remap to adjacent cortical regions within the brain. This can cause amputees to report feeling their missing limb being touched when a seemingly unrelated part of the body is stimulated (such as if the face is touched, but the amputee also feels their missing arm being stroked in a specific location). Another facet of phantom limbs is that the efferent copy (motor feedback) responsible for reporting on position to the body schema does not attenuate quickly. Thus the missing body part may be attributed by the amputee to still be in a fixed or movable position.\n", "Section::::Application.\n\nCortical remapping helps individuals regain function from injury.\n\nSection::::Application.:Phantom limbs.\n\nPhantom limbs are sensations felt by amputees that make it feel like their amputated extremity is still there. Sometimes amputees can experience pain from their phantom limbs; this is called phantom limb pain (PLP).\n", "People with amputations have reported phantom limbs. This serves as evidence that the brain is hard-wired to perceive body image, making it notable that sensory input and proprioceptive feedback are not essential in its formation. Losing an anatomical part through amputation sets a person up for complex perceptual, emotional, and psychological responses. Such responses include phantom limb pain, which is the painful feeling some amputees incur after amputation in the area lost. 
Phantom limb pain permits a natural acceptance and use of prosthetic limbs.\n\nSection::::See also.\n\nBULLET::::- \"The Extended Mind\"\n\nBULLET::::- Embodied cognition\n\nBULLET::::- Situated cognition\n\nSection::::References.\n", "Phantom limb\n\nA phantom limb is the sensation that an amputated or missing limb is still attached. Approximately 60 to 80% of individuals with an amputation experience phantom sensations in their amputated limb, and the majority of the sensations are painful. Research continues into the mechanisms underlying phantom limb pain (PLP) and into effective treatments to control it. \n\nSection::::Signs and symptoms.\n\nMost (80% to 100%) of amputees experience a phantom with some non-painful sensations. The amputee may feel very strongly that the phantom limb is still part of the body.\n", "The term \"phantom limb\" was first coined by American neurologist Silas Weir Mitchell in 1871. Mitchell described that \"thousands of spirit limbs were haunting as many good soldiers, every now and then tormenting them\". However, in 1551, French military surgeon Ambroise Paré recorded the first documentation of phantom limb pain when he reported that, \"For the patients, long after the amputation is made, say that they still feel pain in the amputated part\".\n\nSection::::Signs and symptoms.\n\nPhantom pain involves the sensation of pain in a part of the body that has been removed.\n", "A similar phenomenon is unexplained sensation in a body part unrelated to the amputated limb. It has been hypothesized that the portion of the brain responsible for processing stimulation from amputated limbs, being deprived of input, expands into the surrounding brain, (\"Phantoms in the Brain\": V.S. Ramachandran and Sandra Blakeslee) such that an individual who has had an arm amputated will experience unexplained pressure or movement on his face or head.\n", "Phantom limb sensation is any sensory phenomenon (except pain) which is felt at an absent limb or a portion of the limb. It has been known that at least 80% of amputees experience phantom sensations at some time of their lives. Some experience some level of this phantom pain and feeling in the missing limb for the rest of their lives.\n", "Kean begins the chapter with the sad tale of George Dedlow. George Dedlow had fought in the Civil War and in turn had both his arms and both his legs amputated for various reasons. These amputations brought on another neurological phenomenon: phantom limbs. George Dedlow, amongst millions of other war amputees, felt pain in limbs that he did not have.\n", "A neuroscientist, Silas Weir Mitchell, specialized in examining amputees and was fascinated with them. He examined patients who complained of pain or discomfort in their phantom arms, legs, and genitals. Here, Kean ties the motor and sensory cortexes in. When something is amputated, the respective part of the brain for controlling that part goes black (figuratively). The now obsolete part of the brain is quickly taken over by neighboring brain areas, such as the face or arms.\n", "In many cases, the phantom limb aids in adaptation to a prosthesis, as it permits the person to experience proprioception of the prosthetic limb. To support improved resistance or usability, comfort or healing, some type of stump socks may be worn instead of or as part of wearing a prosthesis.\n", "Sensations are recorded most frequently following the amputation of an arm or a leg, but may also occur following the removal of a breast, tooth, or an internal organ. 
Phantom limb pain is the feeling of pain in an absent limb or a portion of a limb. The pain sensation varies from individual to individual.\n", "People who have a limb amputated may still have a confused sense of that limb's existence on their body, known as phantom limb syndrome. Phantom sensations can occur as passive proprioceptive sensations of the limb's presence, or more active sensations such as perceived movement, pressure, pain, itching, or temperature. There are a variety of theories concerning the etiology of phantom limb sensations and experience. One is the concept of \"proprioceptive memory\", which argues that the brain retains a memory of specific limb positions and that after amputation there is a conflict between the visual system, which actually sees that the limb is missing, and the memory system which remembers the limb as a functioning part of the body. Phantom sensations and phantom pain may also occur after the removal of body parts other than the limbs, such as after amputation of the breast, extraction of a tooth (phantom tooth pain), or removal of an eye (phantom eye syndrome).\n", "An interesting phenomenon involving cortical maps is the incidence of phantom limbs (see Ramachandran for review). This is most commonly described in people that have undergone amputations in hands, arms, and legs, but it is not limited to extremities. The phantom limb feeling, which is thought to result from disorganization in the brain map and the inability to receive input from the targeted area, may be annoying or painful. Incidentally, it is more common after unexpected losses than planned amputations. There is a high correlation with the extent of physical remapping and the extent of phantom pain. As it fades, it is a fascinating functional example of new neural connections in the human adult brain.\n", "Phantom limb pain and phantom limb sensations are linked, but must be differentiated from one another. While phantom limb sensations are experienced by those with congenital limb deficiency, spinal cord injury, and amputation, phantom limb pain occurs almost exclusively as a result of amputation. Almost immediately following the amputation of a limb, 90–98% of patients report experiencing a phantom sensation. Nearly 75% of individuals experience the phantom as soon as anesthesia wears off, and the remaining 25% of patients experience phantoms within a few days or weeks. Of those experiencing innocuous sensations, a majority of patients also report distinct painful sensations.\n", "For many years, the dominant hypothesis for the cause of phantom limbs was irritation in the peripheral nervous system at the amputation site (neuroma.) By the late 1980s, Ronald Melzack had recognized that the peripheral neuroma account could not be correct, because many people born without limbs also experienced phantom limbs. According to Melzack the experience of the body is created by a wide network of interconnecting neural structures, which he called the \"neuromatrix.\" \n", "However, a recent study by Tamar R. Makin suggests that instead of PLP being caused by maladaptive plasticity, it may actually be pain induced. The maladaptive plasticity hypothesis suggests that once afferent input is lost from an amputation, cortical areas bordering the same amputation area will begin to invade and take over the area, affecting the primary sensorimotor cortex, seeming to cause PLP. 
Makin now argues that chronic PLP may actually be 'triggered' by \"bottom-up nociceptive inputs or top-down inputs from pain-related brain areas\" and that the cortical maps of the amputation remain intact while the \"inter-regional connectivity\" is distorted.\n", "Phantom limb pain is considered to be caused by functional cortical reorganization, sometimes called maladaptive plasticity, of the primary sensorimotor cortex. Adjustment of this cortical reorganization has the potential to help alleviate PLP. One study taught amputees over a two-week period to identify different patterns of electrical stimuli being applied to their stump to help reduce their PLP. It was found that the training reduced PLP in the patients and reversed the cortical reorganization that had previously occurred. \n", "Section::::Research and theory.\n\nSection::::Research and theory.:Phantom limbs.\n\nWhen an arm or leg is amputated, patients often continue to feel vividly the presence of the missing limb as a \"phantom limb\" (an average of 80%). Building on earlier work by Ronald Melzack (McGill University) and Timothy Pons (NIMH), Ramachandran theorized that there was a link between the phenomenon of phantom limbs and neural plasticity in the adult human brain. To test this theory, Ramachandran recruited amputees, so that he could learn more about whether phantom limbs could \"feel\" a stimulus applied to other parts of the body.\n", "Section::::Pathophysiology.:Peripheral mechanisms.\n\nNeuromas formed from injured nerve endings at the stump site are able to fire abnormal action potentials, and were historically thought to be the main cause of phantom limb pain. Although neuromas are able to contribute to phantom pain, pain is not completely eliminated when peripheral nerves are treated with conduction-blocking agents. Physical stimulation of neuromas can increase C fiber activity, thus increasing phantom pain, but pain still persists once the neuromas have ceased firing action potentials. The peripheral nervous system is thought to have at most a modulation effect on phantom limb pain.\n\nSection::::Pathophysiology.:Spinal mechanisms.\n", "Phantom pain\n\nPhantom pain is a perception that an individual experiences relating to a limb or an organ that is not physically part of the body. Limb loss is a result of either removal by amputation or congenital limb deficiency. However, phantom limb sensations can also occur following nerve avulsion or spinal cord injury.\n", "In 1991, Tim Pons and colleagues at the National Institutes of Health (NIH) showed that the primary somatosensory cortex in macaque monkeys undergoes substantial reorganization after the loss of sensory input. \n", "There is no randomized study in medical literature that has studied the response to amputation in patients who have failed the above-mentioned therapies and who continue to be miserable. Nonetheless, there are reports citing that, on average, about half of the patients will have resolution of their pain, while half will develop phantom limb pain and/or pain at the amputation site. It is likely that, as in any other chronic pain syndrome, the brain becomes chronically stimulated with pain, and late amputation may not work as well as might be expected. In a survey of fifteen patients with CRPS Type 1, eleven responded that their life was better after amputation. Since this is the ultimate treatment of a painful extremity, it should be left as a last resort.\n", "Hearing about these results, Vilayanur S. 
Ramachandran hypothesized that phantom limb sensations in humans could be due to reorganization in the human brain's somatosensory cortex. Ramachandran and colleagues illustrated this hypothesis by showing that stroking different parts of the face led to perceptions of being touched on different parts of the missing limb. Later brain scans of amputees showed the same kind of cortical reorganization that Pons had observed in monkeys.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-21743
Why did the US go into prohibition, and why did it end 13 years later?
It went into Prohibition because a small but very dedicated group of people who were convinced that alcohol was the cause of society's ills worked extremely hard to put Prohibition into place. They eventually were able to make it such a disruptive wedge issue that it became one of the only criteria that mattered in getting elected. It ended 13 years later because it turns out that you cannot legislate morality and people LIKE to drink. No amount of hand-wringing and shouting is going to change that, and the small dedicated group could not keep a coalition of people together once the law was passed. I highly recommend the Ken Burns series on Prohibition, which I believe is on Netflix right now.
[ "\"Prohibition\" describes how the consumption and effect of alcoholic beverages in the United States were connected to many different cultural forces including immigration, women's suffrage, and the income tax. Eventually the Temperance movement led to the passing of Prohibition, the Eighteenth Amendment to the U.S. Constitution. Widespread defiance of the law, uneven and unpopular enforcement, and violent crime associated with the illegal trade in alcohol caused increasing dissatisfaction with the amendment, eventually leading to its repeal thirteen years later.\n\nSection::::Episodes.\n", "Kenneth D. Rose, a professor of history at California State University, says that 'the WONPR claimed that prohibition had nurtured a criminal class, created a \"crime wave,\" corrupted public officials, made drinking fashionable, engendered a contempt for rule of law, and set back the progress of \"true temperance.\"' Rose, however, states that a \"prohibition crime wave was rooted in the impressionistic rather than the factual.\" He writes:\n", "Repeal of Prohibition in the United States\n\nThe repeal of Prohibition in the United States was accomplished with the passage of the Twenty-first Amendment to the United States Constitution on December 5, 1933.\n\nSection::::Background.\n", "Mark H. Moore, a professor at Harvard University Kennedy School of Government, stated, with respect to the effects of prohibition:\n", "After the Eighteenth Amendment became law, the United States embraced bootlegging. In just the first six months of 1920 alone, the federal government opened 7,291 cases for Volstead Act violations. In just the first complete fiscal year of 1921, the number of cases violating the Volstead Act jumped to 29,114 violations and would rise dramatically over the next thirteen years.\n", "In 1919, Finland enacted prohibition, as one of the first acts after independence from the Russian Empire. Four previous attempts to institute prohibition in the early 20th century had failed due to opposition from the tsar. After a development similar to the one in the United States during its prohibition, with large-scale smuggling and increasing violence and crime rates, public opinion turned against the prohibition, and after a national referendum where 70% voted for a repeal of the law, prohibition was ended in early 1932.\n", "Prohibition in the 1920s United States, originally enacted to suppress the alcohol trade, drove many small-time alcohol suppliers out of business and consolidated the hold of large-scale organized crime over the illegal alcohol industry. Since alcohol was still popular, criminal organisations producing alcohol were well-funded and hence also increased their other activities. 
Similarly, the War on Drugs, intended to suppress the illegal drug trade, instead increased the power and profitability of drug cartels who became the primary source of the products.\n", "The number of repeal organizations and demand for repeal both increased.\n\nSection::::Organized opposition.:Organizations supporting repeal.\n\nBULLET::::- Association Against the Prohibition Amendment\n\nBULLET::::- Constitutional Liberty League of Massachusetts, a nationwide organization despite its name\n\nBULLET::::- The Crusaders\n\nBULLET::::- Labor's National Committee for Modification of the Volstead Act\n\nBULLET::::- Moderation League of New York, a nationwide organization despite its name\n\nBULLET::::- Molly Pitcher Club\n\nBULLET::::- Republican Citizens Committee Against National Prohibition\n\nBULLET::::- United Repeal Council\n\nBULLET::::- Voluntary Committee of Lawyers\n\nBULLET::::- Women's Committee for Repeal of the 18th Amendment\n\nBULLET::::- Women's Moderation Union\n\nBULLET::::- Women's Organization for National Prohibition Reform\n", "Section::::History.:Development of the prohibition movement.\n\nThe American Temperance Society (ATS), formed in 1826, helped initiate the first temperance movement and served as a foundation for many later groups. By 1835 the ATS had reached 1.5 million members, with women constituting 35% to 60% of its chapters.\n", "Following repeal, public interest in an organized prohibition movement dwindled. However, it survived for a while in a few southern and border states. To this day, there are still counties and parishes within the US known as \"dry\", where the sale of alcohol – liquor, and sometimes wine and beer – is prohibited. Some such counties/parishes/municipalities have adopted \"Moist county\" policies, however, in order to expand tax revenue. Some municipalities regulate when alcohol can be sold; an example is restricting or banning sales on Sunday, under the so-called \"blue laws\".\n\nSection::::South America.\n\nSection::::South America.:Venezuela.\n", "The Cullen–Harrison Act, signed by President Franklin D. Roosevelt on March 22, 1933, authorized the sale of 3.2 percent beer (thought to be too low an alcohol concentration to be intoxicating) and wine, which allowed the first legal beer sales since the beginning of Prohibition on January 16, 1920. In 1933 state conventions ratified the Twenty-first Amendment, which repealed Prohibition. The Amendment was fully ratified on December 5, 1933. Federal laws enforcing Prohibition were then repealed.\n\nSection::::Repeal.:Dry counties.\n", "Prohibition created a black market that competed with the formal economy, which came under pressure when the Great Depression struck in 1929. State governments urgently needed the tax revenue alcohol sales had generated. Franklin Roosevelt was elected in 1932 based in part on his promise to end prohibition, which influenced his support for ratifying the Twenty-first Amendment to repeal Prohibition.\n\nSection::::Repeal.\n", "According to a 2010 review of the academic research on Prohibition, \"On balance, Prohibition probably reduced per capita alcohol use and alcohol-related harm, but these benefits eroded over time as an organized black market developed and public support for NP declined.\" One study reviewing city-level drunkenness arrests concluded that prohibition had an immediate effect, but no long term effect. 
Yet another study, examining \"mortality, mental health and crime statistics\", found that alcohol consumption fell, at first, to approximately 30 percent of its pre-Prohibition level but then, over the next several years, increased to about 60–70 percent of its pre-Prohibition level.\n", "Section::::Background.:Prohibition.:Why did prohibition fail?\n\nProhibition was repealed in 1933 for many reasons. One reason was that there were not enough Prohibition agents to cover all of the United States, and they were paid so poorly that many of them became corrupt and easily bribable. Another reason was the rise of organized crime centering on the lucrative smuggling and selling of alcohol. Not many people paid attention to Prohibition because of this poor enforcement.\n\nSection::::Background.:Prohibition.:Consequences of Prohibition.\n", "Some of the most important women involved in this movement were:\n\nBULLET::::- Marie C. Brehm – Vice Presidential candidate in 1924 – first unambiguously legally qualified woman ever to be nominated for this position\n\nBULLET::::- Rachel Bubar Kelly – Vice Presidential candidate in 1996\n\nBULLET::::- Susanna Madora Salter – First female mayor in the United States. Elected in Argonia, Kansas in 1887\n", "Consumer demand, however, led to a variety of illegal sources for alcohol, especially illegal distilleries and smuggling from Canada and other countries. It is difficult to determine the level of compliance, and although the media at the time portrayed the law as highly ineffective, even if it did not eradicate the use of alcohol, it certainly decreased alcohol consumption during the period. The Eighteenth Amendment was repealed in 1933, with the passage of the Twenty-First Amendment, thanks to a well-organized repeal campaign led by Catholics (who stressed personal liberty) and businessmen (who stressed the lost tax revenue).\n", "Prohibition (miniseries)\n\nProhibition is a 2011 American television documentary miniseries directed by Ken Burns and Lynn Novick with narration by Peter Coyote. The series originally aired on PBS between October 2, 2011 and October 4, 2011. It was funded in part by the National Endowment for the Humanities. It draws heavily from the 2010 book \"Last Call: The Rise and Fall of Prohibition\" by Daniel Okrent.\n\nSection::::Synopsis.\n", "Repeal of Prohibition was accomplished with the ratification of the Twenty-first Amendment on December 5, 1933. Under its terms, states were allowed to set their own laws for the control of alcohol.\n\nSection::::North America.:United States.:Aftermath.\n", "Section::::Repeal.\n", "Though there were significant increases in crimes involved in the production and distribution of illegal alcohol, there was an initial reduction in overall crime, mainly in types of crimes associated with the effects of alcohol consumption, such as public drunkenness. Those who continued to use alcohol tended to turn to organized criminal syndicates. Law enforcement wasn't strong enough to stop all liquor traffic; however, it used \"sting\" operations, such as Prohibition agent Eliot Ness famously using wiretapping to discern secret locations of breweries. The prisons became crowded, which led to fewer arrests for the distribution of alcohol, as well as those arrested being charged with small fines rather than prison time. 
The murder rate fell for two years, but then rose to record highs because this market became extremely attractive to criminal organizations, a trend that reversed the very year prohibition ended. Overall, crime rose 24%, including increases in assault and battery, theft, and burglary.\n", "The first half of the 20th century saw periods of prohibition of alcoholic beverages in several countries:\n\nBULLET::::- 1907 to 1948 in Prince Edward Island, and for shorter periods in other provinces in Canada\n\nBULLET::::- 1907 to 1992 in the Faroe Islands; limited private imports from Denmark were allowed from 1928\n\nBULLET::::- 1914 to 1925 in the Russian Empire and the Soviet Union\n\nBULLET::::- 1915 to 1933 in Iceland (beer was still prohibited until 1989)\n\nBULLET::::- 1916 to 1927 in Norway (fortified wine and beer were also prohibited from 1917 to 1923)\n", "1988: Near the end of the Reagan administration, the Office of National Drug Control Policy was created for central coordination of drug-related legislative, security, diplomatic, research and health policy throughout the government. In recognition of his central role, the director of ONDCP is commonly known as the \"Drug Czar\". The position was raised to cabinet-level status by Bill Clinton in 1993.\n\n1992 Illegal drug use in the U.S. fell to 12 million people.\n\n1993, December 7: Joycelyn Elders, the Surgeon General, said that the legalization of drugs \"should be studied\", causing a stir among opponents.\n", "Section::::Commandant.:Prohibition.\n", "BULLET::::- Independent Order of Rechabites (IOR)\n\nBULLET::::- Methodist Board of Temperance, Prohibition, and Public Morals\n\nBULLET::::- Prohibition Party\n\nBULLET::::- Woman's Christian Temperance Union (WCTU)\n\nSection::::Repeal as a political party issue.\n\nIn 1932 the Democratic Party's platform included a plank for the repeal of Prohibition, and Democratic candidate Franklin D. Roosevelt ran for president of the United States promising repeal of federal Prohibition laws.\n\nA. Mitchell Palmer used his expertise as the Attorney General who first enforced Prohibition to promote a plan to expedite its repeal through state conventions rather than the state legislatures.\n\nSection::::Repeal.\n", "After its repeal, some former supporters openly admitted failure. For example, John D. Rockefeller, Jr., explained his view in a 1932 letter:\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00010
Why does freshly squeezed orange juice taste so different from orange juice in a carton?
The process of making mass-produced orange juice, no matter the brand or what the advertising says, is one I'm afraid you may not want to know. Basically, what it comes down to is that large amounts of orange juice are stored in large vats for up to a year or more, which causes the juice to lose both color and flavor. By the time it's ready to be bottled and shipped, it is basically a colorless and flavorless liquid. So, what they do is add "flavor packs" to give it back the bright orange color and put flavor back in it. Flavor packs made in a lab. They get away with calling it "all natural" or similar buzzwords because the flavor packs, even though they are made by the same companies that make perfumes, are derived from "natural oranges". Simply put, your Simply Orange is not as simple, or as fresh, as they lead people to believe.
[ "Commercial orange juice with a long shelf life is made by pasteurizing the juice and removing the oxygen from it. This removes much of the taste, necessitating the later addition of a flavor pack, generally made from orange products. Additionally, some juice is further processed by drying and later rehydrating the juice, or by concentrating the juice and later adding water to the concentrate.\n", "One common component of flavor packs is ethyl butyrate, a natural aroma that people associate with freshness, and which is removed from juice during pasteurization and storage. \"Cooks Illustrated\" sent juice samples to independent laboratories, and found that while fresh-squeezed juice naturally contained about 1.19 milligrams of ethyl butyrate per liter, juice that had been commercially processed had levels as high as 8.53 milligrams per liter.\n\nSection::::Commercial orange juice and concentrate.:Canned orange juice.\n", "Commercial squeezed orange juice is pasteurized and filtered before being evaporated under vacuum and heat. After removal of most of the water, this concentrate, about 65% sugar by weight, is then stored at about . Essences, Vitamin C, and oils extracted during the vacuum concentration process may be added back to restore flavor and nutrition (see below).\n\nWhen water is added to freshly thawed concentrated orange juice, it is said to be \"reconstituted\".\n", "Because such processes remove the distinct aroma compounds that give orange juice a fresh-squeezed taste, producers later add back these compounds in a proprietary mixture, called a \"flavor pack\", in order to improve the taste and to ensure a consistent year-round taste. The compounds in the flavor packs are derived from orange peels. Producers do not mention the addition of flavor packs on the label of the orange juice.\n\nSection::::Commercial orange juice and concentrate.:Types of orange.\n", "Fresh-squeezed, the unpasteurized juice is the closest to consuming the orange itself. This version of the juice consists of oranges that are squeezed and then bottled without having any additives or flavor packs inserted. The juice is not subjected to pasteurization. Depending on storage temperature, freshly squeezed, unpasteurized orange juice can have a shelf life of 5 to 23 days.\n\nSection::::Commercial orange juice and concentrate.:Major orange juice brands.\n", "Recently, many brands of organic orange juices have become available on the market.\n\nSection::::Processing and manufacture.\n\nSection::::Processing and manufacture.:Manufacture of frozen concentrated orange juice.\n", "FCOJ producers generally use evaporators to remove much of the water from the juice in order to decrease its weight and decrease transportation costs. Other juice producers generally deaerate the juice so that it can be sold much later in the year.\n", "The concentrated juice is held in a cold wall tank and is stored at or below 35 °F to prevent browning and development of undesired flavors. Next, a small amount of fresh juice is added to the concentrated juice to restore natural and fresh flavors of orange juice that have been lost through the concentration process. Specific cold-pressed orange oils are used to restore the lost aroma and volatile flavors. After the addition of fresh juice, the brix content is reduced to 42 °F. The fresh juice is referred to as \"cut-back\" in the industry and attributes to 7-10% of the total juice. Orange peel oil is also added if the oil content is below the required level. 
The concentrate is then further cooled in a continuous cooler or cold wall tank to 20 to 25 °F. The concentrate is canned using steam injection methods to sterilize the lid and develop a vacuum in the can. The cans then undergo final freezing, where they are conveyed on a perforated belt in an air blast at -40 °F. After freezing, the product is stored at 0 °F in a refrigerated warehouse.\n", "By 1949, orange juice processing plants in Florida were producing over 10 million gallons of concentrated orange juice. Consumers were captivated by the idea of concentrated canned orange juice as it was affordable, tasty, convenient, and a vitamin-C packed product. The preparation was simple: thaw the juice, add water, and stir. However, by the 1980s, food scientists developed a more fresh-tasting juice known as reconstituted, ready-to-serve juice. Eventually in the 1990s, \"not from concentrate\" (NFC) orange juice was developed and gave consumers an entirely new perspective on orange juice, transforming the product from can to freshness in a carton. Orange juice is a common breakfast beverage in the United States.\n", "Section::::Effects on nutritional and sensory characteristics of foods.:Sensory effects.\n\nPasteurization also has a small but measurable effect on the sensory attributes of the foods that are processed. In fruit juices, pasteurization may result in loss of volatile aroma compounds. Fruit juice products undergo a deaeration process prior to pasteurization that may be responsible for this loss. Deaeration also minimizes the loss of nutrients like vitamin C and carotene. To prevent the decrease in quality resulting from the loss in volatile compounds, volatile recovery, though costly, can be utilized to produce higher-quality juice products.\n", "Drum drying or freezing are two processes for preserving juice solids. When product enzymes are deactivated through heat stabilization, they are frozen. Light and air are used for drum-drying, but this process often decreases the flavor and color of the solids.\n\nSection::::Use.\n", "The company is a major purchaser of Florida oranges for its orange juice, but also imports orange juice from Brazil and Mexico. Simply Orange uses a computer-modeled blending of orange juice sources, intending for the consumer to have a uniform taste year-round.\n\nSection::::Company history.\n", "Oranges have a limited growing season, and because there is demand for juice year-round, an unspecified quantity of juice (some or potentially all) is deaerated and then stored for future packaging in chilled tanks to preserve quality. The aseptic tanks protect the juice from oxygen and light and hold the liquid at optimal temperatures just above freezing to maintain maximum nutrition. It has been reported that deaerated juice no longer tastes like oranges, and must be supplemented before consumption with orange oils in order to recreate the orange flavour. Pulp may be blended in at this point, too, depending on the product.\n", "A small fraction of fresh orange juice is canned. Canned orange juice retains vitamin C much better than bottled juice. The canned product loses flavor, however, when stored at room temperature for more than 12 weeks. In the early years of canned orange juice, the acidity of the juice caused the juice to have a metallic taste. In 1931, Dr. 
Philip Phillips developed a flash pasteurization process that eliminated this problem and significantly increased the market for canned orange juice.\n\nSection::::Commercial orange juice and concentrate.:Freshly squeezed, unpasteurized juice.\n", "After the juice is filtered, it may be concentrated in evaporators, which reduce the size of juice by a factor of 5, making it easier to transport and increasing its expiration date. Juices are concentrated by heating under a vacuum to remove water, and then cooling to around 13 degrees Celsius. About two thirds of the water in a juice is removed. The juice is then later reconstituted, in which the concentrate is mixed with water and other factors to return any lost flavor from the concentrating process. Juices can also be sold in a concentrated state, in which the consumer adds water to the concentrated juice as preparation.\n", "In the United Kingdom, orange juice from concentrate is a product of concentrated fruit juice with the addition of water. Any lost flavour or pulp of the orange juice during the initial concentration process may be restored in the final product to be equivalent to an average type of orange juice of the same kind. Any restored flavour or pulp must come from the same species of orange. Sugar may be added to the orange juice for regulating the acidic taste or sweetening, but must not exceed 150g per litre of orange juice. Across the UK, the final orange juice from concentrate product must contain a minimum Brix level of 11.2, excluding the additional sweetening ingredients. Vitamins and minerals may be added to the orange juice in accordance with Regulation (EC) 1925/2006.\n", "In the United States, orange juice is regulated and standardized by the Food and Drug Administration (FDA or USFDA) of the United States Department of Health and Human Services. According to the FDA, orange juice from concentrate is a mixture of water with frozen concentrated orange juice or concentrated orange juice for manufacturing. Additional ingredients into the mixture may include fresh/frozen/pasteurized orange juice from mature oranges, orange oil, and orange pulp. Furthermore, one or more of the following optional sweetening ingredients may be added: sugar, sugar syrup, invert sugar, invert sugar syrup, dextrose, corn syrup, dried corn syrup, glucose syrup, and dried glucose syrup. The orange juice must contain a minimum Brix level of 11.8, which indicates the percentage of orange juice soluble solids, excluding any added sweetening ingredients.\n", "With oranges, colour cannot be used as an indicator of ripeness because sometimes the rinds turn orange long before the oranges are ready to eat. Tasting them is the only way to know whether or not they are ready to eat.\n", "A variation on a basic traditional recipe:\n\nBULLET::::- 1 oz. freshly squeezed orange juice\n\nBULLET::::- 3/4 to 1 oz. freshly squeezed lime juice\n\nBULLET::::- 1/2 oz. real pomegranate-based grenadine\n\nBULLET::::- 1/4 tsp. 
ancho chile powder or 3 dashes hot sauce\n\nBULLET::::- 1-2 slices jalapeño\n\nA basic tomato-based recipe (non-traditional):\n\nBULLET::::- 2 parts freshly squeezed tomato juice\n\nBULLET::::- 1 part freshly squeezed orange juice\n\nBULLET::::- 1/2 part freshly squeezed lime juice\n\nBULLET::::- Fresh minced green chile to taste\n\nMexico City style Sangrita:\n\nBULLET::::- 5 parts tomato juice\n\nBULLET::::- 2 parts fresh lime juice\n\nBULLET::::- 1 part orange juice\n", "The oranges then go through roller conveyors, which expose all sides of the fruit. The roller conveyors are efficiently built: they are well lighted and installed at a convenient height and width to ensure all inspectors can reach the fruit to determine inadequacies. Some reasons why fruit may be rejected include indication of mold, rot, and ruptured peels. Afterwards, the oranges are separated based on size through machines prior to juice extraction. There are a number of different ways orange juice industry leaders extract their oranges. Some common methods include halving the fruit and pressing/reaming the orange to extract juice from the orange. One instrument inserts a tube through the orange peel and forces the juice out through the tube by squeezing the entire orange. Despite the variety of machines used to extract juices, all machines have commonalities in that they are rugged, fast, easy to clean and have the ability to reduce peel extractives into the juice. The extracted juice product does not contain the orange peel, but it may contain pulp and seeds, which are removed by finishers.\n", "Finishers have a screw-type design that comprises a conical helical screw enclosed in a cylindrical screen with perforations the size of 0.020 to 0.045 inches. Thereafter, the finished orange juice flows through blending tanks where the juice is tested for acid and soluble solids. At this stage, sugar can be added to the juice depending on whether the product will be a sweetened or unsweetened beverage. Following blending, the orange juice is deaerated to remove the air that is incorporated into the juice during extraction. The benefits of deaeration include the elimination of foaming, which improves the uniformity of can fill, and improved efficiency of the heat exchanger. Orange peel oil is essential for maximum flavor, but according to U.S. standards for Grades of Canned Orange Juice, 0.03% of recoverable oil is permitted. Deoiling through the use of vacuum distillation is the mechanism used to regulate the amount of peel oil in the juice. Condensation separates the oil and the aqueous distillate, which is returned to the juice.\n", "BULLET::::- Midsweet: grown in Florida, it is a newer scion similar to the Hamlin and Pineapple varieties; it is hardier than Pineapple and ripens later; the fruit production and quality are similar to those of the Hamlin, but the juice has a deeper color\n\nBULLET::::- Moro Tarocco: grown in Italy, it is oval, resembles a tangelo, and has a distinctive caramel-colored endocarp; this color is the result of a pigment called anthocyanin, not usually found in citruses, but common in red fruits and flowers; the original mutation occurred in Sicily in the seventeenth century\n", "The processing of orange to frozen concentrated orange juice begins with testing the orange fruit for quality to ensure it is safe for the process. Then the fruit is cleaned and washed thoroughly and orange oil is taken from the peel of the orange. 
Next, the juice is extracted from the orange and is screened in order to remove seeds and large pieces of pulp. The juice is then heated to 190 to 200 °F in order to inactivate natural enzymes found in the juice. The concentration step occurs in a high vacuum evaporator where the water content in the juice is evaporated while the juice sugar compounds and solids are concentrated. The vacuum evaporator is a low temperature falling-film mechanism, which operates at a temperature between 60 and 80 °F. Evaporators work in a continuous manner in that fresh juice is added as concentrate is being constantly removed. The concentration process increases the soluble solid portion of the juice from 12 °Brix to 60-70 °Brix.\n", "Orange juice that is pasteurized and then sold to consumers without having been concentrated is labeled as \"not from concentrate\". Just as \"from concentrate\" processing, most \"not from concentrate\" processing reduces the natural flavor from the juice. The largest producers of \"not from concentrate\" use a production process where the juice is placed in aseptic storage, with the oxygen stripped from it, for up to a year.\n", "Section::::Attributes.\n\nSection::::Attributes.:Sensory factors.\n\nThe taste of oranges is determined mainly by the relative ratios of sugars and acids, whereas orange aroma derives from volatile organic compounds, including alcohols, aldehydes, ketones, terpenes, and esters. Bitter limonoid compounds, such as limonin, decrease gradually during development, whereas volatile aroma compounds tend to peak in mid– to late–season development. Taste quality tends to improve later in harvests when there is a higher sugar/acid ratio with less bitterness. As a citrus fruit, the orange is acidic, with pH levels ranging from 2.9 to 4.0.\n" ]
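The passages above give the key numbers for the concentration step: fresh juice at roughly 12 °Brix is evaporated to a 60–70 °Brix concentrate. A minimal sketch of the implied mass balance, assuming °Brix approximates the soluble-solids mass fraction and that solids are conserved during evaporation (the function name and the 65 °Brix midpoint are illustrative, not from the source):

```python
def water_removed_fraction(brix_in: float, brix_out: float) -> float:
    """Fraction of the feed mass evaporated as water.

    Assumes degrees Brix approximate the soluble-solids mass fraction
    and that all solids survive evaporation (a simplification).
    """
    # Solids balance: mass_in * brix_in == mass_out * brix_out
    concentrate_mass_per_unit_feed = brix_in / brix_out
    return 1.0 - concentrate_mass_per_unit_feed

# 12 degree Brix juice taken to 65 degree Brix (midpoint of the quoted 60-70):
print(f"{water_removed_fraction(12, 65):.0%} of the feed mass is evaporated")
# -> about 82%, consistent with the several-fold volume reduction quoted above
```

On this estimate the concentrate retains less than a fifth of the original mass, which is why shipping concentrate and reconstituting it with water later is so much cheaper than shipping single-strength juice.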
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-06974
Has vacuum negative weight in air, because it is lighter than air itself? Would a vacuum balloon fly like a helium balloon?
Theoretically, a vacuum balloon would be lighter than a helium balloon, but its structure would have to be very heavy to resist the pressure difference from the surrounding air, so in practice it wouldn’t float.
[ "Vacuum airship\n\nA vacuum airship, also known as a vacuum balloon, is a hypothetical airship that is evacuated rather than filled with a lighter-than-air gas such as hydrogen or helium. First proposed by Italian Jesuit priest Francesco Lana de Terzi in 1670, the vacuum balloon would be the ultimate expression of lifting power per volume displaced.\n\nSection::::History.\n", "Vacuum airships would replace the helium gas with a near-vacuum environment. Having no mass, the density of this body would be near to 0.00 g/l, which would theoretically be able to provide the full lift potential of displaced air, so every liter of vacuum could lift 1.28 g. Using the molar volume, the mass of 1 liter of helium (at 1 atmospheres of pressure) is found to be 0.178 g. If helium is used instead of vacuum, the lifting power of every liter is reduced by 0.178 g, so the effective lift is reduced by 14%. A 1-liter volume of hydrogen has a mass of 0.090 g.\n", "where formula_28 formula_29 and formula_30 formula_31 are pressure and density of standard Earth atmosphere at sea level, formula_32 and formula_33 are molar mass (kg/kmol) and temperature (K) of atmosphere at floating area.\n", "Spherical vacuum body airships using the Magnus effect and made of carbyne or similar superhard carbon are glimpsed in Neal Stephenson's novel \"The Diamond Age\".\n\nIn \"Maelstrom\" and \"Behemoth:B-Max\", author Peter Watts describes various flying devices, such as \"botflies\" and \"lifters\" that use \"vacuum bladders\" to keep them airborne.\n", "From the analysis by Akhmeteli and Gavrilin:\n\nThe total force on a hemi-spherical shell of radius formula_1 by an external pressure formula_2 is formula_3. Since the force on each hemisphere has to balance along the equator the compressive stress will be, assuming formula_4 \n\nwhere formula_6 is the shell thickness.\n\nNeutral buoyancy occurs when the shell has the same mass as the displaced air, which occurs when formula_7, where formula_8 is the air density and formula_9 is the shell density, assumed to be homogeneous. Combining with the stress equation gives \n", "The density of air at standard temperature and pressure is 1.28 g/l, so 1 liter of displaced air has sufficient buoyant force to lift 1.28 g. Airships use a bag to displace a large volume of air; the bag is usually filled with a lightweight gas such as helium or hydrogen. The total lift generated by an airship is equal to the weight of the air it displaces, minus the weight of the materials used in its construction including the gas used to fill the bag.\n", "Objects on the surface of the Earth have weight, although sometimes this weight is difficult to measure. An example is a small object floating in water, which does not appear to have weight since it is buoyed by the water; but it is found to have its usual weight when it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the \"weightless object\" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. 
If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area.\n", "In 1921, Lavanda Armstrong discloses a composite wall structure with a vacuum chamber \"surrounded by a second envelop constructed so as to hold air under pressure, the walls of the envelope being spaced from one another and tied together\", including a honeycomb-like cellular structure, however leaving some uncertainty how to achieve adequate buoyancy given \"walls may be made as thick and strong as desired\".\n\nIn 1983, David Noel discussed the use of geodesic sphere covered with plastic film and \"a double balloon containing pressurized air between the\n\nskins, and a vacuum in the centre\".\n", "In a theoretically perfect situation with weightless spheres, a 'vacuum balloon' would have 7% more net lifting force than a hydrogen-filled balloon, and 16% more net lifting force than a helium-filled one. However, because the walls of the balloon must be able to remain rigid without imploding, the balloon is impractical to construct with all known materials. Despite that, sometimes there is discussion on the topic.\n\nSection::::Gases theoretically suitable for lifting.:Plasma.\n", "Neutral buoyancy is not identical to weightlessness. Gravity still acts on all objects in a neutral buoyancy tank; thus, astronauts in neutral buoyancy training still feel their full body weight within their spacesuits, although the weight is well-distributed, similar to force on a human body in a water bed, or when simply floating in water. The suit and astronaut together are under no net force, as for any object that is floating, or supported in water, such as a scuba diver at neutral buoyancy. Water also produces drag, which is not present in vacuum.\n", "BULLET::::- Deep space is generally much more empty than any artificial vacuum. It may or may not meet the definition of high vacuum above, depending on what region of space and astronomical bodies are being considered. For example, the MFP of interplanetary space is smaller than the size of the Solar System, but larger than small planets and moons. As a result, solar winds exhibit continuum flow on the scale of the Solar System, but must be considered a bombardment of particles with respect to the Earth and Moon.\n", "Although not currently practical, it may be possible to construct a rigid, lighter-than-air structure which, rather than being inflated with air, is at a vacuum relative to the surrounding air. This would allow the object to float above the ground without any heat or special lifting gas, but the structural challenges of building a rigid vacuum chamber lighter than air are quite significant. Even so, it may be possible to improve the performance of more conventional aerostats by trading gas weight for structural weight, combining the lifting properties of the gas with vacuum and possibly heat for enhanced lift.\n", "A vacuum airship should at least float (Archimedes law) and resist external pressure (strength law, depending on design, like the above R. Zoelli's formula for sphere). These two conditions may be rewritten as an inequality where a complex of several physical constants related to the material of the airship is to be lesser than a complex of atmospheric parameters. 
Thus, for a sphere (hollow sphere and, to a lesser extent, cylinder are practically the only designs for which a strength law is known) it is formula_18, where formula_19 is pressure within the sphere, while formula_20 («Lana coefficient») and formula_21 («Lana atmospheric ratio») are:\n", "Where \"ρ\" is mass density, \"M\" is average molecular weight, \"P\" is pressure, \"T\" is temperature, and \"R\" is the ideal gas constant.\n\nThe gas is held in place by so-called \"hydrostatic\" forces. That is to say, for a particular layer of gas at some altitude: the downward (towards the planet) force of its weight, the downward force exerted by pressure in the layer above it, and the upward force exerted by pressure in the layer below, all sum to zero. Mathematically this is:\n", "For aluminum and terrestrial conditions Akhmeteli and Gavrilin estimate the stress as formula_11 Pa, of the same order of magnitude as the compressive strength of aluminum alloys.\n\nSection::::Material constraints.:Buckling.\n\nAkhmeteli and Gavrilin note, however, that the compressive strength calculation disregards buckling, and using R. Zoelli's formula for the critical buckling pressure of a sphere\n\nwhere formula_13 is the modulus of elasticity and formula_14 is the Poisson ratio of the shell. Substituting the earlier expression gives a necessary condition for a feasible vacuum balloon shell:\n\nThe requirement is about formula_16.\n", "Theoretically, an aerostatic vehicle could be made to use a vacuum or partial vacuum. As early as 1670, over a century before the first manned hot-air balloon flight, the Italian monk Francesco Lana de Terzi envisioned a ship with four vacuum spheres.\n", "The main problem with the concept of vacuum airships is that, with a near-vacuum inside the airbag, the exterior atmospheric pressure is not balanced by any internal pressure. This enormous imbalance of forces would cause the airbag to collapse unless it were extremely strong (in an ordinary airship, the force is balanced by helium, making this unnecessary). Thus the difficulty is in constructing an airbag with the additional strength to resist this extreme net force, without weighing the structure down so much that the greater lifting power of the vacuum is negated.\n\nSection::::Material constraints.\n\nSection::::Material constraints.:Compressive strength.\n", "A different approach for high altitude ballooning, especially used for long duration flights is the superpressure balloon. A superpressure balloon maintains a higher pressure inside the balloon than the external (ambient) pressure.\n\nSection::::High-altitude ballooning.:Solids.\n", "These are summarised in the table:\n\nSection::::Flight without power.:Flight methods and usage.\n\nSome examples of usage are shown in the following table:\n\nSection::::Lighter than air.\n\nLighter than air flight is only used by man. An unpowered, lighter than air craft is called a balloon.\n\nSection::::Lighter than air.:Balloons.\n\nA balloon is a bag filled with a gas with a lower density than the surrounding air to provide buoyancy. The gas may be hot air, hydrogen or helium. The use of buoyant gases is unknown in the natural world.\n", "The effects of buoyancy do not just affect balloons; both liquids and gases are fluids in the physical sciences, and when all macrosize objects larger than dust particles are immersed in fluids on Earth, they have some degree of buoyancy. 
In the case of either a swimmer floating in a pool or a balloon floating in air, buoyancy can fully counter the gravitational weight of the object being weighed, for a weighing device in the pool. However, as noted, an object supported by a fluid is fundamentally no different from an object supported by a sling or cable—the weight has merely been transferred to another location, not made to disappear.\n", "From 1886 to 1900 Arthur De Bausset attempted in vain to raise funds to construct his \"vacuum-tube\" airship design, but despite early support in the United States Congress, the general public was skeptical. Illinois historian Howard Scamehorn reported that Octave Chanute and Albert Francis Zahm \"publicly denounced and mathematically proved the fallacy of the vacuum principle\"; however, the author does not give his source. De Bausset published a book on his design and offered $150,000 stock in the Transcontinental Aerial Navigation Company of Chicago. His patent application was eventually denied on the basis that it was \"wholly theoretical, everything being based upon calculation and nothing upon trial or demonstration.\"\n", "But although it meets the definition of outer space, the atmospheric density within the first few hundred kilometers above the Kármán line is still sufficient to produce significant drag on satellites. Most artificial satellites operate in this region, called low Earth orbit, and must fire their engines every couple of weeks or a few times a year (depending on solar activity). The drag here is low enough that it could theoretically be overcome by radiation pressure on solar sails, a proposed propulsion system for interplanetary travel. Planets are too massive for their trajectories to be significantly affected by these forces, although their atmospheres are eroded by the solar winds.\n", "In physics, apparent weight is a property of objects that corresponds to how heavy an object is. The apparent weight of an object will differ from the weight of the object whenever the force of gravity acting on the object is not balanced by an equal but opposite normal force. By definition, the weight of an object is equal to the magnitude of the force of gravity acting on it. This means that even a \"weightless\" astronaut in low Earth orbit, with an apparent weight of zero, has almost the same weight as he would have while standing on the ground; this is due to the force of gravity in low Earth orbit and on the ground being almost the same.\n", "where \"m\" is the mass of the bottle and \"g\" the gravitational acceleration at the location at which the measurements are being made. \"ρ\" is the density of the air at the ambient pressure and \"ρ\" is the density of the material of which the bottle is made (usually glass), so that the second term is the mass of air displaced by the glass of the bottle, whose weight, by Archimedes' principle, must be subtracted. The bottle is, of course, filled with air, but as that air displaces an equal amount of air, the weight of that air is canceled by the weight of the air displaced. Now we fill the bottle with the reference fluid, e.g. pure water. The force exerted on the pan of the balance becomes:\n", "Air pressure, technically, is a measurement of the amount of collisions against a surface at any time. In the case of a balloon, it's supposed to measure how many particles, at any given time, collide with the wall of the balloon and bounce off. 
However, since this is nearly impossible to measure, air pressure is more easily described in terms of density. The connection comes from the idea that when there are more molecules in the same space, more of them will be on a collision course with the wall.\n" ]
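The lift figures quoted in the passages above (air at 1.28 g/L, helium at 0.178 g/L, hydrogen at 0.090 g/L, and helium costing about 14% of a vacuum's lift) can be checked in a few lines. A minimal sketch; the constants are the rounded values quoted above and hold only near standard sea-level conditions:

```python
RHO_AIR = 1.28   # g/L, air (figure quoted above)
RHO_HE  = 0.178  # g/L, helium at 1 atmosphere (quoted above)
RHO_H2  = 0.090  # g/L, hydrogen (quoted above)

def net_lift_per_liter(rho_gas: float) -> float:
    """Buoyant lift minus the weight of the lifting gas, in grams per liter."""
    return RHO_AIR - rho_gas

for name, rho in [("vacuum", 0.0), ("hydrogen", RHO_H2), ("helium", RHO_HE)]:
    print(f"{name:8s}: {net_lift_per_liter(rho):.3f} g of net lift per liter")

# Helium's penalty relative to a perfect vacuum -- the ~14% quoted above:
print(f"helium sacrifices {RHO_HE / RHO_AIR:.0%} of the vacuum's lift")
```

These net figures ignore the envelope's own mass, which is exactly where the vacuum balloon loses: the walls needed to hold back the atmosphere outweigh the 14% lift advantage.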
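The Akhmeteli and Gavrilin analysis quoted above hides its equations behind formula_N placeholders. The textbook thin-shell relations it appears to describe are the compressive (hoop) stress sigma = P*R/(2*h), the neutral-buoyancy condition h/R = rho_air/(3*rho_shell), and Zoelli's critical buckling pressure P_cr = 2*E*(h/R)^2/sqrt(3*(1 - nu^2)). A numerical sketch under those assumed relations; the aluminium property values are illustrative choices, not figures from the passage:

```python
import math

P_ATM = 101_325.0  # Pa, sea-level atmospheric pressure
RHO_A = 1.28       # kg/m^3, air (matching the figure quoted above)
RHO_S = 2_700.0    # kg/m^3, shell density: aluminium, an illustrative choice
E     = 69e9       # Pa, Young's modulus of aluminium (assumed value)
NU    = 0.33       # Poisson's ratio of aluminium (assumed value)

# Neutral buoyancy for a homogeneous thin shell: h / R = rho_air / (3 * rho_shell)
h_over_R = RHO_A / (3 * RHO_S)

# Thin-wall sphere under external pressure: sigma = P * R / (2 * h)
sigma = P_ATM / (2 * h_over_R)  # equivalently 3 * P * rho_shell / (2 * rho_air)
print(f"compressive stress ~ {sigma:.1e} Pa")  # a few 1e8 Pa, near alloy strength

# Zoelli's critical buckling pressure: P_cr = 2 * E * (h/R)^2 / sqrt(3 * (1 - nu^2))
p_cr = 2 * E * h_over_R**2 / math.sqrt(3 * (1 - NU**2))
print(f"buckling pressure ~ {p_cr:.1e} Pa vs. {P_ATM:.1e} Pa to be resisted")
# ~2e3 Pa: a neutrally buoyant aluminium shell buckles at a tiny fraction of
# one atmosphere, which is the quantitative core of the feasibility problem.
```

This matches the passages' conclusion: compressive strength alone already looks marginal for aluminium, and buckling, not strength, is the binding constraint.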
[ "A vacuum balloon may be able to fly like a helium balloon.", "A vaccuum balloon would fly like a helium balloon." ]
[ "A vacuum balloon will not float because the balloon structure would have to be very heavy to resist the difference in pressure from the surrounding air and the vacuum.", "A vaccuum balloon likely wouldn't float at all. " ]
[ "false presupposition" ]
[ "A vacuum balloon may be able to fly like a helium balloon.", "A vaccuum balloon would fly like a helium balloon." ]
[ "false presupposition", "false presupposition" ]
[ "A vacuum balloon will not float because the balloon structure would have to be very heavy to resist the difference in pressure from the surrounding air and the vacuum.", "A vaccuum balloon likely wouldn't float at all. " ]
2018-01809
Do babies still dance if they haven't been exposed to it?
Yes. There's pretty good research indicating that even infants as young as five months will respond to music by moving rhythmically. The degree to which they're successful at synchronizing their movements with the beat increases with age, but it's something we seem to have an innate instinct for. But it's probably more the *beat* than the music as such to which the youngest infants respond. Simple drums, or even just repetitive, rhythmic sounds of any sort, seem to provoke the same kind of response as anything most people would be tempted to describe as "music."
[ "BULLET::::- In the 2002 movie \"Life or Something Like It\", the Dancing Baby appears on the score board at the baseball game.\n\nBULLET::::- The Cincinnati, Ohio classic rock station WEBN featured the dancing baby dancing to the song \"You Shook Me All Night Long\" by AC/DC on a television commercial for the station.\n\nBULLET::::- In the episode of \"Family Guy\" called \"McStroke\", Stewie Griffin and Brian Griffin bet on whether or not Stewie could become the coolest kid in high school in a week. He did, so Brian had to email all of his friends the Dancing Baby video.\n", "Banas said he began to dance at age five. “I would immediately run and stand in a doorway pretending it was a frame for a small stage. I then would jive, moving my body to and fro, trying to keep up with the beat of the music, knowing that when the music would crescendo I’d leap in the air defying gravity, only to land in a heap. I’d pick myself up and start it all over again. I just couldn’t sit still when I’d hear those big bands: Tommy Dorsey, Ray Anthony, Count Basie, Les Brown and Stan Kenton.”\n", "BULLET::::- Breakdance\n\nSection::::Tradition.\n\nIn most African-American dance cultures, learning to dance does not happen in formal classrooms or dance studios. Children often learn to dance as they grow up, developing not only a body awareness but also aesthetics of dance which are particular to their community. Learning to dance - learning about rhythmic movement - happens in much the same way as developing a local language 'accent' or a particular set of social values.\n", "The latest Cochrane review entitled \"Dance Movement Therapy for Dementia\" published in 2017 concluded that there we no high quality trials to assess the effect of DMT on behavioural, social, cognitive and emotional symptoms in people with dementia.\n\nSection::::Research.:Benefits.\n", "Section::::Plot.\n", "BULLET::::- Blockbuster Video commercial, baby dances to the Rick James hit, \"Give It to Me Baby\".\n\nBULLET::::- The Dancing Baby is also spoofed in an episode of \"The Simpsons\", \"The Computer Wore Menace Shoes\", in which Homer visits (and later steals from) a website featuring Jesus dancing with the same moves as the baby.\n", "BULLET::::- In the television series \"Millennium,\" the episode \"Somehow, Satan Got Behind Me\" features a demon who manifests himself in the form of a baby, dancing to the Black Flag song \"My War\". Writer/director Darin Morgan based the baby on its use in \"Ally McBeal;\" as he commented, \"It's a terrifying thing, that baby. She dances with it, and you go, 'There's something really wrong with this person.'\"\n\nBULLET::::- In an episode of \"Chowder\", a parody of the dancing baby (looking rather demonically) appears, causing everyone to freak out and scream at the sight of it.\n", "Section::::Development.\n", "Dance helps students to develop a sense of self as an emotional and social being. In preschool, children developed language, movement and collaborative skills to express their ideas. They created and named poses, learned ways of breathing to apply in different emotional situations, mirrored others' movements, incorporated emotions into their movement and participated in free movement. 
Children enhanced their social cognition and raised their awareness of their bodies.\n\nSection::::Therapy.\n\nSection::::Therapy.:Dance movement therapy.\n", "Section::::Reception.:Critical reception.\n", "BULLET::::- The baby makes an appearance in the 2018 music video \"1999\" by Charli XCX and Troye Sivan as homage to popular culture of the 90s and 00s, along with many other references.\n\nSection::::See also.\n\nBULLET::::- Internet celebrity\n\nBULLET::::- Viral videos\n\nBULLET::::- \"Lenz v. Universal Music Corp.\"\n\nSection::::External links.\n\nBULLET::::- \"Dancing Baby cha-chas from the Internet to the networks\" - Sci-Tech Story Page, CNN, Jan 1998\n\nBULLET::::- Internet Dancing Baby site - Contains a copy of one of the original dancing baby renderings\n", "Children learn specific dance steps or 'how to dance' from their families - most often from older brothers and sisters, cousins or other older children. Because cultural dance happens in everyday spaces, children often dance with older members of the community around their homes and neighborhoods, at parties and dances, on special occasions, or whenever groups of people gather to 'have a good time'. Cultural dance traditions are therefore often cross-generational traditions, with younger dancers often 'reviving' dances from previous generations, albeit with new 'cool' variations and 'styling'. This is not to suggest that there are no social limitations on who may dance with whom and when. Dance partners (or people to dance with) are chosen by a range of social factors, including age, sex, kinship, interest and so on. The most common dance groups are often composed of people of a similar age, background and often sex (though this is a varying factor).\n", "BULLET::::- JoJo Siwa – former ALDC dancer\n\nBULLET::::- Jessalynn Siwa – former ALDC mom\n\nBULLET::::- Paige Hyland — former ALDC dancer\n\nBULLET::::- Alexa Collins — former ALDC dancer\n\nSection::::Cast.:Candy Apples Dance Center.\n\nBULLET::::- Cathy and Vivi-Anne Stein (team owner, mother and daughter, former ALDC mom and dancer)\n\nBULLET::::- Jeanette and Ava Cota (mother and daughter, former ALDC mom and dancer, JC's Broadway mom and daughter)\n\nBULLET::::- Black Patsy and Nicaya Wiley (mother and daughter)\n\nBULLET::::- Melanie and Haley Huelsman (mother and daughter)\n\nBULLET::::- Liza and Chloe Smith (mother and daughter)\n\nBULLET::::- Shari and Tara Johnson (mother and daughter)\n", "Hungarians have been noted for their \"exceptionally well developed sense of rhythm\". Billroth performed tests with troops stationed in Vienna and found that the Hungarian troops outperformed others in keeping time with music.\n", "For this reason, scientific research into the mechanisms and efficacy of dance therapy is still in its infancy. Additionally, since the practice of dance therapy is heterogeneous and its scope and methodology vary greatly, it is even harder to create medically rigorous evidence bases. 
However, studies exist which suggest positive outcomes of dance therapy.\n\nSection::::Research.:Proposed mechanisms.\n", "BULLET::::- In 2010, the Dancing Baby appeared on an episode of \"SuperNews!.\n\nBULLET::::- In 2015, it made an appearance in a Delta safety video.\n\nBULLET::::- The Dancing Baby makes several appearances in the Tiger Award-winning Peruvian film Videophilia (and Other Viral Syndromes).\n\nBULLET::::- In 2018, a higher quality version of the Dancing Baby appeared in the Charli XCX & Troye Sivan music video for their song \"1999,\" which features numerous references to trends in the late 90s.\n\nSection::::Appearances in mainstream media.:Video games.\n\nSeveral video games have included references to the Dancing Baby.\n", "Section::::Dances.\n", "O'Connor later said, \"I was about 13 months old, they tell me, when I first started dancing, and they'd hold me up by the back of my neck and they'd start the music, and I'd dance. You could do that with any kid, only I got paid for it.\" \n", "BULLET::::- Performed by Amakwenkwe (young men under the age of about 20 or 21) of the Xhosa, the Umteyo (Shaking Dance) involves the rapid undulation or shaking of the thorax so that the whole length of the spine appears to be rippling. Older men, Amadoda, do a similar dance, Xhensa accompanied by singing and clapping while dancers draw their breath in and out through a relaxed larynx, producing a kind of guttural roar.\n", "A jazz dance class study was conducted to improve older adults' balance, cognition and mood. These were measured with the MMSE, Geriatric Depression Scale (GDS), and Sensory Organization Test (SOT), respectively before (time 1), at the midpoint (time 2), and after (time 3) the class. Differences in MMSE and GDS scores were not significant, but SOT scores increased from time 1 to time 2 and from time 2 to time 3.\n\nSection::::Implications about mate selection.\n\nSection::::Implications about mate selection.:Symmetry.\n", "Émile Jaques-Dalcroze, primarily a musician and teacher, relates how a study of the physical movements of pianists led him \"to the discovery that musical sensations of a rhythmic nature call for the muscular and nervous response of the whole organism\", to develop \"a special training designed to regulate nervous reactions and effect a co-ordination of muscles and nerves\" and ultimately to seek the connections between \"the art of music and the art of dance\", which he formulated into his system of eurhythmics. He concluded that \"musical rhythm is only the transposition into sound of movements and dynamisms spontaneously and involuntarily expressing emotion\".\n", "When he ventured out into \"nearby mill towns, picking up partners on location\", he found that there were white girls who were \"mill-town...lower class\" and could dance and move \"in the authentic, flowing style\". \"They were poor and less educated than my high-school friends, but they could really dance. In fact, at that time it seemed that the lower class a girl was, the better dancer she was, too.\"\n", "In 2015, Celebrity Health & Fitness magazine reported \"royal bodyguards\" as saying that Catherine, Duchess of Cambridge took pole dancing lessons to lose weight after giving birth to Prince George in 2013.\n", "Section::::The Pointer Sisters version / \"Beverly Hills Cop\".\n", "Four-, five-, and eight-year-old children and adults watched videos of movement expressing joy, anger, fear and sadness and indicated which emotions they perceived in each video. 
All age groups achieved recognition scores above chance level. The four-year-olds had the lowest scores, while the five-year-olds achieved levels close to the eight-year-olds' and the adults' scores.\n\nSection::::Expertise.\n\nSection::::Expertise.:Ballet and Indian dance.\n" ]
[ "Babies dance to music. " ]
[ "Babies dance to a beat, rather than music. " ]
[ "false presupposition" ]
[ "Babies dance to music. ", "Babies dance to music. " ]
[ "normal", "false presupposition" ]
[ "Babies dance to a beat, rather than music. ", "Babies dance to a beat, rather than music. " ]
2018-03427
Why do people use . instead of , as a thousands separator?
Other countries (e.g. Germany) have the thousands separator and decimal separator switched. It's all a matter of what you're used to. To us, 250,000.00 looks weird, because we're used to it being written as 250.000,00. As for the historical reasons why some countries have it one way and some the other, I have no idea.
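The separator swap this answer describes is easy to demonstrate in code. Below is a minimal sketch in Python; the locale name "de_DE.UTF-8" is an assumption (whether it is installed depends on the operating system), which is why the snippet falls back to a manual character swap.

```python
# A minimal sketch (standard library only) of locale-dependent number
# formatting. "de_DE.UTF-8" is an assumed locale name; availability
# depends on the system, so this is illustrative rather than portable.
import locale

value = 250000.00

# English-style grouping: comma as thousands separator, dot as decimal mark.
print(f"{value:,.2f}")  # 250,000.00

try:
    # If a German locale is installed, Python can render the German
    # convention directly.
    locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")
    print(locale.format_string("%.2f", value, grouping=True))  # 250.000,00
except locale.Error:
    # Fall back to manually swapping the two separator characters.
    english = f"{value:,.2f}"
    print(english.translate(str.maketrans(",.", ".,")))  # 250.000,00
```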
[ "The International Bureau of Weights and Measures states that \"when there are only four digits before or after the decimal marker, it is customary not to use a space to isolate a single digit\". Likewise, some manuals of style state that thousands separators should not be used in normal text for numbers from 1000 to 9999 inclusive where no decimal fractional part is shown (in other words, for four-digit whole numbers), whereas others use thousands separators, and others use both. For example, APA style stipulates a thousands separator for \"most figures of 1,000 or more\" except for page numbers, binary digits, temperatures, etc.\n", "The following examples show the decimal separator and the thousands separator in various countries that use the Arabic numeral system.\n\nBULLET::::- In Albania, Belgium (French), Estonia, Finland, France, Hungary, Poland, Slovakia and much of Latin Europe as well as French Canada: (In Spain, in handwriting it is also common to use an upper comma: 1.234.567'89)\n", "Since 2003, the use of spaces as separators (for example: and for \"twenty thousand\" and \"one million\") has been officially endorsed by SI/ISO 31-0 standard, as well as by the International Bureau of Weights and Measures and the International Union of Pure and Applied Chemistry (IUPAC), the American Medical Association's widely followed \"AMA Manual of Style\", and the Metrication Board, among others.\n", "For ease of reading, numbers with many digits may be divided into groups using a delimiter, such as comma \",\" or dot \".\" or space or underbar \"_\" (as in maritime \"21_450\"). In some countries, these \"digit group separators\" are only employed to the left of the decimal separator; in others, they are also used to separate numbers with a long fractional part. An important reason for grouping is that it allows rapid judgement of the number of digits, via subitizing (telling at a glance) rather than counting – contrast with 100000000 for one hundred million.\n", "BULLET::::- Historically, in Germany and Austria, thousands separators were occasionally denoted by alternating uses of comma and point, e.g. 1.234,567.890,12 for \"eine Milliarde 234 Millionen ...\", but this is never seen in modern days and requires explanation to a contemporary German reader.\n", "The groups created by the delimiters tend to follow the use of the local language, which varies. In European languages, large numbers are read in groups of thousands and the delimiter (which occurs every three digits when it is used) may be called a \"thousands separator\". In East Asian cultures, particularly China, Japan, and Korea, large numbers are read in groups of myriads (10,000s) but the delimiter commonly separates every three digits. The Indian numbering system is somewhat more complex: it groups the rightmost three digits together (till the hundreds place) and thereafter groups by sets of two digits. One trillion would thus be written as 10,00,00,00,00,000 or 10 kharab.\n", "BULLET::::- In Estonia, currency numbers often use a dot \".\" as the decimal separator, and a space as a thousands separator. This is most visible on shopping receipts and in documents that also use other numbers with decimals, such as measurements. This practice is used to better distinguish between prices and other values with decimals. 
An older convention uses dots to separate thousands (with commas for decimals) — this older practice makes it easier to avoid word breaks with larger numbers.\n", "The three most spoken international auxiliary languages, Ido, Esperanto, and Interlingua, all use the comma as the decimal separator. Interlingua has used the comma as its decimal separator since the publication of the in 1951. Esperanto also uses the comma as its official decimal separator, while thousands are separated by non-breaking spaces: . Ido's \"Kompleta Gramatiko Detaloza di la Linguo Internaciona Ido\" (Complete Detailed Grammar of the International Language Ido) officially states that commas are used for the decimal separator while full stops are used to separate thousands, millions, etc. So the number 12,345,678.90123 (in American notation) for instance, would be written \"12.345.678,90123\" in Ido. The 1931 grammar of Volapük by Arie de Jong uses the comma as its decimal separator, and (somewhat unusually) uses the middle dot as the thousands separator (12·345·678,90123).\n", "In the Arab world, where Eastern Arabic numerals are used for writing numbers, a different character is used to separate the integer and fractional parts of numbers. It is referred to as an Arabic decimal separator () (in hex U+066B) in Unicode. An Arabic thousands separator () also exists.\n", "BULLET::::- In Belgium (Dutch), Brazil, Denmark, Germany, Greece, Indonesia, Italy, Netherlands, Portugal, Romania, Russia, Slovenia, Sweden and much of Europe: or 1.234.567,89. In handwriting, 1˙234˙567,89 is also seen, but never in Belgium, Brazil, Denmark, Estonia, Germany, the Netherlands, Portugal, Romania, Russia, Slovenia or Sweden. In Italy, a straight apostrophe is also used in handwriting: 1'234'567,89. In the Netherlands and Dutch-speaking Belgium, the points thousands separator is used, and is preferred for currency amounts, but the space is recommended by some style guides, mostly in technical writing.\n", "Decimal separator\n\nA decimal separator is a symbol used to separate the integer part from the fractional part of a number written in decimal form.\n\nDifferent countries officially designate different symbols for the decimal separator. The choice of symbol for the decimal separator also affects the choice of symbol for the thousands separator used in digit grouping, so the latter is also treated in this article.\n", "Section::::Flow calibration in oil and gas separators.\n", "Section::::Flow measurements in oil and gas separators.\n", "There are always \"common-sense\" country-specific exceptions to digit grouping, such as year numbers, postal codes and ID numbers of predefined nongrouped format, which style guides usually point out.\n\nSection::::Digit grouping.:In non-base-10 numbering systems.\n", "In the United States, the full stop or period (.) was used as the standard decimal separator.\n", "BULLET::::- Switzerland: There are two cases: An apostrophe as a thousands separator along with a dot \".\" as the decimal separator are used for currency values (for example: 1'234'567.89). For other values, the SI-style is used with a comma \",\" as the decimal separator. The apostrophe is also the most common variety for non-currency values: 1'234'567,89 — though this usage is officially discouraged.\n", "The convention for digit group separators historically varied among countries, but usually seeking to distinguish the delimiter from the decimal separator. 
Traditionally, English-speaking countries employed commas as the delimiter – 10,000 – and other European countries employed periods or spaces: 10.000 or . Because of the confusion that could result in international documents, in recent years the use of spaces as separators has been advocated by the superseded SI/ISO 31-0 standard, as well as by the International Bureau of Weights and Measures and the International Union of Pure and Applied Chemistry, which have also begun advocating the use of a \"thin space\" in \"groups of three\". Within the United States, the American Medical Association's widely followed \"AMA Manual of Style\" also calls for a thin space. In some online encoding environments (for example, ASCII-only) a thin space is not practical or available, in which case a regular word space or no delimiter are the alternatives.\n", "BULLET::::- Spaces should be used as a thousands separator () in contrast to commas or periods (1,000,000 or 1.000.000) to reduce confusion resulting from the variation between these forms in different countries.\n\nBULLET::::- Any line-break inside a number, inside a compound unit, or between number and unit should be avoided. Where this is not possible, line breaks should coincide with thousands separators.\n\nBULLET::::- Because the value of \"billion\" and \"trillion\" varies between languages, the dimensionless terms \"ppb\" (parts per billion) and \"ppt\" (parts per trillion) should be avoided. The SI Brochure does not suggest alternatives.\n", "In many contexts, when a number is spoken, the function of the separator is assumed by the spoken name of the symbol: comma or point in most cases. In some specialized contexts, the word decimal is instead used for this purpose (such as in ICAO-regulated air traffic control communications).\n\nIn mathematics the decimal separator is a type of radix point, a term that also applies to number systems with bases other than ten.\n\nSection::::History.\n", "In countries with a decimal comma, the decimal point is also common as the \"international\" notation because of the influence of devices, such as electronic calculators, which use the decimal point. Most computer operating systems allow selection of the decimal separator and programs that have been carefully internationalized will follow this, but some programs ignore it and a few may even fail to operate if the setting has been changed.\n\nSection::::Arabic numerals.\n\nSection::::Arabic numerals.:Countries using decimal point.\n\nCountries where a dot \".\" is used as decimal separator include:\n\nSection::::Arabic numerals.:Countries using decimal comma.\n", "Toponym resolution is sometimes a simple conversion from name to abbreviation, especially when the abbreviation is used as a standard geocode. For example, converting the official country name Afghanistan into an ISO country code, codice_1.\n\nIn annotating media and metadata, conversion using a map and geographical evidence (e.g. GPS) is the most usual approach to obtaining a toponym, or a geocode that represents the toponym.\n\nSection::::Resolution process.:From textual evidence.\n", "According to subdivisions of the International Map of the World: The sheet numbers of the \"International Map of the World\" 1:1,000,000 are augmented in the next smaller scale by a suffix (e.g. capital letters). Sheet numbers of each further smaller scale will bear a different system of suffixes (e.g., Roman numerals, small letters, etc.). 
These numbers can become very complex, but at the same time allow \"the experts\" to gain at least a rough location of the map sheet on the globe. Example: \"Soviet General Staff map\" (1:200,000).\n", "Section::::Primary functions of oil and gas separators.:Separation of water from oil.\n", "In 1958, disputes between European and American delegates over the correct representation of the decimal separator nearly stalled the development of the ALGOL computer programming language. ALGOL ended up allowing different decimal separators, but most computer languages and standard data formats (e.g. C, Java, Fortran, Cascading Style Sheets (CSS)) specify a dot.\n\nPreviously, signs along California roads expressed distances in decimal numbers with the decimal part in superscript, as in 3, meaning 3.7. Though California has since transitioned to mixed numbers with vulgar fractions, the older style remains on postmile markers and bridge inventory markers.\n\nSection::::Current standards.\n", "The individual units are often numbered so that their movements can be tracked. This helps engineers gauge whether they need to add more dolosse to the pile.\n" ]
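The passages above describe two different digit-grouping rules: Western grouping in blocks of three, and the Indian system, which groups the rightmost three digits and then pairs of two. A minimal sketch of both rules for non-negative integers only (the function names are mine, not from any standard library):

```python
# Sketch of the grouping conventions described in the passages.
def group_western(n: int) -> str:
    # Python's format mini-language already groups in blocks of three.
    return f"{n:,}"

def group_indian(n: int) -> str:
    digits = str(n)
    if len(digits) <= 3:
        return digits
    head, tail = digits[:-3], digits[-3:]
    # Split the remaining digits into pairs, working from the right.
    pairs = []
    while head:
        pairs.append(head[-2:])
        head = head[:-2]
    return ",".join(reversed(pairs)) + "," + tail

print(group_western(1000000000000))  # 1,000,000,000,000
print(group_indian(1000000000000))   # 10,00,00,00,00,000 (the passage's "one trillion")
```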
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00929
Why were dinosaurs much larger than most animals we see today?
The oxygen theories posted as other comments apply to insects, not dinosaurs. URL_1 The following theories are not mutually exclusive (meaning it's not #1 or #2, it could be #1 *and* #2 etc.). And tl;dr **we don't know for sure**, all the theories are controversial. Theory #1 for dinosaurs: the Mesozoic era (~250 million years ago - 65 million years ago, the time period all dinosaurs lived in) had much higher levels of carbon dioxide (CO2) in the atmosphere. More CO2 = higher temperatures. Plants feed (via photosynthesis) off of CO2, and higher temperatures promote more vegetative growth. The theory is that some dinosaurs were so big simply because there was so much for them to eat, which would explain why some herbivores were much larger than carnivores. Note that the only dinosaurs that were small were carnivores; almost all herbivores were taller than 1 meter. This theory is also being challenged, though ( URL_2 ). Theory #2: hugeness was simply an evolutionary defense mechanism. Theory #3: if dinosaurs were cold-blooded, as many paleontologists believe, their size could be a way to maintain their internal temperatures despite environmental circumstances. "A house-sized, homeothermic Argentinosaurus could warm up slowly (in the sun, during the day) and cool down equally slowly (at night), giving it a fairly constant average body temperature--whereas a smaller reptile would be at the mercy of ambient temperatures on an hour-by-hour basis." ( URL_3 ) See also URL_0. Theory #4: larger size = lower metabolism, longer digestion = bigger dinosaurs need less food. There's probably more out there, but these were the main ones that I found on Google.
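Theory #3 ultimately rests on simple geometry: surface area grows with the square of an animal's size while volume grows with the cube, so bigger bodies exchange heat with the environment more slowly. A back-of-the-envelope sketch, treating animals as spheres purely for illustration (the radii are my own placeholder values, not measurements):

```python
# Illustration of "gigantothermy" (theory #3): the surface-to-volume
# ratio of a sphere is 3/r, so it falls as the body gets bigger, and
# with it the relative rate of heat gain and loss.
import math

def surface_to_volume(radius_m: float) -> float:
    surface = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface / volume  # algebraically equal to 3 / radius_m

for radius in (0.1, 1.0, 5.0):  # lizard-ish, human-ish, sauropod-ish scales
    print(f"radius {radius:>4} m -> SA:V = {surface_to_volume(radius):.2f} per metre")
# radius  0.1 m -> SA:V = 30.00 per metre
# radius  1.0 m -> SA:V = 3.00 per metre
# radius  5.0 m -> SA:V = 0.60 per metre
```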
[ "Early in the Cenozoic, following the K-Pg event, the planet was dominated by relatively small fauna, including small mammals, birds, reptiles, and amphibians. From a geological perspective, it did not take long for mammals and birds to greatly diversify in the absence of the dinosaurs that had dominated during the Mesozoic. Some flightless birds grew larger than humans. These species are sometimes referred to as \"terror birds,\" and were formidable predators. Mammals came to occupy almost every available niche (both marine and terrestrial), and some also grew very large, attaining sizes not seen in most of today's terrestrial mammals.\n", "\"Protoceratops\" was approximately 1.8 meters (6 ft) in length and 0.6 meters (2 ft) high at the shoulder. A fully grown adult would have weighed less than 400 pounds (180 kg). Smaller specimens are estimated at . The large numbers of specimens found in high concentration suggest that \"Protoceratops\" lived in herds.\n", "Early in the Cenozoic, following the K-Pg extinction event, most of the fauna was relatively small, and included small mammals, birds, reptiles, and amphibians. From a geological perspective, it did not take long for mammals and birds to greatly diversify in the absence of the large reptiles that had dominated during the Mesozoic. A group of avians known as the \"terror birds\" grew larger than the average human and were formidable predators. Mammals came to occupy almost every available niche (both marine and terrestrial), and some also grew very large, attaining sizes not seen in most of today's mammals.\n", "BULLET::::- \"Tenontosaurus\", full-size, dead\n\nBULLET::::- \"Pterygotus\" sp., full-size\n\nBULLET::::- \"Utahraptor\" sp., full-size\n\nBULLET::::- \"Protoceratops\" and two \"Velociraptors\", 1/2-size\n\nBULLET::::- \"Dilophosaurus\", full-size\n\nand various neotonous 'baby' dinosaurs, including hatching eggs and a pteranadon feeding a fish to youths in a rocky \"nest\". Others included prehistoric mammals, whales, a great white shark, an 8-limbed Archeteuthis, or \"giant squid,\" giant insects, and versions of some animals from Dougal Dixon's \"Future Zoo.\"\n", "BULLET::::- Theropods: dinosaurs that first evolved in the Triassic period but did not evolve into large sizes until the Jurassic. 
Most Triassic theropods, such as the \"Coelophysis\", were only around 1–2 meters long and hunted small prey in the shadow of the giant Rauisuchians.\n", "Section::::Record sizes.\n", "BULLET::::- \"Hypacrosaurus\"\n\nBULLET::::- \"Hypsilophodon\"\n\nBULLET::::- \"Ichthyosaurus\"\n\nBULLET::::- \"Iguanodon\"\n\nBULLET::::- \"Lambeosaurus\"\n\nBULLET::::- \"Lesothosaurus\"\n\nBULLET::::- \"Lexovisaurus\"\n\nBULLET::::- \"Maiasaura\"\n\nBULLET::::- \"Megalosaurus\"\n\nBULLET::::- \"Microceratus\"\n\nBULLET::::- \"Mixosaurus\"\n\nBULLET::::- \"Mosasaurus\"\n\nBULLET::::- \"Oviraptor\"\n\nBULLET::::- \"Ornitholestes\"\n\nBULLET::::- \"Ornithomimus\"\n\nBULLET::::- \"Orodromeus\"\n\nBULLET::::- \"Pachycephalosaurus\"\n\nBULLET::::- \"Pachyrhinosaurus\"\n\nBULLET::::- \"Parasaurolophus\"\n\nBULLET::::- \"Plateosaurus\"\n\nBULLET::::- \"Protoceratops\"\n\nBULLET::::- \"Psittacosaurus\"\n\nBULLET::::- \"Pteranodon\"\n\nBULLET::::- \"Pterodactylus\"\n\nBULLET::::- \"Quetzalcoatlus\"\n\nBULLET::::- \"Rhamphorhynchus\"\n\nBULLET::::- \"Riojasaurus\"\n\nBULLET::::- \"Saltasaurus\"\n\nBULLET::::- \"Scelidosaurus\"\n\nBULLET::::- \"Scutellosaurus\"\n\nBULLET::::- \"Stegosaurus\"\n\nBULLET::::- \"Struthiomimus\"\n\nBULLET::::- \"Styracosaurus\"\n\nBULLET::::- \"Triceratops\"\n\nBULLET::::- \"Troodon\"\n\nBULLET::::- \"Tyrannosaurus\"\n\nBULLET::::- \"Xiphactinus\"\n\nBULLET::::- \"Velociraptor\"\n\nBULLET::::- \"Wuerhosaurus\"\n\nSection::::Animations.\n", "Section::::Description.\n\n\"Unescoceratops\" is thought to have been between one and two meters long and less than 91 kilograms. Its teeth were the roundest of all leptoceratopsids.\n\nMallon et al. (2013) examined herbivore coexistence on the island continent of Laramidia, during the Late Cretaceous. It was concluded that small ornithischians like \"Unescoceratops\" were generally restricted to feeding on vegetation at, or below the height of 1 meter.\n\nSection::::Etymology.\n", "Dinosaur size\n\nSize has been one of the most interesting aspects of dinosaur science to the general public and to scientists. Dinosaurs show some of the most extreme variations in size of any land animal group, ranging from the tiny hummingbirds, which can weigh as little as three grams, to the extinct titanosaurs, which could weigh as much as .\n", "\"Allosaurus\" also had a large powerful jaw with long, sharp, serrated teeth that were long. These teeth were curved inward, shaped like a \"D\" to help secure its prey. It had a bulky body, a massive tail and thick bones. Its arms were short and had three fingered hands, with sharp claws that were up to long. \"Allosaurus\" was one of the largest meat-eating dinosaurs of the Late Jurassic, 156-145 MYA, and was also one of the largest predators until the tyrannosaurs appeared 50 million years later.\n\nSection::::A.:Ammonite.\n", "The sauropods were the largest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than anything else in their habitat, and the largest were an order of magnitude more massive than anything else that has since walked the Earth. Giant prehistoric mammals such as \"Paraceratherium\" and \"Palaeoloxodon\" (the largest land mammals ever) were dwarfed by the giant sauropods, and only modern whales surpass them in size. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. 
Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments.\n", "\"Tyrannosaurus\" was for many decades the largest known theropod and best-known to the general public. Since its discovery, however, a number of other giant carnivorous dinosaurs have been described, including \"Spinosaurus\", \"Carcharodontosaurus\", and \"Giganotosaurus\". The original \"Spinosaurus\" specimens (as well as newer fossils described in 2006) support the idea that \"Spinosaurus\" is longer than \"Tyrannosaurus\", showing that \"Spinosaurus\" was possibly 3 meters longer than \"Tyrannosaurus\" though \"Tyrannosaurus\" could still be taller and more massive than \"Spinosaurus\". Specimens of Tyrannosaurus such as Sue and Scotty are estimated to be the most massive theropods known to science. There is still no clear explanation for exactly why these animals grew so much larger than the land predators that came before and after them.\n", "The sauropods were the largest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than anything else in their habitat, and the largest were an order of magnitude more massive than anything else that has since walked the Earth. Giant prehistoric mammals such as \"Paraceratherium\" (the largest land mammal ever) were dwarfed by the giant sauropods, and only modern whales approach or surpass them in size. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments.\n", "Section::::Paleobiology.:Diet.\n", "BULLET::::- The phytosaurs and crocodilians dominated the rivers and swamps and even invaded the seas (e.g., the teleosaurs, Metriorhynchidae and Dyrosauridae). The Metriorhynchidae were rather dolphin-like, with paddle-like forelimbs, a tail fluke and smooth, unarmoured skins.\n\nBULLET::::- Two clades of ornithodirans, the pterosaurs and the birds, dominated the air after becoming adapted to a volant lifestyle.\n\nSection::::Archosaur lifestyle.:Metabolism.\n", "Section::::Death.\n", "\"Thylacoleo\" was at the shoulder and about long from head to tail. The species \"T. carnifex\" is the largest, and skulls indicate they averaged , and individuals reaching were common, and the largest weight was of . Fully grown, \"Thylacoleo carnifex\" would have been close to the same size as a jaguar.\n\nSection::::Behaviour.\n", "One of the earliest known sauropodomorphs, \"Saturnalia\", was small and slender (1.5 metres, or 5 feet long); but, by the end of the Triassic, they were the largest dinosaurs of their time, and throughout the Jurassic and Cretaceous they kept on growing. 
Ultimately the largest sauropods, like \"Supersaurus\", \"Diplodocus hallorum\", \"Patagotitan\", and \"Argentinosaurus\", reached in length, and 60,000–100,000 kilograms (65–110 US short tons) or more in mass.\n", "BULLET::::- The Permian temnospondyl \"Prionosuchus\", the largest amphibian known, reached 9 m in length and was an aquatic predator resembling a crocodilian. After the appearance of real crocodilians, temnospondyls such as \"Koolasuchus\" (5 m long) had retreated to the Antarctic region by the Cretaceous, before going extinct.\n\nBULLET::::- Class Actinopterygii\n\nBULLET::::- Order Tetraodontiformes\n", "The Early Jurassic spans from 200 to 175 million years ago. The climate was tropical, much more humid than the Triassic. In the oceans, plesiosaurs, ichthyosaurs and ammonites were abundant. On land, dinosaurs and other archosaurs staked their claim as the dominant race, with theropods such as \"Dilophosaurus\" at the top of the food chain. The first true crocodiles evolved, pushing the large amphibians to near extinction. All-in-all, archosaurs rose to rule the world. Meanwhile, the first true mammals evolved, remaining relatively small but spreading widely; the Jurassic \"Castorocauda\", for example, had adaptations for swimming, digging and catching fish. \"Fruitafossor\", from the late Jurassic period about 150 million years ago, was about the size of a chipmunk, and its teeth, forelimbs and back suggest that it dug open the nests of social insects (probably termites, as ants had not yet appeared). The first multituberculates like \"Rugosodon\" evolved, while volaticotherians took to the skies.\n", "Section::::Paleobiology.:Growth and ontogeny.\n", "BULLET::::- Foster and others observed that no other theropod inhabiting Asia or North America during the Campanian or Maastrichtian achieved a body size within \"two orders of magnitude\" of contemporary tyrannosaurs. They further speculated that this gap in body size may be attributable to juvenile tyrannosaurs occupying the ecological niches once exploited by other medium-to-large sized theropods.\n\nBULLET::::- Holtz found that within the coelurosaurs, tyrannosaurs were arctometatarsalians, meaning they were closer to ornithomimosaurs than to birds.\n", "Because of the small initial size of all mammals following the extinction of the non-avian dinosaurs, nonmammalian vertebrates had a roughly ten-million-year-long window of opportunity (during the Paleocene) for evolution of gigantism without much competition. During this interval, apex predator niches were often occupied by reptiles, such as terrestrial crocodilians (e.g. \"Pristichampsus\"), large snakes (e.g. \"Titanoboa\") or varanid lizards, or by flightless birds (e.g. \"Paleopsilopterus\" in South America). This is also the period when megafaunal flightless herbivorous gastornithid birds evolved in the Northern Hemisphere, while flightless paleognaths evolved to large size on Gondwanan land masses and Europe. 
Gastornithids and at least one lineage of flightless paleognath birds originated in Europe, both lineages dominating niches for large herbivores while mammals remained below 45 kg (in contrast with other landmasses like North America and Asia, which saw the earlier evolution of larger mammals) and were the largest European tetrapods in the Paleocene.\n", "BULLET::::- Saurischian dinosaurs of the Jurassic and Cretaceous include sauropods, the longest (at up to ) and most massive terrestrial animals known (\"Argentinosaurus\" reached 80–100 metric tonnes, or 90–110 tons), as well as theropods, the largest terrestrial carnivores (\"Spinosaurus\" grew to 7–9 tonnes; the more famous \"Tyrannosaurus\", to 6.8 tonnes).\n\nBULLET::::- Order Pterosauria\n\nBULLET::::- The largest azhdarchid pterosaurs, such as \"Hatzegopteryx\" and \"Quetzalcoatlus\", attained wingspans around and weights probably in the range. The former is thought to have been the apex predator of its island ecosystem.\n\nBULLET::::- Order Crocodilia\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-08357
Why do mountains look blue from a certain distance?
That's due to the refraction of light in our atmosphere. It's basically the exact same phenomenon that makes our sky blue...
[ "The Greater Blue Mountains Area consists of of mostly forested landscape on a sandstone plateau inland from the Sydney central business district. The area includes vast expanses of wilderness and is equivalent in area to almost one third of Belgium, or twice the size of Brunei.\n\nThe area is called \"Blue Mountains\" based on the fact that when atmospheric temperature rise, the essential oil of various eucalyptus species evaporates and disperse in the air, then visible blue spectrum of sunlight propagates more than other colours. Therefore, the reflected landscape from mountains seems bluish by human eyes.\n", "Section::::In science and nature.:Purple mountains phenomenon.\n\nIt has been observed that the greater the distance between a viewers eyes and mountains, the lighter and more blue or purple they will appear. This phenomenon, long recognized by Leonardo da Vinci and other painters, is called aerial perspective or atmospheric perspective. The more distant the mountains are, the less contrast the eye sees between the mountains and the sky.\n", "Section::::History.:During westward expansion of the United States.\n\nIn the mid-1800s, the Blue Mountains were a formidable obstacle to settlers traveling on the Oregon Trail and were often the last mountain range American pioneers had to cross before either reaching southeast Washington near Walla Walla or passing down the Columbia River Gorge to the end of the Oregon Trail in the Willamette Valley near Oregon City.\n\nSection::::History.:Modern travel.\n", "The farther away an object is, the more blue it often appears to the eye. For example, mountains in the distance often appear blue. This is the effect of atmospheric perspective; the farther an object is away from the viewer, the less contrast there is between the object and its background colour, which is usually blue. In a painting where different parts of the composition are blue, green and red, the blue will appear to be more distant, and the red closer to the viewer. The cooler a colour is, the more distant it seems.\n\nSection::::Science and nature.:Astronomy.\n", "BULLET::::- Mount Irvine ()\n\nSection::::Geography.:Geology.\n", "Section::::History.\n\nSection::::History.:Aboriginal inhabitants.\n\nThe Blue Mountains have been inhabited for millennia by the Gundungurra people, now represented by the Gundungurra Tribal Council Aboriginal Corporation based in Katoomba, and, in the lower Blue Mountains, by the Darug people, now represented by the Darug Tribal Aboriginal Corporation.\n\nThe Gundungurra creation story of the Blue Mountains tells that Dreamtime creatures Mirigan and Garangatch, half fish and half reptile, fought an epic battle which scarred the landscape into the Jamison Valley.\n", "Section::::Composition.\n", "Section::::Weather.\n", "The first documented use of the name \"Blue Mountains\" appears in Captain John Hunter’s account of Phillip’s 1789 expedition up the Hawkesbury River. Describing the events of about 5 July, Hunter wrote: \"We frequently, in some of the reaches which we passed through this day, saw very near us the hills, which we suppose as seen from Port Jackson, and called by the governor the Blue Mountains.\" During the nineteenth century the name was commonly applied to the portion of the Great Dividing Range from about Goulburn in the south to the Hunter Valley in the north, but in time it came to be associated with a more limited area.\n", "Section::::Background.\n", "The Blue Mountains are popular for hiking and camping. 
The traditional Blue Mountain trek is a 7-mile (10 km) hike to the peak and consists of a 3,000-foot (1,000 m) increase in elevation. Jamaicans prefer to reach the peak at sunrise, thus the 3–4 hour hike is usually undertaken in darkness. Since the sky is usually very clear in the mornings, Cuba can be seen in the distance. Some of the plants found on the Blue Mountain cannot be found anywhere else in the world and they are often of a dwarfed sort. This is mainly due to the cold climate which inhibits growth. \n", "Section::::Occurrence.:United States.\n", "Light from the sky is a result of the Rayleigh scattering of sunlight, which results in a blue color perceived by the human eye. On a sunny day, Rayleigh scattering gives the sky a blue gradient, where it is darkest around the zenith and bright near the horizon. Light rays incoming from overhead encounter only a small fraction of the air mass that those coming along a horizontal path encounter. Hence, fewer particles scatter the zenithal sunbeam, and thus the light remains a darker blue. The blueness is at the horizon because the blue light coming from great distances is also preferentially scattered. This results in a red shift of the distant light sources that is compensated by the blue hue of the scattered light in the line of sight. In other words, the red light scatters also; if it does so at a point a great distance from the observer it has a much higher chance of reaching the observer than blue light. At distances nearing infinity, the scattered light is therefore white. Distant clouds or snowy mountaintops will seem yellow for that reason; that effect is not obvious on clear days, but very pronounced when clouds are covering the line of sight, reducing the blue hue from scattered sunlight.\n", "Blue Mountains (New Zealand)\n\nBlue Mountains are a range of rugged hills in West Otago, in southern New Zealand. They form a barrier between the valleys of the Clutha and Pomahaka Rivers. They lie between the towns of Tapanui and Lawrence and rise to 1019 metres (3280 ft).\n", "Blue Mountain (Pennsylvania)\n\nBlue Mountain Ridge, Blue Mountain, or the Blue Mountains of Pennsylvania is part of the geophysical makeup of the Ridge-and-Valley Appalachians in the U.S. state of Pennsylvania. It is a ridge that forms the southern and eastern edge of the Appalachian mountain range spanning over from the Delaware Water Gap as it cuts across the eastern half of the state on a slight diagonal from New Jersey tending southerly until it turns southerly curving into Maryland, and beyond.\n", "Section::::Climate.\n", "The Blue Ridge Mountains are noted for having a bluish color when seen from a distance. Trees put the \"blue\" in Blue Ridge, from the isoprene released into the atmosphere, thereby contributing to the characteristic haze on the mountains and their distinctive color.\n", "Arthur Phillip, the first governor of New South Wales, first glimpsed the extent of the Blue Mountains from a ridge at the site of today's Oakhill College, Castle Hill. He named them the Carmarthen Hills, \"some forty to sixty miles distant...\" and he reckoned that the ground was \"most suitable for government stock\". 
This is the location where Gidley King in 1799 established a prison town for political prisoners from Ireland and Scotland.\n", "Section::::Evolution.\n", "After the sun has also set for these altitudes at the end of nautical twilight, the intensity of light emanating from earlier mentioned lines decreases, until the oxygen-green remains as the dominant source.\n\nWhen astronomical darkness has set in, the green 557.7 nm oxygen line is dominant, and atmospheric scattering of starlight occurs.\n\nDifferential refraction causes different parts of the spectrum to dominate, producing a golden hour and a blue hour.\n\nSection::::Relative contributions.\n", "The sun is not the only object that may appear less blue in the atmosphere. Far away clouds or snowy mountaintops may appear yellowish. The effect is not very obvious on clear days but is very pronounced when clouds cover the line of sight, reducing the blue hue from scattered sunlight. At higher altitudes, the sky tends toward darker colors since scattering is reduced due to lower air density; an extreme example is the moon, where there is no atmosphere and no scattering, making the sky on the moon black even when the sun is visible.\n", "Daylight periods change as the seasons change. From May to August, darkness does not linger for long. Instead of rising or setting, the sun circles just above the horizon, turning to darkness only for a few hours when the sun circles behind the mountains. During the quick seasons of spring and autumn, the length of daylight changes by six to eight minutes each day.\n\nSection::::Flora.\n", "Blue Mountains\n\nBlue Mountains may refer to:\n\nSection::::Geography.\n\nBULLET::::- Blue Mountains (New South Wales), Australia\n\nBULLET::::- City of Blue Mountains, a local government area west of Sydney\n\nBULLET::::- Blue Mountains National Park\n\nBULLET::::- Blue Mountains railway line\n\nBULLET::::- Electoral district of Blue Mountains\n\nBULLET::::- Greater Blue Mountains Area, a World Heritage Site\n\nBULLET::::- Blue Mountains (Nunavut), Canada\n\nBULLET::::- The Blue Mountains, Ontario, a town in Canada\n\nBULLET::::- Blue Mountains (Congo), northwest of Lake Albert, Democratic Republic of the Congo\n\nBULLET::::- Sinimäed Hills (Blue Mountains) in Estonia, near Narva\n\nBULLET::::- Nilgiri mountains (Blue Mountains), southern India\n\nBULLET::::- Blue Mountains (Jamaica)\n", "At sunrise and sunset, the light is passing through the atmosphere at a lower angle, and traveling a greater distance through a larger volume of air. Much of the green and blue is scattered away, and more red light comes to the eye, creating the colors of the sunrise and sunset and making the mountains look purple.\n\nA Crayola crayon called Purple Mountains' Majesty (or Purple Mountain Majesty) is named after this natural phenomenon. It was first formulated in 1993.\n\nSection::::Mythology.\n", "Section::::Geography.\n\nThe ridge of Blue Mountain runs for through Pennsylvania, reaching an elevation of above sea level just north of the Pennsylvania Turnpike, near the borough of Newburg. Most of the ridgecrest, however, only reaches between and in elevation. The mountain's width varies from to .\n" ]
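The Rayleigh-scattering passages above can be made quantitative: scattered intensity varies roughly as 1/λ⁴, so shorter (bluer) wavelengths scatter far more strongly than longer (redder) ones. A worked ratio, using representative wavelengths of my own choosing rather than figures taken from the passages:

```python
# Rayleigh scattering scales roughly as 1 / wavelength**4, so comparing
# two wavelengths only requires the fourth power of their ratio.
blue_nm = 450.0  # roughly blue light (assumed representative value)
red_nm = 700.0   # roughly red light (assumed representative value)

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered about {ratio:.1f}x more strongly than red.")
# Blue light is scattered about 5.9x more strongly than red.
```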
[ "Mountains are blue from afar." ]
[ "Mountains are not blue, the refraction of light in our atmosphere causes them to look this way." ]
[ "false presupposition" ]
[ "Mountains are blue from afar.", "Mountains are blue from afar." ]
[ "false presupposition", "normal" ]
[ "Mountains are not blue, the refraction of light in our atmosphere causes them to look this way.", "Mountains are not blue, the refraction of light in our atmosphere causes them to look this way." ]
2018-13288
Do I need a phone line for the NBN in Australia? If it's no longer a phone line, what's an NBN connection called?
Depending on what you get (FTTC, FTTN, FTTP, HFC, FW etc.), you might need a phone line coming into the house. For FTTN and FTTC you need the four wires which currently go to your nearby Telstra pillar. For everything else you don't need these four wires anymore.
[ "NBN\n\nNBN may refer to:\n\nSection::::Television networks.\n\nBULLET::::- NBN Television, an Australian television network serving northern New South Wales\n\nBULLET::::- People's Television Network, the Philippine government national television station formerly known as National Broadcasting Network\n\nBULLET::::- National Broadcasting Network (Lebanon) is the official television of the Lebanese Amal Movement\n\nBULLET::::- Nagoya Broadcasting Network, also known as \"Mētele\", a network television station of All-Nippon News Network in Nagoya, Japan\n\nBULLET::::- Nanjing Broadcasting Network, in Nanjing, China\n\nSection::::Organizations.\n\nBULLET::::- Bureau of Normalization, the Belgian national organization for standardization\n", "Each access to the network is through copper connections using existing phone plugs. An NBN-provided mains powered FTTC connection device provides one ethernet port for connection to a ethernet and/or wireless router.\n\nVoice services are provided by voice over IP (VOIP) where supported by end user-supplied equipment.\n\nSection::::NBN technologies.:Fibre to the curb.:FTTC network.\n", "HFC is legacy technology purchased by NBN Co from Telstra and Optus. The Telstra HFC network is being maintained, it was found that the Optus HFC network was uneconomic to bring up to an acceptable standard, with connections now to be provided by FTTC.\n\nThe upgrade path for Telstra HFC-connected premises is DOCSIS 3.1.\n\nA cable modem provides networking, and an ISP router provides telephony via VOIP.\n\nSection::::NBN technologies.:Fixed wireless.\n\n2,600 transmission towers connected by microwave and optical fibre to exchanges will use TD-LTE 4G mobile broadband technology to cover around 500,000 premises in rural areas.\n\nSection::::NBN technologies.:Fixed wireless.:The premises.\n", "The premises in the fixed wireless area were to be fitted with a roof-mounted antenna allowing a connection to a wireless base station.\n\nNBN Co provides a modem with four UNI-D ports. Telephone connections are by VOIP. Where a copper connection is available users requiring connections during electrical power outages are encouraged to keep that.\n\nSection::::NBN technologies.:Fixed wireless.:Fixed wireless network.\n\nA 4G LTE fixed wireless network was to link premises to a base station in turn linked to a POI via a backhaul.\n", "The NBN network, as of 2017, included wired communication: copper, optical and hybrid fibre-coaxial; and radio communication: satellite and fixed wireless networks at 121 Points of Interconnect (POI) typically located in Telstra owned telephone exchanges throughout Australia. It also sold access for mobile telecommunication backhaul to mobile telecommunications providers.\n\nDetailed network design rules as required by the Special Access Undertaking agreed by NBN Co and the Australian Competition and Consumer Commission were released on 19 December 2011, with updates on 18 September 2012, 30 June 2016 and 30 June 2017.\n\nThe MTM comprises:\n\nBULLET::::- Wired communication\n", "The agreement with Telstra required that the copper telephone network be decommissioned in an area 18 months after optic fibre is ready for service and that new connections were to be made to the optic fibre network and not the copper network. In some cases, premises have been left without service due to lengthy delays in establishing NBN connections. 
Telstra advises the use of the mobile network for phone and internet in these cases.\n\nSection::::2011.:Agreement with Optus (23 June 2011).\n", "An initial request for proposal (RFP) to build the NBN was issued but not executed. Organisations lodging compliant proposals were neither able to meet the requirements nor able to raise the necessary capital. A non-compliant proposal was received from Telstra and they were excluded from consideration.\n\nSection::::History.:2009.\n", "The number of premises assigned to each base station was to be limited to ensure users received a 'good service' because of the 'high[er] throughput'. Users at the edge of the coverage for each base station were to receive a peak speed of 12 megabits per second. The speed increases 'considerably' closer to the base station.\n\nSection::::NBN technologies.:Satellite service.\n\nTwo Sky Muster satellites provide NBN services to locations outside the reach of other technologies, including Christmas Island, Lord Howe and Norfolk Islands.\n\nSection::::NBN technologies.:Satellite service.:The premises.\n", "Voice services are provided by VOIP where supported by the modem.\n\nSection::::NBN technologies.:Fibre to the node (FTTN).:FTTN network.\n\nOptical fibre goes from the exchange to a node. A run of copper goes from the node to the existing DA (Distribution Area) pillars, then a copper pair runs to each end point. Each node can serve up to 384 homes.\n\nSection::::NBN technologies.:Fibre to the curb.\n\nPreviously fibre to the distribution point (FTTdp)\n\nSection::::NBN technologies.:Fibre to the curb.:Premises.\n", "BULLET::::- National Biodiversity Network, the partnership initiative sharing information about wildlife in the UK, championed by the NBN Trust based in Newark\n\nBULLET::::- National Broadband Network, a government led high-speed broadband network in Australia\n\nBULLET::::- National Broadband Network of the Philippines, the subject of the Philippine National Broadband Network controversy\n\nBULLET::::- NBN Co, (trading as nbn™), an Australian government-owned corporation tasked to design, build and operate Australia's National Broadband Network\n\nBULLET::::- Nefesh B'Nefesh, an organization that encourages immigration to Israel from North America and other English-speaking countries\n\nSection::::Publications.\n\nBULLET::::- \"North by Northwestern\", an online magazine at Northwestern University\n\nSection::::Other.\n", "BULLET::::- 1422 – Premier Technologies\n\nBULLET::::- 1423 – Soul Pattinson\n\nBULLET::::- 1428 – Verizon Australia\n\nBULLET::::- 1431 – Vodafone Hutchison\n\nBULLET::::- 1434 – Symbio Networks\n\nBULLET::::- 1441 – Soul Pattinson\n\nBULLET::::- 1447 – TransACT\n\nBULLET::::- 1450 – Pivotel\n\nBULLET::::- 1455 – Netsip\n\nBULLET::::- 1456 – Optus\n\nBULLET::::- 1464 – Agile\n\nBULLET::::- 1466 – Primus\n\nBULLET::::- 1468 – Telpacific\n\nBULLET::::- 1469 – Lycamobile\n\nBULLET::::- 1474 – Powertel\n\nBULLET::::- 1477 – Vocus\n\nBULLET::::- 1488 – Symbio Networks\n\nBULLET::::- 1499 – VIRTUTEL\n\nSection::::Override prefixes.:Supplementary Control service (183) works from both landline and mobile.\n\nBULLET::::- 1831 – Block caller-id sending\n", "National Broadband Network\n\nThe National Broadband Network (NBN) is an Australian national wholesale open-access data network project. It includes wired and radio communication components rolled out and operated by NBN Co Limited. 
Retail service providers (RSPs), typically Internet service providers, contract with NBN to access the network and sell fixed internet access to end users.\n", "Initial costs and timing for the Coalition NBN were of public funding to construct by 2019.\n\nIn December a new agreement was finalised with Telstra and Optus for purchase of copper and HFC networks, for a similar cost to the existing compensation for shutting down those networks. Telstra accepted $11B for its part of the network, less a discount for a “remediation credit” where parts of the network required maintenance.\n\nSection::::2014.:Black spot policy (February 2014).\n", "Section::::Internet.:Telstra FTTN.\n\nTelstra proposed to upgrade to Fibre to the Node (FTTN) in 2006 but did not pursue the development because it would be required to share the network.\n\nSection::::Internet.:Wireless broadband.\n", "History of the National Broadband Network\n\nThe National Broadband Network had its origins in 2006 when the Federal Labor Opposition led by Kim Beazley committed the Australian Labor Party, if elected to government to a 'super-fast' national broadband network. Initial attempts to engage key businesses in Australian telecommunications in planning and development; and implementation and operation failed with NBN Co being set up in 2010 to have carriage of the 'largest infrastructure' project in Australia's history.\n\nCompletion of the project is anticipated to be in the early 2020s.\n\nSection::::2006–2007.\n\nSection::::2006–2007.:Pre-2007 federal election.\n", "Initial planning and work was commenced under the Labor Party's first Rudd government.\n\nSection::::2008.\n\nSection::::2008.:Initial Request for Proposal.\n\nRequest for Proposal (RFP) to build the NBN issued, compliant proposals were received from Acacia, Axia NetMedia, Optus on behalf of Terria, TransACT and the Tasmanian Government (covering their respective states only), a non-compliant proposal was received from Telstra and they were excluded from consideration.\n\nThere were suggestions that if the project were to go ahead, Telstra's exclusion could lead to them being entitled to compensation estimated at .\n", "Section::::History.\n\nSection::::History.:Origins.\n", "BULLET::::- Stephen Rue - Managing Director & Chief Executive Officer from 18 September 2018 (Chief Financial Officer from July 2014 until appointment as MD)\n\nBULLET::::- Drew Clarke – Non-executive director (from 22 August 2017 for a three-year term)\n\nBULLET::::- Patrick Flannigan – Non-executive director\n\nBULLET::::- Shirley In’t Veld – Non-executive director\n\nBULLET::::- Michael Malone - Non-executive director (from 20 April 2016)\n\nBULLET::::- Zoe McKenzie – Non-Executive Director (1 July 2018 – 30 June 2021)\n\nBULLET::::- Justin Milne – Non-executive director\n\nBULLET::::- Kerry Schott – Non-executive director\n\nSection::::Board.:Former directors.\n", "After the election of the Abbott government in 2013 a Multi Technological Mix was implemented, replacing FTTP where development was yet to start with Fibre To The Node and also repurposing the Telstra and Optus hybrid fibre-coaxial networks.\n\nSection::::Core technologies, the network, backhaul and the local loop.:Cable.\n\nIn the late 1990s, Telstra and Optus rolled-out separate cable Internet services, focusing on the east coast.\n\nSection::::Core technologies, the network, backhaul and the local loop.:Satellite.\n\nThe Overseas Telecommunications Commission (OTC) was established by Australia in August 1946 with 
responsibility for all international telecommunications services into, through and out of Australia.\n", "The stations were purchased by Westpac in 2006. In 2009 the business made a profit of which increased to in 2010.\n", "While primarily operating as a reseller of telecommunications services, Aussie Broadband operates its own wireless broadband network in some regional areas of Victoria and South Australia and has equipment at 34 ADSL exchanges across some states of Australia. In late 2016, the company began implementing its own backhaul infrastructure to interface the NBN.\n", "The and spectrums were to be used to deliver these fixed wireless services covering approximately 4 per cent of the non-fibre population. Unlike the mobile networks, only premises can connect to NBN's fixed wireless network.\n\n2,600 transmission towers connected by optical fibre to exchanges will provide TD-LTE 4G mobile broadband technology to cover around 500,000 premises.\n", "Telstra in 2006 proposed replacing its copper network with an optical fibre node network with the drop connection into end user premises being the existing copper cable. They abandoned this as under competition policy they would be required to open their network to competing carriers on a wholesale basis.\n\nFurther options were explored with the first Rudd government deciding to set up a National Broadband Network using Fibre To The Premises as the main carrier network, supported by satellite and wireless to remote areas.\n", "A fast broadband initiative was announced in the run-up to the 2007 federal election by the Labor opposition with an estimated cost of including a government contribution of that would be raised in part by selling the Federal Government's remaining shares in Telstra.\n\nThe Labor Party Rudd government was elected on 24 November 2007 and initial planning commenced.\n\nThe NBN was originally to deliver its wholesale service through fibre to the node (FTTN) and reach approximately 98% of premises in Australia by . A new satellite network would be built to reach the rest of the country.\n\nSection::::History.:2008.\n", "Section::::Core technologies, the network, backhaul and the local loop.:Copper cable and optical fibre networks.\n\nPrior to the government opening telecommunications to multi player competition the PMG (and later Telecom Australia) operated a vertically integrated system, providing the Core network, backhaul, ancillary networks and a range of services to end users.\n\nWith opening telecommunications to multi provider competition the government required Telstra to sell wholesale access to its core facilities and networks.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-00432
Why do UV light bulbs look so different to sunlight when the sun produces UV light also?
UV is actually not visible, so you would not see it from the sun to begin with. Furthermore, it is only about 10% of the sun's total light output (so only 1/10th of the total light, the rest of which is visible "white" light and infrared). UV lights of all types tend to have a more indigo tint because ultraviolet light is closest in wavelength to that color. By focusing on that range of the light spectrum, a UV bulb can emit more UV light, and more efficiently, than if it emitted normal light, AND it minimizes interference with the UV light.
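The wavelength bands mentioned in this answer and in the lamp passages that follow can be summarised in a small classifier. This is a sketch using the commonly cited 100/280/315/400 nm band edges (the exact boundaries vary slightly between standards):

```python
# Classify a wavelength into the UV bands discussed in the passages.
# Band edges are assumed from the widely used 100/280/315/400 nm divisions.
def classify(wavelength_nm: float) -> str:
    if 100 <= wavelength_nm < 280:
        return "UVC"
    if 280 <= wavelength_nm < 315:
        return "UVB"
    if 315 <= wavelength_nm < 400:
        return "UVA"
    if 400 <= wavelength_nm <= 700:
        return "visible light"
    return "outside the UV/visible range"

# The two mercury emission lines mentioned in the lamp passages below:
for nm in (253.7, 365.0):
    print(f"{nm} nm -> {classify(nm)}")
# 253.7 nm -> UVC
# 365.0 nm -> UVA
```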
[ "Section::::Types of ultraviolet lamps.:LEDs.\n\nUV LED devices are capable of emitting a narrow spectrum of radiation (+/- 10 nm), while mercury lamps have a broader spectral distribution. Fluorescent ultraviolet lamps can be fairly narrow, although not as narrow as LEDs.\n", "Three types of fluorescent lamps are commonly used for UV testing. Two of these are of the type UVB (medium wavelength UV), while the third is UVA (longer wavelength UV similar to black light). All these lamps produce mostly UV as opposed to visible or infrared light. The lamp used, and therefore the wavelength of UV light produced will affect how realistic the final degradation results will be. In reality, natural sunlight contains radiation from many areas of the spectrum. This includes both UVA and UVB, however the UVB radiation is at the lowest end of natural light and is less predominant than UVA. Since it has a shorter wavelength, it also has a higher energy. This makes UVB more damaging not only because it increase chemical reaction kinetics, but also because it can initiate chemical reactions to occur which would not normally be possible under natural condition. For this reason, testing using only UVB lamps have been shown to have poor correlation relative to natural weather testing of the same samples.\n", "Section::::Artificial sources.:Incandescent lamps.\n\n'Black light' incandescent lamps are also made, from an incandescent light bulb with a filter coating which absorbs most visible light. Halogen lamps with fused quartz envelopes are used as inexpensive UV light sources in the near UV range, from 400 to 300 nm, in some scientific instruments. Due to its black-body spectrum a filament light bulb is a very inefficient ultraviolet source, emitting only a fraction of a percent of its energy as UV.\n\nSection::::Artificial sources.:Gas-discharge lamps.\n", "Fluorescent lamps made specifically for UV curing are also available. These have the ability to dial into specific frequencies at a lower price point as fluorescent lamps are an established technology and the spectrum is easily controlled by the type of phosphor used. They can produce frequencies that LEDs and mercury vapor lamps can not, including multiple frequencies. They are somewhat less efficient than LEDs or Mercury vapor but cost a fraction of the price of the other systems. They allow for curing all around an item by using multiple tubes and off the shelf ballast systems.\n", "The addition of gallium to the lamp yields a strong output in the longwave range between 400 and 450 nm. This makes the V lamp a good choice for curing white pigmented inks and base coats containing titanium dioxide which blocks the most shortwave UV.\n\nSection::::Types of ultraviolet lamps.:Fluorescent lamps.\n", "Section::::Types of ultraviolet lamps.\n\nSection::::Types of ultraviolet lamps.:Mercury vapor lamp (H type).\n\nThe mercury lamp has an output in the short wave UV range between 220 and 320 nm (nanometers) and a spike of energy in the longwave range at 365 nm. The H lamp is a good choice for clear coatings and thin ink layers and produces hard surface cures and high gloss finishes.\n\nSection::::Types of ultraviolet lamps.:Mercury vapor lamp with iron additive (D type).\n", "Fluorescent lamps are used for UV curing in a number of applications. 
In particular, these are used where the excessive heat of mercury vapor is undesirable, or when an item needs to be surrounded by light from multiple sources rather than lit by a single source, such as musical instruments. Fluorescent lamps can be created that produce ultraviolet anywhere within the UVA/UVB spectrum. Additionally, lamps that have multiple peaks are possible, allowing a wider variety of photoinitiators to be used. While fluorescent lamps are less efficient at producing UV than mercury vapor, newer initiators require less total energy, offsetting this disadvantage. Fluorescent lamps in a wide variety of sizes and wattages are available.\n", "UV sources for UV curing applications include UV lamps, UV LEDs, and excimer flash lamps. Fast processes such as flexo or offset printing require high-intensity light focused via reflectors onto a moving substrate and medium, so high-pressure Hg (mercury) or Fe (iron, doped)-based bulbs are used, energized with electric arcs or microwaves. Lower-power fluorescent lamps and LEDs can be used for static applications. Small high-pressure lamps can have light focused and transmitted to the work area via liquid-filled or fiber-optic light guides.\n", "Mercury vapor lamps are the industry standard for curing products with ultraviolet light. The bulbs work by passing a high voltage through them, vaporizing the mercury. An arc is created within the mercury which emits a spectral output in the UV region of the light spectrum. The light output peaks in the 240–270 nm and 350–380 nm ranges. This intense spectrum of light is what causes the rapid curing of the different applications being used.\n", "BULLET::::- the samples are rotated to ensure a homogeneous exposure;\n\nBULLET::::- the incident light is supplied by four medium-pressure mercury-vapor lamps filtered by the borosilicate envelopes of the lamps; the incident light does not contain any radiation with a wavelength shorter than 300 nm. Although the spectral distribution does not simulate solar light, the vibrational relaxations which occur from each excited state ensure the absence of any wavelength effect under the mercury arc excitation, the light spectral distribution influencing only the rate of the photoreactions. That concept has been extensively verified over the last 30 years;\"\n", "Sources that rely on fluorescence have a different emission spectrum shape than do thermal sources. Some wavelengths will be produced with greater amplitude than others. Fluorescent sources used for lighting, such as fluorescent lamps, white light emitting diodes, and metal halide lamps, are intended to produce light at all wavelengths, but the distribution is different from thermal sources and so colors will appear different under these forms of lighting than under daylight; some colors that match under one light source may not appear the same under another, a phenomenon called metamerism.\n", "Shortwave UV lamps are made using a fluorescent lamp tube with no phosphor coating, composed of fused quartz, since ordinary glass absorbs UVC. These lamps emit ultraviolet light with two peaks in the UVC band at 253.7 nm and 185 nm due to the mercury within the lamp, as well as some visible light. From 85% to 90% of the UV produced by these lamps is at 253.7 nm, whereas only 5–10% is at 185 nm. The fused quartz tube passes the 253.7 nm radiation but blocks the 185 nm wavelength. Such tubes have two or three times the UVC power of a regular fluorescent lamp tube. 
These low-pressure lamps have a typical efficiency of approximately 30–40%, meaning that for every 100 watts of electricity consumed by the lamp, they will produce approximately 30–40 watts of total UV output. They also emit bluish-white visible light, due to mercury's other spectral lines. These \"germicidal\" lamps are used extensively for disinfection of surfaces in laboratories and food-processing industries, and for disinfecting water supplies.\n", "As such, the UVA vs UVB rating on lamps only tells you the relative amount of UV, making a 5% lamp really a lamp whose UV spectrum is 5% UVB and 95% UVA. There are no accepted published numbers for rating the overall power for lamps, except the TE (time exposure), which is almost as useless for making comparisons.\n", "Section::::Lamp types.:Fluorescent lamp.\n\nFluorescent lamps consist of a glass tube that contains mercury vapour or argon under low pressure. Electricity flowing through the tube causes the gases to give off ultraviolet energy. The insides of the tubes are coated with phosphors that give off visible light when struck by ultraviolet energy. Fluorescent lamps have much higher efficiency than incandescent lamps. For the same amount of light generated, they typically use around one-quarter to one-third the power of an incandescent.\n\nSection::::Lamp types.:LED lamp.\n", "In the last few years an emerging type of UV curing technology called UV LED curing has entered the marketplace. This technology is growing rapidly in popularity and has many advantages over mercury-based lamps, although it is not the right fit for every application.\n", "To reduce unintentional ultraviolet (UV) exposure, and to contain hot bulb fragments in the event of explosive bulb failure, general-purpose lamps usually have a UV-absorbing glass filter over or around the bulb. Alternatively, lamp bulbs may be doped or coated to filter out the UV radiation. With adequate filtering, a halogen lamp exposes users to less UV than a standard incandescent lamp producing the same effective level of illumination without filtering.\n", "Zoos have UV-B lamps for reptiles kept indoors. Reptile keepers in a home environment can purchase UV-B/UVA emitting bulbs from pet stores to provide their reptiles with the required amount of UVB/UVA they need to generate vitamin D3. Such lights are branded Creature World or Repti Glo and used for a period of 10–12 hours per day to give the required exposure.\n\nThese lamps simulate the sun spectrum and thus produce mostly visible light and a very small amount of UV-B light, like the sun.\n\nSection::::Cancer risks.\n", "High-pressure bulbs are 3 to 5 inches long and typically powered by a ballast with 250 to 2,000 watts. The most common is the 400 watt variety that is used as an added face tanner in the traditional tanning bed. High-pressure lamps use quartz glass, and as such do not filter UVC. Because UVC can be deadly, a special dichroic filter glass (usually purple) is required that will filter out the UVC and UVB. The goal with high-pressure tanning bulbs is to produce a high amount of UVA only. Unfiltered light from a high-pressure lamp is rich in UVC, which is used in germicidal lamps for water purification, but it damages human skin.\n", "Colored inks are also available, where the ink is visible in normal light (as with a regular tattoo) and the ink will glow vividly under UV light. 
Due to the mixing of visible and UV pigments, the resulting color is not as vibrant in either lighting situation as a dedicated ink.\n", "Most types of glass will allow longwave UV to pass, but absorb all the other UV wavelengths, usually from about 350 nm and below. For UV photography it is necessary to use specially developed lenses having elements made from fused quartz or quartz and fluorite. Lenses based purely on quartz show a distinct focus shift between visible and UV light, whereas the fluorite/quartz lenses can be fully corrected between visible and ultraviolet light without focus shift. Examples of the latter type are the Nikon UV-Nikkor 105 mm f/4.5, the Coastal Optics 60 mm f/4.0, the Hasselblad (Zeiss) UV-Sonnar 105 mm, and the Asahi Pentax Ultra Achromatic Takumar 85 mm f/3.5.\n", "For the average user, UV radiation from indoor lights does not appear to be a concern. For those with skin sensitivity, long-term indoor exposure may be a concern, in which case they may want to use a bulb with lower UV radiation output. There seems to be more variability within bulb types than between them, but the best option is shielded CFLs.\n", "Section::::Applications in printing and photography.\n\nUV filters span the color spectrum and are used for a wide variety of applications. Ortho Red and Deep Ortho Red lights are commonly used in diffusion transfer, typesetting films/paper, and other applications dealing with orthochromatic materials. Yellow Gold, Yellow, Lithostar Yellow, and Fuji Yellow filters or safelights provide safe workspaces for contact proofing applications like screen printing and platemaking. Pan Green, Infrared Green, and Dark Green filters or safelights are commonly used in scanning applications, work with panchromatic film, and papers and x-rays.\n", "Section::::Enhancement opportunities for EUV patterning.:Optimum illumination vs. pitch.:Pitch-dependent focus windows.\n", "UV fluorescent dyes that glow in the primary colors are used in paints, papers, and textiles either to enhance color under daylight illumination or to provide special effects when lit with UV lamps. Blacklight paints that contain dyes that glow under UV are used in a number of art and aesthetic applications.\n\nAmusement parks often use UV lighting to fluoresce ride artwork and backdrops. This often has the side effect of causing riders' white clothing to glow light-purple.\n", "The color temperature is characteristic of black-body radiation; practical white light sources approximate the radiation of a black body at a given temperature, but will not have an identical spectrum. In particular, narrow bands of shorter-wavelength radiation are usually present even for lamps of low color temperature (\"warm\" light).\n" ]
[ "The light from UV light bulbs should look the same as sunlight.", "If the sun produces UV light, then UV light bulbs should not look different than sunlight." ]
[ "UV is only about 10% of the sun's total light output.", "UV lights are not visible, therfore it cannot be used to differentiate the image of the Sun and UV lights. " ]
[ "false presupposition" ]
[ "The light from UV light bulbs should look the same as sunlight.", "If the sun produces UV light, then UV light bulbs should not look different than sunlight." ]
[ "false presupposition", "false presupposition" ]
[ "UV is only about 10% of the sun's total light output.", "UV lights are not visible, therfore it cannot be used to differentiate the image of the Sun and UV lights. " ]
2018-02245
How come the U.S., Australia, New Zealand, and most of Canada have a high majority of English speakers while other countries once colonized by the British do not?
Because in those countries, European immigrants and their descendants *vastly outnumber* the aboriginal people. In colonies such as India or Nigeria, by contrast, the British ruled as a small minority, so English became an administrative second language rather than the population's mother tongue.
[ "There are six large countries with a majority of native English speakers that are sometimes grouped under the term Anglosphere. In numbers of English speakers they are: the United States of America (at least 231 million), the United Kingdom (in England, Scotland, Wales, and Northern Ireland) (60 million), Canada (at least 20 million), Australia (at least 17 million), Republic of Ireland (4.8 million) and New Zealand (4.8 million).\n", "Countries with large communities of native speakers of English (the inner circle) include Britain, the United States, Australia, Canada, Ireland, and New Zealand, where the majority speaks English, and South Africa, where a significant minority speaks English. The countries with the most native English speakers are, in descending order, the United States (at least 231 million), the United Kingdom (60 million), Canada (19 million), Australia (at least 17 million), South Africa (4.8 million), Ireland (4.2 million), and New Zealand (3.7 million). In these countries, children of native speakers learn English from their parents, and local people who speak other languages and new immigrants learn English to communicate in their neighbourhoods and workplaces. The inner-circle countries provide the base from which English spreads to other countries in the world.\n", "Another substantial community of native speakers is found in South Africa (4.8 million).\n\nSection::::Countries where English is an official language.\n", "English is also the primary natively spoken language in the countries and territories of Anguilla, Antigua and Barbuda, the Bahamas, Barbados, Belize, Bermuda, the British Indian Ocean Territory, the British Virgin Islands, the Cayman Islands, Dominica, the Falkland Islands, Gibraltar, Grenada, Guam, Guernsey, Guyana, the Isle of Man, Jamaica, Jersey, Montserrat, the Pitcairn Islands, Saint Helena, Ascension and Tristan da Cunha, Saint Kitts and Nevis, Saint Vincent and the Grenadines, South Georgia and the South Sandwich Islands, Trinidad and Tobago, the Turks and Caicos Islands, and the United States Virgin Islands.\n", "English-speaking world\n\nOver 2 billion people speak English, making English the largest language by number of speakers, and the third largest language by number of native speakers. With 300 million native speakers, the United States of America is the largest English speaking country. As pictured in the pie graph below, most native speakers of English are Americans.\n\nAdditionally, there are 60 million native speakers in the United Kingdom, 29 million in Canada, 25.1 million in Australia, 4.7 million in the Republic of Ireland, and 4.9 million in New Zealand. \n", "Estimates of the numbers of second language and foreign-language English speakers vary greatly from 470 million to more than 1 billion, depending on how proficiency is defined. Linguist David Crystal estimates that non-native speakers now outnumber native speakers by a ratio of 3 to 1. 
In Kachru's three-circles model, the \"outer circle\" countries are countries such as the Philippines, Jamaica, India, Pakistan, Singapore, and Nigeria with a much smaller proportion of native speakers of English but much use of English as a second language for education, government, or domestic business, and its routine use for school instruction and official interactions with the government.\n", "Besides the major varieties of English, such as British English, American English, Canadian English, Australian English, Irish English, New Zealand English and their sub-varieties, countries such as South Africa, India, the Philippines, Jamaica and Nigeria also have millions of native speakers of dialect continua ranging from English-based creole languages to Standard English. Other countries such as Ghana and Uganda also use English as their primary official languages.\n", "BULLET::::- When taken from this list and added together, the total number of English speakers in the world adds up to around 1,200,000,000. Likewise, the total number of native English speakers adds up to around 350,000,000. This implies that there are approximately 850,000,000 people who speak English as an additional language.\n\nSection::::See also.\n\nBULLET::::- English medium education\n\nBULLET::::- English-speaking world\n\nBULLET::::- List of countries where English is an official language\n\nBULLET::::- World Englishes\n\nNon-English speaking populations:\n\nBULLET::::- Arabophone\n\nBULLET::::- Francophone\n\nBULLET::::- Hispanophone\n\nBULLET::::- Iberophone\n\nBULLET::::- Indosphere\n\nBULLET::::- Lusophone\n\nBULLET::::- Russophone\n\nBULLET::::- Sinophone\n\nSection::::References.\n\nSection::::References.:Bibliography.\n", "The first diaspora involved relatively large-scale migrations of mother-tongue English speakers from England, Scotland and Ireland predominantly to North America and the Caribbean, Australia, South Africa and New Zealand. Over time, their own English dialects developed into modern American, Canadian, West Indian, South African, Australian, and New Zealand Englishes. In contrast to the English of Great Britain, the varieties spoken in modern North America and Caribbean, South Africa, Australia, and New Zealand have been modified in response to the changed and changing sociolinguistic contexts of the migrants, for example being in contact with indigenous Native American, Khoisan and Bantu, Aboriginal or Maori populations in the colonies.\n", "Geographical distribution of English speakers\n\nThe article provides details and data regarding the geographical distribution of all English speakers, regardless of the legislative status of the countries where it is spoken. The English language is one of the most widely spoken languages of the world, and it is widely used in international communication as a lingua franca. Many international organizations use English as their official language.\n\nSection::::Statistics.\n\nSection::::Statistics.:Native speakers.\n", "Languages besides English are spoken extensively in provinces with English-speaking majorities. Besides French (which is an official language of the province of New Brunswick and of the three territories), indigenous languages, including Inuktitut and Cree, are widely spoken and are in some instances influencing the language of English speakers, just as traditional First Nations art forms are influencing public art, architecture and symbology in English Canada. 
Immigrants to Canada from Asia and parts of Europe in particular have brought languages other than English and French to many communities, particularly Toronto, Vancouver and other larger centres. On the west coast, for example, Chinese and Punjabi are taught in some high schools; while on the east coast efforts have been made to preserve the Scots Gaelic language brought by early settlers to Nova Scotia. In the Prairie provinces, and to a lesser degree elsewhere, there are a large number of second-generation and later Ukrainian Canadians who have retained at least partial fluency in the Ukrainian language.\n", "The data in the following tables pertain to the population of Canada reporting English as its sole mother tongue, a total of 17,352,315 inhabitants out of 29,639,035. A figure for single ethnic origin responses is provided, as well as a total figure for ethnic origins appearing in single or multiple responses (for groups exceeding 2% of the total English-speaking population). The sum of the percentages for single responses is less than 100%, while the corresponding total for single or multiple responses is greater than 100%. The data are taken from the 2001 Census of Canada.\n", "In the European Union, English is one of 24 official languages and is widely used by institutions, and by a majority of the population as the native language in the United Kingdom and Ireland and as a second language in other member states.\n\nEstimates that include second language speakers vary greatly, from 470 million to more than 2 billion. David Crystal calculates that, as of 2003, non-native speakers outnumbered native speakers by a ratio of 3 to 1. When combining native and non-native speakers, English is the most widely spoken language worldwide.\n", "The settlement history of the English-speaking inner circle countries outside Britain helped level dialect distinctions and produce koineised forms of English in South Africa, Australia, and New Zealand. The majority of immigrants to the United States without British ancestry rapidly adopted English after arrival. Now the majority of the United States population are monolingual English speakers, although English has been given official status by only 30 of the 50 state governments of the US.\n\nSection::::Geographical distribution.:English as a global language.\n", "The British Empire was \"built on waves of migration overseas by British peoples\", who left Great Britain, later the United Kingdom, and reached across the globe and permanently affected population structures in three continents. As a result of the British colonisation of the Americas, what became the United States was \"easily the greatest single destination of emigrant British\", but in the Federation of Australia the British ethnic groups experienced a birth rate higher than anything seen before, resulting in the displacement of indigenous Australians.\n\nSection::::Americas.\n\nSection::::Americas.:Argentina.\n", "Many regions, notably Canada, Australia, India, New Zealand, Pakistan, South Africa, Hong Kong, Malaysia, Brunei, Singapore, Sri Lanka and the Caribbean, have developed their own native varieties of the language, primarily at the spoken and informal written level.\n", "By the 19th century, the expansion of the British Empire, as well as global trade, had led to the spread of English around the world. 
The rising importance of some of England's larger colonies and former colonies, such as the rapidly developing United States, enhanced the value of the English varieties spoken in these regions, encouraging the belief, among the local populations, that their distinct varieties of English should be granted equal standing with the standard of Great Britain.\n\nSection::::Historical context.:Global spread of English.\n\nSection::::Historical context.:Global spread of English.:First dispersal: English is transported to the 'new world'.\n", "According to the 2006 Census of Canada, the population of English-speaking Canadians is between 17,882,775 and 24,423,375, finding the population outside of this designation to be 23,805,130 individuals.\n", "Next comes the outer circle, which includes countries where English is not the native tongue, but is important for historical reasons and plays a part in the nation's institutions, either as an official language or otherwise. This circle includes India, Nigeria, the Philippines, Bangladesh, Pakistan, Malaysia, Tanzania, Kenya, non-Anglophone South Africa and Canada, etc. The total number of English speakers in the outer circle is estimated to range from 150 million to 300 million.\n", "Section::::Geographical distribution.\n\n, 400 million people spoke English as their first language, and 1.1 billion spoke it as a secondary language. English is the largest language by number of speakers. English is spoken by communities on every continent and on islands in all the major oceans.\n", "India has the largest English-speaking population in the Commonwealth, although comparatively few speakers of Indian English are first-language speakers. The same is true of English spoken in other parts of South Asia, e.g. Pakistani English, and Bangladeshi English. South Asian English phonology is highly variable; stress, rhythm and intonation are generally different from those of native varieties. There are also several peculiarities at the levels of morphology, syntax and usage, some of which can also be found among educated speakers.\n", "Despite their prominence as migrants, at no point after the early 1850s did the English-born constitute a majority of the colonial population. In the 1851 Census 50.5% of the total population were born in England, this proportion fell to 36.5% (1861) and 24.3% by 1881. In the most recent Census in 2013, there were 215,589 English-born representing 21.5% of all overseas-born residents or 5 percent of the total population and is still the most-common birthplace outside New Zealand.\n", "Despite this, after the early 1850s the English-born slowly fell from being a majority of the colonial population. In the 1851 census 50.5% of the total population were born in England, this proportion fell to 36.5% (1861) and 24.3% by 1881.\n\nIn the most recent Census in 2013, there were 215,589 English-born representing 21.5% of all overseas-born residents or 5 percent of the total population and is still the most-common birthplace outside New Zealand.\n\nSection::::English diaspora.:Argentina.\n", "The Outer Circle of English was produced by the second diaspora of English, which spread the language through imperial expansion by Great Britain in Asia and Africa. In these regions, English is not the native tongue, but serves as a useful lingua franca between ethnic and language groups. Higher education, the legislature and judiciary, national commerce and so on may all be carried out predominantly in English. 
This circle includes India, Nigeria, Bangladesh, Pakistan, Malaysia, Tanzania, Kenya, non-Anglophone South Africa, the Philippines (colonized by the US) and others. The total number of English speakers in the outer circle is estimated to range from 150 million to 300 million. Singapore, while in the Outer Circle, may be drifting into the Inner Circle as English becomes more often used as a home language (see Languages of Singapore), much as Ireland did earlier. Countries where most people speak an English-based creole and retain standard English for official purposes, such as Jamaica and Papua New Guinea, are also in the Outer Circle.\n", "As decolonisation proceeded throughout the British Empire in the 1950s and 1960s, former colonies often did not reject English but rather continued to use it as independent countries setting their own language policies. For example, the view of the English language among many Indians has gone from associating it with colonialism to associating it with economic progress, and English continues to be an official language of India. English is also widely used in media and literature, and the number of English language books published annually in India is the third largest in the world after the US and UK. However, English is rarely spoken as a first language, with only around a couple hundred thousand first-language speakers, and less than 5% of the population speak fluent English in India. David Crystal claimed in 2004 that, combining native and non-native speakers, India now has more people who speak or understand English than any other country in the world, but the number of English speakers in India is very uncertain, with most scholars concluding that the United States still has more speakers of English than India.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-05022
Why are brand new cars' model years post-dated?
It’s marketing. If you’re living in 2018 and have a 2019 model you’ll feel special and superior. Manufacturers have traditionally launched next year’s models in the preceding fall, so a car built in August 2018 can be sold as a 2019 model. That’s all.
[ "In other cases, products of a previous model year can continue production, especially if a newer model hasn't yet been released. In that case, the model year remains the same until a new model is introduced. This is to ensure that the model will be seen by the public, and will actually sell a number of vehicles before a new vehicle-model is produced, and people will look at the newer model rather than the previous one. \n", "European and Japanese automakers can utilise the term \"model year\" in respect of model availability dates in North American markets: these often receive updated models significantly later than domestic markets, especially in the event of unforeseen slow sales causing an inventory build-up of earlier versions. \n", "The model year and the actual calendar year of production rarely coincide. For example, a North American 2015 model year automobile is available during most of the 2015 calendar year, but is usually also available from the third quarter of 2014 because production of the 2015 model began in July or August 2014, continuing to May/June/July 2015. \n\nThe variables of build date and design revision number are semi-independent. There is no natural law that forces one to be strictly correlated to the other, other than that:\n\nBULLET::::1. future design revisions cannot have been built in the past, and\n", "In the United States, for regulation purposes (such as VIN numbering and EPA emissions certification), government authorities allow cars of a given model year to be sold starting on January 1 of the previous calendar year. For example, this means that a 2019 model year vehicle can legally go on sale on January 1, 2018. This has resulted in a few cars in the following model year being introduced in advertisements during the NFL's Super Bowl in February. A notable example of an \"early\" model year launch would be the Ford Mustang, introduced as an early 1965 model (informally referred to as \"1964½\") in April 1964\n", "BULLET::::2. most products, in most contexts, tend to be built to the design revision that was the latest one at the time of building.\n\nSection::::Automobiles.\n\nSection::::Automobiles.:United States and Canada.\n", "at the World's Fair, several months before the usual start of the 1965 model year in August 1964.\n\nFor recreational vehicles, the U.S. Federal Trade Commission allows a manufacturer to use a model year up to two years ahead of the date that the vehicle was manufactured.\n\nSection::::Automobiles.:Other countrines.\n", "Starting in the mid-1950s, new car introductions in the fall once again became an anticipated event, as all dealers would reveal the models for the upcoming year each October. In this era before the popularization of computerization, the primary source of information on new models was the dealer. The idea was originally suggested in the 1930s by President Franklin D. Roosevelt during the Great Depression, as a way of stimulating the economy by creating demand. The idea was reintroduced by President Dwight Eisenhower for the same reasons, and this method of introducing next year's models in the preceding autumn lasted well into the 1990s.\n", "The practice of identifying revisions of automobiles by their model year is strongest in Canada and the United States. 
Typically, complete vehicle redesigns of long-standing models occur in cycles of at least five years, with one or two facelifts during the model cycle, and manufacturers introduce such redesigns at various times throughout a calendar year.\n", "To distinguish promos from traditional \"Days Gone\" series models, model baseplates were differentiated. Either \"Days Gone\" or \"Lledo Promotional Model\" began to appear on the chassis, according to need (Force 1988, p. 129). Most models were produced by Lledo, but several 'Code Two' models were manufactured and sold to second parties for label and logo application previously agreed to by Lledo (Force 1988, p. 129).\n\nSection::::Other lines.\n\nSection::::Other lines.:Foreign marketers.\n", "Alfred P. Sloan extended the idea of yearly fashion-change (a practice used by the clothing industry) to General Motors' range of cars in the 1920s. This was an early form of planned obsolescence in the car industry, where yearly styling changes meant consumers could easily discern a car's newness, or lack of it. Other major changes to the model range usually coincided with the launch of the new model year. The practice of beginning production of next year's model before the end of the year is also a long-standing tradition in America; for example, the 1928 model year of the \"Ford Model A\" began production in October 1927 and the 1955 model year of the \"Ford Thunderbird\" began production in September 1954.\n", "Industry practice varies between markets according both to the level of exports to North America, and to the extent to which US-owned subsidiaries dominate the domestic automarket. In the 1960s and 1970s, many new models were traditionally introduced at the London or Paris motor shows during October, and manufacturers owned by US corporations as well as domestically controlled UK auto-makers tended to follow US auto-industry conventions in respect of model years. The concept was never so universally applied in Europe as in North America, however, and since the 1980s, the more commercially critical European Motor Shows have been the March Geneva Motor Show and the September Frankfurt Motor Show or Paris Motor Show. New models have increasingly been launched in June or July even in the UK, where the two remaining US-owned subsidiaries no longer design and build distinctively British Ford and Vauxhall models. All this has left the US-style model-year concept increasingly absent from the European domestic automarkets.\n", "BULLET::::- In order to identify the exact year in passenger cars and multipurpose passenger vehicles with a GVWR of 10,000 pounds or less, one must read position 7 as well as position 10. For passenger cars, and for multipurpose passenger vehicles and trucks with a gross vehicle weight rating of or less, if position seven is numeric, the model year in position 10 of the VIN refers to a year in the range 1980–2009. If position seven is alphabetic, the model year in position 10 of VIN refers to a year in the range 2010–2039.\n", "The \"cutoff year\" as originally promoted by the \"National Street Rod Association\" (NSRA) is 1949. Many custom car shows will only accept 1948 and earlier models as entries, and many custom car organizations will not admit later model cars or trucks (also with some imports - this has been a gray area of what's acceptable e.g. 
a Ford Capri built in the UK or a General Motors - Holden's product, not to mention captives), and/or a vintage import automobile with an American driveline transplant, but this practice is subject to change. Modern-day custom car shows which allow the inclusion of muscle cars have used the 1972 model year as the cutoff since it is considered the end of the muscle car era prior to the introduction of the catalytic converter. The NSRA has announced that starting in 2011 it will switch to a shifting year method where any owner with a car 30 years or older will be allowed membership. So in 2011 the owner of a 1981 model year vehicle will qualify, then in 2012 the owner of a 1982 model year vehicle will qualify, and so on. Additionally, the Goodguys car show organization has moved the year limit for its \"rod\" shows from 1949 to 1954 in recent years.\n", "An automotive model year is categorically defined by the 10th digit of the vehicle identification number (VIN), and simply indicates any manufacturer-specified evolution in mid-cycle of a model range - such as revised paint options, trim options or any other minor specification change. The 10th VIN digit does not relate to the calendar year in which the car is built, although the two may coincide. For example, a vehicle produced between July 2006 and June 2007 may have a 7 as the 10th digit of the VIN, and another vehicle produced between July 2007 and June 2008 may have an 8 in the 10th digit - with the change-over date varying depending on manufacturer, model and year.\n", "In other countries, it is more common to describe the age of a car by its generation instead of the specific year, using terms such as \"third generation\", \"Mark III\" or the manufacturer's code for that generation (such as \"BL\" being the code for a Mazda 3 built between November 2008 and June 2013).\n\nSection::::Automobiles.:Europe.\n\nIn the automotive industry the model year is absolutely defined only by the manufacturer, and not by any local vehicle registration practices or marketing opinions. \n", "BULLET::::- \"If a car's retiring year reads 2009 and its availability cell is coloured green, it means no more units will be produced nor imported, but the last units are still on sale.\"\n", "BULLET::::- \"If a car has no retiring year marked but its availability cell is coloured red, it means either that car is in the middle of a restyling or the maker is waiting for new units to arrive.\"\n", "BULLET::::- \"No future dates shall be given.\"\n", "In the United States, automobile model-year sales traditionally begin with the fourth quarter of the preceding year. So \"model year\" refers to the sales model year; for example, vehicles sold during the period from October 1 to September 30 of the following year belong to a single model year. In addition, the launch of the new model-year has long been coordinated with the launch of the traditional new television season (as defined by A.C. Nielsen) in late September, because of the heavy mutual dependence between television, which needed automakers' products to advertise, and the car companies, which wanted to launch their new models at a high-profile time of year.\n", "There was no official 25th Anniversary model from Ford in 1989, even though this was looked into with several designs on body and performance modifications. In response Ford modified the running horse badge on the passenger side of the dashboard, stating \"25 Years\" on the bottom of the badge. 
These badges were installed beginning in April 1989 for one year, until April 1990, the Anniversary model year. After April 1990, Ford kept the badges in place, without the \"25 Years\" portion. Ford also added a \"25 Years\" watermark on the window sticker with the running horse badge during this time period.\n", "(Specifically, the date is the copyright date for the design of the base of the car, but there are only a handful of cases where that is not the same as the copyright date for the design of the entire car.) The date is usually the year before the car was first introduced, but it is sometimes the same year. For example, a car in the 2001 First Editions series called \"Evil Twin\" was released in 2001 but the year dated on the bottom of the car is 2000.\n", "The 1942-style Ford cars certainly continued to be produced as military staff cars from March 1942 through summer 1945. These would have been registered as 1942, 1943, 1944, and 1945 models. Additionally, a large number of 1942 (and a few 1941) cars held in dealer stocks by government edict, to be doled out to essential users during the conflict, were Fords. Some states titled cars by the year of sale, so it is possible to find 1943, 1944, and 1945 models by virtue of their registrations and titles.\n\nSection::::1946.\n", "Beginning collectors may try to simply identify a year of a promo from its license plate, but not all promos followed this tradition. 1970 and 1971 Thunderbirds had no year-stamped license plates, so telling them apart can be difficult (Doty 1999c, p. 89).\n\nSection::::Promotionals came first.:Frictions and Radios.\n", "BULLET::::- In South Africa the code on a battery to indicate production date is part of the casing and cast into the bottom left of the cover. The code is year and week number (YYWW), e.g. 1336 is for week 36 in the year 2013.\n\nSection::::Use and maintenance.\n\nExcess heat is a main cause of battery failures: when the electrolyte evaporates due to high temperatures, it decreases the effective surface area of the plates exposed to the electrolyte and leads to sulfation. Grid corrosion rates increase with temperature. Low temperatures can also lead to battery failure. \n", "BULLET::::- A buyer of a second-hand vehicle can in theory determine the year of first registration of the vehicle without having to look it up. However, a vehicle is permitted to display a number plate where the age identifier is older (but \"not\" newer) than the vehicle. The wide awareness of how the \"age identifier\" works has led to it being used in advertising by used-car showrooms instead of simply stating a year.\n", "Model year\n\nThe model year (MY) of a product is a number used worldwide, but with a high level of prominence in North America, to describe approximately when a product was produced, and it usually indicates the coinciding base specification (design revision number) of that product.\n", "Car and Driver 10Best\n\n\"Car and Driver\" 10Best is a list annually produced by \"Car and Driver\" (\"C/D\"), nominating what it considers the ten best cars of the year. \"C/D\" also produces the 5Best list, highlighting what it considers the five best trucks of the year.\n\nAll production vehicles for sale in that calendar year are considered with these recent restrictions:\n\nBULLET::::1. The vehicle must be on sale by January\n\nBULLET::::2. It must be priced below 2.5 times the average price of a car that year\n\nBULLET::::3. The manufacturer must provide an example for testing\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-15232
How are satellites able to take such clear images if they're moving extremely fast?
They're really far away. When in a car, look out the passenger window at something close and it will blur. Then look at something in the distance and it will be clear.
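The comparison can be made quantitative: motion blur depends on the angular rate at which the scene sweeps past (speed divided by distance), and the first passage below cites the Rayleigh criterion for resolution without the formula itself (lost in extraction). A minimal sketch; the car speed, fence distance, orbit altitude, and 2.4 m aperture are illustrative assumptions, not values from the source:

```python
# Sketch 1: motion blur scales with the angular rate of the scene
# (speed / distance, small-angle approximation), not with speed alone.
def angular_rate(speed_m_s: float, distance_m: float) -> float:
    return speed_m_s / distance_m

car = angular_rate(30.0, 2.0)        # 30 m/s past a fence 2 m away
sat = angular_rate(7600.0, 500e3)    # ~7.6 km/s seen from 500 km up
print(f"car {car:.1f} rad/s vs satellite {sat:.4f} rad/s")  # ~1000x slower

# Sketch 2: the Rayleigh criterion for a circular aperture,
#   theta ~ 1.22 * wavelength / aperture_diameter,
# sets the best possible ground resolution at a given altitude.
wavelength = 550e-9                  # m, green light
aperture = 2.4                       # m, Hubble-class mirror (assumption)
altitude = 500e3                     # m
theta = 1.22 * wavelength / aperture
print(f"diffraction-limited ground resolution: {theta * altitude:.2f} m")
```

At roughly 1000 times lower angular rate, the same exposure smears about 1000 times less, so the diffraction limit rather than motion ends up bounding image sharpness.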
[ "There have been hundreds of reconnaissance satellites launched by dozens of nations since the first years of space exploration. Satellites for imaging intelligence were usually placed in high-inclination low Earth orbits, sometimes in Sun-synchronous orbits. Since the film-return missions were usually short, they could indulge in orbits with low perigees, in the range of 100–200 km, but the more recent CCD-based satellites have been launched into higher orbits, 250–300 km perigee, allowing each to remain in orbit for several years. While the exact resolution and other details of modern spy satellites are classified, some idea of the trade-offs available can be made using simple physics. The formula for the highest possible resolution of an optical system with a circular aperture is given by the Rayleigh criterion:\n", "The resolution of satellite images varies depending on the instrument used and the altitude of the satellite's orbit. For example, the Landsat archive offers repeated imagery at 30 meter resolution for the planet, but most of it has not been processed from the raw data. Landsat 7 has an average return period of 16 days. For many smaller areas, images with resolution as high as 41 cm can be available.\n", "Canesta’s time-of-flight technology consists of an array of pixels where every pixel can independently determine the distance to the object it sees. This array is in effect a massively parallel LIDAR on a single CMOS chip. At the heart of the technology is a proprietary silicon photo collection structure in each pixel that allows accurate measurement of the arrival time of the collected photons. This photo collection structure is substantially immune to CMOS surface defects that ordinarily adversely affect time of flight operation. This enables time of flight ranging using a low cost CMOS process.\n", "Fast moving satellites can have a Doppler shift of dozens of kilohertz relative to a ground station. The speed, thus magnitude of Doppler effect, changes due to earth curvature. Dynamic Doppler compensation, where the frequency of a signal is changed progressively during transmission, is used so the satellite receives a constant frequency signal.\n\nDoppler shift of the direct path can be estimated by the following formula:\n\nformula_34\n", "On NOAA POES system satellites, the two images are 4 km / pixel smoothed 8-bit images derived from two channels of the advanced very-high-resolution radiometer (AVHRR) sensor. The images are corrected for nearly constant geometric resolution prior to being broadcast; as such, the images are free of distortion caused by the curvature of the Earth.\n", "BULLET::::- Chambers, Lin H.. \"Electromagnetic Spectrum.\" My NASA Data. 6 November 2007. National Aeronautics and Space Administration. 21 November 2008 .\n\nBULLET::::- Chung, Soon-Jo, Miller, David W., and de Weck, Olivier L., \"ARGOS testbed: study of multidisciplinary challenges of future spaceborne interferometric arrays,\" Optical Engineering, vol. 43, no.9, September 2004, pp. 2156–2167. (download PDF file)\n\nBULLET::::- Duffieux, P.M. The Fourier Transform and its Applications to Optics. 2nd edition. New York: John Wiley and Sons, Inc., 1983.\n", "Section::::Imaging satellites.:Private Domain.:GeoEye.\n\nGeoEye's GeoEye-1 satellite was launched on September 6, 2008. The GeoEye-1 satellite has the high resolution imaging system and is able to collect images with a ground resolution of 0.41 meters (16 inches) in the panchromatic or black and white mode. 
It collects multispectral or color imagery at 1.65-meter resolution or about 64 inches.\n\nSection::::Imaging satellites.:Private Domain.:DigitalGlobe.\n", "Since each channel of the AVHRR sensor is sensitive to only one wavelength of light, each of the two images is luminance only, also known as grayscale. However, different materials tend to emit or reflect with a consistent relative intensity. This has enabled the development of software that can apply a color palette to the images which simulates visible light coloring. If the decoding software knows exactly where the satellite was, it can also overlay outlines and boundaries to help in utilizing the resulting images.\n\nSection::::History.\n\nBULLET::::- Developed by the National Earth Satellite Service\n", "QuickBird II (also QuickBird-2 or Quickbird 2) was launched October 18, 2001 from the Vandenberg Air Force Base, California, aboard a Boeing Delta II rocket. The satellite was initially expected to collect at 1 meter resolution but after a license was granted in 2000 by the U.S. Department of Commerce / NASA, DigitalGlobe was able to launch the QuickBird II with 0.61 meter panchromatic and 2.4 meter multispectral (previously planned 4 meter) resolution.\n\nSection::::Mission Extension.\n", "Measurements are made by transmitting pulsed laser beams from Earth ground stations to the satellites. The laser beams then return to Earth after hitting the reflecting surfaces; the travel times are precisely measured, permitting ground stations in different parts of the Earth to measure their separations to better than one inch in thousands of miles.\n\nThe LAGEOS satellites make it possible to determine positions of points on the Earth with extremely high accuracy due to the stability of their orbits.\n", "Section::::Measurement techniques.:Earth-to-space methods.:Optical tracking.\n", "Data received from the satellite is free to the public. There are multiple levels of data available. Level-1 data takes 1–3 days to process, and the user will receive multiple files that they can then piece together to generate an RGB image. Higher level science data can also be requested, which contains data such as surface reflectance.\n", "EGP is entirely passive, and operates by reflecting sunlight or ground-based lasers. The satellite is a 685-kg hollow sphere with a diameter of 2.15 meters, and the surface is covered with 318 mirrors for reflecting sunlight and 1436 corner reflectors for reflecting laser beams. The mirrors are 10x10 inches, and the corner reflectors are one inch in diameter and grouped into 120 laser reflection assemblies.\n\nSection::::Orbit.\n", "At this resolution, details such as buildings and other infrastructure are easily visible. However, this resolution is insufficient for working with smaller objects such as a license plate on a car. The imagery can be imported into remote sensing image processing software, as well as into GIS packages for analysis.\n\nContractors include Ball Aerospace & Technologies, Kodak and Fokker Space.\n\nSection::::QuickBird I.\n", "There is also the constant search for life in other worlds. A satellite system using the interferometric technologies mentioned above would be able to have a much higher resolution than any of the current deep space imaging systems. 
A space-based system also reduces the amount of interference, due to the lack of an atmosphere.\n\nSection::::Future.\n", "BULLET::::- Image registration is the comparison of the image acquired from an imaging sensor to a recorded image (usually from a satellite) which has a known global position. The comparison makes it possible to place the image, and therefore the camera (and with it the aircraft), in a precise global position and orientation, up to a precision which depends on the image resolution.\n", "BULLET::::- The architecture is similar to that of the Pleiades satellites, with a centrally mounted optical instrument, a three-axis star tracker, a fiber-optic gyro (FOG) and four control moment gyros (CMGs).\n\nBULLET::::- SPOT 6 and SPOT 7 are phased in the same orbit as Pléiades 1A and Pléiades 1B at an altitude of 694 km, forming a constellation of 2-by-2 satellites - 90° apart from one another.\n\nBULLET::::- Image product resolution:\n\nBULLET::::- Panchromatic: 1.5 m\n\nBULLET::::- Colour merge: 1.5 m\n\nBULLET::::- Multi-spectral: 6 m\n\nBULLET::::- Spectral bands, with simultaneous panchromatic and multi-spectral acquisitions:\n\nBULLET::::- Panchromatic (450 – 745 nm)\n", "Although to the observer low Earth orbit satellites move at about the same apparent speed as aircraft, individual satellites can be faster or slower; they do not all move at the same speed. Individual satellites never deviate in their velocity (speed and direction). They can be distinguished from aircraft because satellites do not leave contrails. They are lit solely by the reflection of sunlight from solar panels or other surfaces. A satellite's brightness sometimes changes as it moves across the sky. Occasionally a satellite will 'flare' as its orientation changes relative to the viewer, suddenly increasing in reflectivity. Satellites often grow dimmer and are more difficult to see toward the horizons. Because reflected sunlight is necessary to see satellites, the best viewing times are for a few hours immediately after nightfall and a few hours before dawn. \n", "There are some advantages of geosynchronous satellites:\n\nBULLET::::- Get high temporal resolution data.\n\nBULLET::::- Tracking of the satellite by its earth stations is simplified.\n\nBULLET::::- Satellite always in same position.\n", "Section::::Spotting satellites.\n\nSatellite watching is generally done with the naked eye or with the aid of binoculars since most low Earth orbit satellites move too quickly to be tracked easily by telescope. It is this movement, as the satellite tracks across the night sky, that makes them relatively easy to see. As with any sky-watching pastime, the darker the sky the better, so hobbyists will meet with better success further away from light-polluted urban areas. Because geosynchronous satellites do not move relative to the viewer they can be difficult to find and are not typically sought when satellite watching.\n", "\"\"The implication of the Scheimpflug principle is that when a laser beam is transmitted into the atmosphere, the backscattering echo of the entire illuminating probe volume is still in focus simultaneously without diminishing the aperture as long as the object plane, image plane and the lens plane intersect with each other\"\". A two-dimensional CCD/CMOS camera is used to resolve the backscattering echo of the transmitted laser beam.\n", "Images taken with ground-based telescopes are subject to the blurring effect of atmospheric turbulence (seen to the eye as the stars twinkling). 
Many astronomical imaging programs require higher resolution than is possible without some correction of the images. Lucky imaging is one of several methods used to remove atmospheric blurring. Used at a 1% selection or less, lucky imaging can reach the diffraction limit of even 2.5 m aperture telescopes, a resolution improvement factor of at least five over standard imaging systems.\n\nSection::::Demonstration of the principle.\n", "BULLET::::- \"PRISMA\" (Hyperspectral and Panchromatic instrument): a prism spectrometer composed of the Hyp/Pan camera, an optical head and the main electronics box. The design is based on a pushbroom type observation concept providing hyperspectral imagery (~ 250 bands) at a spatial resolution of 30 m on a swath of 30 km. The spectral resolution is better than 12 nm in a spectral range of 400-2500 nm (VNIR and SWIR regions). In parallel, Pan (Panchromatic) imagery is provided at a spatial resolution of 5 m; the Pan data is co-registered with the Hyp (Hyperspectral) data to permit testing of image fusion techniques.\n", "The wavelengths are approximate; exact values depend on the particular satellite's instruments:\n\nBULLET::::- Blue, 450–515..520 nm, is used for atmosphere and deep water imaging, and can reach depths up to in clear water.\n\nBULLET::::- Green, 515..520–590..600 nm, is used for imaging vegetation and deep water structures, up to in clear water.\n\nBULLET::::- Red, 600..630–680..690 nm, is used for imaging man-made objects, in water up to deep, soil, and vegetation.\n\nBULLET::::- Near infrared (NIR), 750–900 nm, is used primarily for imaging vegetation.\n\nBULLET::::- Mid-infrared (MIR), 1550–1750 nm, is used for imaging vegetation, soil moisture content, and some forest fires.\n", "Ball Aerospace built WorldView-3. It was launched on August 13, 2014. It has a maximum resolution of . WorldView-3 operates at an altitude of , where it has an average revisit time of less than once per day. Over the course of a day it is able to collect imagery of up to .\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-19139
Why do airships like blimps and zeppelins use helium instead of a big vacuum chamber? Based on what I know, a vacuum is less dense than helium.
Maintaining any decent vacuum at that size would require a structure that could withstand a pressure difference of nearly 1 atm between the inside and outside without changing shape. With our current materials, such a solution would be extremely expensive and/or heavy.
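To make the answer's numbers concrete, here is a back-of-the-envelope sketch. The air and helium densities match the figures quoted in the passages below; the thin-shell stress formula is standard, and steel is just an illustrative material choice:

```python
# Why a vacuum hull gains little over helium while demanding enormous strength.
P = 101_325.0        # Pa, ~1 atm pressure difference on an evacuated hull
rho_air = 1.28       # kg/m^3, air at sea level (the passages' 1.28 g/L)
rho_he = 0.178       # kg/m^3, helium at 1 atm

lift_vacuum = rho_air            # kg of lift per m^3 displaced
lift_helium = rho_air - rho_he   # helium already captures most of it
print(f"helium keeps {lift_helium / lift_vacuum:.0%} of a vacuum's lift")

# Thin spherical shell of radius R and thickness t under external pressure:
#   compressive membrane stress: sigma = P * R / (2 * t)
# The shell floats only if its mass stays below the displaced air's mass,
# which for a sphere works out to t/R < rho_air / (3 * rho_shell).
rho_shell = 7850.0               # kg/m^3, steel (illustrative assumption)
t_over_R = rho_air / (3 * rho_shell)
sigma = P / (2 * t_over_R)
print(f"max t/R for buoyancy: {t_over_R:.1e}")
print(f"required compressive strength: {sigma / 1e6:.0f} MPa")
# ~930 MPa before buckling is even considered (buckling fails far sooner),
# which is why the shell ends up "extremely expensive and/or heavy".
```

The 86% figure matches the passage's claim that switching from vacuum to helium sacrifices only about 14% of the lift.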
[ "where formula_28 formula_29 and formula_30 formula_31 are pressure and density of standard Earth atmosphere at sea level, formula_32 and formula_33 are molar mass (kg/kmol) and temperature (K) of atmosphere at floating area.\n", "Vacuum airships would replace the helium gas with a near-vacuum environment. Having no mass, the density of this body would be near to 0.00 g/l, which would theoretically be able to provide the full lift potential of displaced air, so every liter of vacuum could lift 1.28 g. Using the molar volume, the mass of 1 liter of helium (at 1 atmospheres of pressure) is found to be 0.178 g. If helium is used instead of vacuum, the lifting power of every liter is reduced by 0.178 g, so the effective lift is reduced by 14%. A 1-liter volume of hydrogen has a mass of 0.090 g.\n", "As the airship neared completion a decision had to be made on how best to fill it with helium. Once the two halves were completed they were suspended horizontally from cables attached to the hangar ceiling, and the two halves were joined with a final array of rivets. Since helium mixes freely with air and is hard to separate from it, it was impractical to pump helium directly into the airship until the air was removed. It was decided that the airship would first be filled with carbon dioxide (CO), a heavy gas that mixes less freely with helium and which is easier to separate from helium. Once filled with CO the helium could be pumped in under pressure from valves at the top of the chamber, forcing the CO out through valves located on the bottom, and then recovering any helium that did mix with it. Only a few weeks before this procedure was to begin a bright young engineer noted that once filled with CO the ZMC-2 would be many thousands of pounds heavier than when filled with air. The rest of the airship's assembly had to be postponed for several weeks while additional reinforcing panels and stronger connectors were attached in order to support the increased weight of the CO filled airship.\n", "The most commonly used lifting gas, helium, is inert and therefore presents no fire risk. A series of vulnerability tests were done by the UK Defence Evaluation and Research Agency DERA on a Skyship 600. Since the internal gas pressure was maintained at only 1–2% above the surrounding air pressure, the vehicle proved highly tolerant to physical damage or to attack by small-arms fire or missiles. Several hundred high-velocity bullets were fired through the hull, and even two hours later the vehicle would have been able to return to base. Ordnance passed through the envelope without causing critical helium loss. In all instances of light armament fire evaluated under both test and live conditions, the airship was able to complete its mission and return to base.\n", "Methane (density 0.716 g/L at STP, average molecular mass 16.04 g/mol), the main component of natural gas, is sometimes used as a lift gas when hydrogen and helium are not available. It has the advantage of not leaking through balloon walls as rapidly as the smaller molecules of hydrogen and helium. Many lighter-than-air balloons are made of aluminized plastic that limits such leakage; hydrogen and helium leak rapidly through latex balloons. However, methane is highly flammable and like hydrogen is not appropriate for use in passenger-carrying airships. 
It is also relatively dense and a potent greenhouse gas.\n", "When the decision was taken to utilise helium instead of hydrogen back in 1922, \"Shenandoah\" was fitted with a set of condensers to allow the collection of water vapour from her engine exhausts to be used to create ballast and manage the ship's buoyancy. In most airship designs this would have been accomplished simply by venting gas as fuel was burned, but because helium was so expensive to produce (approximately $55 per 1,000 cubic feet in 1923), and \"Shenandoah\" had required approximately 2.1 million cubic feet to fill its gas cells, the decision was taken to not routinely vent the valuable gas, and instead collect water vapour. The \"Akron\"-class required three times more gas to fill the cells, which made the collection of water more important, in spite of the increased availability of helium through improvements in production, transport and storage. The condensers appeared as black strips on the ship's envelope directly above each propeller.\n", "Helium is the only lifting gas which is both non-flammable and non-toxic, and it has almost as much (about 92%) lifting power as hydrogen. It was not discovered in quantity until early in the twentieth century, and for many years only the USA had enough to use in airships. Almost all gas balloons and airships now use helium.\n\nSection::::Lifting gases.:Low pressure gases.\n", "The main problem with the concept of vacuum airships is that, with a near-vacuum inside the airbag, the exterior atmospheric pressure is not balanced by any internal pressure. This enormous imbalance of forces would cause the airbag to collapse unless it were extremely strong (in an ordinary airship, the force is balanced by helium, making this unnecessary). Thus the difficulty is in constructing an airbag with the additional strength to resist this extreme net force, without weighing the structure down so much that the greater lifting power of the vacuum is negated.\n\nSection::::Material constraints.\n\nSection::::Material constraints.:Compressive strength.\n", "The density of air at standard temperature and pressure is 1.28 g/L, so 1 liter of displaced air has sufficient buoyant force to lift 1.28 g. Airships use a bag to displace a large volume of air; the bag is usually filled with a lightweight gas such as helium or hydrogen. The total lift generated by an airship is equal to the weight of the air it displaces, minus the weight of the materials used in its construction including the gas used to fill the bag.\n", "In early dirigibles, the lifting gas used was hydrogen, due to its high lifting capacity and ready availability. Helium gas has almost the same lifting capacity and is not flammable, unlike hydrogen, but is rare and relatively expensive. Significant amounts were first discovered in the United States and for a while helium was only used for airships in that country. Most airships built since the 1960s have used helium, though some have used hot air.\n", "This method requires the part to be tested to be connected to a helium leak detector. The outer surface of the part to be tested will be located in some kind of a tent in which the helium concentration will be raised to 100% helium.\n\nIf the part is small the vacuum system included in the leak testing instrument will be able to reach low enough pressure to allow for mass spectrometer operation.\n", "Modern airships use dynamic helium volume. At sea-level altitude, helium takes up only a small part of the hull, while the rest is filled with air. 
As the airship ascends, the helium expands as the outside pressure falls, and air is pushed out and released through the downward valve. This allows an airship to reach any altitude with balanced inner and outer pressure, provided it still has enough buoyancy; some civil aerostats can thus reach high altitudes without bursting from excess internal pressure.\n", "Following the Hindenburg disaster, the Zeppelin company resolved to use helium in their future passenger airships. But by this time Europe was well on the path to World War II, and the United States, the only country with substantial helium reserves, refused to sell the necessary gas. Commercial international aviation was limited during the war, so development of new airships was halted. Following the rapid advances in aviation during and after World War II, fixed-wing heavier-than-air aircraft, able to fly much faster than rigid airships, became the favoured method of international air travel.\n\nSection::::Demise.:Modern rigids.\n", "As the first rigid airship to use helium rather than hydrogen, \"Shenandoah\" had a significant edge in safety over previous airships. Helium was relatively scarce at the time, and the \"Shenandoah\" used much of the world's reserves just to fill its volume. \"USS Los Angeles (ZR-3)\"—the next rigid airship to enter Navy service, originally built by \"Luftschiffbau Zeppelin\" in Germany as \"LZ 126\"—was at first filled with the helium from \"Shenandoah\" until more could be procured.\n", "Vacuum airship\n\nA vacuum airship, also known as a vacuum balloon, is a hypothetical airship that is evacuated rather than filled with a lighter-than-air gas such as hydrogen or helium. First proposed by Italian Jesuit priest Francesco Lana de Terzi in 1670, the vacuum balloon would be the ultimate expression of lifting power per volume displaced.\n\nSection::::History.\n", "In 1921, Lavanda Armstrong disclosed a composite wall structure with a vacuum chamber \"surrounded by a second envelop constructed so as to hold air under pressure, the walls of the envelope being spaced from one another and tied together\", including a honeycomb-like cellular structure, though leaving some uncertainty as to how to achieve adequate buoyancy, given that \"walls may be made as thick and strong as desired\".\n\nIn 1983, David Noel discussed the use of a geodesic sphere covered with plastic film and \"a double balloon containing pressurized air between the skins, and a vacuum in the centre\".\n", "Since the \"Hindenburg\" disaster in 1937, helium has replaced hydrogen as a lifting gas in blimps and balloons due to its lightness and incombustibility, despite an 8.6% decrease in buoyancy.\n", "The \"USS Shenandoah (ZR-1)\" (1923–25) was the first airship with ballast water recovered from the condensation of exhaust gas. Prominent vertical slots in the airship's hull acted as exhaust condensers. A similar system was used on her sister ship, \"USS Akron (ZRS-4)\". The German-made \"USS Los Angeles (ZR-3)\" was also fitted with exhaust gas coolers to prevent jettisoning of the costly helium.\n\nSection::::Buoyancy compensation.:Lifting gas temperature.\n", "At night, the gas in a zero-pressure balloon cools and contracts, causing the balloon to sink. A zero-pressure balloon can only maintain altitude by releasing gas when it goes too high, where the expanding gas can threaten to rupture the envelope, or releasing ballast when it sinks too low. 
Loss of gas and ballast limits the endurance of zero-pressure balloons to a few days.\n", "For example, a trimix containing 20% oxygen, 40% helium, 40% nitrogen (trimix 20/40) being used at has an END of .\n", "Although not currently practical, it may be possible to construct a rigid, lighter-than-air structure which, rather than being inflated with air, is at a vacuum relative to the surrounding air. This would allow the object to float above the ground without any heat or special lifting gas, but the structural challenges of building a rigid vacuum chamber lighter than air are quite significant. Even so, it may be possible to improve the performance of more conventional aerostats by trading gas weight for structural weight, combining the lifting properties of the gas with vacuum and possibly heat for enhanced lift.\n", "Spherical vacuum body airships using the Magnus effect and made of carbyne or similar superhard carbon are glimpsed in Neal Stephenson's novel \"The Diamond Age\".\n\nIn \"Maelstrom\" and \"Behemoth:B-Max\", author Peter Watts describes various flying devices, such as \"botflies\" and \"lifters\" that use \"vacuum bladders\" to keep them airborne.\n", "Light gas balloons are predominant in scientific applications, as they are capable of reaching much higher altitudes for much longer periods of time. They are generally filled with helium. Although hydrogen has more lifting power, it is explosive in an atmosphere rich in oxygen. With a few exceptions, scientific balloon missions are unmanned.\n", "The gases liberated from the materials not only lower the vacuum quality, but also can be reabsorbed on other surfaces, creating deposits and contaminating the chamber.\n\nYet another problem is diffusion of gases through the materials themselves. Atmospheric helium can diffuse even through Pyrex glass, albeit slowly; this, however, is usually not an issue. Some materials might also expand or increase in size, causing problems in delicate equipment.\n", "The ZMC-2 was successful both in performance and longevity. Its manufacture required the development of a riveting machine and final-assembly techniques comparable to those used on modern rockets and transport-aircraft fuselages, yet capable of handling aluminum skin thin enough to allow aerostatic lift. The final assembly of the single closing seam of the two hull-halves took over two months. Filling the rigid shell was similarly problematic, requiring an expensive and time-consuming process of filling it first with carbon dioxide, then with helium, and finally purifying the helium by scrubbing the residual carbon dioxide out of it. In addition, the hull had to be strengthened to sustain the weight of the carbon dioxide during the filling process.\n" ]
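The lift arithmetic quoted in these passages (1.28 g of buoyant force per liter of displaced air, minus the weight of whatever gas fills the envelope) is easy to check. Below is a minimal Python sketch using only the densities given above; the gas list and variable names are illustrative, not from any source.

```python
# Net lift per liter = density of displaced air minus density of the lifting gas.
# All densities are in g/L at standard temperature and pressure, as quoted above.
AIR_DENSITY = 1.28

lifting_gases = {"vacuum": 0.000, "hydrogen": 0.090, "helium": 0.178, "methane": 0.716}

for gas, density in lifting_gases.items():
    net_lift = AIR_DENSITY - density          # grams lifted per liter displaced
    penalty = 100 * density / AIR_DENSITY     # lift given up relative to a pure vacuum
    print(f"{gas:8s}: {net_lift:.3f} g/L net lift ({penalty:4.1f}% less than vacuum)")
```

For helium this gives 1.102 g/L, about 14% below the vacuum figure, matching the reduction quoted in the passage.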
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-11257
Why do we get cold in a bath if the water gets slightly less warm, but not if we're swimming in a cold pool (after initial adjustment)?
In a bath you are stationary: your muscles aren’t working, so they generate little heat. When you swim, your muscles are engaged and warmed up, and the extra heat they produce makes a drop in water temperature far less noticeable than it is in a bath, where you are hardly moving.
[ "Blood flow to the muscles is lower in cold water, but exercise keeps the muscle warm and flow elevated even when the skin is chilled. Blood flow to fat normally increases during exercise, but this is inhibited by immersion in cold water. Adaptation to cold reduces the extreme vasoconstriction which usually occurs with cold water immersion.\n", "BULLET::::- In lakes exposed to geothermal activity, the temperature of the deeper water may be warmer than the surface water. This will usually lead to convection currents.\n\nBULLET::::- Water at near-freezing temperatures is less dense than slightly warmer water - maximum density of water is at about 4°C - so when near freezing, water may be slightly warmer at depth than at the surface.\n", "Heat transfers very well into water, and body heat is therefore lost extremely quickly in water compared to air, even in merely 'cool' swimming waters around 70F (~20C). A water temperature of can lead to death in as little as one hour, and water temperatures hovering at freezing can lead to death in as little as 15 minutes. This is because cold water can have other lethal effects on the body, so hypothermia is not usually a reason for drowning or the clinical cause of death for those who drown in cold water.\n", "Blood circulation transports heat throughout the body, and adjustments to this flow are an important part of thermoregulation. Increasing blood flow to the surface (e.g., during warm weather or strenuous exercise) causes warmer skin, resulting in faster heat loss. In contrast, when the external temperature is low, blood flow to the extremities and surface of the skin is reduced and to prevent heat loss and is circulated to the important organs of the body, preferentially.\n\nSection::::Physiology.:Rate of blood flow.\n", "If the hot fluid had a much larger heat capacity rate, then when hot and cold fluids went through a heat exchanger, the hot fluid would have a very small change in temperature while the cold fluid would heat up a significant amount. If the cool fluid has a much lower heat capacity rate, that is desirable. If they were equal, they would both change more or less temperature equally, assuming equal mass-flow per unit time through a heat exchanger. In practice, a cooling fluid which has both a higher specific heat capacity and a lower heat capacity rate is desirable, accounting for the pervasiveness of water cooling solutions in technology—the polar nature of the water molecule creates some distinct sub-atomic behaviors favorable in practice.\n", "BULLET::::- Gay-Lussac's second law – as temperature increases the pressure in a diving cylinder increases (originally described by Guillaume Amontons). This is why a diver who enters cold water with a warm diving cylinder, for instance after a recent quick fill, finds the gas pressure of the cylinder drops by an unexpectedly large amount during the early part of the dive as the gas in the cylinder cools.\n", "Some athletes use a technique known as contrast water therapy or contrast bath therapy, in which cold water and warmer water are alternated. One method of doing this was to have two tubs––one cold (10–15 degrees Celsius) and another hot (37–40 degrees Celsius) ––and to do one minute in the cold tub followed by two minutes in a hot tub, and to repeat this procedure three times.\n\nSection::::Techniques.:Temperature and timing.\n", "Taking advantage of the cooling properties of water may help attenuate the consequences of heat sensitivity. In a study done by White et al. 
(2000), exercise pre-cooling via lower-body immersion in water of 16–17 °C for 30 minutes allowed heat-sensitive individuals with MS to exercise in greater comfort and with fewer side effects by minimizing body temperature increases during exercise. Hydrotherapy exercise in moderately cool water of 27–29 °C can also be advantageous to individuals with MS. Temperatures lower than 27 °C are not recommended because of the increased risk of invoking spasticity.\n\nSection::::History.\n", "In the 1980s Monarch Spas developed the dual-zone \"swim spa\" so that pumps and other equipment needed for the pool could also be used to power a separate spa. Today, the advantage of the modern \"dual-zone\" system is that the two pools can be at different temperatures using different chemicals: the hot tub (using bromine) is hot enough for relaxation and massage, while the swim zone (using chlorine) is cool enough for strenuous exercise.\n", "In humans, the diving reflex is not induced when limbs are introduced to cold water. Mild bradycardia is caused by subjects holding their breath without submerging the face in water. When breathing with the face submerged, the diving response increases proportionally to decreasing water temperature. However, the greatest bradycardia effect is induced when the subject is holding his breath with his face wetted. Apnea with nostril and facial cooling are triggers of this reflex.\n", "Blood flow to skin and fat are affected by skin and core temperature, and resting muscle perfusion is controlled by the temperature of the muscle itself. During exercise, increased flow to the working muscles is often balanced by reduced flow to other tissues, such as the kidneys, spleen and liver. Blood flow to the muscles is also lower in cold water, but exercise keeps the muscle warm and flow elevated even when the skin is chilled. Blood flow to fat normally increases during exercise, but this is inhibited by immersion in cold water. Adaptation to cold reduces the extreme vasoconstriction which usually occurs with cold water immersion. Variations in perfusion distribution do not necessarily affect respiratory inert gas exchange, though some gas may be locally trapped by changes in perfusion. Rest in a cold environment will reduce inert gas exchange from skin, fat and muscle, whereas exercise will increase gas exchange. Exercise during decompression can reduce decompression time and risk, provided bubbles are not present, but can increase risk if bubbles are present. 
Inert gas exchange is least favourable for the diver who is warm and exercises at depth during the ingassing phase, and rests and is cold during decompression.\n", "(Illustrated by a still lake, where the surface water can be comfortably warm for swimming but the deeper layers can be so cold as to represent a danger to swimmers, the same effect that gives rise to notices in London's city docks warning 'Danger Cold Deep Water'.)\n", "Charging an empty dive cylinder also causes a temperature rise as the gas inside the cylinder is compressed by the inflow of higher pressure gas, though this temperature rise may initially be tempered because compressed gas from a storage bank at room temperature decreases in temperature when it decreases in pressure, so at first the empty cylinder is charged with cold gas, but the temperature of the gas in the cylinder then increases to above ambient as the cylinder fills to the working pressure.\n", "Temperature-controlled warm-water therapy pools are used to perform aquatic bodywork. For example, Watsu requires a warm-water therapy pool that is approximately chest deep (depending on the height of the therapist) and temperature-controlled to about 35 °C (95 °F).\n\nSection::::Facilities, equipment, and supplies.:Dry-water massage tables.\n", "In these ways, winter swimmers can survive both the initial shock and prolonged exposure. Nevertheless, the human organism is not suited to freezing water: the struggle to maintain blood temperature (by swimming or conditioned metabolic response) produces great fatigue after thirty minutes or less.\n\nSection::::Cold shock response in bacteria.\n", "The hunting reaction is one of four possible responses to immersion of the finger in cold water. The other responses observed in the fingers after immersion in cold water are a continuous state of vasoconstriction, slow, steady and continuous rewarming, and a proportional control form in which the blood vessel diameter remains constant after an initial phase of vasoconstriction. However, the vast majority of the vascular responses to immersion of the finger in cold water can be classified as the hunting reaction.\n", "The Russian immigrant professor Louis Sugarman of Little Falls, NY, was the first American to become a famous ice swimmer in the 1890s. He attracted worldwide attention for his daily plunge in the Mohawk River, even when the thermometer hit 23 degrees F below zero, earning him the nickname \"the human polar bear\". \n", "Submerging the face in water cooler than about triggers the diving reflex, common to air-breathing vertebrates, especially marine mammals such as whales and seals. This reflex protects the body by putting it into \"energy saving\" mode to maximize the time it can stay under water. The strength of this reflex is greater in colder water and has three principal effects:\n\nBULLET::::- \"Bradycardia\", a slowing of the heart rate by up to 50% in humans.\n\nBULLET::::- \"Peripheral vasoconstriction\", the restriction of the blood flow to the extremities to increase the blood and oxygen supply to the vital organs, especially the brain.\n", "Blood flow to skin and fat are affected by skin and core temperature, and resting muscle perfusion is controlled by the temperature of the muscle itself. During exercise, increased flow to the working muscles is often balanced by reduced flow to other tissues, such as the kidneys, spleen and liver.\n", "The rise of the influence of Christian Evangelicals caused arrangements for mixed bathing to be reassessed. 
Moral pressures forced some town councils to establish zones for the women and men to bathe separately. A half-hearted attempt was made to suggest to men that torso-suits would be fashionable, but this was resisted by genteel swimmers who believed that torso-suits restricted the contact between the skin and the saltwater.\n", "The result is that the top pipe, which received hot water, now has cold water leaving it at 20 °C, while the bottom pipe, which received cold water, is now emitting hot water at close to 60 °C. In effect, most of the heat was transferred.\n\nSection::::Three current exchange systems.:Countercurrent flow—almost full transfer.:Conditions for higher transfer results.\n\nNearly complete transfer in systems implementing countercurrent exchange is only possible if the two flows are, in some sense, \"equal\".\n", "Within vertebrates, different skeletal muscle activity has correspondingly different thermal dependencies. The rates of muscle twitch contraction and relaxation are thermally dependent (\"Q10\" of 2.0–2.5), whereas maximum contraction, e.g., tetanic contraction, is thermally independent.\n\nMuscles of some ectothermic species, e.g., sharks, show less thermal dependence at lower temperatures than endothermic species.\n\nSection::::See also.\n\nBULLET::::- Arrhenius equation\n\nBULLET::::- Arrhenius plot\n\nBULLET::::- Isotonic (exercise physiology)\n\nBULLET::::- Isometric exercise\n\nBULLET::::- Skeletal striated muscle\n\nBULLET::::- Tetanic contraction\n\nSection::::References.\n", "Thomas Guidott set up a medical practice in the English town of Bath in 1668. He became interested in the curative properties of the waters. In 1676, he wrote \"A discourse of Bathe, and the hot waters there. Also, Some Enquiries into the Nature of the water\". This brought the health-giving properties of the hot mineral waters to the attention of the aristocracy. Doctors and quacks set up spa towns such as Harrogate, Bath, Matlock and Buxton soon after, taking advantage of mineral water from chalybeate springs.\n\nSection::::History.:18th century.\n\nSection::::History.:18th century.:England.\n", "A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than \"thermal contraction\". For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather. Also, fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvins.\n", "Scalding is a serious concern with any water heater. Human skin burns quickly at high temperature, in less than 5 seconds at , but much slower at — it takes a full minute for a second-degree burn. Older people and children often receive serious scalds due to disabilities or slow reaction times. In the United States and elsewhere it is common practice to put a tempering valve on the outlet of the water heater. The result of mixing hot and cold water via a tempering valve is referred to as \"tempered water\".\n" ]
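The "almost full transfer" example in these passages (hot water entering at 60 °C and leaving at 20 °C while the cold stream leaves at close to 60 °C) follows from the standard effectiveness-NTU relation for countercurrent exchangers. Here is a small Python sketch; the 60 °C/20 °C inlet temperatures are from the passage, while the NTU values are illustrative assumptions.

```python
from math import exp

def counterflow_effectiveness(ntu: float, c_ratio: float) -> float:
    """Fraction of the maximum possible heat transfer actually achieved."""
    if abs(c_ratio - 1.0) < 1e-9:       # balanced ("equal") flows: limiting case
        return ntu / (1.0 + ntu)
    e = exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

t_hot_in, t_cold_in = 60.0, 20.0        # inlet temperatures from the passage
for ntu in (1.0, 5.0, 20.0):            # illustrative exchanger sizes
    eff = counterflow_effectiveness(ntu, 1.0)
    span = t_hot_in - t_cold_in
    print(f"NTU={ntu:4.1f}: hot out {t_hot_in - eff * span:.1f} C, "
          f"cold out {t_cold_in + eff * span:.1f} C")
```

As NTU grows, the outlets approach 20 °C and 60 °C, and the near-complete transfer only holds when the two heat capacity rates are "equal", exactly as the passage states.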
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-17881
What happens when predators eat body parts of animals that contain large doses of venom, like the head of a snake or the tail of a scorpion?
Essentially nothing. The venom can sometimes still have an effect, but usually not in the conventional way: to do real harm, it generally has to be injected into the bloodstream rather than swallowed.
[ "The distribution of the venom of the Chinese cobra has been studied in mice using a whole-animal radiographic technique. Results indicate that venom accumulates primarily in the kidney (marked localization in the cortex) with little or no activity in the brain of mice sacrificed one to two minutes after intravenous injection of massive dose levels of venom. Using I-labelled cobra venom (\"Naja atra\"), 1 μg/g mice, its isolated I-neurotoxin (0.2 μg/g) or cardiotoxin (4 μg/g), it has been found that, after subcutaneous injection into the thigh, the neurotoxin was more rapidly absorbed than either crude venom or cardiotoxin.\n", "Normally the venom is directly injected into the bloodstream by the snake. In the experiments performed they also used intravenous injection of batroxobin. They used a total dose of 2 BU/kg (in dogs also 0.2 BU/kg) given during a time of 30 minutes, three times a day. In the graph below you can see the plasma concentrations of batroxobin after administration.\n\nSection::::Toxicokinetics.:Distribution.\n", "The effects of the toxic venom present with a predictable course of symptoms until treatment is received. Immediate and severe pain, oozing of blood from the fang punctures, considerable edema, epistaxis, bleeding of the gums, marked hematuria, general petechiae, shock, renal failure, and local necrosis. These effects are attributed to the various haemotoxins and necrotoxins contained in the venom. Many other toxins are present in the venom in small quantities and are not clinically significant due to their extremely low concentrations. \n\nSection::::Danger to humans.:Treatment.\n", "Some venoms are applied externally, especially to sensitive tissues such as the eyes, but most venoms are administered by piercing the skin of the victim. Venom in the saliva of the Gila monster and some other reptiles enters prey through bites of grooved teeth. More commonly animals have specialized organs such as hollow teeth (fangs) and tubular stingers that penetrate the prey's skin, whereupon muscles attached to the attacker's venom reservoir squirt venom deep within the victim's body tissue. Death may occur as a result of bites or stings. The rate of envenoming is described as the likelihood of venom successfully entering a system upon bite or sting.\n", "Brown (1973) gave an average venom yield (dried) of 125 mg, with a range of 80–237 mg, along with values of 4.0, 2.2, 2.7, 3.5, 2.0 mg/kg IV, 4.8, 5.1, 4.0, 5.5, 3.8, 6.8 mg/kg IP and 25.8 mg/kg SC for toxicity. Wolff and Githens (1939) described a specimen that yielded 3.5 ml of venom during the first extraction and 4.0 ml five weeks later (1.094 grams of dried venom).\n", "Section::::Background.\n", "BULLET::::- Mark O'Shea: the television snake expert was reported killed by a 14-year-old King cobra which struck his foot at West Midland Safari Park, UK on August 19, 2012. The cobra, was being fed thawed rats when it stuck O'Shea's shoe, venom soaking into his sock and entered his system via abrasions on his foot, rather than through fang punctures. The symptoms were relatively mild but he was hospitalised as a precaution due to the high yield and toxicity of king cobra venom. 
He was discharged the following day.\n", "Symptoms of the \"Arthropleura\" poisoning include uncontrolled shaking, anaphylaxis and short-term memory loss in recovered patients.\n\nOnce a victim is bitten, the venom begins to slowly attack the central nervous system; it is not so far removed from modern biochemistry as to be totally ineffective, and any enzyme inhibitor would be detrimental to an extent. However, as the \"Arthropleura\" are detritus eaters, they make no attempt to eat their victims. Fortunately, the hospital staff discovered that the venom has a modern-day equivalent, allowing an anti-venom to be produced.\n", "Section::::History.\n", "Because calciseptine is a peptide, theoretically it can be broken down by proteases in the tissues where it is injected. It has been found that digestion of snake toxic peptides by proteases does occur in the prey tissues, but due to the relative stability of the toxins, the speed with which the toxins act and the amount of venom injected, this is not enough to protect against the consequences of a snake bite. The same goes for the immune system: the larger venom peptides are unlikely to be missed by the immune system, but immunological action is not fast enough to counter the effects of the venom.\n", "BULLET::::- Cardiotoxins / Cytotoxins\n\nBULLET::::- Hemotoxins\n\nSection::::Determining venom toxicity (LD50).\n", "Unlike most vipers, members of this genus will strike and then hold on and chew. In one case, a machete was used to pry off the jaws. March (1929) wrote that \"A. mexicanus\" (\"A. nummifer\") will hang on and make half a dozen punctures unless quickly and forcibly removed. However, the effects of the venom include only transient pain and mild swelling. In one part of Honduras, the locals even insist that the snake (\"A. nummifer\") is not venomous. Laboratory studies suggest that \"Atropoides\" venoms are unlikely to lead to consumption coagulopathy and incoagulable blood in humans. However, other research revealed that of ten different Costa Rican pit viper venoms tested on mice, that of \"A. picadoi\" was the most hemorrhagic.\n", "Venoms in medicine\n\nVenom in medicine is the medicinal use of venoms for therapeutic benefit in treating diseases.\n", "In case of intravenous injection, the LD50 tested in mice is 0.373 mg/kg, and 0.225 mg/kg in case of intraperitoneal injection. The average venom yield per bite is approximately 263 mg (dry weight).\n", "Section::::Venom.:\"Deinagkistrodon acutus\" venom.\n\nOne species formerly of \"Agkistrodon\", \"Deinagkistrodon acutus\", or the “100 pacer pit viper”, is commonly used for research purposes. Researchers have found that this venom contains protease activity, meaning it attacks and degrades intra- and extracellular proteins. If injected into mice, within 2 hours the venom begins a process known as mesangiolysis (the degeneration and death of cells that line the inner layer of the glomerulus and regulate glomerular filtration in the kidney). Eventually, the kidneys no longer function and the mouse dies.\n\nSection::::Venom.:Venom and cancer treatment.\n", "Antivenom for the treatment of deathstalker envenomations is produced by pharmaceutical companies Twyford (German) and Sanofi Pasteur (French), and by the Antivenom and Vaccine Production Center in Riyadh. 
Envenomation by the deathstalker is considered a medical emergency even with antivenom treatment, as its venom is unusually resistant to treatment and typically requires large doses of antivenom.\n", "The venom affects the nervous system, stopping the nerve signals from being transmitted to the muscles and at later stages stopping those transmitted to the heart and lungs as well, causing death due to complete respiratory failure. Envenomation causes local pain, severe swelling, bruising, blistering, necrosis and variable non-specific effects which may include headache, nausea, vomiting, abdominal pain, diarrhea, dizziness, collapse or convulsions, along with possible moderate to severe flaccid paralysis. Unlike some other African cobras (for example the red spitting cobra), this species does not spit venom.\n\nSection::::Other cultures.\n\nSection::::Other cultures.:In Ancient Egyptian culture and history.\n", "Bioavailability measurements have been conducted for several snake venoms. For example, cobra venom has been found to have a bioavailability of 41.7% when injected intramuscularly, and for other venoms this may even be less than 10%. These values are quite low compared to those of most therapeutic drugs, which usually have a bioavailability of nearly 100% after intramuscular injection.\n\nIn general, toxic peptides of 10–40 amino acids have been found to have a relatively poor bioavailability due to their size and hydrophilicity. Thus, calciseptine, containing 60 amino acids, is expected to have a low bioavailability as well.\n\nSection::::Toxicokinetics.:Metabolism.\n", "In India, the serum prepared with the venom of the monocled cobra \"Naja kaouthia\" has been found to be without effect on the venom of two species of kraits (\"Bungarus\"), Russell's viper (\"Daboia russelli\"), saw-scaled viper (\"Echis carinatus\"), and Pope's pit viper (\"Trimeresurus popeorum\"). Russell's viper serum is without effect on colubrine venoms, or those of \"Echis\" and \"Trimeresurus\".\n\nIn Brazil, serum prepared with the venom of lanceheads (\"Bothrops\" spp.) is without action on rattlesnake (\"Crotalus\" spp.) venom.\n", "In the absence of dedicated research, recommended treatment of bites is, as for all true cobras, with the appropriate antivenom (SAVP polyvalent from South African Vaccine Producers). The dosage may need to be higher than for the average \"N. nigricollis\" bite. First aid treatment for venom in the eyes is immediate irrigation with water or any bland liquid - failure to do so may result in permanent blindness. Whether bitten or spat at, the patient should be seen as soon as possible by a physician. No available data suggest this species' toxins differ clinically from those of other spitting cobras, except perhaps by the effects of greater dosages, on average. Spitting cobra venom has rather low systemic toxicity, meaning that, with appropriate treatment, survival of bitten persons is very likely. A strong necrotizing effect (it kills tissue around the wound) means survivors may be disfigured. If a jet of venom gets into the eyes and is not treated immediately, blindness (due to destruction of the cornea) is likely; even in patients treated with antivenom, amputation may become necessary if a full dose of a large spitting cobra's venom is received.\n", "The Indian cobra's venom mainly contains a powerful post-synaptic neurotoxin and cardiotoxin. The venom acts on the synaptic gaps of the nerves, thereby paralyzing muscles, and in severe bites leading to respiratory failure or cardiac arrest. 
The venom components include enzymes such as hyaluronidase that cause lysis and increase the spread of the venom. Envenomation symptoms may manifest between 15 minutes and 2 hours following the bite.\n", "The venom of \"L. quinquestriatus\" is among the most potent scorpion toxins. It severely affects the cardiac and pulmonary systems. Human fatalities, often children, have been confirmed by clinical reports. The median lethal dose of venom (LD50) for this species was measured at 0.16–0.50 mg/kg.\n\nThe toxicity of the other species is also potentially high, even life-threatening, but reliable data are currently not available.\n\nSection::::Habitat.\n", "Some reports suggest that this species produces a large amount of venom that is weak compared to some other vipers. Others, however, suggest that such conclusions are not accurate. These animals are badly affected by stress and rarely live long in captivity. This makes it difficult to obtain venom in useful quantities and good condition for study purposes. For example, Bolaños (1972) observed that venom yield from his specimens fell from 233 mg to 64 mg while they remained in his care. As the stress of being milked regularly has this effect on venom yield, it is reasoned that it may also affect venom toxicity. This may explain the disparity described by Hardy and Haad (1998) between the low laboratory toxicity of the venom and the high mortality rate of bite victims.\n", "The venom mainly affects the cardiovascular and pulmonary systems, eventually leading to pulmonary oedema, which may cause death. Scorpion antivenom has little effect in clinical treatment, but application of prazosin reduces the mortality rate to less than 4%.\n", "In case of a bite from the black mamba, the victim should be treated according to a standard protocol. The most important part of this treatment is the intravenous injection of a polyvalent antivenom. South African Vaccine Producers produces this antivenom. Polyvalent means that it can be used for different snakebites: vipers, mambas and cobras. Large quantities of the antivenom must be injected to counter the effects of the venom.\n" ]
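The LD50 figures scattered through these passages are quoted per kilogram of body mass, so the absolute lethal dose scales with the animal. Below is a toy Python sketch of that conversion, using the 0.373 mg/kg intravenous LD50 and 263 mg average venom yield quoted above; the body masses are illustrative assumptions.

```python
# LD50 is a dose per kilogram of body mass; the absolute dose scales with the animal.
LD50_IV = 0.373       # mg of venom per kg body mass (intravenous, mice), from the passage
AVG_YIELD_MG = 263.0  # mg of dry venom in an average bite, from the passage

for animal, mass_kg in [("mouse", 0.020), ("rat", 0.300)]:  # assumed body masses
    dose_mg = LD50_IV * mass_kg
    print(f"{animal}: median lethal dose {dose_mg:.4f} mg "
          f"(an average bite carries ~{AVG_YIELD_MG / dose_mg:,.0f} such doses)")
```

This is why mg/kg numbers alone say little without also knowing the mass of the envenomated animal.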
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-14135
If a car’s engine is flooded (filled with water), can it be fixed?
It's possible; I've seen hydrolocked cars get fixed, but it's a lot of work. You're going to have to tear down the engine to see what the internal damage is.
[ "Some manufacturers offer parts for replacement or customization, whether compatible only with their own hydration systems, or usable also with others'.\n\nSection::::Hardware.:Plumbing.\n\nSection::::Hardware.:Plumbing.:Shut-off valves.\n\nEspecially while a hydration system is being carried in a vehicle, there is some danger of the bite valve being squeezed, opening it to leakage or a steady flow; this can be guarded against with an additional valve, usually installed between the bite valve and the hose, that stays open or closed according to the position of a lever.\n\nSection::::Hardware.:Plumbing.:Elbows.\n", "Liquids inside an internal combustion engine are extremely detrimental because of the incompressibility of liquids. Although not the most common cause, a severely flooded engine could result in a hydrolock. A hydrolock occurs when a liquid fills a combustion chamber to the point that it is impossible to turn the crankshaft without a catastrophic failure of the engine or one of its vital components.\n", "The impeller of the seawater pump can suffer from wear and tear, especially when run dry for some period of time, in which case it has to be replaced to avoid loss in the flow of cooling seawater, a potential source of engine overheating.\n", "If an engine hydrolocks while at speed, a mechanical failure is likely. Common damage modes include bent or broken connecting rods, a fractured crank, a fractured head, a fractured block, crankcase damage, damaged bearings, or any combination of these. Forces absorbed by other interconnected components may cause additional damage. Physical damage to metal parts can manifest as a \"crashing\" or \"screeching\" sound and usually requires replacement of the engine or a substantial rebuild of its major components.\n", "The permanent solution to prevent water from flooding the interior is to simply to remove the reed valve in the cowling drain so the water can flow straight and through the drain.\n\nSection::::Performance.\n", "With other problems, the driver may be able to operate the vehicle seemingly normally for some time, but the vehicle will need an eventual repair. These include grinding brakes, rough idle (often caused by the need for a tune-up), or poor shock absorption. Many vehicle owners with personal economic difficulty or a busy schedule may wait longer than they should get necessary repairs made to their vehicles, thereby increasing damage or else causing more danger.\n\nSection::::See also.\n\nBULLET::::- Emergency road service\n\nBULLET::::- Vehicle recovery\n\nBULLET::::- Tow truck\n\nBULLET::::- Automobile repair shop\n\nBULLET::::- Battery\n\nBULLET::::- Car insurance\n\nBULLET::::- Car warranty\n\nSection::::References.\n", "Amounts of water significant enough to cause hydrolock tend to upset the air/fuel mixture in gasoline engines. If water is introduced slowly enough, this effect can cut power and speed in an engine to a point that when hydrolock actually occurs it does not cause catastrophic engine damage.\n\nSection::::Causes and special cases.\n\nSection::::Causes and special cases.:Automotive.\n", "As well as an inability to work in periods of drought, the amount of water available could also vary the power of machinery powered by it. The amount and type of work to be carried out by heavy industries could be influenced by the seasonal availability of water. 
In 1785 Kirkstall Forge near Leeds wrote to a customer, \" 'It will be convenient for us just now to roll a few tons because we have a full supply of water—and we cannot manufacture thin plate so well when our water is short.' \"\n", "Another issue is the governor valve sticking, which can be caused by contamination, i.e. clutch plates or other parts disintegrating. The fine debris finds its way past the filter and tends to accumulate in the governor, causing it to stick. A temporary solution is to remove and clean the governor. The problem will often recur as debris from damaged parts continues to build up in the governor. If the problem continues after cleaning the governor, then it may be necessary to replace the autobox.\n", "Given the constant contact with water, corrosion eventually happens anyway. If corrosion of the tank creates holes in it, there are some temporary fixes to try to patch it, but the long-term solution is to replace the tank altogether.\n", "Small boats with outboard engines and PWCs tend to ingest water simply because they run in and around it. During a rollover, or when a wave washes over the craft, its engine can hydrolock, though severe damage is rare due to the special air intakes and low rotating inertia of small marine engines. Inboard marine engines have a different vulnerability, as these often have their cooling water mixed with the exhaust gases in the header to quiet the engine. Rusted-out exhaust headers or lengthy periods of turning the starter can cause water to build up in the exhaust line to the point that it back-flows through the exhaust manifold and fills the cylinders.\n", "If an internal combustion engine hydrolocks while idling or under low power conditions, the engine may stop suddenly with no immediate damage. In this case the engine can often be purged by unscrewing the spark plugs or injectors and turning the engine over to expel the liquid from the combustion chambers, after which a restart may be attempted. Depending on how the liquid was introduced to the engine, it possibly can be restarted and dried out with normal combustion heat, or it may require more work, such as flushing out contaminated operating fluids and replacing damaged gaskets.\n", "Most modern consumer vehicle engines are pre-programmed with specific fuel-to-air ratios, so introducing water without re-programming the car's computer or otherwise changing these ratios will most likely provide no benefit, and may likely reduce performance or damage the engine. In addition, most modern fuel systems cannot determine that water in any form has been added, and cannot determine a new compression ratio or otherwise take advantage of lower cylinder temperatures. In most cases in pre-programmed cars, introducing water vapor via an indirect water injection method causes loss of power because the water vapor takes the place of air (and fuel in engines with either a carburetor or single point injection) that is required to complete the combustion process and produce power. 
Normally, only vehicles re-tuned for water injection see any benefits.\n", "BULLET::::- How likely is the component to fail? Some components, like the drive shaft in a car, are not likely to fail, so no fault tolerance is needed.\n\nBULLET::::- How expensive is it to make the component fault tolerant? Requiring a redundant car engine, for example, would likely be too expensive both economically and in terms of weight and space, to be considered.\n", "External cleaning may also be required to remove contaminants, corrosion products or old paint or other coatings. Methods which remove the minimum amount of structural material are indicated. Solvents, detergents and bead blasting are generally used. Removal of coatings by the application of heat may render the cylinder unserviceable by affecting the crystalline microstructure structure of the metal. This is a particular hazard for aluminium alloy cylinders, which may not be exposed to temperatures above those stipulated by the manufacturer.\n\nSection::::Safety.\n", "Engine flooding was a common problem with carbureted cars, but newer fuel-injected ones are immune to the problem when operating within normal tolerances. Flooding usually occurs during starting, especially under cold conditions or because the accelerator has been pumped. It can also occur during hot starting; high temperatures may cause fuel in the carburetor float chamber to evaporate into the inlet manifold, causing the air/fuel mixture to exceed the upper explosive limit. High temperature fuel may also result in a vapor lock, which is unrelated to flooding but has a similar symptom.\n", "Tanks or structural tubing such as bench seat supports or amusement park rides can accumulate water and moisture if the structure does not allow for drainage. This humid environment can then lead to internal corrosion of the structure affecting the structural integrity. The same can happen in tropical environments leading to external corrosion.\n\nSection::::Types of corrosion situations.:External corrosion.:Galvanic corrosion.\n\nSee main article Galvanic corrosion\n", "BULLET::::- Cylinder valves must be closed whilst in transit and checked that there are no leaks. Where applicable, protective valve caps and covers should be fitted to cylinders before transporting. Cylinders should not be transported with equipment attached to the valve outlet (regulators, hoses etc.).\n\nBULLET::::- A fire extinguisher is required on the vehicle.\n\nBULLET::::- Gas cylinders may only be transported if they are in-date for periodic inspection and test, except they may be transported when out of date for inspection, testing or disposal.\n", "Piston seals can get damaged, be distorted, or worn. Such damaged seals can cause leakage of hydraulic fluid from the cylinder leading to lower overall pressure or inability to hold pressure. When such events occur, you know that these seals need to be replaced.\n\nSection::::Repair.:Repairing or replacing damaged parts.\n", "In a \"partial breakdown\", the vehicle may still be operable, but its operation may become more limited or more dangerous, or else its continued operation may contribute to further damage to the vehicle. 
Often, when this occurs, it may be possible to drive the vehicle to a garage, thereby avoiding a tow.\n\nSome common causes of a partial breakdown include overheating, brake failure, or frequent stalling.\n\nSection::::Levels of breakdown.:Top 10 causes of car breakdowns in 2014.\n", "If a cylinder passes the listed procedures, but the condition remains doubtful, further tests can be applied to ensure that the cylinder is fit for use. Cylinders that fail the tests or inspection and cannot be fixed should be rendered unserviceable after notifying the owner of the reason for failure.\n", "Swapping the engine may have implications for the car's safety, performance, handling and reliability. The new engine may be lighter or heavier than the existing one, which affects the amount of weight over the nearest axle and the overall weight of the car - this can adversely affect the car's ride, handling and braking ability. Existing brakes, transmission and suspension components may be inadequate to handle the increased weight and/or power of the new engine, with either upgrades being required or premature wear and failure being likely.\n", "Section::::Automotive use.\n\nWaterless coolant is most prominently used in the cooling systems of motorsports vehicles, classic cars, ATVs, UTVs, snowmobiles and older cars. Older cars often have nonpressurized cooling systems, and the coolant can boil and overflow. Traditionally, this issue has been solved by topping off the radiator with water. This dilutes the coolant, and the water can contain minerals harmful to the vehicle. Classic car owners have adopted waterless coolant to solve this problem. Jay Leno uses waterless coolant for his replica 1937 Bugatti Type 57SC Atlantic vehicle.\n\nSection::::Other uses.\n", "If a cylinder fills with liquid while the engine is turned off, the engine will refuse to turn when a starting cycle is attempted. Since the starter mechanism's torque is normally much lower than the engine's operating torque, this will usually not damage the engine but may burn out the starter. The engine can be drained as above and restarted. If a corrosive substance such as water has been in the engine long enough to cause rusting, more extensive repairs will be required.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-05612
Why is urinating not affected by spicy food?
Unlike poo, pee is not the result of a direct line from your stomach to the bladder. When you eat food, it goes through your digestive tract (stomach, small and large intestine). In the intestines, broken-down food molecules and water are absorbed into the bloodstream and then carried around the body to the cells that need them. Once the nutrients have been used up and waste products have been released by the cells, the blood flows to the kidneys, which filter out these waste products and some water. The part of spicy food that tastes spicy, capsaicin, does not last very long in the bloodstream before being broken down into smaller molecules that don't have the same spicy effect.
[ "BULLET::::- In a 1995 episode of the first season of \"Friends\", Chandler pesters Joey, while the latter tries to urinate. Joey begs Chandler to cease the disturbance, claiming that he needs to concentrate in order to urinate. In another episode, Monica is stung on the foot by a jellyfish, and Joey is unable to urinate on it to soothe the pain, so Chandler comes to the rescue.\n", "Cytochrome P450 1A2 is evenly distributed over all the cells of a liver acinus, and is in contrast to other members of the cytochrome P450 family exclusively expressed in the liver. Cytochrome P450 A2 is usually not inducible by clinically frequently used drugs making it ideal even in complex clinical situations (exceptions are oral contraceptives resulting in a strong induction of P450 A2). Nutrition and lifestyle can strongly influence P450 A2 induction e.g. smoking or coffee consumption. \n", "Section::::Goitenyo.\n", "BULLET::::- with urea or guanidine resulted in a compound with much less activity (only 5% of the potency of metiamide)\n\nBULLET::::- however, the NH form (the guanidine analog of metiamide) did not show agonistic effects\n\nBULLET::::- to prevent the guanidine group being protonated at physiological pH, electron-withdrawing groups were added\n\nBULLET::::- adding a nitrile or nitro group prevented the guanidine group from being protonated and did not cause agranulocytosis\n", "Since elevated PGE2 levels are correlated with PDP, urinary PGE2 can be a useful biomarker for this disease. Additionally, HPGD mutation analyses are relatively cheap and simple and may prove to be useful in early investigation in patients with unexplained clubbing or children presenting PDP-like features. Early positive results can prevent expensive and longtime tests at identifying the pathology.\n", "BULLET::::- In 2006, Pai Police purchased a new mobile drug testing vehicle, and there have been numerous reported instances of the police entering bars and other establishments and randomly urine-testing foreign tourists. In many of these cases it is apparent that the searches were not performed legally. In Thailand, \"when requesting urinalysis for drug identification purposes, at least one member of the Narcotics Suppression Police must be present. Regular Thai police do not have this right, nor do the Tourist Police. Second of all, there must be probable cause.\".\n", "The amount of capsaicin in the fruit is highly variable and dependent on genetics and environment, giving almost all types of \"Capsicum\" varied amounts of perceived heat. The most recognizable \"Capsicum\" without capsaicin is the bell pepper, a cultivar of \"Capsicum annuum\", which has a zero rating on the Scoville scale. The lack of capsaicin in bell peppers is due to a recessive gene that eliminates capsaicin and, consequently, the \"hot\" taste usually associated with the rest of the \"Capsicum\" family. There are also other peppers without capsaicin, mostly within the \"Capsicum annuum\" species, such as the cultivars Giant Marconi, Yummy Sweets, Jimmy Nardello, and Italian Frying peppers (also known as the Cubanelle).\n", "The mechanism proposed by Hausinger and Karplus attempts to revise some of the issues apparent in the Blakely and Zerner pathway, and focuses on the positions of the side chains making up the urea-binding pocket. From the crystal structures from K. 
aerogenes urease, it was argued that the general base used in the Blakely mechanism, His, was too far away from the Ni2-bound water to deprotonate it in order to form the attacking hydroxide moiety. In addition, the general acidic ligand required to protonate the urea nitrogen was not identified. Hausinger and Karplus suggest a reverse protonation scheme, where a protonated form of the His ligand plays the role of the general acid and the Ni2-bound water is already in the deprotonated state. The mechanism follows the same path, with the general base omitted (as there is no more need for it) and His donating its proton to form the ammonia molecule, which is then released from the enzyme. While the majority of the His ligands and bound water will not be in their active forms (protonated and deprotonated, respectively), it was calculated that approximately 0.3% of total urease enzyme would be active at any one time. While this would logically imply that the enzyme is not very efficient, contrary to established knowledge, use of the reverse protonation scheme provides an advantage in increased reactivity for the active form, balancing out the disadvantage. Placing the His ligand as an essential component in the mechanism also takes into account the mobile flap region of the enzyme. As this histidine ligand is part of the mobile flap, binding of the urea substrate for catalysis closes this flap over the active site and, together with the hydrogen-bonding pattern to urea from other ligands in the pocket, speaks to the selectivity of the urease enzyme for urea.\n", "The odor of normal human urine can reflect what has been consumed or specific diseases. For example, an individual with diabetes mellitus may present a sweet urine odor. This can be due to kidney diseases as well, such as kidney stones.\n\nEating asparagus can cause a strong odor reminiscent of the vegetable, caused by the body's breakdown of asparagusic acid. Likewise, consumption of saffron, alcohol, coffee, tuna fish, and onion can result in telltale scents. Particularly spicy foods can have a similar effect, as their compounds pass through the kidneys without being fully broken down before exiting the body.\n\nSection::::Characteristics.:Turbidity.\n", "Early research showed capsaicin to evoke a long-onset current in comparison to other chemical agonists, suggesting the involvement of a significant rate-limiting factor. Subsequent to this, the TRPV1 ion channel has been shown to be a member of the superfamily of TRP ion channels, and as such is now referred to as TRPV1. There are a number of different TRP ion channels that have been shown to be sensitive to different ranges of temperature and probably are responsible for our range of temperature sensation. Thus, capsaicin does not actually cause a chemical burn, or indeed any direct tissue damage at all, when chili peppers are the source of exposure. The inflammation resulting from exposure to capsaicin is believed to be the result of the body's reaction to nerve excitement. For example, the mode of action of capsaicin in inducing bronchoconstriction is thought to involve stimulation of C fibers culminating in the release of neuropeptides. In essence, the body inflames tissues as if it has undergone a burn or abrasion, and the resulting inflammation can cause tissue damage in cases of extreme exposure, as is the case for many substances that cause the body to trigger an inflammatory response.\n", "Yellow ají is one of the ingredients of Peruvian cuisine and Bolivian cuisine. 
It is used as a condiment, especially in many dishes and sauces. In Peru the chilis are mostly used fresh, and in Bolivia dried and ground. Common dishes with ají \"amarillo\" are the Peruvian stew \"Ají de gallina\" (\"Hen Chili\"), \"Papa a la Huancaína\" and the Bolivian \"Fricase Paceno\", among others. In Ecuadorian cuisine, Ají amarillo, onion, and lemon juice (amongst others) are served in a separate bowl with many meals as an optional additive.\n", "BULLET::::- \"Urocystis phaceliae\"\n", "BULLET::::- \"Urocystis phalaridis\"\n", "BULLET::::- \"Urocystis phlei\"\n", "BULLET::::- \"Urocystis phlei-alpini\"\n", "BULLET::::- \"Urocystis picbaueri\"\n", "BULLET::::- \"Urocystis poae\"\n", "BULLET::::- \"Urocystis poae-palustris\"\n", "BULLET::::- \"Urocystis polygonati\"\n", "BULLET::::- \"Urocystis preussii\"\n", "BULLET::::- \"Urocystis primulae\"\n", "BULLET::::- \"Urocystis primulicola\"\n", "BULLET::::- \"Urocystis pseudoanemones\"\n", "BULLET::::- \"Urocystis puccinelliae\"\n", "BULLET::::- \"Urocystis pulsatillae\"\n", "BULLET::::- \"Urocystis pulsatillae-albae\"\n", "BULLET::::- \"Urocystis qinghaiensis\"\n", "BULLET::::- \"Urocystis radicicola\"\n", "BULLET::::- \"Urocystis ranunculi\"\n", "BULLET::::- \"Urocystis ranunculi-alpestris\"\n", "BULLET::::- \"Urocystis ranunculi-aucheri\"\n", "BULLET::::- \"Urocystis ranunculi-auricomi\"\n", "BULLET::::- \"Urocystis ranunculi-bullati\"\n", "BULLET::::- \"Urocystis ranunculi-lanuginosi\"\n", "BULLET::::- \"Urocystis rechingeri\"\n", "BULLET::::- \"Urocystis reinhardii\"\n", "BULLET::::- \"Urocystis rigida\"\n", "BULLET::::- \"Urocystis rodgersiae\"\n", "BULLET::::- \"Urocystis roivainenii\"\n", "BULLET::::- \"Urocystis rostrariae\"\n", "BULLET::::- \"Urocystis rytzii\"\n", "BULLET::::- \"Urocystis schizocaulon\"\n", "BULLET::::- \"Urocystis scilloides\"\n", "BULLET::::- \"Urocystis secalis-silvestris\"\n", "In rats the UII gene expression is higher than the URP gene expression throughout the entire body. However, when the brains of the rats were tested, only the URP peptide was found, making it the primary endogenous ligand in the brain.\n\nUnlike humans and rats, URP gene expression is found in mice spinal cords.\n\nSection::::Function.\n\nSection::::Function.:Cardiovascular.\n\nWhen URP is injected into rats, a long hypotensive response will be observed. UII is known as a vasoconstrictor, meaning that even though both are agonists for the same receptor, they can produce opposite effects.\n\nSection::::Function.:CNS.\n", "Spicy (song)\n\n\"Spicy\" is a song by French musician Herve Pagez and American producer Diplo, featuring vocals of English singer-songwriter Charli XCX. The song was released on 30 May 2019 by record label Mad Decent. \"Spicy\" heavily interpolates the song \"Wannabe\" by English girl group the Spice Girls, from their debut album \"Spice\" (1996). The Spice Girls are therefore credited as songwriters for the track, alongside Charli XCX, Emmanuel Valere, Joel Jaccoulet, and Matt Rowe. \"Spicy\" was produced by Pagez and Diplo.\n", "A diet which is high in protein from meat and dairy, as well as alcohol consumption, can reduce urine pH, whilst potassium and organic acids, such as from diets high in fruit and vegetables, can increase the pH and make it more alkaline. Some drugs also can increase urine pH, including acetazolamide, potassium citrate, and sodium bicarbonate.\n\nCranberries, popularly thought to decrease the pH of urine, have actually been shown not to acidify urine. 
Drugs that can decrease urine pH include ammonium chloride, chlorothiazide diuretics, and methenamine mandelate.\n\nSection::::Characteristics.:Density.\n", "The web implementation allows embedding in a blog, and can also be run as a form of slide show where each node corresponds to a slide.\n\nBULLET::::- Multitouch – The first multitouch implementation of SpicyNodes was as part of the WikiNodes multitouch Wikipedia browser for the Apple iPad, and launched in April 2011.\n\nSection::::Related, but different implementations.\n", "BULLET::::- The American Urological Association (AUA) guidelines for the treatment of BPH from 2018 stated that \"TUNA is not recommended for the treatment of LUTS/BPH\".\n\nBULLET::::- The European Association of Urology (EAU) has - as of 2019 - removed TUNA from its guidelines.\n\nSection::::History.\n", "Idrabiotaparinux sodium is also administered once-weekly. It has the same pentasaccharidic structure as idraparinux sodium, but with biotin attached, which allows its neutralisation with avidin, an egg-derived protein with low antigenicity.\n", "Regulating diet mainly controls urinary pH, although using medication can also control it. Diets rich in animal proteins tend to produce acidic urine, while diets mainly composed of vegetables tend to produce alkaline urine.\n", "While many web sites are configured to gather referer information and serve different content depending on the referer information obtained, exclusively relying on HTTP referer information for authentication and authorization purposes is not a genuine computer security measure. HTTP referer information is freely alterable and interceptable, and is not a password, though some poorly configured systems treat it as such.\n\nSection::::Application.\n", ", the UK National Health Service did not offer general PSA screening, for similar reasons.\n", "Because of its diet, the Crawford's gray shrew must expel a large amount of nitrogenous waste from its body, which has a potential for a large loss of water when urinating. However, it is able to reduce water loss from urine, as well, by concentrating urea in the urine. The urine is four times more concentrated than that of a human, thus saving a huge amount of water.\n", "BULLET::::- The URL path that is specified when composing an API is no longer required to be unique. Furthermore, the full URL path for the operation, which is formed from the base path of the containing API followed by the operation path, does not have to be unique. However, if it is not unique then an application is required to identify itself with a client ID when calling the operation.\n\nAdd multiple security keys to an application\n", "Pungency is not considered a taste in the technical sense because it is carried to the brain by a different set of nerves. While taste nerves are activated when consuming foods like chili peppers, the sensation commonly interpreted as \"hot\" results from the stimulation of somatosensory fibers in the mouth. Many parts of the body with exposed membranes that lack taste receptors (such as the nasal cavity, genitals, or a wound) produce a similar sensation of heat when exposed to pungent agents.\n", "BULLET::::- Panos Kalidis\n\nBULLET::::- Thanos Petrelis\n\nSection::::Charity.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03534
Why does London have so few skyscrapers or tall buildings in general compared to comparable cities?
Relevant bits from [Wikipedia](URL_0): > Few skyscrapers were built in London before the late 20th century, owing to restrictions on building heights originally imposed by the London Building Act of 1894, which followed the construction of the 14-storey Queen Anne's Mansions. Though restrictions have long since been eased, strict regulations remain to preserve protected views, especially those of St Paul's, the Tower of London and Palace of Westminster, as well as to comply with the requirements of the Civil Aviation Authority. > The Greater London metropolitan area contains the most skyscrapers in the European Union. As of 2018, there are 31 skyscrapers in London that reach a roof height of at least 150 metres (492 ft), with 18 in the Paris Metropolitan Area, 15 in Frankfurt, eleven in Warsaw and five each in Madrid and Milan.
[ "In the dense areas, most of the concentration is via medium- and high-rise buildings. London's skyscrapers, such as 30 St Mary Axe, Tower 42, the Broadgate Tower and One Canada Square, are mostly in the two financial districts, the City of London and Canary Wharf. High-rise development is restricted at certain sites if it would obstruct protected views of St Paul's Cathedral and other historic buildings. Nevertheless, there are a number of tall skyscrapers in central London (see Tall buildings in London), including the 95-storey Shard London Bridge, the tallest building in the European Union.\n", "In Western Europe, there are fewer high-rise buildings because of the historic city centers. In the 1960s, people started demolishing a few old buildings to replace them with modern high buildings. \n\nIn Brussels, the capital of Europe, there are numerous modern high-rise buildings in the Northern Quarter business district. The government of Belgium wants to recreate Washington, D.C. on a small scale. \n\nFrankfurt is currently the best known \"high-rise building city\" of Europe. Big skyscrapers dominate the city.\n\nIn London, you can find them in Canary Wharf. \n", "Building skyscrapers can be difficult for factors other than complexity and cost. For example, in European cities like Paris, the difference between the appearance of old architecture and modern skyscrapers can make it hard to get approval from local authorities to construct new skyscrapers. Building skyscrapers in an old and famous town can drastically alter the image of the city. In cities like London, Edinburgh, Portland, and San Francisco there is a legal requirement called protected view, which limits the height of new buildings within or adjacent to the sightline between the two places involved. This rule also makes it harder to find suitable sites for new tall buildings.\n", "A survey in 2016, by Ipsos Mori, found that many Londoners, particularly those who live in the most affected areas, think the trend towards ever taller, bolder skyscrapers has gone too far. More than 400 buildings of more than 20 floors in 2016 were tentatively proposed by developers in London. Among respondants, six out of ten backed a limit on the height of new skyscrapers, with the same proportion backing restrictions on the number of buildings with more than 50 floors.\n\nSection::::Designated area.\n", "Few skyscrapers were built in London before the late 20th century, owing to restrictions on building heights originally imposed by the London Building Act of 1894, which followed the construction of the 14-storey Queen Anne's Mansions. Though restrictions have long since been eased, strict regulations remain to preserve protected views, especially those of St Paul's, the Tower of London and Palace of Westminster, as well as to comply with the requirements of the Civil Aviation Authority.\n", "List of tallest buildings and structures in London\n\nSince 2010, the tallest structure in London has been The Shard, which was topped out at , making it the tallest habitable building in Europe at the time. \n\nThe Greater London metropolitan area contains the most skyscrapers in the European Union. 
As of 2018, there are 31 skyscrapers in London that reach a roof height of at least , with 18 in the Paris Metropolitan Area, 15 in Frankfurt, eleven in Warsaw and five each in Madrid and Milan.\n", "BULLET::::- 20 Fenchurch Street, London, Greater London, , completed in 2014\n\nBULLET::::- One Churchill Place, London, Greater London, , completed in 2004\n\nBULLET::::- 25 Bank Street, London, Greater London, , completed 2003\n\nBULLET::::- 40 Bank Street, London, Greater London, , completed in 2003\n\nBULLET::::- 10 Upper Bank Street, London, Greater London, , completed in 2003\n\nBULLET::::- Strata SE1, London, Greater London, , completed in 2010\n\nBULLET::::- Pan Peninsula East Tower, London, Greater London, , completed in 2008\n\nBULLET::::- Guy's Tower, London, Greater London, , completed in 1974\n\nBULLET::::- The Landmark East Tower, London, Greater London, , completed in 2010\n", "BULLET::::- 100 Bishopsgate\n\nBULLET::::- 110 Bishopsgate\n\nBULLET::::- City of London#Landmarks\n\nBULLET::::- List of tallest buildings and structures in Great Britain\n\nBULLET::::- List of tallest buildings and structures in London\n\nSection::::External links.\n\nBULLET::::- Official website\n\nBULLET::::- The Tower 42 Bird Study Group, which aims to study the migration of large birds of prey and other bird species over London seen from the roof during the spring (April - June) and autumn (August - November)\n", "As of April 2019, there are more than 50 habitable buildings tall under construction in the UK – 40 in London, 7 in Greater Manchester, 3 in Birmingham, 2 in Liverpool, and 1 in Woking.\n\nSection::::Tallest existing buildings.\n\nThis list includes completed buildings in the UK that stand at least tall. Architectural height is considered, so masts and other elements added after completion of building are not considered.\n\nSection::::Tallest existing buildings.:Tallest buildings under construction.\n", "BULLET::::- Strata tower: Southwark’s sore thumb, Building.co.uk, 9 April 2010.\n\nBULLET::::- Don’t Look Down When Strata Tower Opens With Best London Views, \"Business Week\", 13 April 2010.\n\nBULLET::::- Nestled among ghost homes, Business Day, \"The Sydney Morning Herald\", 16 April 2010.\n\nBULLET::::- The high life in Elephant & Castle, \"The Times\", 19 May 2010.\n\nBULLET::::- Tower to the people in Elephant and Castle, \"Evening Standard\" Homes&Property, 26 May 2010.\n\nBULLET::::- Wind-powered high-rise living?, \"Sydney Morning Herald\", 30 June 2010.\n", "Section::::Tallest completed buildings.\n\nThe tallest completed buildings above , as of the end of 2010, in Croydon are listed below. Buildings that have been demolished are included in the list.\n\nSection::::Tallest structures.\n\nThe two tallest structures, as of the beginning of 2008, in Croydon are listed below. Structures which have been demolished are not included. 
A structure differs from a high-rise by its lack of floors and habitability.\n\nSection::::Tallest under construction, approved, and proposed buildings.\n\nThe tallest under construction, approved, or proposed buildings above or equal to , as of the beginning of 2013, in Croydon are listed below.\n", "Notable recent tall buildings are the 1980s skyscraper Tower 42, the Lloyd's building with services running along the outside of the structure, and the 2004 Swiss Re building, nicknamed the \"Gherkin\", which set a new precedent for recent high-rise developments including Richard Rogers' Leadenhall Building.\n\nLondon's historic mid-rise character has been, in some instances controversially, altered over the last generation with new high-rise 'skyscrapers' erected, reflecting London's predominance as a global financial centre. Renzo Piano's 310m The Shard is the tallest building in the European Union, the fourth-tallest building in Europe and the 96th-tallest building in the world.\n", "The UK had not historically been noted for its abundance of skyscrapers, with the taller structures throughout the country tending to be cathedrals, church spires and industrial chimneys. Despite this, since the late 20th century the number of high-rise apartment buildings and office blocks in many large British cities has grown significantly, most notably in London and Manchester. The three tallest purely residential buildings in the UK are Manchester's Deansgate Square South Tower (201m), London's St George Wharf Tower (181m) and Manchester's Beetham Tower (169m). \n", "This lists buildings that are proposed for construction in London and are planned to rise at least . Once a planning application has been submitted, a decision by the relevant authority may take two or three years.\n\n* Approximate figure.\n\nSection::::Cancelled constructions.\n\nThis lists proposals for the construction of buildings in London that were planned to rise at least , for which planning permission was rejected or which were otherwise withdrawn.\n\nSection::::Demolished buildings.\n\nThis lists all demolished buildings in London that stood at least tall.\n\nSection::::Visions of skyscrapers.\n\n* Estimated height.\n\nSection::::Timeline of tallest buildings and structures.\n", "This lists buildings that are under construction in London and are planned to rise at least . Buildings under construction that have already been topped out are listed above.\n\nSection::::Tallest under construction, approved, and proposed.:Approved.\n\nThis lists buildings that are approved for construction in London and are planned to rise at least .\n\n* Table entries without text indicate that information regarding a building's expected year of completion has not yet been released.\n\n** Approximate figure.\n\nSection::::Tallest under construction, approved, and proposed.:Proposed.\n", "A growing number of tall buildings and skyscrapers are principally used by the financial sector. Almost all are situated on the eastern side around Bishopsgate, Leadenhall Street and Fenchurch Street, in the financial core of the City. In the north there is a smaller cluster comprising the Barbican Estate's three tall residential towers and the commercial CityPoint tower. 
In 2007, the tall Drapers' Gardens building was demolished and replaced by a shorter tower.\n\nThe City's buildings of more than in height are:\n\nBULLET::::- Timeline\n\nThe timeline of the tallest building in the City is as follows:\n\nSection::::Transport.\n\nSection::::Transport.:Rail.\n", "The Shard in Southwark, London, is currently the tallest completed building in both the UK and the European Union; it was topped out at a height of in March 2012, inaugurated in July 2012 and opened to the public in February 2013.\n", "BULLET::::- Heron Tower, London, Greater London, , completed in 2010\n\nBULLET::::- Leadenhall Building, London, Greater London, , completed in 2014\n\nBULLET::::- 8 Canada Square, London, Greater London, , completed in 2002\n\nBULLET::::- 25 Canada Square, London, Greater London, , completed in 2001\n\nBULLET::::- Tower 42, London, Greater London, , completed in 1980\n\nBULLET::::- St George Wharf Tower, London, Greater London, , completed in 2014\n\nBULLET::::- 30 St Mary Axe, London, Greater London, , completed in 2003\n\nBULLET::::- Beetham Tower, Manchester, North West England,\n\nBULLET::::- Broadgate Tower, London, Greater London, , completed in 2008\n", "BULLET::::- 2015 – 20 Fenchurch Street (the 'Walkie Talkie'), City of London, by Rafael Viñoly\n\nBULLET::::- 2014 – Woolwich Central, London, by Sheppard Robson\n\nBULLET::::- 2013 – 465 Caledonian Road, London, by Stephen George and Partners\n\nBULLET::::- 2012 – Cutty Sark Renovation, Greenwich, London, by Grimshaw Architects\n\nBULLET::::- 2011 – MediaCityUK, Salford, by Fairhurst, Chapman Taylor and Wilkinson Eyre\n\nBULLET::::- 2010 – Strata, Elephant and Castle, London, by BFLS\n\nBULLET::::- 2009 – Liverpool Ferry Terminal, Liverpool, by Hamilton Architects\n\nBULLET::::- 2008 – Radisson SAS Waterfront Hotel, Saint Helier, Jersey, by EPR Architects\n", "BULLET::::- Erskine Williamson Building\n\nBULLET::::- Faraday Building\n\nBULLET::::- Fleeming Jenkin Building\n\nBULLET::::- Grant Institute\n\nBULLET::::- Hudson Beare Building\n\nBULLET::::- James Clerk Maxwell Building\n\nBULLET::::- John Muir Building\n\nBULLET::::- John Murray Labs\n\nBULLET::::- Joseph Black Building\n\nBULLET::::- Kenneth Denbigh Building\n\nBULLET::::- King's Buildings Centre\n\nBULLET::::- King's Buildings House\n\nBULLET::::- March Building\n\nBULLET::::- Mary Brück Building\n\nBULLET::::- Michael Swann Building\n\nBULLET::::- Murchison House\n\nBULLET::::- Noreen and Kenneth Murray Library\n\nBULLET::::- Ocean Energy Research Facility\n\nBULLET::::- Peter Wilson Building\n\nBULLET::::- Robertson Engineering & Science Library\n\nBULLET::::- Roger Land Building\n\nBULLET::::- Sanderson Building\n\nBULLET::::- Scottish Microelectronics Centre\n\nBULLET::::- Structures Lab\n\nBULLET::::- Swann Building\n", "Height restrictions have much to do with this list. Until the 1960s, London, the capital of the Empire, had especially strict height maxima to preserve the views of historic structures. Until the late 1920s, Montreal limited all buildings to a maximum of 10 stories, and it still limits buildings to less than the sea-level elevation of Mont Royal. From 1989, Vancouver restricted buildings from blocking the North Shore Mountains, creating a practical upper limit of around 137 meters, until 1997, when seven sites were pre-selected for taller buildings as exceptions to the rule. 
Singapore limits all buildings to below 280 meters because of the proximity of Singapore Changi Airport.\n", "List of tallest buildings by United Kingdom settlement\n\nThis is a list of the tallest buildings by United Kingdom settlement. The article includes all cities and towns with a population over 100,000. This list is based on criteria set out by the Council on Tall Buildings and Urban Habitat which excludes structures such as telecommunication towers and church spires from being labelled as a 'skyscraper or tall building'. The tallest building in the United Kingdom in a settlement with fewer than 100,000 inhabitants is The Triad in Bootle, Merseyside at .\n", "List of tallest buildings in the United Kingdom\n\nAs of November 2018 there are 78 habitable buildings (used for living and working in, as opposed to masts and churches) in the United Kingdom at least tall, 60 of them in London, eight in Greater Manchester, two in Birmingham, two in Leeds, two in Portsmouth and one each in Brighton and Hove, Liverpool, Sheffield and Swansea (the only structure outside England). \n", "In Paris, the counterpart of Canary Wharf is La Défense. \n\nSection::::Modern development.:Europe.:Western Europe.:Great Britain.\n\nTower blocks were first built in the United Kingdom after the Second World War, and were seen as a cheap way to replace 19th-century urban slums and war-damaged buildings. They were originally seen as desirable, but quickly fell out of favour as tower blocks attracted rising crime and social disorder, particularly after the collapse of Ronan Point in 1968.\n", "BULLET::::- List of tallest buildings in Bristol\n\nBULLET::::- List of tallest buildings and structures in Edinburgh\n\nBULLET::::- List of tallest buildings and structures in Cardiff\n\nBULLET::::- List of tallest buildings and structures in Croydon\n\nBULLET::::- List of tallest buildings and structures in Glasgow\n\nBULLET::::- List of tallest buildings in Leeds\n\nBULLET::::- List of tallest buildings and structures in Liverpool\n\nBULLET::::- List of tallest buildings and structures in London\n\nBULLET::::- List of tallest buildings and structures in Manchester\n\nBULLET::::- List of tallest buildings and structures in Newcastle upon Tyne\n\nBULLET::::- List of tallest buildings and structures in Portsmouth\n" ]
[ "London has less skyscrapers than comparable cities." ]
[ "Paris, Frankfurt, Warsaw, and Milan all have less skyscrapers than London." ]
[ "false presupposition" ]
[ "London has less skyscrapers than comparable cities.", "London has less skyscrapers than comparable cities." ]
[ "false presupposition", "normal" ]
[ "Paris, Frankfurt, Warsaw, and Milan all have less skyscrapers than London.", "Paris, Frankfurt, Warsaw, and Milan all have less skyscrapers than London." ]
2018-00330
Why does the body feel sharp pains differently (or less tolerant) in the distal parts of the body like hands and feet?
The body feels pain because every time something touches your skin, it sends a signal to your brain that you are being touched. These signals can tell your brain if it's light touch, vibration or pain. Different parts of your skin send more signals than others. Here's why: imagine a bandaid-sized patch of skin. When you touch it, say, on your back, that patch sends one signal. The same-sized patch of skin on "less tolerant" parts of your body, like your hands and feet, will send a whole bunch of signals. The more signals sent, the more you feel things. In non-ELI5 terms, this is called receptor density. Expanding on that, as humans evolved over time, it was more useful to be able to feel things better with certain body parts (think: hands to feel different foods and make tools with, vs. back, which was mostly just there for structural support), or to sense dangerous things touching those body parts in order to be able to protect them. Distal has less to do with it than you might think - for example, the underwear area (thanks for that word, ELI5) is also very sensitive, and many humans find this useful encouragement to protect it :)
[ "Although thresholds for touch-position perception are relatively easy to measure, those for pain-temperature perception are difficult to define and measure. \"Touch\" is an objective sensation, but \"pain\" is an individualized sensation which varies among different people and is conditioned by memory and emotion. Anatomical differences between the pathways for touch-position perception and pain-temperature sensation help explain why pain, especially chronic pain, is difficult to manage.\n\nSection::::Trigeminal nucleus.\n", "Spontaneous pain or allodynia (pain resulting from a stimulus which would not normally provoke pain, such as a light touch of the skin) is not limited to the territory of a single peripheral nerve and is disproportionate to the inciting event.\n\nBULLET::::1. There is a history of edema, skin blood flow abnormality, or abnormal sweating in the region of the pain since the inciting event.\n\nBULLET::::2. No other conditions can account for the degree of pain and dysfunction.\n", "The IASP criteria for CRPS I diagnosis has shown a sensitivity ranging from 98–100% and a specificity ranging from 36–55%. Per the IASP guidelines, interobserver reliability for CRPS I diagnosis is poor. Two other criteria used for CRPS I diagnosis are Bruehl's criteria and Veldman's criteria, which have moderate to good interobserver reliability. In the absence of clear evidence supporting one set of criteria over the other, clinicians may use IASP, Bruehl’s, or Veldman’s clinical criteria for diagnosis. While the IASP criteria are nonspecific and possibly not as reproducible as Bruehl’s or Veldman’s criteria, they are cited more widely in literature, including treatment trials.\n", "Melzack's recent research at McGill indicates that there are two types of pain, transmitted by two separate sets of pain-signaling pathways in the central nervous system. Sudden, short-term pain, such as the pain of cutting a finger, is transmitted by a group of pathways that Melzack calls the \"lateral\" system, because they pass through the brain stem on one side of its central core. Prolonged pain, on the other hand, such as chronic back pain, is transmitted by the \"medial\" system, whose neurons pass through the central core of the brain stem.\n", "BULLET::::- Pain associated with temporomandibular joint disorder and myofascial pain also often occurs in the same region as pericoronitis. They are easily missed diagnoses in the presence of mild and chronic pericoronitis, and the latter may not be contributing greatly to the individual's pain (see table).\n", "Electromyography (EMG) and Nerve Conduction Studies (NCS) are important ancillary tests in CRPS because they are among the most reliable methods of detecting nerve injury. They can be used as one of the primary methods to distinguish between CRPS I & II, which differ based on whether there is evidence of actual nerve damage. EMG & NCS are also among the best tests for ruling in or out alternative diagnoses. CRPS is a \"diagnosis of exclusion\", which requires that there be no other diagnosis that can explain the patient's symptoms. This is very important to emphasise because otherwise patients can be given a wrong diagnosis of CRPS when they actually have a treatable condition that better accounts for their symptoms. An example is severe Carpal Tunnel Syndrome, which can often present in a very similar way to CRPS. 
Unlike CRPS, Carpal Tunnel Syndrome can often be corrected with surgery in order to alleviate the pain and avoid permanent nerve damage and malformation.\n", "The signs and symptoms of CRPS usually manifest near the injury site. The most common symptoms are extreme pain, including burning, stabbing, grinding, and throbbing. The pain is out of proportion to the severity of the initial injury. Moving or touching the limb is often intolerable. With a diagnosis of either CRPS I or II, patients may develop burning pain and allodynia (pain to non-noxious stimuli). Both syndromes are also characterized by autonomic dysfunction, which presents with localized temperature changes, cyanosis, and/or edema. The patient may also experience localized swelling; extreme sensitivity to non-painful things such as wind, water, noise and vibrations; extreme sensitivity to touch (by themselves, other people, and even their clothing or bedding/blankets); abnormally increased sweating (or absent sweating); changes in skin temperature (alternating between sweaty and cold); changes in skin colouring (from white and mottled to bright red or reddish violet); changes in skin texture (waxy, shiny, thin, tight skin); softening and thinning of bones; joint tenderness or stiffness; changes in nails and hair (delayed or increased growth, brittle nails/hair that easily break); muscle spasms; muscle loss (atrophy); tremors; dystonia; allodynia; hyperalgesia; decreased/restricted ability and painful movement of the affected body part. Drop attacks (falls), near-fainting, and fainting spells are infrequently reported, as are visual problems. The symptoms of CRPS vary in severity and duration. Since CRPS is a systemic problem, potentially any organ can be affected.\n", "The two types differ only in the nature of the inciting event. Type I CRPS develops following an initiating noxious event that may or may not have been traumatic, while type II CRPS develops after a nerve injury.\n", "Further, recent research has found that ketamine, a sedative, is capable of blocking referred pain. The study was conducted on patients suffering from fibromyalgia, a disease characterized by joint and muscle pain and fatigue. These patients were looked at specifically due to their increased sensitivity to nociceptive stimuli. Furthermore, referred pain appears in a different pattern in fibromyalgic patients than in non-fibromyalgic patients. Often this difference manifests in the area in which the referred pain is found (distal vs. proximal) compared to the local pain. The area is also much more exaggerated owing to the increased sensitivity.\n", "BULLET::::1. The presence of continuing pain, allodynia, or hyperalgesia after a nerve injury, not necessarily limited to the distribution of the injured nerve\n\nBULLET::::2. Evidence at some time of edema, changes in skin blood flow, or abnormal sudomotor activity in the region of pain\n\nBULLET::::3. 
The diagnosis is excluded by the existence of any condition that would otherwise account for the degree of pain and dysfunction.\n", "BULLET::::- Pain is typically on one side or the other (unilateral PSIS pain), but the pain can occasionally be bilateral.\n\nBULLET::::- When the pain of SIJ dysfunction is severe (which is infrequent), there can be referred pain into the hip, groin, and occasionally down the leg, but rarely does the pain radiate below the knee.\n\nBULLET::::- Pain can be referred from the SIJ down into the buttock or back of the thigh, and rarely to the foot.\n\nBULLET::::- Low back pain and stiffness, often unilateral, that often increases with prolonged sitting or prolonged walking.\n", "Section::::Role in neuropathic pain.\n\nActivation of nociceptors is not necessary to cause the sensation of pain. Damage or injury to nerve fibers that normally respond to innocuous stimuli like light touch may lower the activation threshold needed to respond; this change causes the organism to feel intense pain from the lightest of touch. Neuropathic pain syndromes are caused by lesions or diseases of the parts of the nervous system that normally signal pain. There are four main classes: \n\nBULLET::::- peripheral focal and multifocal nerve lesions\n\nBULLET::::- traumatic, ischemic or inflammatory\n\nBULLET::::- peripheral generalized polyneuropathies\n", "BULLET::::- Hand elevation test: the hand elevation test is performed by lifting both hands above the head; if symptoms are reproduced in the median nerve distribution within 2 minutes, the test is considered positive. The hand elevation test has higher sensitivity and specificity than Tinel's test, Phalen's test, and the carpal compression test. Chi-square statistical analysis has shown the hand elevation test to be as effective as, if not better than, Tinel's test, Phalen's test, and the carpal compression test.\n", "Persons having the HLA-DR4 type of human leucocyte antigen appear to have a higher risk of PMR.\n\nSection::::Diagnosis.\n\nNo specific test exists to diagnose polymyalgia rheumatica; many other diseases can cause inflammation and pain in muscles, but a few tests can help narrow down the cause of the pain. Limitation in shoulder motion, or swelling of the joints in the wrists or hands, is noted by the doctor. A patient's answers to questions, a general physical exam, and the results of tests can help a doctor determine the cause of pain and stiffness.\n", "BULLET::::1. The presence of an initiating noxious event or a cause of immobilization\n\nBULLET::::2. Continuing pain, allodynia (perception of pain from a nonpainful stimulus), or hyperalgesia (an exaggerated sense of pain) disproportionate to the inciting event\n\nBULLET::::3. Evidence at some time of edema, changes in skin blood flow, or abnormal sudomotor activity in the area of pain\n\nBULLET::::4. The diagnosis is excluded by the existence of any condition that would otherwise account for the degree of pain and dysfunction.\n\nAccording to the IASP, CRPS II (causalgia) is diagnosed as follows:\n", "Previously it was considered that CRPS had three stages; it is now believed that people affected by CRPS do not progress through these stages sequentially. These stages may not be time-constrained and could possibly be event-related, such as ground-level falls or re-injuries of previously damaged areas. 
Thus, rather than a progression of CRPS from bad to worse, it is now thought that such individuals are likely to have one of the following three types of disease progression:\n", "Section::::Pathophysiology.\n", "Diabetic peripheral neuropathy is the most likely diagnosis for someone with diabetes who has pain in a leg or foot, although it may also be caused by vitamin B deficiency or osteoarthritis. A 2010 review in the Journal of the American Medical Association's \"Rational Clinical Examination Series\" evaluated the usefulness of the clinical examination in diagnosing diabetic peripheral neuropathy. While the physician typically assesses the appearance of the feet, presence of ulceration, and ankle reflexes, the most useful physical examination findings for large fiber neuropathy are an abnormally decreased vibration perception to a 128-Hz tuning fork (likelihood ratio (LR) range, 16–35) or pressure sensation with a 5.07 Semmes-Weinstein monofilament (LR range, 11–16). Normal results on vibration testing (LR range, 0.33–0.51) or monofilament (LR range, 0.09–0.54) make large fiber peripheral neuropathy from diabetes less likely. (A worked example of how such LRs shift a diagnosis follows the passage list below.) Combinations of signs do not perform better than these 2 individual findings. Nerve conduction tests may show reduced functioning of the peripheral nerves, but seldom correlate with the severity of diabetic peripheral neuropathy and are not appropriate as routine tests for the condition.\n", "People suffering from sacroiliitis can often experience symptoms in a number of different ways; however, it is commonly related to the amount of pressure that is put onto the sacroiliac joint. Sacroiliitis pain is typically axial, meaning that the location of the condition is also where the pain is occurring. Symptoms commonly include prolonged, inflammatory pain in the lower back region, hips or buttocks.\n\nHowever, in more severe cases, pain can become more radicular and manifest itself in seemingly unrelated areas of the body including the legs, groin and feet.\n\nSymptoms are typically aggravated by:\n", "Section::::Diagnosis.:Thermography.\n", "This is usually correlated with muscle weakness; \"other symptoms include painful cramps, fasciculations (uncontrolled muscle twitching visible under the skin) and muscle shrinking\". Motor symptoms can usually be aided through \"mechanical aids\" such as hand or foot braces, orthopaedic shoes, and splints; in more severe cases, procedures such as tendon transfers or bone fusions can take place. All of these aids and procedures can reduce physical disability, pain, pressured or compressed nerves and weaknesses.\n\nSection::::Signs and symptoms.:Sensory nerve damage.\n\nThis is a broader category as sensory nerves have a broader function range, and therefore there are deviations in symptoms:\n", "The mechanisms leading to reduced bone mineral density (up to overt osteoporosis) are still unknown. Potential explanations include an imbalance of the activities of the sympathetic and parasympathetic autonomic nervous systems and mild secondary hyperparathyroidism. 
However, the trigger of secondary hyperparathyroidism has not yet been identified.\n\nIn summary, the pathophysiology of complex regional pain syndrome has not yet been defined; there is conjecture that CRPS, with its variable manifestations, could be the result of multiple pathophysiological processes.\n\nSection::::Diagnosis.\n\nCRPS types I and II share the common diagnostic criteria shown below.\n", "During physical examination, specifically a neurological examination, those with generalized peripheral neuropathies most commonly have distal sensory or motor and sensory loss, although those with a pathology (problem) of the nerves may be perfectly normal; may show proximal weakness, as in some inflammatory neuropathies, such as Guillain–Barré syndrome; or may show focal sensory disturbance or weakness, such as in mononeuropathies. Classically, the ankle jerk reflex is absent in peripheral neuropathy.\n", "Section::::Research.\n", "Section::::Prognosis.\n" ]
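The diabetic-neuropathy passage above quotes likelihood ratios (LRs) without spelling out how they change a diagnosis: an LR multiplies the pre-test odds of disease to give the post-test odds. A short worked sketch follows; the 30% pre-test probability is an assumed illustrative figure, not a value from any source quoted here.

```python
# Worked example of applying the likelihood ratios (LRs) quoted in the
# diabetic-neuropathy passage. The 30% pre-test probability is a
# hypothetical illustration, not a figure from the source.
def post_test_probability(pre_test_p: float, lr: float) -> float:
    """Convert a pre-test probability into a post-test probability via odds."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Abnormal 128-Hz tuning-fork test, low end of the quoted LR range (16):
print(round(post_test_probability(0.30, 16.0), 2))   # -> 0.87
# Normal vibration testing, low end of the quoted LR range (0.33):
print(round(post_test_probability(0.30, 0.33), 2))   # -> 0.12
```

Even at the bottom of the quoted ranges, an abnormal finding raises a 30% suspicion to roughly 87%, while a normal result drops it to about 12%, which is why the passage singles out these two findings as the most useful.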
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]