id: stringlengths (10 to 10)
question: stringlengths (18 to 294)
comment: stringlengths (28 to 6.89k)
passages: sequence
presuppositions: sequence
corrections: sequence
labels: sequence
raw_presuppositions: sequence
raw_labels: sequence
raw_corrections: sequence
2018-04680
How do CPUs switch between the states of 1 and 0?
1 = on (has power), 0 = off (does not have power). They are literally tiny switches (transistors) that are either closed and conducting (on, 1) or open and not conducting (off, 0).
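The passages retrieved for this question describe how a cross-coupled NAND (SR) latch uses feedback to hold a stored 1 or 0. As a purely illustrative sketch (not part of the original answer; the function names and the fixed settling loop are assumptions made for this example), that behaviour can be modelled in a few lines of Python:

```python
def nand(a: int, b: int) -> int:
    """NAND of two logic levels (0 or 1)."""
    return 0 if (a and b) else 1

def sr_nand_latch_step(s_n: int, r_n: int, q: int, q_bar: int):
    """Settle a cross-coupled NAND latch for the given active-low
    Set (s_n) and Reset (r_n) inputs, starting from outputs q / q_bar.

    Each output feeds back into the other gate, which is what lets the
    circuit 'remember' its state when both inputs sit idle at 1.
    """
    for _ in range(4):  # a few passes are enough for the feedback to settle
        q, q_bar = nand(s_n, q_bar), nand(r_n, q)
    return q, q_bar

q, q_bar = 0, 1                                # start in the reset state
q, q_bar = sr_nand_latch_step(0, 1, q, q_bar)  # pulse Set (active low)   -> Q = 1
q, q_bar = sr_nand_latch_step(1, 1, q, q_bar)  # inputs idle              -> Q still 1 (stored)
q, q_bar = sr_nand_latch_step(1, 0, q, q_bar)  # pulse Reset (active low) -> Q = 0
print(q, q_bar)                                # prints: 0 1
```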
[ "Moore/Mealy machines, are DFAs that have also output at any tick of the clock. Modern CPUs, computers, cell phones, digital clocks and basic electronic devices/machines have some kind of finite state machine to control it.\n\nSimple software systems, particularly ones that can be represented using regular expressions, can be modeled as Finite State Machines. There are many of such simple systems, such as vending machines or basic electronics.\n", "Section::::Unbalanced tenary.\n\nTenary computing implemented in therms of unbalanced tenary, which uses the three digits 0, 1, 2. The original 0 and 1 are explained as an ordinary Binary computer, but instead uses 2 as leakage current.\n", "The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip flops called a \"state register.\" Each time a clock signal ticks, the state register captures the feedback generated from the previous state of the combinational logic, and feeds it back as an unchanging input to the combinational part of the state machine. The fastest rate of the clock is set by the most time-consuming logic calculation in the combinational logic.\n", "Section::::Programming.\n\nA typical PROM comes with all bits reading as \"1\". Burning a fuse bit during programming causes the bit to read as \"0\". The memory can be programmed just once after manufacturing by \"blowing\" the fuses, which is an irreversible process.\n", "A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip flops chained in series with the Q output of each flip flop connected to the D input of the next and the D input of the first flip flop connected to the Q output of the 15th flip flop. The first flip flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip flop which represents the last state. Upon reset of the state machine all of the flip flops are reset to '0' except the first in the chain which is set to '1'. The next clock edge arriving at the flip flops advances the one 'hot' bit to the second flip flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state.\n", "Some digital devices support a form of three-state logic on their outputs only. The three states are \"0\", \"1\", and \"Z\".\n", "The circuit shown below is a basic NAND latch. The inputs are generally designated S and R for Set and Reset respectively. Because the NAND inputs must normally be logic 1 to avoid affecting the latching action, the inputs are considered to be inverted in this circuit (or active low).\n\nThe circuit uses feedback to \"remember\" and retain its logical state even after the controlling input signals have changed. When the S and R inputs are both high, feedback maintains the Q outputs to the previous state.\n\nSection::::Flip-flop types.:Simple set-reset latches.:SR AND-OR latch.\n", "When using static gates as building blocks, the most fundamental latch is the simple \"SR latch\", where S and R stand for \"set\" and \"reset\". It can be constructed from a pair of cross-coupled NOR logic gates. The stored bit is present on the output marked Q.\n", "(i) Operand fetch which register to test for empty?: Analogous to the fetch phase, the finite state machine moves the contents of the register pointed to by the PC, i.e. hole #6, into the Program-Instruction Register PIR #2. 
It then uses the contents of register #2 to point to the register to be tested for zero, i.e. register #18. Hole #18 contains a number \"n\". To do the test, now the state machine uses the contents of the PIR to indirectly copy the contents of register #18 into a spare register, #3. So there are two eventualities (ia), register #18 is empty, (ib) register #18 is not empty.\n", "\"Transport triggered architecture\" (TTA) is a design in which computation is a side effect of data transport. Usually, some memory registers (triggering ports) within common address space perform an assigned operation when the instruction references them. For example, in an OISC using a single memory-to-memory copy instruction, this is done by triggering ports that perform arithmetic and instruction pointer jumps when written to.\n\nSection::::Machine architecture.:Arithmetic-based Turing-complete machines.\n", "Every Moore machine formula_13 is equivalent to the Mealy machine with the same states and transitions and the output function formula_14, which takes each state-input pair formula_15 and yields formula_16, where formula_17 is formula_13's output function.\n", "Since each binary memory element, such as a flip-flop, has only two possible states, \"one\" or \"zero\", and there is a finite number of memory elements, a digital circuit has only a certain finite number of possible states. If N is the number of binary memory elements in the circuit, the maximum number of states a circuit can have is 2.\n\nSection::::Program state.\n\nSimilarly, a computer program stores data in variables, which represent storage locations in the computer's memory. The contents of these memory locations, at any given point in the program's execution, is called the program's \"state\".\n", "So a process switch proceeds something like this – a process requests a resource that is not immediately available, maybe a read of a record of a file from a block which is not currently in memory, or the system timer has triggered an interrupt. The operating system code is entered and run on top of the user stack. It turns off user process timers. The current process is placed in the appropriate queue for the resource being requested, or the ready queue waiting for the processor if this is a preemptive context switch. The operating system determines the first process in the ready queue and invokes the instruction move_stack, which makes the process at the head of the ready queue active.\n", "Serial binary addition is done by a flip-flop and a full adder. The flip-flop takes the carry-out signal on each clock cycle and provides its value as the carry-in signal on the next clock cycle. After all of the bits of the input operands have arrived, all of the bits of the sum have come out of the sum output.\n\nSection::::Serial binary subtracter.\n", "BULLET::::- Mealy machine: The FSM also uses input actions, i.e., output depends on input and state. The use of a Mealy FSM leads often to a reduction of the number of states. The example in figure 7 shows a Mealy FSM implementing the same behaviour as in the Moore example (the behaviour depends on the implemented FSM execution model and will work, e.g., for virtual FSM but not for event-driven FSM). There are two input actions (I:): \"start motor to close the door if command_close arrives\" and \"start motor in the other direction to open the door if command_open arrives\". 
The \"opening\" and \"closing\" intermediate states are not shown.\n", "The state of the FSM transitions from one state to another based on 2 stimuli. The first stimulus is the processor specific Read and Write request. For example: A processor P1 has a Block X in its Cache, and there is a request from the processor to read or write from that block. The second stimulus comes from other processors, which doesn't have the Cache block or the updated data in its Cache. The bus requests are monitored with the help of Snoopers which snoops all the bus transactions.\n", "Section::::Digital logic circuit state.\n\nDigital logic circuits can be divided into two types: combinational logic, whose output signals are dependent only on its present input signals, and sequential logic, whose outputs are a function of both the current inputs and the past history of inputs. In sequential logic, information from past inputs is stored in electronic memory elements, such as flip-flops. The stored contents of these memory elements, at a given point in time, is collectively referred to as the circuit's \"state\" and contains all the information about the past to which the circuit has access.\n", "Another expression is :\n\nformula_4 with formula_5.\n\nSection::::Flip-flop types.:Simple set-reset latches.:NAND latch.\n\nWhen using static gates as building blocks, the most fundamental latch is the simple SR latch, where S and R stand for set and reset. It can be constructed from a pair of cross-coupled NOR or NAND logic gates. The stored bit is present on the output marked Q.\n", "specified in the Mask portion of the BPI instruction. If there is a match, the condition code is set to reflect the interrupt that occurred and the branch is taken. Otherwise, the next instruction is checked to determine if it is a BPI instruction, etc. If there is no BPI transfer made (either because there was no BPI instruction or because the program interrupt type did not match the mask of any BPIs that were present), the normal processing of the program interrupt occurs.\n", "The latch can be set or cleared by the processor in several ways; a particular memory address may be decoded and used to control the latch, or, in processors with separately-decoded I/O addresses, an output address may be decoded. Several bank-switching control bits could be gathered into a register, approximately doubling the available memory spaces with each additional bit in the register.\n", "Commercial design tools simplify and automate memory-mapped register specification and code generation for hardware, firmware, hardware verification, testing and documentation.\n\nRegisters can be read/write, read-only or write-only.\n", "Since only one data line is available, the protocol is serial. The clock input is at the TCK pin. One bit of data is transferred in from TDI, and out to TDO per TCK rising clock edge. Different instructions can be loaded. Instructions for typical ICs might read the chip ID, sample input pins, drive (or float) output pins, manipulate chip functions, or bypass (pipe TDI to TDO to logically shorten chains of multiple chips).\n", "This configuration allows conversion from serial to parallel format. Data input is serial, as described in the SISO section above. Once the data has been clocked in, it may be either read off at each output simultaneously, or it can be shifted out.\n\nIn this configuration, each flip-flop is edge triggered. All flip-flops operate at the given clock frequency. 
Each input bit makes its way down to the Nth output after N clock cycles, leading to parallel output.\n", "the current state of the machine, together with the remaining input. The first configuration must be the initial state of formula_1 and the complete input. A transition from a configuration formula_3 to\n\na configuration formula_4 is allowed if formula_5 for\n\nsome input symbol formula_6 and if formula_1 has a transition from\n\nformula_8 to formula_9 on input formula_6. The final\n\nconfiguration must have the empty string formula_11 as its remaining\n\ninput; whether formula_1 has accepted or rejected the input depends\n\non whether the final state is an accepting state. \n\nSection::::Turing Machines.\n", "Some computer systems, upon receiving a boot signal from a human operator or a peripheral device, may load a very small number of fixed instructions into memory at a specific location, initialize at least one CPU, and then point the CPU to the instructions and start their execution. These instructions typically start an input operation from some peripheral device (which may be switch-selectable by the operator). Other systems may send hardware commands directly to peripheral devices or I/O controllers that cause an extremely simple input operation (such as \"read sector zero of the system device into memory starting at location 1000\") to be carried out, effectively loading a small number of boot loader instructions into memory; a completion signal from the I/O device may then be used to start execution of the instructions by the CPU.\n" ]
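One of the passages above also describes serial binary addition, where a flip-flop stores the carry-out of each clock cycle and presents it as the carry-in for the next cycle. As a rough sketch only (the function name and the least-significant-bit-first convention are assumptions for this example, not from the passage), that loop can be modelled like this:

```python
def serial_add(a_bits, b_bits):
    """Add two equal-length bit streams presented LSB first, one bit per
    'clock cycle', keeping the carry in a one-bit state variable the way
    a carry flip-flop would hold it between cycles."""
    carry = 0                                # state held by the flip-flop
    out = []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)            # full-adder sum bit
        carry = (a & b) | (carry & (a ^ b))  # full-adder carry-out
    out.append(carry)                        # final carry after the last bit
    return out

# 6 (LSB-first 0,1,1) + 3 (LSB-first 1,1,0) = 9 (LSB-first 1,0,0,1)
print(serial_add([0, 1, 1], [1, 1, 0]))      # -> [1, 0, 0, 1]
```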
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02816
Why are Arabs not black? Since Arabia is the most sun-intensive area of Earth, it would make sense to have dark skin for protection.
First, you need to understand how [human skin color]( URL_0 ) evolved. Second, you need to understand where [Arabs]( URL_1 ) came from. Here's the short answer you can deduce from both articles: an estimated 10,000 to 20,000 years is enough for a human population to reach optimal skin pigmentation for a particular geographic area. Modern Arabs descend from peoples who lived in the Levant about 3,000 years ago (and these came from a lighter-skinned background), and being relatively light-skinned, they relied on better clothing rather than waiting for evolution. There were also "Ancient Arabs", tribes that had vanished or been destroyed, such as ʿĀad and Thamud, and we can't know whether they were light- or dark-skinned. I hope my answer is correct, as I simplified it as much as possible; if you read the articles and the "further information" links in them, you'll eventually get to your answer.
[ "Section::::Geographic distribution.:New Guinea.\n\nThe indigenous Papuan people of New Guinea have dark skin pigmentation and have inhabited the island for at least 40,000 years. Due to their similar phenotype and the location of New Guinea being in the migration route taken by Indigenous Australians, it was generally believed that Papuans and Aboriginal Australians shared a common origin. However, a 1999 study failed to find clear indications of a single shared genetic origin between the two populations, suggesting multiple waves of migration into Sahul with distinct ancestries.\n\nSection::::Geographic distribution.:Sub-Saharan Africa.\n", "Wallace Fard Muhammad taught that the original peoples of the world were black and that white people were a race of \"devils\" created by a scientist named Yakub (the Biblical and Qur'anic Jacob) on the Greek island of Patmos. According to the supreme wisdom lessons, Fard taught that whites were devils because of a culture of lies and murder that Yakub instituted on the island to ensure the creation of his new people. Fard taught that Yakub established a secret eugenics policy among the ruling class on the island. They were to kill all dark babies at birth and lie to the parents about the child's fate. Further, they were to ensure that lighter-skinned children thrived in society. This policy encouraged a general preference for light skin. It was necessary to allow the process of grafting or making of a lighter-skinned race of people who would be different. The idea was that if the light-skinned people were allowed to mate freely with the dark-skinned people, the population would remain dark-skinned due to the genetic dominance of the original dark-skinned people. This process took approximately 600 years to produce a blond-haired, blue-eyed group of people. As they migrated into the mainland, they were greeted and welcomed by the indigenous people wherever they went. But according to the supreme wisdom lessons, they started making trouble among the righteous people, telling lies and causing confusion and mischief. This is when the ruling class of the Middle East decided to round up all the troublemakers they could find and march them out, over the hot desert sands, into the caves and hillsides of Europe. Elijah claimed that this history is well-known and preserved, and is ritualized or re-enacted within many fraternal organizations and secret societies. Fard taught that much of the savage ways of white people came from living in the caves and hillsides of Europe for over 2,000 years without divine revelation or knowledge of civilization. The writings of Elijah Muhammad advise a student must learn that the white man is \"Yacub's grafted Devil\" and \"the Skunk of the planet Earth\".\n", "In the 9th century, Al-Jahiz, an Afro-Arab Islamic philosopher, attempted to explain the origins of different human skin colors, particularly black skin, which he believed to be the result of the environment. He cited a stony region of black basalt in the northern Najd as evidence for his theory.\n\nIn the 14th century, the Islamic sociologist Ibn Khaldun, dispelled the Babylonian \"Talmud\"'s account of peoples and their characteristics as a myth. 
He wrote that black skin was due to the hot climate of sub-Saharan Africa and not due to the descendants of Ham being cursed.\n", "Section::::Geographic distribution.:Australia.\n\nThe Aborigines of Australia, as with all humans, are descendants of African migrants, and their ancestors may have been among the first major groups to leave Africa around 50,000 years ago. Despite early migrations, genetic evidence has pointed out that the indigenous peoples of Australia are genetically very dissimilar to the dark-skinned populations of Africa and that they are more closely related to Eurasian populations.\n", "Sub-Saharan Africa is the region in Africa situated south of the Sahara where a large number of dark-skinned populations live. Dark-skinned groups on the continent have the same receptor protein as \"Homo ergaster\" and \"Homo erectus\" had. According to scientific studies, populations in Africa also have the highest skin colour diversity. High levels of skin colour variation exists between different populations in Sub-Saharan Africa. These differences depend in part on general distance from the equator, illustrating the complex interactions of evolutionary forces which have contributed to the geographic distribution of skin color at any point of time.\n", "The Romans interacted with and later conquered parts of Mauretania, an early state that covered modern Morocco, western Algeria, and the Spanish cities Ceuta and Melilla during classical period. The people of the region were noted in classical literature as \"Mauri\", which was subsequently rendered as Moors in English.\n\nNumerous communities of dark-skinned peoples are present in North Africa, some dating from prehistoric communities. Others descend from immigrants via the historical trans-Saharan trade or, after the Arab invasions of North Africa in the 7th century, from slaves from the Arab slave trade in North Africa.\n", "In the 18th century, the Moroccan Sultan Moulay Ismail \"the Warrior King\" (1672–1727) raised a corps of 150,000 black soldiers, called his Black Guard.\n\nAccording to Carlos Moore, resident scholar at Brazil's University of the State of Bahia, in the 21st century Afro-multiracials in the Arab world, including Arabs in North Africa, self-identify in ways that resemble multi-racials in Latin America. He claims that darker toned Arabs, much like darker toned Latin Americans, consider themselves white because they have some distant white ancestry.\n", "Dark-skinned populations inhabiting Africa, Australia, Melanesia, Papua New Guinea and South Asia all live in some of the areas with the highest UV radiation in the world, and have evolved very dark skin pigmentations as protection from the harmful sun rays. Evolution has restricted humans with darker skin in tropical latitudes, especially in non-forested regions, where ultraviolet radiation from the sun is usually the most intense. Different dark-skinned populations are not necessarily closely related genetically. Before the modern mass migration, it has been argued that the majority of dark pigmented people lived within 20° of the equator.\n", "The distribution of indigenous light-skinned populations is highly correlated with the low ultraviolet radiation levels of the regions inhabited by them. Historically, light-skinned indigenous populations almost exclusively lived far from the equator, in high latitude areas with low sunlight intensity; for example, in Northwestern Europe. 
Due to mass migration and increased mobility of people between geographical regions in recent centuries, light-skinned populations today are found all over the world.\n\nSection::::Evolution.\n", "Skin colour seems to vary mostly due to variations in a number of genes of large effect as well as several other genes of small effect (\"TYR\", \"TYRP1\", \"OCA2\", \"SLC45A2\", \"SLC24A5\", \"MC1R\", \"KITLG\" and \"SLC24A4\"). This does not take into account the effects of epistasis, which would probably increase the number of related genes. Variations in the \"SLC24A5\" gene account for 20–25% of the variation between dark and light skinned populations of Africa, and appear to have arisen as recently as within the last 10,000 years. The Ala111Thr or rs1426654 polymorphism in the coding region of the SLC24A5 gene reaches fixation in Europe, and is also common among populations in North Africa, the Horn of Africa, West Asia, Central Asia and South Asia.\n", "Melanism, meaning a mutation that results in completely dark skin, does not exist in humans. Melanin is the primary determinant of the degree of skin pigmentation and protects the body from harmful ultraviolet radiation. The same ultraviolet radiation is essential for the synthesis of vitamin D in skin, so lighter colored skin - less melanin - is an adaptation related to the prehistoric movement of humans away from equatorial regions, as there is less exposure to sunlight at higher latitudes. People from parts of Africa, South Asia, Southeast Asia, and Australia have very dark skin, but this is not melanism.\n", "Ibn Khaldun, the Arab sociologist and polymath, similarly linked skin color to environmental factors. In his \"Muqaddimah\" (1377), he wrote that black skin was due to the hot climate of sub-Saharan Africa and not due to African lineage. He thereby challenged Hamitic theories of race that held that the sons of Ham (son of Noah) were cursed with black skin. Many writings of Ibn Khaldun were translated during the colonial era in order to advance the colonial propaganda machine.\n", "Population and admixture studies suggest a three-way model for the evolution of human skin color, with dark skin evolving in early hominids in Africa and light skin evolving partly separately at least two times after modern humans had expanded out of Africa.\n\nFor the most part, the evolution of light skin has followed different genetic paths in Western and Eastern Eurasian populations. Two genes however, KITLG and ASIP, have mutations associated with lighter skin that have high frequencies in Eurasian populations and have estimated origin dates after humans spread out of Africa but before the divergence of the two lineages.\n", "Due to frequently differing ancestry among dark-skinned populations, the presence of dark skin in general is not a reliable genetic marker, including among groups in Africa. For example, Wilson et al. (2001) found that most of their Ethiopian samples showed closer genetic affinities with light-skinned Armenians and Norwegians than with dark-skinned Bantu populations. Mohamoud (2006) likewise observed that their Somali samples were genetically more similar to Arab populations than to other African populations.\n\nSection::::Geographic distribution.:South Asia.\n", "BULLET::::1. From about 1.2 million years ago to less than 100,000 years ago, archaic humans, including archaic Homo sapiens, were dark-skinned.\n\nBULLET::::2. 
As \"Homo sapiens\" populations began to migrate, the evolutionary constraint keeping skin dark decreased proportionally to the distance north a population migrated, resulting in a range of skin tones within northern populations.\n\nBULLET::::3. At some point, some northern populations experienced positive selection for lighter skin due to the increased production of vitamin D from sunlight and the genes for darker skin disappeared from these populations.\n", "Another group of hypotheses contended that dark skin pigmentation developed as antibacterial protection against tropical infectious diseases and parasites. Although it is true that eumelanin has antibacterial properties, its importance is secondary as a physical absorbed to protect against UVR induced damage. This hypothesis is not consistent with the evidence that most of the hominid evolution took place in savanna environment and not in tropical rainforests. Humans living in hot and sunny environments have darker skin than humans who live in wet and cloudy environments. The antimicrobial hypothesis also does not explain why some populations (like the Inuit or Tibetans) who live far from the tropics and are exposed to high UVR have darker skin pigmentation than their surrounding populations.\n", "The earliest primate ancestors of modern humans most likely had light skin, like our closest modern relative – the chimpanzee. About 7 million years ago human and chimpanzee lineages diverged, and between 4.5 and 2 million years ago early humans moved out of rainforests to the savannas of East Africa. They not only had to cope with more intense sunlight but had to develop a better cooling system. It was harder to get food in the hot savannas and as mammalian brains are prone to overheating – 5 or 6 °C rise in temperature can lead to heatstroke – so there was a need for the development of better heat regulation. The solution was sweating and loss of body hair.\n", "Data collected from studies on \"MC1R\" gene has shown that there is a lack of diversity in dark-skinned African samples in the allele of the gene compared to non-African populations. This is remarkable given that the number of polymorphisms for almost all genes in the human gene pool is greater in African samples than in any other geographic region. So, while the \"MC1R\"f gene does not significantly contribute to variation in skin colour around the world, the allele found in high levels in African populations probably protects against UV radiation and was probably important in the evolution of dark skin.\n", "Writers in the medieval Middle East also produced theories of environmental determinism. The Afro-Arab writer al-Jahiz argued that the skin color of people and livestock were determined by the water, soil, and heat of their environments. He compared the color of black basalt in the northern Najd to the skin color of the peoples living there to support his theory.\n", "Section::::Genetics.:Dark skin.\n\nAll modern humans share a common ancestor who lived around 200,000 years ago in Africa. Comparisons between known skin pigmentation genes in chimpanzees and modern Africans show that dark skin evolved along with the loss of body hair about 1.2 million years ago and that this common ancestor had dark skin. 
Investigations into dark skinned populations in South Asia and Melanesia indicate that skin pigmentation in these populations is due to the preservation of this ancestral state and not due to new variations on a previously lightened population.\n\nBULLET::::- MC1R\n\nSection::::Genetics.:Light skin.\n", "In Song of Songs (1:5), the tents of the Qedarites are described as black: \"Black I am, but beautiful, ye daughters of Jerusalem / As tents of Qedar, as tentcloth of Salam black.\" Their tents are said to be made of black goat hair. A tribe of Salam was located just south of the Nabateans in Madain Salih, and Knauf proposed that the Qedarites mentioned in this Masoretic text were in fact Nabataeans and played a crucial role in the spice trade in the 3rd century BCE.\n", "Variations in the \"KITL\" gene have been positively associated with about 20% of melanin concentration differences between African and non-African populations. One of the alleles of the gene has an 80% occurrence rate in Eurasian populations. The \"ASIP\" gene has a 75–80% variation rate among Eurasian populations compared to 20–25% in African populations. Variations in the \"SLC24A5\" gene account for 20–25% of the variation between dark and light skinned populations of Africa, and appear to have arisen as recently as within the last 10,000 years. The Ala111Thr or rs1426654 polymorphism in the coding region of the SLC24A5 gene reaches fixation in Europe, but is found across the globe, particularly among populations in Northern Africa, the Horn of Africa, West Asia, Central Asia and South Asia.\n", "Light skin\n\nLight skin is a human skin color, which has little eumelanin pigmentation and which has been adapted to environments of low UV radiation. Light skin is most commonly found amongst the native populations of Europe and Northeast Asia as measured through skin reflectance. People with light skin pigmentation are often referred to as \"white\" or \"fair\", although these usages can be ambiguous in some countries where they are used to refer specifically to certain ethnic groups or populations.\n", "A similar, minor Arabid element is found in parts of the Horn of Africa, having been introduced From the Gulf region in historic times by the first Islamic proselytizers as well as the adjacent Himyarites and Sabaeans of Hadhramaut. However, here again the Arabid element is secondary to the predominant Hamitic type of the region's first Hamito-Semitic speakers, who were ancestral to the Somalis, Abyssinians and other Ethiopid populations.\n", "Section::::Health implications.\n\nSkin pigmentation is an evolutionary adaptation to various UVR levels around the world. As a consequence there are many health implications that are the product of population movements of humans of certain skin pigmentation to new environments with different levels of UVR. Modern humans are often ignorant of their evolutionary history at their peril. Cultural practices that increase problems of conditions among dark-skinned populations are traditional clothing and vitamin D-poor diet.\n\nSection::::Health implications.:Advantages of dark skin pigmentation in high sunlight environments.\n" ]
[ "Since arabs are in arabia they should have very dark skin." ]
[ "Skin color doesn't change evolutionarily for tens of thousands of years. The people there decended from people in locations with light skin color. " ]
[ "false presupposition" ]
[ "Since arabs are in arabia they should have very dark skin." ]
[ "false presupposition" ]
[ "Skin color doesn't change evolutionarily for tens of thousands of years. The people there decended from people in locations with light skin color. " ]
2018-02479
What properties do cooking oils contain that make them beneficial for cooking?
They should be cheap to produce in large quantities. They need to withstand high temperatures without burning (despite what some other user said). They must be non-toxic, and they must not form carcinogenic compounds when they are heated and used.
[ "The following oils are suitable for high-temperature frying due to their high smoke point above :\n\nBULLET::::- Avocado oil\n\nBULLET::::- Mustard oil\n\nBULLET::::- Palm oil\n\nBULLET::::- Peanut oil (marketed as \"groundnut oil\" in the UK and India)\n\nBULLET::::- Rice bran oil\n\nBULLET::::- Safflower oil\n\nBULLET::::- Semi-refined sesame oil\n\nBULLET::::- Semi-refined sunflower oil\n", "There is a wide variety of cooking oils from plant sources such as olive oil, palm oil, soybean oil, canola oil (rapeseed oil), corn oil, peanut oil and other vegetable oils, as well as animal-based oils like butter and lard.\n\nOil can be flavoured with aromatic foodstuffs such as herbs, chillies or garlic.\n\nSection::::Health and nutrition.\n\nA guideline for the appropriate amount of fat—a component of daily food consumption—is established by government agencies.\n", "Section::::Applications.\n\nSection::::Applications.:Cooking.\n\nSeveral edible vegetable and animal oils, and also fats, are used for various purposes in cooking and food preparation. In particular, many foods are fried in oil much hotter than boiling water. Oils are also used for flavoring and for modifying the texture of foods (e.g. Stir Fry). Cooking oils are derived either from animal fat, as butter, lard and other types, or plant oils from the olive, maize, sunflower and many other species.\n\nSection::::Applications.:Cosmetics.\n", "Most large-scale commercial cooking oil refinement will involve all of these steps in order to achieve a product that's uniform in taste, smell and appearance, and has a longer shelf life. Cooking oil intended for the health food market will often be unrefined, which can result in a less stable product but minimizes exposure to high temperatures and chemical processing.\n\nYou can also extract oil from various seeds like, Coconut, peanuts, Sesame, walnuts and many more at home. For that you can use any cold press oil maker machine.\n\nSection::::Waste cooking oil.\n", "Cooking oil\n\nCooking oil is plant, animal, or synthetic fat used in frying, baking, and other types of cooking. It is also used in food preparation and flavouring not involving heat, such as salad dressings and bread dips, and in this sense might be more accurately termed edible oil.\n\nCooking oil is typically a liquid at room temperature, although some oils that contain saturated fat, such as coconut oil, palm oil and palm kernel oil are solid.\n", "BULLET::::- Flavor base – oils can also \"carry\" flavors of other ingredients, since many flavors are due to chemicals that are soluble in oil.\n\nOils can be heated to temperatures significantly higher than the boiling point of water, , and used to cook foods (frying). Oils for this purpose must have a high flash point. Such oils include the major cooking oils – soybean, rapeseed, canola, sunflower, safflower, peanut, cottonseed, etc. Tropical oils, such as coconut, palm, and rice bran oils, are particularly valued in Asian cultures for high-temperature cooking, because of their unusually high flash points.\n", "Oils are extracted from nuts, seeds, olives, grains or legumes by extraction using industrial chemicals or by mechanical processes. Expeller pressing is a chemical-free process that collects oils from a source using a mechanical press with minimal heat. 
Cold-pressed oils are extracted under a controlled temperature setting usually below intended to preserve naturally occurring phytochemicals, such as polyphenols, plant sterols and vitamin E which collectively affect color, flavor, aroma and nutrient value.\n\nSection::::Cooking oil extraction and refinement.\n", "Many vegetable oils are consumed directly, or indirectly as ingredients in food – a role that they share with some animal fats, including butter, ghee, lard, and schmaltz. The oils serve a number of purposes in this role:\n\nBULLET::::- Shortening – to give the pastry a crumbly texture.\n\nBULLET::::- Texture – oils can serve to make other ingredients stick together less.\n\nBULLET::::- Flavor – while less flavorful oils command premium prices, some oils, such as olive, sesame, or almond oil, may be chosen specifically for the flavor they impart.\n", "Section::::Importance of cooking temperature on interfaces.:Smoke points of oils.\n", "The smoke point of cooking oils varies generally in association with how oil is refined: a higher smoke point results from removal of impurities and free fatty acids. Residual solvent remaining from the refining process may decrease the smoke point. It has been reported to increase with the inclusion of antioxidants (BHA, BHT, and TBHQ). For these reasons, the published smoke points of oils may vary.\n", "Cooking techniques can be broken down into two major categories: Oil based and water based cooking techniques. Both oil and water based techniques rely on the vaporization of water to cook the food. Oil based cooking techniques have significant surface interactions that greatly affect the quality of the food they produce. These interactions stem from the polar oil molecules interacting with the surface of the food. Water based techniques have far less surface interactions that affect the quality of the food.\n\nSection::::Interaction of cooking techniques.:Pan fry.\n", "There are large numbers of crude oils all around the world that are used to produce base oils. The most common one is a type of paraffinic crude oil, although there are also naphthenic crude oils that create products with better solubility and very good properties at low temperatures. By using hydrogenation technology, in which sulfur and aromatics are removed using hydrogen under high pressure, you can obtain extremely pure base oils, which are suitable when quality requirements are particularly stringent.\n", "Cooking oils are composed of various fractions of fatty acids. For the purpose of frying food, oils high in monounsaturated or saturated fats are generally popular, while oils high in polyunsaturated fats are less desirable. High oleic acid oils include almond, macadamia, olive, pecan, pistachio, and high-oleic cultivars of safflower and sunflower.\n\nSection::::Types and characteristics.:Smoke point.\n\nThe smoke point is marked by \"a continuous wisp of smoke.\" It is the temperature at which an oil starts to burn, leading to a burnt flavor in the foods being prepared and degradation of nutrients and phytochemicals characteristic of the oil.\n", "BULLET::::- Preservative addition, such as BHA and BHT to help preserve oils that have been made less stable due to high-temperature processing.\n\nFiltering, a non-chemical process which screens out larger particles, could be considered a step in refinement, although it doesn't alter the state of the oil.\n", "Fats and oils created by enzymatic interesterification provide several benefits to food manufacturers. 
These oils provide better health profiles than either palm oil or partially hydrogenated oil because they are trans fat free and lower in saturated fat. The wide plasticity range and more consistent solid fat content create less variability in firmness, which is beneficial in production.\n\nMost often created through domestically sourced soybean oil, they provide a better risk management profile than globally produced palm oil. Lastly, producing enzymatic interesterified oil typically uses less processing and no harmful by-products creating a more sustainable, green process.\n", "In addition to the choice of herbs and seasoning, the timing of when flavours are added will affect the food that is being cooked.\n\nIn some cultures, meat may be seasoned by pouring seasoning sauce over the dish at the table. A variety of seasoning techniques exist in various cultures.\n\nSection::::Oil infusion.\n\nInfused oils are also used for seasoning. There are two methods for doing an infusion—hot and cold. Olive oil makes a good infusion base for some herbs, but tends to go rancid more quickly than other oils. Infused oils should be kept refrigerated.\n\nSection::::Escoffier.\n", "The smoke point of any oil is defined by the temperature at which light blue smoke rises from the surface. The smoke, which contains acrolein, is an eye irritant and asphixiant. The smoke point of oils vary widely. Depending on origin, refinement, age, and source growth conditions, the smoke point for any given type of oil can drop nearly 20 °C. For example, the smoke point of olive oil can vary from being suitable for high temperature frying to only safely used for stir frying. As a cooking oil is refined its smoke point increases. This is because many of the impurities found in natural oils aid in their breakdown. In general the lighter the oil, the higher its smoke point. It is important to choose the appropriate oil for each cooking technique and temperature as cooking oils degrade rapidly when heated about their smoke point. It is recommended that oils heated beyond their smoke point should not be consumed as the chemicals created are suspected carcinogens.\n", "Less aggressive frying temperatures are frequently used. A quality frying oil has a bland flavor, at least smoke and flash points, with maximums of 0.1% free fatty acids and 3% linolenic acid. Those oils with higher linolenic fractions are avoided due to polymerization or gumming marked by increases in viscosity with age. Olive oil resists thermal degradation and has been used as a frying oil for thousands of years.\n\nBULLET::::- Olive oil\n\nSection::::Health and nutrition.:Storing and keeping oil.\n", "While consumption of small amounts of saturated fats is common in diets, meta-analyses found a significant correlation between \"high consumption\" of saturated fats and blood LDL concentration, a risk factor for cardiovascular diseases. Other meta-analyses based on cohort studies and on controlled, randomized trials found a positive, or neutral, effect from consuming polyunsaturated fats instead of saturated fats (a 10% lower risk for 5% replacement).\n", "Several large studies indicate a link between the consumption of high amounts of trans fat and coronary heart disease, and possibly some other diseases. The United States Food and Drug Administration (FDA), the National Heart, Lung and Blood Institute and the American Heart Association (AHA) all have recommended limiting the intake of trans fats. 
In the US, trans fats are no longer \"generally recognized as safe,\" and cannot be added to foods, including cooking oils, without special permission.\n\nSection::::Health and nutrition.:Cooking with oil.\n", "List of macerated oils\n\nMacerated oils are vegetable oils to which other matter, such as herbs, has been added. Commercially available macerated oils include all these, and others. Herbalists and aromatherapists use not only these pure macerated oils, but blends of these oils, as well, and may macerate virtually any known herb. Base oils commonly used for maceration include almond oil, sunflower oil, and olive oil as well as other food-grade triglyceride vegetable oils, but other oils undoubtedly are used as well.\n", "Peanut oil, cashew oil and other nut-based oils may present a hazard to persons with a nut allergy.\n\nSection::::Health and nutrition.:Trans fats.\n\nUnlike other dietary fats, trans fats are not essential, and they do not promote good health. The consumption of trans fats increases one's risk of coronary heart disease by raising levels of \"bad\" LDL cholesterol and lowering levels of \"good\" HDL cholesterol. Trans fats from partially hydrogenated oils are more harmful than naturally occurring oils.\n", "There are many cooking techniques that do not use oil as part of the process such as steaming or boiling. Water based techniques are typically used to cook vegetables or other plants which can be consumed as food. When no oil is present the method of heat transfer to the food is typically water vapor. Water vapor molecules do not have any significant surface interactions with the food surface. Since food, including vegetables, is cooked by the vaporization of water within the food, the use of water vapor as the mode of heat transfer has no effect on the chemical interactions on the surface of the food.\n", "Refined oils high in monounsaturated fats, such as macadamia oil, keep \"up to a year\", while those high in polyunsaturated fats, such as soybean oil, keep about six months. Rancidity tests have shown that the shelf life of walnut oil is about 3 months, a period considerably shorter than the \"best before\" date shown on labels.\n\nBy contrast, oils high in saturated fats, such as avocado oil, have relatively long shelf lives and can be safely stored at room temperature, as the low polyunsaturated fat content facilitates stability.\n\nSection::::Types and characteristics.\n", "Cooking oil can be recycled. It can be used as animal feed, directly as fuel, and to produce biodiesel, soap, and other industrial products.\n\nIn the recycling industry, used cooking oil recovered from restaurants and food-processing industries (typically from deep fryers or griddles) is called recycled vegetable oil (RVO), used vegetable oil (UVO), waste vegetable oil (WVO), or yellow grease.\n\nYellow grease is used to feed livestock, and to make soap, make-up, clothes, rubber, detergents, and biodiesel fuel.\n\nUsed cooking oil, besides being converted to biodiesel, can be used directly in modified diesel engines and for heating.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-20802
When a large retail corporation like Toys R Us goes out of business, what happens in practicality?
The remaining stores try to find buyers for their merchandise at the best price that will move goods quickly. This often means sharp discounts on store shelves, another business buying out the merchandise, or, if nobody will buy it, simply donating it to charity. Employees may be given short notice, but in many bankruptcies even a casual employee can foresee what is coming well in advance (store closings, layoffs, poor sales volumes, etc.). Sometimes the rights to use the company name are sold off, occasionally to create a new online retailer with the same name.
[ "Sears Holdings filed for bankruptcy protection in October 2018 and plans to close about 142 of its 700 stores by the end of 2018.\n\nToys \"R\" Us filed for bankruptcy, and closed all its US stores in June 2018. In late 2018, the brand name was pulled out of bankruptcy protection; there are currently plans for it to return in the form of specialty stores under the name \"Geoffrey's Toy Box\".\n", "Section::::Financial trouble and bankruptcy.:Cancellations and demise.\n", "On June 29, 2018, Toys \"R\" Us shut down all of its remaining U.S. locations, after 70 years of operations. In early July 2018, it was reported that unknown benefactors had bought out all of the remaining stock of two locations in North Carolina so they could be donated to charity. \n", "BULLET::::- The Bon-Ton, a regional department store operator, filed for Chapter 11 bankruptcy on February 5, 2018. The company said it would close 42 stores. On April 17, 2018 they announced plans to go out of business after being purchased by two liquidators.\n\nBULLET::::- Borders Group, which included its namesake chain, along with Waldenbooks, filed for bankruptcy and closed all of its stores in 2011.\n", "They acquired Pleasants Hardware in 1989. However, on December 28, 2015, the C.F. Sauer company informed the employees of Pleasants Hardware that they were no longer interested in owning hardware stores. C.F. Sauer then sold a number of Pleasants' smaller stores to a Do it Best group in Virginia Beach and gave the remaining 100+ employees in the flagship store sixty days notice (required by law according to the WARN Act) that they would soon be unemployed. On February 27, 2016, the original Pleasants Hardware closed.\n", "On June 16, 2009, it was announced that Koenigsegg and a group of Norwegian investors planned to acquire the Saab brand from General Motors. GM would continue to supply architecture and powertrain technology for an unspecified amount of time. It also becomes the last brand/subsidiary from GM to be sold (Hummer was first, followed by Saturn). The deal failed on November 24, 2009. GM, however, requested Spyker Cars to acquire Saab from MLC a few weeks later. But however, MLC announced it would close Saab on December 19, 2009, although this plan was later reversed. Motors Liquidation Company had until January 7, 2010, for the deadline of the revised bid. The sale of Saab to Spyker was approved on January 26, 2010, and completed on February 23, 2010.\n", "On April 21, 2018, it was announced that UK and Irish rival Smyths would purchase Toys \"R\" Us stores in Germany, Austria and Switzerland, as well as Toys \"R\" Us Europe's head office in Cologne. Smyths said that all of the outlets acquired will be rebranded. On April 13, a bid was made by Isaac Larian to buy 356 Toys \"R\" Us stores for $890 million, but was rejected on April 17 and was fully scrapped on April 23. On July 19, 2019, it was announced that PicWicToys will replace the former Toys \"R\" Us stores in France. \n\nSection::::History.:Bankruptcy.:Australia.\n", "Section::::History.:Bankruptcies and closure.\n", "As of 1999, the company operated 1,324 stores across the United States, and was the second-largest toy retailer in the U.S. After filing for bankruptcy, the company went out of business on February 9, 2009. The company operated 461 stores at the time of its closure. International retailer Toys \"R\" Us acquired the remains of K·B Toys, consisting mainly of its website, trademarks, and intellectual property rights. 
Strategic Marks, a company that buys and revives defunct brands, purchased the brand in 2016, and plans to open new stores under the name beginning in 2019.\n\nSection::::History.\n", "The company filed for Chapter 11 bankruptcy protection on September 18, 2017, and its British operations entered administration in February 2018. In March 2018, the company announced that it would close all of its U.S. and British stores. The British locations closed in April and the U.S. locations in June. The Australian wing of Toys \"R\" Us entered voluntary administration on May 22 and closed all of its stores on August 5, 2018. Operations in other international markets such as Asia and Africa were less affected, but chains in Canada, parts of Europe and Asia were eventually sold to third-parties.\n", "The Australian wing of Toys \"R\" Us entered voluntary administration on May 22. On June 20, It was announced that all of their Australian stores will be closing as well. The closure of all stores was concluded on August 5, 2018.\n\nSection::::History.:Bankruptcy.:Asia.\n", "Abandoning a technology is not only due to bad or outmoded idea. There are instances, such as the case of some medical technologies, where products are phased out the market because they are no longer viable as business ventures. Some orphaned technologies do not suffer complete abandonment or obsolescence. For instance, there is the case of IBM's Silicon Germanium (SiGe) technology, which is a program that produced an \"in situ\" dopped alloy as a replacement for the conventional implantation step in silicon semiconductor bipolar process. The technology was previously orphaned but was continued again by a small team at IBM so that it emerged as a leading product in the high-volume communications marketplace. Technologies orphaned due to failure on the part of their startup developers can be picked up by another investor. This is demonstrated by Wink, an IoT technology orphaned when its parent company Quirky filed for bankruptcy. The platform, however, continued after it was purchased by another company called Flex.\n", "In 2004, after four wooden roller coasters were built, S&S closed that division of the company.\n\nIn 2006, S&S Power opened Celebration Centre, a Family Entertainment Center featuring a number of S&S rides and prototypes. The facility was later sold and is currently no longer operating.\n", "On January 20, 2019, the company emerged from bankruptcy as Tru Kids.\n\nAs of June 21, 2019, the company plans to open new stores in the US slated to be 10,000 square feet, roughly a third of the size of the big box brand that closed last year. \n\nSection::::Flagship store.\n", "It was initially stated that only the U.S. and Canadian operations would be affected, and that its brick-and-mortar stores and online sales sites would continue to operate. In January 2018, the company announced it would liquidate and close up to 182 of its stores in the U.S. as part of its restructuring, as well as convert up to 12 stores into co-branded Toys \"R\" Us and Babies \"R\" Us stores. \n", "Section::::History.:Demise.\n", "It is generally believed that production of the original Mattel Thingmakers was discontinued following consumer safety concerns over allowing children to use a small electric heater as a toy.\n", "On June 29, 2018, Toys \"R\" Us closed as part of the chain's liquidation. 
The store shared a 63,000-square-foot building, built in 1994 on the former Sears & Roebuck site, with an Ulta Beauty cosmetic store, according to city records. The toy store rented about 45,000 square feet, and its space is valued at $4.9 million. In February 2019, the former Toys \"R\" Us spot would be the new home of the PGA Tour Superstore golf shop. \n", "The company lost market share in its housewares and electronics sectors to giant discounters such as Walmart and Bed Bath & Beyond, and later Best Buy and Circuit City. Although Service Merchandise was early to embrace the Internet in the 1990s, generating tens of millions of dollars in sales, it was not enough to offset the damage done by the mega-chain stores springing up nationwide. Until its closure, however, Service Merchandise enjoyed a strong jewelry department, continuing as the largest watch retailer in the United States.\n", "On March 8, 2019 Enesco, LLC, a global leader in the giftware, home décor, and accessories industries acquired Things Remembered, Inc., the North American leading omnichannel retailer of personalized gifts and merchandise. Things Remembered will continue to operate more than 170 retail locations, as well as its online, direct mail, and B2B retail businesses, all under its brand name.ref\n", "On January 24, 2018, it was announced that Toys \"R\" Us' sister store, Babies \"R\" Us would be closing as part of a plan to close 182 Toys \"R\" Us and/or Babies \"R\" Us stores nationwide due to bankruptcy. The Babies \"R\" Us store closed in April 2018. On March 14, 2018, Toys \"R\" Us announced that they would be closing all 1,795 locations Worldwide, including the Shopper's World Toys \"R\" Us location. The store closed on June 27, 2018. Toy City moved in place in September. \n\nSection::::See also.\n", "Finally, on February 1, 2006, Palisades announced its bankruptcy and subsequent sale of the company to Limited by CAS Inc. Horn discussed the situation in a press release noting, \"This development parallels a general trend within the toy industry, including the bankruptcy of one of Palisades’ largest customers.\" While Limited continued Palisades' Factory X branch of statues and prop replicas, Horn and his wife were not offered positions in the transaction.\n", "In 2010, Toys \"R\" Us, Inc. reported that its Internet sales grew 29.9% year-over-year to $782 million from $602 million, and in April 2011, the company announced plans to open a dedicated e-commerce fulfillment center in McCarran, Nevada. The company later reported online sales of $1 billion for 2011 and $1.1 billion for 2012.\n\nThe website was sunsetted with a brief farewell message when the US liquidation began in March 2018. The surviving international stores continue to sell merchandise online.\n\nSection::::Mascot.\n", "On February 28, 2018, it was reported that the company was exploring retaining its stronger Canadian operations, and the divestiture of some of its corporate-owned stores to franchises (leaving approximately 200 in a downsized chain). Toys \"R\" Us Inc. later announced that all U.S. locations would be closed. \n", "The company responded to the market pressures with a series of restructuring plans that included the discontinuation of unprofitable product lines such as electronics, toys and sporting goods, and refocusing on fine jewelry, gifts, and home decor products. Many of their showrooms were also closed or downsized significantly. 
During this time, the company was successful in sub-dividing a number of its company-owned stores into two or three units and sub-leasing the newly created spaces to other national chains thus reducing costs and at the same time, generating more mall and store traffic.\n\nSection::::History.:Bankruptcy and liquidation.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03185
How does fever work as a self-regulatory mechanism of the body?
At the elevated temperatures of a fever, many of the bacteria and viruses that are infecting your body and making you sick grow and reproduce more slowly, while parts of your own immune response work more efficiently, which helps your body clear the infection. A fever becomes dangerously high at around 40°C (104°F), so at that point you should contact a doctor.
[ "Temperature is ultimately regulated in the hypothalamus. A trigger of the fever, called a pyrogen, causes release of prostaglandin E2 (PGE2). PGE2 in turn acts on the hypothalamus, which creates a systemic response in the body, causing heat-generating effects to match a new higher temperature set point.\n", "One model for the mechanism of fever caused by exogenous pyrogens includes LPS, which is a cell wall component of gram-negative bacteria. An immunological protein called lipopolysaccharide-binding protein (LBP) binds to LPS. The LBP–LPS complex then binds to the CD14 receptor of a nearby macrophage. This binding results in the synthesis and release of various endogenous cytokine factors, such as interleukin 1 (IL-1), interleukin 6 (IL-6), and the tumor necrosis factor-alpha. In other words, exogenous factors cause release of endogenous factors, which, in turn, activate the arachidonic acid pathway. The highly toxic metabolism-boosting supplement 2,4-dinitrophenol induces high body temperature via the inhibition of ATP production by mitochondria, resulting in impairment of cellular respiration. Instead of producing ATP, the energy of the proton gradient is lost as heat.\n", "These cytokine factors are released into general circulation, where they migrate to the circumventricular organs of the brain due to easier absorption caused by the blood–brain barrier's reduced filtration action there. The cytokine factors then bind with endothelial receptors on vessel walls, or interact with local microglial cells. When these cytokine factors bind, the arachidonic acid pathway is then activated.\n\nSection::::Pathophysiology.:Pyrogens.:Exogenous.\n", "Fever can also be behaviorally induced by invertebrates that do not have immune-system based fever. For instance, some species of grasshopper will thermoregulate to achieve body temperatures that are 2–5 °C higher than normal in order to inhibit the growth of fungal pathogens such as \"Beauveria bassiana\" and \"Metarhizium acridum\". Honeybee colonies are also able to induce a fever in response to a fungal parasite \"Ascosphaera apis\".\n\nSection::::Further reading.\n\nBULLET::::- Rhoades, R. and Pflanzer, R. Human physiology, third edition, chapter 27 \"Regulation of body temperature\", p. 820 \"Clinical focus: pathogenesis of fever\".\n\nSection::::External links.\n\nBULLET::::- Fever and Taking Your Child's Temperature\n", "Section::::Pathophysiology.:PGE2 release.\n\nPGE2 release comes from the arachidonic acid pathway. This pathway (as it relates to fever), is mediated by the enzymes phospholipase A2 (PLA2), cyclooxygenase-2 (COX-2), and prostaglandin E2 synthase. These enzymes ultimately mediate the synthesis and release of PGE2.\n", "PGE2 is the ultimate mediator of the febrile response. The set point temperature of the body will remain elevated until PGE2 is no longer present. PGE2 acts on neurons in the preoptic area (POA) through the prostaglandin E receptor 3 (EP3). EP3-expressing neurons in the POA innervate the dorsomedial hypothalamus (DMH), the rostral raphe pallidus nucleus in the medulla oblongata (rRPa), and the paraventricular nucleus (PVN) of the hypothalamus . Fever signals sent to the DMH and rRPa lead to stimulation of the sympathetic output system, which evokes non-shivering thermogenesis to produce body heat and skin vasoconstriction to decrease heat loss from the body surface. 
It is presumed that the innervation from the POA to the PVN mediates the neuroendocrine effects of fever through the pathway involving pituitary gland and various endocrine organs.\n", "Cytokines, such as interleukin-1 can be synthesized and released by neurons. Bartfai's group showed interleukin-1, then called the endogenous pyrogen, is released from the adrenal medulla and brain and demonstrated that the endogenous pyrogen can control body temperature by acting at receptors and hyperpolarizing hypothalamic gabaergic interneurons that control thermogenesis in brown adipose tissue, and thus core body temperature and the fever response.,\n", "Shortly after an onset of an infection into organism, IL-1α activates a set of immune system response processes. In particular, IL-1α: \n\nBULLET::::- stimulates fibroblasts proliferation\n\nBULLET::::- induces synthesis of proteases, subsequent muscle proteolysis, release of all types of amino acids in blood and stimulates acute-phase proteins synthesis\n\nBULLET::::- changes the metallic ion content of blood plasma by increasing copper and decreasing zinc and iron concentration in blood\n\nBULLET::::- increases blood neutrophils\n\nBULLET::::- activates lymphocyte proliferation and induces fever\n", "Most autoinflammatory diseases are genetic and present during childhood. The most common genetic autoinflammatory syndrome is familial Mediterranean fever, which causes short episodes of fever, abdominal pain, serositis, lasting less than 72 hours. It is caused by mutations in the MEFV gene, which codes for the protein pyrin.\n", "To counter this, Francesco Torti, who first systematically studied the effect of cinchona in the treatment of malaria, wrote a book which he titled \"Therapeutice Specialis ad Febres Periodicas Perniciosas\".\n\nSection::::Tree of fevers.\n", "Pyrin is a protein normally present in the inflammasome. The mutated pyrin protein is thought to cause inappropriate activation of the inflammasome, leading to release of the pro-inflammatory cytokine IL-1β. Most other autoinflammatory diseases also cause disease by inappropriate release of IL-1β. Thus, IL-1β has become a common therapeutic target, and medications such as anakinra, rilonacept, and canakinumab have revolutionized the treatment of autoinflammatory diseases.\n", "The condition appears to be the result of a disturbance of innate immunity. The changes in the immune system are complex and include increased expression of complement related genes (C1QB, C2, SERPING1), interleulkin-1-related genes (interleukin-1B, interleukin 1 RN, CASP1, interleukin 18 RAP) and interferon induced (AIM2, IP-10/CXCL10) genes. T cell associated genes (CD3, CD8B) are down regulated. Flares are accompanied by increased serum levels of activated T lymphocyte chemokines (IP-10/CXCL10, MIG/CXCL9), G-CSF and proinflammatory cytokines (interleukin 6, interleukin 18). Flares also manifest with a relative lymphopenia. Activated CD4(+)/CD25(+) T-lymphocyte counts correlated negatively with serum concentrations of IP-10/CXCL10, whereas CD4(+)/HLA-DR(+) T lymphocyte counts correlated positively with serum concentrations of the counterregulatory IL-1 receptor antagonist.\n", "A pyrogen is a substance that induces fever. These can be either internal (endogenous) or external (exogenous) to the body. The bacterial substance lipopolysaccharide (LPS), present in the cell wall of gram-negative bacteria, is an example of an exogenous pyrogen. 
Pyrogenicity can vary: In extreme examples, some bacterial pyrogens known as superantigens can cause rapid and dangerous fevers. Depyrogenation may be achieved through filtration, distillation, chromatography, or inactivation.\n\nSection::::Pathophysiology.:Pyrogens.:Endogenous.\n", "Following tissue injury in patients with Graft-versus-host disease (GVHD), ATP is released into the pertioneal fluid. It binds onto the P2RX7 receptors of host antigen-presenting cells (APCs) and activates the inflammasomes. As a result, the expression of co-stimulatory molecules by APCs is upregulated. The inhibition of the P2X7 receptor increases the number of regulatory T cells and decreases the incidence of acute GVHD.\n\nSection::::Therapeutic interventions.\n\nSection::::Therapeutic interventions.:Current.\n\nBULLET::::- Acupuncture\n", "Section::::Pathophysiology.:Hypothalamus.\n\nThe brain ultimately orchestrates heat effector mechanisms via the autonomic nervous system or primary motor center for shivering. These may be:\n\nBULLET::::- Increased heat production by increased muscle tone, shivering and hormones like epinephrine (adrenaline)\n\nBULLET::::- Prevention of heat loss, such as vasoconstriction.\n\nIn infants, the autonomic nervous system may also activate brown adipose tissue to produce heat (non-exercise-associated thermogenesis, also known as non-shivering thermogenesis). Increased heart rate and vasoconstriction contribute to increased blood pressure in fever.\n\nSection::::Pathophysiology.:Usefulness.\n", "The innate immune system is common to all multicellular organisms and forms the first line of defense against pathogens. Infected cells recognize that they are under attack by detecting the pathogen directly through the Pathogen Associated Molecular Patterns (PAMPS) which bind with the Pattern Recognition Receptors (PRR) on the host cells. Host cells also recognize the pathogen through effector-triggered immunity, whereby the host cells are alerted to the pathogen by the associated damage caused by pathogenic toxins or effectors.\n", "An organism at optimum temperature is considered \"afebrile\" or \"apyrexic\", meaning \"without fever\". If temperature is raised, but the setpoint is not raised, then the result is hyperthermia.\n\nSection::::Concepts.:Hyperthermia.\n", "A fever occurs when the core temperature is set higher, through the action of the pre-optic region of the anterior hypothalamus. For example, in response to a bacterial or viral infection, certain white blood cells within the blood will release pyrogens which have a direct effect on the anterior hypothalamus, causing body temperature to rise, much like raising the temperature setting on a thermostat.\n\nIn contrast, hyperthermia occurs when the body temperature rises without a change in the heat control centers.\n", "EP-deficient mice as well as mice selectively deleted of EP expression in the brain's median preoptic nucleus fail to develop fever in response to endotoxin (i.e. bacteria-derived lipopolysaccharide) or the host-derived regulator of body temperature, IL-1β. The ability of endotoxind and IL-1β but not that of PGE to trigger fever is blocked by inhibitors of nitric oxide and PG2 EP3-deficient mice exhibit normal febrile responses to stress, interleukin-8, and macrophage inflammatory protein-1beta (MIP-1β). 
It is suggested that these findings indicate that a) activation of the EP receptor suppresses the inhibitory tone that the preoptic hypothalamus has on thermogenic effector cells in the brain; b) endotoxin and IL-1β simulate the production of nitric oxide which in turn causes the production of PGE and thereby the EP-dependent fever-producing; c) other factors such as stress, interleukin 8, and MIP-1β trigger fever independently of EP; and d) inhibition of the PGE-EP pathway underlies the ability of aspirin and other Nonsteroidal anti-inflammatory drugs to reduce fever caused by inflammation in animals and, possibly, humans.\n", "However, there are some autoinflammatory diseases that are not known to have a clear genetic cause. This includes PFAPA, which is the most common autoinflammatory disease seen in children, characterized by episodes of fever, aphthous stomatitis, pharyngitis, and cervical adenitis. Other autoinflammatory diseases that do not have clear genetic causes include adult-onset Still's disease, systemic-onset juvenile idiopathic arthritis, Schnitzler syndrome, and chronic recurrent multifocal osteomyelitis. It is likely that these diseases are multifactorial, with genes that make people susceptible to these diseases, but they require an additional environmental factor to trigger the disease.\n\nSection::::See also.\n\nBULLET::::- List of cutaneous conditions\n", "Section::::Background.\n\nNeural targets that control thermogenesis, behavior, sleep, and mood can be affected by pro-inflammatory cytokines which are released by activated macrophages and monocytes during infection. Within the central nervous system production of cytokines has been detected as a result of brain injury, during viral and bacterial infections, and in neurodegenerative processes.\n\nFrom the US National Institute of Health:\n", "BULLET::::- Another control mechanism is through the IL-2 feedback loop. Antigen-activated T cells produce IL-2 which then acts on IL-2 receptors on regulatory T cells alerting them to the fact that high T cell activity is occurring in the region, and they mount a suppressory response against them. This is a negative feedback loop to ensure that overreaction is not occurring. If an actual infection is present other inflammatory factors downregulate the suppression. Disruption of the loop leads to hyperreactivity, regulation can modify the strength of the immune response. A related suggestion with regard to interleukin 2 is that activated regulatory T cells take up interleukin 2 so avidly that they deprive effector T cells of sufficient to avoid apoptosis.\n", "The neural activation mechanisms involved in the regulation of body temperature are largely undefined. It is known that sympathetic pathways are involved in increasing heat production and reducing heat loss and are activated by neurons in the rostal medullary raphe (RMR). 
These neurons were identified as playing an important role in the elevation of body temperature during both cold exposure and induced fever by observation that hyperpolarization prior to exposure to these conditions inhibits the elevation of body temperature in response.\n\nSection::::Role in thermoregulation.:Febrile response.\n", "With fever, the body's core temperature rises to a higher temperature through the action of the part of the brain that controls the body temperature; with hyperthermia, the body temperature is raised without the influence of the heat control centers.\n\nSection::::Concepts.:Hypothermia.\n\nIn hypothermia, body temperature drops below that required for normal metabolism and bodily functions. In humans, this is usually due to excessive exposure to cold air or water, but it can be deliberately induced as a medical treatment. Symptoms usually appear when the body's core temperature drops by below normal temperature.\n\nSection::::Concepts.:Basal body temperature.\n", "Information gained during recent epidemics suggests that chikungunya fever may result in a chronic phase as well as the phase of acute illness. Within the acute phase, two stages have been identified: a viral stage during the first five to seven days, during which viremia occurs, followed by a convalescent stage lasting approximately ten days, during which symptoms improve and the virus cannot be detected in the blood. Typically, the disease begins with a sudden high fever that lasts from a few days to a week, and sometimes up to ten days. The fever is usually above and sometimes reaching and may be biphasic—lasting several days, breaking, and then returning. Fever occurs with the onset of viremia, and the level of virus in the blood correlates with the intensity of symptoms in the acute phase. When IgM, an antibody that is a response to the initial exposure to an antigen, appears in the blood, viremia begins to diminish. However, headache, insomnia and an extreme degree of exhaustion remain, usually about five to seven days.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-00065
How is my phone able to share its internet with my PC through USB tethering?
USB is just a conduit for data signals. If the software on both the phone and your PC knows how to use that communication channel, then they can talk to each other about anything they want, including internet traffic. There is no difference between your phone transferring pictures to your PC via the USB cable and your phone transferring the HTML and image files needed to load a web page.
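A minimal sketch of why this makes tethering invisible to applications, assuming a Unix-like host on which the tethered phone is exposed as an ordinary network interface (the interface names mentioned below are hypothetical examples and vary by system): ordinary socket code works the same over a USB link as over Wi-Fi or Ethernet.

# Minimal sketch: once tethering is active, the operating system treats the
# phone as just another network interface, so ordinary socket code works
# unchanged. Nothing here is USB-specific.
import socket
import urllib.request

# Interfaces the OS currently knows about; on a Linux host a tethered phone
# often appears with a name like "usb0" or "rndis0" (names vary by system).
for index, name in socket.if_nameindex():
    print(index, name)

# A plain HTTP request: the kernel routes it over whichever interface the
# routing table selects, which may well be the USB-tethered one.
with urllib.request.urlopen("http://example.com") as response:
    print(response.status, "response,", len(response.read()), "bytes")

The point of the sketch is that application code only ever sees the IP stack; whether the bytes travel over copper Ethernet, Wi-Fi, or a USB cable is decided below that layer.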
[ "Many mobile devices are equipped with software to offer tethered Internet access. Windows Mobile 6.5, Windows Phone 7, Android (starting from version 2.2), and iOS 3.0 (or later) offer tethering over a Bluetooth PAN or a USB connection. Tethering over Wi-Fi, also known as Personal Hotspot, is available on iOS starting with iOS 4.2.5 (or later) on iPhone 4, 4S (2010), 5, iPad (3rd generation), certain Windows Mobile 6.5 devices like the HTC HD2, Windows Phone 7, 8 and 8.1 devices (varies by manufacturer and model), and certain Android phones (varies widely depending on carrier, manufacturer, and software version).\n", "If tethering is done over WLAN, the feature may be branded as a personal or mobile hotspot, which allows the device to serve as a portable router. Mobile hotspots may be protected by a PIN or password. The Internet-connected mobile device can act as a portable wireless access point and router for devices connected to it.\n\nSection::::Mobile device's OS support.\n", "Tethering\n\nTethering, or phone-as-modem (PAM), is the sharing of a mobile device's Internet connection with other connected computers. Connection of a mobile device with other devices can be done over wireless LAN (Wi-Fi), over Bluetooth or by physical connection using a cable, for example through USB.\n", "Standard USB uses a master/slave architecture; a \"host\" acts as the master device for the entire bus, and a USB \"device\" acts as a slave. If implementing standard USB, devices must assume one role or the other, with computers generally set up as hosts, while (for example) printers normally function as slaves. In the absence of USB OTG, cell phones often implemented slave functionality to allow easy transfer of data to and from computers. Such phones, as slaves, could not readily be connected to printers as they also implemented the slave role. USB OTG directly addresses this issue.\n", "A mobile phone, such as a smartphone, that connects to data or voice services without going through the cellular base station is not on the mobile Internet. A laptop with a broadband modem and a cellular service provider subscription, that is traveling on a bus through the city is on mobile Internet.\n\nA mobile broadband modem \"tethers\" the smartphone to one or more computers or other end-user devices to provide access to the Internet via the protocols that cellular telephone service providers may offer.\n", "In 2005 XS4ALL Internet BV (the first public internet provider in the Netherlands) supplied all of its DSL users with a USB VOIP Phone, it was a creation out of the mind of Donar Alofs and built by Philips. It came with a browser plugin and people could make and receive calls via this USB Phone via and internet website (webphone.xs4all.nl). Later on, people got DSL modems with rj-11 VoIP ports and rj45 ISDN ports, later the DSL modem/routers also got built-in DECT for making calls with traditional phones.\n", "All MSN Companions ran an early version of Microsoft Windows CE, and were shipped with Microsoft Internet Explorer 4.0. However, the hardware provided by each manufacturer was significantly different, with some companies choosing to use a wireless keyboard over a wired one.\n\nThe Vestel package included a 15-inch monitor (or a 10-inch LCD monitor) and a PS/2 keyboard with a touchpad. 
The device itself had 32MB (+16MB flash) of memory, a 200 MHz Geode processor, two USB ports, a phone jack, and a parallel port.\n", "However, to facilitate migration from wired to wireless, WUSB introduced a new \"Device Wire Adapter (DWA)\" class. Sometimes referred to as a \"WUSB hub\", a DWA allows existing USB 2.0 devices to be used wirelessly with a WUSB host.\n\nWUSB host capability can be added to existing PCs through the use of a \"Host Wire Adapter (HWA)\". The HWA is a USB 2.0 device that attaches externally to a desktop or laptop's USB port or internally to a laptop's MiniCard interface.\n", "As USB has become faster, devices have also become hungrier for data and so there is now demand for sending large amounts of data - either to be stored on the device, or be relayed over wireless links (see 3GPP Long Term Evolution).\n\nSince the new devices, although faster than before, are still much lower in power than desktop PCs, the issue of careful data handling arises, to maximize use of DMA resources on the device and minimize (or eliminate) copying of data (zero-copy). The NCM protocol has elaborate provisions for this. See link below for careful protocol comparisons.\n", "BULLET::::- Automatic path choice - Once connected, devices find a path to the Internet also completely automatically. If a path fails, a new one will be chosen and, if necessary, new connections will be established with other devices.\n\nBULLET::::- Multi-hop - When there is no direct Internet connection, devices will access the Internet through chains of other devices. Again, if necessary, network chains will grow to reach the Internet connection.\n\nSection::::Awards and recognitions.\n", "There is also USB over IP, which may use IP-based networking to transfer USB traffic wirelessly. For example, with proper drivers the host side may use 802.11a/b/g/n/ac Wi-Fi (or wired Ethernet) to communicate with the device side.\n\nSection::::Competitors.:Media Agnostic USB.\n", "There are, however, several ways to enable tethering on restricted devices without paying the carrier for it, including 3rd party USB Tethering apps such as PDAnet, rooting Android devices or jailbreaking iOS devices and installing a tethering application on the device. Tethering is also available as a downloadable third-party application on most Symbian mobile phones as well as on the MeeGo platform and on WebOS mobiles phones.\n\nSection::::In carriers' contracts.\n", "Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers to computers, mobile phones (called \"cell phones\" in North America and South Africa, and \"hand phones\" in Asia), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection using a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices, added to some devices using PC cards, USB modems, and USB sticks or dongles, or separate wireless modems can be used.\n", "Though Kies connectivity has traditionally been via mini or micro-USB cable (needing of some software, and not plug and play), wireless LAN connectivity between a Samsung device on which the \"Kies Wireless\" Android app is running, and any Windows or Macintosh computer running the Kies full version, is now also possible. The Kies Wireless app also supports wireless connectivity with other devices via said other devices' web browsers. 
All such connectivity, though, must be via a local Wi-Fi connection (and not via cellular 2G, 3G, or 4G data networks) wherein all involved devices are on the same Wi-Fi LAN.\n", "The most common operating system on such embedded devices is Linux. More seldomly, VxWorks is used. The devices are configured over a web user interface served by a light web server software running on the device. It is possible for a computer running a desktop operating system with appropriate software to act as a wireless router. This is commonly referred to as a SoftAP.\n", "Although mobile phones had long had the ability to access data networks such as the Internet, it was not until the widespread availability of good quality 3G coverage in the mid-2000s (decade) that specialized devices appeared to access the mobile web. The first such devices, known as \"dongles\", plugged directly into a computer through the USB port. Another new class of device appeared subsequently, the so-called \"compact wireless router\" such as the Novatel MiFi, which makes 3G Internet connectivity available to multiple computers simultaneously over Wi-Fi, rather than just to a single computer via a USB plug-in.\n", "In June 2006, five companies showed the first multi-vendor interoperability demonstration of Wireless USB. A laptop with an Intel host adapter using an Alereon PHY was used to transfer high definition video from a Philips wireless semiconductor with a Staccato Communications PHY, all using Microsoft Windows XP drivers developed for Wireless USB.\n", "Except with Phone-as-Modem plans, you may not use a mobile device (including a Bluetooth device) as a modem in connection with any computer. We reserve the right to deny or terminate service without notice for any misuse or any use that adversely affects network performance.\n\nT-Mobile USA has a similar clause in its \"Terms & Conditions\": \n\nUnless explicitly permitted by your Data Plan, other uses, including for example, using your Device as a modem or tethering your Device to a personal computer or other hardware, are not permitted.\n", "On some mobile network operators, this feature is contractually unavailable by default, and may only be activated by paying to add a tethering package to a data plan or choosing a data plan that includes tethering, such as Lycamobile MVNO. This is done primarily because with a computer sharing the network connection, there may well be a substantial increase in the customer's mobile data use, for which the network may not have budgeted in their network design and pricing structures.\n", "Section::::Linux-specific driver.\n\nThe USB-eth module in Linux makes the computer running it a variation of an Ethernet device that uses USB as the physical medium. It creates a Linux network interface, which can be assigned an IP address and otherwise treated the same as a true Ethernet interface. Any applications that work over real Ethernet interfaces will work over a USB-eth interface without modification, because they can't tell that they aren't using real Ethernet hardware.\n\nOn Linux hosts, the corresponding Ethernet-over-USB kernel module is called usbnet. The Bahia Network Driver is a usbnet-style driver available for Win32 hosts.\n", "BULLET::::1. Association Phase: Once AOSS has been initiated on both devices via the AOSS button, the access point will change its SSID to \"ESSID-AOSS\" and the client will attempt to connect to it. Both devices will attempt connection for two minutes. 
Connection will be made using a secret 64-bit WEP key known to both devices.\n", "There are numerous protocols for Ethernet-style networking over USB. The main motivation for these protocols is to allow application-independent exchange of data with USB devices, instead of specialized protocols such as video or MTP. Even though USB is not a physical Ethernet, the networking stacks of all major operating systems are set up to transport IEEE 802.3 frames, without caring much what the underlying transport really is.\n", "Copies of files accessed over MTP may remain on the host computer even after reboot, where they will be accessible to the user account which accessed them, as well as any other user accounts able to read that user account's files, including any administrative users. Windows 7's sensor platform supports sensors built into MTP-compatible devices.\n\nSection::::MTP support.:Unix-like systems.\n\nA free and open-source implementation of the Media Transfer Protocol is available as libmtp. This library incorporates product and device IDs from many sources, and is commonly used in other software for MTP support.\n\nSection::::MTP support.:Unix-like systems.:Graphical.\n", "For IPv4 networks, the tethering normally works via NAT on the handset's existing data connection, so from the network point of view, there is just one device with a single IPv4 network address, though it is technically possible to attempt to identify multiple machines.\n", "The approach allows devices with very limited communications hardware to operate over IP networks. The Linux kernel for the iPAQ uses this communications strategy exclusively, since the iPAQ hardware has neither an accessible legacy (RS-232/RS-422) serial port nor a dedicated network interface.\n\nSection::::Providers.\n" ]
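One of the passages above notes that IPv4 tethering normally works via NAT on the handset, so the network sees just one device with a single address. Below is a toy sketch of that translation idea; it is purely illustrative, the address and port numbers are made up, and it is not how any particular handset implements NAT.

# Toy illustration of the NAT idea described in the passages above: the
# phone rewrites outgoing connections so that every tethered device appears
# to the carrier as the phone's single public IP address.
PHONE_PUBLIC_IP = "203.0.113.7"   # hypothetical address assigned by the carrier

class SimpleNat:
    def __init__(self):
        self.next_port = 40000
        self.table = {}            # (private_ip, private_port) -> public_port

    def outbound(self, private_ip, private_port):
        """Map a tethered device's connection onto the phone's own address."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return PHONE_PUBLIC_IP, self.table[key]

    def inbound(self, public_port):
        """Find which tethered device a reply packet belongs to."""
        for (private_ip, private_port), port in self.table.items():
            if port == public_port:
                return private_ip, private_port
        return None

nat = SimpleNat()
print(nat.outbound("192.168.42.10", 51515))   # laptop tethered over USB
print(nat.outbound("192.168.42.11", 51516))   # second device, same public IP
print(nat.inbound(40000))                     # reply routed back to the laptop

Every tethered device is mapped onto a different port of the phone's single public address, which is why, from the network's point of view, there appears to be only one device.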
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-04294
Why is it that, despite how much CGI has improved in the last 20 years, it is always possible to tell when human faces are digitally rendered?
The term is “uncanny valley.” And yes, it is much harder to make something as familiar as a human face look realistic. Our brains are much better at noticing that something is strange about things we are used to seeing.
[ "BULLET::::- In 2010 Walt Disney Pictures released a sci-fi sequel entitled \"\" with a digitally rejuvenated digital look-alike of actor Jeff Bridges playing the antagonist CLU.\n\nBULLET::::- In SIGGGRAPH 2013 Activision and USC presented a real time \"Digital Ira\" a digital face look-alike of Ari Shapiro, an ICT USC research scientist, utilizing the USC light stage X by Ghosh et al. for both reflectance field and motion capture. The end result both precomputed and real-time rendering with the modernest game GPU shown here and looks fairly realistic.\n", "BULLET::::- Late 2017 and early 2018 saw the surfacing of the deepfakes controversy where porn videos were doctored utilizing deep machine learning so that the face of the actress was replaced by the software's opinion of what another persons face would look like in the same pose and lighting.\n", "Standard Poser characters have been extensively used by European and US based documentary production teams to graphically render the human body or virtual actors in digital scenes. Humanoids printed in several science and technology magazines around the US are often Poser rendered and postworked models.\n\nSection::::Library.\n", "In the last couple years there have been advances in computer graphics and computer vision on modeling lighting and pose changes in facial imagery. These advances have led to the development of new computer algorithms that can automatically correct for lighting and pose changes in facial imagery. These new algorithms work by preprocessing a facial image to correct for lighting and pose prior to being processed through a face recognition system. The preprocessing portion of the FRGC will measure the impact of new preprocessing algorithms on recognition performance.\n", "BULLET::::- In 2003 audience debut of photo realistic human-likenesses in the 2003 films \"The Matrix Reloaded\" in the burly brawl sequence where up-to-100 Agent Smiths fight Neo and in \"The Matrix Revolutions\" where at the start of the end showdown Agent Smith's cheekbone gets punched in by Neo leaving the digital look-alike unnaturally unhurt. The Matrix Revolutions bonus DVD documents and depicts the process in some detail and the techniques used, including facial motion capture and limbal motion capture, and projection onto models.\n", "Human image synthesis\n\nHuman image synthesis can be applied to make believable and even photorealistic renditions of human-likenesses, moving or still. This has effectively been the situation since the early 2000s. Many films using computer generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material.\n\nSection::::Timeline of human image synthesis.\n", "The late-1980s saw the development of a new muscle-based model by Waters, the development of an abstract muscle action model by Magnenat-Thalmann and colleagues, and approaches to automatic speech synchronization by Lewis and Hill. The 1990s have seen increasing activity in the development of facial animation techniques and the use of computer facial animation as a key storytelling component as illustrated in animated films such as \"Toy Story\" (1995), \"Antz\" (1998), \"Shrek\", and \"Monsters, Inc.\" (both 2001), and computer games such as \"Sims\". 
\"Casper\" (1995), a milestone in this decade, was the first movie in which a lead actor was produced exclusively using digital facial animation.\n", "Computer based facial expression modelling and animation is not a new endeavour. The earliest work with computer based facial representation was done in the early-1970s. The first three-dimensional facial animation was created by Parke in 1972. In 1973, Gillenson developed an interactive system to assemble and edit line drawn facial images. in 1974, Parke developed a parameterized three-dimensional facial model.\n", "U.S. Government-sponsored evaluations and challenge problems have helped spur over two orders-of-magnitude in face-recognition system performance. Since 1993, the error rate of automatic face-recognition systems has decreased by a factor of 272. The reduction applies to systems that match people with face images captured in studio or mugshot environments. In Moore's law terms, the error rate decreased by one-half every two years.\n\nLow-resolution images of faces can be enhanced using face hallucination.\n\nSection::::Techniques for face acquisition.\n", "The early 2000s saw the advent of fully virtual cinematography with its audience debut considered to be in the 2003 films \"The Matrix Reloaded\" and \"The Matrix Revolutions\" with its digital look-alikes so convincing that it is often impossible to know if some image is a human imaged with a camera or a digital look-alike shot with a simulation of a camera. The scenes built and imaged within virtual cinematography are the \"\"Burly brawl\"\" and the end showdown between Neo and Agent Smith. With conventional cinematographic methods the burly brawl would have been prohibitively time consuming to make with years of compositing required for a scene of few minutes. Also a human actor could not have been used for the end showdown in Matrix Revolutions: Agent Smith's cheekbone gets punched in by Neo leaving the digital look-alike naturally unhurt.\n", "BULLET::::- Human facial proportions and photorealistic texture should only be used together. A photorealistic human texture demands human facial proportions, or the computer generated character can fall into the uncanny valley. Abnormal facial proportions, including those typically used by artists to enhance attractiveness (e.g., larger eyes), can look eerie with a photorealistic human texture. Avoiding a photorealistic texture can permit more leeway.\n\nSection::::Criticism.\n\nA number of criticisms have been raised concerning whether the uncanny valley exists as a unified phenomenon amenable to scientific scrutiny:\n", "BULLET::::- In 2014 The Presidential Portrait by USC ICT in conjunction with the Smithsonian Institution was made using the latest USC mobile light stage wherein President Barack Obama had his geometry, textures and reflectance captured.\n\nBULLET::::- For the 2015 film \"Furious 7\" a digital look-alike of actor Paul Walker who died in an accident during the filming was done by Weta Digital to enable the completion of the film.\n\nBULLET::::- In 2016 techniques which allow near real-time counterfeiting of facial expressions in existing 2D video have been believably demonstrated.\n", "Although development of computer graphics methods for facial animation started in the early-1970s, major achievements in this field are more recent and happened since the late 1980s.\n", "At the beginning, the computer needs to know the shapes of the characters, even the detail of their hands or their thumbs. 
For example, a sculptor sculpted Marilyn's and Humphrey's hands by covering real human hands with plaster, a grid was drawn, photos from various angles were taken, and the information was digitized in 2D and the computer reconstituted the 3D information. For the heads and torsos, a sculptor created 3D plaster models and the process of digitizing is the same.\n", "In the last two decades, a number of computer based facial composite systems have been introduced; amongst the most widely used systems are SketchCop FACETTE Face Design System Software, \"Identi-Kit 2000\", FACES, E-FIT and PortraitPad. In the U.S. the FBI maintains that hand-drawing is its preferred method for constructing a facial composite. Many other police agencies, however, use software, since suitable artistic talent is often not available.\n\nSection::::Methods.:Evolutionary systems.\n", "It is generally known that the degree of accuracy in facial recognition (not affective state recognition) has not been brought to a level high enough to permit its widespread efficient use across the world (there have been many attempts, especially by law enforcement, which failed at successfully identifying criminals). Without improving the accuracy of hardware and software used to scan faces, progress is very much slowed down.\n\nOther challenges include\n\nBULLET::::- The fact that posed expressions, as used by most subjects of the various studies, are not natural, and therefore not 100% accurate.\n", "Early computer-generated animated faces include the 1985 film \"Tony de Peltrie\" and the music video for Mick Jagger's song \"Hard Woman\" (from \"She's the Boss\"). The first actual human beings to be digitally duplicated were Marilyn Monroe and Humphrey Bogart in a March 1987 film \"Rendez-vous in Montreal\" created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Institute of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal, Quebec, Canada. The characters were rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands.\n", "BULLET::::- For believable results also the reflectance field must b.e captured or an approximation must be picked from the libraries to form a 7D reflectance model of the target.\n\nSection::::Synthesis.\n\nThe whole process of making digital look-alikes i.e. characters so lifelike and realistic that they can be passed off as pictures of humans is a very complex task as it requires photorealistically modeling, animating, cross-mapping, and rendering the soft body dynamics of the human appearance.\n", "Digital video-based methods are becoming increasingly preferred, as mechanical systems tend to be cumbersome and difficult to use.\n", "In SIGGRAPH 2013 Activision and USC presented a real-time digital face look-alike of \"Ira\" utilizing the USC light stage X by Ghosh et al. for both reflectance field and motion capture. The end result, both precomputed and real-time rendered with the state-of-the-art Graphics processing unit: \"Digital Ira\", looks fairly realistic. Techniques previously confined to high-end virtual cinematography systems are rapidly moving into the video games and leisure applications.\n\nSection::::Further developments.\n", "Section::::History.\n\nHuman facial expression has been the subject of scientific investigation for more than one hundred years. Study of facial movements and expressions started from a biological point of view. 
After some older investigations, for example by John Bulwer in the late 1640s, Charles Darwin’s book \"The Expression of the Emotions in Men and Animals\" can be considered a major departure for modern research in behavioural biology.\n", "Facial motion capture\n\nFacial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce CG (computer graphics) computer animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, it results in more realistic and nuanced computer character animation than if the animation were created manually.\n", "BULLET::::- Design elements should match in human realism. A robot may look uncanny when human and nonhuman elements are mixed. For example, both a robot with a synthetic voice or a human being with a human voice have been found to be less eerie than a robot with a human voice or a human being with a synthetic voice. For a robot to give a more positive impression, its degree of human realism in appearance should also match its degree of human realism in behavior. If an animated character looks more human than its movement, this gives a negative impression. Human neuroimaging studies also indicate matching appearance and motion kinematics are important.\n", "Many new features have been derived for cost functions based on matching methods via large deformations have emerged in the field Computational Anatomy including \n\nSection::::Uncertainty.\n\nThere is a level of uncertainty associated with registering images that have any spatio-temporal differences. A confident registration with a measure of uncertainty is critical for many change detection applications such as medical diagnostics.\n", "BULLET::::- In 2018 GDC Epic Games and Tencent Games demonstrated \"Siren\", a digital look-alike of the actress Bingjie Jiang. It was made possible with the following technologies: CubicMotion's computer vision system, 3Lateral's facial rigging system and Vicon's motion capture system. The demonstration ran in near real time at 60 frames per second in the Unreal Engine 4.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-09908
Why are things like jewelry and precise measuring instruments kept in velvet-lined cases? Is it just because it’s fancier, or does the velvet serve a purpose?
It is soft. It prevents scratching and movement during transit. Thus it avoids damaging the valuable and often fragile thing inside.
[ "In recent times they are mostly receptacles for trinkets and jewels, but in earlier periods, when other types of container were rarer, and the amount of documents held by the typical person far fewer, they were used for keeping important documents and many other purposes. It may take a very modest form, covered in leather and lined with satin, or it may reach the monumental proportions of the jewel cabinets which were made for Marie Antoinette, one of which is at Windsor, and another at Versailles. Both were the work of Schwerdfeger as cabinet maker, his assistants Michael Reyad, Mitchell Stevens, Christopher Visvis, Degault as miniature painter, and Thomire as chaser.\n", "BULLET::::- Pile-on-pile: A particularly luxurious type of velvet woven with piles of differing heights to create a pattern.\n\nBULLET::::- Plain: Commonly made of cotton, this type of velvet has a firm hand and can be used for many purposes.\n\nBULLET::::- Utrecht: A pressed and crimped velvet associated with Utrecht, the Netherlands.\n", "The English term \"Prayer nut\" comes from the equivalent Dutch word , and took on common usage in the 18th century. The use of the word \"nut\" may be derived from the fact that some of the beads were actually carved from nuts or pits, and although no such miniatures survive, it was a known practice in medieval southern Germany. They are mostly the same shape (deliberately designed to resemble apples), decorated with carved openwork Gothic tracery and flower-heads, and of a size suitable for holding in the palm of a hand.\n", "Scholten notes that the tracery may have been intended to suggest that the object contained a small relic, \"so that the object took on the character of a talisman and was deemed to have an apotropaic effect\". A number contain a wooden loop in the middle of one half so they could be worn hanging from a belt, or carried in a case. A fragrant substance was sometimes placed inside the shell, which diffused when the beads were opened, making them comparable to the then fashionable pomanders.\n", "From 1896, American handbag manufacturer Whiting & Davis created lidded compartments in its bags where powder rouge and combs could be stowed. In 1908, Sears' catalogue advertised a silver-plated case with mirror and powder puff (price 19 cents) and described it as small enough to fit in a handbag. \n\nIn the US, manufacturers such as Evans and Elgin American produced metal compacts with either finger chains or longer tango chains. Designed to be displayed rather than fitted in a handbag, they required more ornate designs and many from this era are examples of sleek Art Deco styling. \n", "Kas, kast or kasten (pronounced kaz) is a massive cupboard or wardrobe of Dutch origin similar to an armoire that was popular in the Netherlands and America in the 17th & 18th century. It was fitted with shelves and drawers used to store linen, clothing and other valuables and locked by key. They were status symbols and family heirlooms in the Low Countries and imported luxury goods to the American colonies. As such they were often made of quality wood such as cherry, rosewood and ebony and paneled, carved or painted.\n\nSection::::See also.\n\nBULLET::::- Cabinetry\n\nBULLET::::- Closet\n\nBULLET::::- Encoignure\n", "The most common type of decorative box is the feminine work box. It is usually fitted with a tray divided into many small compartments for needles, reels of silk and cotton, and other necessaries for stitchery. 
The date of its origin is unclear, but 17th-century examples exist, covered with silk and adorned with beads and embroidery.\n\nNo lady would have been without her work box in the 18th century. In the second half of that century, elaborate pains were taken to make these boxes dainty and elegant.\n", "Compacts were heavily influenced by prevailing fashions – for instance, the 1922 discovery of Tutankhamun's tomb spawned Egypt-inspired obelisks, sphinxes and pyramids, while the growing popularity of the car meant compacts were incorporated into visors, steering wheels and gears. Jewellers such as Van Cleef & Arpels, Tiffany and Cartier began producing minaudières, metal evening bags/vanity cases carried on a metal or silk cord that contained a compact plus space for a few other small items, many were inlaid with jewels or personalised.\n", "Elaborate needlework confections like the frog-shaped needlecase in the Los Angeles County Museum of Art appeared by the 16th century. Heavily decorated silver and brass needlecases are typical of the Victorian period. \n\nBetween 1869 and 1887, W. Avery & Son, an English needle manufactory, produced a series of figural brass needlecases, which are now highly collectible. Avery's dominance of this market was such that all similar brass Victorian needlecases are called \"Averys\".\n\nSection::::External links.\n\n Needlecases in museum collections\n\nBULLET::::- Inuit Art: Needle cases, Canadian Museum of History\n\nBULLET::::- Mongolian/Tibetan silver needlecase, 19th century, McClung Museum\n", "Work boxes are ordinarily portable, but at times they form the top of a stationary table.\n\nSection::::Jewelry Box.\n\nA jewelry box, also known as a casket, is a receptacle for trinkets, not only jewels. It may take a very modest form, covered in leather and lined with satin, or it may reach the monumental proportions of the jewel cabinets which were made for Marie Antoinette, one of which is at Windsor Castle, and another at the Palace of Versailles; the work of Schwerdfeger as cabinetmaker, Degault as miniature-painter, and Thomire as chaser.\n\nSection::::Snuff box.\n", "In the Middle Ages people usually brought their own cutlery with them when eating away from home, and the more expensive types came with their own custom-made leather cases, stamped and embossed in various designs. Later, as cutlery became provided by the host, decorative cases, especially for the knives, were often left on display in the dining-room. Some of the most elegant and often ornate were in the styles of Robert Adam, George Hepplewhite and Thomas Sheraton. Occasionally flat-topped containers, they were most frequently either rod-shaped, or tall and narrow with a sloping top necessitated by a series of raised veins for exhibiting the handles of knives and the bowls of spoons. Mahogany and satinwoods were most common, occasionally inlaid with marquetry, or edged with boxwood which was resistant to chipping. These receptacles, often made in pairs, still exist in large numbers; they are often converted into stationery cabinets. Another version is an open tray or rack, usually with a handle, also for the storage of table cutlery.\n", "After the war production methods changed considerably, driven by shortages and changing requirements. The growth of air travel brought with it the need for sturdy but lightweight luggage and new materials were utilized, including plastic and vinyl. 
There was still a clientele, however, for traditionally made high-quality leatherwork, and skills in that area were maintained.\n\nA travelling wardrobe and matching travelling case were made for Princess Margaret on the occasion of the royal family's tour of South Africa in 1947.\n", "Pabst used a cameo-carved panel and reverse-painted ribbed-glass tiles on an earlier Modern Gothic cabinet (below), now at the Brooklyn Museum. This originally seems to have been the center section of a larger piece, with an attached bookcase on either side.\n\nSection::::Scholarship.\n", "Today, passementerie is used with clothing, such as the gold braid on military dress uniforms, and for decorating couture clothing and wedding gowns. They are also used in furniture trimming, such as the Centripetal Spring Armchair of 1849 and some lampshades, draperies, fringes and tassels.\n\nSection::::History.\n", "This style of design is very ornate. French Provincial objects are often stained or painted, leaving the wood concealed. Corners and bevels are often decorated with gold leaf or given some other kind of gilding. Flat surfaces often have artwork such as landscapes painted directly on them. The wood used in French provincial varied, but was often originally beech.\n\nSection::::Schools of design.:Early American Colonial.\n", "An (from the French, for keeper or holder) is a woman's ornamental case, usually carried in a pocket or purse. It holds small tools for daily use such as folding scissors, bodkins, sewing needles (a needlecase), hairpins, tweezers, makeup pencils, etc. Some étuis were also used to carry doctors' lancets. These boxes were made of different materials such as wood, leather, ivory, silver, gold, tortoise shell, mother of pearl, and shagreen. Fabergé created the Necessaire Egg as an étui.\n\nSection::::Wooden wine box.\n", "Section::::Styles.:Viking.\n", "Section::::Construction.\n\nThe white and silver gilt that was used to replace the Gothic gold jewelry, which prevailed in the past, although it continued to be used for the realization of special pieces, usually commissioned by kings and the worship of great cathedrals. The predominance the silver was due to the cheapness of the material compared to gold, and also for their physical and chemical properties, to facilitate its alloy with copper and to facilitate their development, giving a material hardness. \n", "King Richard II of England directed in his will that his body should be clothed \"in velveto\" in 1399.\n\nThe \"Encyclopædia Britannica\" Eleventh Edition described velvet and its history thus:\n\nSection::::Types.\n\nBULLET::::- Chiffon (or transparent) velvet: Very lightweight velvet on a sheer silk or rayon chiffon base.\n\nBULLET::::- Ciselé: Velvet where the pile uses cut and uncut loops to create a pattern.\n\nBULLET::::- Crushed: Lustrous velvet with patterned appearance that is produced by either pressing the fabric down in different directions, or alternatively by mechanically twisting the fabric while wet.\n", "Most styles and techniques used in jewellery for personal adornment, the main subject of this article, were also used in pieces of decorated metalwork, which was the most prestigious form of art through most of this period; these were often much larger. Most surviving examples are religious objects such as reliquaries, church plate such as chalices and other pieces, crosses like the Cross of Lothair and treasure bindings for books. 
However this is largely an accident of survival, as the church has proved much better at preserving its treasures than secular or civic elites, and at the time there may well have been as many secular objects made in the same styles. For example, the Royal Gold Cup, a secular cup though decorated with religious imagery, is one of a handful of survivals of the huge collections of metalwork \"\" (\"jewels\") owned by the Valois dynasty who ruled France in the late Middle Ages.\n", "Section::::History.\n", "Wood panelling or wainscoting, almost always made from oak, became popular in Northern Europe from the 14th century, after European carpenters rediscovered the techniques to create frame and panel joinery. The framing technique was used from the 13th century onwards to clad interior walls, to form choir stalls, and to manufacture moveable and semi-moveable furniture, such as chests and presses, and even the back panels of joined chairs. Linenfold was developed as a simple technique to decorate the flat surfaces of the ubiquitous panels thus created.\n", "Although its origins relate to the nobility, the velvet season came to have a more general meaning. It is better defined by the general public‘s tastes, behaviors, and morals than by those of the nobility. By the 1900s, people traveled for the velvet season because it was fashionable and they were hoping to meet people, rather than because the court was traveling.\n", "Other popular materials used in making these boxes include:\n\nBULLET::::- Tortoise-shell, a favorite material owing to its satin lustre;\n\nBULLET::::- Mother-of-pearl, which was kept in its natural iridescent state, or gilded, or used together with silver; and\n\nBULLET::::- Boxes made from exotic materials such as cowrie shells, enriched with enamels or set with diamonds or other precious stones.\n", "The different patterns and types must run into many thousands. As well as plain and decorated square, oblong and round cases, a myriad of novelty shapes have been recorded; silver, brass or white metal pigs with hinged heads were popular, as were vesta cases in the form of Mr Punch, hearts, skulls, musical instruments (often violas), owls, boots and shoes, bottles, ladies' legs and so on.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-17404
When prisoners are sentenced to "five consecutive life sentences plus 20 years" and the like, why is it not just always referred to as life without parole? That is as long as anyone can spend in prison.
It's better to keep them as separate charges: if one of the charges falls through for any reason (a failed trial or something), then the other charges still stand and they won't be let out.
[ "Parole is an option for most prisoners. However, parole is not guaranteed, particularly for prisoners serving life or indeterminate sentences. In cases of first-degree murder, one can apply for parole after 25 years if convicted of a single murder. However, if convicted of multiple murder (either first or second-degree), the sentencing judge has the option to make parole ineligibility periods consecutive - thereby extending parole ineligibility beyond 25 years and, in rare cases, beyond a normal life-span.\n\nSection::::China.\n", "Other countries either allow multiple concurrent life sentences which can be served at the same time (e.g. Russia), or allow multiple consecutive life sentences with a single minimum term (e.g. Australia), thus allowing earlier release of the prisoner.\n", "BULLET::::- In 1994: California, Colorado, Connecticut, Indiana, Kansas, Maryland, New Mexico, North Carolina, Virginia, Louisiana, Wisconsin, and Tennessee. Tennessee is one of the few states, together with Georgia and South Carolina, that mandates life without parole for two or more convictions for the most serious violent crimes, including murder, rape, child sexual abuse, aggravated cases of robbery, sexual abuse or child sexual abuse, etc.\n", "Individuals convicted of multiple murder may be given consecutive parole ineligibility periods thus extending their parole ineligibility period beyond 25 years. In rare cases this ineligibility for parole may extend beyond a normal life span, meaning, in \"de facto\" terms, a sentence of life-without-parole.\n\nFor a reflection on the work of a Member of the Parole Board see Lubomyr Luciuk's article in The Toronto Star, \"Making parole decisions is one tough job,\" 23 June 2016.\n\nSection::::Authority.:Record Suspensions.\n", "This is a common punishment for a double murder in the United States, and is effective because the defendant may be awarded parole after 25 years when he or she is eligible, and then must serve an \"additional\" 25 years in prison to be eligible for parole again. It also serves as a type of insurance that the defendant will have to serve the maximum length of at least one life sentence if, for some reason, one of the murder convictions is overturned on appeal.\n", "These provisions of the Criminal Code came into force in December 2011, and permit a trial judge, after considering any jury recommendations, to impose consecutive parole ineligibility periods extending beyond 25 years. In the most extreme cases, this can result in a \"de facto\" term of life imprisonment without parole (i.e. a total parole ineligibility period extending beyond the offender's life expectancy). \n", "For multiple murder offences committed after December 2, 2011, a court may, after considering any jury recommendation, impose consecutive periods of parole ineligibility for each murder. While the provision is not mandatory, this means, for example, that an individual convicted of three counts of first degree murder could face life with no parole for 75 years – or 25 years for each conviction. This provision has been used in several cases where parole ineligibility periods have been extended beyond 25 years; in four cases to 75 years prior to parole eligibility.\n\n\"See also:\" \"Life imprisonment in Canada\"\n", "BULLET::::- Thomas Quigley and Paul Kavanagh were both convicted of two murders in a bomb attack in 1981 by the Chelsea Barracks which also injured 39 people. 
They were both sentenced to 35 years in prison each in 1985 and were told in 1996 by the Home Secretary Michael Howard that they were to receive whole life tariffs, however the order was reversed by the Belfast High Court in 1997 after an appeal by the two men and they were released under the Good Friday Agreement.\n", "BULLET::::- David Martin Simmons had originally received a whole-life term for rape and false imprisonment. This was reduced to a ten-year minimum when he appealed alongside Restivo, Roberts, and others whose appeals were not successful.\n\nBULLET::::- Donald Andrews had received a whole-life term for rape and kidnapping in 2012, while having two previous convictions for manslaughter. This was reduced to a twelve-year minimum when he appealed in 2015, making him eligible for release in 2024.\n", "Section::::Parole and nonviolent offenses.\n\nUnder the federal criminal code, however, with respect to offenses committed after December 1, 1987, parole has been abolished for all sentences handed down by the federal system, including life sentences. A life sentence from a federal court will therefore result in imprisonment for the life of the defendant unless a pardon or reprieve is granted by the President or if, upon appeal, the conviction is quashed.\n", "Section::::Effects of the Baumes law.\n\nWhile the Baumes law seemed to remove authority from judges and parole boards by mandating sentences, they gradually began to work around it through the increased use of plea bargains. In cases where a life sentence would have been unjust, prosecutors became more amenable to accused parties' guilty pleas to misdemeanor charges, which would not trigger portions of the statute.\n", "Jeb Bush and Florida Legislature not only came up with the 10-20-Life system, they also came up with or modified several acts designed for repeat offenders. These acts include Violent Career Criminal, Habitual felony offender, Habitual violent felony offender, Three-time violent felony offender, Prison Releasee Reoffender, and Dangerous Sexual Felony Offender.\n", "Around 100 prisoners are believed to have been issued with whole life orders since the mechanism was first introduced in 1983, although some of them were convicted of their crimes before that date and some of the prisoners known to have been issued with the whole life order have since died in prison or had their sentences reduced on appeal.\n", "Since 2 December 2011, it is possible for back-to-back life sentences to be handed out in Canada. Before doing this, the judge must consider a jury recommendation as to whether to impose a minimum sentence of more than 25 years. The longest minimum sentence so far is 75 years, handed out to four offenders: Justin Bourque, John Paul Ostamas, Douglas Garland and Derek Saretzky.\n\nSection::::See also.\n\nBULLET::::- Incapacitation (penology)\n\nBULLET::::- Life without possibility of parole\n\nSection::::External links.\n\nBULLET::::- The Free Dictionary on Back-to-back life sentences\n\nBULLET::::- CBC: Justin Bourque gets 5 life sentences, no chance of parole for 75 years\n", "In New Zealand, inmates serving a short-term sentence (up to two years) are automatically released after serving half their sentence, without a parole hearing. Inmates serving sentences of more than two years are normally seen by the New Zealand Parole Board after serving one-third of the sentence, although the judge at sentencing can make an order for a minimum non-parole period of up to two-thirds of the sentence. 
Inmates serving life sentences usually serve a minimum of 10 years, or longer depending on the minimum non-parole period, before being eligible for parole. Parole is not an automatic right and it was declined in 71 percent of hearings in the year ending 30 June 2010. Many sentences include a specific non-parole period.\n", "Parole is not automatic. The parole board must consider, first and foremost, the protection of the public. Secondary considerations are reintegration, rehabilitation and compassion. When life sentences are imposed, eligibility for parole is 25 years in first-degree murder cases, between 10 and 25 years in second-degree murder cases, and 7 years for other life sentences or indeterminate sentences. Any person released on parole from a life sentence or an indeterminate sentence must remain on parole and be subject to parole conditions of the board for the remainder of the offender's life. \n", "Three-strikes law\n\nIn the United States, habitual offender laws (commonly referred to as three-strikes laws) were first implemented on March 7, 1994 and are part of the United States Justice Department's Anti-Violence Strategy. These laws require both a severe violent felony and two other previous convictions to serve a mandatory life sentence in prison. The purpose of the laws is to drastically increase the punishment of those convicted of more than two serious crimes.\n", "Back-to-back life sentences\n\nIn judicial practice, back-to-back life sentences are two or more consecutive life sentences given to a felon. This penalty is typically used to prevent the felon from ever getting released from prison.\n", "In many countries around the world, particularly in the Commonwealth, courts have the authority to pass prison terms which may amount to \"de facto\" life imprisonment. For example, courts in South Africa have handed out at least two sentences that have exceeded a century, and in Tasmania, Australia, Martin Bryant, the perpetrator of the Port Arthur massacre in 1996, received 35 life sentences, plus 1,035 years without parole, while Aurora Cinema shooter James Holmes, who received 12 consecutive life sentences and an extra 3,318 years without the possibility of parole for killing 12 and injuring 70 in his shooting spree, and also booby trapping his apartment with explosives. \n", "BULLET::::- Ronald William Barton was convicted of murdering his 14-year-old stepdaughter Keighley Barton in October 1986, in what was believed as an attempt on Barton's part to stop Keighley from testifying against him for child abuse and to gain revenge against her mother. He had several previous convictions for gross indecency against Keighley and sexual assault against other teenage girls, one of which he had been in prison for. After his conviction, he was handed a minimum term of 25 years; but the Lord Chief Justice then recommended that life must mean life, which the Home Secretary agreed with. The term was reset in 1997 to the original 25 years, which was reduced again to 23 years in 2006.\n", "Despite formal parole eligibility after seven years, full parole is rare in dangerous offender cases as this provision is reserved for individuals assessed as likely to commit further serious violent offences. In violent non-murder cases, it is more likely to be used than a sentence of life imprisonment. As of 2012, nearly 500 inmates had a \"Dangerous Offender\" designation constituting about 3% of the federal offender population. 
Three years later, in 2015, 622 federal offenders had a Dangerous Offender designation. Of these, 586 (or some 94%) were incarcerated (representing 3.9% of the In-Custody Population) and 36 were in the community under supervision. This supervision lasts for the remainder of the offender's life. \n", "In cases of multiple murder, after considering the jury's recommendation (if there is one), a court may also order that the parole inelibility period be served consecutively to the one being served. Amendments to the Criminal Code in 2011 permit the judge to impose consecutive parole ineligibility periods for first or second-degree murders committed as part of the same \"transaction\" (or as part of the same series of offences). One of the first cases where the new sentencing provisions were used was a multiple murder in Edmonton, Alberta of three armoured car guards by one of their co-workers. The perpetrator in that case was sentenced to life in prison with no chance of parole for 40 years - 25 years for one first degree murder conviction, ordered to be served consecutively to two concurrent 15-year parole ineligibility periods for two second-degree murder convictions as part of the same series of offences. Subsequent to this sentence Justin Bourque, convicted of the first-degree murders of three RCMP officers in Moncton New Brunswick in 2014, was sentenced to life in prison with no chance of parole for 75 years. Consecutive parole ineligibility periods were also imposed in the case of serial killer John Paul Ostamas in June 2016, who was sentenced to life in prison with no chance of parole for 75 years for the second-degree murders of three homeless men in Winnipeg, Manitoba.\n", "In the United States, a 2009 report by the Sentencing Project suggested that life imprisonment without parole should be abolished in the country. U.S. law enforcement officials opposed its proposed abolition.\n\nPope Francis proposed the abolition of both capital punishment and life imprisonment in a meeting with representatives of the International Association of Penal Law. He also stated that life imprisonment, recently removed from the Vatican penal code, is just a variation of the death penalty.\n\nSection::::See also.\n\nBULLET::::- 10-20-Life\n\nBULLET::::- Incapacitation (penology)\n\nBULLET::::- Indefinite imprisonment\n\nBULLET::::- List of prison deaths\n\nBULLET::::- Use of capital punishment by country\n\nSection::::External links.\n", "A similar system operates in Scotland, whereby the trial judge fixes a \"punishment part\" to \"satisfy the requirements of retribution and deterrence\". The prisoner cannot be considered for parole until this punishment part is served.\n", "The practice of imposing longer prison sentences on repeat offenders (versus first-time offenders who commit the same crime) is nothing new, as judges often take into consideration prior offenses when sentencing. However, there is a more recent history of mandatory prison sentences for repeat offenders. For example, New York State had a long-standing \"Persistent Felony Offender\" law dating back to the early 20th century (partially ruled unconstitutional in 2010, but reaffirmed \"en banc\" shortly after). But such sentences were not compulsory in each case, and judges had much more discretion as to what term of incarceration should be imposed.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-22793
Why do most conventional education systems test memory and not actual intelligence?
It's hard to test for actual intelligence and not memory, especially when it's the same curriculum taught every year. I believe most university programs try to overload people with work to the point where their memory is full and they need to rely on understanding/intelligence.
[ "Some researchers question whether the results of training are long lasting and transferable, especially when these techniques are used by healthy children and adults without cognitive deficiencies. A meta-analytical review conducted by researchers from the University of Oslo in 2012 concluded that \"memory training programs appear to produce short-term, specific training effects that do not generalize.\"\n", "In a study using four individual experiments, 70 participants (36 of them female, all with a mean age of 25.6) recruited from the University of Bern community, Susanne M. Jaeggi and her colleagues at the University of Michigan found that healthy young adults who practiced a demanding working memory task (dual n-back, a task that has strong face validity, has received some criticism regarding its construct validity and is in widespread use as a measure of working memory) approximately 25 minutes per day for between 8 and 19 days had statistically significant increases in their scores on a matrix test of fluid intelligence taken before and after the training than a control group who did not do any training at all.\n", "Further analyses of research concerning intelligence and free recall have shown that there are relatively large differences in intelligence when a positive correlation between recall and intelligence is demonstrated. This implies that intelligence significantly influences child eyewitness memory when comparing high and low levels; however, small differences in intelligence are not significant.\n", "Recent studies have shown that training in using one's working memory may increase IQ. A study on young adults published in April 2008 by a team from the Universities of Michigan and Bern supports the possibility of the transfer of fluid intelligence from specifically designed working memory training. Further research will be needed to determine nature, extent and duration of the proposed transfer. Among other questions, it remains to be seen whether the results extend to other kinds of fluid intelligence tests than the matrix test used in the study, and if so, whether, after training, fluid intelligence measures retain their correlation with educational and occupational achievement or if the value of fluid intelligence for predicting performance on other tasks changes. It is also unclear whether the training is durable of extended periods of time.\n", "Another problem is about the \"Flynn effect\" (Dickens & Flynn, 2002). Vocabulary performance (considered as crystallized intelligence) has smaller cohort effects than reasoning performance (considered as fluid intelligence). Both exposure to vocabulary task and reasoning task have been increased greatly in past decades. However, whatever there is a great increase in mental ability test score, the performance on fluid intelligence \"may merely reflect greater development of related neuronal patterns.\"\n", "PET scans performed on several mathematics prodigies have suggested that they think in terms of long-term working memory (LTWM). This memory, specific to a field of expertise, is capable of holding relevant information for extended periods, usually hours. For example, experienced waiters have been found to hold the orders of up to twenty customers in their heads while they serve them, but perform only as well as an average person in number-sequence recognition. 
The PET scans also answer questions about which specific areas of the brain associate themselves with manipulating numbers.\n", "Transitioning between elementary school and middle school is a time when many students with an entity theory of intelligence begin to experience their first taste of academic difficulty. Transitioning students with low abilities can be oriented to a growth mentality when taught that their brains are like muscles that get stronger through hard work and effort. This lesson can result in a marked improvement in grades compared to students with similar abilities and resources available to them who do not receive this information on the brain.\n\nSection::::Shifting from entity to incremental mindset to improve achievement.:College-aged students.\n", "In a large-scale screening study, one in ten children in mainstream classrooms were identified with working memory deficits. The majority of them performed very poorly in academic achievements, independent of their IQ. Similarly, working memory deficits have been identified in national curriculum low-achievers as young as seven years of age. Without appropriate intervention, these children lag behind their peers. A recent study of 37 school-age children with significant learning disabilities has shown that working memory capacity at baseline measurement, but not IQ, predicts learning outcomes two years later. This suggests that working memory impairments are associated with low learning outcomes and constitute a high risk factor for educational underachievement for children. In children with learning disabilities such as dyslexia, ADHD, and developmental coordination disorder, a similar pattern is evident.\n", "Attention is drawn to the limitations of these results and the need for specific follow up inquiery. Robert J. Sternberg comments that\"it is unclear to what extent the results can be generalized to other working-memory tasks\" and states \"it would be useful to show that the training transfers to success in meaningful behaviours that extend beyond the realm of psychometric testing\". Sternberg asserts that ability level of the test participants is not necessarily examining a wide range of ability levels, or \"address whether the training is durable over extended periods of time [and not only] \"fleeting.\"\n", "In an experiment, groups of adults were first assessed using standard tests for fluid intelligence. Then they trained groups for four different numbers of days, for half an hour each day, using an n-back exercise that worked on improving one's working memory. It supposedly does so through a few different components, involving having to ignore irrelevant items, manage tasks simultaneously, and monitor performance on exercise, while connecting related items. After this training, the groups were tested again and those with training (compared against control groups who did not undergo training) showed significant increases in performance on the fluid intelligence tests.\n", "Students who demonstrate a wide range of metacognitive skills perform better on exams and complete work more efficiently. They are self-regulated learners who utilize the \"right tool for the job\" and modify learning strategies and skills based on their awareness of effectiveness. Individuals with a high level of metacognitive knowledge and skill identify blocks to learning as early as possible and change \"tools\" or strategies to ensure goal attainment. 
Swanson (1990) found that metacognitive knowledge can compensate for IQ and lack of prior knowledge when comparing fifth and sixth grade students' problem solving. Students with a high-metacognition were reported to have used fewer strategies, but solved problems more effectively than low-metacognition students, regardless of IQ or prior knowledge. In one study examining students who send text messages during college lectures, it was suggested that students with higher metacognitive abilities were less likely than other students to have their learning affected by using a mobile phone in class.\n", "The range in children's intellectual capacities may explain the positive relationship between intelligence and eyewitness memory. Intellectually disabled children and children with below average to very low IQ's have been included in studies examining the influence of intelligence on memory recall. It was found that when giving an eyewitness testimony, there is a stronger positive relationship between intelligence and recall for intellectually disabled children, with recall accuracy being poorer with children of lower IQ than for children with average or high intelligence. A possible explanation for this may be that in comparison to a child of mainstream intelligence, children of lower intelligence encode weaker memory traces of events.\n", "There have been few studies done on the relationship between short-term memory and intelligence in PTSD. However, examined whether people with PTSD had equivalent levels of short-term, non-verbal memory on the Benton Visual Retention Test (BVRT), and whether they had equivalent levels of intelligence on the Raven Standard Progressive Matrices (RSPM). They found that people with PTSD had worse short-term, non-verbal memory on the BVRT, despite having comparable levels of intelligence on the RSPM, concluding impairments in memory influence intelligence assessments in the subjects.\n\nSection::::Measuring digit span and short term-memory.\n", "A few adults have had phenomenal memories (not necessarily of images), but their abilities are also unconnected with their intelligence levels and tend to be highly specialized. In extreme cases, like those of Solomon Shereshevsky and Kim Peek, memory skills can reportedly hinder social skills. Shereshevsky was a trained mnemonist, not an eidetic memoriser, and there are no studies that confirm whether Kim Peek had true eidetic memory.\n\nAccording to Herman Goldstine, the mathematician John von Neumann was able to recall from memory every book he had ever read.\n\nSection::::Skepticism.\n", "Rodney Brooks explains that, according to early AI research, intelligence was \"best characterized as the things that highly educated male scientists found challenging\", such as chess, symbolic integration, proving mathematical theorems and solving complicated word algebra problems. \"The things that children of four or five years could do effortlessly, such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence.\"\n", "Data: Knowing the brain region that supports an elementary cognitive function tells us nothing about how to design instruction for that function. However, Varma et al. suggest that neuroscience provide the opportunity for a novel analyses of cognition, breaking down behaviour into elements invisible at the behavioural level. 
For example, the question of whether different arithmetic operations show different speed and accuracy profiles is the result of different efficiency levels within one cognitive system versus the use of different cognitive systems.\n", "Prospective memory has been implicated in the steering cognition model of how children coordinate their attention and response to learning tasks in school. Walker and Walker showed that pupils able to adjust their prospective memory most accurately for different curriculum learning tasks in maths, science and English were more effective learners than pupils whose prospective memory was fixed or inflexible.\n\nSection::::Everyday prospective memory.:Prospective person memory.\n", "Section::::Findings.:Interventions.\n\nHowever, regarding interventions such as the Head Start Program and similar programs lasting one or two years, while producing initial IQ gains, these had disappeared by the end of elementary school, although there may be other benefits such as more likely to finish high school. The more intensive Abecedarian Project had produced more long-lasting gains. \n\nSection::::Findings.:Other biological factors.\n", "Another finding in the influence of intelligence on a memory recall in children is that it seems to be age-dependent. Differences in age group explains the variance in which intelligence has an effect on memory performance. Older children have higher correlations of intelligence and recall, whereas chronological age is more significant of a factor than intelligence for young children's eyewitness memory. More specifically, a study examining the influence of fluid intelligence on recall of children's eyewitness memory regarding a videotaped event found that there was not a positive relationship between fluid intelligence and free narrative for six- and eight-year -lds; however, the positive relationship was present for ten-year-olds.\n", "(1) There is a strong correlation between indices of frontal lobe function or structural integrity and metamemory accuracy (2) The combination of frontal lobe dysfunction and poor memory severely impairs metamemorial processes (3) Metamemory tasks vary in subject performance levels, and quite likely, in the underlying processes these different tasks measure, and (4) Metamemory, as measured by experimental tasks, may dissociate from basic memory retrieval processes and from global judgments of memory.\n\nSection::::Physiological influences.:Neurological disorders.:Frontal lobe injury.\n", "A strong inverse correlation between early life intelligence and mortality has been shown across different populations, in different countries, and in different epochs.\"\n\nA study of one million Swedish men found showed \"a strong link between cognitive ability and the risk of death.\"\n\nA similar study of 4,289 former US soldiers showed a similar relationship between IQ and mortality.\n\nThe strong correlation between intelligence and mortality has raised questions as to how better public education could delay mortality.\n", "The idea that autistic individuals employ a different style of learning than people who do not fall in the spectrum can account for the delay in categorization but the resulting average level of cognitive ability. 
This, however, is only applicable to higher functioning individuals within the spectrum as those with lower IQ levels are notoriously difficult to test and measure.\n", "In younger children (ages 10 and under), it has also been found that inducing involuntary memory during testing produced significantly better results than using voluntary memory. This can be accomplished by posing a vague, mildly related question or sentence prior to the actual test question. In older children (aged 14 and above), the opposite holds, with strictly voluntary memory leading to better test results.\n\nSection::::Effects of age.:Reminiscence bump.\n", "One study involved 600 Scottish students with one group of students who played twenty minutes of Brain Age before class daily for nine weeks and a control group that studied regularly. The students were tested at the beginning and end of the study. In the end, the group that played Brain Age improved test scores by 50%. The time to complete the tests in the Brain Age group dropped by five minutes, and this improvement doubled that of the control group.\n", "Section::::Applications.\n\nThe results can give the experimenter considerable information about personalities, different conditions and learning difficulties. For example, an anxious participant may perform poorly on the first trial but improve as the task is repeated. Adults with limited learning capacity may perform well on early trials but reach a plateau where repeated trials do not reflect improved performance, or have inconsistent recall across trials. This can happen if they try and fail with different strategies of learning. Studies have demonstrated that inconsistent recall across trials characterises patients with amnesia caused by frontal lobe pathology.\n" ]
[]
[]
[ "normal" ]
[ "Most education relies on testing memory not intelligence." ]
[ "false presupposition", "normal" ]
[ "Most university programs try to overload people with work the the point where their memory is full and they need to rely on understanding. " ]
2018-03651
How does a Moscow Mule stay so cold?!?
The copper cup is a gimmick. The drink isn't getting colder; you just think it is because the mug itself gets cold to the touch. That's just your perception. The whole system is simply coming to thermal equilibrium as it sits there.
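A rough back-of-the-envelope estimate shows why the mug feels icy without the drink getting any colder. The numbers below (a 300 g copper mug at room temperature, 300 g of chilled drink treated as water) are illustrative assumptions only, and the sketch ignores ice and heat leaking in from the room:

m_mug, c_copper = 300.0, 0.385      # grams, J/(g*K); assumed mug mass and copper specific heat
m_drink, c_water = 300.0, 4.18      # grams, J/(g*K); drink treated as water
t_mug, t_drink = 22.0, 2.0          # assumed starting temperatures in degrees C

# Energy balance: heat lost by the mug equals heat gained by the drink.
t_eq = (m_mug * c_copper * t_mug + m_drink * c_water * t_drink) / (
    m_mug * c_copper + m_drink * c_water
)

print(f"equilibrium temperature ~ {t_eq:.1f} C")   # roughly 3.7 C

Under these assumptions the mug drops by roughly 18 C while the drink warms by less than 2 C: the copper only makes the outside feel as cold as the drink; it does not make the drink colder.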
[ "The Moscow mule is popularly served in a copper mug, which takes on the cold temperature of the liquid. Some public health advisories recommended the mugs be plated with nickel or stainless steel on the inside and the lip, but it has been disputed whether the time and acidity involved in the drinking of a Moscow mule would be enough to leach out the 30 milligrams of copper per liter needed to cause copper toxicity.\n\nSection::::Variations.\n", "BULLET::::- Mulefa are four-legged wheeled animals; they have one leg in front, one in back, and one on each side. The \"wheels\" are huge, round, hard seed-pods from seed-pod trees; an axle-like claw at the end of each leg grips a seed-pod. The Mulefa society is primitive.\n\nSection::::Dæmons.\n", "In general, a mule can be packed with dead weight of up to 20% of its body weight, or approximately . Although it depends on the individual animal, it has been reported that mules trained by the Army of Pakistan can carry up to and walk without resting. The average equine in general can carry up to approximately 30% of its body weight in live weight, such as a rider.\n", "Another variation is to use ginger syrup instead of ginger beer.\n\nSection::::History.\n\nGeorge Sinclair's 2007 article on the origin of the drink quotes the \"New York Herald Tribune\" from 1948: The mule was born in Manhattan but \"stalled\" on the West Coast for the duration. The birthplace of \"Little Moscow\" was in New York's Chatham Hotel. That was back in 1941 when the first carload of Jack Morgan's Cock 'n' Bull ginger beer was railing over the plains to give New Yorkers a happy surprise…\n", "BULLET::::- 1851: The first refrigerated boxcar entered service on the Northern Railroad (New York).\n\nBULLET::::- 1857: The first consignment of refrigerated, dressed beef traveled from Chicago to the East Coast in ordinary box cars packed with ice.\n\nBULLET::::- 1866: Horticulturist Parker Earle shipped strawberries in iced boxes by rail from southern Illinois to Chicago on the Illinois Central Railroad.\n\nBULLET::::- 1867: First U.S. refrigerated railroad car patent was issued.\n", "During the Soviet–Afghan War, the United States used large numbers of mules to carry weapons and supplies over Afghanistan's rugged terrain to the mujahideen. Use of mules by U.S. forces has continued during the War in Afghanistan, and the United States Marine Corps has conducted an 11-day Animal Packers Course since the 1960s at its Mountain Warfare Training Center located in the Sierra Nevada near Bridgeport, California.\n\nMule trains have been part of working portions of transportation links as recently as 2005 by the World Food Programme.\n\nSection::::Trains.\n", "A month later, a report in the Cincinnati Enquirer transplanted the story from Pittsburgh to New York, crediting it to an anonymous traveling man: “Finally a thick-furred cat was procured, that lived, and subsequently a mate for it. A litter of kittens came, and it was noticed their fur was longer than that of the parent cat. There have now been five generations born in the warehouse, the fur of each a little longer and thicker than that of the preceding generation, until now they are covered with fur as thick and close as that of a muskrat, and when removed from the warehouse they cannot stand the warm climate, and soon die. 
It is a distinct breed of cold-storage cats.”\n", "BULLET::::- Illinois Central Railroad number 51000 was built in the McComb, Mississippi shops with an aluminum superstructure to reduce weight with steel where required for strength and provided the standard dimensions, cushioned draft gear, easy-riding trucks, minimum of insulation, adjustable ice bunker bulkheads and half-stage icing racks with forced air circulation through side wall flues and floor racks recommended by UFF&VA.\n", "BULLET::::- 1887: Parker Earle joined F.A. Thomas of Chicago in the fruit shipping business. The company owned 60 ice-cooled railcars by 1888, and 600 by 1891.\n\nBULLET::::- 1888: Armour & Co. shipped beef from Chicago to Florida in a car cooled by ethyl chloride-compression machinery. Florida oranges were shipped to New York under refrigeration for the first time.\n\nBULLET::::- 1889: The first cooled shipment of fruit from California was sold on the New York market.\n", "BULLET::::- 1878: Gustavus Swift (along with engineer Andrew Chase) developed the first practical ice-cooled railcar. Soon Swift formed the Swift Refrigerator Line (SRL), the world's first.\n\nBULLET::::- 1880: The first patent for a mechanically refrigerated railcar issued in the United States was granted to Charles William Cooper.\n\nBULLET::::- 1884: The Santa Fe Refrigerator Despatch (SFRD) was established as a subsidiary of the Atchison, Topeka and Santa Fe Railway to carry perishable commodities.\n\nBULLET::::- 1885: Berries from Norfolk, Virginia were shipped by refrigerator car to New York.\n", "The sled tractor capacity of Argentine Polar Dogs was twice was much as any dog breed before it. A group of 11 Argentine Polar Dogs could drag a sled loaded with 1.1 tons (2200 lbs or 1,000 kg) at 35 km/h (22 mph) (on flat terrain) and 50 km/h (31 mph) on a 45° downward slope, in both cases without resting for 6 hours in a row.\n", "BULLET::::- \"A Whole New World of Life After Death: The Process of Freeze-Drying of Pets…and Beyond\". (Rod Humphries Writes, \"The Doberman Pinscher Magazine\", Volume 2, Issue 1, 2008, pages 52–57)\n\nBULLET::::- \"U.S. Showring Handling is Big Business\". (Rod Humphries, Dogs on Parade column, \"Sun-Herald\" newspaper, Sydney, 30 December 1973, page 54).\n\nBULLET::::- \"Here’s Why Greyhounds Run So Fast\". (Rod Humphries, Dogs on Parade column, \"The Sun-Herald\", Sydney, 26 August 1973, page 90).\n", "Their wool is made up of between 150-170 threads / mm². At 25 μm thick, their wool is 1.5 μm thinner that of the Suri, and considerably whiter, on average. Suri wool is marginally stronger Some of the products that can be made with fine Huacaya fiber include:\n\nBULLET::::- Ponchos\n\nBULLET::::- Scarves\n\nBULLET::::- Vests\n\nBULLET::::- Sweaters\n\nBULLET::::- Bedspreads\n\nSection::::Products.:Meat.\n", "Live cattle and dressed beef deliveries to New York (short tons):\n\n19th Century American Refrigerator Cars:\n", "Eric Felten quotes Wes Price in an article that was published in 2007 in \"The Wall Street Journal\" \"I just wanted to clean out the basement,\" Price would say of creating the Moscow mule. \"I was trying to get rid of a lot of dead stock.\" The first one he mixed he served to the actor Broderick Crawford. \"It caught on like wildfire,\" Price bragged.\"\n", "In 2003, researchers at University of Idaho and Utah State University produced the first mule clone as part of Project Idaho. 
The research team included Gordon Woods, professor of animal and veterinary science at the University of Idaho; Kenneth L. White, Utah State University professor of animal science; and Dirk Vanderwall, University of Idaho assistant professor of animal and veterinary science. The baby mule, Idaho Gem, was born May 4. It was the first clone of a hybrid animal. Veterinary examinations of the foal and its surrogate mother showed them to be in good health soon after birth. The foal's DNA comes from a fetal cell culture first established in 1998 at the University of Idaho.\n", "BULLET::::- Fruit Growers Express number 38374 was equipped with an experimental aluminum body in the Indiana Harbor, Indiana shops.\n\nSection::::History.:Experimentation.:\"Depression Baby\".\n", "BULLET::::- The breed is featured in Amor Towles’ novel \"A Gentleman in Moscow\", notably in a scene where two borzois create havoc in their unsuccessful attempt to corral the hotel cat in the lobby of a luxury hotel in Moscow.\n\nSection::::In art.\n", "After this process, the sample is transported to the laboratory in a 4 °C conservative container. It can be preserved in these conditions for 24 hours.\n\nFor sampling there are two basic conditions to consider:\n", "Mules come in a variety of shapes, sizes and colors, from minis under to maxis over , and in many different colors. The coats of mules come in the same varieties as those of horses. Common colors are sorrel, bay, black, and grey. Less common are white, roans, palomino, dun, and buckskin. Least common are paint mules or tobianos. Mules from Appaloosa mares produce wildly colored mules, much like their Appaloosa horse relatives, but with even wilder skewed colors. The Appaloosa color is produced by a complex of genes known as the Leopard complex (Lp). Mares homozygous for the Lp gene bred to any color donkey will produce a spotted mule.\n", "In June 2009 a rare dateless British 20 pence mule was reported to be in circulation, resulting from the accidental combination of old and new dies in production following a 2008 redesign of UK coinage, with an estimated 50,000 to 200,000 mules released before the error was noticed.\n\nThe Winter Olympic coins produced in the Royal Canadian Mint Olympic coins program for the 2010 Winter Olympics in Vancouver featured several mules which entered circulation.\n", "Examples of many styles of refrigerator and ice cars can be found at railroad museums around the world.\n\nThe Western Pacific Railroad Museum at Portola, California features a very complete roster of 20th century cars, including wood bodied ice cars, steel bodied ice cars, one of the earliest mechanical refrigerator cars, later mechanical refrigerator cars and a cryogenic reefer, as well as several \"insulated\" boxcars also used for food transport.\n\nSection::::Timeline.\n\nBULLET::::- 1842: The Western Railroad of Massachusetts experimented with innovative freight car designs capable of carrying all types of perishable goods without spoilage.\n", "The Moscow mule is often served in a copper mug. The popularity of this drinking vessel is attributable to Martin, who went around the United States to sell Smirnoff vodka and popularize the Moscow mule. Martin asked bartenders to pose with a specialty copper mug and a bottle of Smirnoff vodka, and took Polaroid photographs of them. He took two photos, leaving one with the bartender for display. The other photo was put into a collection and used as proof to the next bar Martin visited of the popularity of the Moscow mule. 
The copper mug remains, to this day, a popular serving vessel for the Moscow mule.\n", "Section::::Operation of a mule.\n\nBULLET::::- Watch video demonstration #1\n\nMule spindles rest on a carriage that travels on a track a distance of , while drawing out and spinning the yarn. On the return trip, known as putting up, as the carriage moves back to its original position, the newly spun yarn is wound onto the spindle in the form of a cone-shaped cop. As the mule spindle travels on its carriage, the roving which it spins is fed to it through rollers geared to revolve at different speeds to draw out the yarn.\n", "Section::::Description.:Coat.\n" ]
[ "Moscow Mule stays cold.", "The Moscow mule should not be able to stay very cold." ]
[ "It is not staying cold, it is a gimmick and it is coming to thermal equilibrium.", "The concept is actually a gimmick and it is not getting colder at all, it just feels that it is due to human perception." ]
[ "false presupposition" ]
[ "Moscow Mule stays cold.", "The Moscow mule should not be able to stay very cold." ]
[ "false presupposition", "false presupposition" ]
[ "It is not staying cold, it is a gimmick and it is coming to thermal equilibrium.", "The concept is actually a gimmick and it is not getting colder at all, it just feels that it is due to human perception." ]
2018-22186
How does breast milk actually work nutrition-wise: does it filter, or do only select nutrients make it up?
Breast milk has the perfect combination of proteins, fats and vitamins that every baby needs. It consists of the basic formula every baby needs for mental and physical development, and your body constantly 'monitors' the baby and adjusts the nutrient levels according to the baby's needs if they differ from the norm. It also contains some hormones, antibodies and nutrients that cannot be added to the baby formula bought in a store. As the baby cannot have any solid food, this is the best option for the baby.
[ "Most women that do not breastfeed use infant formula, but breast milk donated by volunteers to human milk banks can be obtained by prescription in some countries. In addition, research has shown that women who rely on infant formula could minimize the gap between the level of immunity protection and cognitive abilities a breastfed child benefits from versus the degree to which a bottle-fed child benefits from them. This can be done by supplementing formula-fed infants with bovine milk fat globule membranes (MFGM) meant to mimic the positive effects of the MFGMs which are present in human breast milk.\n", "The baby nursing from its own mother is the most common way of obtaining breast milk, but the milk can be pumped and then fed by baby bottle, cup and/or spoon, supplementation drip system, or nasogastric tube. In preterm children who do not have the ability to suck during their early days of life, avoiding bottles and tubes, and use of cups to feed expressed milk and other supplements is reported to result in better breastfeeding extent and duration subsequently. Breast milk can be supplied by a woman other than the baby's mother, either via donated pumped milk (generally from a milk bank or via informal milk donation), or when a woman nurses a child other than her own at her breast, a practice known as wetnursing.\n", "Colostrum will gradually change to become mature milk. In the first 3–4 days it will appear thin and watery and will taste very sweet; later, the milk will be thicker and creamier. Human milk quenches the baby's thirst and hunger and provides the proteins, sugar, minerals, and antibodies that the baby needs.\n", "Not all of breast milk's properties are understood, but its nutrient content is relatively consistent. Breast milk is made from nutrients in the mother's bloodstream and bodily stores. It has an optimal balance of fat, sugar, water, and protein that is needed for a baby's growth and development. Breastfeeding triggers biochemical reactions which allows for the enzymes, hormones, growth factors and immunologic substances to effectively defend against infectious diseases for the infant. The breast milk also has long-chain polyunsaturated fatty acids which help with normal retinal and neural development.\n", "Breast milk isn't sterile, but contains as many as 600 different species of various bacteria, including beneficial Bifidobacterium breve, B. adolescentis, B. longum, B. bifidum, and B. dentium.\n", "Colostrum is the first milk a breastfed baby receives. It contains higher amounts of white blood cells and antibodies than mature milk, and is especially high in immunoglobulin A (IgA), which coats the lining of the baby's immature intestines, and helps to prevent pathogens from invading the baby's system. Secretory IgA also helps prevent food allergies. Over the first two weeks after the birth, colostrum production slowly gives way to mature breast milk.\n\nSection::::Human.:Hormonal influences.:Autocrine control - Galactapoiesis.\n", "Breast milk contains complex proteins, lipids, carbohydrates and other biologically active components. The composition changes over a single feed as well as over the period of lactation.\n\nDuring the first few days after delivery, the mother produces colostrum. This is a thin yellowish fluid that is the same fluid that sometimes leaks from the breasts during pregnancy. It is rich in protein and antibodies that provide passive immunity to the baby (the baby's immune system is not fully developed at birth). 
Colostrum also helps the newborn's digestive system to grow and function properly.\n", "Both the AAP and the NHS recommend vitamin D supplementation for breastfed infants. Vitamin D can be synthesised by the infant via exposure to sunlight, however, many infants are deficient due being kept indoors or living in areas with insufficient sunlight. Formula is supplemented with vitamin D for this reason.\n\nSection::::Production.\n", "Breast milk contains a unique type of sugars, human milk oligosaccharides (HMOs), which are not present in infant formula. HMOs are not digested by the infant but help to make up the intestinal flora. They act as decoy receptors that block the attachment of disease causing pathogens, which may help to prevent infectious diseases. They also alter immune cell responses, which may benefit the infant. To date (2015) more than a hundred different HMOs have been identified; both the number and composition vary between women and each HMO may have a distinct functionality.\n", "The breast milk of diabetic mothers has been shown to have a different composition from that of non-diabetic mothers. It may contain elevated levels of glucose and insulin and decreased polyunsaturated fatty acids. A dose-dependent effect of diabetic breast milk on increasing language delays in infants has also been noted, although doctors recommend that diabetic mothers breastfeed despite this potential risk.\n", "Breast milk\n\nBreast milk is the milk produced by the breasts (or mammary glands) of a human female to feed a child. Milk is the primary source of nutrition for newborns before they are able to eat and digest other foods; older infants and toddlers may continue to be breastfed, in combination with other foods from six months of age when solid foods should be introduced.\n\nSection::::Methods.\n", "An exclusively breastfed baby depends on breast milk completely so it is important for the mother to maintain a healthy lifestyle, and especially a good diet. Consumption of 1500–1800 calories per day could coincide with a weight loss of 450 grams (one pound) per week. While mothers in famine conditions can produce milk with highly nutritional content, a malnourished mother may produce milk with decreased levels of several micronutrients such as iron, zinc, and vitamin B. She may also have a lower supply than well-fed mothers.\n", "Section::::Health benefits of breast milk.:Promoting digestive health.\n", "A newborn has a very small stomach capacity. At one-day old it is 5–7 ml, about the size of a large marble; at day three it is 22–30 ml, about the size of a ping-pong ball; and at day seven it is 45–60 ml, or about the size of a golf ball. The amount of breast milk that is produced is timed to meet the infant's needs in that the first milk, colostrum, is concentrated but produced in only very small amounts, gradually increasing in volume to meet the expanding size of the infant's stomach capacity.\n", "Whole cow's milk contains too little iron, retinol, vitamin E, vitamin C, vitamin D, unsaturated fats or essential fatty acids for human babies. Whole cow's milk also contains too much protein, sodium, potassium, phosphorus and chloride which may put a strain on an infant's immature kidneys. In addition, the proteins, fats and calcium in whole cow's milk are more difficult for an infant to digest and absorb than the ones in breast milk.\n\nBULLET::::- Note: Milk is generally fortified with vitamin D in the U.S. and Canada. 
Non-fortified milk contains only 2 IU per 3.5 oz.\n", "The primary and by far the largest group of consumers of human breast milk are premature babies. Infants with gastrointestinal disorders or metabolic disorders may also consume this form of milk as well. Human breast milk acts as a substitute, instead of formula, when a mother cannot provide her own milk. Human breast milk can also be fed to toddlers and children with medical conditions that include but are not limited to chemotherapy for cancer and growth failure while on formula.\n\nSection::::History.\n", "Section::::Alternative uses.\n", "When weaning is complete the mother's breasts return to their previous size after several menstrual cycles. If the mother was experiencing lactational amenorrhea her periods will return along with the return of her fertility. When no longer breastfeeding she will need to adjust her diet to avoid weight gain.\n\nSection::::Process.:Drugs.\n\nAlmost all medicines pass into breastmilk in small amounts. Some have no effect on the baby and can be used while breastfeeding. Many medications are known to significantly suppress milk production, including pseudoephedrine, diuretics, and contraceptives that contain estrogen.\n", "Section::::Health benefits of breast milk.:Promoting immunity.\n", "In a 2012 policy statement, the American Academy of Pediatrics recommended feeding preterm infants human milk, finding \"significant short- and long-term beneficial effects,\" including lower rates of necrotizing enterocolitis (NEC).\n\nExpressing milk for donation is another use for breast pumps. Donor milk may be available from milk banks for babies who are not able to receive their mothers' milk. \n\nSection::::Efficiency.\n\nThe breast pump is not as efficient at removing milk from the breast as most nursing babies or hand expression.\n", "In the 1980s and 1990s, lactation professionals (De Cleats) used to make a differentiation between foremilk and hindmilk. But this differentiation causes confusion as there are not two types of milk. Instead, as a baby breastfeeds, the fat content very gradually increases, with the milk becoming fattier and fattier over time.\n\nThe level of Immunoglobulin A (IgA) in breast milk remains high from day 10 until at least 7.5 months post-partum.\n", "The US National Library of Medicine publishes \"LactMed\", an up-to-date online database of information on drugs and lactation. Geared to both healthcare practitioners and nursing mothers, LactMed contains over 450 drug records with information such as potential drug effects and alternate drugs to consider.\n\nSome substances in the mother's food and drink are passed to the baby through breast milk, including mercury (found in some carnivorous fish), caffeine, and bisphenol A.\n\nSection::::Social factors.:Healthcare.:Medical conditions.\n", "Though it now is almost universally prescribed, in some countries in the 1950s the practice of breastfeeding went through a period where it was out of vogue and the use of infant formula was considered superior to breast milk. However, it is now universally recognized that there is no commercial formula that can equal breast milk. In addition to the appropriate amounts of carbohydrate, protein, and fat, breast milk provides vitamins, minerals, digestive enzymes, and hormones. Breast milk also contains antibodies and lymphocytes from the mother that help the baby resist infections. 
The immune function of breast milk is individualized, as the mother, through her touching and taking care of the baby, comes into contact with pathogens that colonize the baby, and, as a consequence, her body makes the appropriate antibodies and immune cells.\n", "Children who are born preterm have difficulty in initiating breast feeds immediately after birth. By convention, such children are often fed on expressed breast milk or other supplementary feeds through tubes or bottles until they develop satisfactory ability to suck breast milk. Tube feeding, though commonly used, is not supported by scientific evidence as of October 2016. It has also been reported in the same systematic review that by avoiding bottles and using cups instead to provide supplementary feeds to preterm children, a greater extent of breast feeding for a longer duration can subsequently be achieved.\n", "Breastfeeding offers health benefits to mother and child even after infancy. These benefits include a 73% decreased risk of sudden infant death syndrome, increased intelligence, decreased likelihood of contracting middle ear infections, cold and flu resistance, a tiny decrease in the risk of childhood leukemia, lower risk of childhood onset diabetes, decreased risk of asthma and eczema, decreased dental problems, decreased risk of obesity later in life, and a decreased risk of developing psychological disorders, including in adopted children. In addition, feeding an infant breast milk is associated with lower insulin levels and higher leptin levels compared feeding an infant via powdered-formula.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00834
Why do projects sometimes encourage you to check several checksums instead of just one?
It is possible, through some very meticulous manipulation, to create something that looks extremely similar to the original product, but has something malicious injected, and some extra stuff just to make the MD5 value match. In theory, you can also do the same for a SHA1 hash. Making something that can break both of those simultaneously? That would be incredibly tricky.
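To make the comment concrete, here is a minimal sketch of what verifying several published checksums can look like, using Python's standard hashlib; the file name and the expected digest strings are made-up placeholders, not values from any real project. The point is simply that a download is accepted only if every digest matches, so a forgery would need a simultaneous collision in all of them.

import hashlib

# Hypothetical published digests; both strings are placeholders.
EXPECTED = {
    "md5": "9e107d9d372bb6826bd81d3542a419d6",
    "sha256": "d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592",
}

def compute_digests(path, chunk_size=1 << 20):
    """Read the file once and feed the same chunks to every hasher."""
    hashers = {name: hashlib.new(name) for name in EXPECTED}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}

def verify(path):
    """Accept the file only if every published checksum matches."""
    actual = compute_digests(path)
    return all(actual[name] == expected for name, expected in EXPECTED.items())

if __name__ == "__main__":
    ok = verify("release.tar.gz")   # hypothetical downloaded file
    print("all checksums match" if ok else "MISMATCH: do not install")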
[ "Check digits and parity bits are special cases of checksums, appropriate for small blocks of data (such as Social Security numbers, bank account numbers, computer words, single bytes, etc.). Some error-correcting codes are based on special checksums which not only detect common errors but also allow the original data to be recovered in certain cases.\n\nSection::::Algorithms.\n\nSection::::Algorithms.:Parity byte or parity word.\n", "Alternately, you can use the same checksum creation algorithm, ignoring the checksum already in place as if it had not yet been calculated. Then calculate the checksum and compare this calculated checksum to the original checksum included with the credit card number. If the included checksum matches the calculated checksum, then the number is valid.\n\nSection::::Strengths and weaknesses.\n", "As with any calculation that divides a binary data word into short blocks and treats the blocks as numbers, any two systems expecting to get the same result should preserve the ordering of bits in the data word. In this respect, the Fletcher checksum is not different from other checksum and CRC algorithms and needs no special explanation.\n", "Checksum\n\nA checksum is a small-sized datum derived from a block of digital data for the purpose of detecting errors that may have been introduced during its transmission or storage. It is usually applied to an installation file after it is received from the download server. By themselves, checksums are often used to verify data integrity but are not relied upon to verify data authenticity.\n", "The simple checksums described above fail to detect some common errors which affect many bits at once, such as changing the order of data words, or inserting or deleting words with all bits set to zero. The checksum algorithms most used in practice, such as Fletcher's checksum, Adler-32, and cyclic redundancy checks (CRCs), address these weaknesses by considering not only the value of each word but also its position in the sequence. This feature generally increases the cost of computing the checksum.\n\nSection::::Algorithms.:General considerations.\n", "The simplest checksum algorithm is the so-called longitudinal parity check, which breaks the data into \"words\" with a fixed number \"n\" of bits, and then computes the exclusive or (XOR) of all those words. The result is appended to the message as an extra word. To check the integrity of a message, the receiver computes the exclusive or of all its words, including the checksum; if the result is not a word consisting of \"n\" zeros, the receiver knows a transmission error occurred.\n", "Section::::The algorithm.:The Fletcher checksum.\n", "A single-bit transmission error then corresponds to a displacement from a valid corner (the correct message and checksum) to one of the \"m\" adjacent corners. An error which affects \"k\" bits moves the message to a corner which is \"k\" steps removed from its correct corner. The goal of a good checksum algorithm is to spread the valid corners as far from each other as possible, so as to increase the likelihood \"typical\" transmission errors will end up in an invalid corner.\n\nSection::::See also.\n\nGeneral topic\n\nBULLET::::- Algorithm\n\nBULLET::::- Check digit\n\nBULLET::::- Damm algorithm\n\nBULLET::::- Data rot\n\nBULLET::::- File verification\n", "The actual procedure which yields the checksum from a data input is called a checksum function or checksum algorithm. 
Depending on its design goals, a good checksum algorithm will usually output a significantly different value, even for small changes made to the input. This is especially true of cryptographic hash functions, which may be used to detect many data corruption errors and verify overall data integrity; if the computed checksum for the current data input matches the stored value of a previously computed checksum, there is a very high probability the data has not been accidentally altered or corrupted.\n", "There is one checksum item per contiguous run of allocated blocks, with per-block checksums packed end-to-end into the item data. If there are more checksums than can fit, they spill rightwards over into another checksum item in a new leaf. If the file system detects a checksum mismatch while reading a block, it first tries to obtain (or create) a good copy of this block from another device if internal mirroring or RAID techniques are in use.\n", "CRC-32C checksums are computed for both data and metadata and stored as \"checksum items\" in a \"checksum tree\". There is room for 256 bits of metadata checksums and up to a full leaf block (roughly 4 KB or more) for data checksums. More checksum algorithm options are planned for the future.\n", "As with simpler checksum algorithms, the Fletcher checksum involves dividing the binary data word to be protected from errors into short \"blocks\" of bits and computing the modular sum of those blocks. (Note that the terminology used in this domain can be confusing. The data to be protected, in its entirety, is referred to as a \"word\", and the pieces into which it is divided are referred to as \"blocks\".)\n", "Checksum functions are related to hash functions, fingerprints, randomization functions, and cryptographic hash functions. However, each of those concepts has different applications and therefore different design goals. For instance, a function returning the start of a string can provide a hash appropriate for some applications but will never be a suitable checksum. Checksums are used as cryptographic primitives in larger authentication algorithms. For cryptographic systems with these two specific design goals, see HMAC.\n", "A variant of the previous algorithm is to add all the \"words\" as unsigned binary numbers, discarding any overflow bits, and append the two's complement of the total as the checksum. To validate a message, the receiver adds all the words in the same manner, including the checksum; if the result is not a word full of zeros, an error must have occurred. This variant too detects any single-bit error, but the promodular sum is used in SAE J1708.\n\nSection::::Algorithms.:Position-dependent.\n", "The validity of a record can be checked by computing its checksum and verifying that the computed checksum equals the checksum appearing in the record; an error is indicated if the checksums differ. Since the record's checksum byte is the negative of the data checksum, this process can be reduced to summing all decoded byte values — including the record's checksum — and verifying that the LSB of the sum is zero.\n\nSection::::Format.:Text line terminators.\n", "The below is a treatment on how to calculate the checksum including the check bytes; i.e., the final result should equal 0, given properly-calculated check bytes. 
The code by itself, however, will not calculate the check bytes.\n\nAn inefficient but straightforward implementation of a C language function to compute the Fletcher-16 checksum of an array of 8-bit data elements follows:\n", "Fletcher's checksum\n\nThe Fletcher checksum is an algorithm for computing a position-dependent checksum devised by John G. Fletcher (1934–2012) at Lawrence Livermore Labs in the late 1970s. The objective of the Fletcher checksum was to provide error-detection properties approaching those of a cyclic redundancy check but with the lower computational effort associated with summation techniques.\n\nSection::::The algorithm.\n\nSection::::The algorithm.:Review of simple checksums.\n", "a) Apply circular shift to the checksum:\n\nb) Add checksum and segment together, apply bitmask onto the obtained result:\n\nIteration 2: \n\na) Apply circular shift to the checksum:\n\nb) Add checksum and segment together, apply bitmask onto the obtained result:\n\nIteration 3:\n\na) Apply circular shift to the checksum:\n\nb) Add checksum and segment together, apply bitmask onto the obtained result:\n\nFinal checksum: 1000\n\nSection::::Sources.\n\nBULLET::::- official FreeBSD sum source code\n\nBULLET::::- official GNU sum manual page\n\nBULLET::::- coreutils download page --- find and unpack the newest version of the coreutils package, read src/sum.c\n", "Section::::The algorithm.:Weaknesses of simple checksums.\n\nThe first weakness of the simple checksum is that it is insensitive to the order of the blocks (bytes) in the data word (message). If the order is changed, the checksum value will be the same and the change will not be detected. The second weakness is that the universe of checksum values is small, being equal to the chosen modulus. In our example, there are only 255 possible checksum values, so it is easy to see that even random data has about a 0.4% probability of having the same checksum as our message.\n", "With this checksum, any transmission error which flips a single bit of the message, or an odd number of bits, will be detected as an incorrect checksum. However, an error which affects two bits will not be detected if those bits lie at the same position in two distinct words. Also swapping of two or more words will not be detected. If the affected bits are independently chosen at random, the probability of a two-bit error being undetected is 1/\"n\".\n\nSection::::Algorithms.:Modular sum.\n", "Fletcher addresses both of these weaknesses by computing a second value along with the simple checksum. This is the modular sum of the values taken by the simple checksum as each block of the data word is added to it. The modulus used is the same. So, for each block of the data word, taken in sequence, the block's value is added to the first sum and the new value of the first sum is then added to the second sum. Both sums start with the value zero (or some other known value). At the end of the data word, the modulus operator is applied and the two values are combined to form the Fletcher checksum value.\n", "Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and the Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers.\n\nSection::::Error detection schemes.:Cyclic redundancy checks (CRCs).\n", "The Verhoeff checksum calculation is performed as follows:\n\nBULLET::::1. 
Create an array \"n\" out of the individual digits of the number, taken from right to left (rightmost digit is \"n_0,\" etc.).\n\nBULLET::::2. Initialize the checksum \"c\" to zero.\n\nBULLET::::3. For each index \"i\" of the array \"n,\" starting at zero, replace \"c\" with \"d(c, p(i mod 8, n_i)).\"\n\nThe original number is valid if and only if \"c = 0\".\n\nTo generate a check digit, append a \"0\", perform the calculation: the correct check digit is \"inv(c)\".\n\nSection::::Examples.\n\nGenerate a check digit for \"236\":\n", "The FIX protocol also defines sets of fields that make a particular message; within the set of fields, some will be mandatory and others optional. The ordering of fields within the message is generally unimportant, however repeating groups are preceded by a count and encrypted fields are preceded by their length. The message is broken into three distinct sections: the head, body and tail. Fields must remain within the correct section and within each section the position may be important as fields can act as delimiters that stop one message from running into the next. The final field in any FIX message is tag 10 (checksum).\n", "Because digits are encoded by pairs, only an even number of digits can be encoded. Typically an odd number of digits is encoded by adding a \"0\" as first digit, but sometimes an odd number of digits is encoded by using five narrow spaces in the last digit.\n\nA checksum can be added as last digit, which is calculated in the same way as UPC checksums.\n\nThere are specific constraints on the height and width of the bars and the width of the \"quiet areas\", the blank areas before the start and after the stop symbol.\n" ]
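The checksum passages above walk through the Fletcher-16 algorithm, its check bytes, and the simpler modular sum, and one of them refers to a straightforward C implementation that is not reproduced in this excerpt. A minimal sketch of what such functions might look like follows; the function and variable names are illustrative assumptions, not code from the cited sources.

#include <stdint.h>
#include <stddef.h>

/* Minimal Fletcher-16 sketch: two running sums over the 8-bit blocks, both
   taken modulo 255. sum1 is the simple checksum of the data; sum2 accumulates
   sum1 after every block, which makes the result sensitive to block order. */
uint16_t fletcher16(const uint8_t *data, size_t len)
{
    uint16_t sum1 = 0, sum2 = 0;
    for (size_t i = 0; i < len; i++) {
        sum1 = (uint16_t)((sum1 + data[i]) % 255);
        sum2 = (uint16_t)((sum2 + sum1) % 255);
    }
    return (uint16_t)((sum2 << 8) | sum1);
}

/* Check bytes c0 and c1: appended after the data so that recomputing the
   Fletcher sums over data-plus-check-bytes gives sum1 == 0 and sum2 == 0,
   the "final result should equal 0" convention mentioned above. */
void fletcher16_check_bytes(const uint8_t *data, size_t len,
                            uint8_t *c0, uint8_t *c1)
{
    uint16_t csum = fletcher16(data, len);
    uint16_t f0 = csum & 0xffu;         /* final sum1 */
    uint16_t f1 = (csum >> 8) & 0xffu;  /* final sum2 */
    *c0 = (uint8_t)(0xff - ((f0 + f1) % 0xff));
    *c1 = (uint8_t)(0xff - ((f0 + *c0) % 0xff));
}

/* Two's-complement "modular sum" variant: the receiver adds every byte,
   including this checksum byte, and expects the low 8 bits of the total
   to come out as zero. */
uint8_t modular_sum_checksum(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (uint8_t)(sum + data[i]);  /* overflow bits are discarded */
    return (uint8_t)(0u - sum);
}

The second running sum is the part that addresses the order-insensitivity weakness described above: swapping two blocks leaves sum1 unchanged but alters sum2.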
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-14849
How does a body that's extremely out of shape adapt to higher amounts of cardio?
There are metabolic changes that start when you exercise regularly. One of the first things that happens is that the number of mitochondria in your cells increases. Your body literally makes more of them per cell. And mitochondria are what produce ATP, the energy your muscles run on, so more of them per cell means more aerobic capacity.
[ "Understanding the relationship between cardiorespiratory fitness and other categories of conditioning requires a review of changes that occur with increased aerobic, or anaerobic capacity. As aerobic/anaerobic capacity increases, general metabolism rises, muscle metabolism is enhanced, haemoglobin rises, buffers in the bloodstream increase, venous return is improved, stroke volume is improved, and the blood bed becomes more able to adapt readily to varying demands. Each of these results of cardiovascular fitness/cardiorespiratory conditioning will have a direct positive effect on muscular endurance, and an indirect effect on strength and flexibility.\n", "Section::::Diagnosis.\n\nThere are two main types of cardiomegaly:\n\nDilated cardiomyopathy is the most common type of cardiomegaly. In this condition, the walls of the left and/or right ventricles of the heart become thin and stretched. The result is an enlarged heart.\n\nIn the other types of cardiomegaly, the heart's large muscular left ventricle becomes abnormally thick. Hypertrophy is usually what causes left ventricular enlargement. Hypertrophic cardiomyopathy is typically an inherited condition.\n\nThere are many techniques and tests used to diagnose an enlarged heart. Below is a list of tests and how they test for cardiomegaly:\n", "In general, even a small amount of exercise can induce hypomethylation of the whole genome within muscle cells. This means that many regulatory genes can be turned on for pathways like muscle repair and growth. The intensity of the exercise directly correlates to the amount of promoter demethylation, so more strenuous exercise activates more genes.\n", "Section::::Electrical.\n", "People with athlete's heart do not exhibit an abnormally enlarged septum, and the growth of heart muscle at the septum and free ventricular wall is symmetrical. The asymmetrical growth seen in HCM results in a less-dilated left ventricle. This in turn leads to a smaller volume of blood leaving the heart with each beat.\n\nSection::::Screening and diagnosis.\n", "Recent studies have shown that those subjects with an extremely high occurrence (several thousands a day) of premature ventricular contractions (extrasystole) can develop dilated cardiomyopathy. In these cases, if the extrasystole are reduced or removed (for example, via ablation therapy) the cardiomyopathy usually regresses.\n\nSection::::Causes.:Genetics.\n\nAbout 25–35% of affected individuals have familial forms of the disease, with most mutations affecting genes encoding cytoskeletal proteins, while some affect other proteins involved in contraction. The disease is genetically heterogeneous, but the most common form of its transmission is an autosomal dominant pattern.\n", "Section::::Health effects.:Fitness.\n\nIndividuals can increase fitness following increases in physical activity levels. Increases in muscle size from resistance training is primarily determined by diet and testosterone. This genetic variation in improvement from training is one of the key physiological differences between elite athletes and the larger population. Studies have shown that exercising in middle age leads to better physical ability later in life.\n", "The basis of Performance Medicine in improving athletic performance lies in the understanding that the functions of the immune system, nervous system, hormonal system, and digestive system govern adaptation to training. 
All environmental stimuli (including training and nutrition) are processed by these systems, which will respond with adaptation, if their capacity permits. It is therefore the functions of these systems, which determine the result of all training stimuli. \n", "During exercise a person breathes deeper in order to meet higher oxygen requirements. This adoption of a deeper breathing pattern also serves a secondary function of strengthening the core of the body. This strengthening effect occurs because the thoracic diaphragm adopts a lower position than it does than when at rest; this generates increased intra-abdominal pressure which helps to strengthen the lumbar spine and the core of the body overall. For this reason, taking a deep breath, or adopting a deeper breathing pattern, is a fundamental requirement when lifting heavy weights.\n\nSection::::Related physiological processes.:Post-activation potentiation (PAP).\n", "Cortisol decreases amino acid uptake by muscle tissue, and inhibits protein synthesis. The short-term increase in protein synthesis that occurs subsequent to resistance training returns to normal after approximately 28 hours in adequately fed male youths. Another study determined that muscle protein synthesis was elevated even 72 hours following training.\n", "Section::::Respiratory system adaptations.\n\nAlthough all of the described adaptations in the body to maintain homeostatic balance during exercise are very important, the most essential factor is the involvement of the respiratory system. The respiratory system allows for the proper exchange and transport of gases to and from the lungs while being able to control the ventilation rate through neural and chemical impulses. In addition, the body is able to efficiently use the three energy systems which include the phosphagen system, the glycolytic system, and the oxidative system.\n\nSection::::Temperature regulation.\n", "Section::::Neural control.\n\nRespiratory adaptation begins almost immediately after the initiation of the physical stress associated with exercise. This triggers signals from the motor cortex that stimulate the respiratory center of the brain stem, in conjunction with feedback from the proprioreceptors in the muscles and joints of the active limbs.\n\nSection::::Breathing rate.\n\nWith higher intensity training, breathing rate is increased in order to allow more air to move in and out of the lungs, which enhances gas exchange. Endurance training typically results in an increase in the respiration rate.\n\nSection::::Lung capacity.\n", "Special organs release chemicals such as \"Pineal Tribrantine 3\" or PT3, which confers youth and vigor. Cyborg muscles are much stronger than the human equivalent. Their bodies are also capable of \"hyperfunction\" which allows them to carry out actions at many times normal speed.\n", "Section::::Effects on metabolic processes.\n\nIn addition to restructuring the muscular and skeletal system to better handle mechanical stress, physical exercise also affects gene expression with respect to metabolism. The effects are widespread and can affect anything from muscle growth to aerobic stamina to diabetes and other metabolic disorders.\n", "The explanation why the heart is capable of adapting to growing volumes of blood flow is called 'Frank-Starling’s mechanism', which says that the more distended are the muscle during the filling, the more strength of contraction and quantity of blood pumped by the left-ventricle to the aorta. 
When the heart reaches a physiological limitation, then blood pumping cannot increase, although the venous return is further increased. The SVV contributes to know the state of volemia in ventilated patients and designates the point on the Frank-Starling curve where the patient is.\n\nSection::::Therapy and methods.:The current qCO utilities can be summarized in.\n", "BULLET::::14. Beisvag V, Kemi OJ, Arbo I, Loennechen LP, Wisløff U, Langaas M, Sandvik AK, Ellingsen Ø. Pathological and physiological hypertrophies are regulated by distinct gene programs. Eur J Cardiovasc Prev Rehabil 2009, 16: 690–607.\n\nBULLET::::15. Wisløff U, Ellingsen Ø, Kemi OJ. High-intensity interval training to maximize cardiac benefits of exercise training? Exerc Sport Sci Rev 2009, 37: 139–146.\n", "Bradycardia is not necessarily problematic. People who regularly practice sports may have sinus bradycardia, because their trained hearts can pump enough blood in each contraction to allow a low resting heart rate. Sinus bradycardia can also be an adaptive advantage; for example, diving seals may have a heart rate as low as 12 beats per minute, helping them to conserve oxygen during long dives.\n\nSinus bradycardia is a common condition found in both healthy individuals and those who are considered well conditioned athletes.\n", "Examples of increased muscle hypertrophy are seen in various professional sports, mainly strength related sports such as boxing, olympic weightlifting, mixed martial arts, rugby, professional wrestling and various forms of gymnastics. Athletes in other more skill-based sports such as basketball, baseball, ice hockey, and soccer may also train for increased muscle hypertrophy to better suit their position of play. For example, a center (basketball) may want to be bigger and more muscular to better overpower his or her opponents in the low post. Athletes training for these sports train extensively not only in strength but also in cardiovascular and muscular endurance training.\n", "The body most efficiently produces power when its strength producing areas exist in particular proportions. If these proportions exist in the correct ratio to each other, then power generation can be optimised. Conversely, if one area is too strong, this may mean that it is disproportionately strong relative to other areas of the body. This may cause a number of problems: a weaker area of the body may be excessively strained by working in conjunction with the stronger area; and the stronger area may be slowed by working with the weaker area. Such problems hinder power development.\n", "BULLET::::3. Initiation of cardiac activity using the formula \"My heartbeat is calm and regular\".\n\nBULLET::::4. Passive concentration on the respiratory mechanism with the formula \"It breathes me\".\n\nBULLET::::5. Concentration on the warmth in the abdominal region with \"My solar plexus is warm\" formula.\n\nBULLET::::6. Passive concentration on coolness in the cranial region with the formula \"My forehead is cool\".\n\nWhen a new exercise step is added in autogenic training, the trainee should always concentrate initially on the already learned exercises and then add a new exercise. In the beginning, a new exercise is added for only brief periods.\n", "Cardiogenesis products focus on the treatment of refractory angina in patients with diffuse coronary artery disease(CAD). 
Patients suffering from chronic angina (chest pain) are generally managed aggressively with medications to help alleviate their symptoms and frequently receive multiple coronary interventions and even bypass surgery over the course of the advancement of their disease. When chest pain continues in spite of treatments, the condition is considered, “refractory angina.”\n", "Myostatin is a protein responsible for inhibiting muscle differentiation and growth. Removing the myostatin gene or otherwise limiting its expression leads to an increase in muscle size and power. This has been demonstrated in knockout mice lacking the gene that were dubbed \"Schwarzenegger mice\". Humans born with defective genes can also serve as \"knockout models\"; a German boy with a mutation in both copies of the myostatin gene was born with well-developed muscles. The advanced muscle growth continued after birth, and the boy could lift weights of 3 kg at the age of 4. In work published in 2009, scientists administered follistatin via gene therapy to the quadriceps of non-human primates, resulting in local muscle growth similar to the mice.\n", "Section::::Screening.:United States.\n", "Respiratory adaptation\n\nRespiratory adaptation is the specific changes that the respiratory system undergoes in response to the demands of physical exertion. Intense physical exertion, such as that involved in fitness training, places elevated demands on the respiratory system. Over time, this results in respiratory changes as the system adapts to these requirements. These changes ultimately result in an increased exchange of oxygen and carbon dioxide, which is accompanied by an increase in metabolism. Respiratory adaptation is a physiological determinant of peak endurance performance, and in elite athletes, the pulmonary system is often a limiting factor to exercise under certain conditions.\n", "The goal of core training is definitely not to develop muscle hypertrophy but to improve functional predispositions of physical activity. This particularly involves improving intra- and intermuscular coordination or synchronization of participating muscles.\n\nInvolvement of the core means more than just compressing abdominal muscles when in crouching or seated position. The role of the core muscles is to stabilize the spine. Resisting expansion or rotation is as important as the ability to execute movement.\n\nSection::::Types.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-00212
Why do humans float on water?
The amount of water your body displaces weighs more than your body, therefore you float. Essentially, a human body is less dense than water.
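A rough worked version of that density argument, using assumed ballpark numbers (roughly 985 kg/m^3 for a body with air in the lungs and 1000 kg/m^3 for fresh water; the real values vary from person to person), written in LaTeX:

\rho_{\text{body}} \approx 985\ \mathrm{kg/m^3} \;<\; \rho_{\text{water}} \approx 1000\ \mathrm{kg/m^3}
\qquad\Rightarrow\qquad
\frac{V_{\text{submerged}}}{V_{\text{body}}} = \frac{\rho_{\text{body}}}{\rho_{\text{water}}} \approx 0.985

By Archimedes' principle the body settles with about 98% of its volume below the surface and the rest above it; exhaling raises the average density, which is why a fully exhaled swimmer can sink.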
[ "Section::::Physiology.:Buoyancy.\n", "Section::::Related academic and independent research.:Wading and bipedalism.\n\nAAH proponent Algis Kuliukas, performed experiments to measure the comparative energy used when lacking orthograde posture with using fully upright posture. Although it is harder to walk upright with bent knees on land, this difference gradually diminishes as the depth of water increases and is still practical in thigh-high water.\n\nIn a critique of the AAH, Henry Gee questioned any link between bipedalism and diet. Gee writes that early humans have been bipedal for 5 million years, but our ancestor's \"fondness for seafood\" emerged a mere 200,000 years ago.\n", "When a person floats motionless in the water, their legs tend to sink. When a person swims freestyle, the legs rise toward surface because water passing underneath the body pushes the legs up, similar to how the wind can lift a kite into the air. In addition, a proper kicking technique will bring the legs all the way to the surface, creating a more streamlined profile for the arms to pull through the water. Both of these mechanisms of becoming horizontal require a small amount of energy from the swimmer. When a person wearing a thick wetsuit floats motionless in the water, their legs tend to float on the surface. Theoretically, this obviates the small energy expenditure mentioned above, although an additional small amount of energy is required to continually flex the wetsuit during swimming motions.\n", "Traveling long distances and deep dives are a combination of good stamina and also moving an efficient speed and in an efficient way to create laminar flow, reducing drag and turbulence. In sea water as the fluid, it traveling long distances in large mammals, such as whales, is facilitated by their neutral buoyancy and have their mass completely supported by the density of the sea water. On land, animals have to expend a portion of their energy during locomotion to fight the effects of gravity.\n", "The diving response in animals, such as the dolphin, varies considerably depending on level of exertion during foraging. Children tend to survive longer than adults when deprived of oxygen underwater. The exact mechanism for this effect has been debated and may be a result of brain cooling similar to the protective effects seen in people treated with deep hypothermia.\n\nSection::::Physiological response.:Carotid body chemoreceptors.\n", "In \"The Accidental Species: Misunderstandings of Human Evolution\" (2013), the \"Nature\" editor Henry Gee remarked on how a seafood diet can aid in the development of the human brain. He nevertheless criticized the AAH because \"it's always a problem identifying features [such as body fat and hairlessness] that humans have now and inferring that they must have had some adaptive value in the past.\" Also \"it's notoriously hard to infer habits [such as swimming] from anatomical structures\".\n", "BULLET::::- \"Blood Shift\", the shifting of blood to the thoracic cavity, the region of the chest between the diaphragm and the neck, to avoid the collapse of the lungs under higher pressure during deeper dives.\n\nThe reflex action is automatic and allows both a conscious and an unconscious person to survive longer without oxygen under water than in a comparable situation on dry land. 
The exact mechanism for this effect has been debated and may be a result of brain cooling similar to the protective effects seen in people who are treated with deep hypothermia.\n", "Other automatic breathing control reflexes also exist. Submersion, particularly of the face, in cold water, triggers a response called the diving reflex. This firstly has the result of shutting down the airways against the influx of water. The metabolic rate slows right down. This is coupled with intense vasoconstriction of the arteries to the limbs and abdominal viscera. This reserves the oxygen that is in blood and lungs at the beginning of the dive almost exclusively for the heart and the brain. The diving reflex is an often-used response in animals that routinely need to dive, such as penguins, seals and whales. It is also more effective in very young infants and children than in adults.\n", "Submerging the face in water cooler than about triggers the diving reflex, common to air-breathing vertebrates, especially marine mammals such as whales and seals. This reflex protects the body by putting it into \"energy saving\" mode to maximize the time it can stay under water. The strength of this reflex is greater in colder water and has three principal effects:\n\nBULLET::::- \"Bradycardia\", a slowing of the heart rate by up to 50% in humans.\n\nBULLET::::- \"Peripheral vasoconstriction\", the restriction of the blood flow to the extremities to increase the blood and oxygen supply to the vital organs, especially the brain.\n", "The diving reflex is exhibited strongly in aquatic mammals, such as seals, otters, dolphins, and muskrats, and exists as a lesser response in other animals, including adult humans, babies up to 6 months old (see Infant swimming), and diving birds, such as ducks and penguins.\n", "High-speed ram ventilation creates laminar flow of water from the gills along the body of an organism.\n\nThe secretion of mucus along the organism's body surface, or the addition of long-chained polymers to the velocity gradient, can reduce frictional drag experienced by the organism.\n\nSection::::Efficiency.:Buoyancy.\n", "\"Blood shift\" is a term used when blood flow to the extremities is redistributed to the head and torso during a breathhold dive. Peripheral vasoconstriction occurs during submersion by resistance vessels limiting blood flow to muscles, skin, and viscera, regions which are \"hypoxia-tolerant\", thereby preserving oxygenated blood for the heart, lungs, and brain. The increased resistance to peripheral blood flow raises the blood pressure, which is compensated by bradycardia, conditions which are accentuated by cold water. Aquatic mammals have blood volume that is some three times larger per mass than in humans, a difference augmented by considerably more oxygen bound to hemoglobin and myoglobin of diving mammals, enabling prolongation of submersion after capillary blood flow in peripheral organs is minimized.\n", "Other animals, e.g. penguins, diving ducks, move underwater in a manner which has been termed \"aquatic flying\". Some fish propel themselves without a wave motion of the body, as in the slow-moving seahorses and \"Gymnotus\". \n", "Swim bladders are also used in the food industry as a source of collagen. They can be made into a strong, water-resistant glue, or used to make isinglass for the clarification of beer. In earlier times they were used to make condoms.\n\nSection::::Swim bladder disease.\n\nSwim bladder disease is a common ailment in aquarium fish. 
A fish with swim bladder disorder can float nose down tail up, or can float to the top or sink to the bottom of the aquarium.\n\nSection::::Risk of injury.\n", "Swim bladder disease\n\nSwim bladder disease, also called swim bladder disorder or flipover, is a common ailment in aquarium fish. The swim bladder is an internal gas-filled organ that contributes to the ability of a fish to control its buoyancy, and thus to stay at the current water depth without having to waste energy in swimming. A fish with swim bladder disorder can float nose down tail up, or can float to the top or sink to the bottom of the aquarium. \n\nSection::::Causes.\n", "The speed of motion in air is faster than in water because of drag force. The drag force is proportional to density of the fluid. The animal jumping out of water will feel almost no drag, since the air density is 1,000 times less than water density. Usually animals gain thrust for the jumping as how they lift themselves underwater. Some of them are group behavior.\n\nSection::::Mechanism.\n\nSection::::Mechanism.:Jet propulsion.\n", "As alluded to earlier, buoyancy plays a large role in ensuring the participants benefits from performing hydrogymnastics. Buoyancy means that when one is “submerged” in water, their body will float and this is mainly due to the fact that water has an anti-gravitational effect. In hydrogymnastics, the deeper a person is immersed in water, the less body weight they are carrying and therefore, this results in less pressure on one's joints, ligaments, bones and muscles, better flexibility and increased range of motion. Because resistance activities are also incorporated into hydrogymnastics, this means that the participant will experience other health benefits such as muscle toning (especially around the legs, buttocks and arms) and reduced swelling (e.g. around the lower legs and feet).\n", "When the face is submerged and water fills the nostrils, sensory receptors sensitive to wetness within the nasal cavity and other areas of the face supplied by the fifth (V) cranial nerve (the trigeminal nerve) relay the information to the brain. The tenth (X) cranial nerve, (the vagus nerve) – part of the autonomic nervous system – then produces bradycardia and other neural pathways elicit peripheral vasoconstriction, restricting blood from limbs and all organs to preserve blood and oxygen for the heart and the brain (and lungs), concentrating flow in a heart–brain circuit and allowing the animal to conserve oxygen.\n", "Section::::Stability.\n", "The more of the animal's body that is submerged while swimming, the less energy it uses. Swimming on the surface requires two to three times more energy than when completely submerged. This is because of the bow wave that is formed at the front when the animal is pushing the surface of the water when swimming, creating extra drag.\n\nSection::::Secondary evolution.\n\nWhile tetrapods lost many of their natural adaptations to swimming when they evolved onto the land, many have re-evolved the ability to swim or have indeed returned to a completely aquatic lifestyle.\n", "Balance and equilibrium depend on vestibular function and secondary input from visual, organic, cutaneous, kinesthetic and sometimes auditory senses which are processed by the central nervous system to provide the sense of balance. Underwater, some of these inputs may be absent or diminished, making the remaining cues more important. Conflicting input may result in vertigo and disorientation. 
The vestibular sense is considered to be essential in these conditions for rapid, intricate and accurate movement.\n\nSection::::Sensory impairment.:Proprioception.\n", "Section::::Buoyancy.\n", "Aquatic locomotion\n\nAquatic locomotion is biologically propelled motion through a liquid medium. The simplest propulsive systems are composed of cilia and flagella. Swimming has evolved a number of times in a range of organisms including arthropods, fish, molluscs, reptiles, birds, and mammals.\n\nSection::::Evolution of swimming.\n", "Section::::History.:First evaluation by astronauts.\n", "The guinea pig (or cavy) is noted as having an excellent swimming ability. Mice can swim quite well. They do panic when placed in water, but many lab mice are used in the Morris water maze, a test to measure learning. When mice swim, they use their tails like flagella and kick with their legs.\n\nMany snakes are excellent swimmers as well. Large adult anacondas spend the majority of their time in the water, and have difficulty moving on land.\n\nMany monkeys can naturally swim and some, like the proboscis monkey, crab-eating macaque, and rhesus macaque swim regularly.\n\nSection::::Human swimming.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01838
How is milk turned into cheese?
1. Add acid to milk to make it sour. 2. Add rennet (enzymes from the stomachs of certain mammals like cows). 3. This separates the milk into solid curds (solidified milk proteins) and liquid whey (mainly water and milk sugars). 4. The curd is then processed. The specifics depend on the cheese being made. Some are ready as is. Some are dried out. Some are heated. Some are stretched. Some are washed. 5. Ripening. The cheese is left to ripen for days to years, allowing microbes and enzymes to perform chemical reactions which change the taste and texture of the cheese. Some have additional bacteria and molds introduced to enhance this process.
[ "Cheese consists of proteins and fat from milk, usually the milk of cows, buffalo, goats, or sheep. It is produced by coagulation that is caused by destabilization of the casein micelle, which begins the processes of fractionation and selective concentration. Typically, the milk is acidified and then coagulated by the addition of rennet, containing a proteolytic enzyme known as rennin; traditionally obtained from the stomachs of calves, but currently produced more often from genetically modified microorganisms. The solids are then separated and pressed into final form.\n", "Modern industrial cottage cheese has been manufactured since the 1930s using pasteurized skim milk, or in more modern processes using concentrated nonfat milk or reconstituted nonfat dry milk. A bacterial culture that produces lactic acid (\"Lactococcus lactis\" ssp. \"lactis\" or \"L. lactis\" ssp. \"cremoris\" strains such as are usually used) or a food-grade acid such as vinegar is added to the milk, which allows the milk to curdle and parts to solidify, and it is heated until the liquid reaches , after which it is cooled to . The solids are known as curd and form a gelatinous skin over the liquid (known as whey) in the vat, which is cut into cubes with wires, which allows more whey to drain from the curds. After this step the curds are then reheated by various methods to for an hour or two. In Iowa in the early 1930s, hot water was poured into the vat. This further firms the curds. Once the curds have been drained of the water, and are mostly dry, the mass is pressed to further dry the curds. The curds are then rinsed in water. Finally, salt and a \"dressing\" of cream is added, and the result is packaged and shipped for consumption. Some modern manufactures add starches and gums to stabilize the product. Some smaller modern luxury creameries omit the first heating step but allow the milk to curdle much longer with bacteria to produce the curds, or use crème fraîche as dressing.\n", "Cheese\n\nCheese is a dairy product derived from milk that is produced in a wide range of flavors, textures, and forms by coagulation of the milk protein casein. It comprises proteins and fat from milk, usually the milk of cows, buffalo, goats, or sheep. During production, the milk is usually acidified, and adding the enzyme rennet causes coagulation. The solids are separated and pressed into final form. Some cheeses have molds on the rind, the outer layer, or throughout. Most cheeses melt at cooking temperature.\n", "Traditionally (and legally within the EU), feta is produced using only whole sheep's milk, or a blend of sheep's and goat's milk (with a maximum of 30% goat's milk). The milk may be pasteurized or not, but most producers now use pasteurized milk. If pasteurized milk is used, a starter culture of micro-organisms is added to replace those naturally present in raw milk which are killed in pasteurization.These organisms are required for acidity and flavour development. When the pasteurized milk has cooled to approximately , rennet is added and the casein is left to coagulate. The compacted curds are then chopped up and placed in a special mould or a cloth bag that allows the whey to drain. After several hours, the curd is firm enough to cut up and salt; salinity will eventually reach approximately 3%, when the salted curds are placed (depending on the producer and the area of Greece) in metal vessels or wooden barrels and allowed to infuse for several days. 
After the dry-salting of the cheese is complete, aging or maturation in brine (a 7% salt in water solution) takes several weeks at room temperature and a further minimum of 2 months in a refrigerated high-humidity environment—as before, either in wooden barrels or metal vessels, depending on the producer (the more traditional barrel aging is said to impart a unique flavour). The containers are then shipped to supermarkets where the cheese is cut and sold directly from the container; alternatively blocks of standardized weight are packaged in sealed plastic cups with some brine. Feta dries relatively quickly even when refrigerated; if stored for longer than a week, it should be kept in brine or lightly salted milk.\n", "After each milking, and once the milk is pasteurised, rennet is added to the milk and renneted for a period of 30 to 40 minutes, whether it is an industrial or farmstead cheese. The curd obtained is then uncurdled with a “lyre” which is an instrument made of metal. The milk is uncurdled in order to obtain bits of renneted cheese about the size of wheat grains.\n", "Cheese curds are made from fresh pasteurized milk in the process of creating cheese when bacterial culture and rennet are added to clot the milk. After the milk clots it is then cut into cubes; the result is a mixture of whey and curd. This mixture is then cooked and pressed to release the whey from the curd, creating the final product of cheese curd.\n\nSection::::Characteristics.\n", "Cheesemaking\n\nCheesemaking (or \"caseiculture\") is the craft of making cheese. The production of cheese, like many other food preservation processes, allows the nutritional and economic value of a food material, in this case milk, to be preserved in concentrated form. Cheesemaking allows the production of the cheese with diverse flavors and consistencies.\n\nSection::::History.\n\nCheesemaking is documented in Egyptian tomb drawings and in ancient Greek literature.\n", "Brie may be produced from whole or semi-skimmed milk. The curd is obtained by adding rennet to raw milk and warming it to a maximum temperature of . The cheese is then cast into molds, sometimes with a traditional perforated ladle called a \"pelle à brie\". The mold is filled with several thin layers of cheese and drained for approximately 18 hours. The cheese is then taken out of the molds, salted, inoculated with cheese culture (\"Penicillium candidum\", \"Penicillium camemberti\") or \"Brevibacterium linens\", and aged in a controlled environment for at least four or five weeks.\n", "Skim milk is held until lactic acid bacteria acidify and coagulate its proteins. The curdled milk is stirred and heated to a temperature as high as 80 °C (175 °F), then the whey is drained off and the curd is gathered in bags and pressed. The curd is placed in flat pans, broken up, and washed with warm skim milk, to form a mixture consisting of two parts milk to one part curd. This mixture is stirred and heated, as before, until the casein in the milk curdles and adheres to the mass of curd. The steps of draining, pressing, adding more skim milk, and heating are repeated once more. The curd is drained again, salted (2 to 2.5% by weight) and kneaded on a table for about 15 minutes. Hot butterfat or rich cream is added, about one part of butterfat for every five parts of curd, and the mixture is once again heated and stirred. 
The cheese is then molded in parchment-lined boxes.\n", "Section::::Process.\n\nThe curds and whey are separated using rennet, an enzyme complex normally produced from the stomachs of newborn calves (in vegetarian or kosher cheeses, bacterial, yeast or mould-derived chymosin is used).\n", "Section::::Industrial processing.:Whey.\n\nIn earlier times, whey or milk serum was considered to be a waste product and it was, mostly, fed to pigs as a convenient means of disposal. Beginning about 1950, and mostly since about 1980, lactose and many other products, mainly food additives, are made from both casein and cheese whey.\n\nSection::::Industrial processing.:Yogurt.\n\nYogurt (or yoghurt) making is a process similar to cheese making, only the process is arrested before the curd becomes very hard.\n\nSection::::Industrial processing.:Milk powders.\n", "On the farms, about 5% of buttermilk may be added to the milk, and it is set with rennet at a temperature of to . About 30 minutes later, the curd is cut with a harp, stirred, and warmed to about by pouring in hot whey. The curd is dipped with a cloth and kneaded. Cumin seeds are added to a portion of the curd, and the curd is then put into cloth-lined hoops in three layers, with the spiced curd as the middle layer. The cheese is pressed for about three hours, then it is redressed, inverted, and again pressed overnight. It may be salted with dry salt, or it may be immersed in a brine bath. It is cured in a cool, moist cellar. If the rind becomes too hard, it is washed with whey or salty water.\n", "Dairy plants process the raw milk they receive from farmers so as to extend its marketable life. Two main types of processes are employed: heat treatment to ensure the safety of milk for human consumption and to lengthen its shelf-life, and dehydrating dairy products such as butter, hard cheese and milk powders so that they can be stored.\n\nSection::::Industrial processing.:Cream and butter.\n", "The first step is acidification where a starter culture is added to milk in order to change lactose to lactic acid, thus changing the acidity of the milk and turning it from liquid to solid. The next step is coagulation, where rennet, a mixture of rennin and other material found in the stomach lining of a calf is added to solidify the milk further. Following this, thick curds are cut typically with a knife to encourage the release of liquid or whey. The smaller the curds are cut, the thicker and harder the resulting cheese will become. Salt is then added to provide flavor as well as to act as a preservative so the cheese does not spoil. Next, the cheese is given its form and further pressed with weights if necessary to expel any excess liquid. The final step is ripening the cheese by aging it. The temperature and the level of humidity in the room where the cheese is aging is monitored to ensure the cheese does not spoil or lose its optimal flavor and texture.\n", "Section::::Extraction of calf rennet.:Traditional method.\n\nDried and cleaned stomachs of young calves are sliced into small pieces and then put into salt water or whey, together with some vinegar or wine to lower the pH of the solution. After some time (overnight or several days), the solution is filtered. The crude rennet that remains in the filtered solution can then be used to coagulate milk. 
About 1 g of this solution can normally coagulate 2 to 4 L of milk.\n\nSection::::Extraction of calf rennet.:Modern method.\n", "For a few cheeses, the milk is curdled by adding acids such as vinegar or lemon juice. Most cheeses are acidified to a lesser degree by bacteria, which turn milk sugars into lactic acid, then the addition of rennet completes the curdling. Vegetarian alternatives to rennet are available; most are produced by fermentation of the fungus \"Mucor miehei\", but others have been extracted from various species of the \"Cynara\" thistle family. Cheesemakers near a dairy region may benefit from fresher, lower-priced milk, and lower shipping costs.\n", "Fresh from the farm, milk is poured into large copper vats where it is gently warmed. Each cheese requires up to of milk. Rennet is added, causing the milk to coagulate. The curds are then cut into tiny white grains that are the size of rice or wheat which are then stirred before being heated again for around 30 minutes. The contents are then placed into moulds and the whey is pressed out. After several hours the mould is opened and left to mature in cellars, first for a few weeks at the dairy, and then over several months elsewhere.\n", "BULLET::::- Cured – presented in cylinders high and 12-18 cm (3-4 in) diameter weighing 1 kg.(2.2 lbs) or 2 kg (4.4 lbs).\n\nSection::::Manufacture.\n\nThe goats are milked daily and after filtration the milk is warmed and curdled with an animal enzyme or another authorised agent. Depending on the type of cheese to be produced the process continues thus:\n", "The cheese is made from cow's milk from cows milked in the afternoon or evening and heated between 25 °C and 30 °C (77-86 °F) with a coagulant added so the milk forms curds. After midday the following day, the curds are cut and deposited in a mold to drain. From the mold it is passed to a sack or bag (\"\"Fardela\"\") for the \"\"Trapu\"\" version or left with the form given by the mold (\"\"Troncado\"\"-Trunk) like a bishop's mitre. Salt is added, as is paprika (\"pimentón\") if desired. After a few days the period of curing starts with aging occurring on wooden planks for a period between a week and several months.\n", "To make \"Telemea\" cheese, rennet is added into milk to curdle it. Most commonly, cow's and sheep's milk is used, with goat's and buffalo's being more of a delicacy. The resulting curd is removed and is kept in cheesecloth, pressed overnight, then cut into square pieces. The cheese is then left to mature in brine. This fresh cheese (preserved in brine up to a couple of weeks) has its own name, caș. Subsequently, it is stored in wooden barrels named \"putini\" (singular: \"putină\"). It can be kept throughout winter in a more concentrated brine, in which case, it is desalted in fresh water before consumption.\n", "The cheese is made by heating whole milk, adding quark, and then cooking the mixture until fluffy curds separate from a clear whey. The whey is discarded when the cheese mass reaches a temperature of . At this point, the curds are placed into a skillet or cooking pan, and stirred with a traditional mixture of egg, butter, salt, and caraway seeds. Once a solid, firm ball is formed, the cheese is placed in a muslin or cheesecloth to drain. Generally, the cheese is prepared a few days before eating, and is allowed to ripen in a cool place before consumption.\n", "After the milk is collected in the morning, it is left to rest overnight in a cool place. 
The fat is removed in a caldera of copper and the milk mixed, heated to a temperature ranging between , all in constant motion. Then rennet is added, which helps the whey from the cheese to coagulate into a large, soft ball. The procedure continues with the caldera reheated to . The curd is then pressed mildly in wooden molds, and purged for 24 hours. \n", "A variant of filmjölk called \"tätmjölk\", \"filtäte\", \"täte\" or \"långmjölk\" is made by rubbing the inside of a container with leaves of certain plants: sundew (\"Drosera\", ) or butterwort (\"Pinguicula\", ). Lukewarm milk is added to the container and left to ferment for one to two days. More \"tätmjölk\" can then be made by adding completed \"tätmjölk\" to milk. In \"Flora Lapponica\" (1737), Carl von Linné described a recipe for \"tätmjölk\" and wrote that any species of butterwort could be used to make \"tätmjölk\".\n", "Starter whey (containing a mixture of certain thermophilic lactic acid bacteria) is added, and the temperature is raised to 33–35 °C (91–95 °F). Calf rennet is added, and the mixture is left to curdle for 10–12 minutes. The curd is then broken up mechanically into small pieces (around the size of rice grains). The temperature is then raised to 55 °C (131 °F) with careful control by the cheese-maker. The curd is left to settle for 45–60 minutes. The compacted curd is collected in a piece of muslin before being divided in two and placed in molds. There is 1100 L (291 US gallons or 250 imperial gallons) of milk per vat, producing two cheeses each. The curd making up each wheel at this point weighs around 45 kg (100 lb). The remaining whey in the vat was traditionally used to feed the pigs from which \"Prosciutto di Parma\" (cured Parma ham) was produced. The barns for these animals were usually just a few yards away from the cheese production rooms.\n", "During the fermentation process, once the cheesemaker has gauged that sufficient lactic acid has been developed, rennet is added to cause the casein to precipitate. Rennet contains the enzyme chymosin which converts κ-casein to para-κ-caseinate (the main component of cheese curd, which is a salt of one fragment of the casein) and glycomacropeptide, which is lost in the cheese whey. As the curd is formed, milk fat is trapped in a casein matrix. After adding the rennet, the cheese milk is left to form curds over a period of time.\n\nSection::::Process.:Draining.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-12772
Where does the light gain so much energy to travel at its speed?
A photon (light particle) is massless, so it doesn't need to use energy to reach light speed. A photon is itself a packet of energy: it can transfer that energy, but it doesn't spend any of it to move. An electron in an atom has a natural orbit that it occupies, but if you energize an atom, you can move its electrons to higher orbitals. A photon is produced whenever an electron in a higher-than-normal orbit falls back to its normal orbit. During the fall from high energy to normal energy, the electron emits a photon -- a packet of energy -- with very specific characteristics. The photon has a frequency, or color, that exactly matches the size of the energy drop the electron makes.
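As a rough worked example of that last point (the 530 nm wavelength is just an assumed value for green light, and the constants are rounded), the photon's energy is fixed by its frequency through Planck's relation rather than by any energy spent on moving:

E = h\nu = \frac{hc}{\lambda} \approx \frac{(6.63\times 10^{-34}\ \mathrm{J\,s})(3.00\times 10^{8}\ \mathrm{m/s})}{530\times 10^{-9}\ \mathrm{m}} \approx 3.8\times 10^{-19}\ \mathrm{J} \approx 2.3\ \mathrm{eV}

A larger energy drop inside the atom therefore shows up as a bluer (higher-frequency) photon, not a faster one; every photon leaves at c.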
[ "Light that travels through transparent matter does so at a lower speed than \"c\", the speed of light in a vacuum. For example, photons engage in so many collisions on the way from the core of the sun that radiant energy can take about a million years to reach the surface; however, once in open space, a photon takes only 8.3 minutes to reach Earth. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polariton (other quasi-particles are phonons and excitons); this polariton has a nonzero effective mass, which means that it cannot travel at \"c\". Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering.\n", "The simplest picture of light given by classical physics is of a wave or disturbance in the electromagnetic field. In a vacuum, Maxwell's equations predict that these disturbances will travel at a specific speed, denoted by the symbol . This well-known physical constant is commonly referred to as the speed of light. The postulate of the constancy of the speed of light in all inertial reference frames lies at the heart of special relativity and has given rise to a popular notion that the \"speed of light is always the same\". However, in many situations light is more than a disturbance in the electromagnetic field.\n", "Relativistic jets emit most of their energy via synchrotron emission. In our simple model the sphere contains highly relativistic electrons and a steady magnetic field. Electrons inside the blob travel at speeds just a tiny fraction below the speed of light and are whipped around by the magnetic field. Each change in direction by an electron is accompanied by the release of energy in the form of a photon. With enough electrons and a powerful enough magnetic field the relativistic sphere can emit a huge number of photons, ranging from those at relatively weak radio frequencies to powerful X-ray photons.\n", "If an observer runs away from a photon in the direction the photon travels from a source, and it catches up with the observer—when the photon catches up, the observer sees it as having less energy than it had at the source. The faster the observer is traveling with regard to the source when the photon catches up, the less energy the photon has. As an observer approaches the speed of light with regard to the source, the photon looks redder and redder, by relativistic Doppler effect (the Doppler shift is the relativistic formula), and the energy of a very long-wavelength photon approaches zero. This is because the photon is \"massless\"—the rest mass of a photon is zero.\n", "If a laser beam is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than \"c\". Similarly, a shadow projected onto a distant object can be made to move across the object faster than \"c\". 
In neither case does the light travel from the source to the object faster than \"c\", nor does any information travel faster than light. An analogy can be made to pointing a water hose in one direction and then quickly moving the hose to point the stream of water in another direction. At no point does the water leaving the hose ever increase in velocity, but the endpoint of the stream can be moved faster than the water in the stream itself.\n", "So-called superluminal motion is seen in certain astronomical objects, such as the relativistic jets of radio galaxies and quasars. However, these jets are not moving at speeds in excess of the speed of light: the apparent superluminal motion is a projection effect caused by objects moving near the speed of light and approaching Earth at a small angle to the line of sight: since the light which was emitted when the jet was farther away took longer to reach the Earth, the time between two successive observations corresponds to a longer time between the instants at which the light rays were emitted.\n", "In empty space, the photon moves at \"c\" (the speed of light) and its energy and momentum are related by , where \"p\" is the magnitude of the momentum vector p. This derives from the following relativistic relation, with :\n\nThe energy and momentum of a photon depend only on its frequency (\"ν\") or inversely, its wavelength (\"λ\"):\n\nwhere k is the wave vector (where the wave number ), is the angular frequency, and is the reduced Planck constant.\n\nSince p points in the direction of the photon's propagation, the magnitude of the momentum is\n", "In the case of a relativistic jet, beaming (emission aberration) will make it appear as if more energy is sent forward, along the direction the jet is traveling. In the simple jet model a homogeneous sphere will emit energy equally in all directions in the rest frame of the sphere. In the rest frame of Earth the moving sphere will be observed to be emitting most of its energy along its direction of motion. The energy, therefore, is ‘beamed’ along that direction.\n\nQuantitatively, aberration accounts for a change in luminosity of\n\nSection::::A simple jet model.:Beaming equation.:Time dilation.\n", "The name most often associated with emission theory is Isaac Newton. In his \"corpuscular theory\" Newton visualized light \"corpuscles\" being thrown off from hot bodies at a nominal speed of \"c\" with respect to the emitting object, and obeying the usual laws of Newtonian mechanics, and we then expect light to be moving towards us with a speed that is offset by the speed of the distant emitter (\"c\" ± \"v\").\n", "Section::::Scientific examination.\n\nIn early 2008, Heins was given access to equipment to demonstrate it by professor Riadh Habash of the University of Ottawa, who says of it, \"It accelerates, but when it comes to an explanation, there is no backing theory for it. That's why we're consulting MIT. But at this time we can't support any claim.\"\n", "If a laser beam is swept quickly across a distant object, the spot of light can move faster than \"c\", although the initial movement of the spot is delayed because of the time it takes light to get to the distant object at the speed \"c\". However, the only physical entities that are moving are the laser and its emitted light, which travels at the speed \"c\" from the laser to the various positions of the spot. Similarly, a shadow projected onto a distant object can be made to move faster than \"c\", after a delay in time. 
In neither case does any matter, energy, or information travel faster than light.\n", "Light particles, or photons, travel at the speed of \"c\", the constant that is conventionally known as the \"speed of light\". This statement is not a tautology, since many modern formulations of relativity do not start with constant speed of light as a postulate. Photons therefore propagate along a light-like world line and, in appropriate units, have equal space and time components for every observer.\n", "In modern quantum physics, the electromagnetic field is described by the theory of quantum electrodynamics (QED). In this theory, light is described by the fundamental excitations (or quanta) of the electromagnetic field, called photons. In QED, photons are massless particles and thus, according to special relativity, they travel at the speed of light in vacuum.\n", "BULLET::::- Relativistic Doppler effect, where light that bounces on an object that is moving in a very high speed will get its wavelength changed; if the light bounces at an object that is moving towards it, the impact will compress the photons, so the wavelength will become shorter and the light will be blueshifted and the photons will be packed more closely so the photon flux will be increased; if it bounces at an object that is moving away from it, it will be redshifted and the photons will be packed more sparsely so the photon flux will be decreased.\n", "In classical physics, light is described as a type of electromagnetic wave. The classical behaviour of the electromagnetic field is described by Maxwell's equations, which predict that the speed \"c\" with which electromagnetic waves (such as light) propagate through the vacuum is related to the distributed capacitance and inductance of the vacuum, otherwise respectively known as the electric constant \"ε\" and the magnetic constant \"μ\", by the equation\n", "Relativistic beaming (also known as Doppler beaming, Doppler boosting, or the headlight effect) is the process by which relativistic effects modify the apparent luminosity of emitting matter that is moving at speeds close to the speed of light. In an astronomical context, relativistic beaming commonly occurs in two oppositely-directed relativistic jets of plasma that originate from a central compact object that is accreting matter. Accreting compact objects and relativistic jets are invoked to explain the following observed phenomena: x-ray binaries, gamma-ray bursts, and, on a much larger scale, active galactic nuclei (AGN). (Quasars are also associated with an accreting compact object, but are thought to be merely a particular variety of AGN.)\n", "Marx, Redding and Simmons and McInnes calculated that the energy conversion efficiency of terrestrial laser-driven propulsion is approximately proportional to v/c at low speeds (v<0.1c), thus is small at low speeds (v«0.1c). However, at higher speeds (v>0.1c), owing to the favorable Doppler shift energy transfer, onboard photon propulsion becomes much more energy efficient.\n", "When an object is pushed in the direction of motion, it gains momentum and energy, but when the object is already traveling near the speed of light, it cannot move much faster, no matter how much energy it absorbs. Its momentum and energy continue to increase without bounds, whereas its speed approaches (but never reaches) a constant value—the speed of light. 
This implies that in relativity the momentum of an object cannot be a constant times the velocity, nor can the kinetic energy be a constant times the square of the velocity.\n", "It is possible for a particle to travel through a medium faster than the phase velocity of light in that medium (but still slower than \"c\"). When a charged particle does that in a dielectric material, the electromagnetic equivalent of a shock wave, known as Cherenkov radiation, is emitted.\n\nSection::::Practical effects of finiteness.\n\nThe speed of light is of relevance to communications: the one-way and round-trip delay time are greater than zero. This applies from small to astronomical scales. On the other hand, some techniques depend on the finite speed of light, for example in distance measurements.\n", "Section::::History.:Connections with electromagnetism.\n", "This phenomenon is caused by the jets traveling very near the speed of light towards the observer. The angle is not necessarily very small with the line-of-sight as is commonly asserted. Because the high-velocity jets are emitting light at every point of their path, the light they emit does not approach the observer much more quickly than the jet itself. This causes the light emitted over hundreds of years of the jet's travel to not have hundreds of light-years of distance between its front end (the earliest light emitted) and its back end (the latest light emitted); the complete \"light-train\" thus arrives at the observer over a much smaller time period (ten or twenty years), giving the illusion of faster-than-light travel.\n", "Under general relativity, the rotation of a body gives it an additional gravitational attraction due to its kinetic energy; and light is pulled around (to some degree) by the rotation (Lense–Thirring effect).\n\nBULLET::::- In the case of rotation, under general relativity we observe a velocity-dependent dragging effect, since, for a rotating body, the tendency of the object to pull things around with it can be accounted for by the fact that the receding part of the object is pulling more strongly than the approaching part.\n\nSection::::References.\n", "where we have inserted formula_18 radians (imagine that the central mass, about which the photon is orbiting, is located at the centre of the coordinate axes. Then, as the photon is travelling along the formula_19-coordinate line, for the mass to be located directly in the centre of the photon's orbit, we must have formula_18 radians).\n\nHence, rearranging this final expression gives:\n\nformula_21\n\nwhich is the result we set out to prove.\n\nSection::::Photon orbits around a Kerr black hole.\n", "Section::::Fundamental role in physics.\n", "Later, in 1916 Einstein also showed that the recoil of molecules during the emission and absorption of photons was consistent with, and necessary for, a quantum description of thermal radiation processes. Each photon acts as if it imparts a momentum impulse \"p\" equal to its energy divided by the speed of light, ().\n" ]
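Several formulas were dropped from the photon passage above during extraction (the blanks after 'related by', around the frequency and wavelength statement, and after 'the magnitude of the momentum is'). The standard relations those blanks presumably correspond to are:

E = pc, \qquad E^{2} = (pc)^{2} + (mc^{2})^{2}\ \text{with}\ m = 0

E = \hbar\omega = h\nu = \frac{hc}{\lambda}, \qquad \mathbf{p} = \hbar\mathbf{k}

p = \hbar k = \frac{h}{\lambda} = \frac{h\nu}{c} = \frac{E}{c}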
[ "Light gains energy to travel at it's speed." ]
[ "Photons (light particles) are massless, so they do not require the use of energy to reach light speed. " ]
[ "false presupposition" ]
[ "Light gains energy to travel at it's speed." ]
[ "false presupposition" ]
[ "Photons (light particles) are massless, so they do not require the use of energy to reach light speed. " ]
2018-22785
Why do colors in a TV screen change when looked at at a certain angle?
From different perspectives, some of the colours in each pixel (red, green, blue) would be more visible than others, changing the general colour of the picture. This of course doesn't happen when facing the front of the screen, because every pixel and colour is equally exposed.
[ "TN displays suffer from limited viewing angles, especially in the vertical direction. Colors will shift when viewed off-perpendicular. In the vertical direction, colors will shift so much that they will invert past a certain angle.\n", "In television sets and computer monitors, the entire front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In all modern CRT monitors and televisions, the beams are bent by \"magnetic deflection\", a varying magnetic field generated by coils and driven by electronic circuits around the neck of the tube, although electrostatic deflection is commonly used in oscilloscopes, a type of diagnostic instrument.\n", "BULLET::::1. Apply the first test pattern to the electrical interface of the display under test and wait until the optical response has settled to a stable steady state,\n\nBULLET::::2. Measure the luminance and/or the chromaticity of the first test pattern and record the result,\n\nBULLET::::3. Apply the second test pattern to the electrical interface of the display under test and wait until the optical response has settled to a stable steady state,\n\nBULLET::::4. Measure the luminance and/or the chromaticity of the second test pattern and record the result,\n", "This way of proceeding is suitable only when the display device does not exhibit \"loading effects\", which means that the luminance of the test pattern is varying with the size of the test pattern. Such loading effects can be found in CRT-displays and in PDPs. A small test pattern (e.g. 4% window pattern) displayed on these devices can have significantly higher luminance than the corresponding full-screen pattern because the supply current may be limited by special electronic circuits.\n\nSection::::Full-swing contrast.\n", "BULLET::::5. Calculate the resulting \"static contrast\" for the two test patterns using one of the metrics listed above (CR,C or K).\n\nWhen luminance and/or chromaticity are measured before the optical response has settled to a stable steady state, some kind of \"transient contrast\" has been measured instead of the \"static contrast\".\n\nSection::::Transient contrast.\n\nWhen the image content is changing rapidly, e.g. during the display of video or movie content, the optical state of the display may not reach the intended stable steady state because of slow response and thus the apparent contrast is reduced if compared to the static contrast.\n", "If the reflective properties of the projection screen (usually depending on direction) are included in the measurement, the luminance reflected from the centers of the rectangles has to be measured for a (set of) specific directions of observation.\n\nLuminance, contrast and chromaticity of LCD-screens is usually varying with the direction of observation (i.e. viewing direction). 
The variation of electro-optical characteristics with viewing direction can be measured sequentially by mechanical scanning of the viewing cone (\"gonioscopic\" approach) or by simultaneous measurements based on conoscopy.\n\nSection::::See also.\n\nBULLET::::- Contrast (vision)\n\nSection::::External links.\n\nBULLET::::- Charles Poynton:\" Reducing eyestrain from video and computer monitors\"\n", "A fortunate side-effect of inversion (see above) is that, for most display material, what little cross-talk there is largely cancelled out. For most practical purposes, the level of crosstalk in modern LCDs is negligible.\n\nCertain patterns, particularly those involving fine dots, can interact with the inversion and reveal visible cross-talk. If you try moving a small Window in front of the inversion pattern (above) which makes your screen flicker the most, you may well see cross-talk in the surrounding pattern.\n\nDifferent patterns are required to reveal cross-talk on different displays (depending on their inversion scheme).\n", "BULLET::::2. The LCD moves around two axes which are at a right angle to each other, so that the screen both tilts and swivels. This type is called \"swivel screen\". Other names for this type are \"vari-angle screen\", \"fully articulated screen\", \"fully articulating screen\", \"rotating screen\", \"multi-angle screen\", \"variable angle screen\", \"flip-out-and-twist screen\", \"twist-and-tilt screen\" and \"swing-and-tilt screen\".\n", "In television sets and computer monitors, the entire front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In all modern CRT monitors and televisions, the beams are bent by \"magnetic deflection\", a varying magnetic field generated by coils and driven by electronic circuits around the neck of the tube, although electrostatic deflection is commonly used in oscilloscopes, a type of diagnostic instrument.\n", "Photographs of a TV screen taken with a digital camera often exhibit moiré patterns. Since both the TV screen and the digital camera use a scanning technique to produce or to capture pictures with horizontal scan lines, the conflicting sets of lines cause the moiré patterns. To avoid the effect, the digital camera can be aimed at an angle of 30 degrees to the TV screen.\n\nSection::::Implications and applications.:Marine navigation.\n", "An example of pixel shape affecting \"resolution\" or perceived sharpness: displaying more information in a smaller area using a higher resolution makes the image much clearer or \"sharper\". However, most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to \"fix\" the non-native resolution input into the display's native resolution output.\n", "When the liquid crystal material is in its natural state, light passing through the first filter will be rotated (in terms of polarity) by the twisted molecule structure, which allows the light to pass through the second filter. When voltage is applied across the electrodes, the liquid crystal structure is untwisted to an extent determined by the amount of voltage. 
A sufficiently large voltage will cause the molecules to untwist completely, such that the polarity of any light passing through will not be rotated and will instead be perpendicular to the filter polarity. This filter will block the passage of light because of the difference in polarity orientation, and the resulting pixel will be black. The amount of light allowed to pass through at each pixel can be controlled by varying the corresponding voltage accordingly. In a color LCD each pixel consists of red, green, and blue subpixels, which require appropriate color filters in addition to the components mentioned previously. Each subpixel can be controlled individually to display a large range of possible colors for a particular pixel.\n", "LCD classification\n\nThere are various classifications of the electro-optical modes of liquid crystal displays (LCDs).\n\nSection::::LCD operation in a nutshell.\n\nThe operation of TN, VA and IPS-LCDs can be summarized as follows:\n\nBULLET::::- a well aligned LC configuration is deformed by an applied electric field,\n\nBULLET::::- this deformation changes the orientation of the local LC optical axis with respect to the direction of light propagation through the LC layer,\n\nBULLET::::- this change of orientation changes the polarization state of the light propagating through the LC layer,\n", "The conductive qualities and standards-compliance of connecting cables, circuitry and equipment can also alter the electrical signal at any stage in the signal flow. (A partially inserted VGA connector can result in a monochrome display, for example, as some pins are not connected.)\n\nSection::::Color perception.\n", "Once tile data is set up in the nametable, it is a simple matter of adjusting the PPU's X/Y scrolling registers to move the screen around.\n", "Today's displays, being driven by digital signals (such as DVI, HDMI and DisplayPort), and based on newer fixed-pixel digital flat panel technology (such as liquid crystal displays), can safely assume that all pixels are visible to the viewer. On digital displays driven from a digital signal, therefore, no adjustment is necessary because all pixels in the signal are unequivocally mapped to physical pixels on the display. As overscan reduces picture quality, it is undesirable for digital flat panels; therefore, is preferred. When driven by analog video signals such as VGA, however, displays are subject to timing variations and cannot achieve this level of precision.\n", "Section::::The original experiments showing the TI and TAE.\n", "BULLET::::- Luminance/contrast – Displays have adjustments in luminance and contrast to account for ambient lighting, which can vary widely (e.g., from the glare of bright clouds to moonless night approaches to minimally lit fields).\n", "Color can also change depending on viewing angle, using iridescence, for example, in ChromaFlair.\n\nSection::::Art.\n", "Pixel shift for displays is a method to prevent static images (such as station bugs and video game HUD elements) from causing image retention and screen burn-in in susceptible display types such as plasma and OLED. The entire video frame is moved periodically (vertically and/or horizontally) so there are effectively no static images. One definition reads: \"the image rotates in a circle in a way imperceptible to the viewer with a defined rhythm and pixel interval.\"\n", "The firmware on some high end Samsung plasma TVs moves the video horizontally and vertically by some number of pixels every few minutes. 
Some TVs allow the user to define the number of pixels moved and their interval. On Panasonic plasma TVs this technique is named \"Pixel Orbiter\". Sony uses the term Pixel Shift for this technique in its OLED displays, while LG calls it \"Screen Shift\".\n\nPixel shifting is sometimes used with other burn-in prevention methods like screensaver or power management functions.\n\nSection::::Pixel Shift to increase resolution.\n", "Visual tilt effects\n\nDue to the effect of a spatial context or temporal context, the perceived orientation of a test line or grating pattern can appear tilted away from its physical orientation. The tilt illusion (TI) is the phenomenon that the perceived orientation of a test line or grating is altered by the presence of surrounding lines or grating with a different orientation (spatial context; see Fig.1). And the tilt aftereffect (TAE) is the phenomenon that the perceived orientation is changed after prolonged inspection of another oriented line or grating (temporal context; see Fig.2).\n", "Screen angle\n\nIn offset printing, the screen angle is the angle at which the halftones of a separated color is made output to a lithographic film, hence, printed on final product media.\n\nSection::::Why screen angles should differ.\n", "BULLET::::- Viewing angle: The maximum angle at which the display can be viewed with acceptable quality. The angle is measured from one direction to the opposite direction of the display, such that the maximum viewing angle is 180 degrees. Outside of this angle the viewer will see a distorted version of the image being displayed. The definition of what is acceptable quality for the image can be different among manufacturers and display types. Many manufacturers define this as the point at which the luminance is half of the maximum luminance. Some manufacturers define it based on contrast ratio and look at the angle at which a certain contrast ratio is realized.\n", "For example, imagine we have an RGB display whose color is controlled by three sliders ranging from , one controlling the intensity of each of the red, green, and blue primaries. If we begin with a relatively colorful orange , with sRGB values , , , and want to reduce its colorfulness by half to a less saturated orange , we would need to drag the sliders to decrease \"R\" by 31, increase \"G\" by 24, and increase \"B\" by 59, as pictured below.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-02018
How do mantis shrimp see more colors than us? Is it possible to even imagine what these colors would look like?
We have two types of light-sensing cells in our eyes: rods let us see in dim light, while cones let us see colors. Dogs have two different types of cones, humans have three, and mantis shrimp have sixteen; that's the reason they see so many colors. You can learn more in [this infographic]( URL_0 ) by The Oatmeal.
[ "Some species have at least 16 photoreceptor types, which are divided into four classes (their spectral sensitivity is further tuned by colour filters in the retinas), 12 for colour analysis in the different wavelengths (including six which are sensitive to ultraviolet light) and four for analysing polarised light. By comparison, most humans have only four visual pigments, of which three are dedicated to see colour, and human lenses block ultraviolet light. The visual information leaving the retina seems to be processed into numerous parallel data streams leading into the brain, greatly reducing the analytical requirements at higher levels.\n", "Rows 1 to 4 of the midband are specialised for colour vision, from deep ultraviolet to far red. Their UV vision can detect five different frequency bands in the deep ultraviolet. To do this, they use two photoreceptors in combination with four different colour filters. They are not currently believed to be sensitive to infrared light. The optical elements in these rows have eight different classes of visual pigments and the rhabdom (area of eye that absorbs light from a single direction) is divided into three different pigmented layers (tiers), each for different wavelengths. The three tiers in rows 2 and 3 are separated by colour filters (intrarhabdomal filters) that can be divided into four distinct classes, two classes in each row. It is organised like a sandwich - a tier, a colour filter of one class, a tier again, a colour filter of another class, and then a last tier. These colour filters allow the mantis shrimp to see with diverse colour vision. Without the filters, the pigments themselves range only a small segment of the visual spectrum, about 490 to 550 nm. Rows 5 and 6 are also segregated into different tiers, but have only one class of visual pigment, the ninth class, and are specialised for polarization vision. Depending upon the species, they can detect circularly polarized light, linearly polarised light, or both. A tenth class of visual pigment is found in the upper and lower hemispheres of the eye.\n", "Reptiles and amphibians also have four cone types (occasionally five), and probably see at least the same number of colors that humans do, or perhaps more. In addition, some nocturnal geckos have the capability of seeing color in dim light.\n", "Some animals can distinguish colors in the ultraviolet spectrum. The UV spectrum falls outside the human visible range, except for some cataract surgery patients. Birds, turtles, lizards, many fish and some rodents have UV receptors in their retinas. These animals can see the UV patterns found on flowers and other wildlife that are otherwise invisible to the human eye.\n", "Many species can see light with frequencies outside the human \"visible spectrum\". Bees and many other insects can detect ultraviolet light, which helps them to find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to ultraviolet \"colors\" and patterns rather than how colorful they appear to humans. Birds, too, can see into the ultraviolet (300–400 nm), and some have sex-dependent markings on their plumage that are visible only in the ultraviolet range. Many animals that can see into the ultraviolet range, however, cannot see red light or any other reddish wavelengths. For example, bees' visible spectrum ends at about 590 nm, just before the orange wavelengths start. Birds, however, can see some red wavelengths, although not as far into the light spectrum as humans. 
It is a myth that the common goldfish is the only animal that can see both infrared and ultraviolet light; their color vision extends into the ultraviolet but not the infrared.\n", "BULLET::::- In the 1942 novel \"Perelandra\", C. S. Lewis describes the colors of angelic beings when they manifest themselves: \"We think that when creatures of the hypersomatic kind choose to 'appear' to us, they are not in fact affecting our retina at all, but directly manipulating the relevant parts of our brain. If so, it is quite possible that they can produce there the sensations we should have if our eyes were capable of receiving those colours in the spectrum which are actually beyond their range.\" (Chapter 16)\n", "Vertebrate animals such as tropical fish and birds sometimes have more complex color vision systems than humans; thus the many subtle colors they exhibit generally serve as direct signals for other fish or birds, and not to signal mammals. In bird vision, tetrachromacy is achieved through up to four cone types, depending on species. Each single cone contains one of the four main types of vertebrate cone photopigment (LWS/ MWS, RH2, SWS2 and SWS1) and has a colored oil droplet in its inner segment. Brightly colored oil droplets inside the cones shift or narrow the spectral sensitivity of the cell. It has been suggested that it is likely that pigeons are pentachromats.\n", "The most sensitive pigment, rhodopsin, has a peak response at 500 nm. Small changes to the genes coding for this protein can tweak the peak response by a few nm; pigments in the lens can also filter incoming light, changing the peak response. Many organisms are unable to discriminate between colours, seeing instead in shades of grey; colour vision necessitates a range of pigment cells which are primarily sensitive to smaller ranges of the spectrum. In primates, geckos, and other organisms, these take the form of cone cells, from which the more sensitive rod cells evolved. Even if organisms are physically capable of discriminating different colours, this does not necessarily mean that they can perceive the different colours; only with behavioural tests can this be deduced.\n", "Their visual experience of colours is not very different from humans; the eyes are actually a mechanism that operates at the level of individual cones and makes the brain more efficient. This system allows visual information to be preprocessed by the eyes instead of the brain, which would otherwise have to be larger to deal with the stream of raw data, thus requiring more time and energy. While the eyes themselves are complex and not yet fully understood, the principle of the system appears to be simple. It is similar in function to the human eye, but works in the opposite manner. In the human brain, the inferior temporal cortex has a huge number of colour-specific neurons, which process visual impulses from the eyes to create colourful experiences. The mantis shrimp instead uses the different types of photoreceptors in its eyes to perform the same function as the human brain neurons, resulting in a hardwired and more efficient system for an animal that requires rapid colour identification. 
Humans have fewer types of photoreceptors, but more colour-tuned neurons, while mantis shrimps appears to have fewer colour neurons and more classes of photoreceptors.\n", "Postmortem samples of living or recently extinct species, on the other hand, generally allow to obtain MR image qualities sufficient for morphometric analyses, though preservation artifacts would have to be taken into account. Previous MR imaging studies include specimens\n\npreserved in formalin,\n\nby freezing \n\nor in alcohol .\n\nThe third line of comparative evidence would be cross-species in vivo MR imaging studies like the one by Rilling & Insel (1998), who investigated brains from eleven primate species by VBM in order to shed new light on primate brain evolution.\n", "BULLET::::- Cone monochromacy, type II, if its existence were established, would be the case in which the retina contains no rods, and only a single type of cone. Such an animal would be unable to see at all at lower levels of illumination, and of course would be unable to distinguish hues. In practice, it is hard to produce an example of such a retina, at least as the normal condition for a species.\n\nSection::::Animals that are monochromats.\n", "The presence of photoreceptor cell types in an organism's eyes do not directly imply that they are being used to functionally perceive color. Measuring functional spectral discrimination in non-human animals is challenging due to the difficulty in performing psychophysical experiments on creatures with limited behavioral repertoires who cannot respond using language. Limitations in the discriminative ability of shrimp having twelve distinct color photoreceptors have demonstrated that having more cell types in itself need not always correlate with better functional color vision.\n\nSection::::Psychological primaries.\n", "Many species can see light within frequencies outside the human \"visible spectrum\". Bees and many other insects can detect ultraviolet light, which helps them find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to their appearance in ultraviolet light rather than how colorful they appear to humans. Birds, too, can see into the ultraviolet (300–400 nm), and some have sex-dependent markings on their plumage that are visible only in the ultraviolet range. Many animals that can see into the ultraviolet range cannot see red light or any other reddish wavelengths. Bees' visible spectrum ends at about 590 nm, just before the orange wavelengths start. Birds can see some red wavelengths, although not as far into the light spectrum as humans. The popular belief that the common goldfish is the only animal that can see both infrared and ultraviolet light is incorrect, because goldfish cannot see infrared light. Similarly, dogs are often thought to be color blind but they have been shown to be sensitive to colors, though not as many as humans. Some snakes can \"see\" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes, and other snakes with the organ may detect warm bodies from a meter away. It may also be used in thermoregulation and predator detection. (See Infrared sensing in snakes)\n", "The Polka-dot tree frog, widely found in the Amazon was discovered to be the first fluorescent amphibian in 2017. The frog is pale green with dots in white, yellow or light red. 
The fluorescence of the frog was discovered unintentionally in Buenos Aires, Argentina. The fluorescence was traced to a new compound found in the lymph and skin glads. The main fluorescent compound is Hyloin-L1 and it gives a blue-green glow when exposed to violet or ultra violet light. Scientists behind the discovery say that the fluorescence can be used for communication. They also think that about 100 or 200 species of frogs are likely to be fluorescent.\n", "Because of its very small size, picoplankton is difficult to study by classic methods such as optical microscopy. More sophisticated methods are needed.\n\nBULLET::::- Epifluorescence microscopy allows researchers to detect certain groups of cells possessing fluorescent pigments such as \"Synechococcus\" which possess phycoerythrin.\n", "Marshall’s research focuses on neuroethology, understanding how animals perceive their environment, and also how the brains and sensory systems of animals in the real world have been shaped by their environment and needs, particularly their visual systems.\n\nHis study of the mantis shrimp revealed it has the world’s most complex visual system of any known animal, with 12-channel colour channels. His research also showed that octopus and other cephalopods are colour blind.\n", "The huge diversity seen in mantis shrimp photoreceptors likely comes from ancient gene duplication events. One interesting consequence of this duplication is the lack of correlation between opsin transcript number and physiologically expressed photoreceptors. One species may have six different opsin genes, but only express one spectrally distinct photoreceptor. Over the years, some mantis shrimp species have lost the ancestral phenotype, although some still maintain 16 distinct photoreceptors and four light filters. Species that live in a variety of photic environments have high selective pressure for photoreceptor diversity, and maintain ancestral phenotypes better than species that live in murky waters or are primarily nocturnal.\n", "The photo-receptivity of the \"eyes\" of other species also varies considerably from that of humans and so results in correspondingly [[Color vision#In other animal species|different \"color\" perceptions]] that cannot readily be compared to one another. [[Honey bee|Honeybees]] and [[bumblebee]]s for instance have trichromatic color vision sensitive to [[ultraviolet]] but is insensitive to red. [[Papilio]] butterflies possess six types of photoreceptors and may have [[Pentachromacy|pentachromatic]] vision. The most complex color vision system in the animal kingdom has been found in [[stomatopod]]s (such as the [[mantis shrimp]]) with up to 12 spectral receptor types thought to work as multiple dichromatic units.\n", "Although near-infrared vision (780–1000 nm) has long been deemed impossible due to noise in visual pigments, sensation of near-infrared light was reported in the common carp and in three cichlid species. Fish use NIR to capture prey and for phototactic swimming orientation. NIR sensation in fish may be relevant under poor lighting conditions during twilight and in turbid surface waters.\n\nSection::::Applications.:Photobiomodulation.\n", "Most other mammals are currently thought to be dichromats, with only two types of cone (though limited trichromacy is possible at low light levels where the rods and cones are both active). Most studies of carnivores, as of other mammals, reveal dichromacy, examples including the domestic dog, the ferret, and the spotted hyena. 
Some species of insects (such as honeybees) are also trichromats, being sensitive to ultraviolet, blue and green instead of blue, green and red.\n", "Tritanopes are missing the short wavelength sensitive opsins and see short wavelength colours in a green hue and dim compared to other colours. They may also see some short wavelength colours as black. Other perception problems include distinguishing yellow from pink or purple colours being perceived as shades of red.\n\nSection::::Drivers for Color Vision Evolution.\n\nSection::::Drivers for Color Vision Evolution.:Food Foraging.\n", "Other animals, such as tropical fish and birds, have more complex color vision systems than humans. There is evidence that ultraviolet light plays a part in color perception in many branches of the animal kingdom, especially for insects; however, there has not been enough evidence to prove this. It has been suggested that it is likely that pigeons are pentachromats. \"Papilio\" butterflies apparently have tetrachromatic color vision despite possessing six photoreceptor types. The most complex color vision system in animal kingdom has been found in stomatopods with up to 12 different spectral receptor types which are thought to work as multiple dichromatic units.\n", "The Nayatani et al. color appearance model focuses on illumination engineering and the color rendering properties of light sources.\n\nSection::::Color appearance models.:Hunt model.\n", "Humans and primates are unique as they possess trichromatic color vision, and are able to discern between violet [short wave (SW)], green [medium wave (MW)], and yellow-green [long wave (LW)]. Mammals other than primates generally have less effective two-receptor color perception systems, allowing only dichromatic color vision; marine mammals have only a single cone type and are thus monochromats. Honey- and bumblebees have trichromatic color vision, which is insensitive to red but sensitive in ultraviolet to a color called \"bee purple\".\n", "The basis for this variation is the number of cone types that differ between species. Mammals in general have color vision of a limited type, and usually have red-green color blindness, with only two types of cones. Humans, some primates, and some marsupials see an extended range of colors, but only by comparison with other mammals. Most non-mammalian vertebrate species distinguish different colors at least as well as humans, and many species of birds, fish, reptiles and amphibians, and some invertebrates, have more than three cone types and probably superior color vision to humans.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-13924
Why can you set system clocks to before the hardware was released?
Most computers reckon time by counting the seconds from some fixed date far in the past. Unix-based OSs count from 1970; DOS and early Windows count from 1980. Put in 441763200 and the computer automatically understands it as 1984 (or roughly 1994, depending on which epoch is used). Also, while it might seem silly to set your clock to a time that far back, it is important for your computer to be able to understand dates in the past. If you want to track the mortgage you got in 1998, it is useful for the computer to use one system to track all dates.
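To make the "seconds since an epoch" idea concrete, here is a minimal Python sketch; 441763200 is the figure quoted in the answer above, and the 1980 date is used as an illustrative DOS/FAT-style epoch rather than a specific Windows API.

```python
from datetime import datetime, timedelta, timezone

seconds = 441_763_200

# Unix-style clocks count seconds from 1970-01-01 00:00:00 UTC.
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(unix_epoch + timedelta(seconds=seconds))  # 1984-01-01 00:00:00+00:00

# The same count measured from a 1980-01-01 epoch (as DOS/FAT timestamps use)
# lands at the very end of 1993, roughly the "1994" mentioned above.
dos_epoch = datetime(1980, 1, 1, tzinfo=timezone.utc)
print(dos_epoch + timedelta(seconds=seconds))   # 1993-12-31 00:00:00+00:00
```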
[ "BULLET::::- In 1994 Apogee Software released the video game \"Rise of the Triad\" for the DOS operating system. If the internal clock of the player's computer indicated that the date was 25 December, the game's background music was replaced with an up-beat arrangement of \"God Rest You Merry, Gentlemen,\" by Lee Jackson, called \"God Rest Ye Deadly Gentlemen.\"\n", "Most OEM systems do not expose to the user the adjustments needed to change processor clock speed or voltage, which precludes overclocking (for warranty and support reasons). The same processor installed on a different motherboard offering adjustments will allow the user to change them.\n", "Section::::Overclocking of unsupported processors.\n\nOfficially Intel supported overclocking of only the \"K\" and \"X\" versions of Skylake processors. However, it was later discovered that other \"non-K\" chips could be overclocked by modifying the base clock value – a process made feasible by the base clock applying only to the CPU, RAM, and integrated graphics on Skylake. Through beta UEFI firmware updates, some motherboard vendors, such as ASRock (which prominently promoted it under the name \"Sky OC\") allowed the base clock to be modified in this manner.\n", "Testing was running around the clock during December. Technicians were testing the CSM's fuel systems during the day and the testing was running on the rocket at night.\n\nThere was even an instance of a variant of the Y2K bug in the computer. As it ran past midnight, when the time changed from 2400 to 0001 the computer could not handle it and \"turned into a pumpkin\" according to an interview with Frank Bryan, a Kennedy Space Center Launch Vehicle Operations Engineering staff member.\n", "Earlier Apple III units came with a built-in real time clock. The hardware, however, would fail after prolonged use. Assuming that National Semiconductor would test all parts before shipping them, Apple did not perform this level of testing. Apple was soldering chips directly to boards and could not easily replace a bad chip if one was found. Eventually, Apple solved this problem by removing the real-time clock from the Apple III's specification rather than shipping the Apple III with the clock pre-installed, and then sold the peripheral as a level 1 technician add-on.\n\nSection::::BASIC.\n", "Due to the expense of the SGI workstations and computer networks at the time, many system administrators removed \"dogfight\" from newly installed systems in order to prevent abuse of resources, or limited play to restricted off-peak hours.\n", "On March 1, 2010 (UTC), many of the original \"fat\" PlayStation 3 models worldwide were experiencing errors related to their internal system clock. The error had many symptoms. Initially, the main problem seemed to be the inability to connect to the PlayStation Network. However, the root cause of the problem was unrelated to the PlayStation Network, since even users who had never been online also had problems playing installed offline games (which queried the system timer as part of startup) and using system themes. At the same time many users noted that the console's clock had gone back to December 31, 1999. The event was nicknamed the ApocalyPS3, a play on the word \"apocalypse\" and PS3, the abbreviation for the PlayStation 3 console.\n", "On January 6, 2009 a hacking ring known as the \"\"Sh4d0ws\"\" leaked the jig files needed to launch the PlayStation 3 into service mode. 
Although the PlayStation 3 can be triggered into service mode, it is not yet of any use because the files needed to make changes to the console have not been leaked.\n", "Sony confirmed that there was an error and stated that it was narrowing down the issue and were continuing to work to restore service. By March 2 (UTC), 2010, owners of original PS3 models could connect to PSN successfully and the clock no longer showed December 31, 1999. Sony stated that the affected models incorrectly identified 2010 as a leap year, because of a bug in the BCD method of storing the date. However, for some users, the hardware's operating system clock (mainly updated from the internet and not associated with the internal clock) needed to be updated manually or by re-syncing it via the internet.\n", "As of May 2008, there is a superior exploit called Free McBoot, which is applicable to all PS2s including Slimlines except for SCPH-9000x models with BIOS 2.30 and up, where the exploit was patched by Sony. Manufacturing of such homebrew-proof models started in the third quarter of 2008, which is denoted as date code 8C on the console, although some consoles of this line still have the old unpatched 2.20 BIOS.\n", "First generation units, having control ROM versions below 2.00, require a 40 millisecond delay between system exclusive messages. Some computer games which were programmed to work with the compatible modules (see above) or later ROM versions that do not require this delay, fail to work with these units, producing incorrect sounds or causing the firmware to lock up due to a buffer overflow bug, requiring turning the unit off and on. However, some games were designed to exploit errors in earlier units, causing incorrect sound on later revisions.\n", "Most first-generation personal computers did not keep track of dates and times. These included systems that ran the CP/M operating system, as well as early models of the Apple II, the BBC Micro, and the Commodore PET, among others. Add-on peripheral boards that included real-time clock chips with on-board battery back-up were available for the IBM PC and XT, but the IBM AT was the first widely available PC that came equipped with date/time hardware built into the motherboard. Prior to the widespread availability of computer networks, most personal computer systems that did track system time did so only with respect to local time and did not make allowances for different time zones.\n", "If performance engineering has been properly applied at each iteration and phase of the project to this point, hopefully this will be sufficient to enable the system to receive performance certification. However, if for some reason (perhaps proper performance engineering working practices were not applied) there are tests that cannot be tuned into compliance, then it will be necessary to return portions of the system to development for refactoring. In some cases the problem can be resolved with additional hardware, but adding more hardware leads quickly to diminishing returns.\n\nSection::::Performance engineering approach.:Transition.\n", "A worsening of the situation occurred when, officially due to a software bug, it came to light that many Bravia televisions were predisposed with an operating time of about 1200 hours, before stopping functioning; Stranger still was the fact that, used for a period of about 3 hours a day, devices would stop working exactly after the expiry of Sony warranty. 
The Tokyo's company denied any direct responsibility and announced to release software patches as a solution, desperately trying to limit the rumours about the problem before they spread to Europe, where the company's presence was very strong, and admitting: \"Our products are not designed to work badly\".\n", "OS/360 used the Interval Timer feature for providing time of day and for triggering time-dependent events. The support for S/370 made limited use of new timing facilities, but retained a dependency on the Interval Timer. SVS uses the TOD Clock, Clock Comparator and CPU Timer exclusively.\n", "Section::::Sixth-generation consoles.:PlayStation 2.\n\nEarly versions of the PlayStation 2 have a buffer overflow bug in the part of the BIOS that handles PS1 game compatibility; hackers found a way to turn this into a loophole called the PS2 Independence Exploit, allowing the use of homebrew software. Another option for homebrew development is the use of a modchip. Also, it is possible for developers to utilize a PS2 hard drive and HD Loader.\n", "Setting the time involves writing the appropriate BCD values into the registers. A write access to the hours register will completely halt the clock. The clock will not start again until a value has been written into the tenths register. Owing to the order in which the registers appear in the system's memory map, a simple loop is all that is required to write the registers in the correct order. It is permissible to write to only the tenths register to \"nudge\" the clock into action, in which following a hardware reset, the clock will start at 1:00:00.0.\n", "On Windows platforms, Microsoft strongly discourages using the TSC for high-resolution timing for exactly these reasons, providing instead the Windows APIs codice_7 and codice_8. On POSIX systems, a program can get similar function by reading the value of codice_9 clock using the codice_10 function.\n", "The first problem encountered by NASA came on October 7. The RCA 110A computer which would test the rocket and thus automating the process was ten days behind schedule meaning that it would not be at the Cape before November 1. This meant that by the middle of October little could be done at the pad. When the computer finally did arrive it continued to have problems with the punch cards and also the capacitors that did not operate well under a protective coating. In the end however the testing of the launch vehicle was still on schedule.\n", "Section::::History.\n\nAt its launch in November 2013, the Xbox One did not have native backward compatibility with original Xbox or Xbox 360 games. Xbox Live director of programming Larry \"Major Nelson\" Hryb suggested users could use the HDMI-in port on the console to pass an Xbox 360 or any other device with HDMI output through Xbox One. Senior project management and planning director Albert Penello explained that Microsoft was considering a cloud gaming platform to enable backward compatibility, but he felt it would be \"problematic\" due to varying internet connection qualities.\n\nSection::::History.:Xbox 360.\n", "Ports were announced for the Super NES and Genesis/Mega Drive, with intended release in Spring 1994, but Nintendo had the Super NES version cancelled early that Spring, while the Genesis/Mega Drive version's release date was pushed back. Two months later the Genesis/Mega Drive version was cancelled entirely, even though developer THQ had already completed it. 
According to a journalist for \"GamePro\", \"Reportedly, the game was considered too explicit. It also had a poor test run among reviewers who saw the preview copy.\"\n", "Also, in December 2006 it was revealed that one reason for the delay was a lawsuit brought against Canon by Applied Nanotech. On 25 May 2007, Canon announced that the prolonged litigation would postpone the launch of SED televisions, and a new launch date would be announced at some date in the future.\n", "The Neo-Geo Home Cart and Arcade Systems can be tough candidates for homebrew development. Neo-Geo AES and MVS cartridges have two separate boards: one for video, and one for sound. If programming a cartridge for the system were to occur, it would involve replacing the old ROM chips with one's newly programmed ones as the cartridges are in a sense, Arcade boards. NGDevTeam who have released \"Fast Striker\" and \"Gunlord\" found a workaround with this. What they did was print out their own board, and soldered their own ROM chips into them; this, however, can cause the Universe Bios logo to look corrupted if a custom bios were to be programmed. Programming for the Neo-Geo CD, however is easier than programming for cartridges. The CDs themselves can actually contain both sound and video respectively. Depending on the Megabit count for a game program, load times will vary. A CD game with low Megabit counts will load only one time; whereas a CD game with higher megabit counts could load in between scenes, or rounds. There are now some full games scheduled for release in physical form, such as \"Neo Xyx\".\n", "Among elements of the first E3 that would continue into future events were large press conferences by the major companies (here, Sony, Sega, and Nintendo) showcasing their up-and-coming hardware and software. Notably, at point, both Sega and Sony were ready to introduce new hardware for Western releases, the Sega Saturn and the PlayStation, respectively. Sega's conference was first, and while Kalinske announced that the Saturn would be immediately available in stores, they were notified soon after that supplies were more limited than thought. During Sony's presentation, after covering many of the PlayStation's games, Steve Race, the lead for bringing the PlayStation to the United States, came on stage, said \"two-ninety-nine\" and then left, revealing that the price of the PlayStation was less than that of the Saturn. The moment is considered one of the first proverbial mic drop moments in E3's history, and would continue a trend as each company would try to outdo others at these press events.\n", "Homebrew has now become available on most if not all Xbox 360 consoles due to the Reset Glitch Hack. So far it works on all current dashboards up to as of now the latest 17526 dashboard. Although it can run unsigned code some hardware is required to do the hack/exploit. Also soldering skills are a necessity when attempting to use this exploit.\n\nSection::::Seventh-generation consoles.:PlayStation 3.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-04362
How does the ohms resistance in coaxial cable affect the data flow?
> RG59 coaxial cable provides 75 ohms of resistance

The 75 Ohm figure for RG59 is the **characteristic impedance** of the cable, not its resistance. The characteristic impedance of a transmission line is not the same thing as its resistance. There's some good info [here]( URL_0 ). The resistance of the transmission line determines the attenuation of the signal being sent through it.
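To show that the 75 Ω figure comes from geometry and dielectric rather than from conductor resistance, here is a minimal sketch of the standard coax formula Z0 = (138 / sqrt(eps_r)) * log10(D / d). The dimensions and dielectric constant below are ballpark RG-59-like values chosen for illustration, not manufacturer specifications.

```python
import math

def coax_impedance(shield_id_mm, center_dia_mm, eps_r):
    """Characteristic impedance of a coaxial line: Z0 = (138 / sqrt(eps_r)) * log10(D / d) ohms."""
    return (138.0 / math.sqrt(eps_r)) * math.log10(shield_id_mm / center_dia_mm)

# Ballpark RG-59-like geometry: 0.58 mm center conductor, 3.7 mm dielectric
# diameter, solid polyethylene dielectric (eps_r around 2.25).
print(round(coax_impedance(3.7, 0.58, 2.25), 1))  # roughly 74 ohms, close to the nominal 75
```

The DC resistance of the same cable is a separate, ordinarily much smaller quantity; it contributes to attenuation along the line rather than to the 75 Ω rating.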
[ "BULLET::::- Peak Voltage. The peak voltage is set by the breakdown voltage of the insulator. One website gives:\n\nSection::::Important parameters.:Choice of impedance.\n", "In many cases, the same single coax cable carries power in the opposite direction, to the antenna, to power the low-noise amplifier.\n\nIn some cases a single coax cable carries (unidirectional) power and bidirectional data/signals, as in DiSEqC.\n\nSection::::Types.\n\nSection::::Types.:Hard line.\n", "Coaxial cables form a transmission line and confine the electromagnetic wave inside the cable between the center conductor and the shield. The transmission of energy in the line occurs totally through the dielectric inside the cable between the conductors. Coaxial lines can therefore be bent and twisted (subject to limits) without negative effects, and they can be strapped to conductive supports without inducing unwanted currents in them.\n\nEarly Ethernet, 10BASE5 and 10BASE2, used baseband signaling over coaxial cables. In the 20th century the L-carrier system used coaxial cable for long-distance calling.\n", "Signals transmitted over consumer-grade TOSLINK connections are identical in content to those transmitted over coaxial connectors, though TOSLINK S/PDIF commonly exhibits higher jitter.\n\nSection::::Protocol specifications.\n", "The net electrical inductance is due to all three contributions:\n\nformula_57 is not changed by the skin effect and is given by the frequently cited formula for inductance \"L\" per length \"D\" of a coaxial cable:\n\nAt low frequencies, all three inductances are fully present so that formula_64.\n\nAt high frequencies, only the dielectric region has magnetic flux, so that formula_65.\n\nMost discussions of coaxial transmission lines assume they will be used for radio frequencies, so equations are supplied corresponding only to the latter case.\n", "Section::::Issues.:Common mode current and radiation.\n\nCommon mode current occurs when stray currents in the shield flow in the same direction as the current in the center conductor, causing the coax to radiate.\n", "In the case of coaxial cable, where all of the volume in between the inner conductor and the shield is filled with a dielectric, the fill factor is unity, since the electromagnetic wave is confined to that region. In other types of cable, such as twin lead, the fill factor can be much smaller. Regardless, any cable intended for radio frequencies will have its velocity factor (as well as its characteristic impedance) specified by the manufacturer. In the case of coaxial cable, where F=1, the velocity factor is solely determined by the sort of dielectric used as specified here.\n", "BULLET::::- Velocity of propagation, in meters per second. 
The velocity of propagation depends on the dielectric constant and permeability (which is usually 1).\n", "Section::::Skin effect reduction of the internal inductance of a conductor.:Inductance per length in a coaxial cable.\n\nLet the dimensions \"a\", \"b\", and \"c\" be the inner conductor radius, the shield (outer conductor) inside radius and the shield outer radius respectively, as seen in the crossection of figure A below.\n\nFor a given current, the total energy stored in the magnetic fields must be the same as the calculated electrical energy attributed to that current flowing through the inductance of the coax; that energy is proportional to the cable's measured inductance.\n", "BULLET::::- Now, if we apply short circuit at the receiving end , the effective receiving end voltage will be zero (i.e. V = 0)\n\n1.formula_34\n\nSo, the parameter B is the ratio of sending end voltage to receiving end current, thus called the transfer impedance and the unit of C is Ohm (Ω).\n\n2.formula_35\n\nSo, the parameter D is the ratio of sending end current to receiving end current, thus called the current ratio. Being the ratio of two same quantities, the patameter D is unitless.\n\nSection::::Modelling of Transmission Lines.:Transmission Matrix & ABCD Parameters.:ABCD parameter values.\n", "BULLET::::- Shunt capacitance per unit length, in farads per metre.\n\nBULLET::::- Series inductance per unit length, in henrys per metre.\n\nBULLET::::- Series resistance per unit length, in ohms per metre. The resistance per unit length is just the resistance of inner conductor and the shield at low frequencies. At higher frequencies, skin effect increases the effective resistance by confining the conduction to a thin layer of each conductor.\n", "If several such measurements are made between pairs of contacts that are separated by different distances, a plot of resistance versus contact separation can be obtained. If the contact separation is expressed in terms of the ratio L/W - where L and W are the length and width of the area between the contacts - such a plot should be linear, with the slope of the line being the sheet resistance. The intercept of the line with the y-axis, is two times the contact resistance. Thus the sheet resistance as well as the contact resistance can be determined from this technique.\n", "Coaxial cable is a particular kind of transmission line, so the circuit models developed for general transmission lines are appropriate. See Telegrapher's equation.\n\nSection::::Important parameters.:Physical parameters.\n\nIn the following section, these symbols are used:\n\nBULLET::::- Length of the cable, formula_2.\n\nBULLET::::- Outside diameter of \"inner\" conductor, formula_3.\n\nBULLET::::- Inside diameter of the shield, formula_4.\n", "where formula_4 is the diameter of the bigger conductor and formula_3 is the diameter of the smaller conductor. The capacitance can then be solved by substitution,\n\nformula_35\n\nand the inductance is taken from Ampere's Law for two concentric conductors (coaxial wire) and with the definition of inductance,\n\nformula_36 and formula_37\n\nwhere formula_38 is magnetic induction, formula_39 is the permeability of free space, formula_40 is the magnetic flux and formula_41 is the differential surface. 
Taking the inductance per meter,\n\nformula_42,\n\nSubstituting the derived capacitance and inductance,\n\nformula_43\n\nSection::::Issues.\n\nSection::::Issues.:Signal leakage.\n", "Section::::C.:Coaxial Cable.\n\nA particular type of cable capable of passing a wide range of frequencies with very low signal loss. Such a cable in its simplest form consists of a hollow metallic shield with a single wire accurately placed along the center of the shield and isolated from the shield.\n\nSection::::C.:CODEC (Coding/Decoding).\n", "BULLET::::- The notion of \"mutual capacitance\" is particularly important for understanding the operations of the capacitor, one of the three elementary linear electronic components (along with resistors and inductors).In electrical circuits, the term \"capacitance\" is usually a shorthand for the \"mutual capacitance\" between two adjacent conductors, such as the two plates of a capacitor.\n\nSection::::Line Parameters of AC transmission.:Shunt Capacitance.:Charecteristics.\n", "In the case of a coaxial cable, there is a closed-form solution. The resistive surface is considered to be a series of infinitesimal annular rings, each having a width of \"dρ\" and a resistance of (\"η\"/2π\"ρ\")\"dρ\". The resistance between the inner electrode and the outer electrode is just the integral over all such rings.\n\nThis is exactly the equation for the characteristic impedance of a coaxial cable in free space.\n\nSection::::Examples.:Calculating surface resistance from characteristic impedance.\n\nThe characteristic impedance of a two parallel wire transmission line is given by\n", "External fields create a voltage across the inductance of the outside of the outer conductor between sender and receiver. The effect is less when there are several parallel cables, as this reduces the inductance and, therefore, the voltage. Because the outer conductor carries the reference potential for the signal on the inner conductor, the receiving circuit measures the wrong voltage.\n\nSection::::Issues.:Noise.:Transformer effect.\n", "Another situation for which this formula is not exact is with alternating current (AC), because the skin effect inhibits current flow near the center of the conductor. For this reason, the \"geometrical\" cross-section is different from the \"effective\" cross-section in which current actually flows, so resistance is higher than expected. Similarly, if two conductors near each other carry AC current, their resistances increase due to the proximity effect. At commercial power frequency, these effects are significant for large conductors carrying large currents, such as busbars in an electrical substation, or large power cables carrying more than a few hundred amperes.\n", "Since these currents are larger than in the original standard, the extra voltage drop in the cable reduces noise margins, causing problems with High Speed signaling. 
Battery Charging Specification 1.1 specifies that charging devices must dynamically limit bus power current draw during High Speed signaling; 1.2 specifies that charging devices and ports must be designed to tolerate the higher ground voltage difference in High Speed signaling.\n", "where for twin-lead line the primary line constants are\n\nwhere the surface resistance of the wires is\n\nand where \"d\" is the wire diameter and \"D\" is the separation of the wires measured between their centrelines.\n\nNeglecting the wire resistance \"R\" and the leakage conductance \"G\", this gives\n\nwhere \"Z\" is the impedance of free space (approximately 377 Ω), \"ε\" is the effective dielectric constant (which for air is 1.00054). If the separation \"D\" is much greater than the wire diameter \"d\" then this is approximately\n\nThe separation needed to achieve a given characteristic impedance is therefore\n", "A common type of 75 ohm coaxial cable is cable television (CATV) distribution coax, used to route cable television signals to and within homes. CATV distribution coax typically has a copper-clad steel (CCS) center conductor and a combination aluminum foil/aluminum braid shield, typically with low coverage (about 60%). 75 ohm cables are also used in professional video applications, carrying either base band analog video signals or serial digital interface (SDI) signals; in these applications, the center conductor is ordinarily solid copper, the shielding is much heavier (typically aluminum foil, and 95% copper braid), and tolerances are more tightly controlled, to improve impedance stability.\n", "Section::::Types.:RG-6.\n", "Section::::Description.\n", "At low frequencies and under small-signal conditions, the circuit in Figure 1 can be represented by that in Figure 2, where the hybrid-pi model for the BJT has been employed. The input signal is represented by a Thévenin voltage source \"v\" with a series resistance \"R\" and the load is a resistor \"R\".\n\nThis circuit can be used to derive the following characteristics of the common base amplifier.\n\nIn general, the overall voltage/current gain may be substantially less than the open/short-circuit gains listed above (depending on the source and load resistances) due to the loading effect.\n\nSection::::Low-frequency characteristics.:Active loads.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-22833
How can photons infinitely move forward if perpetual motion is considered impossible?
A perpetual motion *machine* is not possible. A system only has so much energy to give, so nothing can keep doing work or emitting energy forever. Photons don't do any work until they hit something and are absorbed (destroyed), so the travel time and distance are somewhat irrelevant.
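As a small numeric aside (an illustrative calculation, not taken from the passages below): a photon's energy is fixed by its frequency, E = hf = hc/λ, and in empty space it carries that same energy however far it travels, handing it over only when it is absorbed.

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 299_792_458.0     # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

wavelength = 500e-9   # a green photon, 500 nm
energy_joules = H * C / wavelength
print(energy_joules, energy_joules / EV)  # ~3.97e-19 J, i.e. ~2.48 eV, whether it travels a metre or a megaparsec
```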
[ "This can be made clearer by writing the propagator in the following form for a massless photon,\n\nThis is the usual definition but normalised by a factor of formula_40. Then the rule is that one only takes the limit formula_41 at the end of a calculation.\n\nOne sees that \n\nand\n\nHence this means a single photon will always stay on the light cone. It is also shown that the total probability for a photon at any time must be normalised by the reciprocal of the following factor:\n", "Because a parametric process prohibits a net change in the energy state of the system, parametric processes are \"instantaneous\". For example, if an atom absorbs a photon with energy E, the atom's energy increases by ΔE = E, but as a parametric process, the quantum state cannot change and thus the elevated energy state must be a temporary virtual state. By the Heisenberg Uncertainty Principle we know that ΔEΔt~ħ/2, thus the lifetime of a parametric process is roughly Δt~ħ/2ΔE, which is appreciably small for any non-zero ΔE.\n\nSection::::Parametric versus non-parametric processes.\n\nSection::::Parametric versus non-parametric processes.:Linear optics.\n", "Thus, machines that extract energy from finite sources will not operate indefinitely, because they are driven by the energy stored in the source, which will eventually be exhausted. A common example is devices powered by ocean currents, whose energy is ultimately derived from the Sun, which itself will eventually burn out. Machines powered by more obscure sources have been proposed, but are subject to the same inescapable laws, and will eventually wind down.\n", "Although there is only one electronic transition from the excited state to ground state, there are many ways in which the electromagnetic field may go from the ground state to a one-photon state. That is, the electromagnetic field has infinitely more degrees of freedom, corresponding to the different directions in which the photon can be emitted. Equivalently, one might say that the phase space offered by the electromagnetic field is infinitely larger than that offered by the atom. This infinite degree of freedom for the emission of the photon results in the apparent irreversible decay, i.e., spontaneous emission.\n", "A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist, that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. For systems which do not have time translation symmetry, it may not be possible to define conservation of energy. Examples include curved spacetimes in general relativity or time crystals in condensed matter physics.\n\nSection::::History.\n", "that is, there is no probability for a transition, and the system is in the initial state after cessation of the perturbation. Such a slow perturbation is therefore reversible, as it is classically.\"\n", "Section::::Relativistic propagators.:Faster than light?\n\nThe Feynman propagator has some properties that seem baffling at first. In particular, unlike the commutator, the propagator is \"nonzero\" outside of the light cone, though it falls off rapidly for spacelike intervals. Interpreted as an amplitude for particle motion, this translates to the virtual particle travelling faster than light. 
It is not immediately obvious how this can be reconciled with causality: can we use faster-than-light virtual particles to send faster-than-light messages?\n", "Section::::Early objections.\n", "For non-rotating black holes, the photon sphere is a sphere of radius 3/2 \"r\". There are no stable free fall orbits that exist within or cross the photon sphere. Any free fall orbit that crosses it from the outside spirals into the black hole. Any orbit that crosses it from the inside escapes to infinity or falls back in and spirals into the black hole. No unaccelerated orbit with a semi-major axis less than this distance is possible, but within the photon sphere, a constant acceleration will allow a spacecraft or probe to hover above the event horizon.\n", "Two photons moving in different directions cannot both be made to have arbitrarily small total energy by changing frames, or by moving toward or away from them. The reason is that in a two-photon system, the energy of one photon is decreased by chasing after it, but the energy of the other increases with the same shift in observer motion. Two photons not moving in the same direction comprise an inertial frame where the combined energy is smallest, but not zero. This is called the center of mass frame or the center of momentum frame; these terms are almost synonyms (the center of mass frame is the special case of a center of momentum frame where the center of mass is put at the origin). The most that chasing a pair of photons can accomplish to decrease their energy is to put the observer in a frame where the photons have equal energy and are moving directly away from each other. In this frame, the observer is now moving in the same direction and speed as the center of mass of the two photons. The total momentum of the photons is now zero, since their momenta are equal and opposite. In this frame the two photons, as a system, have a mass equal to their total energy divided by . This mass is called the invariant mass of the pair of photons together. It is the smallest mass and energy the system may be seen to have, by any observer. It is only the invariant mass of a two-photon system that can be used to make a single particle with the same rest mass.\n", "Even machines which extract energy from long-lived sources - such as ocean currents - will run down when their energy sources inevitably do. They are not perpetual motion machines because they are consuming energy from an external source and are not isolated systems.\n\nSection::::Basic principles.:Classification.\n\nOne classification of perpetual motion machines refers to the particular law of thermodynamics the machines purport to violate:\n\nBULLET::::- A perpetual motion machine of the first kind produces work without the input of energy. It thus violates the first law of thermodynamics: the law of conservation of energy.\n", "Thinking of Feynman diagrams as a perturbation series, nonperturbative effects like tunneling do not show up, because any effect that goes to zero faster than any polynomial does not affect the Taylor series. Even bound states are absent, since at any finite order particles are only exchanged a finite number of times, and to make a bound state, the binding force must last forever.\n", "BULLET::::- Vacuum energy and zero-point energy: In order to explain effects such as virtual particles and the Casimir effect, many formulations of quantum physics include a background energy which pervades empty space, known as vacuum or zero-point energy. 
The ability to harness zero-point energy for useful work is considered pseudoscience by the scientific community at large. Inventors have proposed various methods for extracting useful work from zero-point energy, but none have been found to be viable, no claims for extraction of zero-point energy have ever been validated by the scientific community, and there is no evidence that zero-point energy can be used in violation of conservation of energy.\n", "In quantum field theory the Heisenberg uncertainty relations indicate that photons can travel at any speed for short periods. In the Feynman diagram interpretation of the theory, these are known as \"virtual photons\", and are distinguished by propagating off the mass shell. These photons may have any velocity, including velocities greater than the speed of light. To quote Richard Feynman \"...there is also an amplitude for light to go faster (or slower) than the conventional speed of light. You found out in the last lecture that light doesn't go only in straight lines; now, you find out that it doesn't go only at the speed of light! It may surprise you that there is an amplitude for a photon to go at speeds faster or slower than the conventional speed, \"c\".\" These virtual photons, however, do not violate causality or special relativity, as they are not directly observable and information cannot be transmitted acausally in the theory. Feynman diagrams and virtual photons are usually interpreted not as a physical picture of what is taking place, but rather as a convenient calculation tool (which, in some cases, happen to involve faster-than-light velocity vectors).\n", "where formula_26 is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate formula_27 for the emission of photons of frequency formula_17 and transition from a higher energy formula_21 to a lower energy formula_20 is\n", "Later, in 1916 Einstein also showed that the recoil of molecules during the emission and absorption of photons was consistent with, and necessary for, a quantum description of thermal radiation processes. Each photon acts as if it imparts a momentum impulse \"p\" equal to its energy divided by the speed of light, ().\n", "Scientific investigations as to whether the laws of physics are invariant over time use telescopes to examine the universe in the distant past to discover, to the limits of our measurements, whether ancient stars were identical to stars today. Combining different measurements such as spectroscopy, direct measurement of the speed of light in the past and similar measurements demonstrates that physics has remained substantially the same, if not identical, for all of observable time spanning billions of years.\n", "Single-photon sources are light sources that emit light as single particles or photons. They are distinct from coherent light sources (lasers) and thermal light sources such as incandescent light bulbs. The Heisenberg uncertainty principle dictates that a state with an exact number of photons of a single frequency cannot be created. However, Fock states (or number states) can be studied for a system where the electric field amplitude is distributed over a narrow bandwidth. In this context, a single-photon source gives rise to an effectively one-photon number state. 
Photons from an ideal single-photon source exhibit quantum mechanical characteristics. These characteristics include photon antibunching, so that the time between two successive photons is never less than some minimum value.\n", "SPDC allows for the creation of optical fields containing (to a good approximation) a single photon. As of 2005, this is the predominant mechanism for experimenter to create single photons (also known as Fock states). The single photons as well as the photon pairs are often used in quantum information experiments and applications like quantum cryptography and Bell test experiments.\n", "The answer is no: while in classical mechanics the intervals along which particles and causal effects can travel are the same, this is no longer true in quantum field theory, where it is commutators that determine which operators can affect one another.\n", "Regarding virtual particles, the propagator at spacelike separation can be thought of as a means of calculating the amplitude for creating a virtual particle-antiparticle pair that eventually disappears into the vacuum, or for detecting a virtual pair emerging from the vacuum. In Feynman's language, such creation and annihilation processes are equivalent to a virtual particle wandering backward and forward through time, which can take it outside of the light cone. However, no signaling back in time is allowed.\n\nSection::::Relativistic propagators.:Faster than light?:Explanation using limits.\n", "Photons have many applications in technology. These examples are chosen to illustrate applications of photons \"per se\", rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an extremely important application and is discussed above under stimulated emission.\n", "The action is a \"functional\" rather than a \"function\", since it depends on the Lagrangian, and the Lagrangian depends on the path q(\"t\"), so the action depends on the \"entire\" \"shape\" of the path for all times (in the time interval from \"t\" to \"t\"). Between two instants of time, there are infinitely many paths, but one for which the action is stationary (to the first order) is the true path. The stationary value for the \"entire continuum\" of Lagrangian values corresponding to some path, \"not just one value\" of the Lagrangian, is required (in other words it is \"not\" as simple as \"differentiating a function and setting it to zero, then solving the equations to find the points of maxima and minima etc\", rather this idea is applied to the entire \"shape\" of the function, see calculus of variations for more details on this procedure).\n", "The time lag between the incidence of radiation and the emission of a photoelectron is very small, less than 10 second.\n\nThe direction of distribution of emitted electrons peaks in the direction of polarization (the direction of the electric field) of the incident light, if it is linearly polarized.\n\nSection::::Emission mechanism.:Mathematical description.\n\nIn 1905, Einstein proposed an explanation of the photoelectric effect using a concept first put forward by Max Planck that light waves consist of tiny bundles or packets of energy known as photons or quanta. \n\nThe maximum kinetic energy formula_1 of an ejected electron is given by\n", "BULLET::::- A modification of the Ritz–Tolman theory was introduced by J. G. Fox (1965). He argued that the extinction theorem (i.e., the regeneration of light within the traversed medium) must be considered. 
In air, the extinction distance would be only 0.2 cm, that is, after traversing this distance the speed of light would be constant with respect to the medium, not to the initial light source. (Fox himself was, however, a supporter of special relativity.)\n" ]
[]
[]
[ "normal" ]
[ "Perpetual motion is considered impossible." ]
[ "false presupposition", "normal" ]
[ "A perpetual motion machine is not possible, however photons can keep continue moving forever, at least until they hit something and are destroyed." ]
2018-01609
Why does the air after a lightning storm feel fresher?
When lightning passes through the air, it ionises oxygen gas (O2). As the oxygen ions cool down, some of them will rejoin into groups of three atoms instead of the two they were in before, forming ozone (O3). It is this ozone that is apparently the "fresh smell" observed.
[ "The Loo ends in late summer, with the arrival of the Indian monsoon. In some areas of North India and Pakistan, there are brief, but violent, dust storms known as Kali Andhi (or \"black Storm\") before the monsoon sets in. The arrival of monsoon clouds in any location is frequently accompanied with cloudbursts, and the sudden transformation of the landscape from brown to green can seem \"astonishing\" as a result of the ongoing deluge and the abrupt cessation of the Loo.\n\nSection::::Dwelling adaptation.\n", "With an extensive and healthy urban forest air quality can be drastically improved. Trees help to lower air temperatures and the urban heat island effect in urban areas. This reduction of temperature not only lowers energy use, it also improves air quality, as the formation of ozone is dependent on temperature. Trees reduce temperature not only by directly shading: when there is a large number of trees it create a difference in temperatures between the area when they are located and the neighbor area. This creates a difference in atmospheric pressure between the two areas, which creates wind. This phenomenon is called urban breeze cycle if the forest is near the city and park breeze cycle if the forest is in the city. That wind helps to lower temperature in the city.\n", "42 year old Chinese paraglider pilot He Zhongpin died in 2007 after being sucked into the same storm system and struck by lightning at 5900 m (19,000 feet). His body was found the next day from his last known position prior to entering the cloud.\n\nIn 2014 Italian paraglider Paolo Antoniazzi, 66 years old retired Army general, died after being sucked into a thunderstorm.\n", "The album begins with the most commercial sounding song on the album, \"Let It Go.\" However, the rest of the album is in a heavier mold, and is in a relatively similar style to the previous album, \"Thunder in the East\".\n", "Matt describe the songs from the album as \"slightly…. Umm, I'm not going to say they're dark, but while everything has maintained an upbeat quality, there are some tracks that are a little darker that are related to just… I don't know it's weird. People tend to say, “The more success you have, the easier things will be,” and that's not always the case. Sometimes it's more like, “The more success you have the harder things become.”\"\n", "\"The concept of a year round natural microcosmic forest, which would contain plants and trees indigenous to pre-colonial New York is fresh and intriguing and is desperately needed for our city.\" – Ed Koch, Former New York City Mayor\n\n\"After making art of quiet distinction for over 30 years, Alan Sonfist suddenly finds himself close to the spotlight. His concern for the fragility of nature, rather than for its sublimeness or monumentality, makes him a forerunner of the new ecological sensibility.\" – Michael Brenson, New York Times\n", "The artwork uses a photomontage by the Japanese artist Tsunehisa Kimura titled \"The City Welcomes a Fresh Morning\", which depicts New York City being engulfed in a waterfall.\n\nSection::::Critical reception.\n", "As many as 90 percent of the city's trees were estimated to be damaged, including many in the city's cherished parks and parkways, which were designed by landscape architect Frederick Law Olmsted. The damage constituted a significant setback to Buffalo's \n\nurban reforestation agenda, which had aimed to increase the city's tree canopy from its estimated 2003 levels of 12% to more closely approach the national average of 30%. 
Buffalo's suburbs, also hard hit by the storm, do have a canopy cover approaching 20 to 30%.\n", "Ward began working on \"More Rain\" in 2012, initially experimenting with layering his own vocals to create a doo-wop record. After collaborating with other artists on the record like Peter Buck, k.d. lang, and Neko Case, the sound of the album went in a different direction, described as “true gotta-stay-indoors, rainy-season record that looks upwards through the weather while reflecting on his past.”\n\nSection::::Critical reception.\n", "BULLET::::- Cars parked in parking lots with 50% canopy cover emit 8% less through evaporative emissions than cars parked in parking lots with only 8% canopy cover.\n\nBULLET::::- Due to the positive effects trees have on reducing temperatures and evaporative emissions in parking lots, cities like Davis, California, have established parking lot ordinances that mandate 50% canopy cover over paved areas.\n\nBULLET::::- \"Cold Start\" emissions\n", "BULLET::::- Lower temperatures reduce emissions in parking lots\n", "In an interview with \"The Daily Telegraph\", lyricist Gary Lightbody revealed the song was conceived after he was caught in a heavy storm one night in Glasgow: \"I was pretty terrified – 150-mile-an-hour winds, trees falling down. But we went outside the house, and it was also just thrilling. There was this howling wind, but it felt like silence, as if our senses were being too bombarded to cope with what was going on. So the record was born out of that feeling, of two people having a protective shell around each other. I'm not saying there's not darkness in there still, but it's happening from outward factors more than inward. Maybe things are terrifying, but they're beautiful, too. The world is extremely surprising\".\n", "Urban forests play an important role in ecology of human habitats in many ways: they filter air, water, sunlight, provide shelter to animals and recreational area for people. They moderate local climate, slowing wind and stormwater, and shading homes and businesses to conserve energy. They are critical in cooling the urban heat island effect, thus potentially reducing the number of unhealthful ozone days that plague major cities in peak summer months.\n", "From the Primum Mobile, Dante ascends to a region beyond physical existence, the Empyrean, which is the abode of God. Beatrice, representing theology, is here transformed to be more beautiful than ever before, and Dante becomes enveloped in light, rendering him fit to see God (Canto XXX):\n\npoem\n\nLike sudden lightning scattering the spirits\n\nof sight so that the eye is then too weak\n\nto act on other things it would perceive,\n\nsuch was the living light encircling me,\n\nleaving me so enveloped by its veil\n\nof radiance that I could see no thing.\n", "In addition to the uptake of harmful gases, trees act as filters intercepting airborne particles and reducing the amount of harmful particulate matter. The particles are captured by the surface area of the tree and its foliage. These particles temporarily rest on the surface of the tree, as they can be washed off by rainwater, blown off by high winds, or fall to the ground with a dropped leaf. Although trees are only a temporary host to particulate matter, if they did not exist, the temporarily housed particulate matter would remain airborne and harmful to humans. Increased tree cover will increase the amount of particulate matter intercepted from the air. 
\n", "\"Streets of Arklow\" describes a perfect day in \"God's green land\" and is a tribute to the Wicklow town visited during this vacation trip. The opening lines of the song: \"And as we walked through the streets of Arklow, oh the colours of the day warm, and our heads were filled with poetry, in the morning coming onto dawn\" were said to \"contain the thematic seeds of the whole album: nature, poetry, god, innocence re-found and love lost\" by PopMatters critic John Kennedy.\n", "BULLET::::- Trees reduce temperatures and smog\n", "Buffalo, by virtue of its position downwind of Lake Erie, is one of the nation's windiest cities, and as a result, New Era Field often is a difficult stadium for kickers, with swirling winds that change direction rapidly. This is exacerbated by the stadium's design. The field is below ground level, while the top of the upper deck stands only 60 feet above ground. The open end lies parallel to the direction of the prevailing winds, so when the winds come in, they immediately drop down into the bowl, causing the stadium's signature wind patterns.\n\nSection::::Other uses.\n", "Unlike the vast majority of the state, New York City features a humid subtropical climate (Koppen \"Cfa\"). New York City is an urban heat island, with temperatures 5–7 degrees Fahrenheit (3–4 degrees Celsius) warmer overnight than surrounding areas. In an effort to fight this warming, roofs of buildings are being painted white across the city in an effort to increase the reflection of solar energy, or albedo.\n\nSection::::Temperatures.:Summer.\n", "Section::::Urban effects on climate.:Urban heat island effect.\n", "It is also the subject of a 2011 \"New Yorker\" article by Geoff Dyer called \"Poles Apart\". David Ulin discusses the work as a narrative which \"unfolds not as a fixed encounter but rather as something that gets inside us in a more sequential way.\" It is also the inspiration for composer John Mackey's piece also entitled The Lightning Field.\n\n\"The Lightning Field\"'s fortieth anniversary was the subject of an essay titled \"Walter De Maria and The Lightning Field at Forty: Art as Symbiosis,\" by Jason Rosenfeld in the December 2017/January 2018 issue of \"The Brooklyn Rail\".\n\nSection::::Visiting.\n", "Shelley in this canto \"expands his vision from the earthly scene with the leaves before him to take in the vaster commotion of the skies\". This means that the wind is now no longer at the horizon and therefore far away, but he is exactly above us. The clouds now reflect the image of the swirling leaves; this is a parallelism that gives evidence that we lifted \"our attention from the finite world into the macrocosm\". The \"clouds\" can also be compared with the leaves; but the clouds are more unstable and bigger than the leaves and they can be seen as messengers of rain and lightning as it was mentioned above.\n", "In James Thurber's 1937 \"New Yorker\" article \"There's No Place Like Home\", a phrasebook from \"the era of Imperial Russia\" contains the \"magnificent\" line: \"Oh, dear, our postillion has been struck by lightning!\". Thurber speculates that such a \"fantastic piece of disaster\" must have been rare, \"even in the days of the Czars\". Thurber heard of the quote from \"an writer in a London magazine\".\n", "This song was written in two sections. Andriano wrote the first part at home, when he woke up to a \"very Florida-style downpour — thick, muggy and [producing] the sweet smell of wet greenery\". 
He had the melody to the song stuck in his head, so he wrote the lyrics to the chorus and left it at that for the moment.\n", "There are several causes of an urban heat island (UHI); for example, dark surfaces absorb significantly more solar radiation, which causes urban concentrations of roads and buildings to heat more than suburban and rural areas during the day; materials commonly used in urban areas for pavement and roofs, such as concrete and asphalt, have significantly different thermal bulk properties (including heat capacity and thermal conductivity) and surface radiative properties (albedo and emissivity) than the surrounding rural areas. This causes a change in the energy budget of the urban area, often leading to higher temperatures than surrounding rural areas. Another major reason is the lack of evapotranspiration (for example, through lack of vegetation) in urban areas. The U.S. Forest Service found in 2018 that cities in the United States are losing 36 million trees each year. With a decreased amount of vegetation, cities also lose the shade and evaporative cooling effect of trees.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03912
It is said that zinc has quantifiable effects on the cold virus, regardless of strain, as people who take zinc during the early stages of a cold reduce the symptoms by a day or so. What's the reason behind this?
The evidence for zinc's effects on colds is limited, at best. From the [Mayo Clinic website]( URL_0 ): > Recently an analysis of several studies showed that zinc lozenges or syrup reduced the length of a cold by one day, especially when taken within 24 hours of the first signs and symptoms of a cold. > But the recent analysis stopped short of recommending zinc. None of the studies analyzed had enough participants to meet a high standard of proof. Also, the studies used different zinc dosages and preparations (lozenges or syrup) for different lengths of time. As a result, it's not clear what the effective dose and treatment schedule would be. > Zinc — especially in lozenge form — also has side effects, including nausea or a bad taste in the mouth. Many people who used zinc nasal sprays suffered a permanent loss of smell. For this reason, Mayo Clinic doctors caution against using such sprays. The mechanisms through which zinc may work are murky: > Zinc may work by preventing the rhinovirus from multiplying. It may also stop the rhinovirus from lodging in the mucous membranes of the throat and nose.
[ "A 2013 review found that zinc supplementation at doses in excess of 75 mg/day within 24 hours of the onset of cold symptoms reduced the average duration of symptoms by 1 day. It also found that the likelihood of experiencing cold symptoms 1 week after the onset of symptoms was lower in individuals who used supplemental zinc relative to those who did not.\n\nA 2012 systematic review suggested that \"zinc formulations may shorten the duration of symptoms of the common cold\", but that further research was needed and that possible adverse effects needed to be studied.\n", "Zinc and the common cold\n\nThe human rhinovirus – the most common viral pathogen in humans – is the predominant cause of the common cold. The hypothesized mechanism of action by which zinc reduces the severity and/or duration of cold symptoms is the suppression of nasal inflammation and the direct inhibition of rhinoviral receptor binding and rhinoviral replication in the nasal mucosa.\n\nSection::::Effectiveness.\n\nA 2016 meta-analysis on zinc acetate-lozenges and the common cold found that colds were 2.7 days shorter by zinc lozenge usage. This estimate is to be compared with the 7 day average duration of colds in the three trials.\n", "The human rhinovirus – the most common viral pathogen in humans – is the predominant cause of the common cold. The hypothesized mechanism of action by which zinc reduces the severity and/or duration of cold symptoms is the suppression of nasal inflammation and the direct inhibition of rhinoviral receptor binding and rhinoviral replication in the nasal mucosa. Zinc has been known for many years to have an effect on cold viruses in the laboratory.\n\nSection::::Chemistry.\n", "Zinc supplements may shorten the duration and reduce the severity of symptoms if supplementation begins within 24 hours of the onset of symptoms. Some zinc remedies directly applied to the inside of the nose have led to the loss of the sense of smell. A 2017 expert panel, however, found the evidence to be insufficient to recommend zinc's use.\n", "The effects of zinc supplementation on the duration and severity of cold symptoms in individuals with AIDS/HIV or chronic illness is not known due to a lack of studies involving these populations.\n\nSection::::Safety.\n\nThere have been several cases of people using zinc nasal sprays and suffering a loss of sense of smell. In 2009 the US Food and Drug Administration issued a warning that people should not use nasal sprays containing zinc.\n\nAdverse effects of lozenges include bad taste and nausea.\n\nSection::::Mechanism of action.\n", "Liposomal nystatin is not commercially available, but investigational use has shown greater \"in vitro\" activity than colloidal formulations of amphotericin B, and demonstrated effectiveness against some amphotericin B-resistant forms of fungi. It offers an intriguing possibility for difficult-to-treat systemic infections, such as invasive aspergillosis, or infections that demonstrate resistance to amphotericin B. \"Cryptococcus\" is also sensitive to nystatin. Additionally, liposomal nystatin appears to cause fewer cases of and less severe nephrotoxicity than observed with amphotericin B. \n\nIn the UK, its license for treating neonatal oral thrush is restricted to those over the age of one month. \n", "Herd immunity, generated from previous exposure to cold viruses, plays an important role in limiting viral spread, as seen with younger populations that have greater rates of respiratory infections. 
Poor immune function is a risk factor for disease. Insufficient sleep and malnutrition have been associated with a greater risk of developing infection following rhinovirus exposure; this is believed to be due to their effects on immune function. Breast feeding decreases the risk of acute otitis media and lower respiratory tract infections among other diseases, and it is recommended that breast feeding be continued when an infant has a cold. In the developed world breast feeding may not be protective against the common cold in and of itself.\n", "A 2009 review found that the evidence supporting the effectiveness of zinc is mixed with respect to cough, and a 2011 Cochrane review concluded that zinc \"administered within 24 hours of onset of symptoms reduces the duration and severity of the common cold in healthy people\". A 2003 review concluded: \"Clinical trial data support the value of zinc in reducing the duration and severity of symptoms of the common cold when administered within 24 hours of the onset of common cold symptoms.\" Zinc gel in the nose may lead to long-term or permanent loss of smell. The FDA therefore discourages its use.\n", "A 2015 meta-analysis on zinc lozenges and the common cold found no difference in the effects of zinc acetate lozenges on diverse respiratory symptoms. Although zinc lozenges most probably lead to highest concentration of zinc in the pharyngeal region, a subsequent meta-analysis showed that the effects of high-dose zinc acetate lozenges did not significantly differ in their effects on pharyngeal and nasal symptoms. The duration of nasal discharge was shortened by 34%, nasal congestion by 37%, sneezing by 22%, scratchy throat by 33%, sore throat by 18%, hoarseness by 43%, and cough by 46%. Zinc lozenges shortened the duration of muscle ache by 54%, but there was no significant effect on the duration of headache and fever.\n", "Definitive diagnosis of WNV is obtained through detection of virus-specific antibody IgM and neutralizing antibodies. Cases of West Nile virus meningitis and encephalitis that have been serologically confirmed produce similar degrees of CSF pleocytosis and are often associated with substantial CSF neutrophilia.\n\nSpecimens collected within eight days following onset of illness may not test positive for West Nile IgM, and testing should be repeated. A positive test for West Nile IgG in the absence of a positive West Nile IgM is indicative of a previous flavivirus infection and is not by itself evidence of an acute West Nile virus infection.\n", "Regular hand washing appears to be effective in reducing the transmission of cold viruses, especially among children. Whether the addition of antivirals or antibacterials to normal hand washing provides greater benefit is unknown. Wearing face masks when around people who are infected may be beneficial; however, there is insufficient evidence for maintaining a greater social distance.\n\nIt is unclear if zinc supplements affect the likelihood of contracting a cold. Routine vitamin C supplements do not reduce the risk or severity of the common cold, though they may reduce its duration. Gargling with water was found useful in one small trial.\n", "It was difficult to detect small quantities of virus until the advent of polymerase chain reaction; since then, stored samples of vaccine made after 1962 have tested negative for SV40. In 1997, Herbert Ratner of Oak Park, Illinois, gave some vials of 1955 Salk vaccine to researcher Michele Carbone. 
Ratner, the Health Commissioner of Oak Park at the time the Salk vaccine was introduced, had kept these vials of vaccine in a refrigerator for over forty years. Upon testing this vaccine, Carbone discovered that it contained not only the SV40 strain already known to have been in the Salk vaccine (containing two 72-bp enhancers) but also the same slow-growing SV40 strain currently found in some malignant tumors and lymphomas (containing one 72-bp enhancers). It is unknown how widespread the virus was among humans before the 1950s, though one study found that 12% of a sample of German medical students in 1952 – prior to the advent of the vaccines – had SV40 antibodies.\n", "Diagnosis of Spondweni viral infection would be to screen blood samples from infected individuals for the presence of the positive-sense, single-stranded RNA virion through the use of serologic assay, virus isolation, or PCR/qPCR. These methods also aid in the prevention of misdiagnosis of Spondweni viral infection with other viral infections and infections with a similar clinical symptom array which includes Zika fever, dengue fever, Lassa fever, rickettsial infection, leptospirosis, and typhoid fever.\n", "Thirty volunteers were required every fortnight during trial periods. The unit advertised in newspapers and magazines for volunteers, who were paid a small amount. A stay at the unit was presented in these advertisements as an unusual holiday opportunity. The volunteers were infected with preparations of cold viruses and typically stayed for ten days. They were housed in small groups of two or three, with each group strictly isolated from the others during the course of the stay. Volunteers were allowed to go out for walks in the countryside south of Salisbury, but residential areas were out of bounds.\n", "NCp7 is a 55-amino acid protein that is highly basic and consists of two gag-knuckle motifs. These motifs contain two peptide units of Cys-X2-Cys-X4-His-X4-Cys (CCHC), where the X represents a substituted amino acid, that make up the zinc (II) ion binding sites. The binding of zinc (II) in the CCHC binding site is necessary for the domain to be functional and for the stabilization of the conformation of the structure, allowing the NCp7 to carry out the processes required for HIV replication. Since the CCHC binding site is mutation resistant and involved in the replication of HIV-1, it makes a prime candidate for the prevention of HIV through zinc ejectors. By inhibiting the function of NCp7, the viral replication is affected and a non-functional virus that is unable to infect its host is produced.\n", "Vitamin C and the common cold\n\nThe common cold, or simply the cold, is a viral infectious disease of the upper respiratory tract. The cold is indeed common, and is a significant cause for absences from work and school. Even before the discovery of vitamin C, folklore had it that certain fruits were effective in both preventing and treating the cold. After scientific identification of vitamin C in the early part of the 20th century, research began into the possible effects of the vitamin against the common cold.\n", "Many alternative treatments are used to treat the common cold. A 2007 review states that, \"alternative therapies (i.e., Echinacea, vitamin C, and zinc) are not recommended for treating common cold symptoms; however, ... 
Vitamin C prophylaxis may modestly reduce the duration and severity of the common cold in the general population and may reduce the incidence of the illness in persons exposed to physical and environmental stresses.\" A 2014 review also found insufficient evidence for Echinacea.\n", "Once the acute symptoms of an initial infection disappear, they often do not return. But once infected, the person carries the virus for the rest of their life. The virus typically lives dormantly in B lymphocytes. Independent infections of mononucleosis may be contracted multiple times, regardless of whether the person is already carrying the virus dormantly. Periodically, the virus can reactivate, during which time the person is again infectious, but usually without any symptoms of illness. Usually, a person with IM has few, if any, further symptoms or problems from the latent B lymphocyte infection. However, in susceptible hosts under the appropriate environmental stressors, the virus can reactivate and cause vague physical symptoms (or may be subclinical), and during this phase the virus can spread to others.\n", "In USS severe ADAMTS13 deficiency is often not enough to induce a (first) acute TTP episode. It primarily occurs when an additional (environmental) trigger is present. Recognized triggers are infections (including mild flu-like upper airway infections), pregnancy, heavy alcohol intake or certain drugs. In these situations, VWF is released from its storage organelles, such as Weibel–Palade bodies and granules of platelets. Increased VWF levels in the circulation are leading to a higher demand of ADAMTS13, which is lacking in USS, and can bring forward a TTP episode.\n\nSection::::Pathology.\n", "Forced to switch fields, Dochez initiated studies on a different type of infection: the common cold. Dochez and collaborators confirmed that the common cold was not caused by bacteria by demonstrating that the infection could be induced by exposure to bacteria-free substances. He concluded that the common cold was likely of viral etiology, but techniques of the time period were not sophisticated enough to prove this conclusively.\n", "Patients with Franklin disease usually have a history of progressive weakness, fatigue, intermittent fever, night sweats and weight loss and may present with lymphadenopathy (62%), splenomegaly (52%) or hepatomegaly (37%). The fever is considered secondary to impaired cellular and humoral immunity, and thus recurrent infections are the common clinical presentation in Franklin disease. Weng et al. described the first case of Penicillium sp. infection in a patient with Franklin disease and emphasized the importance of proper preparation for biopsy, complete hematologic investigation, culture preparation and early antifungal coverage to improve the outcome.\n", "According to the Cochrane review on vitamin C and the common cold, 1 g/day or more of vitamin C does not influence common cold incidence in the general community. However, in five randomized double-blind placebo-controlled trials with participants who were under heavy short-term physical stress (three of the trials were with marathon runners), vitamin C halved the incidence of colds. In the dose of 1 g/day or more, vitamin C shortened the duration of colds in adults by 8% and in children by 18%. 
Vitamin C also decreased the severity of colds.\n\nSection::::Echinacea.\n", "The long-term outlook (prognosis) for people with cold agglutinin disease varies based on many factors including the severity of the condition, the signs and symptoms present in each person and the underlying cause. For example, people with cold agglutinin disease caused by bacterial or viral infections tend to have an excellent prognosis; in these cases, the symptoms typically disappear within 6 months after the infection has resolved. Mild to moderate primary (unknown cause) cold agglutinin disease can also be associated with a good prognosis if excessive exposure to the cold is avoided. Those with cold agglutinin disease caused by HIV infection or certain types of cancer generally have a poor prognosis due to the nature of the underlying condition.\n", "Usually secondary viremia results in higher viral shedding and viral loads within the bloodstream due to the possibility that the virus is able to reach its natural host cell from the bloodstream and replicate more efficiently than the initial site. An excellent example to profile this distinction is the rabies virus. Usually the virus will replicate briefly within the first site of infection, within the muscle tissues. Viral replication then leads to viremia and the virus spreads to its secondary site of infection, the central nervous system (CNS). Upon infection of the CNS, secondary viremia results and symptoms usually begin. Vaccination at this point is useless, as the spread to the brain is unstoppable. Vaccination must be done before secondary viremia takes place for the individual to avoid brain damage or death.\n", "An ELISA technique for CMV-specific IgM is available, but may give false-positive results unless steps are taken to remove rheumatoid factor or most of the IgG antibody before the serum sample is tested. Because CMV-specific IgM may be produced in low levels in reactivated CMV infection, its presence is not always indicative of primary infection. Only virus recovered from a target organ, such as the lung, provides unequivocal evidence that the current illness is caused by acquired CMV infection. If serologic tests detect a positive or high titer of IgG, this result should not automatically be interpreted to mean that active CMV infection is present. However, if antibody tests of paired serum samples show a fourfold rise in IgG antibody and a significant level of IgM antibody, meaning equal to at least 30% of the IgG value, or virus is cultured from a urine or throat specimen, the findings indicate that an active CMV infection is present.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-11807
In theory, what would happen if a woman consumed an entire 24-day pack of birth control pills at once?
It would not work like a morning-after pill / emergency contraceptive pill. Someone more qualified can answer precisely what would happen, but if that's where your line of thought is going, please go to a pharmacy. The woman could get very ill. An overdose could result in large blood pressure changes, huge headaches, bleeding, and a disrupted hormonal cycle, as well as lots of other effects I've not thought of. TL;DR: I don't know beyond the general side effects x100, but it doesn't equal an emergency contraceptive and could make you very ill.
[ "The herbal preparation of St John's wort and some enzyme-inducing drugs (e.g. anticonvulsants or rifampicin) may reduce the effectiveness of ECP, and a larger dose may be required, especially in women who weigh more than 165 lbs.\n\nSection::::Intrauterine device.\n", "BULLET::::- In March 1978, a \"FDA Drug Bulletin\" was sent to all U.S. physicians and pharmacists which said: \"FDA has not yet given approval for any manufacturer to market DES as a postcoital contraceptive. The Agency, however, will approve this indication for emergency situations such as rape or incest if a manufacturer provides patient labeling and special packaging. To discourage 'morning after' use of DES without patient labeling, FDA has removed from the market the 25 mg tablets of DES, formerly used for this purpose.\"\n", "BULLET::::- In September 1973, the FDA published a proposed rule specifying patient labeling and special packaging requirements for any manufacturer seeking FDA approval to market DES as a postcoital contraceptive, inviting manufacturers to submit abbreviated new drug applications (ANDAs) for that indication, and notifying manufacturers that the FDA intended to order the withdrawal of DES 25 mg tablets (which were being used off-label as postcoital contraceptives).\n", "If the pill formulation is monophasic, meaning each hormonal pill contains a fixed dose of hormones, it is possible to skip withdrawal bleeding and still remain protected against conception by skipping the placebo pills altogether and starting directly with the next packet. Attempting this with bi- or tri-phasic pill formulations carries an increased risk of breakthrough bleeding and may be undesirable. It will not, however, increase the risk of getting pregnant.\n", "Different sources note different incidences of side effects. The most common side effect is breakthrough bleeding. A 1992 French review article said that as many as 50% of new first-time users discontinue the birth control pill before the end of the first year because of the annoyance of side effects such as breakthrough bleeding and amenorrhea. A 2001 study by the Kinsey Institute exploring predictors of discontinuation of oral contraceptives found that 47% of 79 women discontinued the pill. One 1994 study found that women using birth control pills blinked 32% more often than those not using the contraception.\n", "COCPs provide effective contraception from the very first pill if started within five days of the beginning of the menstrual cycle (within five days of the first day of menstruation). If started at any other time in the menstrual cycle, COCPs provide effective contraception only after 7 consecutive days use of active pills, so a backup method of contraception (such as condoms) must be used until active pills have been taken for 7 consecutive days. COCPs should be taken at approximately the same time every day.\n", "Section::::Methods.:Emergency.\n\nEmergency contraceptive methods are medications (sometimes misleadingly referred to as \"morning-after pills\") or devices used after unprotected sexual intercourse with the hope of preventing pregnancy. They work primarily by preventing ovulation or fertilization. They are unlikely to affect implantation, but this has not been completely excluded. A number of options exist, including high dose birth control pills, levonorgestrel, mifepristone, ulipristal and IUDs. 
Providing emergency contraceptive pills to women in advance does not affect rates of sexually transmitted infections, condom use, pregnancy rates, or sexual risk-taking behavior. All methods have minimal side effects.\n", "BULLET::::- In May 1973, in an attempt to restrict off-label use of DES as a postcoital contraceptive to emergency situations such as rape, a \"FDA Drug Bulletin\" was sent to all U.S. physicians and pharmacists that said the FDA had approved, under restricted conditions, postcoital contraceptive use of DES. (In February 1975, the FDA Commissioner testified that the only error in the May 1973 \"FDA Drug Bulletin\" was that the FDA had not approved postcoital contraceptive use of DES.)\n", "Combined estrogen (ethinylestradiol) and progestin (levonorgestrel or norgestrel) pills used to be available as dedicated emergency contraceptive pills under several brand names: \"Schering PC4\", \"Tetragynon\", \"Neoprimavlar\", and \"Preven\" (in the United States) but were withdrawn after more effective dedicated progestin-only (levonorgestrel) emergency contraceptive pills with fewer side effects became available. If other more effective dedicated emergency contraceptive pills (levonorgestrel, ulipristal acetate, or mifepristone) are not available, specific combinations of regular combined oral contraceptive pills can be taken in split doses 12 hours apart (the Yuzpe regimen), effective up to 72 hours after intercourse. The U.S. Food and Drug Administration (FDA) approved this off-label use of certain brands of regular combined oral contraceptive pills in 1997. As of 2014, there are 26 brands of regular combined oral contraceptive pills containing levonorgestrel or norgestrel available in the United States that can be used in the emergency contraceptive Yuzpe regimen, when none of the more effective and better tolerated options are available.\n", "The effectiveness of emergency contraception is expressed as a percentage reduction in pregnancy rate for a single use of EC. Using an example of \"75% effective\", the effectiveness calculation thus: ... these numbers do not translate into a pregnancy rate of 25 percent. Rather, they mean that if 1,000 women have unprotected intercourse in the middle two weeks of their menstrual cycles, approximately 80 will become pregnant. Use of emergency contraceptive pills would reduce this number by 75 percent, to 20 women.\n", "A version of the combined pill has also been packaged to completely eliminate placebo pills and withdrawal bleeds. Marketed as Anya or Lybrel, studies have shown that after seven months, 71% of users no longer had any breakthrough bleeding, the most common side effect of going longer periods of time without breaks from active pills.\n\nWhile more research needs to be done to assess the long term safety of using COCP's continuously, studies have shown no difference in short term adverse effects when comparing continuous use versus cyclic use of birth control pills.\n\nSection::::Medical use.:Non-contraceptive use.\n", "The antiprogestin ulipristal acetate is available as a micronized emergency contraceptive tablet, effective up to 120 hours after intercourse. 
Ulipristal acetate ECPs developed by HRA Pharma are available over the counter in Europe and by prescription in over 50 countries under the brand names \"ellaOne\", \"ella\" (marketed by Watson Pharmaceuticals in the United States), \"Duprisal 30\", \"Ulipristal 30\", and \"UPRIS\".\n", "If pills are missed or the ring or patch used incorrectly, the risk of pregnancy is increased. In the UK, the Faculty of Sexual and Reproductive Healthcare issue guidance for incorrectly used CHC pills, patches and rings.\n\nSection::::Special populations.\n\nFollowing childbirth, the use of CHC depends on factors such as whether the woman is breastfeeding and whether she has other medical conditions including superficial venous thrombosis and dyslipidaemia. \n\nSection::::Special populations.:Breastfeeding.\n", "After an intake of 1.5 mg levonorgestrel in clinical trials, very common side effects (reported by 10% or more) included: hives, dizziness, headache, nausea, abdominal pain, uterine pain, delayed menstruation, heavy menstruation, uterine bleeding, and fatigue; common side effects (reported by 1% to 10%) included diarrhea, vomiting, and painful menstruation; these side effects usually disappeared within 48 hours. However, the long term side effects common with oral contraceptives such as arterial disease are lower in levonorgestral than in combination pills.\n\nSection::::Overdose.\n\nOverdose of levonorgestrel as an emergency contraceptive has not been described. Nausea and vomiting might be expected.\n\nSection::::Interactions.\n", "BULLET::::- In February 1975, the FDA said it had not yet approved DES as a postcoital contraceptive, but would after March 8, 1975 permit marketing of DES for that indication in emergency situations such as rape or incest \"if\" a manufacturer obtained an approved ANDA that provided patient labeling and special packaging as set out in a FDA final rule published in February 1975. To discourage off-label use of DES as a postcoital contraceptive, in February 1975 the FDA ordered DES 25 mg (and higher) tablets removed from the market and ordered the labeling of lower doses (5 mg and lower) of DES still approved for other indications be changed to state: \"THIS DRUG PRODUCT SHOULD NOT BE USED AS A POSTCOITAL CONTRACEPTIVE\" in block capital letters on the first line of the physician prescribing information package insert and in a prominent and conspicuous location of the container and carton label.\n", "Due to physiological changes in the body during pregnancy, it may be necessary to alter the dosing of medications so that they remain effective. Generally, the dose or the frequency of dosing are increased to account for these changes.\n\nThe recommended ART regimen for HIV-positive pregnant women consists of drugs from 4 different classes of medications listed below. 
In the United States, the favored regimen is a three-drug regimen where the first two drugs are NRTIs and the third is either a protease inhibitor, an integrase inhibitor, or an NNRTI.\n", "BULLET::::- \"The Pill Versus the Springhill Mine Disaster\" was the title poem of a 1968 collection by Richard Brautigan.\n\nSection::::Result on popular culture.:Music.\n\nBULLET::::- Singer Loretta Lynn commented on how women no longer had to choose between a relationship and a career in her 1974 album with a song entitled \"The Pill\", which told the story of a married woman's use of the drug to liberate herself from her traditional role as wife and mother.\n\nSection::::Environmental impact.\n\nA woman using COCPs excretes from her urine and feces natural estrogens, estrone (E1) and estradiol (E2), and synthetic estrogen ethinylestradiol (EE2).\n", "BULLET::::- In late 1973, Eli Lilly, the largest U.S. manufacturer of DES, discontinued its DES 25 mg tablets and in March 1974 sent a letter to all U.S. physicians and pharmacists telling them it did not recommend use of DES as a postcoital contraceptive.\n\nBULLET::::- Only one pharmaceutical company, Tablicaps, Inc., a small manufacturer of generic drugs, ever submitted (in January 1974) an ANDA for use of DES as an emergency postcoital contraceptive, and the FDA never approved it.\n", "The antiprogestin mifepristone (also known as RU-486) is available in five countries as a low-dose or mid-dose emergency contraceptive tablet, effective up to 120 hours after intercourse. Low-dose mifepristone ECPs are available by prescription in Armenia, Russia, Ukraine, and Vietnam and from a pharmacist without a prescription in China. Mid-dose mifepristone ECPs are available by prescription in China and Vietnam.\n", "Combined oral contraceptive pills are a type of oral medication that is designed to be taken every day, at the same time of day, in order to prevent pregnancy. There are many different formulations or brands, but the average pack is designed to be taken over a 28-day period, or cycle. For the first 21 days of the cycle, users take a daily pill that contains hormones (estrogen and progestogen). The last 7 days of the cycle are hormone free days. Some packets only contain 21 pills and users are then advised to take no pills for the following week. Other packets contain 7 additional placebo pills, or biologically inactive pills. Some newer formulations have 24 days of active hormone pills, followed by 4 days of placebo (examples include Yaz 28 and Loestrin 24 Fe) or even 84 days of active hormone pills, followed by 7 days of placebo pills (Seasonale). A woman on the pill will have a withdrawal bleed sometime during her placebo pill or no pill days, and is still protected from pregnancy during this time. Then after 28 days, or 91 days depending on which type a person is using, users start a new pack and a new cycle.\n", "BULLET::::- 75 µg gestodene (UK: Femodette, Bayer; RU: Logest, Bayer; Brazil: Femiane, Bayer)\n\nBULLET::::- 3000 µg drospirenone: 24 days + 4 days placebo (US, EU, RU: Yaz; Bayer Schering Pharma AG. 
Cleonita); 21 days + 7 days placebo (US, EU: , Bayer); 24 days + 4 days placebo and levomefolate calcium (US: Beyaz; Bayer)\n\nBULLET::::- 30 µg ethinylestradiol\n\nBULLET::::- 1500 µg norethisterone acetate (UK: Loestrin 30, Galen; US: Loestrin 1.5/30, Duramed; Microgestin 1.5/30, Watson; Junel 1.5/30, Barr)\n\nBULLET::::- 300 µg norgestrel (US: Lo/Ovral, Wyeth; Low-Ogestrel, Watson; Cryselle, Barr)\n", "COCPs are also contraindicated for people with advanced diabetes, liver tumors, hepatic adenoma or severe cirrhosis of the liver. COCPs are metabolized in the liver and thus liver disease can lead to reduced elimination of the medication. People with known or suspected breast cancer or unexplained uterine bleeding should also not take COCPs.\n", "A variety of types of emergency contraceptive pills are available: combined estrogen and progestin pills, progestin-only (levonorgestrel, LNG) pills, and antiprogestin (ulipristal acetate or mifepristone) pills. Progestin-only and antiprogestin pills are available specifically packaged for use as emergency contraceptive pills. Emergency contraceptive pills originally contained higher doses of the same hormones (estrogens, progestins, or both) found in regular combined oral contraceptive pills. Combined estrogen and progestin pills are no longer recommended as dedicated emergency contraceptive pills (because this regimen is less effective and caused more nausea), but certain regular combined oral contraceptive pills (taken 2-5 at a time in what was called \"the Yuzpe regimen\") have also been shown to be effective as emergency contraceptive pills.\n", "BULLET::::- 1000 µg norethisterone acetate (UK: Loestrin 20, Galen; US: Loestrin 1/20, Duramed; Microgestin 1/20, Watson Pharmaceuticals; Junel 1/20, Barr)\n\nBULLET::::- 1000 µg norethisterone acetate: 24 days + 4 days ferrous fumarate only (US: Loestrin 24 Fe, Warner Chilcott)\n\nBULLET::::- 90 µg levonorgestrel: continuous: 365 days/year, no placebo (US: Amethyst, Watson)\n\nBULLET::::- 100 µg levonorgestrel: extended cycle: 84 days + 7 days 10 µg ethinylestradiol only (US: LoSeasonique, Teva; CamreseLo, Teva)\n\nBULLET::::- 100 µg levonorgestrel (US: Alesse, Wyeth; Aviane, Barr; Lessina, Barr; Lutera, Watson; Sronyx, Watson)\n\nBULLET::::- 150 µg desogestrel (UK: Mercilon, Organon; RU: Novynette, Richter Gedeon)\n", "BULLET::::- 50 µg mestranol (equivalent to 35 µg ethinylestradiol)\n\nBULLET::::- 1000 µg norethisterone (UK: Norinyl-1, Pfizer; US: Ortho-Novum 1/50; Ortho-McNeil; Norinyl 1/50, Watson; Necon 1/50, Watson)\n\nBULLET::::- 50 µg ethinylestradiol\n\nBULLET::::- 1000 µg norethisterone (US: Ovcon 50, Warner Chilcott)\n\nBULLET::::- 1000 µg etynodiol diacetate (US: Demulen 1/50, Pfizer; Zovia 1/50, Watson)\n\nBULLET::::- 500 µg norgestrel (US: Ogestrel, Watson)\n\nBULLET::::- 250 µg levonorgestrel (US: Nordiol, Wyeth)\n\nBULLET::::- 1.5 mg estradiol (as hemihydrate)\n\nBULLET::::- 2.5 mg nomegestrol acetate: 24-day cycle + 4 placebo pills (AU, EU, RU: Zoely, MSD)\n\nSection::::Combined oral contraceptive pills.:Multiphasic.\n\nBULLET::::- 25 µg ethinylestradiol: triphasic\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-05453
Why is it easier to remember a quote or words to a song if someone says the first few words?
It's a very complicated process that is still highly debated. That said, a popular theory of how the brain stores and recalls memories should be enough to answer it; keep in mind I'm not an expert by any means, so keep the grains of salt at the ready. Our brains are not like a hard drive where there is a location at x, y, z where that bit of data is stored. Instead we store and recall memories as a patterned sequence of neurons firing. So when I first experience that quote, my neurons fire in a very specific pattern, 1 > 2 > 3 > 4 > 5. Whenever I hear it again, my neurons fire in the same order, 1 > 2 > 3 > 4 > 5, plus an additional pattern for the new memory itself. So when you hear the start of the quote, your brain starts firing that pattern, 1 > 2 >, and like Google's autofill for its searches, it fills in the rest of the quote, > 3 > 4 > 5. However, if someone starts the quote in the middle, your brain has a hard time recognizing the pattern. It'll find it familiar, but it has to work backwards to find the start of the pattern, and that's incredibly hard for it. The bright side to all this is that the more you deal with the same piece of information, whatever it is, the more your neurons get used to the pattern and can recall it faster and faster, and the neurons in the pattern get closer together to facilitate it. That's why they say practice makes perfect: the more you do something, the better your brain can produce the pattern needed for the activity.
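To make the "autofill" analogy above a bit more concrete, here is a minimal, purely illustrative sketch in Python; it has nothing to do with how neurons actually work. It only shows that completing a stored sequence from its opening words is a cheap prefix match, while completing it from a fragment in the middle means searching every stored sequence for where the fragment might fit. The quotes and the lookup logic are made-up assumptions for the example.

stored_quotes = [
    "to be or not to be that is the question",
    "ask not what your country can do for you",
]

def complete_from_prefix(prefix):
    # Cheap case: only the beginning of each stored quote needs to be compared.
    for quote in stored_quotes:
        if quote.startswith(prefix):
            return quote
    return None

def complete_from_fragment(fragment):
    # Harder case: the cue could sit anywhere, so it has to be slid across
    # every position of every stored quote.
    for quote in stored_quotes:
        if fragment in quote:
            return quote
    return None

print(complete_from_prefix("to be or not"))    # found straight from the opening words
print(complete_from_fragment("that is the"))   # found, but only after a broader search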
[ "Experiments in memory span have found that the more familiar a person is with the type of subject matter presented to them, the more they will remember it in a novel setting. For example, a person will better remember a sequence in their first-language than their second-language; a person will also remember a sequence of words better than they would a sequence of nonsense syllables.\n", "Section::::Influences on remembering and knowing.:Factors that influence both remember and know responses are.:Word vs. non-word memory.\n\nWhen words are used as stimuli, more remember responses and fewer know responses are produced in comparison to when nonwords are used as stimuli.\n\nSection::::Influences on remembering and knowing.:Factors that influence both remember and know responses are.:Gradual vs. rapid presentation.\n\nGradual presentation of stimuli causes an increase in familiarity and thus an increase in associated know responses; however, gradual presentation causes a decrease in remember responses.\n\nSection::::Influences on remembering and knowing.:Role of emotion.\n", "A few studies have even found that emotionally arousing stimuli enhance memory only after a delay. The most famous of these was a study by Kleinsmith and Kaplan (1963) that found an advantage for numbers paired with arousing words over those paired with neutral words only at delayed test, but not at immediate test. As outlined by Mather (2007), the Kleinsmith and Kaplan effects were most likely due to a methodological confound. However, Sharot and Phelps (2004) found better recognition of arousing words over neutral words at a delayed test but not at an immediate test, supporting the notion that there is enhanced memory consolidation for arousing stimuli. According to these theories, different physiological systems, including those involved in the discharge of hormones believed to affect memory consolidation, become active during, and closely following, the occurrence of arousing events.\n", "Briefly presenting a prime prior to test causes an increase in fluency of processing and an associated increase in feeling of familiarity. Short duration primes tend to enhance know responses. In contrast to briefly presented primes, primes which are presented for long durations are said to disrupt processing fluency as the prime saturates the representation of the target word. Thus, longer duration primes tend to have a negative impact on know responses.\n\nSection::::Influences on remembering and knowing.:Factors that influence know responses but not remember responses are.:Stimulus modality.\n", "One's attention to words is impacted by emotion grasping vocabulary. Negative and positive words are better recalled than neutral words that are spoken. Many different ways that attention is focused on hearing what the speaker has to say are the inflection of the presenter's voice in a sad, content, or frustrated sound or in the use of words that are close to the heart. A study was conducted to observe if the use of emotional vocabulary was a key receptor of recall memory. The groups were put into the same lecture halls and given the same speakers, but the results came back to determine that the inflection and word choice recalled by the listeners concluded that emotional words, phrases, and sounds are more memorable than neutral speakers.\n", "Another application on the social advantage in selective memory is with reproduction. 
Testing female undergraduate students in recall found that in a short video with a male introducing themselves and being considered for a future partner, participants selectively remembered more of what he said than what he looked like. This is supporting of the notion that the purpose of evolution is to pass on genetic information and that selective retention plays a part in that. Seitz, Polack, and Miller (2018) also found that memory performance increased when stimulated by reproductive cues. In an evolutionary perspective, the organization of the semantic memory may link and connect this type of information more strongly to influence recall and therefore the survival of the individual.\n", "Other research provides support for memory of text being improved by musical training. Words presented by song were remembered significantly better than when presented by speech. Earlier research has supported for this finding, that advertising jingles that pair words with music are remembered better than words alone or spoken words with music in the background. Memory was also enhanced for pairing brands with their proper slogans if the advertising incorporated lyrics and music rather than spoken words and music in the background.\n", "One of the classic experiments is by Ebbinghaus, who found the serial position effect where information from the beginning and end of list of random words were better recalled than those in the center. This primacy and recency effect varies in intensity based on list length. Its typical U-shaped curve can be disrupted by an attention-grabbing word; this is known as the Von Restorff effect.\n", "For example, if a person examines a shopping list with one item highlighted in bright green, he or she will be more likely to remember the highlighted item than any of the others. Additionally, in the following list of words – desk, chair, bed, table, chipmunk, dresser, stool, couch – chipmunk will be remembered the most as it stands out against the other words in its meaning.\n\nSection::::Explanation.\n", "Self-reference effect\n\nThe self-reference effect is a tendency for people to encode information differently depending on the level on which they are implicated in the information. When people are asked to remember information when it is related in some way to themselves, the recall rate can be improved.\n\nSection::::Research.\n", "Levels of processing have been an integral part of learning about memory. The self-reference effect describes the greater recall capacity for a particular stimulus if it is related semantically to the subject. This can be thought of as a corollary of the familiarity modifier, because stimuli specifically related to an event in a person's life will have widespread activation in that person's semantic network. For example, the recall value of a personality trait adjective is higher when subjects are asked whether the trait adjective applies to them than when asked whether trait adjective has a meaning similar to another trait.\n", "Section::::Lyrical vs. instrumental memory.\n", "One suggested reason for the primacy effect is that the initial items presented are most effectively stored in dormant memory because of the greater amount of processing devoted to them. (The first list item can be rehearsed by itself; the second must be rehearsed along with the first, the third along with the first and second, and so on.) 
The primacy effect is reduced when items are presented quickly and is enhanced when presented slowly (factors that reduce and enhance processing of each item and thus permanent storage). Longer presentation lists have been found to reduce the primacy effect.\n", "Deeper processing of the originally learned material results in more effective encoding and retrieval, due to semantic processing having taken place. Semantic processing occurs after we hear information and encode its meaning, allowing for deeper processing. Semantic encoding can therefore lead to greater levels of retention when learning new information. The avoidance of interfering stimuli such as music and technology when learning, can improve memory and retention significantly. These distractions interfere with the encoding of material in long-term memory stores.\n", "Section::::Influences on remembering and knowing.:Factors that influence know responses but not remember responses are.:Masked recognition priming.\n\nMasked recognition priming is a manipulation which is known to increase perceptual fluency. Since know responses increase with increased fluency of processing, masked recognition priming enhances know responses.\n\nSection::::Influences on remembering and knowing.:Factors that influence know responses but not remember responses are.:Repetition priming.\n", "Ebbinghaus a pioneer of research into memory, noted that associations between items aids recall of information thus the internal context of a list matters. This is because we look for any connection that helps us combine items into meaningful units. This started a lot of research into lists of to-be-remembered (tbr) words, and cues that helped them. In 1968 Tulving and Osler made participants memories a list of 24 tbr words in the absence or presence of cue words. The cue words facilitated recall when present in the input and output of memorising and recalling the words. They concluded that specific retrieval cues can aid recall if the information of their relation to the tbr words is stored at the same time as the words on the list. Tulving and Thompson studied the effect of the change in context of the tbr by adding, deleting and replacing context words. This resulted in a reduction in the level of recognition performance when the context changed, even though the available information remained context. This led to the encoding specificity principle.\n", "Section::::Experiments.\n\nThis phenomenon has been shown by various experiments:\n\nBULLET::::- One example of this is empirically shown, specifically, in a study by Morris and associates (1977) using semantic and rhyme tasks. In a standard recognition test, memory was better following semantic processing compared to rhyme processing (the levels-of-processing effect). However, in a rhyming recognition test, memory was better for those who engaged in rhyme processing compared to semantic processing.\n", "The effects of elaborative rehearsal or deep processing can be attributed to the number of connections made while encoding that increase the number of pathways available for retrieval.\n\nSection::::Optimal encoding.\n\nOrganization can be seen as the key to better memory. As demonstrated in the above section on levels of processing the connections that are made between the to-be-remembered item, other to-be-remembered items, previous experiences and context generate retrieval paths for the to-be-remembered item. 
These connections impose organization on the to-be-remembered item, making it more memorable.\n\nSection::::Optimal encoding.:Mnemonics.\n", "The primacy effect is related to enhanced remembering. In a study, a free recall test was conducted on some lists of words and no test on other lists of words prior to a recognition test. They found that testing led to positive recency effects for remembered items; on the other hand, with no prior test, negative recency effects occurred for remembered items. Thus, both primary and recency effects can be seen in remember responses.\n\nSection::::Influences on remembering and knowing.:Factors that influence know responses but not remember responses are.\n", "The December 9, 1944 edition of \"Billboard\" magazine reviewed an episode saying \"Vera Massey, in \"Will You Remember?\", improved over last week. Tonight her songs and soliloquies were better chosen and faster paced. She was sentimental, not sloppy. New twist had her at a window talking to her overseas husband. As she turned to walk to the piano, stagehands noise-lessly removed the wall and window, and camera moved in while the other (inside the room) took over for a couple of seconds. She had taken only a few steps before camera one caught up and record the rest of her movement from the window. It was a nice touch\".\n", "Since words such as \"sorrow\" or \"comfort\" may be more likely to be associated with autobiographical experiences or self-introspection than neutral words such as \"shadow\", autobiographical elaboration may explain the memory enhancement of non-arousing positive or negative items. Studies have shown that dividing attention at encoding decreases an individual's ability to utilize controlled encoding processes, such as autobiographical or semantic elaboration.\n", "Section::::Phenomena.:The Face Advantage.\n", "Section::::Structure of mnemonic skills.:Acceleration.\n\nThe final step in skilled memory theory is acceleration. With practice, time necessary for encoding and retrieval operations can be dramatically reduced. As a result, storage of information can then be performed within a few seconds. Indeed, one confounding factor in the study of memory is that the subjects often improve from day-to-day as they are tested over and over.\n\nSection::::Learned skill or innate ability.\n\nThe innateness of expert performance in the memory field has been studied thoroughly by many scientists; it is a matter which has still not been definitively resolved.\n", "Section::::Accuracy.:Source of Information.\n\nWhen looking at the source of knowledge about an event, hearing the news from the media or from another person does not cause a difference in reaction, rather causes a difference in the type of information that is encoded to one's memory. When hearing the news from the media, more details about the events itself are better remembered due to the processing of facts while experiencing high levels of arousal, whereas when hearing the news from another individual a person tends to remember personal responses and circumstances.\n\nSection::::Demographic differences.\n", "Another type of device people use to help their recall memory become efficient is chunking. Chunking is the process of breaking down numbers into smaller units to remember the information or data, this helps recall numbers and math facts. An example of this chunking process is a telephone number; this is chunked with three digits, three digits, then four digits. 
People read them off as such when reciting a phone number to another person. There has been research done about these techniques and an institution tested two groups of people to see if these types of devices work well for real people, the results came back determining a significant performance difference between the group who did not use cognitive strategies and the group who did. The group using the techniques immediately performed better than the other group and when taking a pre-test and post-test the results indicated that the group using the techniques improved while the other group did not.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-02630
Why does a buffet style streaming service work for Spotify, but not for audiobook streaming services like Audible?
You should check out Amazon Prime, which has something like what you are describing for a lot of books (though not all books). You'll notice that the majority are not big-name authors but rather small-timers self-publishing. Books and music are on fundamentally different scales. A song lasts 3-4 minutes, and the average user might listen to the same song many times. A book can take many hours or days to read, and most people won't return to a book they've read unless they really enjoyed it. This creates a problem in paying the content creator. Songs work well under a "pay when played" method where every time the song is played the owner gets a few cents. Since songs may be played many times by the same user and songs are relatively short, it makes sense to offer a flat monthly fee. By comparison, books are significantly bigger time investments and don't have the high "re-use" value of songs, so the content owner is going to demand a bigger fee up front. From that angle, it makes more sense to charge you per book plus an extra fee for access to the service.
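A rough back-of-the-envelope comparison of the two payment models described above, as a sketch in Python. Every number here is an illustrative assumption, not an actual Spotify or Audible rate or subscription term.

# Illustrative assumptions only - not real payout rates or subscription terms.
song_length_hours = 3.5 / 60        # a typical song lasts a few minutes
payout_per_play = 0.01              # assumed payout to the rights holder per stream
monthly_hours_listened = 30         # assumed listening time for one subscriber

plays = monthly_hours_listened / song_length_hours
music_cost = plays * payout_per_play
print(f"Music: ~{plays:.0f} plays, ~${music_cost:.2f} owed to rights holders")

audiobook_length_hours = 10         # one book can swallow a third of that listening time
fee_per_book = 10.00                # assumed up-front fee the publisher expects per copy

books = monthly_hours_listened / audiobook_length_hours
book_cost = books * fee_per_book
print(f"Audiobooks: ~{books:.0f} books, ~${book_cost:.2f} owed to publishers")

Same flat subscription and the same hours of listening, but under these assumed numbers the per-book fees come out several times larger than the per-play fees, which is the economic reason audiobook services tend to charge per title or per credit.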
[ "The service uses proprietary cloudmark synchronization technology that enables users to mark their place in an audiobook on one device and continue listening from the same spot when they switch to a different listening device, without needing a browser plug-in or special application.\n\nAs of February 2012, the company's library currently contained more than 11,000 titles from such publishers as Blackstone Audio, HarperCollins, and Simon & Schuster.\n", "Technical innovation returned to center-state for the company in September 2012 when Audible launched Whispersync for Voice, an innovation that enables readers to switch seamlessly between reading a Kindle book and listening to the corresponding audiobook without losing their place. Along with Whispersync for Voice, Audible released Immersion Reading, a feature which highlights text on a Kindle book as the audiobook is narrated. It was also the focus in June 2015 when audiblebooks from Audible.com was made available on Amazon Echo, a voice command device from Amazon with functions including question answering, playing music and controlling smart devices.\n", "such as PCs, smart phones, and tablet computers. The application allows users to listen to audiobooks by clicking on a link and streaming their book within the web browser itself, rather than downloading a dedicated application. The audiobooks are stored in cloud storage, which means users can listen to books at any time without downloading files and taking up space on their device.\n", "BULLET::::- 2008: Simply Audiobooks began offering DRM-free MP3 downloads which can be downloaded to any computer (running Linux, Mac OS X, or Windows) and are compatible with any MP3 player (iPod, Zansa, Zen, or Zune) or Smartphone.\n", "As of November 2016, Audiobooks make up the plurality of their circulations (35%) followed by movies (22%), music (19%), ebooks (12%), comics (6%), and television (6%).\n\nAudiobooks: Contracts with publishers such as Tantor Audio, Harper Collins, Blackstone Audio, Simon & Schuster Audio, and others.\n\nMovies: Contracts with publishers such as Lionsgate, Disney, Warner Brothers, Starz, and others.\n\nSection::::Technology.\n\nHoopla content can be borrowed and consumed on the web or via native Android or iOS apps.\n", "There were hopes that Amazon, after its purchase of Audible, would remove the DRM from its audiobook selection, in keeping with the current trend in the industry. Nevertheless, Audible's products continue to have DRM, similar to the policy of DRM-protecting their Kindle e-books, which have DRM that allows for a finite, yet undisclosed number of downloads at the discretion of the publisher, however Audible titles that are DRM free can be copied to the Kindle and made functional.\n", "On July 14, 2016, eMusic launched eStories, an audiobook service that will offer 80,000 titles at a cost of $11.95 per title to use, plus 33 percent off additional purchases.\n\nSection::::File format support.\n", "Human-read audiobooks are digitally recorded by volunteer narrators and produced in downloadable audio files in a specialized format, which allows Learning Ally to respect copyright and allows users to navigate their audiobooks by chapter or page number, set bookmarks, speed up playback, etc. \n\nDownloadable audiobooks can be played using mainstream devices like the iPad, iPhone, and iPod Touch, as well as Android smartphones and tablets, Mac-OS or Microsoft Windows-compatible computers running Learning Ally’s software. 
The audiobooks can also be played back on assistive technology devices like the Plextalk, Humanware Stream and Intel Reader, to name just a few.\n", "A recent survey released by the Audio Publishers Association found that the overwhelming majority of audiobook users listen in the car, and more than two-thirds of audiobook buyers described audiobooks as relaxing and a good way to multitask. Another stated reason for choosing audiobooks over other formats is that an audio performance makes some books more interesting.\n\nCommon practices include:\n", "In May 2014, Recorded Books acquired HighBridge Audio from Workman Publishing. HighBridge Audio was initially founded by Minnesota Public Radio in the early 1980s to produce and distribute recordings of Garrison Keillor's \"A Prairie Home Companion\". Since then, HighBridge produced approximately 45 titles a year in the forms of spoken word audio cassettes, CDs and downloadable audio books. The company was best known for publishing public-radio related titles, as well as Oprah's Book Club titles. HighBridge made use of two readers in its audio book production for works primarily involving two main characters. Other popular titles published by HighBridge included \"The Time Traveler's Wife\", \"Water for Elephants\", \"Life of Pi\" and \"Across the Nightingale Floor\".\n", "In addition to the regular price charged for audiobooks, Audible offers subscriptions with the following benefits:\n", "Nearly 700 uncut audio author interviews conducted by Don Swaim for his \"Book Beat\" show on CBS Radio were available at Wired for Books in their entirety. Original unabridged audio productions of \"Alice's Adventures in Wonderland\", \"A Christmas Carol\", \"Macbeth\", and \"The Wonderful Wizard of Oz\" could be found at Wired for Books in streaming media as well as some downloadable MP3 files. Essayists, fiction writers, and poets read their works, often with commentary. Kids' Corner, the children's section of Wired for Books, featured the stories of Beatrix Potter and other classic stories for children.\n\nSection::::Reception.\n", "BULLET::::- \"Credits\": For a monthly subscription fee, a customer receives one or two audio credits. Most titles can be purchased with one of these credits. Some titles (usually larger books or collections of more than one book) may cost two credits, while others (usually very short works) cost only a third of a credit. (Users may also purchase a year's subscription at a time, for a discount, receiving all credits at once, but only in some countries.) Platinum subscribers also receive a complimentary subscription to the digital audio version of \"The New York Times\" or \"The Wall Street Journal\".\n", "BULLET::::- 2013: From July 1–6, in honor of Independence Day, Staples.com offered three downloadable Simply Audiobooks for $0.01 each.: \"Bill of Rights Audiobook - Download\", \"The Constitution & Historical Influences\", and \"Gettysburg Address & Emancipation Audiobook-Download\", which do not appear on the company's website and which lack such details as the items' narrators, producers, and release dates. This lack of information did not appeal to some would-be purchasers.\n\nSection::::Audiobooks.com.\n\nOn January 24, 2012, Simply Audiobooks launched Audiobooks.com, the first cloud-based service for audiobooks, allowing users to access audiobooks on Internet-enabled devices,\n", "BULLET::::- 2005: Simply Audiobooks launched audiobooks via download, through low-cost monthly subscriptions and \"a la carte\" purchases. 
These downloads are DRM-protected WMA files which can be downloaded to any Windows computer with Windows Media Player 10 or 11 and are compatible with most MP3 players.\n\nBULLET::::- 2006: Simply Audiobooks launched the sale of audiobooks on CD and cassette online. They also purchased North America's largest audiobook store in downtown Toronto, near the company's headquarters in Oakville, Canada.\n", "About 40 percent of all audiobook consumption occurs through public libraries, with the remainder served primarily through retail book stores. Library download programs are currently experiencing rapid growth (more than 5,000 public libraries offer free downloadable audio books). Libraries are also popular places to check out audio books in the CD format. According to the National Endowment for the Arts' study, \"Reading at Risk: A Survey of Literary Reading in America\" (2004), audiobook listening is one of very few \"types\" of reading that is increasing general literacy.\n\nSection::::Listening practices.\n", "Listening Library was also a pioneering company, it was one of the first to distribute children's audiobooks to schools, libraries and other special markets, including VA hospitals. It was founded by Anthony Ditlow and his wife in 1955 in their Red Bank, New Jersey home; Ditlow was partially blind. Another early pioneering company was Spoken Arts founded in 1956 by Arthur Luce Klein and his wife, they produced over 700 recordings and were best known for poetry and drama recordings used in schools and libraries. Like Caedemon, Listening Library and Spoken Arts benefited from the new technology of LPs, but also increased governmental funding for schools and libraries beginning in the 1950s and 60s.\n", "A frequent concern of listeners is the site's policy of allowing any recording to be published as long as it is understandable and faithful to the source text. This means that some recordings are of lower audio fidelity; some feature background noises, non-native accents or other perceived imperfections in comparison to professionally recorded audiobooks. While some listeners may object to those books with chapters read by multiple readers, others find this to be a non-issue or even a feature, though many books are narrated by a single reader.\n\nSection::::See also.\n\nBULLET::::- Virtual volunteering\n\nBULLET::::- Voice acting\n\nSection::::External links.\n", "The Learning Ally Reading App\n\nWith the Learning Ally Reading App, students can download and read their audiobooks anytime, anywhere. Students receive unique login credentials, allowing them to download any of our audiobooks to their personal online bookshelf. They can then log in to our free reading app and download books from their bookshelf directly to their home or school-issued computer or mobile device, giving them 24/7 access to our game-changing assistive technology. Learning Ally is compatible with PCs, Macs, Chromebooks, Android and iOS devices.\n\nStudent-centric features include:\n", "The resurgence of audio storytelling is widely attributed to advances in mobile technologies such as smartphones, tablets, and multimedia entertainment systems in cars, also known as connected car platforms. 
Audio drama recordings are also now podcast over the internet.\n\nIn 2014, Bob & Debra Deyan of Deyan Audio opened the Deyan Institute of Vocal Artistry and Technology, the world's first campus and school for teaching the art and technology of audiobook production.\n\nIn 2018, approximately 50,000 audiobooks were recorded in the United States with a sales growth of 20 percent year over year.\n\nSection::::History.:Germany.\n", "BULLET::::- Multitasking: Many audiobook listeners choose the format because it allows multitasking during otherwise mundane or routine tasks such as exercising, crafting, or cooking.\n\nBULLET::::- Entertainment: Audiobooks have become a popular form of travel entertainment for families or commuters.\n\nSection::::Charitable and nonprofit organizations.\n", "All members must be certified as needing a reading accommodation by a competent authority. An adult or a household can obtain a Learning Ally annual membership and gain unlimited access to our audio book library for $135. Hardship waivers are available for those who qualify. Institutional memberships are also provided at various fee levels to public and private schools, colleges and universities. Learning Ally Audio software for mainstream mobile devices and Learning Ally's Reading App for Mac and PC are available to members free of charge. \n\nSection::::Software.\n\nWhy Human-Read Audiobooks?\n", "Another innovation was the creation of LibriVox in 2005 by Montreal-based writer Hugh McGuire who posed the question on his blog: \"Can the net harness a bunch of volunteers to help bring books in the public domain to life through podcasting?\" Thus began the creation of public domain audiobooks by volunteer narrators. By the end of 2017, LibriVox had a catalog of over 12,000 works and was producing about 1,000 per year.\n", "In September 2015, Google acquired Oyster, a subscription-based ebook service. As a part of the acquisition, Oyster shut down its existing service in early 2016, and its founders joined Google Play Books in New York.\n\nIn January 2018, Google began selling audiobooks that can be listened via the app.\n\nIn June 2018, Google reopened its publisher program to new sign-ups. To curb piracy, text of new books would now compared with that of other books in the store.\n\nSection::::History.:Reseller program.\n", "Spoken audio has been available in schools and public libraries and to a lesser extent in music shops since the 1930s. Many spoken word albums were made prior to the age of cassette tapes, compact discs, and downloadable audio, often of poetry and plays rather than books. It was not until the 1980s that the medium began to attract book retailers, and then book retailers started displaying audiobooks on bookshelves rather than in separate displays.\n\nSection::::Etymology.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04370
If we have explored less than five percent of the world’s oceans, how can we label the bottom of the Mariana Trench as the deepest point of the oceans?
We have not thoroughly explored the world's oceans. But that doesn't mean we haven't found out QUITE a lot about them. Scientists can argue, however, that there is so much MORE to know. We only recently found out about a major underwater current near Iceland that we hadn't even been factoring into weather models. This is a big deal. So have we learned much? Sure. Do we need to know a zillion more things? Yes. As to depths: we have done sonar readings all over the globe at this point. Long ago we did "soundings", which involved dropping something down, and there was a way to measure depth efficiently doing this (that I won't get into). This was time-consuming. Now we send sound down in a pulse and measure how long it takes for it to hit the bottom and bounce back up to the people listening with instruments up top. Sonar. In this manner we have been able to put together what we are pretty darn sure is a complete record of the ocean depths, with the Mariana Trench topping the list as the big badass deep dark place.
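As a concrete illustration of the echo-sounding idea in the answer above, here is a minimal sketch of the depth calculation. The speed of sound and the echo time are assumed example values; real surveys also correct for how sound speed changes with temperature, salinity and pressure down the water column, which is one source of the "±" uncertainties quoted in the passages below.

# Echo sounding: the ping travels down to the seabed and back, so the depth is
# half the round trip. Both numbers below are assumed example values.
speed_of_sound_in_seawater = 1500.0   # metres per second, a common rough figure
round_trip_time = 14.6                # seconds from ping to returned echo

depth = speed_of_sound_in_seawater * round_trip_time / 2
print(f"Estimated depth: {depth:.0f} m")   # ~10,950 m, in Challenger Deep territory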
[ "In 1951, \"Challenger II\" surveyed the trench using echo sounding, a much more precise and vastly easier way to measure depth than the sounding equipment and drag lines used in the original expedition. During this survey, the deepest part of the trench was recorded when the \"Challenger II\" measured a depth of at , known as the Challenger Deep.\n\nIn 1957, the Soviet vessel reported a depth of at a location dubbed the \"Mariana Hollow\".\n\nIn 1962, the surface ship M.V. \"Spencer F. Baird\" recorded a maximum depth of using precision depth gauges.\n", "BULLET::::- In 1998, a regional bathymetric survey of the Challenger Deep was conducted by the Deep Sea Research Vessel RV \"Kairei\", from the Japan Agency for Marine-Earth Science and Technology, using a SeaBeam 2112 multibeam echosounder. The regional bathymetric map made from the data obtained in 1998 shows that the greatest depths in the eastern, central, and western depressions are ±, ±, and ±, respectively, making the eastern depression the deepest of the three.\n", "On 1 June 2009, sonar mapping of the Challenger Deep by the Simrad EM120 sonar multibeam bathymetry system for deep water, mapping aboard the \"RV Kilo Moana\" (mothership of the Nereus vehicle), indicated a spot with a depth of . The sonar system uses phase and amplitude bottom detection, with an accuracy of better than 0.2% of water depth across the entire swath (implying that the depth figure is accurate to ± ).\n", "In 1984, the Japanese survey vessel \"Takuyō\" (拓洋) collected data from the Mariana Trench using a narrow, multi-beam echo sounder; it reported a maximum depth of , also reported as ±. Remotely Operated Vehicle \"KAIKO\" reached the deepest area of the Mariana Trench and made the deepest diving record of on March 24, 1995.\n", "BULLET::::- In 1962, the US Navy research vessel RV \"Spencer F. Baird\" using a frequency-controlled depth recorder surveyed a maximum depth of ± at .\n\nBULLET::::- In 1975 and 1980, the US Navy research vessel RV \"Thomas Washington\" using a precision depth recorder with satellite positioning surveyed a maximum depth of ± at .\n\nBULLET::::- In 1984, the survey vessel \"Takuyo\" from the Hydrographic Department of Japan, used a narrow, multibeam echo sounder to take a measurement of ± at .\n", "Section::::Notable missions.\n\nThe first manned exploration to reach Challenger Deep, the deepest known part of the ocean located in the Mariana Trench, was accomplished in 1960 by Jacques Piccard and Don Walsh. They reached a maximum depth of 10,911 meters in the bathyscaphe \"Trieste\".\n\nJames Cameron also reached the bottom of Mariana Trench in March 2012 using the \"Deepsea Challenger\". The descent of the \"Deepsea Challenger\" was unable to break the deepest dive record set by Piccard and Walsh by about 100 meters; however, Cameron holds the record for the deepest solo dive.\n", "In 1974, \"Alvin\" (operated by the Woods Hole Oceanographic Institution and the Deep Sea Place Research Center), the French bathyscaphe \"Archimède\", and the French diving saucer \"CYANA\", assisted by support ships and , explored the great rift valley of the Mid-Atlantic Ridge, southwest of the Azores. 
About 5,200 photographs of the region were taken, and samples of relatively young solidified magma were found on each side of the central fissure of the rift valley, giving additional proof that the seafloor spreads at this site at a rate of about per year (see plate tectonics,).\n", "The deepest point in the ocean is the Mariana Trench, located in the Pacific Ocean near the Northern Mariana Islands. Its maximum depth has been estimated to be (plus or minus 11 meters; see the Mariana Trench article for discussion of the various estimates of the maximum depth.) The British naval vessel \"Challenger II\" surveyed the trench in 1951 and named the deepest part of the trench the \"Challenger Deep\". In 1960, the Trieste successfully reached the bottom of the trench, manned by a crew of two men.\n\nSection::::Earth's global ocean.:Oceanic maritime currents.\n", "To resolve the debate regarding the deepest point of the Indian Ocean the Diamantina Fracture Zone was surveyed by the Five Deeps Expedition in March 2019 by the Deep Submersible Support Vessel \"DSSV Pressure Drop\", equipped with a Kongsberg SIMRAD EM124 multibeam echosounder system. Using the multibeam echosounder system and direct measurement by an ultra-deep-sea lander a maximum water depth of ± at for the Dordrecht Deep was recorded. This is shallower than previously thought when historically measured by other, less precise, methods. The gathered data will be donated to the GEBCO Seabed 2030 initiative. The survey was part of the Five Deeps Expedition. The objective of this expedition is to thoroughly map and visit the deepest points of all five of the world's oceans by the end of September 2019.\n", "A 2014 study by Gardner et al. concludes that with the best of 2010 multibeam echosounder technologies a depth uncertainty of ± (95% confidence level) on nine degrees of freedom and a positional uncertainty of ± remains. The deepest point and its location recorded in the 2010 sonar mapping conducted by the US Center for Coastal & Ocean Mapping/Joint Hydrographic Center (CCOM/JHC) aboard is at ().\n", "BULLET::::- In 1999 and 2002, \"Kairei\" revisited the Challenger Deep. The cross track survey in the 1999 \"Kairei\" cruise shows that the greatest depths in the eastern, central, and western depressions are ±, ±, and ±, respectively, which supports the results of the 1998 survey. The detailed grid survey in 2002 showed that the deepest site is located in the eastern part of the eastern depression around , with a depth of ±, about southeast of the deepest site determined by the survey vessel \"Takuyo\" in 1984 and about east of the deepest place determined by the 1998 \"Kairei\" survey.\n", "The 2009 and 2010 maximal depths were not confirmed by the series of dives \"Nereus\" made to the bottom during an expedition in May–June 2009. The direct descent measurements by several expeditions which have reported from the bottom, have fixed depths in a narrow range from () to (), to (\"Nereus\") to () to ± (\"DSV Limiting Factor\"). Although an attempt was made to correlate locations, it could not be absolutely certain that \"Nereus\" (or the other descents) reached exactly the same points found to be maximally deep by the sonar/echo sounders of previous mapping expeditions, even though one of these echo soundings was made by \"Nereus\" mothership.\n", "The trench was first sounded during the \"Challenger\" expedition in 1875, using a weighted rope, which recorded a depth of . 
In 1877, a map was published called \"Tiefenkarte des Grossen Ozeans\" (\"Deep map of the Great Ocean\") by Petermann, which showed a \"Challenger Tief\" (\"Challenger deep\") at the location of that sounding. In 1899, USS Nero, a converted collier, recorded a depth of .\n", "The trench is long and has a maximum depth of or 5.20 miles in the Brownson Deep, which is the deepest point in the Atlantic Ocean and the deepest point not in the Pacific Ocean. On December 19, 2018, its deepest point was identified by the \"DSSV Pressure Drop\" using a state-of-the-art Kongsberg EM124 multibeam sonar and then directly visited and its depth verified by the manned submersible Deep-Submergence Vehicle \"DSV Limiting Factor\" (a Triton 36000/2 model submersible).\n", "On 23 March 1875, at sample station number 225 located in the southwest Pacific Ocean between Guam and Palau, the crew recorded a sounding of deep, which was confirmed by an additional sounding. As shown by later expeditions using modern equipment, this area represents the southern end of the Mariana trench and is one of the deepest known places on the ocean floor.\n", "On 5 April 2019 Victor Vescovo made the first manned descent to the deepest point of the trench in the Deep-Submergence Vehicle \"DSV Limiting Factor\" (a Triton 36000/2 model submersible) and measured a depth of ± at 11°7'44\" S, 114°56'30\" E. The operating area was surveyed by the support ship, the Deep Submersible Support Vessel \"DSSV Pressure Drop\", with a Kongsberg SIMRAD EM124 multibeam echosounder system. The gathered data will be donated to the GEBCO Seabed 2030 initiative. The dive was part of the Five Deeps Expedition. The objective of this expedition is to thoroughly map and visit the deepest points of all five of the world's oceans by the end of September 2019.\n", "On 24 March 1995, the Japanese robotic deep-sea probe \"Kaikō\" broke the depth record for unmanned probes when it reached close to the surveyed bottom of the Challenger Deep. Created by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), it was one of the few unmanned deep-sea probes in operation that could dive deeper than . The manometer measured depth of ± at for the Challenger Deep is believed to be the most accurate measurement taken yet. \"Kaikō\" also collected sediment cores containing marine organisms from the bottom of the deep. \"Kaikō\" made many unmanned descents to the Mariana Trench during three expeditions in 1995, 1996 and 1998. The greatest depth measured by \"Kaikō\" in 1996 was at and in 1998 at . It was lost at sea off Shikoku Island during Typhoon Chan-Hom on 29 May 2003.\n", "Section::::Descents.:Unmanned descents.:\"ABISMO\".\n\nOn 3 June 2008, the Japanese robotic deep-sea probe \"ABISMO\" (Automatic Bottom Inspection and Sampling Mobile) reached the bottom of the Mariana Trench about east of the Challenger Deep and collected core samples of the deep sea sediment and water samples of the water column. Created by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), it was the only unmanned deep-sea probe in use that could dive deeper than after that of \"Nereus\". 
During \"ABISMO\"s deepest Mariana Trench dive its manometer measured a depth of ±\n\nSection::::Descents.:Unmanned descents.:\"Nereus\".\n", "BULLET::::- On 1 June 2009, sonar mapping of the Challenger Deep by the Kongsberg Simrad EM 120 sonar multibeam bathymetry system for deep water (300 – 11,000 metres) mapping aboard (mothership of the underwater vehicle ) indicated a depth of . The sonar system uses phase and amplitude bottom detection, which is capable of an accuracy of 0.2% to 0.5% of water depth across the entire swath. In 2014 the multibeam bathymetry data of this sonar mapping have yet to be publicly released, so the data are not available for comparisons with other soundings.\n", "The trench is long and has a maximum depth of below sea level at , as measured by a Kongsberg EM124 multibeam sonar from February 2–7, 2019 during the Five Deeps Expedition. This measurement was made during the first complete sonar mapping of the trench which covered its entire length, with a measurement error of +/- . It is noteworthy that the deepest point of the South Sandwich Trench is only shallower than the deepest point in the Puerto Rico Trench, which hosts the deepest point in the Atlantic at the Brownson Deep.\n", "BULLET::::- Darwin Mounds – A large field of undersea sand mounds off the north west coast of Scotland\n\nBULLET::::- Darwin's Arch – A natural rock arch feature situated to the southeast of Darwin Island in the Pacific Ocean\n\nBULLET::::- Florida Platform – A flat geological feature with the emergent portion forming the Florida peninsula\n\nBULLET::::- Hawaiian Islands – An archipelago in the North Pacific Ocean, currently administered by the US state of Hawaii (archipelago)\n\nBULLET::::- Milwaukee Deep – The deepest part of the Atlantic Ocean – part of the Puerto Rico Trench\n", "The deep trenches or fissures that plunge down thousands of meters below the ocean floor (for example, the midoceanic trenches such as the Mariana Trench in the Pacific) are almost unexplored. Previously, only the bathyscaphe \"Trieste\", the remote control submarine \"Kaikō\" and the \"Nereus\" have been able to descend to these depths. However, as of March 25, 2012 one vehicle, the \"Deepsea Challenger\" was able to penetrate to a depth of 10,898.4 meters (35,756 ft).\n\nSection::::Ecosystem.\n", "In 1960, the Bathyscaphe \"Trieste\" descended to the bottom of the Mariana Trench near Guam, at , the deepest known spot in any ocean. If Mount Everest (8,848 metres) were submerged there, its peak would be more than a mile beneath the surface. The \"Trieste\" was retired, and for a while the Japanese remote-operated vehicle (ROV) Kaikō was the only vessel capable of reaching this depth. It was lost at sea in 2003. In May and June 2009, the hybrid-ROV (HROV) \"Nereus\" returned to the Challenger Deep for a series of three dives to depths exceeding 10,900 meters.\n", "The Five Deeps Expedition objective is to thoroughly map and visit the deepest points of all five of the world's oceans by the end of September 2019. On 28 April 2019, explorer Victor Vescovo descended to the 'Eastern Pool' of the Challenger Deep in the Deep-Submergence Vehicle \"DSV Limiting Factor\" (a Triton 36000/2 model submersible). Between 28 April and 4 May 2019, the \"Limiting Factor\" completed four dives to the bottom of Challenger Deep. 
The Five Deeps Expedition estimated maximum depths of ± and ± by direct measurements and a survey of the operating area by the support ship, the Deep Submersible Support Vessel \"DSSV Pressure Drop\", with a Kongsberg SIMRAD EM124 multibeam echosounder system. The gathered data is subject to further analysis, may be revised in the future, and will be donated to the GEBCO Seabed 2030 initiative.\n", "In 2014, a study was conducted regarding the determination of the depth and location of the Challenger Deep based on data collected previous to and during the 2010 sonar mapping of the Mariana Trench with a Kongsberg Maritime EM 122 multibeam echosounder system aboard USNS \"Sumner\". This study by James V. Gardner et al. of the Center for Coastal & Ocean Mapping-Joint Hydrographic Center (CCOM/JHC), Chase Ocean Engineering Laboratory of the University of New Hampshire splits the measurement attempt history into three main groups: early single-beam echo sounders (1950s-1970s), early multibeam echo sounders (1980s - 21st century), and modern (i.e., post-GPS, high-resolution) multibeam echo sounders. Taking uncertainties in depth measurements and position estimation into account, the raw data of the 2010 bathymetry of the Challenger Deep vicinity, consisting of 2,051,371 soundings from eight survey lines, was analyzed. The study concludes that with the best of 2010 multibeam echosounder technologies after the analysis a depth uncertainty of ± (95% confidence level) on 9 degrees of freedom and a positional uncertainty of ± (2drms) remain, and the location of the deepest depth recorded in the 2010 mapping is at . The depth measurement uncertainty is a composite of measured uncertainties in the spatial variations in sound-speed through the water volume, the ray-tracing and bottom-detection algorithms of the multibeam system, the accuracies and calibration of the motion sensor and navigation systems, estimates of spherical spreading, attenuation throughout the water volume, and so forth.\n" ]
[ "We have explored less than five percent of the world’s oceans" ]
[ "While we have not thoroughly explored the world's oceans, we have found out quite a lot about them." ]
[ "false presupposition" ]
[ "We have explored less than five percent of the world’s oceans" ]
[ "false presupposition" ]
[ "While we have not thoroughly explored the world's oceans, we have found out quite a lot about them." ]
2018-04307
Why is not participating in the thread when coming from, say, r/bestof, important?
If you come across a thread organically, or as a normal subscriber to a community, there is no issue. But sometimes when threads from one community are crosslinked to another, that can result in a group of people from the latter suddenly flooding into the former when they: A) are not otherwise involved in that conversation and B) are not otherwise part of that community. The nature of the second sub, the one the thread was linked to, can bias how those people view it. For example, if you are a member of r/worstof, you *expect* to see examples of horrible behavior or horrible opinions, so anything posted there you are going to read in that light. You may walk away from that conversation with a very different viewpoint than someone who was part of that conversation from the beginning, in the context of the original sub it was posted to. There is also the issue of brigading. It is against the site rules for you to round up a bunch of people just to go to a sub to downvote or upvote a post or comment. Yet that is a natural tendency when you take a post or comment and link it to subs like r/bestof, r/subredditdrama, etc. To prevent brigading, most of these types of subs have rules against participating in or voting on the linked conversations, and that is why "no participation" mode exists.
[ "On the other hand, in some situations, any trimming or editing of the original message may be inappropriate. For example, if the reply is being copied to a third person who did not see the original message, it may be advisable to quote it in full; otherwise the trimmed message may be misinterpreted by the new recipient, for lack of context.\n\nAlso, when replying to a customer or supplier, it may be advisable to quote the original message in its entirety, in case the other party somehow failed to keep a copy of it.\n\nSection::::Placement of replies.\n", "Architectures can also be oriented to give editorial control to a group or individual. Many email lists are worked in this fashion (e.g., Freecycle). In these situations, the architecture usually allows, but does not require that contributions be moderated. Further, moderation may take two different forms: reactive or proactive. In the reactive mode, an editor removes posts, reviews, or content that is deemed offensive after it has been placed on the site or list. In the proactive mode, an editor must review all contributions before they are made public.\n", "In particular, when replying to a message that already included quoted text, one should consider whether that quoted material is still relevant. For example:\n\nThe quote from Mary's message is relevant to Peter's reply, but not to Joe's reply. The latter could have been trimmed to\n", "However, any attempt from individuals or groups within or outside of the produsage group to capitalize on the content of information shared must be avoided. Any content that is worked upon by a community must remain easily accessible and that edits or modifications to the content must be available under similar conditions. In addition, any contributions made by participants to the shared content must be rewarded and recognized whenever appropriate.\n", "One feature of the story queue was \"edit mode\", in which a story was protected from voting for a period of time during which the author could make changes. Comments could still be made on the story to suggest changes before voting began. These comments were distinguished as being \"editorial\" or \"topical\".\n", "According to RFC 1855, a message can begin with an abbreviated summary; i.e. a post can begin with a paraphrasing instead of quoting selectively. Specifically, it says:\n\nInterleaved reply combined with top-posting combines the advantages of both styles. However this also results in some portions of the original message being quoted twice, which takes up extra space and may confuse the reader.\n", "The interleaved reply style can require more work in terms of labeling lines, but possibly less work in establishing the context of each reply line. It also keeps the quotes and their replies close to each other and in logical reading order, and encourages trimming of the quoted material to the bare minimum. This style makes it easier for readers to identify the points of the original message that are being replied to; in particular, whether the reply misunderstood or ignored some point of the original text. It also gives the sender freedom to arrange the quoted parts in any order, and to provide a single comment to quotations from two or more separate messages, even if these did not include each other.\n", "When replying to long discussions, particularly in newsgroup discussions, quoted text from the original message is often trimmed so as to leave only the parts that are relevant to the reply — or only a reminder thereof. 
This practice is sometimes called \"trim-posting\" or \"edited posting\", and is recommended by some manuals of posting etiquette.\n", "When an author, usually a journalist, posts threads via Twitter, users are able to respond to each 140- or 280-character tweet in the thread, often before the author posts the next message. This allows the author the option of including the feedback as part of subsequent messages.\n\nSection::::Disadvantages.\n\nSection::::Disadvantages.:Reliability.\n\nAccurate threading of messages requires the email software to identify messages that are replies to other messages.\n", "By contrast, excessive indentation of interleaved and bottom posting may turn difficult to interpret. If the participants have different stature such as manager vs. employee or consultant vs. client, one person's cutting apart another person's words without the full context may look impolite or cause misunderstanding.\n", "Instead of an attribution line, one may indicate the author by a comment in brackets, at the beginning of the quotation:\n\nAnother alternative, used in Fidonet and some mail user agents, is to place the initials of the author before the quoting marker. This may be used with or without attribution lines:\n\nSection::::Trimming and reformatting.\n", "In forwarding it is sometimes preferred to include the entire original message (including all headers) as a MIME attachment, while in top-posted replies these are often trimmed or replaced by an attribution line. An untrimmed quoted message is a weaker form of transcript, as key pieces of meta information are destroyed. (This is why an ISP's Postmaster will typically insist on a \"forwarded\" copy of any problematic e-mail, rather than a quote.) These forwarded messages are displayed in the same way as top-posting in some mail clients.\n", "For a long time the traditional style was to post the answer below as much of the quoted original as was necessary to understand the reply (bottom or inline). Many years later, when email became widespread in business communication, it became a widespread practice to reply above the entire original and leave it (supposedly untouched) below the reply.\n\nWhile each online community differs on which styles are appropriate or acceptable, within some communities the use of the \"wrong\" method risks being seen as a breach of netiquette, and can provoke vehement response from community regulars.\n\nSection::::Quoting previous messages.\n", "It is not uncommon during discussions concerning top-posting vs. bottom-posting to hear quotes from \"Netiquette Guidelines (RFC 1855)\". While many RFCs are vetted and approved though a committee process, some RFCs, such as RFC 1844, are just \"Informational\" and in reality, sometimes just personal opinions. (Additional information on \"Informational\" RFCs can be found in RFC 2026, under \"4.2.2 Informational\" and \"4.2.3 Procedures for Experimental and Informational RFCs\".) The nature of RFC 1855 should be considered while reading the following discussion.\n", "This reply quotes two messages, one by Nancy (itself a reply to Peter) and one by Peter (itself a reply to Mary).\n\nMany mail agents will add these attribution lines automatically to the top of the quoted material. Note that a newly added attribution line should not get the quotation marker, since it is not part of the quoted text; so that the level indicator of the attribution line is always one less than the corresponding text. 
Doing otherwise may confuse the reader and also e-mail interfaces that choose the text color according to the number of leading markers.\n", "Messages within a thread do not always provide the user with the same options as individual messages. For example, it may not be possible to move, star, reply to, archive, or delete individual messages that are contained within a thread.\n\nThe lack of individual message control can prevent messaging systems from being used as to-do lists (a common function of email folders). Individual messages that contain information relevant to a to-do item can easily get lost in a long thread of messages.\n\nSection::::Disadvantages.:Parallel Discussions.\n", "If the original message is to be quoted in full, for any reason, bottom-posting is usually the most appropriate format — because it preserves the logical order of the replies and is consistent with the Western reading direction from top to bottom.\n", "Although the site's moderation policy is publicly available as part of the moderator manual, the site has been criticised for the excessive dispersion of policy-related material, such as the FAQ, the Bill of Rights, the moderator list and the Community Moderation threads, leading to reduced transparency. In response, the site's administrators posted a bulletin of all moderation-related content on the site on the homepage.\n\nSection::::Technical details.\n", "Users of mobile devices, like smartphones, are encouraged to use top-posting because the devices may only download the beginning of a message for viewing. The rest of the message is only retrieved when needed, which takes additional download time. Putting the relevant content at the beginning of the message requires less bandwidth, less time, and less scrolling for the user.\n", "Individuals are often singled out for abuse by Czar or Empress. Verbal abuse is frequently heaped upon writers of remarkably obscene or distasteful entries, and individuals who whine about the judging (see Russell Beland) or overtly lobby for their own entries. The Empress is constantly on the look out for flagrant plagiarism (defined as \"being in touch with one's inner Google\"), the penalty for which is severe admonition and retribution.\n\nSection::::Prizes.\n", "Top- and bottom-posting are sometimes compared to traditional written correspondence in that the response is a single continuous text, and the whole original is appended only to clarify which letter is being replied to. Customer service e-mail practices, in particular, often require that all points be addressed in a clear manner without quoting, while the original e-mail message may be included as an attachment. Including the whole original message may be necessary also when a new correspondent is included in an ongoing discussion. Especially in business correspondence, an entire message thread may need to be forwarded to a third party for handling or discussion. On the other hand, in environments where the entire discussion is accessible to new readers (such as newsgroups or online forums), full inclusion of previous messages is inappropriate; if quoting is necessary, the interleaved style is probably best.\n", "Top-posting preserves an apparently unmodified transcript of a branch in the conversation. Often all replies line up in a single branch of a conversation. The top of the text shows the latest replies. 
This appears to be advantageous for business correspondence, where an e-mail thread can dupe others into believing it is an \"official\" record.\n", "Some style guides recommend that, as a general rule, quoted material in replies should be trimmed or summarized as much as possible, keeping only the parts that are necessary to make the readers understand the replies. That of course depends on how much the readers can be assumed to know about the discussion. For personal e-mail, in particular, the subject line is often sufficient, and no quoting is necessary; unless one is replying to only some points of a long message.\n", "Top-posting is a natural consequence of the behavior of the \"reply\" function in many current e-mail readers, such as Microsoft Outlook, Gmail, and others. By default, these programs insert into the reply message a copy of the original message (without headers and often without any extra indentation or quotation markers), and position the editing cursor above it. Moreover, a bug present on most flavours of Microsoft Outlook caused the quotation markers to be lost when replying in plain text to a message that was originally sent in HTML/RTF. \n", "Section::::Background and release.\n" ]
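The passages above describe the quoting conventions mechanically: reply depth is encoded as leading ">" markers, an attribution line sits one level shallower than the text it introduces, and some e-mail interfaces pick a text color from the number of leading markers. As a rough, purely illustrative sketch of that depth calculation (not taken from any real mail client), in Python:

def quote_depth(line):
    """Count leading '>' markers, skipping spaces between them."""
    depth = 0
    for ch in line:
        if ch == ">":
            depth += 1
        elif ch in " \t":
            continue          # tolerate "> > quoted text" style spacing
        else:
            break
    return depth

reply = [
    "Peter wrote:",                      # attribution line at depth 0 introduces depth-1 text
    "> Nancy wrote:",                    # attribution line at depth 1 introduces depth-2 text
    "> > Are we still meeting today?",
    "> Yes, at noon.",
    "Noon works for me too.",
]
for line in reply:
    print(quote_depth(line), line)

A client coloring quoted text would map each depth to a color; the attribution-line rule in the passage simply means the "Peter wrote:" line carries one fewer marker than Peter's quoted words.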
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-19478
Who in luxury-branded clothing companies determines the price of their items? And what is the process like?
Depends on the company. If it's very small - just a few people - then it's just an executive decision by whoever is running things. But in large companies, pricing is approached as its own discipline. Studies are done to analyze how people will respond to various price points; get your product in front of 1,000 people, offer it at varying prices, and see how many people show interest. Then prospectuses are written up - detailed reports which predict what profits will look like at various prices, who will be buying the item, and what sales look like long-term and short-term. When a decision gets made, it is probably done by committee with several levels of approval - but again, this differs from company to company.
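A rough sketch of the kind of price-point study described above: show the product to groups of shoppers at different prices, record interest, and estimate which price gives the highest expected profit per shopper. All figures below (prices, interest counts, unit cost) are invented for illustration, not data from any real study.

# Hypothetical price test: each price point was shown to a group of shoppers,
# and we recorded how many said they would buy. All numbers are invented.
unit_cost = 120.0  # assumed cost to make one item

price_tests = [
    # (price, shoppers shown, shoppers interested)
    (250.0, 1000, 420),
    (400.0, 1000, 260),
    (600.0, 1000, 140),
    (900.0, 1000, 55),
]

def expected_profit_per_shopper(price, shown, interested):
    conversion = interested / shown
    return conversion * (price - unit_cost)

best = max(price_tests, key=lambda t: expected_profit_per_shopper(*t))
for price, shown, interested in price_tests:
    print(f"${price:>6.0f}: expected profit per shopper "
          f"${expected_profit_per_shopper(price, shown, interested):.2f}")
print(f"Best tested price point: ${best[0]:.0f}")

In this made-up data the mid-range price wins even though fewer people buy, which is exactly the trade-off such a prospectus would spell out.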
[ "Based on how much consumers are willing to pay, buyers can determine the optimum cost price which they should expect to pay. Suppliers take a different approach to the optimum cost price. The suppliers determine the price based on how much it will cost to produce the garment. The manufacturer's salespeople are typically able to accurately anticipate the price that buyers will expect to pay.\n", "\"The Role of the Fashion Buyer\" states that one part of a buyer's job is to negotiate prices and details of delivery with the supplier. Some retailers provide their buyers with negotiation training courses. When the buyer meets with the supplier, the sales executive of the garment manufacturer submits a \"cost price\" for a garment. Then the buyer calculates the price that the garment will need to be sold for in order to reach the retailer's mark-up price. The markup price is the difference between the selling price and the manufacturer's cost price. The retail selling price is typically 2.5 or 3 times the price of the manufacturer's cost price. While it may seem like a retailer is making a large profit from this markup, the proceeds are used to cover many costs such as the buyer's salary, store rent, utility bills, and office costs. The markup must be high enough to cover the retailer's expense of \"housing\" a garment on a rack or a shelf for anywhere from a few days to an entire season, plus the risk that some garments will inevitably have to be marked down to cost, to get them out of the store.\n", "Buyers and suppliers both want to make the largest profits for their companies. The buyers want to buy the garments at the lowest possible price and suppliers want to sell the garments at a high price. Suppliers and buyers must agree on a price and/ financing terms and in some cases they may not agree. If the two cannot agree and the buyer cannot reach a price within the retailer's target margin, the buyer may ask the buying manager for permission to buy the garment at a higher price or else the style may be dropped.\n\nSection::::Negotiation.\n", "The buyer must remember that the retailer and manufacturer share similar goals and that the two need to form an honest relationship based on respect and integrity. Buyers not only select clothing, but are also involved in the ordering and delivering of garments. In some cases, the buyers may also be involved in the product development and display processes.\n\nSection::::Forecasting.\n", "up the apparel stocks starting prices and consumers are the ones that choose to buy or not a product sold at a certain price. Despite increasingly sophisticated tools like computer modelling for measuring and predicting consumer behavior and tweaking price and promotional activity, retailers can never be entirely certain of their sell through rate or the average price realized. Retailers hedge risk by seeking vendor financing or vendor buy-back arrangements as mentioned above, which distributes some of the risk of unsold merchandise back up the supply chain to the producer or wholesaler; they also open alternate channels such as outlet stores to liquidate unsold merchandise while freeing up floor space for new arrivals. 
Some \"fast fashion\" retailers, like Zara attempt to control their whole supply chain from design to production to the retail store, in order to practice just in time production, or something close to it; in cases of complete integration, there is no \"wholesale fashion distribution,\" as the retailer is its own manufacturer and wholesaler.\n", "One of the main roles of a buyer is to negotiate with the clothing suppliers. Buyers may negotiate with suppliers on a regular, sometimes daily, basis. It is important for a buyer to establish a strong relationship with the suppliers since it will be beneficial to both parties. Suppliers and buyers have the same goal which is to sell as many garments as possible to customers so they must work together to achieve this goal. Buyers rely heavily on suppliers to \"enable ranges to be bought successfully\".\n", "Materials and labor may be allocated based on past experience, or standard costs. Where materials or labor costs for a period fall short of or exceed the expected amount of standard costs, a \"variance\" is recorded. Such variances are then allocated among cost of goods sold and remaining inventory at the end of the period.\n", "Most wholesalers get their fashion stocks from the producers that commercialize the latest collections in bulk, at volume discounts. Others purchase overstocks and closeout merchandise from retailers or distributors. Their clients are the resellers that purchase those stocks and sell it to the final consumers. Often, this process is financed through merchant factoring or vendor finance. In other cases, the merchant is assessed \"counter rent\" for a \"store-within-a-store\" concept, common in the cosmetics industry, but also not unheard of in clothing. In other cases, the vendor agrees to buy back unsold merchandise from the retailer — this is a common arrangement for higher-value seasonal clothing, like designer coats.\n", "In comparison to manufacturer merchandisers, retailer merchandisers also begin their process by forecasting industry and fashion trends with their target markets in mind. Sales are predicted in retail dollars and beginning of the month (BOM) stock. Similar to manufacturer merchandisers, retailer merchandisers must make all decisions regarding the final consumer. Decisions are made based on the past, present, and future of the economy, sales, industry and fashion trends, region and world events, and the fashion cycle. When selecting merchandise to offer, retailer merchandisers will consider their target markets' color, style, size, and cost preferences. Once accurate decisions are made, retailer merchandisers will order goods from vendors or produce private labels. Following shipment, ordered seasonal apparel assortments are strategically arranged on sales floors, or visually merchandised.\n", "In the retail industry, a buyer is an individual who selects what items are stocked. Buyers usually work closely with designers and their designated sales representatives and attend trade fairs, wholesale showrooms and fashion shows to observe trends. They may work for large department stores, chain stores or smaller boutiques. For smaller independent stores, a buyer may participate in sales as well as promotion, whereas in a major fashion store there may be different levels of seniority such as trainee buyers, assistant buyers, senior buyers and buying managers, and buying directors. 
Decisions about what to stock can greatly affect fashion businesses.\n", "Section::::Background.\n\nThe role of a buyer is influenced by the type of retail and business. A buyer is required to possess visual creativity, analytical skills, negotiation skills, business acumen, and a keen awareness of fashion. Retail businesses are generally classified as manufacturers, wholesalers, and retailers. These organizations vary in the scope of their activities and their range of responsibilities within their particular segment of the market.\n", "Under the traditional model, garments and lines are compiled into fixed inventories for a particular season before delivery to store; the price of those garments is fixed, and remains relatively static until clearance-sale markdowns are employed to make way for new stock. The Fast Fashion model is more flexible, and the concept of seasons is often sub-divided, with mini-collections being delivered to stores on a more regular basis. This is designed to mitigate some of the impact of season-ending clearance sales (and the associated markdown cost implications).\n\nSection::::Traditional Production and Sampling Process.:Multinational Supply Chains.\n", "Some buyers meet regularly to update each other about price ranges as well as to receive or give advice. Buyers often travel together so that they can advise one another on ranges and to coordinate ranges. For instance, a buyer for women's jackets may coordinate with the buyer for women's blouses since the two garments are frequently worn and purchased together.\n\nSection::::Negotiation.\n", "Clothing manufacturers practice fashion merchandising differently than retailers. Manufacturer merchandisers forecast customers' preferences for silhouettes, sizes, colors, quantities, and costs each season. When making decisions, manufacturer merchandisers must keep retailers and end consumers in mind. Following the forecasting stage, manufacturer merchandisers meet with designers to develop products that consumers will purchase most. By referring to the five rights of merchandising, manufacturer merchandisers determine the best fabric, notions, product methods, and promotions for products. These decisions all contribute to final retail costs, which must be affordable to end consumers.\n\nSection::::Retailers.\n", "Section::::Production.:Supply chain, vendor relationships and internal relationships.:Vendor relationships.\n\nThe companies in the fast fashion market also utilize a range of relationships with the suppliers. The product is first classified as \"core\" or \"fashion\". Suppliers close to the market are used for products that are produced in the middle of a season, meaning trendy, \"fashion\" items. In comparison, long-distance suppliers are utilized for cheap, \"core\" items, sometimes referred to as \"capsule\" clothing, that are used in collections every season and have a stable forecast.\n\nSection::::Production.:Supply chain, vendor relationships and internal relationships.:Internal relationships.\n", "The term applies to fashion retail. Off-price is different from other special pricing formats (such as Outlet Store and Discount Store) in that one store might contain a great deal of products, price rates and trademarks. The range of goods is usually measured in millions of product items, whereas the quantity of brands represented is measured in thousands. 
The discount amount is 60-65% on average, reaching up to 90% of the initial price of similar products in brand stores of their respective trademarks and multi-brand boutiques.\n\nSection::::Quality and product origins.\n", "Buying and merchandising team structure differ by the type, history, and size of the organization. The traditional structure of a team assigns a buying team and merchandising team to a specific product area, and there are controllers tasked with directing teams across various areas. Buying teams are further divided based on the product area (e.g., athletic wear, leather goods, etc.), organization division (e.g., men's wear).\n", "BULLET::::- in the right place\n\nBULLET::::- in the right quantities.\n\nBy researching and answering the five rights of merchandising, fashion merchandisers can gain an understanding of what products consumers want, when and where they wish to make purchases, and what prices will have the highest demand. Both fashion retailers and manufacturers utilize the 5Rs.\n\nSection::::Manufacturers.\n", "It takes Cost of Goods Available for Sale and divides it by the number of units available for sale (number of goods from Beginning Inventory + Purchases/production). This gives a Weighted Average Cost per Unit. A physical count is then performed on the ending inventory to determine the number of goods left. Finally, this quantity is multiplied by Weighted Average Cost per Unit to give an estimate of ending inventory cost. The cost of goods sold valuation is the amount of goods sold times the Weighted Average Cost per Unit. The sum of these two amounts (less a rounding error) equals the total actual cost of all purchases and beginning inventory.\n", "To order a made-to-measure garment, the customer's measurements are first taken by a made-to-measure retailer. Then a base pattern is selected that most closely corresponds with the customer’s measurements. This base pattern is altered to match the customer’s measurements. The garment is constructed from this altered pattern.\n", "BULLET::::- New changes: Trump moves Brandy to Fortitude.\n\nBULLET::::- Fortitude project manager: Liza\n\nBULLET::::- Octane project manager: Clint\n\nBULLET::::- Judges: Donald Trump; Juan Betancourt; Catherine Roman\n\nBULLET::::- Results: Fortitude selects the watch which it prices at $69.55, and Octane selects the handbag which it prices at $194.97.\n", "Section::::Identification conventions.\n\nIn some cases, the cost of goods sold may be identified with the item sold. Ordinarily, however, the identity of goods is lost between the time of purchase or manufacture and the time of sale. Determining which goods have been sold, and the cost of those goods, requires either identifying the goods or using a convention to assume which goods were sold. This may be referred to as a cost flow assumption or inventory identification assumption or convention. The following methods are available in many jurisdictions for associating costs with goods sold and goods still on hand:\n", "Section::::Risks.\n\nBusiness always involves risk, especially in a market strongly controlled by powerful fashion houses and manufacturers at one end and fickle consumers at the other, where global supply chains and the seasonality of clothing often mean that clothing must be bought months or a year in advance and credit, transportation and warehousing arranged on tight deadlines. 
Producers are the ones that set\n", "For buyers at department stores like Harrods or Saks, responsibilities may include ensuring that the store is properly stocked with a wide variety of designer clothing. However, if you support a fashion brand such as Tommy Hilfiger, you may be responsible for directing the entire product development process and then managing the delivery of the products. Your role is also heavily influenced by the structure of your organization; for example, a Christian Dior buyer in the Paris office may supervise the entire development process of the collection. However, in the New York office, a buyer may only source completed product that is suitable for the American market.\n", "Buyers work alongside their buyer colleagues because they receive helpful advice from one another. A buyer can have frequent meetings with the buying manager to discuss the development of the range of garments. Buyers also interact often with the merchandising, design, quality control, and fabric technology departments. A buyer meets with the finance, marketing, and retail sales personnel on a less frequent basis.\n" ]
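One concrete, checkable piece of the passages above is the markup arithmetic: the retail selling price is typically 2.5 to 3 times the manufacturer's cost price, and the markup is the difference between the two. A short sketch of that calculation (the cost price is an invented figure):

# Markup arithmetic as described in the passages above: retail price is
# typically 2.5-3x the manufacturer's cost price; markup is the difference.
cost_price = 40.0  # hypothetical manufacturer's cost price per garment

for multiplier in (2.5, 3.0):
    retail_price = cost_price * multiplier
    markup = retail_price - cost_price
    print(f"multiplier {multiplier}: retail ${retail_price:.2f}, "
          f"markup ${markup:.2f} ({markup / retail_price:.0%} of retail)")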
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-12764
How do paleontologists figure out how long a fossil has been underground and the time frame of its existence?
It's called carbon dating. Carbon (specifically the radioactive isotope carbon-14) decays at a known rate: after each half-life, half of the remaining carbon-14 atoms have decayed, so there are fewer and fewer of them left. By measuring how much hasn't decayed in a sample, paleontologists can work out how long it has been underground -- there is a lot online about carbon dating.
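The half-life idea in the answer above translates into a one-line calculation: if a fraction f of the original carbon-14 remains, the elapsed time is half-life x log2(1/f). A minimal sketch, using the commonly cited carbon-14 half-life of about 5,730 years (note that radiocarbon dating only reaches back on the order of 50,000 years, so much older material is dated with other radiometric methods, such as the potassium-argon dating described in the passages below):

import math

HALF_LIFE_C14 = 5730.0  # years; commonly cited half-life of carbon-14

def age_from_fraction_remaining(fraction_remaining, half_life=HALF_LIFE_C14):
    """Elapsed time given the fraction of the original isotope still present.

    After n half-lives the remaining fraction is (1/2)**n, so
    n = log2(1 / fraction) and age = n * half_life.
    """
    return half_life * math.log2(1.0 / fraction_remaining)

# Invented example: a sample retains 25% of its original carbon-14.
print(f"{age_from_fraction_remaining(0.25):.0f} years")  # two half-lives, about 11,460 years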
[ "BULLET::::4. The fourth assumption is that volume change is proportional to thickness and density. This states that the loss of soil volume, and the degree of compaction during burial, are related to their density or thickness change. Although common sense suggests that volume and density are three dimensional, and thickness it one dimensional, observations on various materials, including fossil plants of known shape (Walton 1936; Briggs and Williams 1981), show that while under conditions of static vertical load, soils and fossils are maintained by pressure at the side.\n", "These data are also used to model basin subsidence rates. Knowing the depth of a hydrocarbon source rock beneath the basin-filling strata allows calculation of the age at which the source rock passed through the generation window and hydrocarbon migration began. Because the ages of cross-cutting trapping structures can usually be determined from magnetostratigraphic data, a comparison of these ages will assist reservoir geologists in their determination of whether or not a play is likely in a given trap.\n", "Comparing the record about the discordance in the record to the full rock column shows the non-occurrence of the missing species and that portion of the local \"rock record\", from the early part of the middle Eocene is missing there. This is one form of discordancy and the means geologists use to compensate for local variations in the rock record. With the two remaining marker species it is possible to correlate rock layers of the same age (early Eocene and latter part of the middle Eocene) in both South Carolina and Virginia, and thereby \"calibrate\" the local rock column into its proper place in the overall geologic record.\n", "A recent technique is the use of CT-scanning on intact specimens to analyze density, where more dense speleothem development indicates higher moisture availability.\n\nSection::::Absolute dating.\n", "Section::::Midden.:Climate change indicators.\n\nZoologists examine the remains of animals in middens to get a sense of the fauna in the neighborhood of the midden, while paleobotanists can reconstruct the vegetation that grew nearby. Middens are considered reliable \"time capsules\" of natural life, centuries and millennia after they occurred. Woodrat middens are composed of many things, including plants macrofossils and fecal pellets.\n", "Section::::Discordant strata example.\n\nCorrecting for discordancies can be done in a number of ways and utilizing a number of technologies or field research results from studies in other disciplines.\n\nIn this example, the study of layered rocks and the fossils they contain is called biostratigraphy and utilizes amassed geobiology and paleobiological knowledge. 
Fossils can be used to recognize rock layers of \"the same or different geologic ages\", thereby coordinating locally occurring geologic stages to the overall geologic timeline.\n", "BULLET::::- Hedberg, H.D., (editor), \"International stratigraphic guide: A guide to stratigraphic classification, terminology, and procedure\", New York, John Wiley and Sons, 1976\n\nSection::::External links.\n\nBULLET::::- International Stratigraphic Chart from the International Commission on Stratigraphy\n\nBULLET::::- USA National Park Service\n\nBULLET::::- Washington State University\n\nBULLET::::- Web Geological Time Machine\n\nBULLET::::- Eon or Aeon, Math Words - An alphabetical index\n\nBULLET::::- The Global Boundary Stratotype Section and Point (GSSP): overview\n\nBULLET::::- Chart of The Global Boundary Stratotype Sections and Points (GSSP): chart\n\nBULLET::::- Geotime chart displaying geologic time periods compared to the fossil record\n", "Most fossil groundwater has been estimated to have originally infiltrated within the Holocene and Pleistocene (10,000–40,000 years ago). Some fossil groundwater is associated with the melting of ice in the time since the last glacial maximum. Dating of groundwater relies on measuring concentrations of certain stable isotopes, including (tritium) and (\"heavy\" oxygen), and comparing values with known concentrations of the geologic past.\n", "Section::::Work.:Stratigraphy and Time Scales.\n", "Other methods for classifying soil fossils rely on geochemical analysis of the soil material, which allows the minerals in the soil to be identified. This is only useful where large amounts of the ancient soil are available, which is rarely the case.\n\nSection::::Records of the various soil groups.\n", "BULLET::::- Oxidation of iron from Fe to Fe by O as the former soil becomes dry and more oxygen enters the soil.\n\nBULLET::::- Drying out of hydrous ferric oxides to anhydrous oxides - again due to the presence of more available O in the dry environment.\n\nThe keys to recognising fossils of various soils include:\n\nBULLET::::- Tubular structures that branch and thin irregularly downward or show the anatomy of fossilised root traces\n\nBULLET::::- Gradational alteration down from a sharp lithological contact like that between land surface and soil horizons\n", "BULLET::::- The longest, highest resolution, stratigraphically continuous, single‐species benthic foraminiferal carbon and oxygen isotope records for the Late Maastrichtian to Early Eocene from a single site in the South Atlantic Ocean, providing information on the evolution of climate and carbon‐cycling during this time period, are presented by Barnet \"et al.\" (2019).\n", "The potassium-argon method was utilized to determine the age of volcanism at Pilot Knob. This method is based upon the decay of the radioactive isotope of potassium (potassium 40) to argon 40, an isotope of argon, an inert gas. By knowing the concentration of potassium in a rock mineral and the amount of argon gas produced by radioactive decay trapped within the minerals, an age can be assigned to the rock because the decay rate of potassium 40 to argon 40 is known from experimental work. 
The age of Pilot Knob volcanism dated through the potassium-argon method is 79.5 +/- 3 million years, and agrees with the age derived by correlation with fossils to other radiometrically dated deposits.\n", "A typical thermochronological study will involve the dates of a number of rock samples from different areas in a region, often from a vertical transect along a steep canyon, cliff face, or slope. These samples are then dated. With some knowledge of the subsurface thermal structure, these dates are translated into depths and times at which that particular sample was at the mineral's closure temperature. If the rock is today at the surface, this process gives the exhumation rate of the rock.\n", "BULLET::::- A plat of the drilling pad showing the slope of natural contours and the location of the mud sump with respect to cut/fill. Dimensions of these items is to be indicated on the plat.\n\nThe application is reviewed for completeness and then proposed permit conditions are sent, with the application materials, to the other natural resource agencies for their review.\n\n45-day comment period\n\nOther natural resource agencies have a 45-day comment period for review of the application and to respond.\n\nBULLET::::- Reclamation security must be submitted before the permit can be issued.\n", "BULLET::::- A study on the age of a bentonite layer from Bed 36 in the Frasnian–Famennian succession at the abandoned Steinbruch Schmidt Quarry (Germany), aiming to determine the precise age of the Frasnian–Famennian boundary and the precise timing of the Late Devonian extinction, is published by Percival \"et al.\" (2018).\n\nBULLET::::- A study on the atmospheric oxygen levels through the Phanerozoic, evaluating whether Romer's gap and the concurrent gap in the fossil record of insects were caused by low oxygen levels, is published by Schachat \"et al.\" (2018).\n", "Section::::Taphonomic biases in the fossil record.:Human biases.\n\nMuch of the incompleteness of the fossil record is due to the fact that only a small amount of rock is ever exposed at the surface of the Earth, and not even most of that has been explored. Our fossil record relies on the small amount of exploration that has been done on this. Unfortunately, paleontologists as humans can be very biased in their methods of collection; a bias that must be identified. Potential sources of bias include,\n", "BULLET::::- A study on changes in mammalian faunal composition and structure during the earliest Paleogene biotic recovery, based on data from four localities in the Hell Creek Formation and Tullock Member of the Fort Union Formation (Montana, United States), is published by Smith \"et al.\" (2018).\n\nBULLET::::- A high-resolution age model for mammalian turnover between the To2 and To3 substages of the Torrejonian across the San Juan Basin is presented by Leslie \"et al.\" (2018).\n", "Section::::Stratigraphic dating.\n", "Section::::Stratigraphic dating.:Residual and intrusive finds.\n", "Soil fossils are usually classified by USDA soil taxonomy. With the exception of some exceedingly old soils which have a clayey, grey-green horizon that is quite unlike any present soil and clearly formed in the absence of O, most fossil soils can be classified into one of the twelve orders recognised by this system. 
This is usually done by means of X-ray diffraction, which allows the various particles within the former soils to be analysed so that it can be seen to which order the soils correspond.\n", "Section::::Stratigraphy.\n", "BULLET::::- Complex patterns of cracks and mineral replacements like those of soil clods (\"peds\") and planar \"cutans\".\n\nSection::::Classification.\n", "This process requires a considerable degree of effort and checking of field relationships and age dates. For instance, there may be many millions of years between a bed being laid down and an intrusive rock cutting it; the estimate of age must necessarily be between the oldest cross-cutting intrusive rock in the fossil assemblage and the youngest rock upon which the fossil assemblage rests.\n\nSection::::Units.\n\nChronostratigraphic units, with examples:\n\nBULLET::::- eonothem – Phanerozoic\n\nBULLET::::- erathem – Paleozoic\n\nBULLET::::- system – Ordovician\n\nBULLET::::- series – Upper Ordovician\n\nBULLET::::- stage – Ashgill\n\nSection::::Differences from geochronology.\n", "Bore holes can be drilled into ore bodies (for example coal seams or gold ore) and either rock samples taken to determine the ore or coal quality at each bore hole location or the wells can be wireline logged to make measurements that can be used to infer quality. Some petrophysicists do this sort of analysis. The information is mapped and used to make mine development plans.\n\nSection::::Methods of analysis.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01532
How does a seed "know" which way is down or up to grow?
Plants can sense gravity using special starch-storing containers (organelles called amyloplasts) in their cells. The starch is dense compared to the rest of the cell, so the containers settle toward whichever side is "down." The containers are entangled in a mesh of actin filaments that can sense which way they have settled. Based on these signals, a hormone (auxin) is redistributed to steer the direction of growth in the root and shoot. This is only one of several proposed mechanisms.
[ "A later and not so widely accepted indicator is the orientation of the seed, which should be pointing vertically by an etrog, except if it was strained by its neighbors; in lemons and hybrids, the seeds are positioned horizontally even when there is enough space.\n", "Section::::Aspects of perception.:Gravity.\n\nTo orient themselves correctly, plants must have an adequate sense of the direction of gravity’s unidirectional pull. The subsequent response is known as gravitropism. In roots, this typically works as gravity is sensed and translated in the root tip, and subsequently roots grow towards gravity via elongation of the cells. In shoots, similar effects occur, but gravity is perceived and then growth occurs in the opposite direction, as the aboveground part of the plant experiences negative gravitropism.\n", "Plants can also interact with one another in their environment through their root systems. Studies have demonstrated that plant-plant interaction occurs among root systems via the soil as a medium. For instance, Novoplanksy and his students at Ben-Gurion University in Israel tested whether plants growing in ambient conditions would change their behavior if a nearby plant was exposed to drought conditions. His team wondered if plants can communicate to their neighbors of nearby stressful environmental conditions.\n", "The correct environment of air, mineral nutrients and water directs plant roots to grow in any direction to meet the plant's needs. Roots will shy or shrink away from dry or other poor soil conditions.\n\nGravitropism directs roots to grow downward at germination, the growth mechanism of plants that also causes the shoot to grow upward.\n\nSection::::Shade Avoidance Root Response.\n", "Section::::Types.:Autochory.\n\nAutochorous plants disperse their seed without any help from an external vector, as a result this limits plants considerably as to the distance they can disperse their seed.\n\nTwo other types of autochory not described in detail here are blastochory, where the stem of the plant crawls along the ground to deposit its seed far from the base of the plant, and herpochory (the seed crawls by means of trichomes and changes in humidity).\n\nSection::::Types.:Autochory.:Gravity.\n", "Gravitropism is an integral part of plant growth, orienting its position to maximize contact with sunlight, as well as ensuring that the roots are growing in the correct direction. Growth due to gravitropism is mediated by changes in concentration of the plant hormone auxin within plant cells.\n", "Several experiments have been focused on how plant growth and distribution compares in micro-gravity, space conditions versus Earth conditions. This enables scientists to explore whether certain plant growth patterns are innate or environmentally driven. For instance, Allan H. Brown tested seedling movements aboard the Space Shuttle Columbia in 1983. Sunflower seedling movements were recorded while in orbit. They observed that the seedlings still experienced rotational growth and circumnation despite lack of gravity, showing these behaviors are built-in.\n", "Research of Arabidopsis has led to the discovery of how this auxin mediated root response works. In an attempt to discover the role that phytochrome plays in lateral root development, Salisbury et al. (2007) worked with \"Arabidopsis thaliana\" grown on agar plates. Salisbury et al. 
used wild type plants along with varying protein knockout and gene knockout Arabidopsis mutants to observe the results these mutations had on the root architecture, protein presence, and gene expression. To do this, Salisbury et al. used GFP fluorescence along with other forms of both macro and microscopic imagery to observe any changes various mutations caused. From these research, Salisbury et al. were able to theorize that shoot located phytochromes alter auxin levels in roots, controlling lateral root development and overall root architecture. In the experiments of van Gelderen et al. (2018), they wanted to see if and how it is that the shoot of \"Arabidopsis thaliana\" alters and affects root development and root architecture. To do this, they took Arabidopsis plants, grew them in agar gel, and exposed the roots and shoots to separate sources of light. From here, they altered the different wavelengths of light the shoot and root of the plants were receiving and recorded the lateral root density, amount of lateral roots, and the general architecture of the lateral roots. To identify the function of specific photoreceptors, proteins, genes, and hormones, they utilized various Arabidopsis knockout mutants and observed the resulting changes in lateral roots architecture. Through their observations and various experiments, van Gelderen et al. were able to develop a mechanism for how root detection of Red to Far-red light ratios alter lateral root development.\n", "At the root tip, amyloplasts containing starch granules fall in the direction of gravity. This weight activates secondary receptors, which signal to the plant the direction of the gravitational pull. After this occurs, auxin is redistributed through polar auxin transport and differential growth towards gravity begins. In the shoots, auxin redistribution occurs in a way to produce differential growth away from gravity.\n", "Root growth occurs by division of stem cells in the root meristem located in the tip of the root, and the subsequent asymmetric expansion of cells in a shoot-ward region to the tip known as the elongation zone. Differential growth during tropisms mainly involves changes in cell expansion versus changes in cell division, although a role for cell division in tropic growth has not been formally ruled out. Gravity is sensed in the root tip and this information must then be relayed to the elongation zone so as to maintain growth direction and mount effective growth responses to changes in orientation to and continue to grow its roots in the same direction as gravity.\n", "In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water, (hydrotropism), and physical contact (thigmotropism).\n", "Several experiments have been focused on how plant growth and distribution compares in micro-gravity, space conditions versus Earth conditions. This enables scientists to explore whether certain plant growth patterns are innate or environmentally driven. For instance, Allan H. Brown tested seedling movements aboard the Space Shuttle Columbia in 1983. 
Sunflower seedling movements were recorded while in orbit. They observed that the seedlings still experienced rotational growth and circumnation despite lack of gravity, showing these behaviors are instinctual.\n", "For perception to occur, the plant often must be able to sense, perceive, and translate the direction of gravity. Without gravity, proper orientation will not occur and the plant will not effectively grow. The root will not be able to uptake nutrients or water, and the shoot will not grow towards the sky to maximize photosynthesis.\n\nSection::::Plant intelligence.\n", "As plants mature, gravitropism continues to guide growth and development along with phototropism. While amyloplasts continue to guide plants in the right direction, plant organs and function rely on phototropic responses to ensure that the leaves are receiving enough light to perform basic functions such as photosynthesis. In complete darkness, mature plants have little to no sense of gravity, unlike seedlings that can still orient themselves to have the shoots grow upward until light is reached when development can begin.\n", "There is a correlation of roots using the process of plant perception to sense their physical environment to grow, including the sensing of light, and physical barriers. Plants also sense gravity and respond through auxin pathways, resulting in gravitropism. Over time, roots can crack foundations, snap water lines, and lift sidewalks. Research has shown that roots have ability to recognize 'self' and 'non-self' roots in same soil environment.\n", "Section::::Mechanisms of damage.\n", "In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem bends to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water, (hydrotropism), and physical contact (thigmotropism).\n", "Meristems may also be induced in the roots of legumes such as soybean, \"Lotus japonicus\", pea, and \"Medicago truncatula\" after infection with soil bacteria commonly called Rhizobia. Cells of the inner or outer cortex in the so-called \"window of nodulation\" just behind the developing root tip are induced to divide. The critical signal substance is the lipo-oligosaccharide Nod factor, decorated with side groups to allow specificity of interaction. The Nod factor receptor proteins NFR1 and NFR5 were cloned from several legumes including \"Lotus japonicus\", \"Medicago truncatula\" and soybean (\"Glycine max\"). Regulation of nodule meristems utilizes long-distance regulation known as the autoregulation of nodulation (AON). This process involves a leaf-vascular tissue located LRR receptor kinases (LjHAR1, GmNARK and MtSUNN), CLE peptide signalling, and KAPP interaction, similar to that seen in the CLV1,2,3 system. LjKLAVIER also exhibits a nodule regulation phenotype though it is not yet known how this relates to the other AON receptor kinases.\n", "The plant hormone auxin has also been implicated in this process, with the new primordium being initiated at the placenta, where the auxin concentration is highest. There is still much to understand about the genes involved in primordium development.\n", "In order to avoid shade, plants utilize a shade avoidance response. 
When a plant is under dense vegetation, the presence of other vegetation nearby will cause the plant to avoid lateral growth and experience an increase in upward shoot, as well as downward root growth. In order to escape shade, plants adjust their root architecture, most notably by decreasing the length and amount of lateral roots emerging from the primary root. Experimentation of mutant variants of Arabidospis thaliana found that plants sense the Red to Far Red light ratio that enters the plant through photoreceptors known as phytochromes. Nearby plant leaves will absorb red light and reflect far- red light which will cause the ratio red to far red light to lower. The phytochrome PhyA that senses this Red to Far Red light ratio is localized in both the root system as well as the shoot system of plants, but through knockout mutant experimentation, it was found that root localized PhyA does not sense the light ratio, whether directly or axially, that leads to changes in the lateral root architecture. Research instead found that shoot localized PhyA is the phytochrome responsible for causing these architectural changes of the lateral root. Research has also found that phytochrome completes these architectural changes through the manipulation of auxin distribution in the root of the plant. When a low enough Red to Far Red ratio is sensed by PhyA, the phyA in the shoot will be mostly in its active form. In this form, PhyA stabilize the transcription factor HY5 causing it to no longer be degraded as it is when phyA is in its inactive form. This stabilized transcription factor is then able to be transported to the roots of the plant through the phloem, where it proceeds to induce its own transcription as a way to amplify its signal. In the roots of the plant HY5 functions to inhibit an auxin response factor known as ARF19, a response factor responsible for the translation of PIN3 and LAX3, two well known auxin transporting proteins. Thus, through manipulation of ARF19, the level and activity of auxin transporters PIN3 and LAX3 is inhibited. Once inhibited, auxin levels will be low in areas where lateral root emergence normally occurs, resulting in a failure for the plant to have the emergence of the lateral root primordium through the root pericycle. With this complex manipulation of Auxin transport in the roots, lateral root emergence will be inhibited in the roots and the root will instead elongate downwards, promoting vertical plant growth in an attempt to avoid shade.\n", "In the first round of breeding for horizontal resistance, plants are exposed to pathogens and selected for partial resistance. Those with no resistance die, and plants unaffected by the pathogen have vertical resistance and are removed. The remaining plants have partial resistance and their seed is stored and bred back up to sufficient volume for further testing. The hope is that in these remaining plants are multiple types of partial-resistance genes, and by crossbreeding this pool back on itself, multiple partial resistance genes will come together and provide resistance to a larger variety of pathogens.\n", "Most understood the root system is extremely important to its overall performance once planted out. and have tried changing container designs. One of the first designs was simply using an open-bottomed, waxed cardboard milk carton container. The results were promising. 
When the taproot eventually reached the base, it would become exposed to air, dehydrate and die at the tip, stimulating roots to branch behind this point, much like pruning a hedge. However, all roots were forced downward so there was still plenty of room for improvement to gain side branching.\n", "The signal for root growth, in this case, is varying water potential in a plant’s soil environment; the response is differential growth towards higher water potentials. Plants sense water potential gradients in their root cap and bend in the midsection of the root towards that signal. In this way, plants can identify where to go in order to get water. Other stimuli such as gravity, pressure, and vibrations also help plants choreograph root growth towards water acquisition to adapt to varying amounts of water in a plant’s soil environment for use in metabolism. Thus far, these interactions between signals have not been studied in great depth, leaving potential for future research.\n", "Section::::Seedling response.:Pathway.\n", "Plant cells are fixed with regards to their neighbor cells within the tissues they are growing in. In contrast to animals where certain cells can migrate within the embryo to form new tissues, the seedlings of higher plants grow entirely based on the orientation of cell division and subsequent elongation and differentiation of cells within their cell walls. Therefore, the accurate control of cell division planes and placement of the future cell wall in plant cells is crucial for the correct architecture of plant tissues and organs.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-16825
How do stores get paid for winning lottery tickets? I won 20 dollars on a 5 dollar ticket, and the guy ripped it up and threw it away and gave me 20 from the register.
It's computer based. The clerk scans in the winning ticket's code and the payout gets credited to the store. We threw them away a decade ago too. I always expected them to be kept for auditing, but I guess not. The rest, I can't answer.
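The answer above says the process is computer based: the terminal scans the winning ticket's code and the store is credited. The sketch below is purely hypothetical; the data structures and function name are invented to illustrate the idea of central validation plus retailer credit, and do not describe any real lottery terminal or its API.

# Purely hypothetical sketch of a retailer cashing a winning ticket.
# Illustrates central validation and retailer credit only.

winning_tickets = {            # code -> prize, as a central system might store it
    "A1B2-C3D4": 20.00,
}
redeemed = set()               # codes that have already been paid out
retailer_credit = {"store_42": 0.0}

def redeem(code, store_id):
    if code in redeemed:
        raise ValueError("ticket already cashed")
    prize = winning_tickets.get(code, 0.0)
    if prize:
        redeemed.add(code)
        retailer_credit[store_id] += prize   # store is reimbursed for the cash it paid out
    return prize

print(redeem("A1B2-C3D4", "store_42"))   # 20.0 paid from the register, credited back
print(retailer_credit)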
[ "There have also been several cases of cashiers at lottery retailers who have attempted to scam customers out of their winnings. Some locations require the patron to hand the lottery ticket to the cashier to determine how much they have won, or if they have won at all, the cashier then scans the ticket to determine one or both. In cases where there is no visible or audible cue to the patron of the outcome of the scan some cashiers have taken the opportunity to claim that the ticket is a loser or that it is worth far less than it is and offer to \"throw it away\" or surreptitiously substitute it for another ticket. The cashier then pockets the ticket and eventually claims it as their own.\n", "Prizes under $1000 can be collected directly from a retailer that has a lottery terminal in store. This is subject to cash availability. People can collect bigger prizes by visiting an OLG Casino or Slot facility. This can be done by mailing the ticket to the OLG Prize Centre or by visiting the OLG Prize Centre in Toronto. When claiming the prize at the OLG Prize Centre, the prizewinner must have valid government identification as well as providing a signature. The ticket will be double checked in case of fraud. \n", "More than 2,700 businesses in Louisiana earn a 5 percent commission on the sale of Lottery products as licensed retailers. In addition to revenue from commission, retailers earn an incentive of up 2 percent for cashing winning tickets up to $600. Retailers are also paid a selling bonus of up to 1 percent on the sale of top-prize winning tickets for Lotto, Easy 5, or Powerball (1% of Louisiana's contribution to the jackpot's cash value; for Powerball, a minimum bonus of $25,000). Retailer commission and incentives totaled $19.7 million in fiscal year 2009.\n", "For contestants to appear on the show, they had to purchase an \"Illinois Instant Riches\"/\"Illinois' Luckiest\" scratch-off ticket from an Illinois Lottery retailer. Common for the lottery game shows of the 90s, if they uncovered three television set symbols on the ticket, the ticket was sent into a submission address, or redeemed physically at the nearest lottery office.\n\nPlayers were randomly chosen from those tickets to be in the show's contestant pool, but only a certain number of them would be selected to play an on-stage game.\n", "Lottery tickets are bearer instruments, which means that the Lottery must pay the holder of a winning ticket presented for payment. Signing the back of the ticket is the single most important thing one can do to help protect themselves from theft and demonstrate ownership of the ticket. Any alteration to a winning ticket worth more than $600 is cause for an immediate security investigation. Once a winning ticket has been paid, however, it is much more difficult to determine whether another individual was the rightful owner. By law, the Lottery can pay a winning ticket only once.\n\nSection::::Lottery games.\n", "As of January 2013, the Florida Lottery had 13,200 retail stores selling products. Each retailer earns 5% of their ticket sales and 1% on cashed tickets. They also receive incentives for a top winning ticket sold in the big games. A $100,000 bonus is given to the store for a winning Powerball ticket. Publix is the largest retailer in the state, accounting for 18% of all lottery sales.\n", "Each of these corporations operate a regional add-on games that, for an extra $1 each, can be added to a 6/49 ticket. 
This \"spiel\" game (named \"Tag\", \"Encore\" or \"Extra\" depending on the region), adds a 6- or 7-digit number to the ticket with a top prize of $100,000 if all six digits are matched or $250,000 to $1,000,000 depending on the region for a seven-number match ($1,000,000 in Ontario and Quebec; $250,000 in the Western Canada region of Alberta, Saskatchewan, Manitoba and the territories).\n", "All non-cash prizes and cash prizes over $1,000 can be claimed at any Lotto outlet by presenting and surrendering the ticket. The bearer must complete a Prize Claim form, which is sent along with the ticket to Lotto NZ in Auckland for the claim of the prize.\n", "Companies using this model are not required to purchase tickets from official lottery operators. Instead, when a player places a wager on a lottery, the company then forwards this bet to a third-party insurance company. The betting company pays a set fee for every wager placed to the insurance company in order to offset the risk of a large lottery prize being won. If a player wins a large prize (such as a jackpot), the insurance company pays the betting company, who subsequently give the money to the winning player. In the event of small prize wins, the betting firm will typically pay the prize directly to the player from their own funds.\n", "Drawings for Louisiana-based games are conducted at Lottery headquarters in downtown Baton Rouge. They are videotaped and conducted in a special room secured by alarms and motion detectors. Each drawing is conducted using one of two secure automated drawing machines. Automated drawing machines are stand-alone computers that are essentially random number generators that are completely separate from the system that generates tickets, so the number of winners and where the winning tickets were sold is not known until after the drawing has occurred.\n", "Potential contestants purchased a \"$100,000 Fortune Hunt\" scratch-off ticket from an Illinois Lottery retailer. To play the \"$100,000 Fortune Hunt\" players had to rub off the play area on the lottery instant ticket. If three matching prize amounts were revealed, the player won the prize shown–such as a free ticket or up to $100. If three TV symbols appeared, players could submit the ticket to the lottery for a preliminary drawing. This drawing was held every week in Springfield.\n", "Each Mega Gem, depending on the type of game, as already mentioned, is operated by automation. The Mega Gem loads the balls from the loading bays to the draw chamber, after which the blower starts to mix the balls. In the number lottery games (excluding the Power Lotto), the machine draws six numbers one-by-one and is inserted into the inner left loading bay. In machines used in the EZ2 Lotto and the digit lottery games, each number/digit in the combination is drawn from its own chamber. Once a ball is drawn, it is locked into place by slats placed over the pipe leading from the drawing chamber. Once the necessary number of balls has been picked, the Mega Gem is turned off.\n", "Some businesses, rather than refunding the fee paid, provide something else in kind to distance themselves from being a lottery. In the New Zealand case \"Department of Internal Affairs v Hayes\" [2007], customers offered bids costing 99 cents for the chance to win a Peugot car. The company offered Pizza Hut discount coupons to the bidders. 
Although customers received an item of value, the bids were sent for the purpose of winning a car, and the refund was not identical to what had been offered, and was held to be a lottery.\n", "Section::::Play.\n\nSection::::Play.:Buying a lottery ticket.\n\nLottery tickets are sold to national wholesalers who then sell them on to local retailers. Tickets are available only from retail agents. Lottery vendors can be found roaming markets, streets, and villages carrying their signature slim wooden ticket briefcases. There are also lottery ticket stands outside shops such as Tesco Lotus and Big C. Lottery tickets come in \"ticket-pairs\". The official cost of a single ticket is 40 baht, but lottery tickets can only be purchased in ticket-pairs, making the official retail price 80 baht.\n", "A player paid $1 (or $2 for the \"Sizzler\" option) and picked five numbers from 1 through 47, plus one additional number (the “Hot Ball”) from 1 through 19 drawn from a second, separate pool, or asked for terminal-selected numbers, known by various lotteries as \"easy pick\", \"quick-pick\", etc., for the five white numbers, the \"Hot Ball\", or all six. (The \"Hot Ball\" could duplicate one of the five \"white\" numbers.)\n\nSection::::Rules.:\"Sizzler\" option.\n", "BULLET::::- 2. Non-jackpot prizes under $510,000 can be claimed at any Lottery office.\n\nBULLET::::- 3. All prizes can be claimed at Lottery headquarters in Baton Rouge. Powerball prizes over $510,000 and Lotto jackpots must be claimed at Headquarters.\n\nBULLET::::- 4. Prizes can also be claimed by signing the winning ticket and mailing it, along with a claim form, to:\n", "Section::::Current in-house games.:Lotto.:Lotto Extra Shot.\n\nIn November 2012, Illinois introduced an add-on to Lotto, called Lotto Extra Shot. While regular Lotto plays are $1 per game, Lotto Extra Shot plays are $2. An LES purchase adds a \"quick picked\" number from 1 through 25 for each play. Matching the Extra Shot number increases the payout received.\n", "Section::::Operations.:Distribution of monetary funds.\n\nMore than half of Lottery sales are reserved for prize expenses. Prizes not claimed are returned to winners in the form of increased payouts on scratch-off tickets. Players have won more than $2.8 billion in Lottery prizes since the Lottery's inception.\n", "The fake check technique described above is also used. Fake or stolen checks, representing a part payment of the winnings, being sent; then a fee, smaller than the amount received, is requested. The bank receiving the bad check eventually reclaims the funds from the victim.\n\nIn 2004, a variant of the lottery scam appeared in the United States: a scammer phones a victim purporting to be speaking on behalf of the government about a grant they qualify for, subject to an advance fee of typically US$250.\n\nSection::::Variants.:Online sales and rentals.\n", "When filing a standard claim form, the claimant, the retailer, and the Pennsylvania Lottery each receive a copy (the form is triplicate). The Lottery then reports all winnings to the IRS. For federal income tax purposes, any lottery winnings over $2,500 in a fiscal year are taxable. However, when the winning amount is greater than $5,000, the Pennsylvania Department of Revenue withholds the proper amount of federal income tax before a check is mailed to the claimant. Pennsylvania Lottery winnings by Pennsylvania residents are exempt from state tax; however, winnings may be subject to local taxes for residents of some municipalities (e.g. 
Philadelphia).\n", "The Kansas Lottery Act requires that a minimum of 45 percent of total sales be paid back to the players through the prize fund. In fiscal year 2009 (July 1, 2008 through June 30, 2009), the Kansas Lottery paid out 56 percent in prizes. The State Gaming Revenues Fund received 29 percent of ticket sales; cost of sales was 4 percent (which covers online vendor fees, telecommunications costs and instant ticket printing); 6 percent was paid to Lottery retailers for commissions and bonuses; and 5 percent covered administrative expenses (salaries, advertising, depreciation, professional services and other administrative expenses.)\n", "Betting on the outcome of lottery draws is the most common form of lottery betting. This follows the same format as purchasing online lottery tickets in that players follow the same ruleset as found on the official lottery draw. Typically, this means that players choose the same amount of numbers and win the same prizes if they match these numbers, as they would have if purchasing an official ticket. The cost of betting on a lottery can differ from the cost of purchasing an official lottery ticket.\n\nSection::::Types of bets.:Number betting.\n", "The classic lottery is a drawing in which each contestant buys a combination of numbers. Each combination of numbers, or \"play\", is usually priced at $1. Plays are usually non-exclusive, meaning that two or more ticket holders may buy the same combination. The lottery organization then draws the winning combination of 5-8 numbers, usually from 1 to 50, using a randomized, automatic ball tumbler machine.\n", "Individuals who are at least 21 can give Lottery tickets to a person under 21 as a gift, although minors must be accompanied by a legal guardian or a family member who is at least 21 to claim a Lottery prize. Underage people can sell Lottery tickets if they meet the minimum employment age of 14, and are employed by a licensed Lottery retailer.\n", "BULLET::::- The scammer will ask the victims to pay a fee in advance to receive their prize. All genuine lotteries simply subtract any fees and tax from the prize. Regardless of what the scammer claims this fee is for (such as courier charges, bank charges, or various imaginary certificates), these are all fabricated by the scammer to obtain money from victims.\n\nBULLET::::- Scam lottery emails will nearly always come from free email accounts such as Outlook, Yahoo!, Hotmail, Live, MSN, Gmail etc.\n" ]
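The passages above also give concrete retailer-compensation percentages (for example, a 5 percent commission on ticket sales plus roughly a 1-2 percent incentive on winning tickets cashed in-store). A small sketch of that arithmetic, with invented monthly figures:

# Retailer earnings sketch using the percentages quoted in the passages above
# (5% commission on ticket sales, plus an incentive of roughly 1-2% of the
# value of winning tickets cashed in-store). Monthly figures are invented.
ticket_sales = 25_000.00        # hypothetical monthly lottery sales
winners_cashed = 4_000.00       # hypothetical value of winning tickets paid out in-store

commission = 0.05 * ticket_sales
cashing_incentive = 0.01 * winners_cashed   # using the 1% figure; some states pay up to 2%

print(f"commission:        ${commission:,.2f}")
print(f"cashing incentive: ${cashing_incentive:,.2f}")
print(f"total:             ${commission + cashing_incentive:,.2f}")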
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02229
Why do penises not get fat cells like stomachs?
Fat cells belong to a certain class of tissue called the mesoderm. This tissue goes on to make up all of our connective tissues including our musculoskeletal system. Fat cells can be distributed anywhere within this tissue classification from the medullary cavity (middle part) of your long bones to in between your muscle cells. Fat cells are NOT distributed within the ectoderm or the endoderm layers of the body. These are the other 2 primordial germ cell layers that distribute from that original tiny cluster of cells in mama's uterus. Now, have a look at the [cross section of a penis]( URL_0 ). The bulk of the penis is made up by the corpus cavernosum, which is the chamber that fills with blood to make penile erection possible. Of that tissue, the majority is endothelial cells. In between the endothelial cells is scant connective tissue, but not enough for fat cell deposition. The outer tissue is similarly "space challenged" for fat cell deposition, as it's primarily outer keratinized epithelium, vessels and nerves. I'm happy to explain any of the terminology I used. :) Feel free to ask if you have any questions. ***EDIT: So there's some questions about fingers, heads, ears, nose, etc.*** Let's start with fat's normal purpose. So there's 2 kinds of adipose (that's the fancy word for fat) deposition: physiological (normal and/or healthy) and pathological (as a result of a disease state). Fat cells are present for energy storage as well as for structural support. The distribution of said cells is determined largely by our DNA (as well as our eating habits :)). There are strategic depositions of fat in areas that need cushioning, areas with higher metabolic demands and in areas with excess space that needs to be filled (nature abhors a vacuum as they say). There is variation in our individual coding that results in physiologic differences. For instance, some women have large prominent (natural) breasts. This is a result of biological selection for increased adipose in the breast tissue to give the impression of fertility. This is independent of their ability to generate milk as the mammary gland tissue (the part that actually makes milk) is wholly dependent upon reproductive hormones. However, this is all normal variation. Obesity from over-conditioning (over-eating) lies somewhere in between a physiological and pathological state. There are areas in the body that are specifically prone for fat deposition (abdominal wall, thighs, upper arms). What do all these things have in common? They're the areas of the body with the largest amount of stroma. Stroma is the connective tissue that holds all your bits together. Think of it like tissue glue. The more stroma you have, the more adipose precursor cells you already have there, lying dormant in your tissues and waiting to be filled up with those juicy little morsels of fatty acids circulating in your bloodstream. Once those adipose cells start to plump up, they send signals for more adipose cells to be generated. The amount of fat deposition in various parts of the body are directly proportional to the amount of stroma already present in that body part. Additionally, some areas are more anatomically convenient for stretching and therefore can "create" more room for more stroma and more fat storage. 
The stroma around your nose, your ears, and the top of your head is thin, with relatively little room for stretching, because A) there is little to no muscle in these parts and B) the structure under the skin is either bone or cartilage, neither of which is prone to fat deposits outside of a pathological setting. There are some diseases that can cause fat deposition in organs or in arteries, but that's a slightly different topic.
[ "The non-motile spermatozoa are transported to the epididymis in \"testicular fluid\" secreted by the Sertoli cells with the aid of peristaltic contraction. While in the epididymis the spermatozoa gain motility and become capable of fertilization. However, transport of the mature spermatozoa through the remainder of the male reproductive system is achieved via muscle contraction rather than the spermatozoon's recently acquired motility.\n\nSection::::Role of Sertoli cells.\n", "Section::::Immune cells found in the testis.:B-Lymphocytes.\n\nB-lymphocytes take part in the adaptive immune response and produce antibodies. These cells are not normally found in the testis, even during inflammatory conditions. The lack of B-lymphocytes in the testis is significant, since these are the antibody-producing cells of the immune system. Since anti-sperm antibodies can cause infertility, it is important that antibody-producing B-lymphocytes are kept separated from the testis.\n\nSection::::Immune cells found in the testis.:T-lymphocytes.\n", "Motile sperm cells typically move via flagella and require a water medium in order to swim toward the egg for fertilization. In animals most of the energy for sperm motility is derived from the metabolism of fructose carried in the seminal fluid. This takes place in the mitochondria located in the sperm's midpiece (at the base of the sperm head). These cells cannot swim backwards due to the nature of their propulsion. The uniflagellated sperm cells (with one flagellum) of animals are referred to as spermatozoa, and are known to vary in size.\n", "Adipose tissue and lactating mammary glands also take up glucose from the blood for conversion into triglycerides. This occurs in the same way as it does in the liver, except that these tissues do not release the triglycerides thus produced as VLDL into the blood. Adipose tissue cells store the triglycerides in their fat droplets, ultimately to release them again as free fatty acids and glycerol into the blood (as described above), when the plasma concentration of insulin is low, and that of glucagon and/or epinephrine is high. Mammary glands discharge the fat (as cream fat droplets) into the milk that they produce under the influence of the anterior pituitary hormone prolactin.\n", "Under microscopy, the seminal vesicles can be seen to have a mucosa, consisting of a lining of interspersed columnar cells and a lamina propria; and a thick muscular wall. The lumen of the glands is highly irregular and stores secretions from the glands of the vesicles. In detail:\n\nBULLET::::- The epithelium is pseudostratified columnar in character, similar to other tissues in the male reproductive system.\n\nThe height of these columnar cells, and therefore activity, is dependent upon testosterone levels in the blood.\n", "At all stages of differentiation, the spermatogenic cells are in close contact with Sertoli cells which are thought to provide structural and metabolic support to the developing sperm cells. 
A single Sertoli cell extends from the basement membrane to the lumen of the seminiferous tubule, although the cytoplasmic processes are difficult to distinguish at the light microscopic level.\n\nSertoli cells serve a number of functions during spermatogenesis, they support the developing gametes in the following ways:\n\nBULLET::::- Maintain the environment necessary for development and maturation, via the blood-testis barrier\n\nBULLET::::- Secrete substances initiating meiosis\n\nBULLET::::- Secrete supporting testicular fluid\n", "In placental mammals, fertilization typically occurs inside the female in the oviducts. The oviducts are positioned near the ovaries where ova are produced. An ovum therefore needs only to travel a short distance to the oviducts for fertilization. In contrast sperm cells must be highly motile, since they are deposited into the female reproductive tract during sexual intercourse and must travel through the cervix (in some species) as well as the uterus and the oviduct (in all species) to reach an ovum. Sperm cells that are motile are spermatozoa.\n", "The best single biochemical marker for polycystic ovary syndrome is a raised testosterone level, but \"combination of SHBG and testosterone to derive a free testosterone value did not further aid the biochemical diagnosis of PCOS\". Instead SHBG is reduced in obesity and so the FAI seems more correlated with the degree of obesity than with PCOS itself.\n", "Postovulatory follicles are structures formed after oocyte release; they do not have endocrine function, present a wide irregular lumen, and are rapidly reabsorbed in a process involving the apoptosis of follicular cells. A degenerative process called follicular atresia reabsorbs vitellogenic oocytes not spawned. This process can also occur, but less frequently, in oocytes in other development stages.\n\nSome fish are hermaphrodites, having both testes and ovaries either at different phases in their life cycle or, as in hamlets, have them simultaneously.\n", "The hormones involved in the reproductive system are negatively affected with an increase of weight. In humans, via white adipocytes (fat cells), production of the hormone leptin (an adipokine) acts on the hypothalamus where reproductive hormone Gonadotrophin-releasing hormone (GnRH) is produced. Leptin is also a product of the \"obese\" gene. Leptins interaction with the hypothalamus decreases appetite, therefore a mutation in the \"obese\" gene would result in an increased appetite, leading to inevitable obesity. Leptin has been found to be linked to the HPG axis as it can induce the release of GnRH by the hypothalamus and subsequently follicle stimulating hormone (FSH) and leutinising hormone (LH) by the anterior pituitary. Pre-pubertal individuals that lack leptin fail to reach the pubertal stage. If given leptin administratively, the mutation would be reversed and puberty resumed. Leptin is further expressed in mature follicles produced by the ovary, suggesting it plays a role in oocyte maturation, hence embryo development.\n", "Clinical studies have repeatedly shown that even though insulin resistance is usually associated with obesity, the membrane phospholipids of the adipocytes of obese patients generally still show an increased degree of fatty acid unsaturation. 
This seems to point to an adaptive mechanism that allows the adipocyte to maintain its functionality, despite the increased storage demands associated with obesity and insulin resistance.\n", "Sertoli cells were also exploited in experiments for their immunosuppressive function. They were used to protect and nurture islets producing insulin to treat type I diabetes. The exploitation of Sertoli cells significantly increased the survival of transplanted islets. However, more experiments must be conducted before this method may be tested in human medicine as part of clinical trials. In another study on type II diabetic and obese mice, the transplantation of microencapsulated Sertoli cells in the subcutaneous abdominal fat depot lead to the return of normal glucose levels in 60% of the animals.\n\nSection::::History of research.\n", "The dwarf sperm whale is an open ocean predator. The stomach contents of stranded dwarf sperm whales comprise mainly squid and, to a lesser degree, deep sea fish (from the mesopelagic and bathypelagic zones) and crustaceans. However, crustaceans make up a sizable part of the diets of Hawaiian dwarf sperm whales, up to 15%. The stomach contents of whales washed up in different regions of the world indicate a preference for cock-eyed squid and glass squid across its range, particularly the elongate jewel squid (\"Histioteuthis reversa\") and \"Taonius\". \n", "Without estrogen, the vaginal epithelium is only a few layers thick. Only small round cells are seen that originate directly from the basal layer (basal cells) or the cell layers (parabasal cells) above it. The parabasal cells, which are slightly larger than the basal cells, form a five- to ten-layer cell layer. The parabasal cells can also differentiate into histiocytes or glandular cells. Estrogen also influences the changing ratios of nuclear constituents to cytoplasm. As a result of cell aging, cells with shrunken, seemingly foamy cell nuclei (intermediary cells ) develop from the parabasal cells. These can be categorized by means of the nuclear-plasma relation into \"upper\" and \"deep\" intermediate cells. Intermediate cells make abundant glycogen and store it. The further nuclear shrinkage and formation of mucopolysaccharides are distinct characteristics of superficial cells. The mucopolysaccharides form a keratin-like cell scaffold. Fully keratinized cells without a nucleus are called 'floes'. Intermediate and superficial cells are constantly exfoliated from the epithelium. The glycogen from these cells is converted to sugars and then fermented by the bacteria of the vaginal flora to lactic acid. The cells progress through the cell cycle and then decompose (cytolysis) within a week's time. Cytolysis occurs only in the presence of glycogen-containing cells, that is, when the epithelium is degraded to the upper intermediate cells and superficial cells. In this way, the cytoplasm is dissolved, while the cell nuclei remain.\n", "However, other subcutaneous fat tissues also might contribute to metabolic disease, if the fat cells become too enlarged and \"sick.\" Admittedly, subcutaneous fat cells typically are larger, and capable of storing more fat when needed. However, subcutaneous fat tissue represents the largest proportion of fat tissue in the body, and is the major source of leptin.\n", "In contrast to most eukaryotic cells, mature sperm cells largely use protamines to package their genomic DNA, most likely to achieve an even higher packaging ratio. 
Histone equivalents and a simplified chromatin structure have also been found in Archea, suggesting that eukaryotes are not the only organisms that use nucleosomes.\n\nSection::::Structure.\n\nSection::::Structure.:Structure of the core particle.\n\nSection::::Structure.:Structure of the core particle.:Overview.\n", "In cartilaginous fishes, the part of the archinephric duct closest to the testis is coiled up to form an epididymis. Below this are a number of small glands secreting components of the seminal fluid. The final portion of the duct also receives ducts from the kidneys in most species.\n\nIn amniotes, however, the archinephric duct has become a true vas deferens, and is used only for conducting sperm, never urine. As in cartilaginous fish, the upper part of the duct forms the epididymis. In many species, the vas deferens ends in a small sac for storing sperm.\n", "The use of vagotomy to treat obesity is now being studied. The vagus nerve provides efferent nervous signals out from the hunger and satiety centers of the hypothalamus, a region of the brain central to the regulation of food intake and energy expenditure. The circuit begins with an area of the hypothalamus, the arcuate nucleus, that has outputs to the lateral hypothalamus (LH) and ventromedial hypothalamus (VMH), the brain's feeding and satiety centers, respectively. Animals with lesioned VMH will gain weight even in the face of severe restrictions imposed on their food intake, because they no longer provide the signaling needed to turn off energy storage and facilitate energy burning. In humans, the VMH is sometimes injured by ongoing treatment for acute lymphoblastic leukemia or surgery or radiation to treat posterior cranial fossa tumors. With the VMH disabled and no longer responding to peripheral energy balance signals, \"[e]fferent sympathetic activity drops, resulting in malaise and reduced energy expenditure, and vagal activity increases, resulting in increased insulin secretion and adipogenesis.\" \"VMH dysfunction promotes excessive caloric intake and decreased caloric expenditure, leading to continuous and unrelenting weight gain. Attempts at caloric restriction or pharmacotherapy with adrenergic or serotonergic agents have previously met with little or only brief success in treating this syndrome.\" The vagus nerve is thought to be one key mediator of these effects, as lesions lead to chronic elevations in insulin secretion, promoting energy storage in adipocytes. \n", "Cattle that are over-conditioned are also more insulin resistant compared to their leaner counterparts. As demonstrated in mice, insulin resistance is a factor in poor fertility as it has an effect on oocyte development. This in turn means that less oocytes are suitable for fertilisation and fertility is impaired.\n", "Erectile dysfunction from vascular disease is usually seen only amongst elderly individuals who have atherosclerosis. Vascular disease is common in individuals who have diabetes, peripheral vascular disease, hypertension and those who smoke. Any time blood flow to the penis is impaired, erectile dysfunction is the end result.\n", "Halata & Spathe (1997) reported; \"the glans penis contains a predominance of free nerve endings, numerous genital end bulbs and rarely Pacinian and Ruffinian corpuscles. Merkel nerve endings and Meissner's corpuscles are not present.\"\n", "Faty started to lose his place in the team following the arrival of Ghanaian John Mensah early in 2006. 
The departure of coach László Bölöni to manage AS Monaco did not help his claim either. Pierre Dréossi came in to fill in the vacant coach's position and used Grégory Bourillon and Mensah as the main central defence partnership for the 2006–07 season.\n", "Another reason for decrease in fertility is to do with leptin. Leptin is a hormone which production is increased in obese animals. In cows, leptin can inhibit thecal cells from producing adrostenediol and progesterone. Androstenediol is important in fertility as it is the precursor to oestrogen. Without oestrogen production, the balance of hormones is affected and there is no LH surge with is required for ovulation.\n\nSection::::Animals.:Domestic Fowl.\n", "On November 15, 2016, the American Medical Association (AMA) passed policy that \"The use of human chorionic gonadotropin (HCG) for weight loss is inappropriate.\"\n\nAccording to the American Society of Bariatric Physicians, no new clinical trials have been published since the definitive 1995 meta-analysis.\n\nThe scientific consensus is that any weight loss reported by individuals on an \"HCG diet\" may be attributed entirely to the fact that such diets prescribe calorie intake of between 500 and 1,000 calories per day, substantially below recommended levels for an adult, to the point that this may risk health effects associated with malnutrition.\n", "At this point, the fats are in the bloodstream in the form of chylomicrons. Once in the blood, chylomicrons are subject to delipidation by lipoprotein lipase. Eventually, enough lipid has been lost and additional apolipoproteins gained, that the resulting particle (now referred to as a chylomicron remnant) can be taken up by the liver. From the liver, the fat released from chylomicron remnants can be re-exported to the blood as the triglyceride component of very low-density lipoproteins. Very low-density lipoproteins are also subject to delipidation by vascular lipoprotein lipase, and deliver fats to tissues throughout the body. In particular, the released fatty acids can be stored in adipose cells as triglycerides. As triglycerides are lost from very low-density lipoproteins, the lipoprotein particles become smaller and denser (since protein is denser than lipid) and ultimately become low-density lipoproteins. A great deal has been written about low-density lipoproteins because they are thought to be atherogenic.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03866
How could an electric aircraft hope to achieve the thrust of a kerosene jet?
A portion of a turbojet's power does come from the heated gases creating extra exhaust pressure, but a good portion comes from the engine's very wide mouth sucking in a lot of air, which is then compressed and shot out the back at higher speed. In principle, electric motors can provide enough power to move air through the engine the same way the kerosene turbine does; look at cars, where electric motors are smaller than combustion engines yet can provide the same power. So the issue wouldn't be with using electric motors instead of the kerosene turbine, it would be with creating and supplying the required watts of electricity TO the electric motor(s). 90,000 horsepower = 67,000 kilowatts, which is the output of a couple of small hydroelectric plants (a dam on a small river).
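To make the unit conversion at the end of the answer concrete, here is a minimal Python sketch. It is only an illustration: the 90,000 hp figure is the one quoted above, the conversion factor of roughly 745.7 W per mechanical horsepower is the standard one, and the function name is made up for the sketch.

```python
# Illustrative only: converts the quoted horsepower figure to kilowatts.
HP_TO_WATTS = 745.7  # watts per mechanical (imperial) horsepower

def hp_to_kw(horsepower: float) -> float:
    """Convert mechanical horsepower to kilowatts."""
    return horsepower * HP_TO_WATTS / 1000.0

if __name__ == "__main__":
    hp = 90_000  # figure quoted in the answer above
    print(f"{hp:,} hp is about {hp_to_kw(hp):,.0f} kW")  # ~67,113 kW, i.e. ~67 MW
```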
[ "Section::::Applications.:Airplanes.\n\nBoeing researchers and industry partners throughout Europe conducted experimental flight tests in February 2008 of a manned airplane powered only by a fuel cell and lightweight batteries. The Fuel Cell Demonstrator Airplane, as it was called, used a Proton Exchange Membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor, which was coupled to a conventional propeller.\n", "A fuel cell uses the reaction between two fluids such as hydrogen and oxygen to create electricity. Unlike a battery, the fluids are not stored in the battery but are drawn in from outside. This offers the prospect of much greater range than batteries and experimental examples have flown, but the technology has yet to reach production.\n\nSection::::Design.:Microwaves.\n", "Kerosene is used to fuel smaller-horsepower outboard motors built by Yamaha, Suzuki, and Tohatsu. Primarily used on small fishing craft, these are dual-fuel engines that start on gasoline and then transition to kerosene once the engine reaches optimum operating temperature. Multiple fuel Evinrude and Mercury Racing engines also burn kerosene, as well as jet fuel.\n", "The motor ran on a test dynamometer for 1,000 hours. The iron bird is a Caravan forward fuselage used as a test bed, with the usual PT6 turboprop engine replaced by an electric motor, inverter and a liquid-cooling system, including radiators, driving a Cessna 206 propeller.\n\nThe production motor will produce at 1,900 rpm, down from the test motor's 2,500 rpm, allowing the installation of the propeller without a reduction gearbox.\n", "On August 31, 2010, Boeing worked with the U.S. Air Force to test the Boeing C-17 running on 50% JP-8, 25% Hydro-treated Renewable Jet fuel and 25% of a Fischer–Tropsch fuel with successful results.\n\nSection::::Environmental record.:Electric propulsion.\n\nFor NASA's N+3 future airliner program, Boeing has determined that hybrid electric engine technology is by far the best choice for its subsonic design. Hybrid electric propulsion has the potential to shorten takeoff distance and reduce noise.\n\nSection::::Political contributions, federal contracts, advocacy.\n", "In 2003, the world's first propeller-driven airplane to be powered entirely by a fuel cell was flown. The fuel cell was a stack design that allowed the fuel cell to be integrated with the plane's aerodynamic surfaces. Fuel cell-powered unmanned aerial vehicles (UAV) include a Horizon fuel cell UAV that set the record distance flown for a small UAV in 2007. Boeing researchers and industry partners throughout Europe conducted experimental flight tests in February 2008 of a manned airplane powered only by a fuel cell and lightweight batteries. The fuel cell demonstrator airplane, as it was called, used a proton exchange membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor, which was coupled to a conventional propeller.\n", "In 2003, the world's first propeller driven airplane to be powered entirely by a fuel cell was flown. The fuel cell was a unique FlatStack stack design which allowed the fuel cell to be integrated with the aerodynamic surfaces of the plane.\n", "Section::::Use.:As fuel.:Transportation.\n\nIn the mid-20th century, kerosene or tractor vaporising oil (TVO) was used as a cheap fuel for tractors. The engine would start on gasoline, then switch over to kerosene once the engine warmed up. 
A heat valve on the manifold would route the exhaust gases around the intake pipe, heating the kerosene to the point where it was vaporized and could be ignited by an electric spark.\n", "BULLET::::- NASA's single-aisle turbo-electric aircraft with an aft boundary layer propulsor (STARC-ABL) is a conventional tube-and-wing 737-sized airliner with an aft-mounted electric fan ingesting the fuselage boundary layer hybrid-electric propulsion, with 5.4 MW of power distributed to three electric motors: the design will be evaluated by Aurora Flight Sciences;\n", "To evaluate electric propulsion systems, two test stands were constructed: one with two 250 kW UQM motors and two Hartzell Propellers, built with Yates Electrospace, the other on a trailer to be brought to high altitude test sites.\n\nIn May 2018, Jetex, a Dubai fixed-base operator with 30 bases, invested in the company.\n\nBy November 2018, Wright was testing a commercially available electric motor, before combining it with a Pratt & Whitney Canada PT6A turboprop to be installed on an existing nine-seater for 2019 flight tests, which may be marketed subsequently.\n", "Electric Aircraft Corporation ElectraFlyer Trike\n\nThe Electric Aircraft Corporation ElectraFlyer Trike is an ultralight trike fitted with an electric motor, instead of a traditional gasoline engine.\n\nSection::::Design and development.\n", "A number of electrically powered aircraft, such as the QinetiQ Zephyr, have been designed since the 1960s. Some are used as military drones. In France in late 2007, a conventional light aircraft powered by an 18 kW electric motor using lithium polymer batteries was flown, covering more than , the first electric airplane to receive a certificate of airworthiness.\n\nLimited experiments with solar electric propulsion have been performed, notably the manned Solar Challenger and Solar Impulse and the unmanned NASA Pathfinder aircraft.\n", "Ayaks uses thermochemical reactors (TCRs): the heating energy from air friction is used to increase the heat capacity of the fuel, by cracking the fuel with a catalytic chemical reaction. The aircraft has double shielding between which water and ordinary, cheap kerosene circulates in hot parts of the airframe. The energy of surface heating is absorbed through heat exchangers to trigger a series of chemical reactions in presence of a nickel catalyzer, called hydrocarbon steam reforming. Kerosene and water spits into a new fuel reformate: methane (70–80% in volume) and carbon dioxide (20–30%) in a first stage:\n", "Kerosene lamps are widely used for lighting in rural areas of Africa and Asia, where electricity is not distributed or is too costly. Kerosene lamps consume an estimated 77 billion litres of fuel per year, equivalent to 1.3 million barrels of oil per day, comparable to annual U.S. jet-fuel consumption of 76 billion litres per year.\n\nSection::::Types.\n\nSection::::Types.:Flat-wick lamp.\n", "The most advanced are the Zunum Aero 10-seater, the Airbus E-Fan X demonstrator, the VoltAero Cassio, UTC is modifying a Bombardier Dash 8, while an Ampaire prototype first flew on 6 June 2019.\n\nSection::::History.:Ion wind.\n\nIn November 2018, MIT engineers flew the first plane with no moving parts, propelled by ion wind thrust.\n\nSection::::Applications.\n", "On 3 November 2015, Jetpack Aviation demonstrated the JB-9 in Upper New York Bay in front of the Statue of Liberty. 
The JB-9 carries of kerosene fuel that burns through two vectored thrust AMT Nike jet engines at a rate of per minute for up to ten minutes of flying time, depending on pilot weight. Weight of fuel is a consideration, but it is reported to start with per minute climb rate that doubles as the fuel burns off. While this model has been limited to , the prototype of the JB-10 is reported to fly at over .\n", "On the upside, below a chamber pressure of about 1000 psi (6.9 MPa), kerosene can produce sooty deposits on the inside of the nozzle and chamber liner. This acts as a significant insulation layer and can reduce the heat flow into the wall by roughly a factor of two. Most modern hydrocarbon engines, however, run above this pressure, therefore this is not a significant effect for most engines.\n", "In 2014, Alphabet Energy introduced the world’s first industrial-scale thermoelectric generator, the E1. The E1 takes exhaust heat from large industrial engines and turns it into electricity. The result is an engine that needs less fuel to deliver the same power. The E1 is optimized for engines up to 1,400 kW, and works on any engine or exhaust source, currently generating up to 25 kWe on a standard 1,000 kW engine. The E1's modules are interchangeable but currently come with a low-cost proprietary thermoelectric material and the device is rated for a 10-year life span. As advances in thermoelectric materials are made, new modules can be swapped in for old ones, to continually improve fuel efficiency to as much as 10%.\n", "Specific energy is the important criterion in selecting an appropriate fuel to power an aircraft. Much of the weight of an aircraft goes into fuel storage to provide the range, and more weight means more fuel consumption. Aircraft have a high peak power and thus fuel demand during take-off and landing. This has so far prevented electric aircraft using electric batteries as the main propulsion energy store becoming widely commercially viable.\n\nSection::::Types of aviation fuel.\n\nSection::::Types of aviation fuel.:Conventional aviation fuels.\n\nSection::::Types of aviation fuel.:Conventional aviation fuels.:Jet fuel.\n", "In 2009 the Naval Research Laboratory's (NRL's) Ion Tiger utilized a hydrogen-powered fuel cell and flew for 23 hours and 17 minutes. Fuel cells are also being tested and considered to provide auxiliary power in aircraft, replacing fossil fuel generators that were previously used to start the engines and power on board electrical needs, while reducing carbon emissions.\n", "Section::::Research and university partnerships.:University of Cambridge.\n\nBoeing UK and Cambridge developed and tested the world's first parallel hybrid-electric plane that has the ability to recharge its batteries while it is flying and using 30% less fuel than an aircraft with a petrol-only engine. In its 2017 Supplier of the Year Awards, Boeing UK presented the university with the Boeing Innovation Award for its performance in research and development efforts that had been instrumental to Boeing's products and future business needs.\n\nSection::::Research and university partnerships.:Cranfield University.\n", "In the initial phase of liftoff, the Saturn V launch vehicle was powered by the reaction of liquid oxygen with RP-1. 
For the five 6.4 meganewton sea-level thrust F-1 rocket engines of the Saturn V, burning together, the reaction generated roughly 1.62 × 10^11 watts (J/s) (162 gigawatt) or 217 million horsepower.\n\nKerosene is sometimes used as an additive in diesel fuel to prevent gelling or waxing in cold temperatures.\n", "After 38 minutes of flying various manoeuvres, battery charge may be 25%. From the inside, the Electro is very similar to the gasoline-powered version, but from the outside, the Electro is much quieter. Electricity costs are about 1/10 of gasoline.\n\nThe Electro is now certified in the USA. In 2015 Pipistrel intended to fly the Electro from France to England two days before the Airbus E-Fan, but was prevented by Siemens. Four Electro aircraft will be used to provide flight training in Fresno, California starting in late 2017 as part of the Sustainable Aviation Project.\n\nSection::::Operators.\n", "EWZ-H6 Hybrid system is an unmanned multi-rotor developed by Ewatt in conventional multi-rotor layout and constructed of aluminum alloy and carbon fiber, using vertical take-off and landing. EWZ-H6 uses a generator and dual power battery. Specification: .\n\nBULLET::::- Length: 1705mm\n\nBULLET::::- Width: 1705mm\n\nBULLET::::- Height: 280mm\n\nBULLET::::- Main rotor diameter: 26 inches\n\nBULLET::::- Battery: 5000mAh 6SX2\n\nBULLET::::- Material: Carbon fiber\n\nBULLET::::- Weight: 14.3 kg\n\nBULLET::::- Max take-off weight: 24.3 kg\n\nBULLET::::- Max take-off altitude: 2000m\n\nBULLET::::- Payload capacity: 10 kg\n\nBULLET::::- Max cruising time: 2.5 +hours with 1 gallon of gasoline (3.7 liter)\n\nBULLET::::- Take off & landing mode: Vertical take-off and landing\n", "Section::::Market sectors.:UAVs.\n\nThe firm provides fuel cells to power UAVs and aerial drones. Its UAV Fuel Cell Modules run on hydrogen and ambient air to produce DC power in a lightweight package providing extended flight times when compared to battery systems.\n\nSection::::Market sectors.:Stationary Power.\n\nThe company's fuel cell systems are used to provide diesel replacement and backup power initially for telecom towers but also for other sectors. The company has field proven its fuel cell products in the Indian telecommunications market with a tower uptime of close to 100%.\n\nSection::::Membership of industry consortia and trade associations.\n" ]
[]
[]
[ "normal" ]
[ " electric aircraft cannot achieve the thrust of a kerosene jet" ]
[ "false presupposition", "normal" ]
[ "Electrical motors can provide the same power. It just requires a lot of power constantly. " ]
2018-10027
How does "3 in 1" shampoo/conditioner/body wash work?
They don't work well. Shampoos and body washes are both variants of soap: both are designed to strip dirt and oils from the hair and body, though body washes will also normally contain an exfoliant to help remove dead skin. These mixed products tend to have fewer oil-removing agents and no exfoliating agents, so they do the job of shampoo and body wash more poorly than dedicated products do. Conditioner is designed to add back some of the oils the shampoo removed from the hair, reduce static as the hair dries, and the like. It does not work at all in the combined products because the shampoo agents simply strip out whatever the conditioner is trying to deposit.
[ "BULLET::::1. dilution, in case the product comes in contact with eyes after running off the top of the head with minimal further dilution\n\nBULLET::::2. adjusting pH to that of non-stress tears, approximately 7, which may be a higher pH than that of shampoos which are pH adjusted for skin or hair effects, and lower than that of shampoo made of soap\n\nBULLET::::3. use of surfactants which, alone or in combination, are less irritating than those used in other shampoos (e.g. Sodium lauroamphoacetate)\n", "Conditioners are available in a wide range of forms including viscous liquids, gels and creams as well as thinner lotions and sprays. Hair conditioner is usually used after the hair has been washed with shampoo. It is applied and worked into the hair and may either be washed out a short time later or left in. For short hair, 2-3 tablespoons is the recommended amount. For long hair, up to 8 tablespoons may be used.\n\nSection::::History.\n", "The typical reason of using shampoo is to remove the unwanted build-up of sebum in the hair without stripping out so much as to make hair unmanageable. Shampoo is generally made by combining a surfactant, most often sodium lauryl sulfate or sodium laureth sulfate, with a co-surfactant, most often cocamidopropyl betaine in water.\n", "Shampoo is generally made by combining a surfactant, most often sodium lauryl sulfate or sodium laureth sulfate, with a co-surfactant, most often cocamidopropyl betaine in water to form a thick, viscous liquid. Other essential ingredients include salt (sodium chloride), which is used to adjust the viscosity, a preservative and fragrance. Other ingredients are generally included in shampoo formulations to maximize the following qualities:\n\nBULLET::::- pleasing foam\n\nBULLET::::- ease of rinsing\n\nBULLET::::- minimal skin and eye irritation\n\nBULLET::::- thick or creamy feeling\n\nBULLET::::- pleasant fragrance\n\nBULLET::::- low toxicity\n\nBULLET::::- good biodegradability\n\nBULLET::::- slight acidity (pH less than 7)\n", "Section::::All My Children.\n", "Section::::Types.\n\nSection::::Types.:Hair gel.\n\nHair gel is a hairstyle product that is used to stiffen hair into a particular hairstyle. The end result is similar to, but stronger than, those of hair spray. Hair gel is most commonly used in the hairstyling of men, but it is not gender specific. Hair gel can come in tubes, pots, small bags, or even in a spray form.\n\nSection::::Types.:Hair wax.\n", "BULLET::::- Ordinary conditioners combine some aspects of \"pack\" and \"leave-in\" conditioners. Ordinary conditioners are generally applied directly after using shampoo, and manufacturers usually produce a conditioner counterpart for different types of shampoo for this purpose.\n\nBULLET::::- Hold conditioners, based on cationic polyelectrolyte polymers, hold the hair in a desired shape. These have a function and composition similar to diluted hair gels.\n\nSection::::Ingredients.\n\nThere are several types of hair conditioner ingredients, differing in composition and functionality:\n", "Shampoo\n\nShampoo () is a hair care product, typically in the form of a viscous liquid, that is used for cleaning hair. Less commonly, shampoo is available in bar form, like a bar of soap. Shampoo is used by applying it to wet hair, massaging the product into the hair, and then rinsing it out. 
Some users may follow a shampooing with the use of hair conditioner.\n", "While the hairstyling products listed above are the most commonly used, there are other types of products as well. Serums, leave-in conditioner, clays, hair tonic, hair dry powder shampoo, and heat protection sprays are frequently used hairstyling products in salons and homes across the country.\n\nSection::::Disadvantages.\n", "Pet shampoos which include fragrances, deodorants or colors may harm the skin of the pet by causing inflammations or irritation. Shampoos that do not contain any unnatural additives are known as hypoallergenic shampoos and are increasing in popularity.\n\nSection::::Specialized shampoos.:Solid.\n\nSolid shampoos or shampoo bars use as their surfactants soaps or other surfactants formulated as solids. They have the advantage of being spill-proof. They are easy to apply; one may simply rub the bar over wet hair, and work the soaped hair into a low lather.\n\nSection::::Specialized shampoos.:Jelly and gel.\n", "Section::::History.:As a \"feminine hygiene\" product.\n", "Shower gels for men may contain the ingredient menthol, which gives a cooling and stimulating sensation on the skin, and some men's shower gels are also designed specifically for use on hair and body. Shower gels contain milder surfactant bases than shampoos, and some also contain gentle conditioning agents in the formula. This means that shower gels can also double as an effective and perfectly acceptable substitute to shampoo, even if they are not labelled as a hair and body wash. Washing hair with shower gel should give approximately the same result as using a moisturising shampoo.\n\nSection::::Marketing.\n", "Section::::Types of Dry Shampoos.\n\nDry shampoo can be administered as a powder, where all the ingredients of dry shampoo are combined together and applied to the scalp with the hand, or through the aerosol form where the dry shampoo is sprayed directly onto the head. 
In the aerosol form, the powders comprising the dry shampoo are dispersed throughout pressurized gas inside a can; when the release is pressed, the pressurized gas and powders inside are released, forming the aerosol that lands on the head or scalp.\n\nSection::::Types of Dry Shampoos.:Homemade Dry Shampoo.\n", "Section::::Processing methods.:Space-holder technique.\n", "Section::::Commercial performance.\n", "BULLET::::- adjusting pH to that of \"non-stress tears\", approximately 7, which may be a higher pH than that of shampoos which are pH adjusted for skin or hair effects, and lower than that of shampoo made of soap\n\nBULLET::::- use of surfactants which, alone or in combination, are less irritating than those used in other shampoos\n\nBULLET::::- use of nonionic surfactants of the form of polyethoxylated synthetic glycolipids and/or polyethoxylated synthetic monoglycerides, which counteract the eye sting of other surfactants without producing the anesthetizing effect of alkyl polyethoxylates or alkylphenol polyethoxylates\n", "Most contain sodium trideceth sulfate, which is formulated to act as a low-irritation cleansing agent.\n\nAlternatively, baby shampoo may be formulated using other classes of surfactants, most notably non-ionics which are much milder than any charged anionics used.\n\nSection::::Common ingredients.:Functional claims.\n", "Mohamed Hashish\n\nMohamed Hashish (born May 22, 1947) is an Egyptian-born research scientist best known as the father of the abrasive water jet cutter.\n\nSection::::Youth and Schooling.\n", "When visiting Australia, Redmond realised that a wide variety of various ingredients such as Blue Gum Leaves, Australian Custard Apple, Quandong, Mint Balm, Wild Cherry Bark and Jojoba Seed Oil could be used to produce hair products.\n\nAccordingly, Redmond was inspired to develop the first of the Aussie products: Australian 3 Minute Miracle (an intensive, conditioner that claims to produce results in three minutes).\n\nSection::::Products.\n\nSection::::Products.:Shampoos.\n\nBULLET::::- Mega Shampoo – an everyday shampoo with Australian Kangaroo Paw Flower extract.\n\nBULLET::::- Moist Shampoo – for dry hair, with Australian guava.\n", "Cosmetotextile\n\nCosmetotextile is a technology merging cosmetics and textiles through the process of micro-encapsulation. According to the Bureau de Normalisation des Industries Textiles et de l'Habillement (BNITH), “a cosmetotextile is a textile consumer article containing durably a cosmetic product which is released over time.” \n\nCosmetotextiles are impregnated with a finish composed of solid microcapsules, each holding a specific amount of cosmetic substance meant to be released totally and instantly on the human body. Cosmetotextiles currently offered on the market claim to be moisturising, perfumed, cellulite reducing or body slimming.\n", "Section::::Solid–solid mixing.\n", "BULLET::::- Extreme Soap Bars - two bars of soap (one yellow, the other black) start from the bathroom and skid around in sporty fashion, finishing up from a long jump off the stair banisters. Usually, however, the yellow bar winds up landing in the most awkward of places - a coal bag, a litter tray, a big of nails, etc.\n\nBULLET::::- A flourish camera takes photos of various OOglies characters, but is left exasperated when they mess up their shoots.\n", "BULLET::::- Hey! Our Toys Have Arrived – The band is sitting before their model toys. 2D says that they are useless, saying that his head doesn't wobble like the toy. 
Russel grabs hold of 2D's neck and shakes it, revealing that it does. Noodle is amazed, while Murdoc encourages him to do it again.\n", "BULLET::::- Continuous Processor\n\nBULLET::::- Cone Screw Blender\n\nBULLET::::- Screw Blender\n\nBULLET::::- Double Cone Blender\n\nBULLET::::- Double Planetary\n\nBULLET::::- High Viscosity Mixer\n\nBULLET::::- Counter-rotating\n\nBULLET::::- Double & Triple Shaft\n\nBULLET::::- Vacuum Mixer\n\nBULLET::::- High Shear Rotor Stator\n\nBULLET::::- Impinging mixer\n\nBULLET::::- Dispersion Mixers\n\nBULLET::::- Paddle\n\nBULLET::::- Jet Mixer\n\nBULLET::::- Mobile Mixers\n\nBULLET::::- Drum Blenders\n\nBULLET::::- Intermix mixer\n\nBULLET::::- Horizontal Mixer\n\nBULLET::::- Hot/Cold mixing combination\n\nBULLET::::- Vertical mixer\n\nBULLET::::- Turbomixer\n\nBULLET::::- Planetary mixer\n\nBULLET::::- A \"planetary mixer\" is a device used to mix round products including adhesives, pharmaceuticals, foods (including dough), chemicals, electronics, plastics and pigments.\n", "The product changed ownership many times throughout the 20th century and was bought by its current owners, the WD-40 Company, in 1995. The current marketing slogan is \"The Tool Kit In A Can,\" with the logo of the text \"3 in\" inside a large numeral \"1\".\n\nA few other products are now produced under the 3-in-1 brand, including a white lithium grease, silicone spray, and oil with added PTFE.\n\nIn 2000, the can was redesigned to look like the early 20th century oil can design (hemisphere base with tapered straight spout).\n" ]
[ "3 in 1 shampoo/conditioner/body wash works." ]
[ "3 in 1 shampoo/conditioner/body wash does not work at all in the combined products." ]
[ "false presupposition" ]
[ "3 in 1 shampoo/conditioner/body wash works.", "\"3 in 1\" shampoo/conditioner/body wash works." ]
[ "false presupposition", "normal" ]
[ "3 in 1 shampoo/conditioner/body wash does not work at all in the combined products.", "\"3 in 1\" shampoo/conditioner/body wash does not work well." ]
2018-04234
Why do some cities with horrible rush hour traffic have no traffic on the weekends, while others are congested all the time?
Overpopulated areas. Basically, rush-hour-only congestion happens where the roads can handle everyday demand but not the surge of commuters all travelling in the same direction at the same times each weekday; cities that are congested all the time simply have too many people travelling and not enough road surface area, at every hour of the week.
[ "Efforts to manage transportation demand during rush hour periods vary by state and by metropolitan area. In some states, freeways have designated lanes that become HOV (High-Occupancy Vehicle, aka car-pooling) only during rush hours, while open to all vehicles at other times. In others, such as the Massachusetts portion of I-93, travel is permitted in the breakdown lane during this time. Several states use ramp meters to regulate traffic entering freeways during rush hour. Transportation officials in Colorado and Minnesota have added value pricing to some urban freeways around Denver, the Twin Cities, and Seattle, charging motorists a higher toll during peak periods.\n", "Scheduled mass transit (i.e. buses or trains) trades off service frequency and load factor. Buses and trains must run on a predefined schedule, even during off-peak times when demand is low and vehicles are nearly empty. So to increase load factor, transportation planners try to predict times of low demand, and run reduced schedules or smaller vehicles at these times. This increases passengers' wait times. In many cities, trains and buses do not run at all at night or on weekends.\n", "The term \"third rush hour\" has been used to refer to a period of the midday in which roads in urban and suburban areas become congested due to a large number of people taking lunch breaks using their vehicles. These motorists often frequent restaurants and fast food locations, where vehicles crowding the entrances cause traffic congestion. Active retirees, who travel by automobile to engage in many midday activities, also contribute to the midday rush hour. Areas which have large school-age populations may also experience added congestion due to the large number of school buses and kiss-and-ride traffic that flood the roads after lunch, but before the evening rush hour. In many European countries (e.g., Germany, Austria, Hungary) the schools are only half-day and many people work only half-time too. This causes a third rush hour around 12:30–2 pm, which diverts some traffic from the evening rush hour, thus leaving the morning rush hour the most intense period of the day.\n", "Most commuters travel at the same time of day, resulting in the morning and evening rush hours, with congestion on roads and public transport systems not designed or maintained well enough to cope with the peak demands. As an example, Interstate 405 located in Southern California is one of the busiest freeways in the United States. Commuters may sit up to two hours in traffic during rush hour. Construction work or collisions on the freeway distract and slow down commuters, contributing to even longer delays.\n\nSection::::Pollution.\n", "In the morning (6–9am), and evening (4:30–7pm), Sydney, Brisbane and Melbourne, and Auckland and Christchurch are usually the most congested cities in Australia and New Zealand respectively. In Melbourne the Monash Freeway, which connects Melbourne's suburban sprawl, to the city is usually heavily congested each morning and evening. In Perth, Mitchell Freeway, Kwinana Freeway and various arterial roads are usually congested between peak hours, making movement between suburbs and the city quite slow.\n\nEfforts to minimise traffic congestion during peak hour vary on a state by state and city by city basis.\n\nIn Melbourne, congestion is managed by means including:\n", "There is also an afternoon rush hour. For example, in the New York City area, the afternoon rush hour can begin as early as 3 pm and last until 7 pm. 
Some people who live in Connecticut but work in New York often do not arrive home until 7 pm or later. On the other hand, in a smaller city like Cleveland, the afternoon rush hour takes place in a more literal sense such that heavy traffic congestion typically only occurs between 5 and 6 pm. Usually the RTA in Cleveland has an afternoon rush hour schedule like the morning.\n", "Staggered hours have been promoted as a means of spreading demand across a longer time span—for example, in \"Rush Hour\" (1941 film) and by the International Labour Office.\n\nSection::::Traffic management by country.\n\nSection::::Traffic management by country.:Australia and New Zealand.\n", "Fourthly, in conditions where by their acute nature (e.g. AMI, pulmonary embolus and child birth), they are likely to be admitted on the day and be no more common on weekdays or weekends, the effect is still seen. Such conditions are normally looked after by teams that work normal patterns at weekends. This may also imply that the effect is strong, a ‘whole system’ issue, and be difficult to overcome. This is a concern. It is not clear why stroke has a variable effect. Fifthly, and finally—and perhaps the strongest argument—it is unlikely that elective care patients would have been chosen for surgery at the weekend if they were high risk.\n", "In a US study of 25,301 COPD patients (Rinne et al., 2015), there were significantly fewer discharges on the weekend (1922 per weekend day vs 4279 per weekday, p0.01); weekend discharges were significantly associated with lower odds of mortality within 30 days after discharge (OR = 0.80; 95% CI 0.65-0.99).\n\nPulmonary embolus\n", "Rush hour occurs on weekdays between 5 am and 10 am, and in the afternoon between 3 pm and 7 pm (although rush-hour traffic can occasionally spill out to 11 am and start again from 2:00 pm until as late as 10 pm, especially on Fridays). Traffic can occur at almost any time, particularly before major holidays (including Thanksgiving, Christmas, and three-day weekends) and even on regular weekends when one otherwise would not expect it. Experienced Angelenos recognize the need to factor traffic into their commute.\n", "In another study on major trauma in 2016, in all 22 UK major trauma centres (MTC), Metcalfe et al., investigated 49,070 patients. Using multivariable logistic regression models, odds of secondary transfer into an MTC were higher at night (OR = 2.05, 95% CI 1.93-2.19) but not during the day at weekends (OR = 1.09; CI 0.99-1.19). Neither admission at night nor at the weekend was associated with increased length of stay, or higher odds of in-hospital death.\n\nSection::::Published research: Disease-specific (selected) patients: Specialist surgery.:Vascular surgery.\n", "Burstein et al., in a study of 71,180 South African paediatric trauma patients found that 8,815 (12.4%) resulted from Road Traffic Accidents. RTAs were more common on weekends than weekdays (2.98 vs 2.19 patients/day; p0.001), representing a greater proportion of daily all-cause trauma (15.5% vs 11.2%; p0.001). Moreover, weekend RTA patients sustained more severe injuries than on weekdays, and compared to weekend all-cause trauma patients (injury score 1.66 vs. 1.46 and 1.43; both p0.001).\n\nIn obstetric and paediatrics, most studies did show a weekend effect. This is a concern as both specialties, traditionally, have similar work patterns throughout the week.\n", "BULLET::::14. El Paso, Texas: 32.6 hours\n\nBULLET::::15. Denver, Colorado: 31.7 hours\n\nBULLET::::16. 
New Haven, Connecticut: 31.2 hours\n\nBULLET::::17. Fort Worth, Texas: 30.6 hours\n\nBULLET::::18. Albuquerque, New Mexico: 29.3 hours\n\nBULLET::::19. Detroit, Michigan: 28.5 hours\n\nBULLET::::20. Colorado Springs, Colorado: 26.8 hours\n\nBULLET::::21. St. Louis, Missouri: 25.6 hours\n\nBULLET::::22. Indianapolis, Indiana: 24.9 hours\n\nBULLET::::23. Baltimore, Maryland: 23.4 hours\n\nBULLET::::24. Las Vegas, Nevada: 22.1 hours\n\nBULLET::::25. Salt Lake City, Utah: 21.9 hours\n\nBULLET::::26. Lubbock, Texas: 21.5 hours\n\nBULLET::::27. Provo, Utah: 21.2 hours\n\nBULLET::::28. Aurora, Colorado: 20.7 hours\n\nBULLET::::29. New Orleans, Louisiana: 20.2 hours\n\nBULLET::::30. Arlington, Texas: 19.8 hours\n\nBULLET::::31. Hartford, Connecticut: 19.6 hours\n", "Two stroke studies have looked at working patterns. In 2013 in France, Bejot el al. found that onset during weekends/public holidays was associated with a higher risk of 30-day mortality during 1985-2003 but not during 2004-2010; before and after the introduction of a dedicated stroke care network. In 2014, Bray et al., in the UK, found that the highest risk of death observed was in stroke services with the lowest nurse/bed ratios.\n", "In 2016, in the US, Gaeteno et al. investigated 31,614 with cirrhosis and ascites. Among these admissions, approximately 51% (16,133) underwent paracentesis. Patients admitted on a weekend demonstrated less frequent use of early paracentesis (50% weekend vs 62% weekday) and increased mortality (OR = 1.12; 95% CI 1.01-1.25).\n", "In Toronto, rush hour typically lasts from 8:00-9:00 in the morning and later from 2 pm until at least 7:30–8 pm. Montreal, however, has rush hour times from 6:30–8:30 am and 3:30–6 pm.\n\nIn the cities of Edmonton and Calgary, rush hour typically lasts from 7–9 am and begins again at 2:30–6 pm. The overwhelming traffic causes significant delays on freeways and commuter routes, most notably being Anthony Henday Drive in Edmonton, where the province has committed to widening, and Deerfoot Trail in Calgary.\n\nSection::::Traffic management by country.:China.\n", "In summary, in neuroscience, the evidence is less clear. In stroke, the weekend effect probably exists. Except for two studies (by Kazley et al., and Hoh et al., both in 2010), all the studies over 20,000 patients show the effect. In neurological/surgical conditions that may require surgery, there is variable evidence of a weekend effect.\n\nSection::::Published research: Disease-specific (selected) patients: Paediatrics and obstetrics.\n\nNeonatal mortality\n\nSeveral paediatric and obstetric studies have been performed. In fact, the first studies on the weekend effect were in this area in the late 1970s.\n", "The ability to accurately forecast request patterns is an important requirement of capacity planning. A practical consequence of burstiness and heavy-tailed and correlated arrivals is difficulty in capacity planning.\n", "Traffic congestion is a condition on road networks that occurs as use increases, and is characterized by slower speeds, longer trip times, increased pollution, and increased vehicular queueing. The Texas Transportation Institute estimated that, in 2000, the 75 largest metropolitan areas experienced 3.6 billion vehicle-hours of delay, resulting in 5.7 billion U.S. gallons (21.6 billion liters) in wasted fuel and $67.5 billion in lost productivity, or about 0.7% of the nation's GDP. 
It also estimated that the annual cost of congestion for each driver was approximately $1,000 in very large cities and $200 in small cities. Traffic congestion is increasing in major cities and delays are becoming more frequent in smaller cities and rural areas.\n", "Another usage of \"third rush hour\" can be to describe congestion later at night (generally between 10–11 pm and 2–3 am the next morning, particularly on Thursdays, Fridays, and Saturdays) of people returning home from nights spent out at restaurants, bars, nightclubs, casinos, concerts, amusement parks, movie theaters, and sporting events. At other times (such as evenings and weekends), additional periods of congestion can be the result of various special events, such as sports competitions, festivals, or religious services. Out-of-the-ordinary congestion can be the result of an accident, construction, long holiday weekends, or inclement weather.\n\nSection::::See also.\n\nBULLET::::- Carpool\n", "In measured traffic data, have been found that are qualitatively the same for different highways in different countries. Some of these common features distinguish the wide moving jam and synchronized flow phases of congested traffic in Kerner's three-phase traffic theory.\n\nSection::::Congested traffic.:Rush hour.\n\nDuring business days in most major cities, traffic congestion reaches great intensity at predictable times of the day due to the large number of vehicles using the road at the same time. This phenomenon is called \"rush hour\" or \"peak hour\", although the period of high traffic intensity often exceeds one hour.\n\nSection::::Congestion mitigation.\n", "In 2013 in France, Béjot et al., in a study of 5864 patients, found that onset during weekends/bank holidays was associated with a higher risk of 30-day mortality during 1985-2003 (OR = 1.26; 95% CI 1.06-1.51; p=0.01), but not during 2004-2010 (OR = 0.99; 95% CI 0.69-1.43; p=0.97). The authors concluded \"The deleterious effect of weekends/bank holidays on early stroke mortality disappeared after the organization of a dedicated stroke care network in our community\".\n", "BULLET::::32. Miami, Florida: 19.5 hours\n\nBULLET::::33. Tampa, Florida: 19.4 hours\n\nBULLET::::34. Daytona Beach, Florida: 19.2 hours\n\nBULLET::::35. Boise, Idaho: 18.7 hours\n\nBULLET::::36. Rio Rancho, New Mexico: 18.4 hours\n\nBULLET::::37. Wichita, Kansas: 18.1 hours\n\nBULLET::::38. Mobile, Alabama: 17.6 hours;\n\nBULLET::::39. Fort Collins, Colorado: 16.9 hour\n\nBULLET::::40. Kansas City, Missouri: 16.7 hours\n\nBULLET::::41. Columbia, Missouri: 16.3 hours\n\nBULLET::::42. Abilene, Texas: 16.1 hours\n\nBULLET::::43. Sacramento, California: 15.8 hours\n\nBULLET::::44. Midland, Texas: 15.4 hours\n\nBULLET::::45. Westminster, Colorado: 14.7 hours\n\nBULLET::::46. Plano, Texas: 14.5 hours\n\nBULLET::::47. Temple, Texas: 14.2 hours\n\nBULLET::::48. Loveland, Colorado: 13.8 hours\n\nBULLET::::49. Amarillo, Texas: 13.2 hours\n\nBULLET::::50. 
Odessa, Texas: 12.8 hours\n", "In Sydney, congestion is managed by many means including:\n\nBULLET::::- Buses increase frequency from 4 per hour to 12 per hour on the Metrobus network, other routes increase limited and express services\n\nBULLET::::- The CityRail network runs double-decker electric multiple unit trains that allowed many more passengers to board the trains compared to the 1950s single-level 'Red Rattlers', and 'Silver Ghosts'.\n\nBULLET::::- Time-of-day ticket prices allow train commuters to board trains before 6 am or after 7 pm at a cheaper rate on single or day return tickets\n\nBULLET::::- Transit and/or HOV Lanes are installed on many major arterial roads,\n", "Cities such as Atlanta, Houston, Austin, Boston, Chicago, Honolulu, New York, Los Angeles, Philadelphia, San Francisco, and Washington, D.C., to name a few, are ranked as having the worst traffic in the country. Los Angeles also has the highest amount of time spent in congestion, followed by Honolulu and Washington.\n\nSection::::Third rush hour.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04855
How are scientists able to determine the age of the universe?
They can measure how far apart things are, how fast they are moving apart, and how much that speed is changing. Because the same formula describes the expansion going forward, they can also run it backwards: if something has gone a distance x and is moving at speed f(t), you can work out how long it has been traveling. Say something starts at 1 mph and speeds up by 1 mph every hour. After hour 1 it is at 1 mph and has gone 1 mile; after hour 2 it is at 2 mph and has gone 1+2 = 3 miles; after hour 3, 3 mph and 1+2+3 = 6 miles; after hour 4, 4 mph and 1+2+3+4 = 10 miles. Now work that backwards: start at 10 miles and 4 mph, lose 1 mph every hour, and you arrive back at the start after 4 hours. Astronomers do the same kind of roll-back with the expansion of the universe to get its age.
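A minimal Python sketch of the back-calculation this answer describes, using the answer's own toy numbers (speed grows by 1 mph every hour); the function names are made up for illustration and this is not an actual cosmological calculation.

```python
# Toy version of "roll the expansion backwards": speed grows by 1 mph every hour.

def distance_after(hours, start_speed=1, accel_per_hour=1):
    """Total distance covered after a whole number of hours."""
    return sum(start_speed + accel_per_hour * h for h in range(hours))

def hours_from_distance(distance, start_speed=1, accel_per_hour=1):
    """Run the same rule backwards: how many hours to cover `distance`?"""
    travelled, hours = 0, 0
    while travelled < distance:
        travelled += start_speed + accel_per_hour * hours
        hours += 1
    return hours

print(distance_after(4))        # 1 + 2 + 3 + 4 = 10 miles
print(hours_from_distance(10))  # recovers the 4 hours from the answer
```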
[ "The discovery of microwave cosmic background radiation announced in 1965 finally brought an effective end to the remaining scientific uncertainty over the expanding universe. It was a chance result from work by two teams less than 60 miles apart. In 1964, Arno Penzias and Robert Wilson were trying to detect radio wave echoes with a supersensitive antenna. The antenna persistently detected a low, steady, mysterious noise in the microwave region that was evenly spread over the sky, and was present day and night. After testing, they became certain that the signal did not come from the Earth, the Sun, or our galaxy, but from outside our own galaxy, but could not explain it. At the same time another team, Robert H. Dicke, Jim Peebles, and David Wilkinson, were attempting to detect low level noise which might be left over from the Big Bang and could prove whether the Big Bang theory was correct. The two teams realized that the detected noise was in fact radiation left over from the Big Bang, and that this was strong evidence that the theory was correct. Since then, a great deal of other evidence has strengthened and confirmed this conclusion, and refined the estimated age of the universe to its current figure.\n", "The age of the universe as estimated from the Hubble expansion and the CMB is now in good agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars.\n", "The exact timings of the first stars, galaxies, supermassive black holes, and quasars, and the start and end timings and progression of the period known as reionization, are still being actively researched, with new findings published periodically. As of 2019, the earliest confirmed galaxies date from around 380–400 million years (for example GN-z11), suggesting surprisingly fast gas cloud condensation and stellar birth rates, and observations of the Lyman-alpha forest and other changes to the light from ancient objects allows the timing for reionization, and its eventual end, to be narrowed down. But these are all still areas of active research.\n", "This measurement is made by using the location of the first acoustic peak in the microwave background power spectrum to determine the size of the decoupling surface (size of the universe at the time of recombination). The light travel time to this surface (depending on the geometry used) yields a reliable age for the universe. Assuming the validity of the models used to determine this age, the residual accuracy yields a margin of error near one percent.\n\nSection::::Planck.\n", "In physical cosmology, the age of the universe is the time elapsed since the Big Bang. The current measurement of the age of the universe is billion (10) years within the Lambda-CDM concordance model. The uncertainty has been narrowed down to 21 million years, based on a number of studies which all gave extremely similar figures for the age. These include studies of the microwave background radiation, and measurements by the \"Planck\" spacecraft, the Wilkinson Microwave Anisotropy Probe and other probes. 
Measurements of the cosmic background radiation give the cooling time of the universe since the Big Bang, and measurements of the expansion rate of the universe can be used to calculate its approximate age by extrapolating backwards in time.\n", "Apart from the Planck satellite, the Wilkinson Microwave Anisotropy Probe (WMAP) was instrumental in establishing an accurate age of the universe, though other measurements must be folded in to gain an accurate number. CMB measurements are very good at constraining the matter content Ω and curvature parameter Ω. It is not as sensitive to Ω directly, partly because the cosmological constant becomes important only at low redshift. The most accurate determinations of the Hubble parameter \"H\" come from Type Ia supernovae. Combining these measurements leads to the generally accepted value for the age of the universe quoted above.\n", "The space probes WMAP, launched in 2001, and Planck, launched in 2009, produced data that determines the Hubble constant and the age of the universe independent of galaxy distances, removing the largest source of error.\n\nSection::::See also.\n\nBULLET::::- Age of the Earth\n\nBULLET::::- Anthropic principle\n\nBULLET::::- Cosmic Calendar (age of universe scaled to a single year)\n\nBULLET::::- Cosmology\n\nBULLET::::- Dark Ages Radio Explorer\n\nBULLET::::- Expansion of the universe\n\nBULLET::::- Hubble Deep Field\n\nBULLET::::- Illustris project\n\nBULLET::::- Multiverse\n\nBULLET::::- Observable universe\n\nBULLET::::- Redshift observations in astronomy\n\nBULLET::::- Static universe\n\nBULLET::::- \"The First Three Minutes\" (1977 book by Steven Weinberg)\n", "In 2015, the Planck Collaboration estimated the age of the universe to be billion years, slightly higher but within the uncertainties of the earlier number derived from the WMAP data. By combining the Planck data with external data, the best combined estimate of the age of the universe is old.\n\nSection::::Assumption of strong priors.\n", "If one has accurate measurements of these parameters, then the age of the universe can be determined by using the Friedmann equation. This equation relates the rate of change in the scale factor \"a\"(\"t\") to the matter content of the universe. Turning this relation around, we can calculate the change in time per change in scale factor and thus calculate the total age of the universe by integrating this formula. The age \"t\" is then given by an expression of the form\n", "NASA's Wilkinson Microwave Anisotropy Probe (WMAP) project's nine-year data release in 2012 estimated the age of the universe to be years (13.772 billion years, with an uncertainty of plus or minus 59 million years).\n\nHowever, this age is based on the assumption that the project's underlying model is correct; other methods of estimating the age of the universe could give different ages. 
Assuming an extra background of relativistic particles, for example, can enlarge the error bars of the WMAP constraint by one order of magnitude.\n", "There is also currently an observational effort underway to detect the faint 21 cm spin line radiation, as it is in principle an even more powerful tool than the cosmic microwave background for studying the early universe.\n\nSection::::The Dark Ages and large-scale structure emergence.:Dark Ages.:Speculative \"habitable epoch\".\n", "The age of the universe based on the best fit to Planck 2015 data alone is billion years (the estimate of billion years uses Gaussian priors based on earlier estimates from other studies to determine the combined uncertainty). This number represents an accurate \"direct\" measurement of the age of the universe (other methods typically involve Hubble's law and the age of the oldest stars in globular clusters, etc.). It is possible to use different methods for determining the same parameter (in this case – the age of the universe) and arrive at different answers with no overlap in the \"errors\". To best avoid the problem, it is common to show two sets of uncertainties; one related to the actual measurement and the other related to the systematic errors of the model being used.\n", "Measuring the mass functions of galaxy clusters, which describe the number density of the clusters above a threshold mass, also provides evidence for dark energy . By comparing these mass functions at high and low redshifts to those predicted by different cosmological models, values for and are obtained which confirm a low matter density and a non zero amount of dark energy.\n\nSection::::Evidence for acceleration.:Age of the universe.\n\nGiven a cosmological model with certain values of the cosmological density parameters, it is possible to integrate the Friedmann equations and derive the age of the universe.\n", "More recent measurements from WMAP and the Planck spacecraft lead to an estimate of the age of the universe of 13.80 billion years with only 0.3 percent uncertainty (based on the standard Lambda-CDM model), and modern age measurements for globular clusters and other objects are currently smaller than this value (within the measurement uncertainties). A substantial majority of cosmologists therefore believe the age problem is now resolved.\n", "In the mid-1990s, observations of certain globular clusters appeared to indicate that they were about 15 billion years old, which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. 
While there still remain some questions as to how accurately the ages of the clusters are measured, globular clusters are of interest to cosmology as some of the oldest objects in the universe.\n", "Since the universe must be at least as old as the oldest things in it, there are a number of observations which put a lower limit on the age of the universe; these include the temperature of the coolest white dwarfs, which gradually cool as they age, and the dimmest turnoff point of main sequence stars in clusters (lower-mass stars spend a greater amount of time on the main sequence, so the lowest-mass stars that have evolved off of the main sequence set a minimum age).\n\nSection::::Cosmological parameters.\n", "Calculating the age of the universe is accurate only if the assumptions built into the models being used to estimate it are also accurate. This is referred to as strong priors and essentially involves stripping the potential errors in other parts of the model to render the accuracy of actual observational data directly into the concluded result. Although this is not a valid procedure in all contexts (as noted in the accompanying caveat: \"based on the fact we have assumed the underlying model we used is correct\"), the age given is thus accurate to the specified error (since this error represents the error in the instrument used to gather the raw data input into the model).\n", "The cosmological constant makes the universe \"older\" for fixed values of the other parameters. This is significant, since before the cosmological constant became generally accepted, the Big Bang model had difficulty explaining why globular clusters in the Milky Way appeared to be far older than the age of the universe as calculated from the Hubble parameter and a matter-only universe. Introducing the cosmological constant allows the universe to be older than these clusters, as well as explaining other features that the matter-only cosmological model could not.\n\nSection::::WMAP.\n", "Her research is in cosmology, studying the chronology of the universe using the Atacama Cosmology Telescope, the Simons Observatory, and the Large Synoptic Survey Telescope (LSST).\n", "Nucleocosmochronology has been employed to determine the age of the Sun ( billion years) and of the Galactic thin disk ( billion years), among others. It has also been used to estimate the age of the Milky Way itself, as exemplified by a recent study of Cayrel's Star in the Galactic halo, which due to its low metallicity, is believed to have formed early in the history of the Galaxy. 
Limiting factors in its precision are the quality of observations of faint stars and the uncertainty of the primordial abundances of r-process elements.\n\nSection::::See also.\n\nBULLET::::- Astrochemistry\n\nBULLET::::- Geochronology\n", "BULLET::::- Ole Rømer makes the first quantitative estimate of the speed of light in 1676 by timing the motions of Jupiter's satellite Io with a telescope\n\nBULLET::::- Arno Penzias and Robert Wilson detect the cosmic microwave background radiation, giving support to the theory of the Big Bang (1964)\n\nBULLET::::- Kerim Kerimov launches Kosmos 186 and Kosmos 188 as experiments on automatic docking eventually leading to the development of space stations (1967)\n\nBULLET::::- The Supernova Cosmology Project and the High-Z Supernova Search Team discover, by observing Type Ia supernovae, that the expansion of the Universe is accelerating (1998)\n\nSection::::Biology.\n", "On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. The map suggests the universe is slightly older than previously thought. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early, in the existence of the universe, as the first nonillionth (10) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is 13.799 ± 0.021 billion years old and the Hubble constant was measured to be 67.74 ± 0.46 (km/s)/Mpc.\n", "The age problem was eventually thought to be resolved by several developments between 1995-2003: firstly, a large program with the Hubble Space Telescope measured the Hubble constant at 72 (km/s)/Mpc with 10 percent uncertainty. Secondly, measurements of parallax by the Hipparcos spacecraft in 1995 revised globular cluster distances upwards by 5-10 percent; this made their stars brighter than previously estimated and therefore younger, shifting their age estimates down to around 12-13 billion years. Finally, from 1998-2003 a number of new cosmological observations including supernovae, cosmic microwave background observations and large galaxy redshift surveys led to the acceptance of dark energy and the establishment of the Lambda-CDM model as the standard model of cosmology. The presence of dark energy implies that the universe was expanding more slowly at around half its present age than today, which makes the universe older for a given value of the Hubble constant. The combination of the three results above essentially removed the discrepancy between estimated globular cluster ages and the age of the universe.\n", "The standard model of cosmology is based on a model of space-time called the Friedmann–Lemaître–Robertson–Walker (\"FLRW\") metric. A metric provides a measure of distance between objects, and the FLRW metric is the exact solution of Einstein's field equations if some key properties of space such as homogeneity and isotropy are assumed to be true. 
The FLRW metric very closely matches overwhelming other evidence, showing that the universe has expanded since the Big Bang.\n", "Reported in March 2016, \"Spitzer\" and \"Hubble\" were used to discover the most distant-known galaxy, GN-z11. This object was seen as it appeared 13.4 billion years ago. (List of the most distant astronomical objects)\n\nSection::::Successors to GO instruments.\n" ]
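The Friedmann-equation passage above refers to "an expression of the form" without showing the expression. As an illustrative sketch only, assuming the standard flat Lambda-CDM form of that age integral, t0 = (1/H0) * integral from a=0 to 1 of da / (a * sqrt(Omega_m * a^-3 + Omega_Lambda)), with radiation and curvature neglected, the snippet below integrates it numerically. The parameter values are assumed, roughly Planck-like numbers chosen for the example, not quantities quoted from the passages.

```python
import math

# Assumed, roughly Planck-like parameters (illustrative values only).
H0_km_s_Mpc = 67.7               # Hubble constant
omega_m, omega_l = 0.31, 0.69    # matter and dark-energy density parameters

# Convert H0 from km/s/Mpc to 1/Gyr.
KM_PER_MPC = 3.0857e19
SEC_PER_GYR = 3.156e16
H0_per_Gyr = H0_km_s_Mpc / KM_PER_MPC * SEC_PER_GYR

def integrand(a):
    # dt = da / (a * H(a)), with H(a) = H0 * sqrt(omega_m * a**-3 + omega_l)
    return 1.0 / (a * math.sqrt(omega_m * a**-3 + omega_l))

# Midpoint rule over the scale factor from ~0 (Big Bang) to 1 (today).
n = 100_000
da = 1.0 / n
integral = sum(integrand((i + 0.5) * da) * da for i in range(n))

age_gyr = integral / H0_per_Gyr
print(f"Age of the universe ~ {age_gyr:.2f} Gyr")   # comes out near 13.8 Gyr
```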
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-21886
I just saw the results of a survey on the news. They had an error margin of "3.3 percentage points, 19 times out of 20." What's the error margin the remaining 1 time out of 20?
The remaining one time out of twenty, the poll misses by more than 3.3 percentage points: the margin itself doesn't change, it's just that the true number falls outside that window. Let's say the result of the survey was 50% yes, meaning half of the respondents said yes. What does that say about all the people who weren't asked? If only two people were asked, not much. But if a lot of people were asked, the survey is very likely to be a good reflection of the general population (barring problems with the way respondents were selected, which is pretty common). So a margin of error of 3.3 percentage points, 19 times out of 20, means there is a 95% chance (19/20) that if you could ask every single person, between 46.7% and 53.3% of the population would say yes to the survey question.
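To put numbers on this, here is a small Python sketch of the usual approximation for a poll's margin of error, z * sqrt(p*(1-p)/n) with z of about 1.96 for 95% confidence, i.e. "19 times out of 20". The sample size below is an assumption picked so the margin comes out near 3.3 points; the survey in the question did not state its sample size.

```python
import math

Z_95 = 1.96  # ~95% confidence, i.e. "19 times out of 20"

def margin_of_error(n, p=0.5, z=Z_95):
    """Approximate margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

n = 880                      # assumed sample size, not stated in the question
moe = margin_of_error(n)     # ~0.033, i.e. about 3.3 percentage points
low, high = 0.50 - moe, 0.50 + moe
print(f"MOE = {moe:.3f} -> 95% interval for a 50% result: {low:.1%} to {high:.1%}")
```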
[ "In cases where the sampling fraction exceeds 5%, analysts can adjust the margin of error using a \"finite population correction\" (FPC), to account for the added precision gained by sampling close to a larger percentage of the population. FPC can be calculated using the formula:\n", "An example from the 2004 U.S. presidential campaign will be used to illustrate concepts throughout this article. According to an October 2, 2004 survey by \"Newsweek\", 47% of registered voters would vote for John Kerry/John Edwards if the election were held on that day, 45% would vote for George W. Bush/Dick Cheney, and 2% would vote for Ralph Nader/Peter Camejo. The size of the sample was 1,013. Unless otherwise stated, the remainder of this article uses a 95% level of confidence.\n\nSection::::Concept.:Basic concept.\n", "While the margin of error typically reported in the media is a poll-wide figure that reflects the maximum sampling variation of any percentage based on all respondents from that poll, the term \"margin of error\" also refers to the radius of the confidence interval for a particular statistic.\n\nThe margin of error for a particular individual percentage will usually be smaller than the maximum margin of error quoted for the survey. This maximum only applies when the observed percentage is 50%, and the margin of error shrinks as the percentage approaches the extremes of 0% or 100%.\n", "Sampling theory provides methods for calculating the probability that the poll results differ from reality by more than a certain amount, simply due to chance; for instance, that the poll reports 47% for Kerry but his support is actually as high as 50%, or is really as low as 44%. This theory and some Bayesian assumptions suggest that the \"true\" percentage will probably be fairly close to 47%. The more people that are sampled, the more confident pollsters can be that the \"true\" percentage is close to the observed percentage. The margin of error is a measure of how close the results are likely to be.\n", "Confidence intervals can be calculated, and so can margins of error, for a range of statistics including individual percentages, differences between percentages, means, medians, and totals.\n\nThe margin of error for the difference between two percentages is larger than the margins of error for each of these percentages, and may even be larger than the maximum margin of error for any individual percentage from the survey.\n\nSection::::Comparing percentages.\n", "The margin of error has been described as an \"absolute\" quantity, equal to a confidence interval radius for the statistic. For example, if the true value is 50 percentage points, and the statistic has a confidence interval radius of five percentage points, then we say the margin of error is five percentage points. As another example, if the true value is 50 people, and the statistic has a confidence interval radius of five people, then we might say the margin of error is five people.\n", "Polls based on samples of populations are subject to sampling error which reflects the effects of chance and uncertainty in the sampling process. Sampling polls rely on the law of large numbers to measure the opinions of the whole population based only on a subset, and for this purpose the absolute size of the sample is important, but the percentage of the whole population is not important (unless it happens to be close to the sample size). 
The possible difference between the sample and whole population is often expressed as a margin of error - usually defined as the radius of a 95% confidence interval for a particular statistic. One example is the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%. Others suggest that a poll with a random sample of 1,000 people has margin of sampling error of ±3% for the estimated percentage of the whole population.\n", "In a plurality voting system, where the winner is the candidate with the most votes, it is important to know who is ahead. The terms \"statistical tie\" and \"statistical dead heat\" are sometimes used to describe reported percentages that differ by less than a margin of error, but these terms can be misleading. For one thing, the margin of error as generally calculated is applicable to an \"individual percentage\" and not the difference between percentages, so the difference between two percentage estimates may not be statistically significant even when they differ by more than the reported margin of error. The survey results also often provide strong information even when there is not a statistically significant difference.\n", "The title refers to the results of a poll on voting intentions for the mayoral primary featured in the series. In statistical analysis, the margin of error expresses the amount of the random variation underlying a survey's results. This can be thought of as a measure of the variation one would see in reported percentages if the same poll were taken multiple times. The larger the margin of error, the less confidence one has that the poll's reported percentages are close to the \"true\" percentages; that is, the percentages in the whole population.\n\nSection::::Production.:Epigraph.\n", "If an article about a poll does not report the margin of error, but does state that a simple random sample of a certain size was used, the margin of error can be calculated for a desired degree of confidence using one of the above formulae. Also, if the 95% margin of error is given, one can find the 99% margin of error by increasing the reported margin of error by about 30%.\n", "Various opinion polls were conducted to estimate the outcome of the proposition. Those margins with differences less than their margins of error are marked as \"n.s.\", meaning not significant (see Statistical significance). Those margins considered statistically significant are indicated with the percentage points and the side favored in the poll, as either \"pro\" for in favor of the proposition's passage (e.g., 1% pro), or \"con\" for against its passage (e.g., 1% con).\n", "Another way to reduce the margin of error is to rely on poll averages. This makes the assumption that the procedure is similar enough between many different polls and uses the sample size of each poll to create a polling average. An example of a polling average can be found here: 2008 Presidential Election polling average. Another source of error stems from faulty demographic models by pollsters who weigh their samples by particular variables such as party identification in an election. 
For example, if you assume that the breakdown of the US population by party identification has not changed since the previous presidential election, you may underestimate a victory or a defeat of a particular party candidate that saw a surge or decline in its party registration relative to the previous presidential election cycle.\n", "Total survey error\n\nIn survey sampling, total survey error includes all forms of survey error including sampling variability, interviewer effects, frame errors, response bias, and non-response bias. Total survey error is discussed in detail in many sources including Salant and Dillman.\n\nSection::::Definition.\n", "A 3% margin of error means that if the same procedure is used a large number of times, 95% of the time the true population average will be within the sample estimate plus or minus 3%. The margin of error can be reduced by using a larger sample, however if a pollster wishes to reduce the margin of error to 1% they would need a sample of around 10,000 people. In practice, pollsters need to balance the cost of a large sample against the reduction in sampling error and a sample size of around 500–1,000 is a typical compromise for political polls. (Note that to get complete responses it may be necessary to include thousands of additional participators.)\n", "However, the margin of error only accounts for random sampling error, so it is blind to systematic errors that may be introduced by non-response or by interactions between the survey and subjects' memory, motivation, communication and knowledge.\n\nSection::::Concept.:Calculations assuming random sampling.\n\nThis section will briefly discuss the standard error of a percentage, the corresponding confidence interval, and connect these two concepts to the margin of error. For simplicity, the calculations here assume the poll was based on a simple random sample from a large population.\n", "However, often only one margin of error is reported for a survey. When results are reported for population subgroups, a larger margin of error will apply, but this may not be made clear. For example, a survey of 1000 people may contain 100 people from a certain ethnic or economic group. The results focusing on that group will be much less reliable than results for the full population. If the margin of error for the full sample was 4%, say, then the margin of error for such a subgroup could be around 13%.\n", "If the exact confidence intervals are used, then the margin of error takes into account both sampling error and non-sampling error. If an approximate confidence interval is used (for example, by assuming the distribution is normal and then modeling the confidence interval accordingly), then the margin of error may only take random sampling error into account. It does not represent other potential sources of error or bias such as a non-representative sample-design, poorly phrased questions, people lying or refusing to respond, the exclusion of people who could not be contacted, or miscounts and miscalculations.\n\nSection::::Concept.\n", "Margin of error is usually defined as the \"radius\" (or half the width) of a confidence interval for a particular statistic from a survey. One example is the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. 
If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%.\n", "The Apdex formula is the number of satisfied samples plus half of the tolerating samples plus none of the frustrated samples, divided by all the samples:\n\nwhere the sub-script t is the target time, and the tolerable time is assumed to be 4 times the target time. So it is easy to see how this ratio is always directly related to users' perceptions of satisfactory application responsiveness.\n", "Margin of Error (The Wire)\n\n\"Margin of Error\" is the sixth episode of the fourth season of the HBO original series \"The Wire\". Written by Eric Overmyer from a story by Ed Burns & Eric Overmyer, and directed by Dan Attias, it originally aired on October 15, 2006.\n\nSection::::Plot summary.\n\nSection::::Plot summary.:Democratic Primary Campaign.\n", "As an example of the above, a random sample of size 400 will give a margin of error, at a 95% confidence level, of 0.98/20 or 0.049 - just under 5%. A random sample of size 1600 will give a margin of error of 0.98/40, or 0.0245 - just under 2.5%. A random sample of size 10,000 will give a margin of error at the 95% confidence level of 0.98/100, or 0.0098 - just under 1%.\n\nSection::::Concept.:Maximum and specific margins of error.\n", "The five measures used to evaluate the accuracy of different forecasts were: symmetric mean absolute percentage error (also known as symmetric MAPE), average ranking, median symmetric absolute percentage error (also known as median symmetric APE), percentage better, and median RAE.\n", "Polls basically involve taking a sample from a certain population. In the case of the \"Newsweek\" poll, the population of interest is the population of people who will vote. Because it is impractical to poll everyone who will vote, pollsters take smaller samples that are intended to be representative, that is, a random sample of the population. It is possible that pollsters sample 1,013 voters who happen to vote for Bush when in fact the population is evenly split between Bush and Kerry, but this is extremely unlikely (\"p\" = 2 ≈ 1.1 × 10) given that the sample is random.\n", "Section::::Critical reception.\n", "UCA - Sponsored by END, Canal 10, CNC - \"Poll Conducted October, 2006\"\n\n\"This poll sample size is 15,330. The margin of error is 0.8%.\"\n\nCID-Gallup - \"Poll Conducted October, 2006 and August 16 to 19, 2006\"\n\nThe October poll sample size is 5,090. The margin of error is 2%. August poll sample size is 1,258. Methodology: Telephone interviews. Margin of error is 2.8%.\n\nSection::::Pre-election poll results.:Parliamentary election.\n" ]
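One passage above works the shortcut MOE ~ 0.98/sqrt(n) for the maximum (50%) margin of error at 95% confidence, and another mentions a finite population correction without reproducing its formula. The sketch below re-checks the passage's worked numbers; the correction factor sqrt((N - n)/(N - 1)) is my assumption of the commonly used form, not something quoted in the passages.

```python
import math

def max_moe_95(n):
    """Maximum 95% margin of error, i.e. for a reported percentage of 50%."""
    return 0.98 / math.sqrt(n)

def fpc(population, sample):
    """Finite population correction factor (assumed standard form)."""
    return math.sqrt((population - sample) / (population - 1))

for n in (400, 1600, 10_000):
    print(n, round(max_moe_95(n), 4))   # 0.049, 0.0245, 0.0098, as in the passage

# Sampling 400 people out of a population of only 2,000 (20% sampling fraction):
print(round(max_moe_95(400) * fpc(2_000, 400), 4))  # about 0.0438, below 0.049
```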
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-13590
What is the difference between using Venmo and using an online banking app to transfer money?
You didn't use to be able to transfer money from all banking apps. Some were free, some charged a fee, and some didn't permit it at all. For those that did support it, it was a multi-step process where you either needed the recipient's account and routing number or it sent them an email to claim their money. Many banking apps have added Zelle over the last year for person-to-person transfers. Zelle only came into being in 2017, and they've been pushing it this year to compete against Venmo. Even with Zelle it can be a multi-step process for the recipient to actually get their money: they get a text and then have to move the money into their account. Venmo makes it super easy to send money to the same person again. There's no confirmation needed on their side and no separate step to claim the money. It's streamlined, which is the most important feature for mainstream adoption.
[ "In addition, the service offers a reload function, which, when enabled, takes money from a user's linked checking account in $10 increments if their Venmo balance drops too low to cover a purchase. Customers could be subject to fees or other consequences from their bank if they overdraft that account. Card purchases show up in a user's Venmo transaction history, and the card can be canceled from within the app. These features make the card similar to a traditional bank debit card, but adds the ability to directly track spending in-app.\n\nSection::::Product.:Social component.\n", "Users create an account via a mobile app or website and provide basic information and bank account information. Recipients of transactions can be found via phone number, Venmo username, or email.\n", "Each transaction includes a description of what the payment is for in text, emoji, or both. This description is required to complete the transaction, but Venmo does not enforce content requirements (e.g., someone could describe the transaction as \"nothing\"). Venmo recommends emoji when a common expense is entered as the description. Overall, 30% of Venmo transactions include at least one emoji.\n", "Venmo requires its users to set up an account in its system that is separate from an ordinary banking account and is not insured by FDIC or NCUA, which banking accounts are. The Venmo account can be linked to a bank account so that necessary funds will be automatically withdrawn from there. However, funds can only be withdrawn from a Venmo account by first transferring them to a bank account and then withdrawing the money from the bank account (a process that adds an extra step and involves some additional delay and possibly a fee). In contrast, Zelle transfers money directly between bank accounts, so it requires no separate account or extra steps to obtain access to funds. Zelle is also accessible through banking institution websites and apps as well as through the separate Zelle mobile app.\n", "Users have a Venmo balance that is used for their transactions. They can link their bank accounts, debit cards, or credit cards, to their Venmo account, alternatively users can order a Venmo MasterCard and pay through it. Paying with a bank account or debit card is free, but payments via credit card have a 3% fee for each transaction. If a user does not have enough funds in the account when making a transaction, it will automatically withdraw the necessary funds from the registered bank account or card.\n", "Venmo was founded by Andrew Kortina and Iqram Magdon-Ismail, who met as freshman roommates at the University of Pennsylvania. According to Kortina, the duo were initially inspired to create a transaction solution while, in the process of helping start a friend's yogurt shop, they \"realized how horrible traditional point of sales software was\". At a local jazz show, Kortina and Magdon-Ismail conceived the idea of instantly buying an MP3 of the show via text message. Finally, the idea was cemented when Magdon-Ismail forgot his wallet during a trip to visit Kortina. The process of settling their debt was a considerable inconvenience, especially compared to the possibility of mobile phone-based transactions. Shortly after, they began working on a way to send money through mobile phones. Their original prototype sent money through text messages, but they eventually transitioned from text messages to a smartphone app.\n", "Venmo includes three social feeds: a public feed, a friends feed, and a private feed. 
By default, all Venmo transactions are shared publicly. Anyone who opens the app to the public feed, including people who do not themselves use Venmo, can see these publicly shared posts. The privacy settings can be changed so that all posts are either shared only with a user's Venmo contacts, or even kept private. If posts are shared only with contacts, they still appear in a friends feed, whereas private transactions are only visible to the two parties involved in the transaction. If two users involved in a single transaction have differing privacy settings, Venmo applies the more restrictive level. Users can override their overall preference for any individual transaction, including after the transaction has been made.\n", "Starting in January 2018, Venmo began to also offer a more rapid transfer option than its typical 1–3 day transfer service, but Venmo charges a fee for the service, which Zelle-affiliated banks currently do not. The Zelle network itself does not charge a fee to users for money transfers. Banks are allowed to charge a fee for Zelle transfers involving their accounts, but they have generally not chosen to do so.\n", "Regardless of a transaction's privacy, only the two people involved in a transaction can see the transaction's amount. Transactions are persistent within the feed and scroll infinitely. A previous transaction may be difficult to revisit if a user or their contacts have many transactions.\n", "Venmo's social model has attracted attention from researchers. A research group from University of Washington observes that the social feed in Venmo differs from other social networks in that activity is driven by financial transactions. A user could make a trivial transaction to make a post (e.g., sending someone $0.01, or requesting $0.02), but only one participant in their studies reported ever doing this. Further, neither reading the feed nor sharing a transaction memo publicly or with friends is necessary to send or receive money.\n", "On Venmo, people transact with both friends and businesses via the app. Analysis of public transactions identifies a spectrum of use patterns, from regular users who create transactions for a variety of expenses, to niche users who use Venmo with a small cluster of friends to pay for just a few things (e.g., bills among roommates).\n\nSection::::Product.:Security.\n", "Section::::Differences with mobile banking.\n\nThe major difference between mobile banking and mobile payments is the total absenteeism of the bank account number. In mobile banking or Internet banking, money can be transferred only when the account number of the payee is known before-hand. The account of the payee has to be registered with the payer and only then can a fund transfer happen.\n", "Venmo\n\nVenmo is a mobile payment service owned by PayPal. Venmo account holders can transfer funds to others via a mobile phone app; both the sender and receiver have to live in the U.S. Venmo is a type of payment rail. It handled $12 billion in transactions in the first quarter of 2018.\n", "The social networking interaction component allows users to send and request money, and split bills with others, similarly to Venmo in the United States. When the user makes a transaction, the details are posted on the social timeline, and available for other users to see, subject to privacy settings. \n", "A user will use a particular set of communication channels depending on the capabilities of the mobile phone. 
The implementation of the standards will vary depending on the set used.\n\nSection::::Communication channels.:Application based.\n\nMost banks provide a Java application that can be downloaded on a Java-enabled phone which will guide the user through the money transfer process. An SMS sent through a Java application on the mobile device is as secure as an Internet banking transaction, since it can be encrypted between the user and the bank.\n\nSection::::Communication channels.:SMS and IVR.\n", "BULLET::::- In January 2010, Venmo launched as a mobile payment system through SMS, which transformed into a social app where friends can pay each other for minor expenses like a cup of coffee, rent and paying your share of the restaurant bill when you forget your wallet. It is popular with college students, but has some security issues. It can be linked to your bank account, credit/debit card or have a loaded value to limit the amount of loss in case of a security breach. Credit cards and non-major debit cards incur a 3% processing fee.\n", "In Nov 2017 the State Bank of India launched an integrated banking platform in India called YONO offering conventional banking functions but also payment services for things such as online shopping, travel planning, taxi booking or online education.\n\nIn January 2019, the German direct bank N26 overtook Revolut as the most valuable mobile bank in Europe with a valuation of $2.7 billion and 1.5 million users.\n", "Since 2008, cash transfers using Venmo were not instantaneous and could be canceled after an initial transfer is sent. Like traditional wire transfer they can take one to three business days to become final. In January 2018, PayPal rolled out an instant transfer feature on Venmo, allowing users to deposit funds to their debit cards typically within 30 minutes. A 1% fee is deducted from the amount for each transfer, while the standard bank transfer (typically completed within 1–3 business days) is available for no fee.\n\nSection::::Product.\n", "Transactions through mobile banking depend on the features of the mobile banking app provided and typically includes obtaining account balances and lists of latest transactions, electronic bill payments, remote check deposits, P2P payments, and funds transfers between a customer's or another's accounts. Some apps also enable copies of statements to be downloaded and sometimes printed at the customer's premises.\n", "Prior to October 2015, Venmo prohibited merchants from accepting Venmo payments. On January 27, 2016, PayPal announced that Venmo was working with select merchants who would accept Venmo as payment. Initial launch partners included Munchery and Gametime. All merchants that accept PayPal can now accept Venmo. As of May 2018, Venmo's merchant product did not permit \"selling goods or services in person\"; however, research into mobile payment trends among mom-and-pop restaurants in New York City that month revealed a grey market use case whereby some Chinese takeouts and food trucks used personal Venmo QR codes to accept payments from customers. This QR payment behavior was similar to that used via Chinese mobile applications WeChat and Alipay within these same establishments.\n", "Venmo includes social networking interaction; it was created so friends could quickly split bills, whether that is for movies, dinner, rent, tickets, etc. When a user makes a transaction, the transaction details (stripped of the payment amount) are shared on the user's \"news feed\" and to the user's network of friends. 
This mimics that of a social media feed. There is a \"world wide\" Venmo feed, a \"friends only\" feed, and then personal feed. Venmo encourages social interaction on the application through comments using jokes or emojis and/or likes. Early on, Venmo required new users to sign up through Facebook, which made it easy to find peers they wanted to pay and also provided Venmo with free marketing. For users not friends on Facebook, the application allowed the opportunity to search by username and phone number. Profiles are personalized with profile pictures, usernames and Venmo transaction history. The transactions can be made private, but most users keep the default and do not change the privacy settings. Venmo does not have either buyer or seller protection.\n", "In May 2010, the company raised $1.2 million of seed money in a financing round led by RRE Ventures.\n\nIn 2012, the company was acquired by Braintree for $26.2 million.\n\nIn December 2013, PayPal acquired Braintree for $800 million.\n", "Section::::Competition with PayPal's Venmo service.\n\nThe Zelle service's principal competitor is PayPal and its Venmo payment service. Venmo is more popular, based on public awareness, opinion polling, and active engagement with users, but Zelle processes a much larger dollar volume of money transfers. The two services work very similarly from the user's perspective – e.g., both services use email addresses and mobile phone numbers to identify recipients, but Venmo lacks the direct integration with banking institutions that Zelle has, and Zelle money transfers are typically processed more quickly.\n", "Swedish payments company Trustly also enables mobile bank transfers, but is used mainly for business-to-consumer transactions that occur solely online. If an e-tailer integrates with Trustly, its customers can pay directly from their bank account. As opposed to Swish, users don't need to register a Trustly account or download software to pay with it.\n\nThe Danish MobilePay and Norwegian Vipps are also popular in their countries. They use direct and instant bank transfers, but also for users not connected to a participating bank, credit card billing.\n", "Mobile banking before 2010 was most often performed via SMS or the mobile web. Apple's initial success with iPhone and the rapid growth of phones based on Google's Android (operating system) have led to increasing use of special mobile apps, downloaded to the mobile device. With that said, advancements in web technologies such as HTML5, CSS3 and JavaScript have seen more banks launching mobile web based services to complement native applications. These applications are consisted of a web application module in JSP such as J2EE and functions of another module J2ME.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00394
Why does a smaller diameter air duct produce more drag?
More of the air is rubbing against the duct's lining, which isn't moving forward at all. That's drag. The wall area only grows in proportion to the diameter, while the amount of air carried grows with the diameter squared, so in a narrower duct a bigger share of the air is scraping along the wall and less of it is just happily surrounded by other moving air.
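A quick illustrative sketch of the geometry behind this answer: wall perimeter grows with diameter while cross-sectional area grows with diameter squared, so a narrow duct has much more wall per unit of air moved. The diameters are arbitrary example values.

```python
import math

def wall_per_flow_area(diameter):
    """Duct wall perimeter divided by cross-sectional area (per unit duct length)."""
    perimeter = math.pi * diameter
    area = math.pi * diameter**2 / 4
    return perimeter / area          # simplifies to 4 / diameter

small, large = 0.1, 0.3              # duct diameters in metres (arbitrary examples)
print(wall_per_flow_area(small))     # 40.0  -> lots of wall per unit of air
print(wall_per_flow_area(large))     # ~13.3 -> far less rubbing surface per unit of air
```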
[ "Thus, the entry length in turbulent flow is much shorter as compared to laminar one. In most practical engineering applications, this entrance effect becomes insignificant beyond a pipe length of 10 times the diameter and hence it is approximated to be :\n\nformula_8\n\nOther authors give much longer entrance length, e.g. \n\nBULLET::::- Nikuradse recommends 40 D\n\nBULLET::::- Lien et al. recommend 150 D for high Reynolds number turbulent flow.\n\nSection::::Hydrodynamic entrance length.:Entry length for pipes with non-circular cross-sections.\n", "The perimeter of an air column varies directly with column diameter. While the cross-sectional area varies with the square of the diameter, the large column has proportionately fewer peripheries, and therefore less drag. The air column from a diameter fan, therefore, has more than six times as much friction interface per volume of air moved as does the air column from a diameter fan.\n", "Increasing the blade area of the whirligig increases the surface area so more air particles collide with the whirligig. This causes the drag force to reach its maximum value and the whirligig to reach its terminal speed in less time. Conversely the terminal speed is smaller when thin or short blades with a smaller surface area are utilized, resulting in the need for a higher wind speed to start and operate the whirligig.\n", "A good rule-of-thumb is that, for acceptably low and test-conditions-independent aerodynamic interference in a high-Reynolds-number, high-dynamic-pressure wind tunnel, a sting should have a diameter \"d\" not larger than 30% to 50% of model base diameter \"D\" and should have a length \"L\" of at least three model base diameters, e.g. as specified for the AGARD-C calibration model), .\n", "Form drag or pressure drag arises because of the shape of the object. The general size and shape of the body are the most important factors in form drag; bodies with a larger presented cross-section will have a higher drag than thinner bodies; sleek (\"streamlined\") objects have lower form drag. Form drag follows the drag equation, meaning that it increases with velocity, and thus becomes more important for high-speed aircraft.\n", "Fabric ducts requires a minimum of certain range of airflow and static pressure in order for it to work.\n\nSection::::Materials.:PVC low profile ducting.\n", "In the small airways where flow is laminar, resistance is proportional to gas viscosity and is not related to density and so heliox has little effect. The Hagen–Poiseuille equation describes laminar resistance. In the large airways where flow is turbulent, resistance is proportional to density, so heliox has a significant effect.\n", "BULLET::::- The Mistral wind in Southern France increases in speed through the Rhone valley.\n\nBULLET::::- Low-speed wind tunnels can be considered very large Venturi because they take advantage of the Venturi effect to increase velocity and decrease pressure to simulate expected flight conditions.\n", "The peniche itself affects the fluid dynamics around the half-model. It increases the local angle of attack on an inboard wing, while having no influence on an outboard wing. The blocking of the peniche in the flow field leads to further displacement of the flow, which in turn leads to higher flow speeds and local angles of attack. How strong of an effect the peniche has is a function of the angle of attack, with the effect present at all angles.\n", "L* is the length from some arbitrary point where Mach number is Ma. 
At the sonic region, Ma=1. The integration results in the following equation :\n\nformula_29\n\nWhere f- is average friction factor from x=0 to L*. Here D is the diameter of circular duct; in cases where the duct is not of circular cross section D=(4*area)/perimeter.\n\nSection::::Choking due to friction.\n\nChoking of duct is basically reduction of duct mass flow. The flow conditions change if the actual length L is greater than the predicted maximum length L*, and there are two classifications :\n\nSection::::Choking due to friction.:Subsonic inlet.\n", "BULLET::::- The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Subsequently, Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect.\n", "BULLET::::- formula_10 is the density of the fluid in question (water, air, oil etc.),\n\nBULLET::::- formula_11 is the Fanning friction factor,\n\nBULLET::::- formula_12 is the length of the duct,\n\nBULLET::::- formula_13 is the perimeter of the duct,\n\nBULLET::::- formula_14 is the area of the duct,\n\nBULLET::::- formula_15 is the velocity of the fluid.\n\nThe practicalities of mine ventilation led Atkinson to group some of these variables into one all-encompassing term:\n\nBULLET::::- Area and perimeter were incorporated because mine airways are of irregular shape, and both vary along the length of an airway.\n", "Heliox has a similar viscosity to air but a significantly lower density (0.5 g/l versus 1.25 g/l at STP). Flow of gas through the airway comprises laminar flow, transitional flow and turbulent flow. The tendency for each type of flow is described by the Reynolds number. Heliox's low density produces a lower Reynolds number and hence higher probability of laminar flow for any given airway. Laminar flow tends to generate less resistance than turbulent flow.\n", "Several of these explanations use the Bernoulli principle to connect the flow kinematics to the flow-induced pressures. In cases of incorrect (or partially correct) explanations relying on the Bernoulli principle, the errors generally occur in the assumptions on the flow kinematics and how these are produced. It is not the Bernoulli principle itself that is questioned, because this principle is well established (the airflow above the wing \"is\" faster, the question is \"why\" it is faster).\n\nSection::::Misapplications of Bernoulli's principle in common classroom demonstrations.\n", "Where air is flowing in a laminar manner it has less resistance than when it is flowing in a turbulent manner. If flow becomes turbulent, and the pressure difference is increased to maintain flow, this response itself increases resistance. This means that a large increase in pressure difference is required to maintain flow if it becomes turbulent.\n\nWhether flow is laminar or turbulent is complicated, however generally flow within a pipe will be laminar as long as the Reynolds number is less than 2300.\n\nSection::::Determinants of airway resistance.:Specific airway resistance (sR).\n", "The lower ring of louvers (3) convey large masses of air (33) almost directly into the low-pressure end of the vortex. 
The lower ring of louvers (3) are crucial to get high mass flows, because air from them (33) spins more slowly, and thus has lower centripetal forces and a higher pressure at the vortex.\n", "Heliox has a similar viscosity to air but a significantly lower density (0.5 g/l versus 1.2 5g/l at STP). Flow of gas through the airways comprises laminar flow, transitional flow and turbulent flow. The tendency for each type of flow is described by the Reynolds number. Heliox's low density produces a lower Reynolds number and hence higher probability of laminar flow for any given airway. Laminar flow tends to generate less resistance than turbulent flow.\n", "BULLET::::- formula_10 = Length of pipe\n\nBULLET::::- formula_11 = the dynamic viscosity\n\nBULLET::::- formula_6 = the volumetric flow rate (Q is usually used in fluid dynamics, however in respiratory physiology it denotes cardiac output)\n\nBULLET::::- formula_13 = the radius of the pipe\n\nDividing both sides by formula_6 and given the above definition shows:-\n\nWhile the assumptions of the Hagen–Poiseuille equation are not strictly true of the respiratory tract it serves to show that, because of the fourth power, relatively small changes in the radius of the airways causes large changes in airway resistance.\n", "In the small airways where flow is laminar, resistance is proportional to gas viscosity and is not related to density and so heliox has little effect. The Hagen–Poiseuille equation describes laminar resistance. In the large airways where flow is turbulent, resistance is proportional to density, so heliox has a significant effect.\n", "Many types of flow instrumentation, such as flow meters, require a fully developed flow to function properly. Common flow meters, including vortex flow meters and differential-pressure flow meters, require hydrodynamically fully developed flow. Hydraulically fully developed flow is commonly achieved by having long, straight sections of pipe before the flow meter. Alternatively, flow conditioners and straightening devices may be used to produce the desired flow.\n\nSection::::Applications.:Wind tunnels.\n", "When the down column of air from an HVLS fan reaches the floor, the air turns in the horizontal direction away from the column in all directions. The air flowing outward is called the \"horizontal floor jet.\" Since the height of the floor jet is determined by the diameter of the column of air, a larger diameter fan naturally produces a larger air column and thus a higher floor jet.\n\nSmaller high-speed fans of equivalent displacement are incapable of producing the same effect.\n", "For an annular duct, such as the outer channel in a tube-in-tube heat exchanger, the hydraulic diameter can be shown algebraically to reduce to\n\nwhere\n\nFor calculation involving flow in non-circular ducts, the hydraulic diameter can be substituted for the diameter of a circular duct, with reasonable accuracy, if the aspect ratio AR of the duct cross-section remains in the range AR 4.\n\nSection::::Laminar–turbulent transition.\n", "Form drag is caused by movement of the aircraft through the air. This type of drag, also known as air resistance or profile drag varies with the square of speed (see drag equation). For this reason profile drag is more pronounced at higher speeds, forming the right side of the lift/velocity graph's U shape. Profile drag is lowered primarily by streamlining and reducing cross section.\n", "V/STOL tunnels require large cross section area, but only small velocities. 
Since power varies with the cube of velocity, the power required for the operation is also less. An example of a V/STOL tunnel is the NASA Langley 14' x 22' tunnel.\n\nSection::::Classification.:Aeronautical wind tunnels.:Spin tunnels.\n\nAircraft have a tendency to go to spin when they stall. These tunnels are used to study that phenomenon.\n\nSection::::Classification.:Automotive tunnels.\n\nAutomotive wind tunnels fall into two categories:\n\nBULLET::::- External flow tunnels are used to study the external flow through the chassis\n", "Helicopter manufacturers try to reduce this differential effect (that is, aim for more equality of lift along the blade length). This has two main aspects:\n\nBULLET::::1. tapering a blade toward its tip, which reduces its surface area, in turn reducing its lift;\n\nBULLET::::2. twisting the blade (commonly called geometric twist) so that the blade root near the hub presents a higher angle-of-attack, thus higher lift.\n\nWhen the helicopter is travelling forwards with respect to the atmosphere, a further phenomenon comes into play, dissymmetry of lift.\n" ]
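Two of the relations referenced in the passages above are only named, not written out: the hydraulic diameter for non-circular ducts (stated in the text as 4 * area / perimeter) and the Hagen-Poiseuille result that laminar resistance rises with the fourth power of shrinking radius. The sketch below illustrates both; the resistance form R = 8 * mu * L / (pi * r^4) is my assumption of the standard laminar-flow formula rather than something reproduced in the passages, and all numerical values are arbitrary examples.

```python
import math

def hydraulic_diameter(area, perimeter):
    """D_h = 4 * area / perimeter, as stated in the Fanno-flow passage."""
    return 4 * area / perimeter

def poiseuille_resistance(radius, length, viscosity):
    """Laminar resistance R = delta_p / Q = 8 * mu * L / (pi * r**4) (assumed standard form)."""
    return 8 * viscosity * length / (math.pi * radius**4)

# A rectangular duct 0.3 m x 0.1 m behaves roughly like a round duct of this diameter:
print(hydraulic_diameter(0.3 * 0.1, 2 * (0.3 + 0.1)))   # 0.15 m

# Halving the radius multiplies laminar resistance by 2**4 = 16:
mu, L = 1.8e-5, 1.0   # air viscosity (Pa*s) and duct length (m), example values
ratio = poiseuille_resistance(0.05, L, mu) / poiseuille_resistance(0.10, L, mu)
print(ratio)          # 16.0 (up to floating-point rounding)
```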
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04414
Why does hand sanitizer do a better job at removing temporary tattoos than soap and water?
Because it contains mostly alcohol, which is a solvent that dissolves the thin base material that temporary tattoos are made of. Soap and water just scrape it off, which is why it always balls up on hairs if you only use soap.
[ "Permanent Makeup is typically done with some form of tattoo machine. There are machines now that are needle-less which should result in less painful procedures. These needle-free devices are considered safer and more sterile to use than traditional tattoo machines. They are designed to create a more comforting experience during the application process and eliminate the possibility of spreading disease like HIV, hepatitis and other healthcare issues. The needle-less device is also capable of inserting the pigment deeper into the skin than machines that use needles. \n", "Another tattoo alternative is henna-based tattoos, which generally contain no additives. Henna is a plant-derived substance which is painted on the skin, staining it a reddish-orange-to-brown color. Because of the semi-permanent nature of henna, they lack the realistic colors typical of decal temporary tattoos. Due to the time-consuming application process, it is a relatively poor option for children. Dermatological publications report that allergic reactions to natural henna are very rare and the product is generally considered safe for skin application. Serious problems can occur, however, from the use of henna with certain additives. The FDA and medical journals report that painted black henna temporary tattoos are especially dangerous.\n", "Although they have become more popular and usually require a greater investment, airbrush temporary tattoos are less likely to achieve the look of a permanent tattoo, and may not last as long as press-on temporary tattoos. An artist sprays on airbrush tattoos using a stencil with alcohol-based cosmetic inks. Like decal tattoos, airbrush temporary tattoos also are easily removed with rubbing alcohol or baby oil.\n\nSection::::Temporary tattoos.:Types of temporary tattoos.:Henna temporary tattoos.\n", "Certain devices are promoted to allegedly remove toxins from the body. One version involves a foot-bath using a mild electric current, while another involves small adhesive pads applied to the skin (usually the foot). In both cases, the production of an alleged brown \"toxin\" appears after a brief delay. In the case of the foot bath, the \"toxin\" is actually small amounts of rusted iron leaching from the electrodes. The adhesive pads change color due to oxidation of the pads' ingredients in response to the skin's moisture. In both cases, the same color-changes occur irrespective of whether the water or patch even make contact with the skin (they merely require water—thus proving the color-change does not result from any body-detoxification process).\n", "Proper hygiene requires a body modification artist to wash his or her hands before starting to prepare a client for the stencil, between clients, and at any other time when cross contamination can occur. The use of single use gloves is also mandatory and disposed after each stage of tattooing. 
The same gloves should not be used to clean the tattoo station, tattoo the client, and clean the tattoo.\n", "The ingredients in some \"glow\" inks are listed as: (PMMA) Polymethylmethacrylate 97.5% and microspheres of fluorescent dye 2.5% suspended in UV sterilized, distilled water.\n\nSection::::Other tattoo inks.:Removable tattoo ink.\n\nWhile tattoo ink is generally very painful and laborious to remove, tattoo removal being quite involved, a recently introduced ink has been developed to be easier to remove by laser treatments than traditional inks.\n\nSection::::Other tattoo inks.:Black henna.\n", "Typically, black and other darker-colored inks can be removed completely using Q-switched lasers while lighter colors such as yellows and greens are still very difficult to remove. Success can depend on a wide variety of factors including skin color, ink color, and the depth at which the ink was applied.\n", "Tattoo removal\n\nTattoo removal has been performed with various tools since the start of tattooing. While tattoos were once considered permanent, it is now possible to remove them with treatments, fully or partially.\n\nThe \"standard modality for tattoo removal\" is the non-invasive removal of tattoo pigments using Q-switched lasers. Different types of Q-switched lasers are used to target different colors of tattoo ink depending on the specific light absorption spectra of the tattoo pigments. \n", "Section::::Temporary tattoos.:Temporary tattoo safety.:Airbrush tattoo safety.\n\nThe types of airbrush paints manufactured for crafting, creating art or decorating clothing should never be used for tattooing. These paints can be allergenic or toxic.\n\nSection::::Temporary tattoos.:Temporary tattoo safety.:Henna tattoo safety.\n", "Tattoo pigments have specific light absorption spectra. A tattoo laser must be capable of emitting adequate energy within the given absorption spectrum of the pigment to provide an effective treatment. Certain tattoo pigments, such as yellows and fluorescent inks are more challenging to treat than darker blacks and blues, because they have absorption spectra that fall outside or on the edge of the emission spectra available in the tattoo removal laser. Recent pastel coloured inks contain high concentrations of titanium dioxide which is highly reflective. Consequently, such inks are difficult to remove since they reflect a significant amount of the incident light energy out of the skin.\n", "While tattoos are considered permanent, it is sometimes possible to remove them, fully or partially, with laser treatments. Typically, black and some colored inks can be removed more completely than inks of other colors. The expense and pain associated with removing tattoos are typically greater than the expense and pain associated with applying them. Pre-laser tattoo removal methods include dermabrasion, salabrasion (scrubbing the skin with salt), cryosurgery and excision—which is sometimes still used along with skin grafts for larger tattoos. These older methods, however, have been nearly completely replaced by laser removal treatment options.\n\nSection::::Temporary tattoos.\n", "The most common method of tattooing in modern times is the electric tattoo machine, which inserts ink into the skin via a single needle or a group of needles that are soldered onto a bar, which is attached to an oscillating unit. The unit rapidly and repeatedly drives the needles in and out of the skin, usually 80 to 150 times a second. This modern procedure is ordinarily sanitary. 
The needles are single-use needles that come packaged individually. The tattoo artist must wash his or her hands and must also wash the area that will be tattooed. Gloves must be worn at all times and the wound must be wiped frequently with a wet disposable towel of some kind. The equipment must be sterilized in a certified autoclave before and after every use.\n", "Unfortunately the dye systems used to change the wavelength result in significant power reduction such that the use of multiple separate specific wavelength lasers remains the gold standard.\n", "Some wearers decide to cover an unwanted tattoo with a new tattoo. This is commonly known as a cover-up. An artfully done cover-up may render the old tattoo completely invisible, though this will depend largely on the size, style, colors and techniques used on the old tattoo and the skill of the tattoo artist.\n\nCovering up a previous tattoo necessitates darker tones in the new tattoo to effectively hide the older, unwanted piece.\n", "Some tattoo pigments contain metals that could theoretically break down into toxic chemicals in the body when exposed to light. This has not yet been reported in vivo but has been shown in laboratory tests. Laser removal of traumatic tattoos may similarly be complicated depending on the substance of the pigmenting material. In one reported instance, the use of a laser resulted in the ignition of embedded particles of firework debris.\n\nSection::::References.\n\nNotes\n\nFurther reading\n", "In amateur tattooing, such as that practiced in prisons, however, there is an elevated risk of infection. Infections that can theoretically be transmitted by the use of unsterilized tattoo equipment or contaminated ink include surface infections of the skin, fungal infections, some forms of hepatitis, herpes simplex virus, HIV, staph, tetanus, and tuberculosis.\n", "Topical anesthetics are often used by technicians prior to cosmetic tattooing and there is the potential for adverse effects if topical anesthetics are not used safely. In 2013 the International Industry association CosmeticTattoo.org published a detailed position and general safety precautions for the entire industry.\n\nThe causes of a change of colour after cosmetic tattooing are both complex and varied. As discussed in the detailed industry article \"Why Do Cosmetic Tattoos Change Colour\", primarily there are four main areas that have influence over the potential for a cosmetic tattoo to change colour;\n\nBULLET::::1. Factors related to the pigment characteristics\n", "BULLET::::- Dye modules are available for some lasers to convert 532 nm to 650 nm or 585 nm light which allows one laser system to safely and effectively treat multi-color tattoo inks. When dye modules take 532 nm laser wavelength and change it, there is a loss of energy. Treatments with dye packs, while effective for the first few treatments, many not be able to clear these ink colors fully. The role of dye lasers in tattoo removal is discussed in detail in the literature.\n\nPulsewidth or pulse duration is a critical laser parameter. All Q-switched lasers have appropriate pulse durations for tattoo removal.\n", "Tattoo ink is generally permanent. Tattoo removal is difficult, painful, and the degree of success depends on the materials used. Recently developed inks claim to be comparatively easy to remove. 
Unsubstantiated claims have been made that some inks fade over time, yielding a \"semi-permanent tattoo.\"\n\nSection::::Regulations.\n", "There are a number of factors that determine how many treatments will be needed and the level of success one might experience. Age of tattoo, ink density, color and even where the tattoo is located on the body, all play an important role in how many treatments will be needed for complete removal. However, a rarely recognized factor of tattoo removal is the role of the client’s immune response. The normal process of tattoo removal is fragmentation followed by phagocytosis which is then drained away via the lymphatics. Consequently, it’s the inflammation resulting from the actual laser treatment and the natural stimulation of the hosts’s immune response that ultimately results in removal of tattoo ink; thus variations in results are enormous.\n", "Tattoo removal is most commonly performed using lasers that break down the ink particles in the tattoo into smaller particles. Dermal macrophages are part of the immune system, tasked with collecting and digesting cellular debris. In the case of tattoo pigments, macrophages collect ink pigments, but have difficulty breaking them down. Instead, they store the ink pigments. If a macrophage is damaged, it releases its captive ink, which is taken up by other macrophages. This can make it particularly difficult to remove tattoos. When treatments break down ink particles into smaller pieces, macrophages can more easily remove them.\n", "Decal temporary tattoos, when legally sold in the United States, have had their color additives approved by the U.S. Food and Drug Administration (FDA) as cosmetics – the FDA has determined these colorants are safe for \"direct dermal contact\". While the FDA has received some accounts of minor skin irritation, including redness and swelling, from this type of temporary tattoo, the agency has found these symptoms to be \"child specific\" and not significant enough to support warnings to the public. Unapproved pigments, however, which are sometimes used by non-US manufacturers, can provoke allergic reactions in anyone.\n", "Section::::Laser removal.:Factors contributing to the success of laser tattoo removal.\n", "General consensus for care advises against removing the flakes or scab that may form on a new tattoo, and avoiding exposing a new tattoo to the sun for extended periods for at least three weeks; both of these can contribute to fading of the image. It is agreed that a new tattoo needs to be kept clean. Various products may be recommended for application to the skin, ranging from those intended for the treatment of cuts, burns and scrapes, to panthenol, cocoa butter, A&D, hemp, lanolin, or salves. Oil based ointments are almost always recommended for use on very thin layers due to their inability to evaporate and therefore over-hydrate the already perforated skin. Recent scientific studies have demonstrated that wounds that are kept moist heal faster than wounds healing under dry conditions. In recent years specific commercial products have been developed for tattoo aftercare. Although opinions about these products vary, soap and warm water work well to keep a tattoo clean and free from infection. It is advised that fragrance-free soaps and soaps that contain alcohol be used to clean the tattoo to avoid burning or drying out the tattoo too quickly. Try to avoid loofahs and wash clothes can contain bacteria which could get into the tattoo and cause an infection. 
The best way to dry your tattoo is to use a paper towel as regular towels may also contain bacteria, even if they are just washed. \n", "Before the tattooing begins the client is asked to approve the final position of the applied stencil. After approval is given the artist will open new, sterile needle packages in front of the client, and always use new, sterile, or sterile disposable instruments and supplies, and fresh ink for each session (loaded into disposable ink caps which are discarded after each client). Also, all areas which may be touched with contaminated gloves will be wrapped in clear plastic to prevent cross-contamination. Equipment that cannot be autoclaved (such as counter tops, machines, and furniture) will be wiped with an approved disinfectant.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00028
Why does touching a room temperature object stop pain from a burn?
So after you burn your skin, the burned spot and surrounding tissue remain hotter even after you have removed it from the heat source. This will result in the burn being larger than just the spot that contacted the hot surface. First aid for a 1st/2nd degree burn is always to cool the area as quickly as possible. Running the affected area under cold tap water for > 10 minutes helps to cool the surrounding skin and reduce additional damage, as well as reduce pain. Pressing it against the cooler table probably had a similar effect; my guess is that a room temperature metal table would have had an even greater effect, as it would have pulled heat out of the blister even faster. Keep the cold water trick in mind for the future; it will reduce the size of the blister and allow you to heal more quickly. (Just don't use ice, as this can cause further damage to already injured skin.)
[ "BULLET::::- When touching paradoxical objects, one can feel a hole when actually touching a bump. These \"illusory\" objects can be used to create tactile \"virtual objects\".\n\nBULLET::::- The thermal grill illusion occurs when one touches the hand down on an interlaced grid of warm and cool bars and experiences the illusion of burning heat.\n\nBULLET::::- When the thumb and forefinger are slid repeatedly along the edge of a wedge, a rectangular block then handled in the same manner will feel deformed.\n", "Many episodes of pain come from muscle exertion or strain, which creates tension in the muscles and soft tissues. This tension can constrict circulation, sending pain signals to the brain. Heat application eases pain by:\n\nBULLET::::- dilating the blood vessels surrounding the painful area. Increased blood flow provides additional oxygen and nutrients to help heal the damaged muscle tissue.\n\nBULLET::::- stimulating sensation in the skin and therefore decreasing the pain signals being transmitted to the brain\n\nBULLET::::- increasing the flexibility (and decreasing painful stiffness) of soft tissues surrounding the injured area, including muscles and connective tissue.\n", "Church of Scientology founder L. Ron Hubbard said that when one is in pain, \"the energy from a shock will make a standing wave in the body.\" He went on to explain that the purpose of a \"touch assist\" is to \"unlock the standing waves that are small electronic ridges of nervous energy that is not flowing as it should.\" This contradicts medical science's current conception of the nervous system, which holds that nerves transmit pain, and do not store it.\n", "Temperature and pain are thought to be represented as “feelings” of coolness/warmness and pleasantness/unpleasantness in the brain. These sensory and affective characteristics of thermoregulation may motivate certain behavioral responses depending on the state of the body (for example, moving away from a source of heat to a cooler space). Such perturbations in the internal homeostatic environment of an organism are thought to be key aspects of a motivational process giving rise to emotional states, and have been proposed to be represented principally by the insular cortex as feelings. These feelings then influence drives when the anterior cingulate cortex is activated.\n", "Experiments looking at the WDR neurons in animals have shown that a strong tactile stimulus in the peripheral inhibitory field could reduce the response to a painful stimulus to the same extent as a weak tactile stimulus closer to the centre of the receptive field.\n\nThese data show the Gate Control Theory of Pain was correct in the prediction that activation of large tactile afferent fibres inhibit the nociceptive afferent signal being transmitted to the brain.\n\nSection::::Interactions between touch and pain.\n", "BULLET::::9. Sometime during the period 1698-1704, John Locke wrote his book \"Elements of Natural Philosophy\", which was first published in 1720: John Locke with Pierre Des Maizeaux, ed., \"A Collection of Several Pieces of Mr. John Locke, Never Before Printed, Or Not Extant in His Works\" (London, England: R. Francklin, 1720). From p. 224: \"\"Heat\", is a very brisk agitation of the insensible parts of the object, which produces in us that sensation, from whence we denominate the object \"hot\": so what in our sensation is heat, in the object is nothing but motion. 
This appears by the way, whereby heat is produc'd: for we see that the rubbing of a brass-nail upon a board, will make it very hot; and the axle-trees of carts and coaches are often hot, and sometimes to a degree, that it sets them on fire, by rubbing of the nave of the wheel upon it.\"\n", "Diazepam is a GABAA receptor benzodiazepine ligand that is an anxiety modulator. Studies using diazepam with the hot plate test showed that diazepam modified the behavioral structure of the pain response not from pain modulation but rather by reducing anxiety levels.\n\nSection::::Ethics.\n\nThe Ethical Committee of the International Association for the Study of Pain has developed guidelines for the ethical use of this procedure. In the United States, such experiments must be approved by an Institutional Animal Care and Use Committee.\n", "The two basic types of sensation are touch-position and pain-temperature. Touch-position input comes to attention immediately, but pain-temperature input reaches the level of consciousness after a delay; when a person steps on a pin, the awareness of stepping on something is immediate but the pain associated with it is delayed.\n", "The interactions between touch and pain are mostly inhibitory (as is predicted by the Gate Control Theory). Research shows that there both acute and chronic pain perception is influenced by touch, with both psychophysical changes and differences in brain activation.\n\nSection::::Interactions between touch and pain.:Touch and acute pain.\n\nThe intensity of pain reported is consistently reduced in response to touch. This occurs whether the touch is at the same time as the pain, or even if the touch occurs before the pain.\n\nTouch also reduces the activation of cortical areas that respond to painful stimuli.\n", "Adam and Jamie subjected themselves to four painful stimuli – heat and electric current for Jamie, capsaicin (injected under the skin) and cold for Adam – and chose to use cold for their investigations. They then built a chair for test subjects to sit in, with an ice bath at into which they would immerse one hand for as long as they could endure it, and imposed a 3-minute maximum. The following four myths were tested.\n\nSection::::Episode 143 – \"Mythssion Control\".\n\nBULLET::::- Original air date: May 5, 2010\n\nSection::::Episode 143 – \"Mythssion Control\".:Crash Force.\n", "BULLET::::7. \"Of the mechanical origin of heat and cold\" in: Robert Boyle, \"Experiments, Notes, &c. About the Mechanical Origine or Production of Divers Particular Qualities:\" … (London, England: E. Flesher (printer), 1675). At the conclusion of Experiment VI, Boyle notes that if a nail is driven completely into a piece of wood, then further blows with the hammer cause it to become hot as the hammer's force is transformed into random motion of the nail's atoms. From pp. 61-62: \" … the impulse given by the stroke, being unable either to drive the nail further on, or destroy its interness [i.e., entireness, integrity], must be spent in making various vehement and intestine commotion of the parts among themselves, and in such an one we formerly observed the nature of heat to consist.\"\n", "and Gamble, Cincinnati, OH). This heat product is a cloth wrap that houses several small disks made of iron powder, activated charcoal, sodium chloride, and water. When the wrap is removed from its sealed pouch and exposed to oxygen, the disks oxidize, producing an exothermic reaction. 
When this product was applied to the low back muscles, it provided greater pain relief for 24 hours after application when compared to ibuprofen, acetaminophen, and no treatment. When the same product was applied to the wrist, it decreased pain and improved range of motion (ROM) in patients experiencing wrist pain.\n", "When a person touches a hot object and withdraws their hand from it without actively thinking about it, the heat stimulates temperature and pain receptors in the skin, triggering a sensory impulse that travels to the central nervous system. The sensory neuron then synapses with interneurons that connect to motor neurons. Some of these send motor impulses to the flexors that lead to the muscles in the arm to contract, while some motor neurons send inhibitory impulses to the extensors so flexion is not inhibited. This is referred to as reciprocal innervation.\n", "Mirror box therapy produces the illusion of movement and touch in a phantom limb which in turn may cause a reduction in pain.\n", "BULLET::::- In the Stephen King mini-series \"The Langoliers\" (1995) a character says, \"You ever watch Mr. Spock on \"Star Trek\"?\", \"'Cause if you don't shut your cakehole, you bloody idiot, I'll be happy to demonstrate his Vulcan sleeper-hold for you.\"\n\nBULLET::::- In a third season episode of \"My Name Is Earl\" titled \"Early Release\" Darnell incapacitates a prison guard using the nerve pinch. Joy realizes that Darnell had used the same technique on her several nights previous.\n", "Additional research has shown that the experience of pain is shaped by a plethora of contextual factors, including vision. Researchers have found that when a subject views the area of their body that is being stimulated, the subject will report a lowered amount of perceived pain. For example, one research study used a heat stimulation on their subjects' hands. When the subject was directed to look at their hand when the painful heat stimulus was applied, the subject experienced an analgesic effect and reported a higher temperature pain threshold. Additionally, when the view of their hand was increased, the analgesic effect also increased and vice versa. This research demonstrated how the perception of pain relies on visual input.\n", "According to the laws of thermodynamics, all particles of matter are in constant random motion as long as the temperature is above absolute zero. Thus the molecules and atoms which make up the human body are vibrating, colliding, and moving. This motion can be detected as temperature; higher temperatures, which represent greater kinetic energy in the particles, feel warm to humans who sense the thermal energy transferring from the object being touched to their nerves. Similarly, when lower temperature objects are touched, the senses perceive the transfer of heat away from the body as feeling cold.\n", "Section::::Origin and foundations.\n", "Although pain is considered to be aversive and unpleasant and is therefore usually avoided, a meta-analysis which summarized and evaluated numerous studies from various psychological disciplines, found a reduction in negative affect. Across studies, participants that were subjected to acute physical pain in the laboratory subsequently reported feeling better than those in non-painful control conditions, a finding which was also reflected in physiological parameters. 
A potential mechanism to explain this effect is provided by the opponent-process theory.\n\nSection::::Theory.\n\nSection::::Theory.:Historical theories.\n", "BULLET::::- In \"central sensitization,\" nociceptive neurons in the dorsal horns of the spinal cord become sensitized by peripheral tissue damage or inflammation. This type of sensitization has been suggested as a possible causal mechanism for chronic pain conditions. The changes of central sensitization occur after repeated trials to pain. Research from animals has consistently shown that when a trial is repeatedly exposed to a painful stimulus, the animal’s pain threshold will change and result in a stronger pain response. Researchers believe that there are parallels that can be drawn between these animal trials and persistent pain in people. For example, after a back surgery that removed a herniated disc from causing a pinched nerve, the patient may still continue to “feel” pain. Also, newborns who are circumcised without anesthesia have shown tendencies to react more greatly to future injections, vaccinations, and other similar procedures. The responses of these children are an increase in crying and a greater hemodynamic response (tachycardia and tachypnea).\n", "Alfred Goldscheider (1884) confirmed the existence of distinct heat and cold sensors, by evoking heat and cold sensations using a fine needle to penetrate to and electrically stimulate different nerve trunks, bypassing their receptors. Though he failed to find specific pain sensitive spots on the skin, Goldscheider concluded in 1895 that the available evidence supported pain specificity, and held the view until a series of experiments were conducted in 1889 by Bernhard Naunyn. Naunyn had rapidly (60–600 times/second) prodded the skin of tabes dorsalis patients, below their touch threshold (e.g., with a hair), and in 6–20 seconds produced unbearable pain. He obtained similar results using other stimuli including electricity to produce rapid, sub-threshold stimulation, and concluded pain is the product of summation. In 1894 Goldscheider extended the intensive theory, proposing that each tactile nerve fiber can evoke three distinct qualities of sensation – tickle, touch and pain – the quality depending on the intensity of stimulation; and extended Naunyn's summation idea, proposing that, over time, activity from peripheral fibers may accumulate in the dorsal horn of the spinal cord, and \"spill over\" from the peripheral fiber to a pain-signalling spinal cord fiber once a threshold of activity has been crossed. The British psychologist, Edward Titchener, pronounced in his 1896 textbook, \"excessive stimulation of any sense organ or direct injury to any sensory nerve occasions the common sensation of pain.\"\n", "Further research is needed to determine if balnotherapy for osteoarthritis (mineral baths or spa treatments) improves a person's quality of life or ability to function. The use of ice or cold packs may be beneficial; however, further research is needed. 
There is no evidence of benefit from placing hot packs on joints.\n\nThere is low quality evidence that therapeutic ultrasound may be beneficial for people with osteoarthritis of the knee; however, further research is needed to confirm and determine the degree and significance of this potential benefit.\n", "According to Fourier's law, the heat flow between the bodies is found by the relation:\n\nwhere formula_2 is the heat flow, formula_3 is the thermal conductivity, formula_4 is the cross sectional area and formula_5 is the temperature gradient in the direction of flow.\n\nFrom considerations of energy conservation, the heat flow between the two bodies in contact, bodies A and B, is found as:\n", "Otherwise, metallic bonding can be very strong, even in molten metals, such as Gallium. Even though gallium will melt from the heat of one's hand just above room temperature, its boiling point is not far from that of copper. Molten gallium is, therefore, a very nonvolatile liquid thanks to its strong metallic bonding.\n", "When the two bodies come in contact, surface deformation may occur on both bodies. This deformation may either be plastic or elastic, depending on the material properties and the contact pressure. When a surface undergoes plastic deformation, contact resistance is lowered, since the deformation causes the actual contact area to increase\n\nSection::::Factors influencing contact conductance.:Surface cleanliness.\n\nThe presence of dust particles, acids, etc., can also influence the contact conductance.\n\nSection::::Measurement of thermal contact conductance.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03871
How do movie studios actually get paid for their movies in theaters? Do movie theaters send them a check every month or something?
Movie theaters and studios have a revenue share deal. That is, the studio gets a portion of the ticket sales, and the movie theater gets a portion of the ticket sales. It starts by favoring the studio. For example, week 1 it may be 90/10 (90% to the studio, 10% to the theater), week 2 may be 80/20, and so on. The theaters get all the money from concessions though, which is where their margin is much bigger. > Do movie theaters send them a check every month or something? Not an individual theater, but the company that owns the theater (and most theaters are owned by giant companies with 100s+ of theaters) will add up all that it owes each studio at the end of the month and send them a check or wire transfer. So, to put the example together: the finance department at Regal Theaters' HQ in Tennessee (they own 500 theaters) will add up all their ticket sales for the month, figure out how much of that is owed to each studio, and cut each studio a check.
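To make the arithmetic concrete, here is a small illustrative sketch in Python of the sliding-scale split described above. The weekly percentages, ticket totals, and theater count are made-up example numbers, not actual contract terms.

```python
# Illustrative sketch of a sliding-scale box office split.
# All percentages, revenues, and the theater count are made-up examples.

weekly_studio_share   = [0.90, 0.80, 0.70, 0.60]                 # studio's cut, weeks 1-4
weekly_ticket_revenue = [50_000.0, 30_000.0, 15_000.0, 8_000.0]  # one theater, USD

studio_total = 0.0
theater_total = 0.0
for share, revenue in zip(weekly_studio_share, weekly_ticket_revenue):
    studio_total  += revenue * share        # owed to the studio
    theater_total += revenue * (1 - share)  # kept by the theater (plus all concessions)

# At month's end the chain, not each individual theater, sends one payment per studio.
# For simplicity, assume every theater in the chain sold the same amount.
num_theaters = 500
monthly_payment_to_studio = studio_total * num_theaters

print(f"One theater owes the studio:  ${studio_total:,.2f}")
print(f"One theater keeps (tickets):  ${theater_total:,.2f}")
print(f"Chain-wide check to studio:   ${monthly_payment_to_studio:,.2f}")
```

In reality each theater's sales differ and the split terms vary by film and contract; the sketch only shows the shape of the calculation.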
[ "At one time, Hollywood Video was headquartered in Beaverton, Oregon, in an office building. In 1996, Hollywood moved its employees out of the building two years into its five-year lease. In 1996, Poorman-Douglas Corp agreed to occupy all of the space in the Beaverton building, relieving Hollywood of extra rent payments.\n", "A number of governments run programs to subsidise the cost of producing films. For instance, until it was abolished in March 2011, in the United Kingdom the UK Film Council provided National Lottery funding to producers, as long as certain conditions were met. Many of the Council's functions have now been taken over by the British Film Institute. States such as Georgia, Ohio, Louisiana, New York, Connecticut, Oklahoma, Pennsylvania, Utah, and New Mexico, will provide a subsidy or tax credit provided all or part of a film is filmed in that state.\n", "Section::::Theatrical distribution.\n\nIf a distributor is working with a theatrical exhibitor, the distributor secures a written contract stipulating the amount of the gross ticket sales the exhibitor will be allowed to retain (usually a percentage of the gross). The distributor collects the amount due, audits the exhibitor's ticket sales as necessary to ensure the gross reported by the exhibitor is accurate, secures the distributor's share of these proceeds, surrenders the exhibitor's portion to it, and transmits the remainder to the production company (or to any other [intermediary], such as a film release agent). \n", "Section::::Hollywood and politics.:Political donations.\n\nToday, donations from Hollywood help to fund federal politics. On February 20, 2007, for example, Democratic then-presidential candidate Barack Obama had a $2,300-a-plate Hollywood gala, being hosted by DreamWorks founders David Geffen, Jeffrey Katzenberg, and Steven Spielberg at the Beverly Hilton.\n\nSection::::Spread to world markets.\n", "BULLET::::3. \"Minimum Guarantee + Royalty\" – Here, the exhibitor pays the distributor a minimum lump sum irrespective of the box office performance of the film. Rental is not chargeable per show. Any surplus after deducting tax and show rental is shared in a pre-set ratio (1:2) between the distributor and exhibitor typically.\n", "BULLET::::- Refunds: Most cinema companies issue refunds if there is a technical fault such as a power outage that stops people from seeing a movie. Refunds may be offered during the initial 30 minutes of the screening. The \"New York Times\" reported that some audience members walked out of Terrence Malick's film \"Tree of Life\" and asked for refunds. At AMC theaters, \"...patrons who sat through the entire film and then decided they wanted their money back were out of luck, as AMC's policy is to only offer refunds 30 minutes into a screening. The same goes for Landmark, an independent movie chain... 
whose policy states, \"If a film is not what is expected… and the feature is viewed less than 30 minutes a refund can be processed for you at the box office.\"\n", "Three main factors in Hollywood accounting reduce the reported profit of a movie, and all have to do with the calculation of overhead:\n\nBULLET::::- Production overhead: Studios, on average, calculate production overhead by using a figure around 15% of total production costs.\n\nBULLET::::- Distribution overhead: Film distributors typically keep 30% of what they receive from movie theaters (\"gross rentals\").\n\nBULLET::::- Marketing overhead: To determine this number, studios usually choose about 10% of all advertising costs.\n", "BULLET::::- Cast: While the bulk of the cast usually gets paid by the Actors Guild standard rate of about 2,300 US$ per week, famous and bankable film stars can demand fees up to $30 million per film, plus perks (trailer, entourage, etc.) and possible gross participation. Sometimes an actor will accept a minimal fee in exchange for a more lucrative share of the profits. Union extras are paid around $130 per day (plus extra for overtime or if they provide their own wardrobe), but on a low-budget film non-union extras are paid less, sometimes nothing at all.\n", "Distributors typically enter into one of the two types of film booking contracts. The most common is the aggregate deal where total box office revenue that a given film generates is split by a pre-determined mutually-agreed percentage between distributor and movie theater. The other method is the sliding scale deal, where the percentage of box office revenue taken by theaters declines each week of a given film's run. The sliding scale actually has two pieces that starts with a minimum amount of money that theater is to keep—often called “the house nut”—after which the sliding scale kicks in for revenue generated above the house nut. However, this sliding scale method is falling out of use. Whatever the method, box office revenue is usually shared roughly 50/50 between film distributors and theaters.\n", "When a film is initially produced, a feature film is often shown to audiences in a movie theater. Typically, one film is the featured presentation (or feature film). Before the 1970s, there were \"double features\"; typically, a high-quality \"\"A\" picture\" rented by an independent theater for a lump sum, and a lower-quality \"\"B\" picture\" rented for a percentage of the gross receipts. Today, the bulk of the material shown before the feature film consists of previews for upcoming movies (also known as trailers) and paid advertisements.\n\nSection::::History.\n", "Section::::Pricing and admission.:Revenue.\n", "In the movie \"Snakes on a Plane\", reptile wrangler Jules Sylvester had brought over 27 types of snakes on the movie set. Snakes get tired after an hour of work, and can become over stressed, causing a snake not to eat for a week. It is for this reason that particular snakes are leased for an hour at a time.\n\nSection::::Ethics.:Animal Safety.\n", "In the United States, many movie theater chains sell discounted passes, which can be exchanged for tickets to regular showings. These passes are traditionally sold in bulk to institutional customers and also to the general public at Bulktix.com. Some passes provide substantial discounts from the price of regular admission, especially if they carry restrictions. 
Common restrictions include a waiting period after a movie's release before the pass can be exchanged for a ticket or specific theaters where a pass is ineligible for admission.\n", "Non-theatrical distribution includes the airlines and film societies. Non-theatrical distribution is generally handled by companies that specialise in this market, of which Motion Picture Licensing Company (MPLC) and Filmbankmedia are the two largest:\n\nMotion Picture Licensing Company\n\nFilmbankmedia\n", "BULLET::::- Ask above-the-line talent to defer their salaries. In exchange for dropping their large upfront salaries, actors, directors, and producers can receive a large share of the film's gross profits. This has the disadvantage of cutting the financier's eventual takings. It has the further disadvantage of ambiguity. In the case of net profit participation instead of gross profit participation, disagreements due to Hollywood accounting methods can lead to audits and litigation, as happened between Peter Jackson and New Line Cinema, after New Line claimed \"The Lord of the Rings\" film trilogy, which grossed over 2 billion USD, failed to make any profit and thus denied payments to actors, the Tolkien estate, and Jackson.\n", "For this service the talent agency negotiates a packaging fee. Instead of collecting the usual 10% fee from individual clients, the agency receives the equivalent of 5% of what the studio or network pays the production company; 5% of half of any profit the production company earns; and 15% of adjusted gross (syndication revenue minus costs the network does not pay).\n", "Guarantee (filmmaking)\n\nIn filmmaking, a guarantee, or informally a \"pay-or-play\" contract, is a term in a contract of an actor, director, or other participant that guarantees remuneration if the participant is released from the contract without being responsible.\n\nStudios are reluctant to agree to guarantees but accept them as part of the deal for signing major talent. They also have the advantage of enabling a studio to remove a participant under such a contract, with few legal complications.\n", "Section::::Video series.:\"What's the Damage?\".\n\n\"What's the Damage\" is a video series where \"CinemaSins\" counts the actual cost of things damaged in a movie, with the prices coinciding with their worth at the time of release.\n\nSection::::Podcast.\n", "BULLET::::2. \"Fixed Hire\" – Here, the exhibitor pays the distributor a maximum lump sum irrespective of the box office performance of the film. Rental is not chargeable per show. Any surplus after deducting tax is retained by the exhibitor. Effectively, the exhibitor becomes a \"distributor\" in the eyes of the market. So, the entire risk of box office performance of the film remains with the exhibitor.\n", "Prior to A Paying ghost's public release, Lade Bros. Films PVT LTD hosted a private screenings on 28 May 2015 at cities like Mumbai, Aurangabad, Nagpur, Kolhapur, Pune and some other cities. Lade Bros. Films then showed A Paying Ghost to some of the Marathi & Bollywood industry's filmmakers and actors in a first-look screening at the Fun Republic Theatre on 29 May 2015. On the following day, the film was screened at 200 screens.\n\nSection::::Releases.:Overseas.\n", "In Canada, the total operating revenue in the movie theater industry was $1.7 billion in 2012, an 8.4% increase from 2010. This increase was mainly the result of growth in box office and concession revenue. Combined, these accounted for 91.9% of total industry operating revenue. 
In the US, the \"...number of tickets sold fell nearly 11% between 2004 and 2013, according to the report, while box office revenue increased 17%\" due to increased ticket prices.\n\nSection::::Pricing and admission.:Revenue.:New forms of competition.\n", "The Supreme Court eventually ruled that the major studios ownership of theaters and film distribution was a violation of the Sherman Antitrust Act. As a result, the studios began to release actors and technical staff from their contracts with the studios. This changed the paradigm of film making by the major Hollywood studios, as each could have an entirely different cast and creative team.\n", "This practice is most common with blockbuster movies. Muvico Theaters, Regal Entertainment Group, Pacific Theatres and AMC Theatres are some theaters that interlock films.\n\nSection::::Presentation.:Live broadcasting to movie theaters.\n\nSometimes movie theaters provide digital projection of a live broadcast of an opera, concert, or other performance or event. For example, there are regular live broadcasts to movie theaters of Metropolitan Opera performances, with additionally limited repeat showings. Admission prices are often more than twice the regular movie theater admission prices.\n\nSection::::Pricing and admission.\n", "BULLET::::1. \"Theatre Hire\" – Here, the exhibitor pays the distributor the entire box office collection after deducting tax and show rentals. So, the entire risk of box office performance of the film remains with the distributor. This is the most common channel for low-budget films, casting rank newcomers, with unproven track record. In Chennai, a moderate theater with AC and DTS can fetch around 1 lakh as weekly rent\n", "Nobody discovered us - we discovered ourselves. We didn't come in to this business as paupers and we won't go out of it as paupers ... It's like this- we're honest and our door is open to everybody. We've got no overhead - our overhead begins when we start shooting and ends the day we put the film in the can. That's the way we do business and we're not going to stop until we get an Academy Award and land one of our pictures in the Radio City Music Hall.\n" ]
[]
[]
[ "normal" ]
[ "Movie theaters send studios a check every month." ]
[ "false presupposition", "normal" ]
[ "The company that owns the theater does the finances for all the theaters and makes the payment to the studio." ]
2018-03870
how does a semi-truck/tractor trailer run into an overpass due to height issues?
What happens is that each time the road is repaved, the gap between the road and the bridge gets a few inches smaller. But the city/state/whoever is in charge of the road rarely updates the bridge height sign & city maps to reflect the new, lower clearance. Semi drivers usually do know the exact height of their truck & load, and they commonly look up the bridge heights along their path to make sure they will clear before they head out. But when the posted height isn't the actual height, that doesn't do them any good.
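As a tiny illustration of how this plays out (all numbers made up), a driver's pre-trip check against the posted clearance can pass even though the bridge no longer actually clears the truck:

```python
# Illustrative sketch: posted clearance vs. actual clearance after repaving.
# All numbers are made-up examples.

def feet_inches(feet, inches):
    """Convert feet + inches to total inches."""
    return feet * 12 + inches

posted_clearance = feet_inches(13, 6)  # what the sign and route maps still say
truck_height     = feet_inches(13, 4)  # the driver knows this exactly
overlay_inches   = [2, 2]              # two repaving overlays since the sign went up

actual_clearance = posted_clearance - sum(overlay_inches)

plan_says_ok = truck_height < posted_clearance   # the pre-trip check uses the posted number
really_fits  = truck_height < actual_clearance   # the bridge only offers the actual gap

print(f"Posted: {posted_clearance} in, actual: {actual_clearance} in, truck: {truck_height} in")
print(f"Pre-trip check passes: {plan_says_ok}; truck actually fits: {really_fits}")
```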
[ "On July 22, 2019, at approximately 1:15 pm, a 2005 Peterbilt tractor trailer driven by Michael Dodds and loaded with dry beans attempted to cross over the restricted-weight bridge. The bridge collapsed, and the trailer became hung up on the west abutment. The bridge was rated for gross weight, with restrictions marked. Dodds's truck's weight was just over 43 tons, or . An overload citation of $11,400 was issued. Dodds was uninjured. The estimated replacement cost of the bridge is between $800,000.00 and $1,000,000.\n", "Section::::Hazards.\n\nOversize loads present a hazard to roadway structures as well as to road traffic. Because they exceed design clearances, there is a risk that such vehicles can hit bridges and other overhead structures. Over-height vehicle impacts are a frequent cause of damage to bridges, and truss bridges are particularly vulnerable, due to having critical support members over the roadway. An over-height load struck the overhead beams on the I-5 Skagit River bridge in 2013, which caused the bridge to collapse.\n\nSection::::Licensing.\n", "There is no federal height limit, and states may set their own limits which range from (mostly on the east coast) to (west coast)., As a result, the majority of trucks are somewhere between and high. Truck drivers are responsible for checking bridge height clearances (usually indicated by a warning sign) before passing underneath an overpass or entering a tunnel. Not having enough vertical clearance can result in a \"top out\" or \"bridge hit,\" causing considerable traffic delays and costly repairs for the bridge or tunnel involved.\n", "Since off-road versions do not have to drive on roads at highway speeds, a typical top speed is just . It is rare for these vehicles to be on highways, so it was very unusual when a pedestrian was accidentally struck and dragged by a yard truck at an intersection in Bellevue, Washington, in February 2014.\n", "The problem is complicated by the location of Peabody Street, which runs parallel to the tracks, and intersects Gregson, just before the bridge. Not all trucks traveling on Gregson will continue under the bridge. Some large trucks must turn right onto Peabody to make their deliveries. Over-height trucks are allowed on Gregson, as long as they turn just before the bridge.\n\nSection::::Official actions.:New traffic light.\n", "Since this viaduct was a well-known traffic bottleneck (even more so for transport trucks, since the viaduct was so low, it would peel the roof off their trailers), and would flood with around a foot of water from even a light rain, that it was completely closed, torn down, and rebuilt in August 1998, and finished 2 weeks ahead of schedule, and 2 million dollars under budget. The new underpass is built of concrete, is four lanes wide, and is designed to handle the largest of transport trucks.\n\nSection::::Other tornadoes.\n", "Joshua Harman, a Virginia guardrail engineer, was whistleblower in a federal suit which accused Trinity of failing to notify the Federal Highway Administration (FHWA) of a change of size of the end piece from five to four inches. The Federal Highway Administration requires that changes be reported immediately. The change from five inches to four allegedly saved the company $2 per end terminal. A federal jury found Trinity guilty of fraud by not reporting a change of one inch made to the ET-Plus end terminal. 
The lawsuit resulted in a fraud verdict $175 million, which would be tripled.\n", "In effect, the formula reduces the legal weight limit for shorter trucks with fewer axles (see table below). For example, a three-axle dump truck would have a gross weight limit of , instead of , which is the standard weight limit for five-axle tractor-trailer. FHWA regulation §658.17 states: \"The maximum gross vehicle weight shall be except where lower gross vehicle weight is dictated by the bridge formula.\"\n\nSection::::Bridge collapse.\n", "In special cases involving unusually overweight trucks (which require special permits), not observing a bridge weight limit can lead to disastrous consequences. Fifteen days after the collapse of the Minneapolis bridge, a heavy truck collapsed a small bridge in Oakville, Washington.\n\nSection::::Formula law.\n", "Maximum width of any vehicle is and a height of . In the past few years, allowance has been made by several states to allow certain designs of heavy vehicles up to high but they are also restricted to designated routes. In effect, a 4.6 meter high B-double will have to follow two sets of rules: they may access only those roads that are permitted for B-doubles \"and\" for 4.6 meter high vehicles.\n", "Counterweight rigging systems use either tracked or wire-guided arbor guide systems. The tracks or wire guides limit lateral movement of the arbors during arbor travel. Wire-guided systems have lower capacities and are not in common use.\n\nIn addition to guiding the arbors, a tracked counterweight system is provided with bump stops at arbor high and low trim that establish the limits of an arbor's travel.\n", "The payphone then rings, bringing a report from the county engineer that the bridge is safe, and can be crossed. The bus driver asks the troopers, \"Are you sure? I don't like that old bridge. She sways in the wind and is not a suspension.\" The troopers tell him that if the county engineer says she’s safe, she’s safe. \n", "Gregson Street is a one-way street going southbound, so the bridge is only hit from the north side. Despite numerous signs and warning devices, a truck crashes into the bridge on average at least once a month. Most crashes involve rental trucks, even though rental agencies warn renters about the under-height bridges in the area. \n\n, there have been no deaths and only three minor injuries at the bridge, leading officials to concentrate on more urgent safety issues.\n\nSection::::Official actions.\n", "Section::::Swept into Morrison Bridge.\n", "There are some truss bridges like Permanent Bridge where the deck passes \"through\" the arch. Since the deck does not lie \"on top\" of the load-bearing arch, but is suspended a bit measured from the top of the arch, those types are excluded. Eads Bridge was not excluded because the existence of such a lower deck does not change the appearance of the bridge much with respect of the arch.\n", "BULLET::::- On September 11, 2010, around 2:30 a.m., a Toronto-bound M34 double-decker coach missed an exit to the William F. Walsh Regional Transportation Center in Syracuse, NY, and hit a railway overpass carrying the St. Lawrence Subdivision along NY Route 370 farther away. Four passengers were killed, all in the front of the upper deck, which was crushed into the lower deck in the crash, and 17 others were injured.\n", "Some tied-arch bridges only tie a segment of the \"main arch\" directly and prolong the strengthened chord to tie to the top ends of \"auxiliary (half-)arches\". 
The latter usually support the deck from below and join their bottom feet to those of the main arch(es). The supporting piers at this point may be slender, because the outward-directed horizontal forces of main and auxiliary arch ends counterbalance. The whole structure is \"self-anchored\". Like the simple case it exclusively places vertical loads on all ground-bound supports.\n", "The collapse was caused by a southbound semi-trailer truck from Canada hauling an oversize load to Vancouver, Washington, directly damaging sway struts and, indirectly, the compression chords in the overhead steel frame (trusswork) on the northernmost span of the bridge. The vertical clearance from the roadway to the upper arched beam in the outer lane is , and all trucks with oversize loads are expected to travel in the inside lane where the clearance is around . The oversize truck instead entered the bridge in the outer lane, while a second semi-truck and a BMW were passing it in the inner lane. The oversize truck had received a State oversize permit for a wide and tall load, for a height of , and after the collapse a \"dented upper corner and a scrape along the upper side [were] visible on the 'oversize load' equipment casing being hauled on the truck.\" The National Transportation Safety Board (NTSB) measured the truck's height, after the crash, to be . A pilot car was hired to ensure the load could pass safely. The pilot car never signalled the truck driver that there would be a problem crossing the Skagit bridge and did not warn the trucker to use an inside lane.\n", "BULLET::::- At the time of the collapse of the bridge, approximately 300 tons of construction equipment was located near several of the under-designed gusset plates.\n\nBULLET::::- The bridge was completed in 1967, but in 1977 and 1998, a median barrier, larger outside walls, and a thicker concrete deck were added to the bridge, causing additional loading on the already under-designed gusset plates.\n\nBULLET::::- The temperature on the day of the collapse also could have introduced additional expansion stresses on the gusset plates, since the bearings on the bridge were partially \"frozen\" (due to corrosion, not temperature) limiting their effectiveness.\n", "A KT gusset plate connects several members together through one gusset plate. The gusset plate is welded to a beam, and then two or three columns, beams, or truss chord are connect to the other side of the gusset plate through bolts or welds.\n\nA uniform force bracing connection connects a beam, column, and one other member. The gusset plate is bolted to the column and welded to the beam. The connection of the last remaining member can be through either bolts or welds.\n\nSection::::Notable failures.\n\nSection::::Notable failures.:Steel bridges.\n", "BULLET::::- Powered roof supports – to provide an open work space shielded against mine roof collapse, roof supports use powerful hydraulic cylinders to elevate and hold strong, thick steel plates in a horizontal position to protect the longwall shearer and armored face conveyor. The supports advance with the longwall shearer and armored face conveyor units, resulting in controlled roof falls behind the supports. A longwall face may range up to 400 meters in length.\n\nBULLET::::- Armored face conveyors – material handling conveyors that transport material cut by the shearer away from the longwall face.\n", "On November 23, 2015, the Indian Road overpass was damaged by an flatbed trailer carrying an over-height load. 
The eastbound lanes of the highway were closed for 60 hours, while the girders damaged in the impact had to be removed. The overpass was repaired by May 2016 at a cost of $554,000 (overall damage of $3 million). The trucking company, under the Highway Traffic Act, was held liable and had to compensate the Ministry of Transportation of Ontario for the repair costs.\n\nSection::::External links.\n\nBULLET::::- Highway 402 @ Asphaltplanet.ca\n\nBULLET::::- Video of the entire route of Highway 402\n", "BULLET::::- The Interstate H-3 viaducts through the Ko'olau Mountains, Oahu, Hawaii (CIP Segmental)\n\nBULLET::::- The new Pennsylvania Turnpike (I-76) bridges over the Susquehanna River south of Harrisburg, Pennsylvania (precast)\n\nBULLET::::- The Eastern span replacement of the San Francisco-Oakland Bay Bridge viaduct (precast)\n\nBULLET::::- The Benicia-Martinez Bridge (northbound span) between Benicia and Martinez, CA (CIP Segmental)\n\nBULLET::::- The Four Bears Bridge over the Missouri River in North Dakota utilizes precast concrete segments, erected with the balanced cantilever method (precast)\n\nBULLET::::- The High Five Interchange connecting US-75 and I-635 in Dallas, TX (precast)\n", "The Georgia DOT found that failure of the same epoxy at fault for the ceiling collapse was also to blame for the 2011 fall of a fenced and lighted covered-walkway structure attached to the south side of the relatively new 17th Street Bridge, which links Atlantic Station to Midtown Atlanta over Interstate 75/85. No injuries occurred in that incident, as the collapse was in the overnight hours, with very little traffic on the freeway.\n\nSection::::See also.\n\nBULLET::::- Ted Williams Tunnel\n\nBULLET::::- Sasago Tunnel — Japanese tunnel where a similar ceiling collapse occurred in 2012\n", "A new type of arbor was introduced by Thern Stage Equipment in 2010. It is referred to as a front loading counterweight arbor. This arbor has shelves and a gate to secure the counterweights in the arbor. Spreader plates are not required with the front loading arbor. The arbor counterweights are loaded from the front, rather than from the sides.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00890
Why do bone fractures create massive amounts of pain? Shouldn't we still go on like nothing ever happened?
Bones are living tissue. They have blood running through them. They have nerves and pain receptors in them. Not only that, but if they get broken, they may have sharp fragments that will poke through softer tissues like muscle and skin. Why wouldn't that be painful?
[ "In normal bone, fractures occur when there is significant force applied, or repetitive trauma over a long time. Fractures can also occur when a bone is weakened, such as with osteoporosis, or when there is a structural problem, such as when the bone remodels excessively (such as Paget's disease) or is the site of the growth of cancer. Common fractures include wrist fractures and hip fractures, associated with osteoporosis, vertebral fractures associated with high-energy trauma and cancer, and fractures of long-bones. Not all fractures are painful. When serious, depending on the fractures type and location, complications may include flail chest, compartment syndromes or fat embolism.\n", "Fractures and their underlying causes can be investigated by X-rays, CT scans and MRIs. Fractures are described by their location and shape, and several classification systems exist, depending on the location of the fracture. A common long bone fracture in children is a Salter–Harris fracture. When fractures are managed, pain relief is often given, and the fractured area is often immobilised. This is to promote bone healing. In addition, surgical measures such as internal fixation may be used. Because of the immobilisation, people with fractures are often advised to undergo rehabilitation.\n\nSection::::Clinical significance.:Tumors.\n", "The human skeletal system is a complex organ in constant equilibrium with the rest of the body. In addition to support and structure of the body, bone is the major reservoir for many minerals and compounds essential for maintaining a healthy pH balance. The deterioration of the body with age renders the elderly particularly susceptible to and affected by poor bone health. Illnesses like osteoporosis, characterized by weakening of the bone’s structural matrix, increases the risk of hip-fractures and other life-changing secondary symptoms. In 2010, over 258,000 people aged 65 and older were admitted to the hospital for hip fractures. Incidence of hip fractures is expected to rise by 12% in America, with a projected 289,000 admissions in the year 2030. Other sources estimate up to 1.5 million Americans will have an osteoporotic-related fracture each year. The cost of treating these people is also enormous, in 1991 Medicare spent an estimated $2.9 billion for treatment and out-patient care of hip fractures, this number can only be expected to rise.\n", "Pain caused by cancer within bones is one of the most serious forms of pain. Because of its severity and uniqueness with respect to other forms of pain, it is extensively researched. According to studies of bone cancer in mouse femur models, it has been determined that bone pain related to cancer occurs as a result of destruction of bone tissue. Chemical changes that occur within the spinal cord as a result of bone destruction give further insight into the mechanism of bone pain.\n", "Section::::Treatment.\n\nThe treatment of osteopenia is controversial. Currently, candidates for therapy include those at the highest risk of osteoporotic bone fracture based on bone mineral density and clinical risk factors. As of 2008, recommendations from the US National Osteoporosis Foundation (NOF) are based on risk assessments from the World Health Organization (WHO) Fracture Risk Assessment Tool (FRAX). According to these recommendations, consideration of therapy should be made for postmenopausal women, and men older than 50 years of age, if any one of the following is present:\n\nBULLET::::1. 
Prior hip or vertebral fracture\n", "Section::::Mechanical stress and activity indicators.:Injury and workload.\n\nFractures to bones during or after excavation will appear relatively fresh, with broken surfaces appearing white and unweathered. Distinguishing between fractures around the time of death and post-depositional fractures in bone is difficult, as both types of fractures will show signs of weathering. Unless evidence of bone healing or other factors are present, researchers may choose to regard all weathered fractures as post-depositional.\n", "Pain is present in about 78% of cases. Slight pain is present in the earliest stage of ainhum, caused by pressure on the underlying nerves. Fracture of the phalanx or chronic sepsis is accompanied with severe pain.\n\nSection::::Cause.\n", "Section::::Signs and symptoms.\n\nOsteoporosis itself has no symptoms; its main consequence is the increased risk of bone fractures. Osteoporotic fractures occur in situations where healthy people would not normally break a bone; they are therefore regarded as fragility fractures. Typical fragility fractures occur in the vertebral column, rib, hip and wrist.\n\nSection::::Signs and symptoms.:Fractures.\n", "Nociceptors responsible for bone pain can be activated via several mechanisms including deterioration of surrounding tissue, bone destruction, and physical stress which shears the bone, vascular, muscle, and nervous tissue.\n\nSection::::Treatment.\n\nThe use of anesthetics within the actual bone has been a common treatment for several years. This method provides a direct approach using analgesics to relieve pain sensations.\n", "Traumatic injury is viewed through the lens of modern traumatic injury patterns. “Many aspects of the Kerma injury pattern were comparable to clinical [modern] observations: males experienced a higher frequency of trauma, the middle-aged group exhibited the most trauma, the oldest age cohort revealed the least amount of accumulated injuries, a small group experienced multiple trauma and fractures occurred more frequently than dislocations or muscle pulls”. Parry fractures (often occur when an individual is fending off a blow from an attacker) are common. These do not necessarily result from assault, however, and Judd does acknowledge this. She does not use the same parsing strategy when considering Colles' fractures (of the wrist, usually occur when falling onto one's hands) may result from being pushed from a height rather than interpersonal violence, and this is not acknowledged.\n", "BULLET::::- A study on the age of human and animal skeletal remains associated with the purported Aurignacian lithic assemblage from the Fontana Nuova site (Sicily, Italy) is published by Di Maida \"et al.\" (2019), who report that these remains date to Holocene rather than Aurignacian.\n\nBULLET::::- A study on the skeletal trauma of the human calvaria from Cioclovina (Romania), dated to approximately 33,000 calendar years before present, is published by Kranioti, Grigorescu & Harvati (2019), who interpret this finding as evidence of fatal interpersonal violence among early Upper Paleolithic modern humans of Europe.\n", "Fractures are a common symptom of osteoporosis and can result in disability. Acute and chronic pain in the elderly is often attributed to fractures from osteoporosis and can lead to further disability and early mortality. These fractures may also be asymptomatic. The most common osteoporotic fractures are of the wrist, spine, shoulder and hip. 
The symptoms of a vertebral collapse (\"compression fracture\") are sudden back pain, often with radicular pain (shooting pain due to nerve root compression) and rarely with spinal cord compression or cauda equina syndrome. Multiple vertebral fractures lead to a stooped posture, loss of height, and chronic pain with resultant reduction in mobility.\n", "Section::::Prognosis.:Hip fractures.\n\nHip fractures are responsible for the most serious consequences of osteoporosis. In the United States, more than 250,000 hip fractures annually are attributable to osteoporosis. A 50-year-old white woman is estimated to have a 17.5% lifetime risk of fracture of the proximal femur. The incidence of hip fractures increases each decade from the sixth through the ninth for both women and men for all populations. The highest incidence is found among men and women ages 80 or older.\n\nSection::::Prognosis.:Vertebral fractures.\n", "The link between age-related reductions in bone density and fracture risk goes back at least to Astley Cooper, and the term \"osteoporosis\" and recognition of its pathological appearance is generally attributed to the French pathologist Jean Lobstein. The American endocrinologist Fuller Albright linked osteoporosis with the postmenopausal state. Bisphosphonates were discovered in the 1960s.\n\nAnthropologists have studied skeletal remains that showed loss of bone density and associated structural changes that were linked to a chronic malnutrition in the agricultural area in which these individuals lived. \"It follows that the skeletal deformation may be attributed to\n", "Note that due to the fractures present on the bones being peri-mortem, the blows to the bones could have been made immediately prior (including as cause of) or soon after death. However, because of their precision placement, a peri-mortem \"Cause of Death\" is not likely, and rather the impacts were placed after the bone was defleshed.\n\nSection::::Skull cult practices.\n", "For many years it has been known that bones are innervated with sensory neurons. Yet their exact anatomy remained obscure due to the contrasting physical properties of bone and neural tissue. More recently, it is becoming clear what types of nerves innervated which sections of bone. The periosteal layer of bone tissue is highly pain-sensitive and an important cause of pain in several disease conditions causing bone pain, like fractures, osteoarthritis, etc. However, in certain diseases the endosteal and haversian nerve supply seems to play an important role, e.g. in osteomalacia, osteonecrosis, and other bone diseases. Thus there are several types of bone pain, each with many potential sources or origins of cause.\n", "Section::::Epidemiology.\n\nClavicle fractures occur at 30–64 cases per 100,000 a year and are responsible for 2.6–5.0% of all fractures. This type of fracture occurs more often in males. About half of all clavicle fractures occur in children under the age of seven and is the most common pediatric fracture. Clavicle fractures involve roughly 5% of all fractures seen in hospital emergency admissions. 
Clavicles are the most commonly broken bone in the human body.\n\nSection::::History.\n\nHippocrates, 4th century BC:\n", "BULLET::::- A study on the variation in trabecular bone structure of the femoral head in fossil hominins attributed to the species \"Australopithecus africanus\", \"Paranthropus robustus\" and to the genus \"Homo\", attempting to reconstruct hip joint loading conditions in these fossil hominins, is published by Ryan \"et al.\" (2018).\n\nBULLET::::- A study on the habitats and diets of \"Paranthropus boisei\" and \"Homo rudolfensis\" from the Early Pleistocene of the Malawi Rift is published by Lüdecke \"et al.\" (2018).\n", "In the case of bone fractures, surgical treatment is generally the most effective. Analgesics can be used in conjunction with surgery to help ease pain of damaged bone.\n\nSection::::Research.\n", "There is significant evidence of healing of the bones of the skull in prehistoric skeletons, suggesting that many of those that proceeded with the surgery survived their operation. In some studies, the rate of survival surpassed 50%.\n\nSection::::Origins.:Setting bones.\n\nExamples of healed fractures in prehistoric human bones, suggesting setting and splinting have been found in the archeological record.\n", "In a 1968 study, Laurence Levy recorded six catastrophic injuries to porters at Harare Central Hospital in Harare, Zimbabwe. Of these, one died instantaneously, and five became quadriplegic, one as a result of a herniated intervertebral disc and four from fractures or fracture-dislocations.\n\nSection::::By activity.:Rugby.\n", "BULLET::::- A study on the location, number, and severity of fractures in the teeth of \"Homo naledi\" and their implications for the diet of the taxon is published by Towle, Irish & De Groote (2017).\n\nBULLET::::- A study on the body size, proportions and absolute and relative brain size in \"Homo naledi\" is published by Garvin \"et al.\" (2017).\n\nBULLET::::- A study on the tooth formation and eruption in \"Homo naledi\" is published by Cofran & Walker (2017).\n", "Bone pain\n\nBone pain (also known medically by several other names) is pain coming from a bone. It occurs as a result of a wide range of diseases and/or physical conditions and may severely impair the quality of life. \n\nBone pain belongs to the class of deep somatic pain, often experienced as a dull pain that cannot be localized accurately by the patient. This is in contrast with the pain which is mediated by superficial receptors in, e.g., the skin. Bone pain can have several possible causes ranging from extensive physical stress to serious diseases such as cancer.\n", "BULLET::::- Seinsheimer classification, Evans-Jensen classification, Pipkin classification, and Garden classification for hip fractures\n\nSection::::Prevention.\n\nBoth high- and low-force trauma can cause bone fracture injuries. Preventive efforts to reduce motor vehicle crashes, the most common cause of high-force trauma, include reducing distractions while driving. Common distractions are driving under the influence and texting or calling while driving, both of which lead to an approximate 6-fold increase in crashes. Wearing a seatbelt can also reduce the likelihood of injury in a collision.\n", "Over 2.5 million child abuse and neglect cases are reported every year, and thirty-five out of every hundred cases are physical abuse cases. 
Bone fractures are sometimes part of the physical abuse of children; knowing the symptoms of bone fractures in physical abuse and recognizing the actual risks will help prevent future abuse and injuries. Alarmingly, these abuse fractures, if not dealt with correctly, have the potential to lead to the death of the child.\n" ]
[ "Breaking a bone should cause no pain at all." ]
[ "Bones have nerves and blood vessels and are living things. When they break the nerve endings are damaged and you feel all of that as pain. " ]
[ "false presupposition" ]
[ "Breaking a bone should cause no pain at all." ]
[ "false presupposition" ]
[ "Bones have nerves and blood vessels and are living things. When they break the nerve endings are damaged and you feel all of that as pain. " ]
2018-02028
How are men and women born in an almost equal ratio on this planet?
1. Suppose male births are less common than female. 2. A newborn male then has better mating prospects than a newborn female, and therefore can expect to have more offspring. 3. Therefore parents genetically disposed to produce males tend to have more than average numbers of grandchildren born to them. 4. Therefore the genes for male-producing tendencies spread, and male births become more common. 5. As the 1:1 sex ratio is approached, the advantage associated with producing males dies away. 6. The same reasoning holds if females are substituted for males throughout. Therefore 1:1 is the equilibrium ratio. URL_0
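The comment above is a compact statement of Fisher's principle. As an illustration only (not part of the original answer), the following Python sketch simulates that argument; the population size, starting bias, mutation step, and generation count are arbitrary assumptions. Each individual carries a heritable probability of producing sons; because every child has exactly one mother and one father, members of the rarer sex are sampled as parents more often, and the mean tendency drifts toward a 1:1 birth ratio.

    import random

    # Illustrative sketch of Fisher's principle; all constants are assumptions.
    POP_SIZE = 2000      # individuals per generation
    GENERATIONS = 200
    MUTATION = 0.02      # small noise on the inherited son-producing tendency

    def next_generation(population):
        males = [p for sex, p in population if sex == "M"]
        females = [p for sex, p in population if sex == "F"]
        if not males or not females:
            return population  # degenerate case, keep the sketch simple
        offspring = []
        for _ in range(POP_SIZE):
            # Every child has one mother and one father, so when one sex is
            # rare each member of that sex is chosen as a parent more often.
            mother_p = random.choice(females)
            father_p = random.choice(males)
            # The child inherits the son-producing tendency from one parent,
            # plus a small mutation; the mother's tendency sets the birth sex.
            p = random.choice([mother_p, father_p])
            p = min(max(p + random.uniform(-MUTATION, MUTATION), 0.0), 1.0)
            sex = "M" if random.random() < mother_p else "F"
            offspring.append((sex, p))
        return offspring

    # Start with a strong bias toward daughters (30% sons).
    population = [("M" if random.random() < 0.3 else "F", 0.3)
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = next_generation(population)

    mean_p = sum(p for _, p in population) / len(population)
    male_share = sum(1 for sex, _ in population if sex == "M") / len(population)
    print(f"mean son-producing tendency: {mean_p:.2f}")   # typically near 0.50
    print(f"share of males born:         {male_share:.2f}")  # typically near 0.50

In repeated runs the mean tendency moves from the imposed 0.3 toward roughly 0.5, the equilibrium the answer describes; this is a toy model, not a claim about real human genetics.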
[ "Males typically produce billions of sperm each month, many of which are capable of fertilization. Females typically produce one ovum a month that can be fertilized into an embryo. Thus during a lifetime males are able to father a significantly greater number of children than females can give birth to. The most fertile female, according to the Guinness Book of World Records, was the wife of Feodor Vassilyev of Russia (1707–1782) who had 69 surviving children. The most prolific father of all time is believed to be the last Sharifian Emperor of Morocco, Mulai Ismail (1646–1727) who reportedly fathered more than 800 children from a harem of 500 women.\n", "In a scientific paper published in 2008, James states that conventional assumptions have been:\n\nBULLET::::- there are equal numbers of X and Y chromosomes in mammalian sperm\n\nBULLET::::- X and Y stand equal chance of achieving conception\n\nBULLET::::- therefore equal numbers of male and female zygotes are formed\n\nBULLET::::- therefore any variation of sex ratio at birth is due to sex selection between conception and birth.\n", "Section::::Factors affecting sex ratio in humans.:Economic factors.\n", "Section::::Factors affecting sex ratio in humans.:Social factors.:Early marriage and parents' age.\n", "Manning and colleagues have shown that 2D:4D ratios vary greatly between different ethnic groups. In a study with Han, Berber, Uygur and Jamaican children as subjects, Manning et al. found that Han children had the highest mean values of 2D:4D (0.954±−0.032), they were followed by the Berbers (0.950±0.033), then the Uygurs (0.946±0.037), and the Jamaican children had the lowest mean 2D:4D (0.935±0.035). This variation is far larger than the differences between sexes; in Manning's words, \"There's more difference between a Pole and a Finn, than a man and a woman.\"\n", "The relationship between natural factors and human sex ratio at birth, and with aging, remains an active area of scientific research.\n\nSection::::Factors affecting sex ratio in humans.:Environmental factors.\n\nSection::::Factors affecting sex ratio in humans.:Environmental factors.:Effects of climate change.\n", "Section::::Examples in non-human species.:Domesticated animals.\n\nTraditionally, farmers have discovered that the most economically efficient community of animals will have a large number of females and a very small number of males. A herd of cows with a few bulls or a flock of hens with one rooster are the most economical sex ratios for domesticated livestock.\n\nSection::::Examples in non-human species.:Dioecious plants secondary sex ratio and amount of pollen.\n", "Section::::History.\n\nThe notion of greater male variability — at least in respect to physical characteristics — can be traced back to the writings of Charles Darwin. When he expounded his theory of sexual selection in the \"The Descent of Man and Selection in Relation to Sex\", Darwin noted that in many species, including humans, males tended to show greater variation than females in sexually selected traits: \n", "BULLET::::- Whether the mother has a partner or other support network, although this correlation is widely considered to be the result of an unknown third factor.\n\nBULLET::::- Latitude, with countries near the equator producing more females than near the poles.\n", "Like most sexual species, the sex ratio in humans is approximately 1:1. 
In humans, the natural ratio between males and females at birth is slightly biased towards the male sex, being estimated to be about 1.05 or 1.06 males/per female born. Sex imbalance may arise as a consequence of various factors including natural factors, exposure to pesticides and environmental contaminants, war casualties, sex-selective abortions, infanticides, aging, gendercide and problems with birth registration.\n", "Section::::Factors affecting sex ratio in humans.:Other gestational factors.\n", "Section::::Kin selection.:Conflict over sex ratio.\n", "The human sex ratio is of particular interest to anthropologists and demographers. In human societies, however, sex ratios at birth may be considerably skewed by factors such as the age of mother at birth, and by sex-selective abortion and infanticide. Exposure to pesticides and other environmental contaminants may be a significant contributing factor as well. As of 2014, the global sex ratio at birth is estimated at 107 boys to 100 girls (1000 boys per 934 girls).\n\nSection::::Types.\n\nIn most species, the sex ratio varies according to the age profile of the population.\n\nIt is generally divided into four subdivisions:\n", "Section::::Influences on natural fertility rates.:Fertile window.\n\nThe number of children born to one woman can vary dependent on her window from menarche to menopause. The average window of fertility is from 13.53 to 49.24. Taking into consideration lactational amenorrhea and the period between conception and birth, the average woman is capable of experiencing around 20 births. However, if the duration of lactation is cut short due to use of a formula substitute or the woman has multiple births, the number of offspring could exceed 20.\n\nSection::::Influences on natural fertility rates.:Male contribution.\n", "While recombination of chromosomes is an essential process during meiosis, there is a large range of frequency of cross overs across organisms and within species. Sexually dimorphic rates of recombination are termed heterochiasmy, and are observed more often than a common rate between male and females. In mammals, females often have a higher rate of recombination compared to males. It is theorised that there are unique selections acting or meiotic drivers which influence the difference in rates. The difference in rates may also reflect the vastly different environments and conditions of meiosis in oogenesis and spermatogenesis.\n\nSection::::Meiosis indicators.\n", "Section::::Factors affecting sex ratio in humans.:Social factors.:Data sources and data quality issues.\n", "Each trait has its own advantages and disadvantages, but sometimes a trait that is found desirable may not be favorable in terms of certain biological factors such as reproductive fitness, and traits that are not highly valued by the majority of people may be favorable in terms of biological factors. For example, women tend to have fewer pregnancies on average than before and therefore net worldwide fertility rates are dropping. Moreover, this leads to the fact that multiple births tend to be favorable in terms of number of children and therefore offspring count; when the average number of pregnancies and the average number of children was higher, multiple births made only a slight relative difference in number of children. However, with fewer pregnancies, multiple births can make the difference in number of children relatively large. 
A hypothetical scenario would be that couple 1 has ten children and couple 2 has eight children, but in both couples, the woman undergoes eight pregnancies. This is not a large difference in ratio of fertility. However, another hypothetical scenario can be that couple 1 has three children and couple 2 has one child but in both couples the woman undergoes one pregnancy (in this case couple 2 has triplets). When the proportion of offspring count in the latter hypothetical scenario is compared, the difference in proportion of offspring count becomes higher. A trait in women known to greatly increase the chance of multiple births is being a tall woman (presumably the chance is further increased when the woman is very tall among both women and men). Yet very tall women are not viewed as a desirable phenotype by the majority of people, and the phenotype of very tall women has not been highly favored in the past. Nevertheless, values placed on traits can change over time.\n", "To exemplify this greater male variability in humans, Darwin also cites some observations made by his contemporaries. For example, he highlights findings from the Novara Expedition of 1861–67 where \"a vast number of measurements of various parts of the body in different races were made, and the men were found in almost every case to present a greater range of variation than the women\" (p. 275). To Darwin, the evidence from the medical community at the time, which suggested a greater prevalence of physical abnormalities among men than women, was also indicative of man’s greater physical variability.\n", "Some studies have found that certain kinds of environmental pollution, in particular dioxins leads to higher rates of female births.\n\nSection::::Factors affecting sex ratio in humans.:Social factors.\n", "The \"First World\" G7 members all have a gender ratio in the range of 0.95–0.98 for the total population, of 1.05–1.07 at birth, of 1.05–1.06 for the group below 15, of 1.00–1.04 for the group aged 15–64, and of 0.70–0.75 for those over 65.\n", "James cautions that available scientific evidence stands against the above assumptions and conclusions. He reports that there is an excess of males at birth in almost all human populations, and the natural sex ratio at birth is usually between 1.02 and 1.08. However the ratio may deviate significantly from this range for natural reasons.\n", "Section::::Humans.\n\nSome research has suggested that historically, women have had a far higher reproductive success rate than men. Dr. Baumeister has suggested that the modern human has twice as many female ancestors as male ancestors.\n", "“It has been shown in the present volume that the offspring from the union of two distinct individuals, especially if their progenitors have been subjected to very different conditions, have an immense advantage in height, weight, constitutional vigour and fertility over the self-fertilised offspring from one of the same parents. And this fact is amply sufficient to account for the development of the sexual elements, that is, for the genesis of the two sexes.”\n", "Various scientists have examined the question whether human birth sex ratios have historically been affected by environmental stressors such as climate change and global warming. Catalano et al. report that cold weather is an environmental stressor, and women subjected to colder weather abort frail male fetuses in greater proportion, thereby lowering birth sex ratios. 
But cold weather stressors also extend male longevity, thereby raising the human sex ratio at older ages. The Catalano team finds that a 1 °C increase in annual temperature predicts one more male than expected for every 1,000 females born in a year.\n", "Counting the combinations with the same number of A and B, we get the following table.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04549
Why do animals have such nice teeth and humans always need to brush and/or get braces?
We eat a lot more acidic and sugary foods and they don't care about how aesthetically pleasing their smile is. Anyone with a dog knows how bad their breath can be, I'm sure other animals are the same.
[ "Most species are omnivorous, but fruit is the preferred food among all but some human groups. Chimpanzees and orangutans primarily eat fruit. When gorillas run short of fruit at certain times of the year or in certain regions, they resort to eating shoots and leaves, often of bamboo, a type of grass. Gorillas have extreme adaptations for chewing and digesting such low-quality forage, but they still prefer fruit when it is available, often going miles out of their way to find especially preferred fruits. Humans, since the neolithic revolution, consume mostly cereals and other starchy foods, including increasingly highly processed foods, as well as many other domesticated plants (including fruits) and meat. Hominid teeth are similar to those of the Old World monkeys and gibbons, although they are especially large in gorillas. The dental formula is . Human teeth and jaws are markedly smaller for their size than those of other apes, which may be an adaptation to eating cooked food since the end of the Pleistocene.\n", "The dental formula of humans is: . Humans have proportionately shorter palates and much smaller teeth than other primates. They are the only primates to have short, relatively flush canine teeth. Humans have characteristically crowded teeth, with gaps from lost teeth usually closing up quickly in young individuals. Humans are gradually losing their third molars, with some individuals having them congenitally absent.\n\nSection::::Biology.:Genetics.\n", "The results of a study of dental microwear on tooth enamel for specimens of the carnivore species from LaBrea pits, including dire wolves, suggest that these carnivores were not food-stressed just before their extinction. The evidence also indicated that the extent of carcass utilization (i.e., amount consumed relative to the maximum amount possible to consume, including breakup and consumption of bones) was less than among large carnivores today. These finding indicates that tooth breakage was related to hunting behavior and the size of prey.\n\nSection::::Adaptation.:Climate impact.\n", "Mammals, in general, are diphyodont, meaning that they develop two sets of teeth. In humans, the first set (the \"baby,\" \"milk,\" \"primary\" or \"deciduous\" set) normally starts to appear at about six months of age, although some babies are born with one or more visible teeth, known as neonatal teeth. Normal tooth eruption at about six months is known as teething and can be painful. Kangaroos, elephants, and manatees are unusual among mammals because they are polyphyodonts.\n\nSection::::Mammals.:Aardvark.\n\nIn Aardvarks, teeth lack enamel and have many pulp tubules, hence the name of the order Tubulidentata.\n\nSection::::Mammals.:Canines.\n", "The mesowear method or tooth wear scoring method is a quick and inexpensive process of determining the lifelong diet of a taxon (grazer or browser) and was first introduced in the year 2000.\n\nThe mesowear technique can be extended to extinct and also extant animals.\n", "Section::::Mammals.:Rabbit.\n", "In dogs, the teeth are less likely than humans to form dental cavities because of the very high pH of dog saliva, which prevents enamel from demineralizing. Sometimes called cuspids, these teeth are shaped like points (cusps) and are used for tearing and grasping food\n\nSection::::Mammals.:Cetaceans.\n", "Generally, tooth development in non-human mammals is similar to human tooth development. The variations usually lie in the morphology, number, development timeline, and types of teeth. 
However, some mammals' teeth do develop differently than humans'.\n", "Section::::Abnormalities.:Environmental.:Destruction after development.\n", "Section::::Supporting structures.:Gingiva.\n", "BULLET::::- Pinney, Chris C. (1992) \"The illustrated veterinary guide for dogs, cats, birds & exotic pets\", 1st ed., Blue Ridge Summit, PA: Tab Books\n\nBULLET::::- Randall-Bowman, [n.i.] (2004) \"Gummed Out: Young Horses Lose Many Teeth, Vet Says\", archived webpage, accessed 8 October 2007]\n\nBULLET::::- Ross, Michael H., Kaye, G.I. and Pawlina, W. (2006) \"Histology: a text and atlas\", 5th ed., Philadelphia; London: Lippincott Williams & Wilkins,\n\nBULLET::::- Springer, Shelley C. and Annibale, D.J. (2006) \"Kernicterus\", eMedicen online, accessed 7 October 2007\n", "Section::::Abnormalities.:Environmental.:Discoloration.\n", "Pain originating from dental problems is very rarely recognized by owners or professionals. Seldom will an animal become anorexic due to a dental problem. The exception to this is in the case of severe soft tissue injury, for example chronic gingivostomatitis. In general dental pain is a chronic pain, and it is only after treatment that an owner reports how much better their pet is doing. Pain is often mistaken for a pet just getting old.\n\nVery few clients examine their pets’ teeth unless they are carrying out daily home care, so actual dental problems often go unnoticed. \n\nbrbr\n", "Dental caries (non-human)\n\nDental caries, also known as tooth decay, is uncommon among companion animals. The bacteria \"Streptococcus mutans\" and \"Streptococcus sanguis\" cause dental caries by metabolising sugars.\n\nThe term \"feline cavities\" is commonly used to refer to feline odontoclastic resorptive lesions, however, sacchrolytic acid-producing bacteria (the same responsible for Dental plaque) are not involved in this condition.\n\nSection::::In dogs.\n", "Section::::Interaction with humans.:Conservation.\n", "Tooth development is the complex process by which teeth form from embryonic cells, grow, and erupt into the mouth. Although many diverse species have teeth, their development is largely the same as in humans. For human teeth to have a healthy oral environment, enamel, dentin, cementum, and the periodontium must all develop during appropriate stages of fetal development. Primary teeth start to form in the development of the embryo between the sixth and eighth weeks, and permanent teeth begin to form in the twentieth week. If teeth do not start to develop at or near these times, they will not develop at all.\n", "Dental abrasion or tooth wear is common in ferrets, and is caused by mechanical wear of the teeth. Eating manufactured dry food (kibble) will erode (due to the hard and extremely dry kibble) the carnassial teeth of the ferret, the wear from the eating kibble can become significant with old age (after three to five years). If teeth are overly ground down, a ferret cannot use them as scissors to eat raw meat. Tooth erosion eventually affects a ferret's ability to eat solid food. Dental abrasion can also be caused by excessive chewing on fabrics or toys, and cage biting. If the ferret engages in these activities a lot, it might be a sign of boredom, and more stimulating activities (such as play) should rectify the situation.\n", "In 2015, a study looked at specimens of all of the carnivore species from Rancho La Brea, California, including remains of the large wolf \"Canis dirus\" that was also a megafaunal hypercarnivore. 
The evidence suggests that these carnivores were not food-stressed just before extinction, and that carcass utilization was less than among large carnivores today. The high incidence of tooth breakage likely resulted from the acquisition and consumption of larger prey.\n\nSection::::Diet.\n", "Humans are to some extent predatory, using weapons and tools to fish, hunt and trap animals. They also use other predatory species such as dogs, cormorants, and falcons to catch prey for food or for sport. \n\nTwo mid-sized predators, dogs and cats, are the animals most often kept as pets in western societies.\n", "Oral disease is not a new problem for cats. A study performed by O'Neill et al. (2014), discovered the same feline dental diseases found today, in the skulls of cats living fifty years ago. This highlights that felines are predisposed to poor oral health, a trait which is likely due to their origins as a desert species and the typical diet they consume. It is important to note that purebred cats are not at a higher risk of developing oral diseases when compared to mixed breeds. This further supports the supposition that the lifestyle of the cat plays the largest role on dental health.\n", "Carnivorans include carnivores, omnivores, and even a few primarily herbivorous species, such as the giant panda and the binturong. Important teeth for carnivorans are the large, slightly recurved canines, used to dispatch prey, and the carnassial complex, used to rend meat from bone and slice it into digestible pieces. Dogs have molar teeth behind the carnassials for crushing bones, but cats have only a greatly reduced, functionless molar behind the carnassial in the upper jaw. Cats will strip bones clean but will not crush them to get the marrow inside. Omnivores, such as bears and raccoons, have developed blunt, molar-like carnassials. Carnassials are a key adaptation for terrestrial vertebrate predation; all other placental orders are primarily herbivores, insectivores, or aquatic.\n", "Some ungulates completely lack upper incisors and instead have a dental pad to assist in browsing. It can be found in camels, ruminants, and some toothed whales; modern baleen whales are remarkable in that they have baleen instead to filter out the krill from the water. On the other spectrum teeth have been evolved as weapons or sexual display seen in pigs and peccaries, some species of deer, musk deer, hippopotamuses, beaked whales and the Narwhal, with its long canine tooth.\n\nSection::::Characteristics.:Anatomy.:Cranial appendages.\n", "A later La Brea pits study compared tooth breakage of dire wolves in two time periods. One pit contained fossil dire wolves dated 15,000YBP and another dated 13,000YBP. The results showed that the 15,000YBP dire wolves had three times more tooth breakage than the 13,000YBP dire wolves, whose breakage matched those of nine modern carnivores. The study concluded that between 15,000–14,000YBP prey availability was less or competition was higher for dire wolves, and that by 13,000YBP, as the prey species moved towards extinction, predator competition had declined and therefore the frequency of tooth breakage in dire wolves had also declined.\n", "Studies of Australopithecine diets through dental microwear showed that they were largely frugivorous but there is some archaeological evidence for meat consumption. 
The shift in dietary capacities gave Australopithecines the advantage to survive in several different habitats.\n\nSection::::History.:Archaic megadont hominids.\n\nMegadont hominids, in general, show the greatest reduction in canines, but the premolars were abnormally large.\n\nSection::::History.:Archaic megadont hominids.:\"Paranthropus robustus\".\n", "There has been a plethora of research studies to calculate the prevalence of certain dental anomalies in CLP populations; however, a variety of results have been obtained.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-06618
How did a comet hit the earth and only kill the dinosaurs?
It's estimated that the Cretaceous–Paleogene extinction event eliminated 75% of all species of plants and animals on Earth, far more than just the dinosaurs. Dinosaurs are just the highest-profile ones and so tend to be talked about more, because people like dinosaurs. Not a lot of ammonite fans out there, even though they went extinct as well, among many others.
[ "The collision of Comet Shoemaker–Levy 9 with Jupiter in 1994 demonstrated that gravitational interactions can fragment a comet, giving rise to many impacts over a period of a few days if the comet should collide with a planet. Comets undergo gravitational interactions with the gas giants, and similar disruptions and collisions are very likely to have occurred in the past. This scenario may have occurred on Earth at the end of the Cretaceous, though Shiva and the Chicxulub craters might have been formed 300,000 years apart.\n", "Kolbert explains that the main cause of the Cretaceous–Paleogene extinction event was not the impact of the asteroid itself. It was the dust created by the impact. The debris from the impact incinerated anything in its path. She states that it is impossible to estimate the full extent of the various species that died out due to this catastrophe. However, one class of animals we know did die out because of the effects of the asteroid's impact, are the ammonites. Kolbert explains that, even though ammonites were 'fit' for their current environment, a single moment can completely change which traits are advantageous and which are lethal.\n", "If widespread fires occurred, they would have increased the content of the atmosphere and caused a temporary greenhouse effect once the dust clouds and aerosol settled, and, this would have exterminated the most vulnerable organisms that survived the period immediately after the impact.\n\nAlthough most paleontologists now agree that an asteroid did hit the Earth at approximately the end of the Cretaceous, there is an ongoing dispute whether the impact was the sole cause of the extinctions.\n\nSection::::Chicxulub impact.:2016 Chicxulub crater drilling project.\n", "A large impact might have triggered other mechanisms of extinction described below, such as the Siberian Traps eruptions at either an impact site or the antipode of an impact site. The abruptness of an impact also explains why more species did not rapidly evolve to survive, as would be expected if the Permian-Triassic event had been slower and less global than a meteorite impact.\n\nSection::::Theories about cause.:Impact event.:Possible impact sites.\n", "The impact of a sufficiently large asteroid or comet could have caused food chains to collapse both on land and at sea by producing dust and particulate aerosols and thus inhibiting photosynthesis. Impacts on sulfur-rich rocks could have emitted sulfur oxides precipitating as poisonous acid rain, contributing further to the collapse of food chains. Such impacts could also have caused megatsunamis and/or global forest fires.\n\nMost paleontologists now agree that an asteroid did hit the Earth about 66 Ma ago, but there is an ongoing dispute whether the impact was the sole cause of the Cretaceous–Paleogene extinction event.\n", "BULLET::::- T Rex: a male \"Tyrannosaurus rex\" killed by the comet crashes on Earth, he seems to be the first one killed by the impact of his kind. Before he dies, he is a 45 feet long healthy apex predator.\n\nBULLET::::- \"Quetzalcoatlus\": a pterosaur capable of flight with wings of a diameter of around 40–50 feet.\n", "The impact may also have produced acid rain, depending on what type of rock the asteroid struck. However, recent research suggests this effect was relatively minor. Chemical buffers would have limited the changes, and the survival of animals vulnerable to acid rain effects (such as frogs) indicates that this was not a major contributor to extinction. 
Impact theories can only explain very rapid extinctions, since the dust clouds and possible sulphuric aerosols would wash out of the atmosphere in a fairly short time—possibly under ten years.\n\nSection::::Possible causes.:Chicxulub Crater.\n", "BULLET::::- At least five putative hydrogen sulfide-induced mass extinctions, such as the Great Dying,\n\nThe list does not include the Cretaceous–Paleogene extinction event, since this was, at least partially, externally induced by a meteor impact.\n", "The fourth mass extinction was the Triassic-Jurassic extinction event in which almost all synapsids and archosaurs became extinct, probably due to new competition from dinosaurs.\n\nThe fifth and most recent mass extinction was the K-T extinction. In 66 Ma, a asteroid struck Earth just off the Yucatán Peninsula—somewhere in the south western tip of then Laurasia—where the Chicxulub crater is today. This ejected vast quantities of particulate matter and vapor into the air that occluded sunlight, inhibiting photosynthesis. 75% of all life, including the non-avian dinosaurs, became extinct, marking the end of the Cretaceous period and Mesozoic era.\n", "Section::::Causes.\n", "Section::::Context.\n\nMajor astronomical impact events have significantly shaped Earth's history, having been implicated in the formation of the Earth–Moon system, the origin of water on Earth, the evolutionary history of life, and several mass extinctions. Notable prehistorical impact events include the Chicxulub impact, 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event. The 37 million years old asteroid impact that caused Mistastin crater generated temperatures exceeding 2,370 °C, the highest known to have naturally occurred on the surface of the Earth.\n", "BULLET::::- Chodas P. W., and Yeomans D. K. (1996), \"The Orbital Motion and Impact Circumstances of Comet Shoemaker–Levy 9\", in \"The Collision of Comet Shoemaker–Levy 9 and Jupiter\", edited by K. S. Noll, P. D. Feldman, and H. A. Weaver, Cambridge University Press, pp. 1–30\n\nBULLET::::- Chodas P. W. (2002), \"Communication of Orbital Elements to Selden E. Ball, Jr.\" Accessed February 21, 2006\n\nSection::::External links.\n\nBULLET::::- Comet Shoemaker–Levy 9 FAQ\n\nBULLET::::- Comet Shoemaker–Levy 9 Photo Gallery\n\nBULLET::::- Downloadable gif Animation showing time course of impact and size relative to earthsize\n\nBULLET::::- Comet Shoemaker-Levy 9 Dan Bruton, Texas A&M University\n", "In , an international panel of 41 scientists reviewed 20 years of scientific literature and endorsed the asteroid hypothesis, specifically the Chicxulub impact, as the cause of the extinction, ruling out other theories such as massive volcanism. They had determined that a asteroid hurtled into Earth at Chicxulub on Mexico's Yucatán Peninsula. The collision would have released the same energy as —more than a billion times the energy of the atomic bombings of Hiroshima and Nagasaki.\n", "One of the leading theories for the cause of the Cretaceous–Paleogene extinction event that included the dinosaurs is a large meteorite impact. The Chicxulub Crater has been identified as the site of this impact. 
There has been a lively scientific debate as to whether other major extinctions, including the ones at the end of the Permian and Triassic periods might also have been the result of large impact events, but the evidence is much less compelling than for the end Cretaceous extinction.\n", "Near-Earth object may be a fragment of Encke.\n\nSection::::Meteor showers.:Mercury.\n", "Explosives would also have been superfluous. At its closing velocity of 10.2 km/s, the Impactor's kinetic energy was equivalent to 4.8 tonnes of TNT, considerably more than its actual mass of only 372 kg.\n\nThe mission coincidentally shared its name with the 1998 film, \"Deep Impact\", in which a comet strikes the Earth.\n\nSection::::Mission profile.\n", "The effect of impact events on the biosphere has been the subject of scientific debate. Several theories of impact-related mass extinction have been developed. In the past 500 million years there have been five generally accepted major mass extinctions that on average extinguished half of all species. One of the largest mass extinctions to have affected life on Earth was the Permian-Triassic, which ended the Permian period 250 million years ago and killed off 90 percent of all species; life on Earth took 30 million years to recover. The cause of the Permian-Triassic extinction is still a matter of debate; the age and origin of proposed impact craters, i.e. the Bedout High structure, hypothesized to be associated with it are still controversial. The last such mass extinction led to the demise of the dinosaurs and coincided with a large meteorite impact; this is the Cretaceous–Paleogene extinction event (also known as the K–T or K–Pg extinction event), which occurred 66 million years ago. There is no definitive evidence of impacts leading to the three other major mass extinctions.\n", "Although the concurrence of the end-Cretaceous extinctions with the Chicxulub asteroid impact strongly supports the impact hypothesis, some scientists continue to support other contributing causes: volcanic eruptions, climate change, sea level change, and other impact events. 
The end-Cretaceous event is the only mass extinction known to be associated with an impact, and other large impacts, such as the Manicouagan Reservoir impact, do not coincide with any noticeable extinction events.\n\nSection::::Alternative hypotheses.:Deccan Traps.\n", "Section::::Earth impactor model.:Lake Cheko.\n", "BULLET::::- Comet Levy, 1990c, C/1990 K1, May 20, 1990\n\nBULLET::::- Periodic Comet Levy, P/1991 L3, June 14, 1991\n\nBULLET::::- Comet Takamizawa-Levy, C/1994 G1, April 15, 1994\n\nBULLET::::- Periodic Comet 255P/Levy, October 2, 2006\n\nBULLET::::- Photographic, as part of team of Eugene and Carolyn Shoemaker and David Levy:\n\nBULLET::::- Periodic Comet Shoemaker-Levy 1, 1990o, P/1990 V1\n\nBULLET::::- Periodic Comet Shoemaker-Levy 2, 1990p, 137 P/1990 UL3\n\nBULLET::::- Comet Shoemaker-Levy, 1991d C/1991 B1\n\nBULLET::::- Periodic Comet Shoemaker-Levy 3, 1991e, 129P/1991 C1\n\nBULLET::::- Periodic Comet Shoemaker-Levy 4, 1991f, 118P/1991 C2\n\nBULLET::::- Periodic Comet Shoemaker-Levy 5, 1991z, 145P/1991 T1\n\nBULLET::::- Comet Shoemaker-Levy, 1991a1, C/1991 T2\n", "BULLET::::- \"Crocodylus anthropophagus\"\n\nBULLET::::- Kali Gedeh giant crocodile (\"species inquirenda\")\n\nBULLET::::- \"Crocodylus palaeindicus\"\n\nBULLET::::- \"Crocodylus thorbjarnarsoni\"\n\nBULLET::::- \"Euthecodon\"\n\nBULLET::::- \"Gavialis bengawanicus\"\n\nBULLET::::- \"Rimasuchus\"\n\nBULLET::::- \"Toyotamaphimeia\"\n\nBULLET::::- \"Australopithecus\"\n\nBULLET::::- \"Dinopithecus\"\n\nBULLET::::- Giant ape (\"Gigantopithecus\")\n\nBULLET::::- Various \"Homo\" sp.\n\nBULLET::::- \"Homo antecessor\"\n\nBULLET::::- \"Homo ergaster\"\n\nBULLET::::- \"Homo gautengensis\"\n\nBULLET::::- \"Homo habilis\"\n\nBULLET::::- \"Homo heildenbergensis\" (\"Homo\" \"rhodesiensis\")\n\nBULLET::::- \"Homo naledi\"\n\nBULLET::::- \"Homo sapiens idaltu\"\n\nBULLET::::- \"Homo rudolfensis\"\n\nBULLET::::- \"Parapapio\"\n\nBULLET::::- \"Paranthropus\"\n\nBULLET::::- \"Theropithecus brumpti\" et \"Theropithecus oswaldi\"\n\nBULLET::::- Pelagornithidae (e.g. \"Pelagornis\")\n\nSection::::Pleistocene or Ice Age extinction event.:Africa and southern Asia.:Megafauna that disappeared in Africa or southern Asia during the Late Pleistocene.\n", "BULLET::::- Powerful goshawk and the Gracile goshawk (\"Accipiter efficax et Accipiter quartus\")\n\nBULLET::::- \"Sylviornis\" (giant, flightless New Caledonian galliform- largest in existence)\n\nBULLET::::- Noble megapode (\"Megavitornis altirostris\")\n\nBULLET::::- New Caledonian gallinule (\"Porphyrio kukwiedei\")\n\nBULLET::::- Giant megapodes\n\nBULLET::::- Giant malleefowl (\"Leipoa gallinacea\")\n\nBULLET::::- Pile-builder megapode (\"Megapodius molistructor\")\n\nBULLET::::- Consumed scrubfowl (\"Megapodius alimentum\")\n\nBULLET::::- Viti Levu scrubfowl (\"Megapodius amissus\")\n\nBULLET::::- New Caledonian ground dove (\"Gallicolumba longitarsus\")\n\nBULLET::::- New Caledonian snipe et Viti Levu snipe (\"Coenocorypha miratropica\" et \"Coenocorypha neocaledonica\")\n\nBULLET::::- Niue night heron (\"Nycticorax kalavikai\")\n\nBULLET::::- Marquesas cuckoo-dove (\"Macropygia heana\")\n\nBULLET::::- New Caledonian barn owl (\"Tyto letocarti\")\n\nBULLET::::- Various \"Galliraillus\" sp.\n", "But life on Earth was not completely destroyed: fish and crocodiles surviving underwater; small mammals, snakes, insects, arachnids, and lizards hid underground; birds flew or swam away from the disaster. 
Three years pass before sunlight finally reaches the planet again, and plant life finally carpets the Earth again, setting the stage for a new era: the era of mammals. Mammals now multiply and diversify, with countless species of mammals evolving, until 10,000 species explode across the planet and one species, humans, eventually rule the planet like the dinosaurs once had.\n\nSection::::External links.\n\nBULLET::::- \"Last Day of the Dinosaurs\" at Discovery.com\n", "The third mass extinction was the Permian-Triassic, or the Great Dying, event was possibly caused by some combination of the Siberian Traps volcanic event, an asteroid impact, methane hydrate gasification, sea level fluctuations, and a major anoxic event. Either the proposed Wilkes Land crater in Antarctica or Bedout structure off the northwest coast of Australia may indicate an impact connection with the Permian-Triassic extinction. But it remains uncertain whether either these or other proposed Permian-Triassic boundary craters are either real impact craters or even contemporaneous with the Permian-Triassic extinction event. This was by far the deadliest extinction ever, with about 57% of all families and 83% of all genera killed.\n", "Probably the most convincing evidence for a worldwide catastrophe was the discovery of the crater which has since been named Chicxulub Crater. This crater is centered on the Yucatán Peninsula of Mexico and was discovered by Tony Camargo and Glen Pentfield while working as geophysicists for the Mexican oil company PEMEX. What they reported as a circular feature later turned out to be a crater estimated to be in diameter. This convinced the vast majority of scientists that this extinction resulted from a point event that is most probably an extraterrestrial impact and not from increased volcanism and climate change (which would spread its main effect over a much longer time period).\n" ]
[ "A comet hitting the earth only killed the dinosaurs.", "Dinosaurs were the only animals that went extinct after the comet hit." ]
[ "The extinction event also eliminated 75% of all species of plants and animals.", "Other animals went extinct too like the ammonite but they aren't as famous." ]
[ "false presupposition" ]
[ "A comet hitting the earth only killed the dinosaurs.", "Dinosaurs were the only animals that went extinct after the comet hit." ]
[ "false presupposition", "false presupposition" ]
[ "The extinction event also eliminated 75% of all species of plants and animals.", "Other animals went extinct too like the ammonite but they aren't as famous." ]
2018-19339
Sometimes I find a "4K" version of a movie that has a smaller file size than the 1080p version of the same movie.
What you are seeing is compression. Compression is basically using math to figure out which parts of an image can be thrown away and calculated again later, instead of sending every pixel. There are many different kinds of compression: some are lossless, meaning the full image can be rebuilt perfectly, pixel for pixel. Others are lossy, but they can get even smaller. JPEG is an example of lossy compression; you can see JPEG images start to get fuzzy because the compression is allowed to make small mistakes to get the image smaller. Which one is better? It is hard to say without knowing what kind of compression each file went through. The FHD (1080p) file could be basically uncompressed while the 4K file is compressed with a lossless method, which would mean the 4K version is better even though it is smaller. Once again, I don't know what you are looking at or how the files were compressed, so I don't know for sure.
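To make the size intuition in the comment concrete, here is a small Python sketch (added for illustration; the bitrates, durations, and the use of zlib as a stand-in codec are assumptions, not measurements of any real release). It shows that file size tracks bitrate times duration rather than resolution, and that the same pixel count can compress to very different sizes depending on the data and the effort the compressor spends.

    import os
    import zlib

    # Point 1: size is roughly bitrate x duration, regardless of resolution,
    # so a 4K encode at a lower bitrate can be smaller than a 1080p encode.
    def file_size_gb(bitrate_mbps, minutes):
        return bitrate_mbps * 1_000_000 / 8 * minutes * 60 / 1_000_000_000

    print(f"1080p at 20 Mbit/s, 120 min: {file_size_gb(20, 120):.1f} GB")
    print(f"4K    at 12 Mbit/s, 120 min: {file_size_gb(12, 120):.1f} GB")

    # Point 2: identical pixel counts can compress very differently. zlib is a
    # lossless general-purpose compressor standing in for a video codec here.
    flat_frame = bytes(1920 * 1080)        # a uniform, highly compressible frame
    noisy_frame = os.urandom(1920 * 1080)  # incompressible noise, worst case

    for name, frame in (("flat ", flat_frame), ("noisy", noisy_frame)):
        for level in (1, 9):
            size = len(zlib.compress(frame, level))
            print(f"{name} frame, zlib level {level}: {size:,} of {len(frame):,} bytes")

Running it, the 4K example comes out smaller than the 1080p one (10.8 GB vs 18 GB), and the flat frame shrinks to a few kilobytes while the noisy frame barely shrinks at all, which is the same trade-off real video encoders make frame by frame.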
[ "BULLET::::- 4096 × 2160 (full frame, 256:135 or ≈1.90:1 aspect ratio)\n\nBULLET::::- 3996 × 2160 (flat crop, 1.85:1 aspect ratio)\n\nBULLET::::- 4096 × 1716 (CinemaScope crop, ≈2.39:1 aspect ratio)\n\n2K distributions can have a frame rate of either 24 or 48 FPS, while 4K distributions must have a frame rate of 24 FPS.\n", "BULLET::::- \"The Suite Life on Deck\" (season one episodes used \"FilmLook\" processing)\n\nBULLET::::- \"True Jackson, VP\"\n\nBULLET::::- \"Drake & Josh\" (season one used \"FilmLook\" processing)\n\nBULLET::::- \"iCarly\"\n\nBULLET::::- \"Grange Hill\"\n\nBULLET::::- \"A.N.T. Farm\"\n\nBULLET::::- \"Family Feud\" (2012–present)\n\nBULLET::::- \"Choo Choo Soul\" (used FilmLook before switching back to Filmized for the Disney Songs)\n\nBULLET::::- \"The Mighty Boosh (series 1)\"\n\nMany digitally-shot productions have been filmized during mastering.\n\nSection::::Limitations.\n", "BULLET::::- 3996 × 2160 (flat crop, 1.85:1 aspect ratio)\n\nBULLET::::- 4096 × 1716 (CinemaScope crop, ≈2.39:1 aspect ratio)\n\nThe DCI 4K standard has twice the horizontal and vertical resolution of DCI 2K (2048 × 1080), with four times as many pixels overall.\n\nDigital movies made in 4K may be produced, scanned, or stored in a number of other resolutions depending on what storage aspect ratio is used. In the digital cinema production chain, a resolution of 4096 × 3112 is often used for acquiring \"open gate\" or anamorphic input material, a resolution based on the historical resolution of scanned Super 35 mm film.\n\nSection::::Resolutions.:Other 4K resolutions.\n", "This resolution has an aspect ratio of 16:9, with 8,294,400 total pixels. It is exactly double the horizontal and vertical resolution of 1080p (1920 × 1080) for a total of 4 times as many pixels, and triple the horizontal and vertical resolution of 720p (1280 × 720) for a total of 9 times as many pixels. It is sometimes referred to as \"2160p\", based on the naming patterns established by the previous 720p and 1080p HDTV standards.\n", "Section::::Reception.:United States and Canada.\n", "Theaters began projecting movies at 4K resolution in 2011. Sony was offering 4K projectors as early as 2004. The first 4K home theater projector was released by Sony in 2012.\n\nSony is one of the leading studios promoting UHDTV content, offering a little over 70 movie and television titles via digital download to a specialized player that stores and decodes the video. The large files (≈40GB), distributed through consumer broadband connections, raise concerns about data caps.\n", "Section::::Release.\n\n\"Resolution\" had its world premiere at the Tribeca Film Festival April 20, 2012. Tribeca Film gave the film an initial limited theatrical release on January 25, 2013. Cinedigm and Tribeca Film released \"Resolution\" on DVD, Blu-ray, and video-on-demand October 8, 2013.\n\nSection::::Reception.\n\nSection::::Reception.:Reviews.\n", "On November 29, 2012, Sony announced the 4K Ultra HD Video Player—a hard disk server preloaded with ten 4K movies and several 4K video clips that they planned to include with the Sony XBR-84X900. The preloaded 4K movies are \"The Amazing Spider-Man\", \"Total Recall\" (2012), \"The Karate Kid\" (2010), \"Salt\", \"\", \"The Other Guys\", \"Bad Teacher\", \"That's My Boy\", \"Taxi Driver\", and \"The Bridge on the River Kwai\". Additional 4K movies and 4K video clips will be offered for the 4K Ultra HD Video Player in the future.\n", "BULLET::::- 2004: \"Spider-Man 2\" – The first digital intermediate on a new Hollywood film to be done entirely at 4K resolution. 
Although scanning, recording, and color-correction was done at 4K by EFILM, most of the visual effects were created at 2K and were upscaled to 4K.\n\nBULLET::::- 2005: \"Serenity\" - The first film to fully conform to Digital Cinema Initiatives specifications, marking \"a major milestone in the move toward all-digital projection\".\n", "In 2015, Logmar of Denmark made a one-off batch of 50 \"digicanical\" pro-level Super 8 cameras to celebrate the 50th anniversary of Super 8. These cameras use a widened gate as well, providing an 11% increase in imaging area over the standard Super8 frame and achieving aspect ratio of 1.5.\n", "Section::::Production.:Music.\n\nThe musical score was composed by Henry Jackman, who also worked as composer for the 3D Disney computer-animated feature film \"Wreck-It Ralph\" (2012). In June 2015, American rapper Waka Flocka Flame released a single entitled \"Game On\", featuring Good Charlotte, which serves as part of the film's soundtrack. The Queen song \"We Will Rock You\" was also remade to fit into the film's Donkey Kong scenes.\n\nSection::::Release.\n", "In 2009, using Prime Focus World's proprietary View-D technology, the company converted \"Clash of the Titans\" into 3D. The work on this movie led to Prime Focus delivering stereo conversions for a number of other movie blockbusters, including: \"\", \"\", and \"\"; \"\" and \"\"; \"\"; \"Wrath of the Titans\"; \"\"; and \"Shrek 2\".\n", "On January 6, 2016, director James Gunn stated that the 2017 film \"Guardians of the Galaxy Vol. 2\" would be the first (feature) film to be shot in 8K, using the Red Weapon 8K VV.\n\nSection::::History.:Broadcasting.\n", "It is the third DCOM on DVD to be certified Platinum in DVD sales; the first is \"The Cheetah Girls\". The \"Wendy Wu: Homecoming Warrior\" sold more than 13,933 in DVD on amazon.com making the DVD the #14 most popular Kids DVD ever sold on Amazon.com. Despite being filmed in the 16:9 aspect ratio, the original and Kickin Edition DVD releases featured a 4:3 \"full screen\" version (though not pan and scan as the camera stays directly in the center of the image), the format of the film as shown on the Disney Channel.\n\nSection::::Soundtrack.\n", "BULLET::::- Star Wars: The legendary first three ‘Star Wars’ feature films were all scanned at 4K with full restoration including stabilization, grain reduction, and dust & dirt removal.\n\nBULLET::::- Avatar & Titanic: In addition to image processing and grain reduction, Z axis issues that had occurred during the original 3D shooting were fixed.\n\nBULLET::::- Disney: Short films and platinum classic films like 'Dumbo', 'Snow White', 'Cinderella', 'Sleeping Beauty' and several others were scanned at 4K, color corrected and restored according to fixed specifications. Dirt and dust were removed from each of these films, and scratches and grains were cleaned up.\n", "Pixel shifting, as described here, was pioneered in the consumer space by JVC, and later in the commercial space by Epson. That said, it isn't the same thing as \"true\" 4K. More recently, some DLP projectors claim 4K UHD (which the JVCs and Epsons do not claim).\n", "Section::::Release.\n\nThe movie premiered in 4 April 2019 on 285 screens under distribution of Buena Vista International.\n\nSection::::Reception.\n\nSection::::Reception.:Box office.\n", "Some photographic still cameras such as DSLRs can exceed 5K resolution when capturing still images, but not when capturing video. 
For example, the Canon EOS 5D Mark IV announced in August 2016 has a maximum resolution of 6720×4480 pixels (around 30 megapixels in a 3:2 aspect ratio) which is used for high resolution still images, but it can only capture video at a maximum of 4096×2160 and a framerate of 30 Hz.\n\nSection::::History.:First TV with 5K resolution.\n",
"The 4K UHD standard doesn't specify how large the pixels are, so a 4K UHD projector (Optoma, BenQ, Dell, et al.) counts because these projectors have a 2718×1528 pixel structure. Those projectors process the true 4K of data and project it with overlapping pixels, which is what pixel shifting is. Unfortunately, each of those pixels is far larger: each one has 50% more area than true 4K. Pixel shifting projectors project a pixel, shift it up to the right, by a half diameter, and project it again, with modified data, but that second pixel overlaps the first.\n",
"As technology gets better, the quality of telesyncs also improves, although even the best telesyncs are lossy and will be inferior in quality to direct rips from Blu-ray, DVD or digital transfers from the film itself (see telecine). Some release groups use high-definition video cameras to get the clearest picture possible. When an unlicensed copy of a film exists even before its official publication, it is often because a telesync version could be easily produced.\n",
"On May 17, 2013, the Franklin Institute premiered \"To Space and Back\", an 8K×8K, 60 fps, 3D video running approximately 25 minutes. During its first run at the Fels Planetarium it was played at 4K, 60 fps.\n\nIn November 2013, NHK screened the experimental-drama short film \"The Chorus\" at Tokyo Film Festival which was filmed in 8K and 22.2 sound format.\n",
"The film was also released in both high-definition formats, HD DVD, which featured both standard and high definitions on the same disc, and Blu-ray. It was the best-selling title on both formats in 2006, and was among the best-sellers of both formats of 2007.\n\nSection::::Unproduced sequel and reboot.\n",
"The film was originally scheduled to be released on May 15, 2015, but on August 12, 2014, the release date was changed to July 24, 2015. In the United States and Canada, it was released in the Dolby Vision format in Dolby Cinema, which is the first ever for Sony. It was released in China on September 15, 2015.\n\nSection::::Release.:Marketing.\n",
"In 2014, Netflix began streaming \"House of Cards\", \"Breaking Bad\", and \"some nature documentaries\" at 4K to compatible televisions with an HEVC decoder. Most 4K televisions sold in 2013 did not natively support HEVC, with most major manufacturers announcing support in 2014. Amazon Studios began shooting their full-length original series and new pilots with 4K resolution in 2014. They are now currently available through Amazon Video.\n\nIn March 2016 the first players and discs for Ultra HD Blu-ray—a physical optical disc format supporting 4K resolution and HDR at 60 frames per second—were released.\n",
"In comparison to 4K UHD (3840×2160), the 16:9 5K resolution of 5120×2880 offers 1280 extra columns and 720 extra lines of display area, an increase of 33.3% in each dimension. This additional display area can allow 4K content to be displayed at native resolution without filling the entire screen, which means that additional software such as video editing suite toolbars will be available without having to downscale the content previews.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-07584
What does it mean when a cough “moves into your chest”? Was it somewhere else before?
People generally say that in reference to colds. A “cold” is a blanket term for any viral infection of the nose and throat. In the early stages of a cold, your body is fighting the virus directly and most of the symptoms are in your sinuses, nose, and throat. After a few days the virus is pretty much dead and the nose starts to feel better, but by that point so much mucus has drained down your throat that you start coughing to clear it out. So it feels like the cold has moved from your nose into your chest, when in reality the chest symptoms are your body cleaning up after the cold.
[ "Coughing is a mechanism of the body that is essential to normal physiological function of clearing the throat which involves a reflex of the afferent sensory limb, central processing centre of the brain and the efferent limb. In conjunction to the components of the body that are involved, sensory receptors are also used. These receptors include rapidly adapting receptors which respond to mechanical stimuli, slowly adapting receptors and nociceptors which respond to chemical stimuli such as hormones in the body. To start the reflex, the afferent impulses are transmitted to the medulla of the brain this involves the stimulus which is then interpreted. The efferent impulses are then triggered by the medulla causing the signal to travel down the larynx and bronchial tree. This then triggers a cascade of events that involve the intercostal muscles, abdominal wall, diaphragm and pelvic floor which in conjunction together create the reflex known as coughing.\n", "Cough\n\nA cough is a sudden, and often repetitively occurring, protective reflex which helps to clear the large breathing passages from fluids, irritants, foreign particles and microbes. The cough reflex consists of three phases: an inhalation, a forced exhalation against a closed glottis, and a violent release of air from the lungs following opening of the glottis, usually accompanied by a distinctive sound.\n", "A foreign body can sometimes be suspected, for example if the cough started suddenly when the patient was eating. Rarely, sutures left behind inside the airway branches can cause coughing. A cough can be triggered by dryness from mouth breathing or recurrent aspiration of food into the windpipe in people with swallowing difficulties.\n\nSection::::Differential diagnosis.:Angiotensin-converting enzyme inhibitor.\n", "A cough is a protective reflex in healthy individuals which is influenced by psychological factors. The cough reflex is initiated by stimulation of two different classes of afferent nerves, namely the myelinated rapidly adapting receptors, and nonmyelinated C-fibers with endings in the lungs. However it is not certain that the stimulation of nonmyelinated C-fibers leads to cough with a reflex as it's meant in physiology (with its own five components): this stimulation may cause mast cells degranulation (through an asso-assonic reflex) and edema which may work as a stimulus for rapidly adapting receptors.\n\nSection::::Diagnostic approach.\n", "The efferent neural pathway then follows, with relevant signals transmitted back from the cerebral cortex and medulla via the vagus and superior laryngeal nerves to the glottis, external intercostals, diaphragm, and other major inspiratory and expiratory muscles. The mechanism of a cough is as follows:\n\nBULLET::::- Diaphragm (innervated by phrenic nerve) and external intercostal muscles (innervated by segmental intercostal nerves) contract, creating a negative pressure around the lung.\n\nBULLET::::- Air rushes into the lungs in order to equalise the pressure.\n\nBULLET::::- The glottis closes (muscles innervated by recurrent laryngeal nerve) and the vocal cords contract to shut the larynx.\n", "Some cases of chronic cough may be attributed to a sensory neuropathic disorder. Treatment for neurogenic cough may include the use of certain neuralgia medications. 
Coughing may occur in tic disorders such as Tourette syndrome, although it should be distinguished from throat-clearing in this disorder.\n\nSection::::Differential diagnosis.:Other.\n", "The reflex is impaired in the person whose abdominals and respiratory muscles are weak. This problem can be caused by disease condition that lead to muscle weakness or paralysis, by prolonged inactivity, or as outcome of surgery involving these muscles. Bed rest interferes with the expansion of the chest and limits the amount of air that can be taken into the lungs in preparation for coughing, making the cough weak and ineffective. This reflex may also be impaired by damage to the internal branch of the superior laryngeal nerve which relays the afferent branch of the reflex arc. This nerve is most commonly damaged by swallowing a foreign object, such as a chicken bone, resulting in it being lodged in the piriform recess (in the laryngopharynx) or by surgical removal of said object.\n", "BULLET::::- Gastroesophageal reflux disease (GERD) is identified with 2 mechanisms which are the distal esophageal acid stimulating the esophageal-treachebronchial cough reflex due to the vagus nerve and the microbial esophageal contents of the pharynx and tracheobronchial causing a cough reflex.\n\nSection::::Diagnosis.:Imaging.\n\nBULLET::::- X-rays are used to check for lung cancer, pneumonia and other lung diseases are contributing to the chronic cough. X-rays on the sinus also provide evidence of an infection in the area.\n\nBULLET::::- CT scans are used to check the conditions of the patients lungs and to check sinus cavities for infections.\n", "The coughing episodes are also episodic, progressively severe and eventually nocturnal. They are initiated by a “ticking sensation” deep in the throat or more typically behind the suprasternal notch. This sensation then causes the patient to cough. The cough is therefore a ‘normal’ response to this ticking sensation rather than an uncontrolled muscular contraction of the diaphragm or intercostal muscles. The coughing episodes can be severe with incontinence, vomiting, visual phosphenes (“seeing stars”), and post-tussive headache. These episodes typically progress from mild to severe over the years and eventually will awaken the patient from sleep. In between episodes, the patient is normal and respiratory examination does not reveal reactive airways disease (asthma) or allergies. The coughing episodes can be triggered by loud or prolonged talking, exercise or the choking episodes described above.\n", "Normally, the sound of the patient's voice becomes less distinct as the auscultation moves peripherally; bronchophony is the phenomenon of the patient's voice remaining loud at the periphery of the lungs or sounding louder than usual over a distinct area of consolidation, such as in pneumonia. This is a valuable tool in physical diagnosis used by medical personnel when auscultating the chest.\n", "After a respiratory tract infection has cleared, the person may be left with a postinfectious cough. This typically is a dry, non-productive cough that produces no phlegm. Symptoms may include a tightness in the chest, and a tickle in the throat. This cough may often persist for weeks after an illness. The cause of the cough may be inflammation similar to that observed in repetitive stress disorders such as carpal tunnel syndrome. The repetition of coughing produces inflammation which produces discomfort, which in turn produces more coughing. 
Postinfectious cough typically does not respond to conventional cough treatments. Treatment consists of any anti-inflammatory medicine (such as ipratropium) to treat the inflammation, and a cough suppressant to reduce frequency of the cough until inflammation clears. Inflammation may increase sensitivity to other existing issues such as allergies, and treatment of other causes of coughs (such as use of an air purifier or allergy medicines) may help speed recovery. A bronchodilator, which helps open up the airways, may also help treat this type of cough.\n", "People who are on mechanical ventilation are often sedated and are rarely able to communicate. As such, many of the typical symptoms of pneumonia will either be absent or unable to be obtained. The most important signs are fever or low body temperature, new purulent sputum, and hypoxemia (decreasing amounts of oxygen in the blood). However, these symptoms may be similar for tracheobronchitis.\n\nSection::::Cause.\n\nSection::::Cause.:Risk factors.\n", "Section::::Physiology.\n", "A full clinical diagnosis can only be made from a lung biopsy of the tissue, fully best performed by a VATS, done by a cardio-thoracic surgeon. Some pulmonologists may first attempt a bronchoscopy, however this frequently fails to give a full or correct diagnosis.\n", "BULLET::::- The abdominal muscles contract to accentuate the action of the relaxing diaphragm; simultaneously, the other expiratory muscles contract. These actions increase the pressure of air within the lungs.\n\nBULLET::::- The vocal cords relax and the glottis opens, releasing air at over 100 mph.\n\nBULLET::::- The bronchi and non-cartilaginous portions of the trachea collapse to form slits through which the air is forced, which clears out any irritants attached to the respiratory lining.\n", "A cough can be the result of a respiratory tract infection such as the common cold, acute bronchitis, pneumonia, pertussis, or tuberculosis. In the vast majority of cases, acute coughs, i.e. coughs shorter than 3 weeks, are due to the common cold. In people with a normal chest X-ray, tuberculosis is a rare finding. Pertussis is increasingly being recognised as a cause of troublesome coughing in adults.\n", "The syndrome was first described and named in 1893 by Henri Huchard, a French cardiologist, who called it \"précordialgie\" (from the latin \"praecordia\" meaning \"before the heart\"), or \"Syndrôme de Huchard\" (\"Huchard syndrome\"). The term \"precordial\" had entered the French medical lexicon with the 1370 translation of Guy de Chauliac's \"Chirurgia magna\". Previously, the Latin term \"\"praecordia\"\" had been used to refer to the diaphragm, a sense now obsolete.\n", "Section::::Diagnosis.\n\nThere are 3 main types of chronic cough which are the following:\n\nBULLET::::- Upper airway cough syndrome is the most common cause of chronic coughing. It is diagnosed when the secretion of excess mucus from the nose / sinus drains into the pharynx or the back of the throat causing an induced cough.\n\nBULLET::::- Asthma is the main way to identify the chronic cough as a cause from asthma is that the airflow is obstructed when coughing causes a shortness of breath, wheezing, dyspnea and coughing.\n", "BULLET::::- Scope tests is used if the above tests are not able to diagnose the chronic cough, a special test may be used involving a thin, flexible tube which contains a light and camera. This is then inserted within the patient through the respiratory tract. 
A bronchoscope is used for the lungs and air passages, whilst a biopsy is used for the linings of your airway. Additionally, a rhinoscope can be used to examine the upper airway tract.\n\nBULLET::::- Children are typically diagnosed with chest x-rays or spirometry\n", "In a small portion of individuals, the auricular nerve is the afferent limb of the Ear-Cough or Arnold Reflex. Physical stimulation of the external acoustic meatus innervated by the auricular nerve elicits a cough, much like the other cough reflexes associated with the vagus nerve. Rarely, on introduction of speculum in the external ear, patients have experienced syncope due to the stimulation of the auricular branch of the vagus nerve.\n\nSection::::Clinical application.\n\nThis nerve may be stimulated as a diagnostic or therapeutic technique\n", "Stimulation of the auricular branch of the vagus nerve supplying the ear may also elicit a cough. This is known as Arnold's reflex. Respiratory muscle weakness, tracheostomy, or vocal cord pathology (including paralysis or anesthesia) may prevent effective clearing of the airways.\n\nSection::::Dysfunction.\n", "BULLET::::6. Increased need for oxygen on the ventilator\n\nBULLET::::7. Chest X-rays: at least two serial x-rays showing sustained or worsening shadowing (infiltrates or consolidations)\n\nBULLET::::8. Positive cultures that were obtained directly from the lung environment, such as from the trachea or bronchioles\n\nAs an example, some institutions may require one clinical symptoms such as shortness of breath, one clinical sign such as fever, plus evidence on chest xray and in tracheal cultures.\n", "A different less studied infection found in mechanically ventilated people is ventilator-associated tracheobronchitis (VAT). As with VAP, tracheobronchial infection can colonise the trachea and travel to the bronchi. VAT may be a risk factor for VAP.\n\nSection::::Signs and symptoms.\n", "The term \"brown lung\" is a misnomer, as the lungs of affected individuals are not brown.\n\nSection::::Symptoms.\n\nBULLET::::- Breathing difficulties\n\nBULLET::::- Chest tightness\n\nBULLET::::- Wheezing\n\nBULLET::::- Cough\n\nByssinosis can ultimately result in narrowing of the airways, lung scarring and death from infection or respiratory failure.\n\nSection::::Diagnosis.\n\nPatient history should reveal exposure to cotton, flax, hemp, or jute dust. Diagnostic tests include a lung function test and a chest x ray or CT scan.\n", "The cough reflex has both sensory (afferent) mainly via the vagus nerve and motor (efferent) components. Pulmonary irritant receptors (cough receptors) in the epithelium of the respiratory tract are sensitive to both mechanical and chemical stimuli. The bronchi and trachea are so sensitive to light touch that slight amounts of foreign matter or other causes of irritation initiate the cough reflex. The larynx and carina are especially sensitive. Terminal bronchioles and even the alveoli are sensitive to chemical stimuli such as sulfur dioxide gas or chlorine gas. Rapidly moving air usually carries with it any foreign matter that is present in the bronchi or trachea. Stimulation of the cough receptors by dust or other foreign particles produces a cough, which is necessary to remove the foreign material from the respiratory tract before it reaches the lungs.\n" ]
[ "A cough can move into your chest." ]
[ "Initial common cold symptoms occur in the sinuses, nose, and throat; subsequently mucus drains down the throat, causing one to cough to clear it out, which can give the sensation that the cold has moved to the chest." ]
[ "false presupposition" ]
[ "A cough can move into your chest.", "A cough can move into your chest." ]
[ "normal", "false presupposition" ]
[ "Initial common cold symptoms occur in the sinuses, nose, and throat; subsequently mucus drains down the throat, causing one to cough to clear it out, which can give the sensation that the cold has moved to the chest.", "Initial common cold symptoms occur in the sinuses, nose, and throat; subsequently mucus drains down the throat, causing one to cough to clear it out, which can give the sensation that the cold has moved to the chest." ]
2018-00063
Why is liver cancer a big problem, considering the liver can regenerate? Couldn't you just have the afflicted portion removed?
Surgery is the only curative treatment for liver cancer. The problem is that it's only effective at early stages, before the cancer has spread through the liver or to other organs. The liver has an excellent blood supply, so metastases (spread) are common. Most liver cancers are not found at an early stage; only ~30% are. In patients where the cancer can't be cut out but hasn't spread to other organs, transplantation is an option. Selecting the patients for whom a transplant is most likely to be successful is controversial. In addition, liver cancer usually arises from sustained severe liver damage (most frequently cirrhosis caused by alcohol, hepatitis C, or fatty liver disease in the West), and this damage impairs liver regeneration and makes surgery much more dangerous and less likely to succeed. This [review]( URL_0 ) is a bit old now but still good and hopefully relatively easy to understand.
[ "Primary liver cancer, or hepatocellular carcinoma (HCC) is the most common form of liver cancer, responsible for about 90% of the primary malignant liver tumours in adults. Liver cancer is the sixth most common cancer in the world and the third leading cause of cancer-related deaths globally. More than 600,000 cases of liver cancer are diagnosed worldwide each year. This comprises approximately 19,000 in the US, 54,000 in Europe and 390,000 in China, Korea and Japan. The incidence of HCC is increasing due to increased rates of chronic infection with Hepatitis B and Hepatitis C in Asia. Other risk factors include iron overload, alcoholic cirrhosis and some congenital disorders. Five year survival rates for liver cancer patients are low relative to other cancers.\n", "At presentation, 20-25% of patients will have clinically detectable liver metastases and up to 50% of all patients will develop liver metastases after re-section of the primary tumour within three years of follow up. Of those patients with metastatic liver disease, approximately 25% (on average) are eligible for liver re-section surgery, which represents the only potential cure available to patients. The remainder are eligible for alternative forms of treatment, namely chemotherapy and other technologies including SIR-Spheres microspheres.\n\nSection::::Research and development.\n", "Liver cancer death rates for adults aged 25 and over increased 43 percent from 7.2 per 100,000 U.S. standard population in 2000 to 10.3 in 2016. Liver cancer death rates increased 43 percent from 10.5 in 2000 to 15.0 in 2016 for men and 40 percent from 4.5 to 6.3 for women. The death rate for men was between 2.0–2.5 times the rate for women throughout this\n\nperiod.\n\nSection::::Research.\n\nHepcortespenlisimut-L is an oral immunotherapy that is going through a phase 3 clinical trial for HCC.\n\nSection::::See also.\n\nBULLET::::- Timeline of liver cancer\n\nSection::::External links.\n", "Section::::The medical problem.:Metastatic colorectal cancer.\n\nColorectal cancer (CRC), also called colon cancer or large bowel cancer, is the third leading cause of cancer-related death in the western world. An estimated 1.6 million people are diagnosed with the disease worldwide every year. An estimated 50% of CRC patients will show liver metastases. It is this form of metastatic colorectal cancer (mCRC) Sirtex targets with SIR-Spheres microspheres. It has been estimated that in 30-40% of patients with advanced disease, the liver is the only site of spread.\n", "Of late there has been renewed interest in liver transplantation from deceased donors along with add on therapy. Prognosis remains poor.\n\nSection::::Epidemiology.\n\nApproximately 15,000 new cases of liver and biliary tract carcinoma are diagnosed annually in the United States, with roughly 10% of these cases being Klatskin tumors. Cholangiocarcinoma accounts for approximately 2% of all cancer diagnoses, with an overall incidence of 1.2/100,000 individuals. Two-thirds of cases occur in patients over the age of 65, with a near ten-fold increase in patients over 80 years of age. The incidence is similar in both men and women. \n", "Secondary prevention includes both cure of the agent involved in the formation of cancer (carcinogenesis) and the prevention of carcinogenesis if this is not possible. Cure of virus-infected individuals is not possible, but treatment with antiviral drugs such as interferon can decrease the risk of liver cancer. 
Chlorophyllin may have potential in reducing the effects of aflatoxin.\n\nTertiary prevention includes treatments to prevent the recurrence of liver cancer. These include the use of chemotherapy drugs and antiviral drugs.\n\nSection::::Treatment.\n\nSection::::Treatment.:Hepatocellular carcinoma.\n", "Preventive efforts include immunization against hepatitis B and treating those infected with hepatitis B or C. Screening is recommended in those with chronic liver disease. Treatment options may include surgery, targeted therapy and radiation therapy. In certain cases, ablation therapy, embolization therapy or liver transplantation may be used. Small lumps in the liver may be closely followed.\n", "Percutaneous ablation is the only non-surgical treatment that can offer cure. There are many forms of percutaneous ablation, which consist of either injecting chemicals into the liver (ethanol or acetic acid) or producing extremes of temperature using radio frequency ablation, microwaves, lasers or cryotherapy. Of these, radio frequency ablation has one of the best reputations in HCC, but the limitations include inability to treat tumors close to other organs and blood vessels due to heat generation and the heat sink effect, respectively. In addition, long-term of outcomes of percutaneous ablation procedures for HCC have not been well studied. In general, surgery is the preferred treatment modality when possible.\n", "Section::::Epidemiology.:India.\n\nThe number of new cases of hepatocellular carcinoma per year in India in males is about 4.1 and for females 1.2 per 100,000. It typically occurs between 40 and 70 years of age.\n\nSection::::Epidemiology.:United Kingdom.\n\nLiver cancer is the eighteenth most common cancer in the UK (around 4,300 people were diagnosed with liver cancer in the UK in 2011), and it is the twelfth most common cause of cancer death (around 4,500 people died of the disease in 2012).\n\nSection::::Epidemiology.:United States.\n", "Infection by some hepatitis viruses, especially hepatitis B and hepatitis C, can induce a chronic viral infection that leads to liver cancer in about 1 in 200 of people infected with hepatitis B each year (more in Asia, fewer in North America), and in about 1 in 45 of people infected with hepatitis C each year. People with chronic hepatitis B infection are more than 200 times more likely to develop liver cancer than uninfected people. Liver cirrhosis, whether from chronic viral hepatitis infection or alcohol abuse or some other cause, is independently associated with the development of liver cancer, and the combination of cirrhosis and viral hepatitis presents the highest risk of liver cancer development. Because chronic viral hepatitis is so common, and liver cancer so deadly, liver cancer is one of the most common causes of cancer-related deaths in the world, and is especially common in East Asia and parts of sub-Sarahan Africa.\n", "Globally, , liver cancer resulted in 754,000 deaths, up from 460,000 in 1990, making it the third leading cause of cancer death after lung and stomach. In 2012, it represented 7% of cancer diagnoses in men, the 5th most diagnosed cancer that year. Of these deaths 340,000 were secondary to hepatitis B, 196,000 were secondary to hepatitis C, and 150,000 were secondary to alcohol. HCC, the most common form of liver cancer, shows a striking geographical distribution. China has 50% of HCC cases globally, and more than 80% of total cases occur in sub-Saharan Africa or in East-Asia due to hepatitis B virus. 
Cholangiocarcinoma also has a significant geographical distribution, with Thailand showing the highest rates worldwide due to the presence of liver fluke.\n", "Section::::Research.\n\nResearch in liver cancer has generally received low priority for federal funding in this country, contributing to the lack of effective treatment for chronically infected individuals. The ALC research program is looking for novel approaches to increase the efficacy of diagnosis, prognosis, and treatment through the development of a liver cancer research program with an emphasis on liver cancer genomics, biomarkers, molecular targets, and investigational anti-tumor agents.\n", "PALF (Pediatric Acute Liver Failure): PALF study is the primary multi-national shared study aimed at characterizing, and researching strategies for children, adolescents, and infants who are in the presence of Acute Liver failure. ALF (Acute Liver Failure) happens when numerous cells in the liver expire, or halt in performance in a small stage of time. ALF builds at a very rapid pace, and utilizes urgent care. This multimillion-dollar study includes 19 centers in three countries.\n", "Section::::Causes.:Other causes in adults.\n\nBULLET::::- High grade dysplastic nodules are precancerous lesions of the liver. Within two years, there is a risk for cancer arising from these nodules of 30-40%.\n\nBULLET::::- Obesity has emerged as an important risk factor, as it can lead to steatohepatitis.\n\nBULLET::::- Diabetes increases the risk for HCC.\n\nBULLET::::- Smoking increases the risk for HCC compared to non-smokers and previous smokers.\n\nBULLET::::- There is around 5-10% lifetime risk of cholangiocarcinoma in people with primary sclerosing cholangitis.\n", "Potential candidates for liver transplantation for treatment of HCC are evaluated and re-evaluated periodically by repeated imaging tests as they wait for donor organ availability. So long as the cancer does not exceed Milan criteria, the person may remain a candidate for transplantation. Thus, accurate and consistent evaluation of the disease burden is critical. For example, if someone with three small HCC lesions develops a new fourth liver nodule, an unequivocal diagnosis of this lesion as cancerous would disqualify this person from transplant candidacy. \n", "The risks of liver transplantation extend beyond risk of the procedure itself. The immunosuppressive medication required after surgery to prevent rejection of the donor liver also impairs the body's natural ability to combat dysfunctional cells. If the tumor has spread undetected outside the liver before the transplant, the medication effectively increases the rate of disease progression and decreases survival. With this in mind, liver transplant \"can be a curative approach for patients with advanced HCC without extrahepatic metastasis\". 
Patient selection is considered a major key for success.\n\nSection::::Treatment.:Ablation.\n",
"BULLET::::- Liver fluke infection increases the risk for cholangiocarcinoma, and this is the reason why Thailand has particularly high rates of this cancer.\n\nSection::::Causes.:Children.\n\nIncreased risk for liver cancer in children can be caused by Beckwith–Wiedemann syndrome (associated with hepatoblastoma), familial adenomatous polyposis (associated with hepatoblastoma), low birth weight (associated with hepatoblastoma), Progressive familial intrahepatic cholestasis (associated with HCC) and Trisomy 18 (associated with hepatoblastoma).\n\nSection::::Diagnosis.\n",
"Prevention of cancers can be separated into primary, secondary, and tertiary prevention. Primary prevention preemptively reduces exposure to a risk factor for liver cancer. One of the most successful primary liver cancer preventions is vaccination against hepatitis B. Vaccination against the hepatitis C virus is currently unavailable. Other forms of primary prevention are aimed at limiting transmission of these viruses by promoting safe injection practices, screening blood donation products, and screening high-risk asymptomatic individuals. Aflatoxin exposure can be avoided by post-harvest intervention to discourage mold, which has been effective in west Africa. Reducing alcohol abuse, obesity, and diabetes would also reduce rates of liver cancer. Diet control in hemochromatosis could decrease the risk of iron overload, decreasing the risk of cancer.\n",
"Among colorectal cancer patients, 15-25% will have liver metastases already when the colorectal cancer is discovered, and another 25-50% will develop them in the three years after resection of their primary cancer. Of patients who die from metastasised colorectal cancer, 20% have metastasis in the liver alone.\n\nSurgical resection of liver metastases from colorectal cancer has been found to be safe and cost-effective.\n",
"Many cancers found in the liver are not true liver cancers, but are cancers from other sites in the body that have spread to the liver (known as metastases). Frequently, the site of origin is the gastrointestinal tract, since the liver is close to many of these metabolically active, blood-rich organs near to blood vessels and lymph nodes (such as pancreatic cancer, stomach cancer, colon cancer and carcinoid tumors mainly of the appendix), but also from breast cancer, ovarian cancer, lung cancer, renal cancer, prostate cancer.\n\nSection::::Prevention.\n",
"HCC remains associated with a high mortality rate, in part related to initial diagnosis commonly at an advanced stage of disease. As with other cancers, outcomes are significantly improved if treatment is initiated earlier in the disease process. Because the vast majority of HCC occurs in people with certain chronic liver diseases, especially those with cirrhosis, liver screening is commonly advocated in this population. Specific screening guidelines continue to evolve over time as evidence of its clinical impact becomes available. In the United States, the most commonly observed guidelines are those published by the American Association for the Study of Liver Diseases, which recommends screening people with cirrhosis with ultrasound every 6 months, with or without measurement of blood levels of tumor marker alpha-fetoprotein (AFP). Elevated levels of AFP are associated with active HCC disease, although inconsistently reliable. At levels above 20, sensitivity is 41-65% and specificity is 80-94%.
However, at levels above 200, sensitivity is 31% and specificity is 99%.\n",
"Focal nodular hyperplasia (FNH) is the second most common tumor of the liver. This tumor is the result of a congenital arteriovenous malformation hepatocyte response. This process is one in which all normal constituents of the liver are present, but the pattern by which they are presented is abnormal. Even though those conditions exist the liver still seems to perform in the normal range. Other types include nodular regenerative hyperplasia and hamartoma.\n\nSection::::Workup.\n",
"Cholangiocarcinoma is rare in the Western world, with estimates of it occurring in 0.5–2 people per 100,000 per year. Rates are higher in South-East Asia where liver flukes are common. Rates in parts of Thailand are 60 per 100,000 per year. It typically occurs in people in their 70s, however in those with primary sclerosing cholangitis it often occurs in the 40s. Rates of cholangiocarcinoma within the liver in the Western world have increased.\n\nSection::::Signs and symptoms.\n",
"Section::::Signs and symptoms.\n\nBecause liver cancer is an umbrella term for many types of cancer, the signs and symptoms depend on what type of cancer is present. Cholangiocarcinoma is associated with sweating, jaundice, abdominal pain, weight loss and liver enlargement. Hepatocellular carcinoma is associated with abdominal mass, abdominal pain, emesis, anemia, back pain, jaundice, itching, weight loss and fever.\n\nSection::::Causes.\n\nSection::::Causes.:Viral infection.\n",
"There are also many pediatric liver diseases, including biliary atresia, alpha-1 antitrypsin deficiency, Alagille syndrome, progressive familial intrahepatic cholestasis, Langerhans cell histiocytosis and hepatic hemangioma, a benign tumour, the most common type of liver tumour, thought to be congenital. A genetic disorder causing multiple cysts to form in the liver tissue, usually in later life, and usually asymptomatic, is polycystic liver disease. Diseases that interfere with liver function will lead to derangement of these processes. However, the liver has a great capacity to regenerate and has a large reserve capacity. In most cases, the liver only produces symptoms after extensive damage.\n" ]
[ "Liver cancer can be treated by surgery and regeneration.", "Liver cancer can be addressed easily by removing the afflicted area." ]
[ "Liver cancer can be treated by surgery only in the early stages.", "The issue with removing liver cancer is it needs to be removed quite early before it spreads accross the liver and other organs, due to the liver having an excellent blood supply, it spreads quite quickly." ]
[ "false presupposition" ]
[ "Liver cancer can be treated by surgery and regeneration.", "Liver cancer can be addressed easily by removing the afflicted area." ]
[ "false presupposition", "false presupposition" ]
[ "Liver cancer can be treated by surgery only in the early stages.", "The issue with removing liver cancer is it needs to be removed quite early before it spreads accross the liver and other organs, due to the liver having an excellent blood supply, it spreads quite quickly." ]
2018-08813
How did very early Homo sapiens survive during the end of the latest ice age - before the current Holocene epoch?
Humans had already mastered fire and fur clothing by then, and they lived in small enough numbers and hunted megafauna like mammoths so that they wouldn't starve. Many of them also had access to fish.
[ "and it is unlikely that there was a land bridge during the Pleistocene.\n\nSection::::Causes for dispersal.\n\nSection::::Causes for dispersal.:Climate change and hominin flexibility.\n\nFor a given species in a given environment, available resources will limit the number of individuals that can survive indefinitely. \n", "The Old World tropics were relatively spared by the Late Pleistocene extinctions. Sub-Saharan Africa and southern Asia are the only regions that have terrestrial mammals weighing over 1000 kg today. However, there are indications of megafaunal extinction events throughout the Pleistocene, particularly in Africa two million years ago, which coincide with key stages of human evolution and climatic trends. The centre of human evolution and expansion, Africa and Asia were inhabited by advanced hominids by 2mya, with \"Homo habilis\" in Africa, and \"Homo erectus\" on both continents. By the advent and proliferation of \"Homo sapiens\" circa 315,000 BCE, dominant species included \"Homo heidelbergensis\" in Africa, the Denisovans and Neanderthals (fellow \"H. heidelbergensis\" descendants) in Eurasia, and \"Homo erectus\" in Eastern Asia. Ultimately, on both continents, these groups and other populations of Homo were subsumed by successive radiations of \"H. sapiens\". There is evidence of an early migration event 268,000 BCE and later within Neanderthal genetics, however the earliest dating for \"H. sapiens\" inhabitation is 118,000 BCE in Arabia, China and Israel, and 71,000 BCE in Indonesia. Additionally, not only have these early Asian migrations left a genetic mark on modern Papuan populations, the oldest known pottery in existence was found in China, dated to 18,000 BCE. Particularly during the late Pleistocene, megafaunal diversity was notably reduced from both these continents, often without being replaced by comparable successor fauna. Climate change has been explored as a prominent cause of extinctions in Southeast Asia.\n", "From the point of view of human archaeology, the last glacial period falls in the Paleolithic and early Mesolithic periods. When the glaciation event started, \"Homo sapiens\" were confined to lower latitudes and used tools comparable to those used by Neanderthals in western and central Eurasia and by \"Homo erectus\" in Asia. Near the end of the event, \"Homo sapiens\" migrated into Eurasia and Australia. Archaeological and genetic data suggest that the source populations of Paleolithic humans survived the last glacial period in sparsely wooded areas and dispersed through areas of high primary productivity while avoiding dense forest cover. \n", "One million years after its dispersal, \"H. erectus\" was diverging into new species. \"H. erectus\" is a chronospecies and was never extinct, so that its \"late survival\" is a matter of taxonomic convention. Late forms of \"H. erectus\" are thought to have survived until after about 0.5 million ago to 143,000 years ago at the latest, with derived forms classified as \"H. antecessor\" in Europe around 800,000 years ago and \"H. heidelbergensis\" in Africa around 600,000 years ago. \"H. heidelbergensis\" in its turn spread across East Africa (\"H. rhodesiensis\") and to Eurasia, where it gave rise to Neanderthals and Denisovans.\n", "The earliest well-dated Eurasian \"H. erectus\" site is Dmanisi in Georgia, securely dated to 1.8 Ma.1.85-1.78 Ma 95% CI. /ref A skull found at Dmanisi is evidence for caring for the old. 
The skull shows that this \"Homo erectus\" was advanced in age and had lost all but one tooth years before death, and it is perhaps unlikely that this hominid would have survived alone. It is not certain, however, that this is sufficient proof for caring – a partially paralysed chimpanzee at the Gombe reserve survived for years without help.\n", "50,000 years ago, Homo sapiens migrated out of Africa. They began replacing other Hominins in Asia. They also began replacing Neanderthals in Europe. However some of the Homo sapiens and Neanderthals interbred. Currently, persons of European descent are two to four percent Neanderthal. With the exception of this small amount of Neanderthal DNA that exists today, Neanderthals went extinct 30,000 years ago.\n\nThe last glacial maximum ran from 26,500 years ago to 20,000 years ago. Although different ice sheets reached maximum extent at somewhat different times, this was the time when ice sheets overall were at maximum extent.\n", "During this time, the Neanderthals were slowly being displaced. Because it took so long for Europe to be occupied, it appears that humans and Neanderthals may have been constantly competing for territory. The Neanderthals had larger brains, and were larger overall, with a more robust or heavily built frame, which suggests that they were physically stronger than modern \"Homo sapiens\". Having lived in Europe for 200,000 years, they would have been better adapted to the cold weather. The anatomically modern humans known as the Cro-Magnons, with widespread trade networks, superior technology and bodies likely better suited to running, would eventually completely displace the Neanderthals, whose last refuge was in the Iberian peninsula. After about 25,000 years ago the fossil record of the Neanderthals ends, indicating extinction. The last known population lived around a cave system on the remote south-facing coast of Gibraltar from 30,000 to 24,000 years ago.\n", "Both \"Homo erectus\" and \"Homo neanderthalensis\" became extinct by the end of the Paleolithic. Descended from \"Homo Sapiens\", the anatomically modern \"Homo sapiens sapiens\" emerged in eastern Africa  BP, left Africa around 50,000 BP, and expanded throughout the planet. Multiple hominid groups coexisted for some time in certain locations. \"Homo neanderthalensis\" were still found in parts of Eurasia  BP years, and engaged in an unknown degree of interbreeding with \"Homo sapiens sapiens\". DNA studies also suggest an unknown degree of interbreeding between \"Homo sapiens sapiens\" and \"Homo sapiens denisova\".\n", "\"H. sapiens\" dispersed from Africa in several waves, from possibly as early as 250,000 years ago, and certainly by 130,000 years ago, the so-called Southern Dispersal beginning about 70,000 years ago leading to the lasting colonisation of Eurasia and Oceania by 50,000 years ago.\n\nBoth in Africa and Eurasia, \"H. sapiens\" met with and interbred with archaic humans. Separate archaic (non-\"sapiens\") human species are thought to have survived until around 40,000 years ago (Neanderthal extinction), with possible late survival of hybrid species as late as 12,000 years ago (Red Deer Cave people).\n\nSection::::Names and taxonomy.\n", "It is estimated that the average life span of hominids on the African savanna between 4,000,000 and 200,000 years ago was 20 years. This means that the population would be completely renewed about five times per century, assuming that infant mortality has already been accounted for. 
It is further estimated that the population of hominids in Africa fluctuated between 10,000 and 100,000 individuals, thus averaging about 50,000 individuals. Roughly multiplying 40,000 centuries by 50,000 to 500,000 individuals per century, yields a total of 2 billion to 20 billion hominids, or an average estimate of about 10 billion hominids that lived during that approximately 4,000,000 year time span.\n", "\"H. heidelbergensis\", Neanderthals and Denisovans expanded north beyond the 50th parallel (Eartham Pit, Boxgrove 500kya, Swanscombe Heritage Park 400kya, Denisova Cave 50 kya). It has been suggested that late Neanderthals may even have reached the boundary of the Arctic, by c. 32,000 years ago, when they were being displaced from their earlier habitats by \"H. sapiens\", based on 2011 excavations at the site of Byzovaya in the Urals (Komi Republic, ).\n", "In the beginning of the last ice age a supervolcano erupted in Indonesia. Theory states the effects of the eruption caused global climatic changes for many years, effectively obliterating most of the earlier cultures. Y-chromosomal Adam (90000 - 60000 BP, dated data) was initially dated here. Neanderthals survived this abrupt change in the environment, so it's possible for other human groups too. According to the theory humans survived in Africa, and began to resettle areas north, as the effects of the eruption slowly vanished. Upper Paleolithic revolution began after this extreme event, the earliest finds are dated c.50000 BCE.\n", "Short and repetitive migrations of archaic humans before 1 million years ago suggest that their residence in Europe was not permanent at the time. Colonisation of Europe in prehistory was not achieved in one immigrating wave, but instead through multiple dispersal events. Most of these instances in Eurasia were limited to 40th parallel north. Besides the findings from East Anglia, the first constant presence of humans in Europe begins 500,000–600,000 years ago. However, this presence was limited to western Europe, not reaching places like the Russian plains, until 200,000–300,000 years ago. The exception to this was discovered in East Anglia, England, where hominids briefly inhabited 700,000 years ago. Prior to arriving in Europe, the source of hominids appeared to be East Africa, where stone tools and hominid fossils are the most abundant and recorded. Arising in Europe at least 400,000 years ago, the Neanderthals would become more stable residents of the continent, until they were displaced by a more recent migration of African hominids, in their new home are referred to as European early modern humans (historically called Cro-Magnon Man), leading to the extinction of Neanderthals about 40,000 years ago.\n", "\"Homo erectus\" emerges just after 2 million years ago. \n\nEarly \"H. erectus\" would have lived face to face with \"H. habilis\" in East Africa for nearly half a million years.\n\nThe oldest \"Homo erectus\" fossils appear almost contemporaneously, shortly after two million years ago, both in Africa and in the Caucasus. \n", "This is the carrying capacity. Upon reaching this threshold, individuals may find it easier to gather resources in the poorer yet less exploited peripheral environment than in the preferred habitat. \"Homo habilis\" could have developed some baseline behavioural flexibility prior to its expansion into the peripheries (such as encroaching into the predatory guild). 
This flexibility could then have been positively selected and amplified, leading to \"Homo erectus\" adaptation to the peripheral open habitats. A new and environmentally flexible hominin population could have come back to the old niche and replaced the ancestral population. Moreover, some step-wise shrinking of the woodland and the associated reduction of hominin carrying capacity in the woods around 1.8 Ma, 1.2 Ma, and 0.6 Ma would have stressed the carrying capacity's pressure for adapting to the open grounds.\n", "Section::::\"Homo sapiens\".:Holocene migrations.\n\nThe Holocene is taken to begin 12,000 years ago, after the end of the Last Glacial Maximum.\n\nDuring the Holocene climatic optimum, beginning about 9,000 years ago, human populations which had been geographically confined to refugia began to migrate.\n\nBy this time, most parts of the globe had been settled by \"H. sapiens\"; however, large areas that had been covered by glaciers were now re-populated.\n\nThis period sees the transition from the Mesolithic to the Neolithic stage throughout the temperate zone.\n", "All Paleolithic sites in the Central Balkans, including Pešturina, have the noticeable absence of the Aurignacian layers. That points to the theory that the expansion of the early modern humans into Europe occurred via the Danube corridor, which allowed for the small Neanderthal communities to survive beyond 40,000 BP in some isolated pockets. Based on the dating of the animal remains, and comparing it to the corresponding tools, Pešturina is the first site in the region with the quasi-continuous habitation from 102,000 BP+ 5,000 to 39,000 BP+ 3,000.\n", "Today, all humans belong to one population of \"Homo sapiens sapiens\", which is individed by species barrier. However, according to the \"Out of Africa\" model this is not the first species of hominids: the first species of genus \"Homo\", \"Homo habilis\", evolved in East Africa at least 2 Ma, and members of this species populated different parts of Africa in a relatively short time. \"Homo erectus\" evolved more than 1.8 Ma, and by 1.5 Ma had spread throughout the Old World.\n", "The first possible indications of habitation by hominins are the 7.2 million year old finds of \"Graecopithecus\", and 5.7 million year old footprints in Crete — however established habitation is noted in Georgia from 1.8 million years ago, proceeded to Germany and France, by \"Homo erectus\". Prominent co-current and subsequent species include \"Homo antecessor\", \"Homo cepranensis\", \"Homo heidelbergensis\", Neanderthals and Denisovans, preceding habitation by Homo sapiens circa 38,000 BCE. Extensive contact between African and Eurasian Homo groups is known at least in part through transfers of stone-tool technology in 500,000 BCE and again at 250,000 BCE.\n", "800,000 years ago, the short-faced bear (\"Arctodus simus\") became abundant in North America.\n\nThe evolution of the \"Homo heidelbergensis\" happened 600,000 years ago.\n\nThe evolution of Neanderthals occurred 350,000 years ago.\n\n300,000 years ago, \"Gigantopithicus\" went extinct.\n\n250,000 years ago in Africa were the first anatomically modern humans.\n\nSection::::Last Glacial Period.\n\nThe last glacial period began 115,000 years ago and ended 11,700 years ago. This time period saw the great advancement of polar ice sheets into the middle latitudes of the Northern Hemisphere.\n", "An important difference between Europe and other parts of the inhabited world was the northern latitude. 
Archaeological evidence suggests humans, whether Neanderthal or Cro-Magnon, reached sites in Arctic Russia by 40,000 years ago.\n", "The evolution of anatomically modern humans took place during the Pleistocene. In the beginning of the Pleistocene \"Paranthropus\" species were still present, as well as early human ancestors, but during the lower Palaeolithic they disappeared, and the only hominin species found in fossilic records is \"Homo erectus\" for much of the Pleistocene. Acheulean lithics appear along with \"Homo erectus\", some 1.8 million years ago, replacing the more primitive Oldowan industry used by \"A. garhi\" and by the earliest species of \"Homo\".\n\nThe Middle Paleolithic saw more varied speciation within \"Homo\", including the appearance of \"Homo sapiens\" about 200,000 years ago.\n", "Generally small and widely-dispersed fossil sites suggest that Neanderthals lived in less numerous and socially more isolated groups than contemporary \"Homo sapiens\". Tools such as Mousterian flint stone flakes and Levallois points are remarkably sophisticated from the outset, yet they have a slow rate of variability and general technological inertia is noticeable during the entire fossil period. Artifacts are of utilitarian nature, and symbolic behavioral traits are undocumented before the arrival of modern humans in Europe around 40,000 to 35,000 years ago.\n", "Other archaic human species are assumed to have spread throughout Africa by this time, although the fossil record is sparse. Their presence is assumed based on traces of admixture with modern humans found in the genome of African populations. \"Homo naledi\", discovered in South Africa in 2013 and tentatively dated to about 300,000 years ago, may represent fossil evidence of such an archaic human species.\n", "Around 9 million years ago most of Europe's hominid species fell victim to the Vallesian crisis, an extinction event caused by the disappearance of the continent's forests. Some hominid species survived the event: \"Orepithecus\", which became isolated in forest refugia; and \"Ouranopithecus\", which adapted to the open environments of the late Miocene. However, both were extinct by 7 million years ago.\n" ]
[ "It is improbable that early homo sapiens could've survived the latest ice age." ]
[ "Homosapiens had mastered fire by then, they also knew of many other survival tactics allowing them to survive." ]
[ "false presupposition" ]
[ "It is improbable that early homo sapiens could've survived the latest ice age.", "It is improbable that early homo sapiens could've survived the latest ice age." ]
[ "normal", "false presupposition" ]
[ "Homosapiens had mastered fire by then, they also knew of many other survival tactics allowing them to survive.", "Homosapiens had mastered fire by then, they also knew of many other survival tactics allowing them to survive." ]
2018-18616
Why isn't libertarianism well received in politics?
Libertarianism (in the right-wing North American sense) has often been compatible with the Republican Party (US) or the larger right-wing parties in Canada (the modern Conservative Party is a merger between the old Progressive Conservatives and the Alliance Party, who were actually the Official Opposition for a while back when they were called the Reform Party). It tends to be easier for right-wing ideologies to find common ground, despite their differences, than left-wing ones (as a leftist, my least favourite thing about leftism is the tendency towards ideological infighting). Not many people know this, but the first people to call themselves "libertarians" were French socialists in the 1850s... it wasn't until the 1950s that laissez-faire capitalists started calling themselves that. The most well-known "Lib-soc" in the modern day is probably Noam Chomsky. URL_0
[ "As was true historically, there are far more libertarians in the United States than those who belong to the party touting that name. In the United States, libertarians may emphasize economic and constitutional rather than religious and personal policies, or personal and international rather than economic policies such as the Tea Party movement (founded in 2009) which has become a major outlet for Libertarian Republican ideas, especially rigorous adherence to the Constitution, lower taxes and an opposition to a growing role for the federal government in health care. However, polls show that many people who identify as Tea Party members do not hold traditional libertarian views on most social issues and tend to poll similarly to socially conservative Republicans. Eventually during the 2016 presidential election, many Tea Party members abandoned more libertarian-leaning views in favor of Donald Trump and his right-wing populism. Additionally, the Tea Party was considered to be a key force in Republicans reclaiming control of the House of Representatives in 2010.\n", "Among others, former Arizona Senator Barry Goldwater and former Texas Congressman Ron Paul popularized libertarian economics and rhetoric in opposition to state interventionism and worked to pass some reforms. California Governor Ronald Reagan appealed to American libertarians in a 1975 interview with \"Reason\" when he said: \"I believe the very heart and soul of conservatism is libertarianism\". However, many libertarians are ambivalent about Reagan's legacy as President due its social conservatism and the fact that Reagan turned the United States' big trade deficit into debt and that under the Reagan administration the United States became a debtor nation for the first time since World War I.\n", "BULLET::::- Justin Amash – Representative from Michigan\n\nBULLET::::- Eric Brakey – State Representative from Maine and 2018 Senate candidate\n\nBULLET::::- Nick Freitas – State Delegate from Virginia and 2018 Senate candidate\n\nBULLET::::- Barry Goldwater – former Senator from Arizona and 1964 presidential candidate\n\nBULLET::::- Gary Johnson – former New Mexico Governor and 2012 and 2016 Libertarian Party presidential candidate\n\nBULLET::::- Mike Lee – Senator from Utah\n\nBULLET::::- Thomas Massie – Representative from Kentucky\n\nBULLET::::- Rand Paul – Senator from Kentucky and 2016 presidential candidate\n\nBULLET::::- Ron Paul – former Representative from Texas and 1988, 2008 and 2012 presidential candidate\n", "California Governor Ronald Reagan appealed to American libertarians in a 1975 interview with \"Reason\" when he said: \"I believe the very heart and soul of conservatism is libertarianism\". However, President Reagan turned the United States' big trade deficit into debt and the United States became a debtor nation for the first time since World War I under the Reagan administration.\n\nEdward Feser emphasized that libertarianism does not require individuals to reject traditional conservative values. Libertarianism supports the ideas of liberty, privacy and ending the war on marijuana at the legal level without changing personal values.\n\nSection::::Philosophy.:Economics.\n", "Through twenty polls on this topic spanning thirteen years, Gallup found that voters who are libertarian on the political spectrum ranged from 17–23% of the American electorate. 
This includes members of the Libertarian Party, Republican Party (see Libertarian Republicans) and Democratic Party (see Libertarian Democrats) as well as independents. The largest libertarian currents present in the Democratic Party are neoclassical liberalism and neo-libertarianism while the majority strand in the Libertarian and Republican parties is right-libertarianism and libertarian conservatism, respectively.\n\nSection::::History.\n", "Libertarians criticism of the IGF tends to be tied to a belief that the writers do not go far enough in advocating for reducing government regulations that limit citizens personal and economic life. Yet, a second source of criticism, among libertarians is that the IGF writers do not advocate election law reforms that would allow the Libertarian Party to freely compete for votes in elections. \n", "In 2012 the only Libertarian on the ballot was Gary Johnson for US President, who finished with 42,202 (1.4%) of the statewide total.\n", "Section::::Philosophy.:Capital punishment.\n\nRight-libertarians are divided on capital punishment, also known as the death penalty. Those opposing it generally see it as an excessive abuse of state power which is by its very nature irreversible, with American libertarians possibly seeing it also in conflict with the Bill of Rights ban on \"cruel and unusual punishment\". Some libertarians who believe capital punishment can be just under certain circumstances may oppose execution based on practical considerations. Those who support the death penalty do so on self-defense or retributive justice grounds.\n\nSection::::Philosophy.:Ethics.\n", "In 2013, Michael Lind observed that of the 195 countries in the world, none have fully actualized a society as advocated by right-libertarians: \n\nFurthermore, Lind has criticized right-libertarianism as being incompatible with democracy and apologetic towards autocracy. In response, right-libertarian Warren Redlich argues that the United States \"was extremely libertarian from the founding until 1860, and still very libertarian until roughly 1930\".\n\nBULLET::::- Tacit authoritarianism\n\nThe anarchist tendency known as platformism has been criticized by Situationists, insurrectionaries, synthesis anarchists and others of preserving tacitly statist, authoritarian or bureaucratic tendencies.\n\nSection::::External links.\n\nBULLET::::- Mike Huben's Critiques of Libertarianism (Wiki format)\n", "Until fairly recently, American libertarians have allied politically with modern conservatives over economic issues and gun laws while they are more prone to ally with liberals on other civil liberties issues and non-interventionism. As conservatives increasingly favor protectionism over free and open trade and progressives censorship over free speech, the popular characterization of libertarian policy as economically conservative and socially liberal has been rendered less meaningful. Libertarians may choose to vote for candidates of other parties depending on the individual and the issues they promote. Paleolibertarians have a long-standing affinity with paleoconservatives in opposing United States interventions and promoting decentralization and cultural conservatism.\n", "Criticism of libertarianism includes ethical, economic, environmental, pragmatic and philosophical concerns, although they are mainly related to right-libertarianism, including the view that it has no explicit theory of liberty. 
For instance, it has been argued that \"laissez-faire\" capitalism does not necessarily produce the best or most efficient outcome, nor does its philosophy of individualism and policies of deregulation prevent the abuse of natural resources. \n", "Libertarians support free markets. We defend the right of individuals to form corporations, cooperatives and other types of entities based on voluntary association. We oppose all forms of government subsidies and bailouts to business, labor, or any other special interest. Government should not compete with private enterprise. We assert that disruptive block chain technology remain sovereign and free of regulation as global cyber tools that aim to fight corruption by decentralization. Non-violent technology of cyberspace should be left unhindered.\n\n2.8 Labor Markets\n", "A number of countries have libertarian parties that run candidates for political office. In the United States, the Libertarian Party was formed in 1972 and is the third largest American political party, with 511,277 voters (0.46% of total electorate) registered as Libertarian in the 31 states that report Libertarian registration statistics and Washington, D.C.\n", "Section::::Political parties.:Relationship with the Conservative Party.\n\nIn an opinion piece, Jason Walsh held that the 1980s economic liberalism of Margaret Thatcher was \"libertarianism-lite\" when compared to minimal state views of more modern libertarians which were becoming more popular after ten years of New Labour's \"increasingly authoritarian policies\".\n\nSection::::Political parties.:Relationship with the UK Independence Party.\n", "The libertarian faction has influenced the presidential level as well in the post-Bush era. Alaska Senator and presidential aspirant Mike Gravel left the Democratic Party midway through the 2008 presidential election cycle to seek the Libertarian Party presidential nomination and many anti-war and civil libertarian Democrats were energized by the 2008 and 2012 presidential campaigns of Ron Paul. This constituency has arguably embraced the 2016 presidential campaign of independent Democrat Bernie Sanders for the same reasons. In the state of New Hampshire, libertarians operating from the Free State Project have been elected to various offices running as a mixture of both Republicans and Democrats. A 2015 Reuters poll found that 22% of Democratic voters identified themselves as \"libertarian,\" more than the percentage of Republicans but less than the percentage of independents.\n", "In the 21st century, libertarian groups have been successful in advocating tax cuts and regulatory reform. While some argue that the American public as a whole shifted away from libertarianism following the fall of the Soviet Union, citing the success of multinational organizations such as NAFTA and the increasingly interdependent global financial system, others argue that libertarian ideas have moved so far into the mainstream that many Americans who do not identify as libertarian now hold libertarian views. Texas Congressman Ron Paul's 2008 and 2012 campaigns for the Republican Party presidential nomination were largely libertarian. Paul was affiliated with the libertarian-leaning Republican Liberty Caucus and founded the Campaign for Liberty, a libertarian-leaning membership and lobbying organization. 
His son Rand Paul is a Senator who continues the tradition, albeit more moderately as he has described himself as a constitutional conservative and has both embraced and rejected libertarianism.\n", "BULLET::::- Jack Hunter, radio talk show host (\"The Southern Avenger\"), political commentator, former aide to Rand Paul, editor of Rare Politics – has written of his \"attraction to libertarianism.\" Hunter formerly expressed neo-Confederate views, which libertarian commentator and law professor Ilya Somin criticized in 2013 as inconsistent with libertarianism.\n\nBULLET::::- Glenn Jacobs, Professional Wrestler with WWE and current Republican Mayor of Knoxville, TN.\n\nBULLET::::- Kennedy, TV commentator and former MTV VJ\n", "In the November 2006 mid-term election, the median vote percentage for Libertarians who ran for US House (excluding races with only one major party nominee) was 2.04%; while the median percentage for Greens who ran for that office (again excluding races with only one major party nominee) was 1.41%. Over 13,400,000 votes were cast for Libertarian Party candidates in 2006. In the 2007 general elections, Libertarian Party candidates won 14 elective offices, including an election for mayor of Avis, Pennsylvania.\n\nSection::::Election cycles in the 2000s.:2008.\n", "In 2012, anti-war presidential candidates (Libertarian Republican Ron Paul and Libertarian Party candidate Gary Johnson) raised millions of dollars and garnered millions of votes despite opposition to their obtaining ballot access by Democrats and Republicans. The 2012 Libertarian National Convention, which saw Gary Johnson and James P. Gray nominated as the 2012 presidential ticket for the Libertarian Party, resulted in the most successful result for a third-party presidential candidacy since 2000 and the best in the Libertarian Party's history by vote number. Johnson received 1% of the popular vote, amounting to more than 1.2 million votes. Johnson has expressed a desire to win at least 5 percent of the vote so that the Libertarian Party candidates could get equal ballot access and federal funding, thus subsequently ending the two-party system.\n", "The official Libertarian party platform states: \"Recognizing that abortion is a sensitive issue and that people can hold good-faith views on all sides, we believe that government should be kept out of the matter, leaving the question to each person for their conscientious consideration\". Libertarians have very different opinions on the issue, just like in the general public. Some, like the group Libertarians for Life, consider abortion to be an act of aggression from the government or mother against a fetus. 
Others, like the group Pro-Choice Libertarians, consider denying a woman the right to choose abortion to be an act of aggression from the government against her.\n", "Libertarian socialists in the early 21st century have been involved in the alter-globalization movement, squatter movement; social centers; infoshops; anti-poverty groups such as Ontario Coalition Against Poverty and Food Not Bombs; tenants' unions; housing cooperatives; intentional communities generally and egalitarian communities; anti-sexist organizing; grassroots media initiatives; digital media and computer activism; experiments in participatory economics; anti-racist and anti-fascist groups like Anti-Racist Action and Anti-Fascist Action; activist groups protecting the rights of immigrants and promoting the free movement of people, such as the No Border network; worker co-operatives, countercultural and artist groups; and the peace movement.\n\nSection::::Contemporary libertarianism.:American libertarianism.\n", "BULLET::::- Hans-Hermann Hoppe – political philosopher and paleolibertarian trained under the Frankfurt School, staunch critic of democracy and developer of argumentation ethics\n\nBULLET::::- Michael Huemer – political philosopher, ethical intuitionist and author of \"The Problem of Political Authority\"\n\nBULLET::::- Rose Wilder Lane – silent editor of her mother's \"Little House on the Prairie\" books and author of \"The Discovery of Freedom\"\n\nBULLET::::- Ludwig von Mises – prominent figure in the Austrian School, classical liberal and founder of the \"a priori\" economic method of praxeology\n", "Gay activist Richard Sincere has pointed to the longstanding support of gay rights by the party, which has supported same-sex marriage since its first platform was drafted in 1972 (40 years before the Democratic Party adopted same-sex marriage into their platform in 2012). Many LGBT political candidates have run for office on the Libertarian Party ticket and there have been numerous LGBT caucuses in the party, with the most active in recent years being the Outright Libertarians. With regard to non-discrimination laws protecting LGBT people, the party is more divided, with some Libertarians supporting such laws, and others opposing them on the grounds that they violate freedom of association.\n", "Libertarian members often cite the departure of Ed Crane (of the Cato Institute, a libertarian think tank) as a key turning point in the early party history. Crane (who in the 1970s had been the party's first Executive Director) and some of his allies resigned from the party in 1983 when their preferred candidates for national committee seats lost in the elections at the national convention. Others like Mary Ruwart say that despite this apparent victory of those favoring radicalism, the party has for decades been slowly moving away from those ideals.\n", "The Libertarian Party of Nevada has continued to grow in both influence and voter registration since its inception. This is largely due to the big-government ideologies of the Democratic and Republican parties; voters with libertarian ideologies began to look elsewhere.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01159
Once scratched, why are Teflon pots unusable?
The coating is not good for you, and it can stay in your body for a while. Once it gets scratched or starts to chip, the rest of the coating begins to peel off like paint. Use wood, plastic or silicone utensils so you don't scratch it.
[ "During regular usage, small pieces of insulation can become stuck inside the 110 block contacts; this renders that given pair unreliable or unusable, until the pieces are removed. A tool known as a spudger can be used to remove excess insulation pieces. The wire hook which comes on many punch down tools can also be used to remove wire pieces.\n\nA new wire inserted over an existing insulation remnant may be unreliable, as the 110 IDC was designed to hold only a single wire per contact.\n", "The downsides of Teflon include the fact that it can be scratched off and get into food during the cooking process. Another problem is that Teflon begins to break down at around 350˚C and can give off poisonous fluorocarbon gasses. The final problem is that the bonding of Teflon to the pan uses a surfactant called perfluorooctanic acid (PFOA), which can also break down at high temperatures and poison food.\n\nSection::::Materials in cooking.:Silicone.\n", "Creep can cause gradual cut-through of wire insulation, especially when stress is concentrated by pressing insulated wire against a sharp edge or corner. Special creep-resistant insulations such as Kynar (polyvinylidene fluoride) are used in wirewrap applications to resist cut-through due to the sharp corners of wire wrap terminals. Teflon insulation is resistant to elevated temperatures and has other desirable properties, but is notoriously vulnerable to cold-flow cut-through failures caused by creep.\n", "Several Wacom models, including the Intuos4 and Bamboo, were criticized for the drawing surface's roughness, which caused the small pressure-sensitive 'nib' to wear down, and become slanted or scratchy in the same way as pencil lead, albeit more slowly. This could also cause the surface to become smoother where it is used more, resulting in uneven slick and non-slick areas. As the nibs were only short lengths of plastic, it was possible for a user wanting a more durable nib to improvise a replacement from a short length of nylon 'wire' (approx 0.065 inches or 1.7mm diameter) like that found in grass trimmer or 'weed-eater' refills, suitably straightened by hand and smoothed (rounded off) at one end with abrasive paper. Additionally, a thin sheet of glass or acetate can be placed over the drawing surface to avert surface or nib damage in the same way as screen protectors are used on phones, although in the case of glass this may induce a—usually modest—parallax error when tracing.\n", "Utensils used with PTFE-coated pans can scratch the coating, if the utensils are harder than the coating; this can be prevented by using non-metallic (usually plastic or wood) cooking tools.\n\nSection::::PTFE (Teflon).:Health concerns.\n\nWhen pans are overheated beyond approximately 350 °C (660 °F) the PTFE coating begins to dissociate, releasing perfluorooctanoic acid (PFOA) which can cause polymer fume fever in humans and can be lethal to birds. Concerns have been raised over the possible negative effects of using PTFE-coated cooking pans.\n", "Miraflex is a new type of fiberglass batt that has curly fibers that are less itchy and create less dust. You can also look for fiberglass products factory-wrapped in plastic or fabric.\n", "Galaxy IV was a telecommunications satellite that was disabled and lost due to short circuits caused by tin whiskers in 1998. 
It was initially thought that space weather contributed to the failure, but later it was discovered that a conformal coating had been mis-applied, allowing whiskers formed in the pure tin plating to find their way through a missing coating area, causing a failure of the main control computer. The manufacturer, Hughes, has moved to nickel plating, rather than tin, to reduce the risk of whisker growth. The tradeoff has been an increase in weight, adding per payload.\n", "Polytetrafluoroethylene (PTFE) is a synthetic fluoropolymer used in various applications including non-stick coatings. Teflon is a brand of PTFE, often used as a generic term for PTFE. The metallic substrate is roughened by abrasive blasting, then sometimes electric-arc sprayed with stainless steel. The irregular surface promotes adhesion of the PTFE and also resists abrasion of the PTFE. Then one to seven layers of PTFE are sprayed or rolled on, with a larger number of layers and spraying being better. The number and thickness of the layers and quality of the material determine the quality of the non-stick coating. Better-quality coatings are more durable, and less likely to peel and flake, and keep their non-stick properties for longer. Any PTFE-based coating will rapidly lose its non-stick properties if overheated; all manufacturers recommend that temperatures be kept below, typically, .\n", "Sharklet's topography creates mechanical stress on settling bacteria, a phenomenon known as mechanotransduction. Nanoforce gradients caused by surface variations induces stress gradients within the lateral plane of the surface membrane of a settling microorganism during initial contact. This stress gradient disrupts normal cell functions, forcing the microorganism to provide energy to adjust its contact area on each topographical feature to equalize the stresses. This expenditure of energy is thermodynamically unfavorable to the settler, inducing it to search for a different surface to attach to. Sharklet is made, however, with the same material as other plastics.\n\nSection::::External links.\n", "However, studies have shown that even vacuum cleaners featuring HEPA (High Efficiency Particulate Air) filters tend to release a large amount of allergens back into the air in the exhaust.\n\nIn general, more recent and more expensive models do perform better than older and less expensive ones.\n\nSection::::Existing technology.\n", "Internally, the contacts on the plugs have sharp prongs that, when crimped, pierce the wire insulation and connect with the conductor, a mechanism known as insulation displacement. Ethernet cables, in particular, may have solid or stranded (tinsel wire) conductors and the sharp prongs are different in the 8P8C connectors made for each type of wire. A modular plug for solid (single-strand) wire often has three slightly splayed prongs on each contact to securely surround and grip the conductor. Modular plugs for stranded have prongs that are designed to connect to multiple wire strands. Connector plugs are designed for either solid or stranded wire and a mismatch between plug and wire type may result in an unreliable connection.\n", "Fattening pens can be outside or in a greenhouse. High summer temperatures and insufficient moisture cause dwarfing and malformations of some snails. This is more a problem inside greenhouses if the sun overheats the building. A sprinkler system (e.g., a horticultural system or common lawn sprinklers) can supply moisture. 
Make sure excess water can drain.\n", "Plucking (tweezing) is often described as \"time consuming\". Because the tweezers operate on only one hair at a time and it requires several seconds of application on each hair, this technique is even slower than normal tweezing. The US FDA suggests that, because of the difficulty of using these devices, many people end up effectively only using them as tweezers, with no permanent hair removal.\n", "Modern sets typically have a long and thin wedge that extends all the way down to the back end of the socket, separating the two electrical conductors and contacts and preventing water between them when used outdoors. Many sets have an optional \"locking\" tab on the edge of the base that snaps onto or into the side of the socket, to prevent bulbs from becoming loose and falling out and causing the set to fail to light. This is problematic for light sets with covers like icicles, which will not fit over this type of base or socket unless they have a special notch, which can in turn allow rainwater or snowmelt into the decoration.\n", "The natural bristles are often dark in color, not super soft, sheds, hold pigments better and difficult to wash. As the natural bristles are very porous they pick up more pigments and distributes them evenly. The natural bristled brushes best applies powder products and it is best to avoid liquid or cream products as they will drink up most of the products. Although natural bristles are more preferred in the cosmetic industry, the bristles themselves can cause allergic reactions to the animal hair. \n", "Section::::Differences from conventional rotary brushes.\n", "However, Tefal was not the only company to utilize PTFE in nonstick cookware coatings. In subsequent years, many cookware manufacturers developed proprietary PTFE-based formulas, including Swiss Diamond International, which uses a diamond-reinforced PTFE formula; Scanpan, which uses a titanium-reinforced PTFE formula; and both All-Clad and Newell Rubbermaid's Calphalon, which use a non-reinforced PTFE-based nonstick. Other cookware companies, such as Meyer Corporation's Anolon, use Teflon nonstick coatings purchased from Chemours. Chemours is a 2015 corporate spin-off of DuPont.\n", "In early experiments, the trough was first constructed from metals such as brass. However difficulties arose with contamination of the sub-phase by metal ions. To combat this, glass troughs were used for a time, with a wax coating to prevent contamination from glass pores. This was eventually abandoned in favor of plastics that were insoluble in ordinary solvents, such as Teflon (polytetrafluoroethylene). Teflon is hydrophobic and chemically inert, making it a highly suitable material, and the most commonly used for troughs today. Occasionally metal or glass troughs coated with a thin layer of Teflon are used; however they are not as enduring as solid PTFE troughs.\n", "Permanent punctal plugs are usually made of silicone. These are available in various sizes. For maximum effectiveness, the largest size that fits should be used. These are more effective than collagen plugs. They can sometimes become loose and fall out, in which case they can be replaced.\n\nSome plugs are made of thermally reactive material. Some of these are inserted into the punctum as a liquid, and then harden and conform to the individual's drainage system. 
Others start out rigid and become soft and flexible, adapting to the individual's punctal size after they are inserted.\n\nSection::::Risks.\n", "Section::::Spray foam.:Types.\n\nBULLET::::- Icynene spray formula: R-3.7 (RSI-0.63) per inch. Icynene uses water for its spray application instead of ozone depleting chemicals. Icynene will expand up to 100 times it original size within the first 6 seconds of being applied. It fills all the tiny gaps around electrical sockets and hard to reach areas.\n\nBULLET::::- Icynene spray foam insulation will allow water to drain through it rather than storing it; closed cell foams will not allow water to enter at all.\n", "The exterior of the thermal lenses (or the lenses, in non-thermal masks) is usually made of Polycarbonate. This material provides excellent impact resistance. Because polycarbonate is soft, these lenses are manufactured with anti-scratch coatings. But great care must be taken to keep proper care of the lenses. Many vendors recommend the immediate replacement of very scratched lenses, or lenses subjected to very strong impacts.\n\nGenerally, more expensive masks tend to be smaller (which in turn makes the player a smaller target), more comfortable, have more interchangeable parts and be made of soft enough material to get some bounces.\n", "In the al-Qaeda in the Arabian Peninsula October 2010 cargo plane bomb plot, two PETN-filled printer cartridges were found at East Midlands Airport and in Dubai on flights bound for the US on an intelligence tip. Both packages contained sophisticated bombs concealed in computer printer cartridges filled with PETN. The bomb found in England contained of PETN, and the one found in Dubai contained of PETN. Hans Michels, professor of safety engineering at University College London, told a newspaper that of PETN—\"around 50 times less than was used—would be enough to blast a hole in a metal plate twice the thickness of an aircraft's skin\". In contrast, according to an experiment conducted by a BBC documentary team designed to simulate Abdulmutallab's Christmas Day bombing, using a Boeing 747 plane, even 80 grams of PETN was not sufficient to materially damage the fuselage.\n", "Mechanical stress issues can be overcome by bonding the devices to the board through a process called \"underfilling\", which injects an epoxy mixture under the device after it is soldered to the PCB, effectively gluing the BGA device to the PCB. There are several types of underfill materials in use with differing properties relative to workability and thermal transfer. An additional advantage of underfill is that it limits tin whisker growth.\n", "Section::::Use.\n", "Since stripping the insulation from wires is time-consuming, many connectors intended for rapid assembly use insulation-displacement connectors so that the insulation is cut as the wire is inserted. These generally take the form of a fork-shaped opening in the terminal, into which the insulated wire is pressed and which cut through the insulation to contact the conductor within. To make these connections reliably on a production line, special tools are used which accurately control the forces applied during assembly. On small scales, these tools tend to cost more than tools for crimped connections.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-18071
If the Russians kept beating the US in the early space race, what kept them from reaching the moon first?
The US saw it as a matter of national pride. They were worried that if Russia beat them to the moon (the last "easy" space challenge), they'd forever claim superiority. So America pulled out its chequebook. It spent $25bn on the Apollo programme (a massive amount for the time), and it managed to mobilise resources and expertise faster than the Russians could.
[ "Section::::Reaction to Apollo.:Launch schedules.\n\nAs of 1967, the L1/L3 launch schedules were:\n\nUR-500K(Proton)/L1(Zond) program\n\nN1/L3 program\n\nKorolev's death in 1966, along with various technical and administrative reasons, as well as a lack of financial support, resulted in both programs being delayed.\n\nSection::::Reaction to Apollo.:Cosmonauts.\n", "A group of journalists are investigating a highly secret document when they uncover a sensational story: that before the Second World War, in 1938, the first rocket was made in the USSR and Soviet scientists were planning to send an orbiter to the Moon and back. The evidence is convincing; it is clear that in this case, Soviet cosmonauts were first.\n", "The Soviet government issued a response to the American Apollo challenge after three years. According to the first government decree about the Soviet crewed Moon programs (Decree 655-268, ' \"On Work on the Exploration of the Moon and Mastery of Space\" '), adopted in August 1964, Chelomei was instructed to develop a Moon flyby program with a projected first flight by the end of 1966, and Korolev was instructed to develop the Moon landing program with a first flight by the end of 1967.\n", "The complete L3 lunar expedition complex with the 7K-LOK and LK for the Moon flyby and landing was prepared for a fifth launch, using a modified N1 rocket in August 1974. If this mission and the next had been successful, it would have led to the decision to launch up to five Soviet crewed N1-L3 expeditions in 1976–1980. To gain technical and scientific interest in the program, the modified multi-launched N1F-L3M missions were planned to have significantly more time on the Moon's surface than Apollo.\n", "Section::::Early Pioneer missions.\n\nThe earliest missions were attempts to achieve Earth's escape velocity, simply to show it was feasible and study the Moon. This included the first launch by NASA which was formed from the old NACA. These missions were carried out by the US Air Force and Army.\n\nSection::::Early Pioneer missions.:Able space probes (1958–1960).\n\nBULLET::::- Pioneer 0 (Thor-Able 1, Pioneer) – Lunar orbiter, destroyed (Thor failure 77 seconds after launch) August 17, 1958\n\nBULLET::::- Pioneer 1 (Thor-Able 2, Pioneer I) – Lunar orbiter, missed Moon (third stage partial failure) October 11, 1958\n", "There are differing opinions about the goals and success of the mission \"Kosmos-146\". Most sources report that Kosmos 146 achieved escape velocity. The goal of Kosmos-146 could not have been to orbit the Moon, since time and place of the launch did not allow for such a trajectory.\n\nThe engines of Block D were not started immediately, but only after about eight orbits were completed, which is unusual. 
Reportedly some speculated that this delay was meant to simulate the arrival of a separately launched crew at the Soyuz spacecraft.\n\nSection::::Moon race.\n", "BULLET::::- 1965: First probe to hit another planet of the Solar System (Venus), Venera 3\n\nBULLET::::- 1966: First probe to make a soft landing on and transmit from the surface of the Moon, \"Luna 9\"\n\nBULLET::::- 1966: First probe in lunar orbit, \"Luna 10\"\n\nBULLET::::- 1967: First unmanned rendezvous and docking, Cosmos 186/Cosmos 188.\n\nBULLET::::- 1968: First living beings to reach the Moon (circumlunar flights) and return unharmed to Earth, Russian tortoises and other lifeforms on Zond 5\n\nBULLET::::- 1969: First docking between two manned craft in Earth orbit and exchange of crews, Soyuz 4 and Soyuz 5\n", "The USSR made no manned flights during this period but continued to develop its Soyuz craft and secretly accepted Kennedy's implicit lunar challenge, designing Soyuz variants for lunar orbit and landing. They also attempted to develop the N1, a large, manned Moon-capable launch vehicle similar to the US Saturn V.\n", "While the government and the Communist Party used the program's successes as propaganda tools after they occurred, systematic plans for missions based on political reasons were rare, one exception being Valentina Tereshkova, the first woman in space, on Vostok 6 in 1963. Missions were planned based on rocket availability or ad hoc reasons, rather than scientific purposes. For example, the government in February 1962 abruptly ordered an ambitious mission involving two Vostoks simultaneously in orbit launched \"in ten days time\" to obscure John Glenn's Mercury-Atlas 6 that month; the program could not do so until August, with Vostok 3 and Vostok 4.\n", "Section::::Early concepts.\n\nAs early as 1961, the Soviet leadership had made public pronouncements about landing a man on the Moon and establishing a lunar base; however serious plans were not made until several years later. Sergei Korolev, the senior Soviet rocket engineer, was more interested in launching a heavy orbital station and in crewed flights to Mars and Venus. With this in mind, Korolev began the development of the super-heavy N-1 rocket with a 75-ton payload.\n\nSection::::Early concepts.:Soyuz-A-B-C and N1.\n", "Launched by a 3-staged Proton rocket, the L1(Zond) was a spacecraft from the Soyuz family and consisted of two or three modified modules of the main craft Soyuz 7K-OK with a total weight of 5.5 tons. The Apollo orbital spacecraft (command ship) for the lunar flyby also had two modules (command and service) but was five times heavier, carried a crew of three and entered lunar orbit, whereas the L1 (Zond) performed a flight around the Moon and came back on a return trajectory. Planned for 8 December 1968 for priority over the US, a first crewed mission of the L1 (Zond) was canceled due to the insufficient readiness of the capsule and rocket. After Apollo 8 won the first (lunar flyby) phase of the Moon Race at the end of 1968, the Soviet leadership lost political interest in the L1 (Zond) program. A few reserve units of L1 (Zond) made unpiloted flights, but by the end of 1970, this program was canceled.\n", "The new political leaders, along with Korolev, ended the technologically troublesome Voskhod program, cancelling Voskhod 3 and 4, which were in the planning stages, and started concentrating on reaching the Moon. 
Voskhod 2 ended up being Korolev's final achievement before his death on January 14, 1966, as it became the last of the many space firsts that demonstrated the USSR's domination in spacecraft technology during the early 1960s. According to historian Asif Siddiqi, Korolev's accomplishments marked \"the absolute zenith of the Soviet space program, one never, ever attained since.\" There was a two-year pause in Soviet piloted space flights while Voskhod's replacement, the Soyuz spacecraft, was designed and developed.\n", "Section::::Early concepts.:UR-500K / LK-1 and UR-700 / LK-3.\n\nAnother main space design bureau headed by Vladimir Chelomei proposed a competing cislunar orbiting mission using a heavy UR-500K rocket (later renamed the Proton rocket) and a two-crew LK-1 spacecraft. Later, Chelomei also proposed a Moon landing program with a super-heavy UR-700 rocket, an LK-700 lunar lander, and an LK-3 spacecraft.\n\nSection::::Reaction to Apollo.\n", "According to US sources, the \"race\" peaked with the July 20, 1969, US landing of the first humans on the Moon with Apollo 11. Most US sources will point to the Apollo 11 lunar landing as a singular achievement far outweighing any combination of Soviet achievements. In any case the USSR attempted several crewed lunar missions, but eventually canceled them and concentrated on Earth orbital space stations, while the US landed several more times on the Moon.\n", "Section::::Soviet lunar orbit satellites (1966–1974).\n\nLuna 10 became the first spacecraft to orbit the Moon on 3 April 1966.\n\nSection::::Soviet circumlunar loop flights (1967–1970).\n", "Soviet leader Nikita Khrushchev said in October 1963 the USSR was \"not at present planning flight by cosmonauts to the Moon,\" while insisting that the Soviets had not dropped out of the race. Only after another year would the USSR fully commit itself to a Moon-landing attempt, which ultimately failed.\n", "BULLET::::- 1959: First rocket ignition in Earth orbit, first man-made object to escape Earth's gravity, \"Luna 1\"\n\nBULLET::::- 1959: First data communications, or telemetry, to and from outer space, \"Luna 1\".\n\nBULLET::::- 1959: First man-made object to pass near the Moon, first man-made object in Heliocentric orbit, \"Luna 1\"\n\nBULLET::::- 1959: First probe to impact the Moon, \"Luna 2\"\n\nBULLET::::- 1959: First images of the moon's far side, \"Luna 3\"\n\nBULLET::::- 1960: First animals to safely return from Earth orbit, the dogs Belka and Strelka on Sputnik 5.\n\nBULLET::::- 1961: First probe launched to Venus, Venera 1\n", "In 1967, both nations faced serious challenges that brought their programs to temporary halts. Both had been rushing at full-speed toward the first piloted flights of Apollo and Soyuz, without paying due diligence to growing design and manufacturing problems. The results proved fatal to both pioneering crews.\n", "The N1 rocket would then carry the L3 Moon expedition complex, with two spacecraft (LOK and LK) and two (Block G and Block D) boosters.\n", " The Soviet Union had attempted an earlier rendezvous on August 12, 1962. However, Vostok 3 and Vostok 4 only came within five kilometers of one another, and operated in different orbital planes. 
\"Pravda\" did not mention this information, but indicated that a rendezvous had taken place.\n\nSection::::See also.\n\nBULLET::::- List of communications satellite firsts\n\nBULLET::::- List of space exploration milestones, 1957–1969\n\nBULLET::::- Timeline of space exploration\n\nBULLET::::- Timeline of first orbital launches by country\n\nBULLET::::- Timeline of space travel by nationality\n\nSection::::External links.\n\nBULLET::::- Timeline of the Space Race/Moon Race\n\nBULLET::::- Chronology: Moon Race at russianspaceweb.com\n", "On the Moon, the cosmonaut would take Moon walks, use Lunokhods, collect rocks, and plant the Soviet flag.\n\nAfter a few hours on the lunar surface, the LK's engine would fire again using its landing structure as a launch pad, as with Apollo. To save weight, the engine used for landing would blast the LK back to lunar orbit for an automated docking with the LOK. The cosmonaut then would spacewalk back to the LOK carrying rock samples.\n\nThe LK would then be cast off, after which the LOK would fire its rocket for the return to Earth.\n", "By June 16, 1962, the Union launched a total of six Vostok cosmonauts, two pairs of them flying concurrently, and accumulating a total of 260 cosmonaut-orbits and just over sixteen cosmonaut-days in space.\n", "Korolev's design bureau produced two prospectuses for circumlunar spaceflight (March 1962 and May 1963), the main spacecraft for which were early versions of his Soyuz design. Soviet Communist Party Central Committee Command 655-268 officially established two secret, competing crewed programs for circumlunar flights and lunar landings, on August 3, 1964. The circumlunar flights were planned to occur in 1967, and the landings to start in 1968.\n", "Section::::Origins of the Space Race.:The Soviet Union in the Space Race.:Soviet Space Travel from 1960–1962.\n\nThe Luna (\"Moon\") program was a giant step forward for the Soviets in achieving the goal of putting the first man on the moon. It also \"planted the building blocks\" of a program for the soviets to \"sustain human beings safely and productively in low Earth orbit\" with the creation of the Soviet \"equivalent to the Apollo command module, the Soyuz space capsule\".\n", "Although the specifics on planned activity while on the lunar surface remain vague, the small size and limited payload capacity of the N-1/Soyuz LOK/LK compared to the Saturn/Apollo meant that not much in the way of scientific experiments could have been performed. Most likely, the cosmonaut would plant the Soviet flag on the Moon, collect soil samples, take photographs, and deploy a few small scientific packages. Long duration missions, lunar rovers, and other activities performed on the late Apollo landings were not possible at all.\n\nSection::::The N1-L3 flight plan.:Earth return.\n" ]
[ "If Russia accomplished more space accomplishments than the USA, then the USA should have never made it to the moon before Russia. " ]
[ "Due to the USA losing in terms of space accomplishments to Russia, they chose to spend billions of dollars to make it to the moon before Russia." ]
[ "false presupposition" ]
[ "If Russia accomplished more space accomplishments than the USA, then the USA should have never made it to the moon before Russia. ", "If Russia accomplished more space accomplishments than the USA, then the USA should have never made it to the moon before Russia. " ]
[ "normal", "false presupposition" ]
[ "Due to the USA losing in terms of space accomplishments to Russia, they chose to spend billions of dollars to make it to the moon before Russia.", "Due to the USA losing in terms of space accomplishments to Russia, they chose to spend billions of dollars to make it to the moon before Russia." ]
2018-10858
Why aren't mailmen called post officers?
Because they're not officers. They're merely employees of the postal service. The "post office" is not the whole postal service. It's merely the office where you post letters and pick up letters posted to you.
[ "Until 1993, active letter carriers were barred from taking any significant volunteer role for any political campaigns. The primary sentiment behind the law was to protect federal employees from being strong-armed and intimidated into helping their bosses run for reelection.\n", "This series of events in turn has influenced American culture, as seen in the slang term \"going postal\" (see Patrick Sherrill for information on his August 20, 1986, rampage) and the computer game \"Postal\". Also, in the opening sequence of \"\", a yell of \"Disgruntled postal workers\" is heard, followed by the arrival of postal workers with machine guns. In an episode of \"Seinfeld\", the mailman character, Newman, explained in a dramatic monologue that postal workers \"go crazy and kill everyone\" because the mail never stops. In \"The Simpsons\" episode \"Sunday, Cruddy Sunday,\" Nelson Muntz asks Postmaster Bill if he has \"ever gone on a killing spree\"; Bill replies, \"The day of the gun-toting, disgruntled postman shooting up the place went out with the Macarena\".\n", "As referenced on the Simpsons, during Episode \"Sunday Cruddy Sunday\", Springfield Elementary visits the post office on a class trip and Nelson asks the Postmaster:\n\nBULLET::::- Nelson: Have you ever gone on a killing spree?\n\nBULLET::::- Postmaster Bill: (laughing) Ho ho ho, nooo noo, the day of the gun toting disgruntled postman shooting up the place went out with the Macarena.\n", "In 1998, the United States Congress conducted a joint hearing to review the violence in the U.S. Postal Service. In the hearing, it was noted that despite the postal service accounting for less than 1% of the full-time civilian labor force, 13% of workplace homicides were committed at postal facilities by current or former employees.\n\nSection::::Cultural impact.\n", "BULLET::::- Keith Knox, a Scottish footballer who also worked as a postman throughout his 25-year career\n\nBULLET::::- Tom Kruse, MBE (28 August 1914 – 30 June 2011) was a former mailman on the Birdsville Track in the border area between South Australia and Queensland\n\nBULLET::::- Stephen Law, philosopher. Expelled from school and worked as a postman until being accepted to Trinity College, Oxford to study philosophy\n\nBULLET::::- John Prine, Grammy winning folk singer\n\nBULLET::::- Bon Scott, former lead singer of AC/DC was once a 'postie' in Australia\n\nBULLET::::- Allan Smethurst, English singer known as \"The Singing Postman\"\n", "Section::::Origin.\n\nThe earliest known use of the phrase was on December 17, 1993, in the \"St. Petersburg Times\":\n\nOn December 31, 1993, the \"Los Angeles Times\" stated:\n\nSection::::Notable postal shootings.\n\nSection::::Notable postal shootings.:Los Angeles, California, 1970.\n\nAugust 13, 1970, Harry Sendrow, 54, a postal supervisor, was shot in the back 3 times by Alfred Kellum, 41, whom Sendrow had sent home for being intoxicated. Five hours later Kellum was found unconscious and arrested. Police officers said he appeared to be intoxicated.\n\nSection::::Notable postal shootings.:Edmond, Oklahoma, 1986.\n", "As a result of these two shootings, in 1993 the USPS created 85 Workplace Environment Analysts for domicile at its 85 postal districts. These new positions were created to help with violence prevention and workplace improvement. 
In February 2009, the USPS unilaterally eliminated these positions as part of its downsizing efforts.\n\nSection::::Notable postal shootings.:Goleta, California, 2006.\n", "BULLET::::- Substitute rural carriers (Designation Code 73) are those employees hired prior to July 21, 1981, with an appointment without time limitation.\n", "BULLET::::- The comedy film \"Dear God\" (1996), starring Greg Kinnear and Laurie Metcalf, portrays a group of quirky postal workers in a dead letter office that handle letters addressed to the Easter Bunny, Elvis, and even God himself.\n\nBULLET::::- In 2015, \"The Inspectors\", which depicts a group of postal inspectors investigating postal crimes, debuted on CBS. The series uses the USPIS seal and features messages and tips from the Chief Postal Inspector at the end of each episode.\n", "BULLET::::- \"The Tainted Eagle\" by Charlie Withers, a union steward in the Royal Oak Post Office at the time of the shootings in Royal Oak, Michigan. ()\n\nBULLET::::- \"Lone Wolf\" by Pan Pantziarka, a comprehensive study of the spree killer phenomenon, and looks in detail at a number of cases in the U.S., UK and Australia. ()\n\nBULLET::::- Bob Dart, \"'Going postal' is a bad rap for mail carriers, study finds\", \"Austin American-Statesman\", September 2, 2000, p. A28\n\nSection::::External links.\n\nBULLET::::- Postal Work Unfairly Maligned, Study Says, September 1, 2000\n", "BULLET::::- Carl Schliff - a letter carrier in \"Dead Rising 2\" who was more concerned with completing his route then getting bit by a zombie\n\nBULLET::::- Gordon Smith - \"See Spot Run\"\n\nBULLET::::- Stan - the postman in \"Wizadora\"\n\nBULLET::::- Rita Sullivan - postmistress at the Kabin newsagents and post office, \"Coronation Street\"\n\nBULLET::::- Moist von Lipwig - \"Going Postal\" (postmaster)\n\nBULLET::::- Mr. Wilson - retired mail carrier from the American comic strip \"Dennis the Menace\"\n\nBULLET::::- Mr. Zip - a cartoon character used by the United States Postal Service\n", "During War time, Police officers and their auxiliary support were not allowed to resign. In June 1945, in response to a question in the House Sir D Somervell, Secretary of State for the Home Department announced the discontinuation of this restriction on PAMs:\n", "BULLET::::- President: Frank H. Cunningham (Omaha, Nebraska)\n\nBULLET::::- Vice President: B. Pitts Woods (Cherokee, Iowa)\n\nBULLET::::- Secretary: W. F. Tumber (Lockport, New York)\n\nBULLET::::- Treasurer: W. L. Fetters (Bluffton, Indiana)\n\nBULLET::::- Executive Committee: H. E. Niven (Berthoud, Colorado), F. A. Putnam (Dudley, Massachusetts) and E. Dwyer (Aurora, Illinois)\n", "BULLET::::- W. Reginald Bray mailed himself within England by ordinary mail in 1900 and then by registered mail in 1903.\n\nBULLET::::- Suffragettes Elspeth Douglas McClelland and Daisy Solomon mailed themselves successfully to the then Prime Minister of the United Kingdom, H. H. 
Asquith at 10 Downing Street on 23 February 1909 but his office refused to accept the letters.\n", "BULLET::::- At a society party in the 2008 movie \"The Loss of a Teardrop Diamond\", Jimmy (Chris Evans) is designated Postman in a game that arouses jealousy in outcast debutante Fisher Willow (Bryce Dallas Howard).\n\nBULLET::::- In \"The X-Files\" seventh season episode \"Closure\", Mulder and Scully are about to perform a séance and after Scully sarcastically comments that she hasn't done one since high school, Mulder jokingly suggests that afterwards they should play “Postman” and Spin the bottle.\n", "List of fictional postal employees\n\nThis is a list of fictional post office employees with a significant role in notable works of fiction.\n\nBULLET::::- Agent K - played by Tommy Lee Jones in \"Men in Black II\"; within the \"MiB\" universe, most postal workers are aliens\n\nBULLET::::- Masood Ahmed - \"EastEnders\"\n\nBULLET::::- Anghammarad - \"Going Postal\"\n\nBULLET::::- Mr. Beasley - \"Blondie\"\n\nBULLET::::- Henry Chinaski - Charles Bukowski's alter ego in the book \"Post Office\"\n\nBULLET::::- Cliff Clavin - \"Cheers\"\n\nBULLET::::- Pat Clifton - \"Postman Pat\" (postman)\n\nBULLET::::- Norris Cole - postmaster at The Kabin newsagents and post office, \"Coronation Street\"\n", "Section::::United States.\n\nIn the United States, there are three types of mail carriers: City Letter Carriers, who are represented by the National Association of Letter Carriers; Rural Carriers, who are represented by the National Rural Letter Carriers' Association; and Highway Contract Route carriers, who are independent contractors. While union membership is voluntary, city carriers are organized near 70 per cent nationally.\n", "The 1993 episode of \"Seinfeld\" titled \"The Old Man\" makes reference to the term, in a scene between the characters George and Newman, whose occupation is a USPS postal worker.\n\nThe 1994 comedy film \"\" includes a scene where the main character must deal with a series of escalating threats, including the sudden appearance of dozens of disgruntled postal workers randomly firing weapons in every direction.\n", "There have been cases over the millennia of governments opening and copying or photographing the contents of private mail. Subject to the laws in the relevant jurisdiction, correspondence may be openly or covertly opened, or the contents determined via some other method, by the police or other authorities in some cases relating to a suspected criminal conspiracy, although black chambers (largely in the past, though there is apparently some continuance of their use today) opened and open letters extralegally.\n", "In 1974, staff at Canada Post's Montreal office were noticing a considerable number of letters addressed to Santa Claus entering the postal system, and those letters were being treated as undeliverable. Since employees handling those letters did not want the writers, mostly young children, to be disappointed at the lack of response, they started answering the letters themselves.\n", "BULLET::::- Mail handlers and processors, prepare, separate, load and unload mail and parcels, by delivery ZIP code and station, for the clerks. 
They work almost exclusively at the plants or larger mail facilities now after having their duties excessed and reassigned to clerks in Post Offices and Station branches.\n", "BULLET::::- Viv Hope - postmistress at the village post office in \"Emmerdale\"\n\nBULLET::::- Stanley Howler - \"Going Postal\"\n\nBULLET::::- Clive James - writer and broadcaster; had a walk-on part in \"Neighbours\" as Ramsay Street’s postman\n\nBULLET::::- Special Delivery Kluger - \"Santa Claus is Comin' to Town\" (1970)\n\nBULLET::::- Gordon Krantz - the eponymous protagonist of \"The Postman\"\n\nBULLET::::- Lag Seeing - delivery boy from the manga and anime \"Tegami Bachi\"\n\nBULLET::::- Myron Larabee - \"Jingle All the Way\"\n\nBULLET::::- Willie Lumpkin - mailman of the Fantastic Four in Marvel Comics\n\nBULLET::::- Miss Maccalariat - \"Going Postal\"\n", "In the \"Brooklyn Nine-Nine\" episode \"USPIS\", a self-righteous United States Postal Inspection Service agent passionate about his job is adamant that \"going postal\" is the term most associated with bringing goodness into people's lives, which is a view also shared by his co-workers.\n\nSection::::See also.\n\nBULLET::::- 2010 Panama City school board shootings\n\nBULLET::::- Amok\n\nBULLET::::- Fragging\n\nBULLET::::- List of postal killings\n\nBULLET::::- List of massacres\n\nBULLET::::- Road rage\n\nBULLET::::- Spree killer\n\nBULLET::::- School shooting\n\nBULLET::::- Son of Sam – serial killer who worked for the postal service\n\nBULLET::::- List of rampage killers (workplace killings)\n\nSection::::Further reading.\n", "A former United States postal worker, Joseph M. Harris, killed his former supervisor, Carol Ott, and killed her boyfriend, Cornelius Kasten Jr., at their home with a katana. The following morning, on October 10, 1991, Harris shot and killed two mail handlers, Joseph M. VanderPaauw, 59, of Prospect Park, New Jersey, and Donald McNaught, 63, of Pompton Lakes, New Jersey, at the Ridgewood Post Office.\n\nSection::::Notable postal shootings.:Royal Oak, Michigan, 1991.\n", "The series of massacres led the USPS to issue a rule prohibiting the possession of any type of firearms (except for those issued to Postal Inspectors) in all designated USPS facilities.\n\nIn 2016, video footage was released showing a group of police officers from the New York City Police Department (NYPD) arresting a USPS worker while he was in the middle of his deliveries. The footage showed that the officers were dressed in civilian clothing. The NYPD is reportedly investigating alleged disorderly conduct.\n\nSection::::In fiction.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-21805
How do we naturally wake up if we are not disturbed by noise or other stuff?
You sleep in cycles of approximately 90 minutes each. At the end of each cycle you briefly "wake up", but you don't remember it; your brain discards that moment just as it discards most dreams. Waking up "naturally" happens when two things line up: you are at the end of a sleep cycle AND you have already had enough sleep cycles.
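A minimal sketch of the arithmetic behind the 90-minute figure in this answer, in Python purely for illustration. The cycle length comes from the answer itself; the bedtime, the cycle count and the helper name natural_wake_times are made-up example values, not part of the original answer.

```python
# Sketch of the 90-minute sleep-cycle arithmetic from the answer above.
# CYCLE_MINUTES comes from the answer; bedtime and cycle count are made up.
CYCLE_MINUTES = 90

def natural_wake_times(bedtime_minutes: int, max_cycles: int = 6):
    """Clock times (HH:MM) at whole-cycle boundaries after falling asleep."""
    times = []
    for n in range(1, max_cycles + 1):
        t = (bedtime_minutes + n * CYCLE_MINUTES) % (24 * 60)
        times.append(f"{t // 60:02d}:{t % 60:02d}")
    return times

# Falling asleep at 23:00 -> 00:30, 02:00, 03:30, 05:00, 06:30, 08:00
print(natural_wake_times(bedtime_minutes=23 * 60))
```

For a 23:00 bedtime the whole-cycle boundaries land at 00:30, 02:00, 03:30, 05:00, 06:30 and 08:00, which is the kind of 90-minute spacing the answer describes.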
[ "In the early morning, light activates the \"cry\" gene and its protein CRY causes the breakdown of TIM. Thus PER/TIM dimer dissociates, and the unbound PER becomes unstable. PER undergoes progressive phosphorylation and ultimately degradation. Absence of PER and TIM allows activation of \"clk\" and \"cyc\" genes. Thus, the clock is reset to start the next circadian cycle.\n\nSection::::In \"Drosophila\".:PER-TIM Model.\n", "All living animals have an internal clock, the circadian rhythm, which is close to 24 hours' duration. For humans, the average duration is 24 hours 20 minutes, and individually some people have more or less than 24 hours. Everyday exposure to the morning light resets the circadian rhythm to 24 hours, so that there is no drifting.\n", "Humans, like most living organisms, have various biological rhythms. These biological clocks control processes that fluctuate daily (e.g. body temperature, alertness, hormone secretion), generating circadian rhythms. Among these physiological characteristics, our sleep-wake propensity can also be considered one of the daily rhythms regulated by the biological clock system. Our sleeping cycles are tightly regulated by a series of circadian processes working in tandem, which allow us to experience moments of consolidated sleep during the night and a long wakeful moment during the day. Conversely, disruptions to these processes and the communication pathways between them can lead to problems in sleeping patterns, which are collectively referred to as Circadian rhythm sleep disorders.\n", "A dawn simulator can be used as an alarm clock. Light enters through the eyelids triggering the body to begin its wake-up cycle, including the release of cortisol, so that by the time the light is at full brightness, sleepers wake up on their own, without the need for an alarm.\n", "Circadian clock\n\nA circadian clock, or circadian oscillator, is a biochemical oscillator that cycles with a stable phase and is synchronized with solar time.\n\nSuch a clock's \"in vivo\" period is necessarily almost exactly 24 hours (the earth's current solar day). In most living things, internally synchronized circadian clocks make it possible for the organism to anticipate daily environmental changes corresponding with the day–night cycle and adjust its biology and behavior accordingly.\n", "Today, many humans wake up with an alarm clock; however, people can also reliably wake themselves up at a specific time with no need for an alarm. Many sleep quite differently on workdays versus days off, a pattern which can lead to chronic circadian desynchronization. Many people regularly look at television and other screens before going to bed, a factor which may exacerbate disruption of the circadian cycle. Scientific studies on sleep have shown that sleep stage at awakening is an important factor in amplifying sleep inertia.\n\nSection::::Timing.\n", "Free-running organisms that normally have one or two consolidated sleep episodes will still have them when in an environment shielded from external cues, but the rhythm is not entrained to the 24-hour light–dark cycle in nature. 
The sleep–wake rhythm may, in these circumstances, become out of phase with other circadian or ultradian rhythms such as metabolic, hormonal, CNS electrical, or neurotransmitter rhythms.\n\nRecent research has influenced the design of spacecraft environments, as systems that mimic the light–dark cycle have been found to be highly beneficial to astronauts.\n\nSection::::Importance in animals.:Arctic animals.\n", "An organism whose circadian clock exhibits a regular rhythm corresponding to outside signals is said to be \"entrained\"; an entrained rhythm persists even if the outside signals suddenly disappear. If an entrained human is isolated in a bunker with constant light or darkness, he or she will continue to experience rhythmic increases and decreases of body temperature and melatonin, on a period which slightly exceeds 24 hours. Scientists refer to such conditions as free-running of the circadian rhythm. Under natural conditions, light signals regularly adjust this period downward, so that it corresponds better with the exact 24 hours of an Earth day.\n", "The clock is reset as an organism senses environmental time cues of which the primary one is light. Circadian oscillators are ubiquitous in tissues of the body where they are synchronized by both endogenous and external signals to regulate transcriptional activity throughout the day in a tissue-specific manner. The circadian clock is intertwined with most cellular metabolic processes and it is affected by organism aging. The basic molecular mechanisms of the biological clock have been defined in vertebrate species, \"Drosophila melanogaster\", plants, fungi, bacteria, and presumably also in Archaea.\n", "Section::::Research.\n\nReppert has published more than 180 papers. He is the principal inventor on seven patents derived from his research.\n\nSection::::Research.:Fetal circadian clocks.\n", "Humans with regular circadian function have been shown to maintain regular sleep schedules, regulate daily rhythms in hormone secretion, and sustain oscillations in core body temperature. Even in the absence of Zeitgebers, humans will continue to maintain a roughly 24-hour rhythm in these biological activities. Regarding sleep, normal circadian function allows people to maintain balance rest and wakefulness that allows people to work and maintain alertness during the day's activities, and rest at night.\n", "Section::::Physiology.:Awakening.\n\nAwakening can mean the end of sleep, or simply a moment to survey the environment and readjust body position before falling back asleep. Sleepers typically awaken soon after the end of a REM phase or sometimes in the middle of REM. Internal circadian indicators, along with successful reduction of homeostatic sleep need, typically bring about awakening and the end of the sleep cycle. Awakening involves heightened electrical activation in the brain, beginning with the thalamus and spreading throughout the cortex.\n", "A circadian rhythm is an entrainable, endogenous, biological activity that has a period of roughly twenty-four hours. This internal time-keeping mechanism is centralized in the suprachiasmatic nucleus (SCN) of humans and allows for the internal physiological mechanisms underlying sleep and alertness to become synchronized to external environmental cues, like the light-dark cycle. The SCN also sends signals to peripheral clocks in other organs, like the liver, to control processes such as glucose metabolism. 
Although these rhythms will persist in constant light or dark conditions, different Zeitgebers (time givers such as the light-dark cycle) give context to the clock and allow it to entrain and regulate expression of physiological processes to adjust to the changing environment. Genes that help control light-induced entrainment include positive regulators BMAL1 and CLOCK and negative regulators PER1 and CRY. A full circadian cycle can be described as a twenty-four hour circadian day, where circadian time zero (CT 0) marks the beginning of a subjective day for an organism and CT 12 marks the start of subjective night. \n", "The human organism physically restores itself during sleep, healing itself and removing metabolic wastes which build up during periods of activity. This restoration takes place mostly during slow-wave sleep, during which body temperature, heart rate, and brain oxygen consumption decrease. The brain, especially, requires sleep for restoration, whereas in the rest of the body these processes can take place during quiescent waking. In both cases, the reduced rate of metabolism enables countervailing restorative processes.\n", "The WELL Building standard additionally provides direction for circadian emulation in multi-family residences. In order to more accurately replicate natural cycles lighting users must be able to set a wake and bed time. A equivalent melanopic lux of 250 must be maintained in the period of the day between the indicated wake time and two hours before the indicated bed time. An equivalent melanopic lux of 50 or less is required for the period of the day spanning from two hours before the indicated bed time through the wake time. In addition at the indicated wake time melanopic lux should increase from 0 to 250 over the course of at least 15 minutes.\n", "It is not, however, clear precisely what signal (or signals) enacts principal entrainment to the many biochemical clocks contained in tissues throughout the body. See section \"regulation of circadian oscillators\" below for more details.\n\nSection::::Transcriptional and non-transcriptional control.\n", "In 1993, a different model called the opponent process model was proposed. This model explained that these two processes opposed each other to produce sleep, as against Borbely's model. According to this model, the SCN, which is involved in the circadian rhythm, enhances wakefulness and opposes the homeostatic rhythm. In opposition is the homeostatic rhythm, regulated via a complex multisynaptic pathway in the hypothalamus that acts like a switch and shuts off the arousal system. Both effects together produce a see-saw like effect of sleep and wakefulness. More recently, it has been proposed that both models have some validity to them, while new theories hold that inhibition of NREM sleep by REM could also play a role. In any case, the two process mechanism adds flexibility to the simple circadian rhythm and could have evolved as an adaptive measure.\n", "Any biological process in the body that repeats itself over a period of approximately 24 hours and maintains this rhythm in the absence of external stimuli is considered a circadian rhythm. It is believed that the brain's suprachiasmatic nucleus (SCN), or internal pacemaker, is responsible for regulating the body's biological rhythms, influenced by a combination of internal and external cues. 
To maintain clock-environment synchrony, zeitgebers induce changes in the concentrations of the molecular components of the clock to levels consistent with the appropriate stage in the 24-hour cycle, a process termed entrainment.\n", "Sleep timing is controlled by the circadian clock (Process C), sleep–wake homeostasis (Process S), and to some extent by individual will.\n\nSection::::Timing.:Circadian clock.\n", "Scientific studies on sleep having shown that sleep stage at awakening is an important factor in amplifying sleep inertia. Alarm clocks involving \"sleep stage monitoring\" appeared on the market in 2005. The alarm clocks use sensing technologies such as EEG electrodes and accelerometers to wake people from sleep.Dawn simulators are another technology meant to mediate these effects.\n", "Unprovoked awakening occurs most commonly during or after a period of REM sleep, as body temperature is rising.\n\nSection::::Continuation during wakefulness.\n", "The internal circadian clock is profoundly influenced by changes in light, since these are its main clues about what time it is. Exposure to even small amounts of light during the night can suppress melatonin secretion, and increase body temperature and wakefulness. Short pulses of light, at the right moment in the circadian cycle, can significantly 'reset' the internal clock. Blue light, in particular, exerts the strongest effect, leading to concerns that electronic media use before bed may interfere with sleep.\n", "Section::::Other machine states and LAN wakeup signals.:Waking up without operator presence.\n", "Because hormones play a major role in energy balance and metabolism, and sleep plays a critical role in the timing and amplitude of their secretion, sleep has a sizable effect on metabolism. This could explain some of the early theories of sleep function that predicted that sleep has a metabolic regulation role.\n\nSection::::Sleep function.:Memory processing.\n", "The hormones cortisol and melatonin are effected by the signals light sends through the body's nervous system. These hormones help regulate blood sugar to give the body the appropriate amount of energy that is required throughout the day. Cortisol is levels are high upon waking and gradual decrease over the course of the day, melatonin levels are high when the body is entering and exiting a sleeping status and are very low over the course of waking hours. The earth's natural light-dark cycle is the basis for the release of these hormones.\n" ]
[ "Nothing causes natural wake up. " ]
[ "Having enough sleep cycles causes natural wake up. " ]
[ "false presupposition" ]
[ "Nothing causes natural wake up. ", "Nothing causes natural wake up. " ]
[ "false presupposition", "normal" ]
[ "Having enough sleep cycles causes natural wake up. ", "Having enough sleep cycles causes natural wake up. " ]
2018-20434
Why are colours on a screen extremely weird when you look at the screen from an angle?
Because the structure that makes up the screen, if you imagine it blown up to a giant scale, isn't completely flat. Imagine you're looking down on a city full of skyscrapers. If you look straight down, you can see the roads fine, but the more you look across at an angle, the more the roads are obscured, until eventually you can't see anything but the buildings and their sides; the roads are hidden behind them. The same effect happens on an LCD screen. Looking straight at it, all is well; look at an angle and the tiny structure of the screen starts getting in the way of the light being sent out, throwing the colours out of whack, because some colours get obscured before others do.
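The "skyscraper" picture in this answer can be turned into a toy bit of geometry: treat each sub-pixel as sitting at the bottom of a shallow well and compute how much of it the well walls hide as the viewing angle increases. This is only a rough sketch; the aperture width, well depth and the function name visible_fraction are invented for illustration and do not describe the geometry of any real panel.

```python
import math

# Toy occlusion model: a sub-pixel of width `aperture_um` sits at the bottom of
# a well of depth `depth_um`; tilting the view by `angle_deg` hides part of it.
def visible_fraction(aperture_um: float, depth_um: float, angle_deg: float) -> float:
    occluded = depth_um * math.tan(math.radians(angle_deg))
    return max(0.0, (aperture_um - occluded) / aperture_um)

for angle in (0, 20, 40, 60):
    frac = visible_fraction(aperture_um=100, depth_um=60, angle_deg=angle)
    print(f"{angle:2d} deg -> {frac:.0%} of the sub-pixel still visible")
```

With these made-up dimensions the visible fraction drops from 100% head-on to roughly half at 40 degrees and nothing at 60 degrees, and because the three colour sub-pixels sit at slightly different positions they get clipped by different amounts, which is the colour shift the answer describes.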
[ "TN displays suffer from limited viewing angles, especially in the vertical direction. Colors will shift when viewed off-perpendicular. In the vertical direction, colors will shift so much that they will invert past a certain angle.\n", "When different screens are combined, a number of distracting visual effects can occur, including the edges being overly emphasized, as well as a moiré pattern. This problem can be reduced by rotating the screens in relation to each other. This screen angle is another common measurement used in printing, measured in degrees clockwise from a line running to the left (9 o'clock is zero degrees).\n", "A fortunate side-effect of inversion (see above) is that, for most display material, what little cross-talk there is largely cancelled out. For most practical purposes, the level of crosstalk in modern LCDs is negligible.\n\nCertain patterns, particularly those involving fine dots, can interact with the inversion and reveal visible cross-talk. If you try moving a small Window in front of the inversion pattern (above) which makes your screen flicker the most, you may well see cross-talk in the surrounding pattern.\n\nDifferent patterns are required to reveal cross-talk on different displays (depending on their inversion scheme).\n", "This way of proceeding is suitable only when the display device does not exhibit \"loading effects\", which means that the luminance of the test pattern is varying with the size of the test pattern. Such loading effects can be found in CRT-displays and in PDPs. A small test pattern (e.g. 4% window pattern) displayed on these devices can have significantly higher luminance than the corresponding full-screen pattern because the supply current may be limited by special electronic circuits.\n\nSection::::Full-swing contrast.\n", "In offset printing, colors are output on separate lithographic plates. Failing to use the correct set of angles to output every color may lead to a sort of optical noise called a moiré pattern which may appear as bands or waves in the final print. There is another disadvantage associated with incorrect sets of angle values, as the colors will look dimmer due to overlapping.\n\nWhile the angles depend on how many colors are used and the preference of the press operator, typical CMYK process printing uses any of the following screen angles:\n", "BULLET::::1. Apply the first test pattern to the electrical interface of the display under test and wait until the optical response has settled to a stable steady state,\n\nBULLET::::2. Measure the luminance and/or the chromaticity of the first test pattern and record the result,\n\nBULLET::::3. Apply the second test pattern to the electrical interface of the display under test and wait until the optical response has settled to a stable steady state,\n\nBULLET::::4. Measure the luminance and/or the chromaticity of the second test pattern and record the result,\n", "An example of pixel shape affecting \"resolution\" or perceived sharpness: displaying more information in a smaller area using a higher resolution makes the image much clearer or \"sharper\". However, most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to \"fix\" the non-native resolution input into the display's native resolution output.\n", "BULLET::::2. The LCD moves around two axes which are at a right angle to each other, so that the screen both tilts and swivels. 
This type is called \"swivel screen\". Other names for this type are \"vari-angle screen\", \"fully articulated screen\", \"fully articulating screen\", \"rotating screen\", \"multi-angle screen\", \"variable angle screen\", \"flip-out-and-twist screen\", \"twist-and-tilt screen\" and \"swing-and-tilt screen\".\n", "If the reflective properties of the projection screen (usually depending on direction) are included in the measurement, the luminance reflected from the centers of the rectangles has to be measured for a (set of) specific directions of observation.\n\nLuminance, contrast and chromaticity of LCD-screens is usually varying with the direction of observation (i.e. viewing direction). The variation of electro-optical characteristics with viewing direction can be measured sequentially by mechanical scanning of the viewing cone (\"gonioscopic\" approach) or by simultaneous measurements based on conoscopy.\n\nSection::::See also.\n\nBULLET::::- Contrast (vision)\n\nSection::::External links.\n\nBULLET::::- Charles Poynton:\" Reducing eyestrain from video and computer monitors\"\n", "Screen angle\n\nIn offset printing, the screen angle is the angle at which the halftones of a separated color is made output to a lithographic film, hence, printed on final product media.\n\nSection::::Why screen angles should differ.\n", "BULLET::::- Test cards including large circles were used to confirm the linearity of the set's deflection systems. As solid-state components replaced vacuum tubes in receiver deflection circuits, linearity adjustments were less frequently required (few newer sets have user-adjustable \"VERT SIZE\" and \"VERT LIN\" controls, for example). In LCD and other deflectionless displays, the linearity is a function of the display panel's manufacturing quality; for the display to work, the tolerances will already be far tighter than human perception.\n", "Some LCDs compensate the inter-pixel color mix effect by having borders between pixels slightly larger than borders between subpixels. Then, in the example above, a viewer of such an LCD would see a blue line appearing adjacent to a red line instead of a single magenta line.\n\nSection::::PenTile.:Example with - alternated stripes layout.\n", "The image may seem garbled, poorly saturated, of poor contrast, blurry or too faint outside the stated viewing angle range, the exact mode of \"failure\" depends on the display type in question. For example, some projection screens reflect more light perpendicular to the screen and less light to the sides, making the screen appear much darker (and sometimes colors distorted) if the viewer is not in front of the screen. Many manufacturers of projection screens thus define the viewing angle as the angle at which the luminance of the image is exactly half of the maximum. With LCD screens, some manufacturers have opted to measure the contrast ratio, and report the viewing angle as the angle where the contrast ratio exceeds 5:1 or 10:1, giving minimally acceptable viewing conditions.\n", "BULLET::::- Twisted Nematic (TN): This type of display is the most common and makes use of twisted nematic-phase crystals, which have a natural helical structure and can be untwisted by an applied voltage to allow light to pass through. These displays have low production costs and fast response times but also limited viewing angles, and many have a limited color gamut that cannot take full advantage of advanced graphics cards. 
These limitations are due to variation in the angles of the liquid crystal molecules at different depths, restricting the angles at which light can leave the pixel.\n", "When projecting images onto a completely flat screen, the distance light has to travel from its point of origin (i.e., the projector) increases the farther away the destination point is from the screen's center. This variance in the distance traveled results in a distortion phenomenon known as the pincushion effect, where the image at the left and right edges of the screen becomes bowed inwards and stretched vertically, making the entire image appear blurry.\n", "Photographs of a TV screen taken with a digital camera often exhibit moiré patterns. Since both the TV screen and the digital camera use a scanning technique to produce or to capture pictures with horizontal scan lines, the conflicting sets of lines cause the moiré patterns. To avoid the effect, the digital camera can be aimed at an angle of 30 degrees to the TV screen.\n\nSection::::Implications and applications.:Marine navigation.\n", "BULLET::::5. Calculate the resulting \"static contrast\" for the two test patterns using one of the metrics listed above (CR,C or K).\n\nWhen luminance and/or chromaticity are measured before the optical response has settled to a stable steady state, some kind of \"transient contrast\" has been measured instead of the \"static contrast\".\n\nSection::::Transient contrast.\n\nWhen the image content is changing rapidly, e.g. during the display of video or movie content, the optical state of the display may not reach the intended stable steady state because of slow response and thus the apparent contrast is reduced if compared to the static contrast.\n", "BULLET::::- Viewing angle: The maximum angle at which the display can be viewed with acceptable quality. The angle is measured from one direction to the opposite direction of the display, such that the maximum viewing angle is 180 degrees. Outside of this angle the viewer will see a distorted version of the image being displayed. The definition of what is acceptable quality for the image can be different among manufacturers and display types. Many manufacturers define this as the point at which the luminance is half of the maximum luminance. Some manufacturers define it based on contrast ratio and look at the angle at which a certain contrast ratio is realized.\n", "BULLET::::- Inverted OLED: In contrast to a conventional OLED, in which the anode is placed on the substrate, an Inverted OLED uses a bottom cathode that can be connected to the drain end of an n-channel TFT especially for the low cost amorphous silicon TFT backplane useful in the manufacturing of AMOLED displays.\n\nSection::::Color Patterning technologies.\n\nSection::::Color Patterning technologies.:Shadow Mask patterning method.\n", "In order to measure the highest contrast possible, the dark state of the display under test must not be corrupted by light from the surroundings, since even small increments ΔL in the denominator of the ratio (L + ΔL) / (L + ΔL) effect a considerable reduction of that quotient. 
This is the reason why most contrast ratios used for advertising purposes are measured under dark-room conditions (illuminance E ≤ 1 lx).\n", "Informative subject testing done at the Rochester Institute of Technology’s Munsell Color Science Lab discovered consistent color perception difficulties when identical subjects performed the Munsell Vision Test on varying calibrated monitors in a test comparing color vision test results between Apple MacBook Pro laptop displays and a Samsung LCD Monitor. Results garnered from the experiment exemplified the differences that displays can exhibit in failure to accurately quantify color. Incident angle to the test monitor is a final strong source of experimental uncertainty, as very few monitors commercially available are capable of accurately representing hue, tone and saturation consistently at all viewing angles incident to the monitor.\n", "Section::::History.:Curved screen vs. flat.\n", "Since always the dark areas of a display are corrupted by reflected light, reasonable \"ambient contrast\" values can only be maintained when the display is provided with efficient measures to reduce reflections by anti reflection and/or anti-glare coatings.\n\nSection::::Concurrent contrast.\n", "Pixel geometry\n\nThe components of the pixels (primary colors red, green and blue) in an image sensor or display can be ordered in different patterns, called pixel geometry.\n\nThe geometric arrangement of the primary colors within a pixel varies depending on usage (see figure 1). In monitors, such as LCDs or CRTs, that typically display edges or rectangles, the components are arranged in vertical stripes. Displays with motion pictures should instead have triangular or diagonal patterns so that the image variation is perceived better by the viewer.\n", "In graphic arts and prepress, the usual technology for printing full-color images involves the superimposition of halftone screens. These are regular rectangular dot patterns—often four of them, printed in cyan, yellow, magenta, and black. Some kind of moiré pattern is inevitable, but in favorable circumstances the pattern is \"tight\"; that is, the spatial frequency of the moiré is so high that it is not noticeable. In the graphic arts, the term \"moiré\" means an \"excessively visible\" moiré pattern. Part of the prepress art consists of selecting screen angles and halftone frequencies which minimize moiré. The visibility of moiré is not entirely predictable. The same set of screens may produce good results with some images, but visible moiré with others.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-09115
Could a body be far enough away from a fired bullet that the bullet does not pierce flesh but maybe bounces off?
It would depend on bullet shape, initial muzzle energy, angle of impact to the skin, skin toughness, etc. Human skin is not as tough as hog skin or rhino skin. For most bullets, though, the answer is never. Quote from Wikipedia: firearms expert Julian Hatcher studied falling bullets in the 1920s and calculated that .30 caliber rounds reach terminal velocities of 90 m/s (300 feet per second or 204 miles per hour). A bullet traveling at only 61 m/s (200 feet per second) to 100 m/s (330 feet per second) can penetrate human skin. In other words, even a bullet that has slowed all the way to its terminal velocity is still within the range that can penetrate skin, so a bullet fired at someone essentially never slows down enough to simply bounce off.
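A rough back-of-the-envelope check of the falling-bullet figure quoted in this answer, using the standard terminal-velocity relation v = sqrt(2mg / (rho * Cd * A)). The bullet mass and especially the drag coefficient below are guesses chosen for illustration, so the result only shows that Hatcher's ~90 m/s figure is plausible; it is not a reproduction of his measurement.

```python
import math

# Terminal velocity of a falling projectile: v = sqrt(2*m*g / (rho * Cd * A)).
# Mass is roughly that of a .30-calibre bullet; the drag coefficient is a guess.
def terminal_velocity(mass_kg, diameter_m, drag_coeff, air_density=1.225, g=9.81):
    area = math.pi * (diameter_m / 2) ** 2
    return math.sqrt(2 * mass_kg * g / (air_density * drag_coeff * area))

v = terminal_velocity(mass_kg=0.010, diameter_m=0.00762, drag_coeff=0.45)
print(f"estimated terminal velocity: {v:.0f} m/s")  # ~88 m/s, near Hatcher's ~90 m/s
```

The point of the comparison is that this terminal speed sits at the top of the 61 to 100 m/s skin-penetration range quoted above, which is why "far enough away to bounce off" effectively never happens.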
[ "Goodfellow examined Billy and found that two buckshot grains had penetrated Billy's thick Mexican felt hat band embroidered with silver wire, penetrating his head and flattened against the posterior wall of the skull. Another of the grains had passed through two heavy wool shirts and a blanket-lined canvas coat and vest before coming to rest deep in his chest. But Goodfellow was fascinated to find in the folds of a Chinese silk neckerchief around Grounds' neck two shotgun pellets but no holes and no wounds.\n", "In 1887, George E. Goodfellow, of Tombstone, Arizona, documented three cases where bullets had failed to penetrate silk articles of clothing. He described the shooting death of Charlie Storms by gambler Luke Short. Although shot in the heart, \"not a drop of blood\" exited Charlie Storms' wound. Goodfellow found though the bullet did indeed kill Charlie Storms, it failed to pass through a silk handkerchief, essentially catching the bullet, but it was not enough to stop the bullet entirely. Another was the killing of Billy Grounds by Assistant City Marshal Billy Breakenridge.\n", "The autopsy noted the absence of stippling, powder burns around a wound that indicate a shot was fired at a relatively short range. Dr. Michael Graham, the medical examiner, notes gunshot wounds within an inch of the body do not always cause stippling. Microscopic examination of tissue taken from the thumb wound detected the presence of a foreign material consistent with the material which is ejected from a gun while firing. The gunshot wound to the top of Brown's head was consistent with Brown either falling forward or being in a lunging position; the shot was instantly fatal.\n", "Searching the house, authorities found:\n\nBULLET::::- Whole human bones and fragments\n\nBULLET::::- A wastebasket made of human skin\n\nBULLET::::- Human skin covering several chair seats\n\nBULLET::::- Skulls on his bedposts\n\nBULLET::::- Female skulls, some with the tops sawn off\n\nBULLET::::- Bowls made from human skulls\n\nBULLET::::- A corset made from a female torso skinned from shoulders to waist\n\nBULLET::::- Leggings made from human leg skin\n\nBULLET::::- Masks made from the skin of female heads\n\nBULLET::::- Mary Hogan's face mask in a paper bag\n\nBULLET::::- Mary Hogan's skull in a box\n\nBULLET::::- Bernice Worden's entire head in a burlap sack\n", "Dr. Dexter Lloyd examined Stilwell's body and found a bullet wound that passed through his entire body from arm pit through the upper portion of his lungs and out the other arm pit. A second rifle bullet wound had passed through his upper left arm. One round of buckshot left six holes within a radius of , and penetrated his liver, stomach, and abdomen, leaving powder burns on his coat. A second round of buckshot had hit his left leg, breaking the bone, and a rifle shot had struck the fleshy portion of his right leg. Either the shot through the lungs or the buckshot in the abdomen was sufficient to kill him. The coroner reported that Stilwell had been shot by five different weapons.\n", "BULLET::::- they typically have a black outline along the edges of the body and scroll\n\nBULLET::::- no artificial process of heating or chemically treating the wood\n\nBULLET::::- constructed of old wood that was dried naturally\n\nBULLET::::- the bass barring (as well as other aspects) adjusted according to the age and type of wood he used\n\nBULLET::::- his best work is approximately from 1875–1910\n", "BULLET::::- Henry H. 
Starkweather (1826–1876), a United States Congressman from Connecticut\n\nBULLET::::- John Amsden Starkweather (1925–2001), a professor of medical psychology at University of California, San Francisco\n\nBULLET::::- John Converse Starkweather (1829–1890), a brigadier general in the Civil War and Washington, D.C., lawyer\n\nBULLET::::- Norris Garshom Starkweather (1818–1885), an American architect\n\nBULLET::::- Samuel Starkweather (1799–1876), collector of the ports of Cleveland, Ohio lighthouse superintendent, Cleveland mayor\n\nSection::::Other.\n\nBULLET::::- Starkweather, North Dakota, a city located in Ramsey County, North Dakota\n\nBULLET::::- Starkweather (band), a hardcore / metal (metalcore) band from Philadelphia\n\nBULLET::::- \"Starkweather\" (film), a 2004 film based on Charles Starkweather\n", "Several of the same type 6.5 millimeter test bullets were test-fired by the Warren Commission investigators. The test bullet that most matched the slight side flattening and nearly pristine, still rounded impact tip of CE 399 was a bullet that had only been fired into a long tube containing a thick layer of cotton. Later tests show that such bullets survive intact when fired into solid wood and multiple layers of skin and ballistic gel, as well.\n", "If the bullet exited Connally's chest below the nipple the lapel would be too high to have popped out due to direct contact with the bullet but surgeon John Lattimer has argued that jacket bulged out because of the \"hail of rib fragments and soft tissue\" as the bullet tumbled in Connally's body.\n\nSection::::Neutron activation analysis of bullet fragments.\n\nSection::::Neutron activation analysis of bullet fragments.:Original bullet lead analysis by Vincent Guinn.\n", "Col. LaGarde noted Caspi's wounds were fairly well-placed: three bullets entered the chest, perforating the lungs. One passed through the body, one lodged near the back and the other lodged in subcutaneous tissue. The fourth round went through the right hand and exited through the forearm.\n", "Perry stated three times at a press conference later that day that Kennedy's neck wound appeared to be an entrance wound. Although his statement appeared to be definitive, he had not intended it to be. When interviewed by the Warren Commission, Perry said that he then believed that a \"full jacketed bullet without deformation passing through the skin would leave a similar wound for an exit and entrance wound and with the facts which you have made available and with these assumptions, I believe that it was an exit wound.\"\n", "BULLET::::- broke his right radius wrist bone at its widest point, depositing metal fragments, (post-operative x-rays document that some of the metal fragments are still buried with him, as mentioned above),\n\nBULLET::::- exited the palm (inner) side of Connally's wrist,\n\nBULLET::::- slowed to and entered the front side of his left thigh, creating a documented 10-millimeter nearly round wound,\n\nBULLET::::- buried itself shallowly into Connally's left thigh muscles,\n\nBULLET::::- then fell out at Parkland Hospital, perhaps when Connally was undressed,\n\nBULLET::::- landed on Connally's gurney,\n", "This \"single bullet,\" which was full metal jacketed and specifically designed to pass through the human body, was deformed and not in a pristine state as some detractors claim. 
Though a side view seems to show no visible damage, a view from the end of the bullet shows a significant flattening which occurred when, according to the theory, the bullet struck Connally's wrist, butt end first. The metallurgical composition of the bullet fragments in the wrist was compared to the composition of the samples taken from the base of CE 399.\n", "Dr. Robert Shaw described the wound on Connally's back as \"a small wound of entrance, roughly elliptical in shape, and approximately a cm. and a half in its longest diameter, in the right posterior shoulder, which is medial to the fold of the axilla\".\n", "Firearms such as muzzleloaders and shotguns often have additional materials in the shot, such as a patch or wadding. While they are generally too lightweight to penetrate at longer ranges, they will penetrate in a contact shot. Since these are often made of porous materials such as cloth and cardboard, there is a significantly elevated risk of infection from the wound.\n\nSection::::Characteristics.\n", "BULLET::::- completely destroyed of Connally's fifth right rib bone as it smashed through his chest interior at a documented 10-degree anatomically downward angle, (post-operative x-rays document that some of the metal fragments remained in Connally's wrist for life and were buried with him many years later. There were no fragments seen in any chest x-rays)\n\nBULLET::::- exited slightly below his right nipple, creating a 50 millimeter, sucking-air, blowout chest wound,\n\nBULLET::::- passed through Connally's shirt and suit coat front, exiting roughly central on the coat's right side, just under the lowest point of the right lapel,\n", "BULLET::::- William Greer Harrison (1836–1916), member of the Committee of Fifty after the 1906 San Francisco earthquake\n\nBULLET::::- William B. Harrison (Alamo defender) (1811–1836), Texan soldier\n\nBULLET::::- William Henry Harrison (architect) (1897–1988), American architect in California\n\nBULLET::::- William Henry Harrison (Georgia), a leader of Georgia's African American community during the Reconstruction Era after the American Civil War\n\nBULLET::::- William Jerome Harrison (1845–1909), British geologist, science writer, and amateur photographer\n\nBULLET::::- \"Victim\" of a 1660 murder found alive two years later; see The Campden Wonder\n\nBULLET::::- Grancer Harrison (1789–1860), plantation owner whose alleged ghost has been the subject of several stories\n", "BULLET::::- impacted, then entered President Kennedy to the right of his spine, creating a wound documented size of 4 millimeters by 7 millimeters in the rear of his upper back with a red-brown to black area of skin surrounding the wound, forming what is called an abrasion collar. This abrasion collar was caused by the bullet's scraping the margins of the skin on penetration and is characteristic of a gunshot wound of entrance. This abrasion collar was photographically documented to be larger at the lower margin half of the wound, which is strong evidence that the bullet's long-axis orientation at the instant of penetration was slightly upward in relation to the plane of the skin immediately surrounding the wound; however, the skin of Kennedy's upper back slopes inward, and the Croft photo (taken at Zapruder frame 162, shortly before Kennedy was hit) shows the President slumped forward. This would suggest that a shooting position above and to the rear of Kennedy was possible\n", "BULLET::::1. 
Friction ridges develop on the fetus in their definitive form prior to birth.\n\nBULLET::::2. Friction ridges are persistent throughout life except for permanent scarring, disease, or decomposition after death.\n\nBULLET::::3. Friction ridge paths and the details in small areas of friction ridges are unique and never repeated.\n\nBULLET::::4. Overall, friction ridge patterns vary within limits which allow for classification.\n", "The degree of tissue disruption caused by a projectile is related to the cavitation the projectile creates as it passes through tissue. A bullet with sufficient energy will have a cavitation effect in addition to the penetrating track injury. As the bullet passes through the tissue, initially crushing then lacerating, the space left forms a cavity; this is called the permanent cavity. Higher-velocity bullets create a pressure wave that forces the tissues away, creating not only a permanent cavity the size of the caliber of the bullet but also a temporary cavity or secondary cavity, which is often many times larger than the bullet itself. The temporary cavity is the radial stretching of tissue around the bullet's wound track, which momentarily leaves an empty space caused by high pressures surrounding the projectile that accelerate material away from its path. The extent of cavitation, in turn, is related to the following characteristics of the projectile:\n", "Regarding the bullet that he remembered impacting his back, Connally stated, \"...the most curious discovery of all took place when they rolled me off the stretcher and onto the examining table. A metal object fell to the floor, with a click no louder than a wedding band. The nurse picked it up and slipped it into her pocket. It was the bullet from my body, the one that passed through my back, chest, and wrist, and worked itself loose from my thigh.\" Connally does not say how he determined this object to have been a bullet, rather than his missing gold cufflink.\n", "The bullet design can produce deep wounds while failing to pass through structural barriers thicker than drywall or sheet metal. These qualities make it less likely to strike unintended targets, such as people in another room during an indoor shooting. 
Also, when it strikes a hard surface from which a solid bullet would glance off, it fragments into tiny, light pieces and creates much less ricochet danger.\n", "BULLET::::- 1912: 3427 Kansas (29th) Street\n\nBULLET::::- 1912: 3532 Ray Street\n\nBULLET::::- 1912: *4505 Del Mar\n\nBULLET::::- 1912: 3524 30th Street\n\nBULLET::::- 1912: 3049 Palm Street\n\nBULLET::::- 1913:\n\nBULLET::::- 1913: *3031 Landis Street\n\nBULLET::::- 1913: *3648 Ray Street\n\nBULLET::::- 1913: 2203 Cliff Street\n\nBULLET::::- 1913: 3820 Center Street\n\nBULLET::::- 1913: 4720 Panorama Street\n\nBULLET::::- 1913: 3511 Utah\n\nBULLET::::- 1913: 3634 Utah\n\nBULLET::::- 1914: 2230 Adams Avenue\n\nBULLET::::- 1914: 2242 Adams Avenue\n\nBULLET::::- 1914: 4724 Panorama Street\n\nBULLET::::- 1914: 4780 Panorama Street\n\nBULLET::::- 1914: *4525 Kansas Street\n\nBULLET::::- 1914: 3586 30th Street\n\nBULLET::::- 1914: 3044 Goldsmith\n\nBULLET::::- 1914: 3036 Goldsmith\n", "BULLET::::- \"Silent Death\" describes routes for manufacturing nerve gases (such as tabun and sarin gas), botulinum (botulin toxin), ricin, phosgene, arsine, and other poisons.\n\nBULLET::::- \"Vest Busters\", Fester's smallest book, describes easy methods for manufacturing steel bullets using a lathe and methods of coating them with Teflon, a substance which has been proven to slightly increase the velocity of a bullet. Both velocity and the strength of a bullet's core are the two most significant factors in determining whether a bullet has armor-piercing capabilities.\n", "The theory of a \"single bullet\" places a bullet wound as shown in the autopsy photos and X-rays, at the first thoracic vertebra of the vertebral column. The official autopsy report on the President, Warren Exhibit CE 386, described the back wound as being oval-shaped, 6 x 4 mm, and located \"above the upper border of the scapula\" [shoulder blade] at a location from the tip of the right acromion process, and below the right mastoid process (the boney prominence behind the ear). The report also reported contusion (bruise) of the apex (top tip) of the right lung in the region where it rises above the clavicle, and noted that although the apex of the right lung and the parietal pleural membrane over it had been bruised, they were not penetrated. The report also noted that the thoracic cavity was not penetrated.\n" ]
[ "That there may be a circumstance when a bullet will not pierce human skin if person if far enough away." ]
[ "For most bullets, the answer is \"never\"." ]
[ "false presupposition" ]
[ "That there may be a circumstance when a bullet will not pierce human skin if person if far enough away." ]
[ "false presupposition" ]
[ "For most bullets, the answer is \"never\"." ]
2018-05251
Why can’t lightning occur in a vacuum?
There's no air to ionise, therefore no possible physical path for the electrical charge to take.
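A small arithmetic sketch of why the gas matters, using two figures that appear in the passages quoted below: air over small gaps breaks down at roughly 30 kV/cm (about 3 MV/m), and a large storm cloud can reach potentials on the order of 100 MV. The gap lengths in the loop are illustration values only, and the passage notes that the effective breakdown strength is lower over large gaps, so this is just an order-of-magnitude picture, not a lightning model.

```python
BREAKDOWN_FIELD_V_PER_M = 3e6  # ~30 kV/cm over small gaps in air, per the passage below

for gap_m in (0.01, 1.0, 100.0):
    volts_needed = BREAKDOWN_FIELD_V_PER_M * gap_m
    print(f"air gap of {gap_m:>6} m needs ~{volts_needed:.0e} V to arc")

# A vacuum contains essentially no gas molecules to ionise, so there is no
# medium in which a breakdown channel of this kind can form in the first place.
```

The numbers only say when air stops insulating; in a vacuum there is no gas whose insulation can "fail", so there is nothing to carve a conducting channel through, which is the point of the answer above.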
[ "Atmospheric pressure discharge\n\nAn atmospheric pressure discharge is an electrical discharge in air at atmospheric pressure.\n\nAn electrical discharge is a plasma, which is an ionized gas. Plasmas are sustained if there is a continuous source of energy to maintain the required degree of ionization and overcome the recombination events that lead to extinction of the discharge. Recombination events are proportional to collisions between molecules and thus to the pressure of the gas. Atmospheric discharges are thus difficult to maintain as they require a large amount of energy.\n\nTypical atmospheric discharges are:\n\nBULLET::::- DC arc\n\nBULLET::::- Lightning\n", "In order for an electrostatic discharge to occur, two preconditions are necessary: firstly, a sufficiently high potential difference between two regions of space must exist, and secondly, a high-resistance medium must obstruct the free, unimpeded equalization of the opposite charges. The atmosphere provides the electrical insulation, or barrier, that prevents free equalization between charged regions of opposite polarity.\n\nIt is well understood that during a thunderstorm there is charge separation and aggregation in certain regions of the cloud; however the exact processes by which this occurs are not fully understood.\n\nSection::::Necessary conditions.:Electrical field generation.\n", "After a cloud, for instance, has started its way to becoming a lightning generator, atmospheric water vapor acts as a substance (or insulator) that decreases the ability of the cloud to discharge its electrical energy. Over a certain amount of time, if the cloud continues to generate and store more static electricity, the barrier that was created by the atmospheric water vapor will ultimately break down from the stored electrical potential energy. This energy will be released to a local oppositely charged region, in the form of lightning. The strength of each discharge is directly related to the atmospheric permittivity, capacitance, and the source's charge generating ability.\n", "An average bolt of lightning carries a negative electric current of 40 kiloamperes (kA) (although some bolts can be up to 120 kA), and transfers a charge of five coulombs and energy of 500 MJ, or enough energy to power a 100-watt lightbulb for just under two months. The voltage depends on the length of the bolt, with the dielectric breakdown of air being three million volts per meter, and lightning bolts often being several hundred meters long. However, lightning leader development is not a simple matter of dielectric breakdown, and the ambient electric fields required for lightning leader propagation can be a few orders of magnitude less than dielectric breakdown strength. Further, the potential gradient inside a well-developed return-stroke channel is on the order of hundreds of volts per meter or less due to intense channel ionization, resulting in a true power output on the order of megawatts per meter for a vigorous return-stroke current of 100 kA .\n", "The flowing movement of gases in pipes alone creates little, if any, static electricity. It is envisaged that a charge generation mechanism only occurs when solid particles or liquid droplets are carried in the gas stream.\n\nSection::::Static discharge.:In space exploration.\n", "Lightning has been observed within the atmospheres of other planets, such as Jupiter and Saturn. 
Although in the minority on Earth, superbolts appear to be common on Jupiter.\n", "About 90% of ionic channel lengths between \"pools\" are approximately in length. The establishment of the ionic channel takes a comparatively long amount of time (hundreds of milliseconds) in comparison to the resulting discharge, which occurs within a few dozen microseconds. The electric current needed to establish the channel, measured in the tens or hundreds of amperes, is dwarfed by subsequent currents during the actual discharge.\n", "BULLET::::- On 6 August 1944, a ball of lightning went through a closed window in Uppsala, Sweden, leaving a circular hole about in diameter. The incident was witnessed by residents in the area, and was recorded by a lightning strike tracking system on the Division for Electricity and Lightning Research at Uppsala University.\n", "A lightning strike or lightning bolt is an electric discharge between the atmosphere and an object. They mostly originate in a cumulonimbus cloud and terminate on the ground, called cloud to ground (CG) lightning. A less common type of strike, called ground to cloud (GC), is upward propagating lightning initiated from a tall grounded object and reaches into the clouds. About 25% of all lightning events worldwide are strikes between the atmosphere and earth-bound objects. The bulk of lightning events are intra-cloud (IC) or cloud to cloud (CC), where discharges only occur high in the atmosphere. Lightning strikes the average commercial aircraft at least once a year, but modern engineering and design means this is rarely a problem. The movement of aircraft through clouds can even cause lightning strikes.\n", "During the \"Infinite Crisis\" storyline, they were subdued by the League of Assassins in Vietnam who were being paid to break open a prison as part of a worldwide scheme to attack Metropolis with dozens of supervillains.\n", "Section::::General considerations.\n", "\"..let us now assume that such a powerful streamer or spark discharge, in its passage through the air, happens to come upon a vacuous sphere or space formed in the manner described. This space, containing gas highly rarefied, may be just in the act of contracting, at any rate, the intense current, passing through the rarefied gas suddenly raises the same to an extremely high temperature, all the higher as the mass of the gas is very small.\n", "Contrary to popular belief, positive lightning flashes do not necessarily originate from the anvil or the upper positive charge region and strike a rain-free area outside of the thunderstorm. This belief is based on the outdated idea that lightning leaders are unipolar in nature and originating from their respective charge region.\n", "A number of observations by space-based telescopes have revealed even higher energy gamma ray emissions, the so-called terrestrial gamma-ray flashes (TGFs). These observations pose a challenge to current theories of lightning, especially with the recent discovery of the clear signatures of antimatter produced in lightning. Recent research has shown that secondary species, produced by these TGFs, such as electrons, positrons, neutrons or protons, can gain energies of up to several tens of MeV.\n\nSection::::Effects.:Air quality.\n", "The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. 
Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.\n", "Although sparks and arcs are usually undesirable, they can be useful in applications such as spark plugs for gasoline engines, electrical welding of metals, or for metal melting in an electric arc furnace. Prior to gas discharge the gas glows with distinct colors that depend on the energy levels of the atoms. Not all mechanisms are fully understood.\n\nThe vacuum itself is expected to undergo electrical breakdown at or near the Schwinger limit.\n\nSection::::Mechanism.:Voltage-current relation.\n", "Section::::Atmospheric electricity.\n\nAtmospheric electricity is the term given to the electrostatics and electrodynamics of the atmosphere (or, more broadly, the atmosphere of any planet). The Earth's surface, the ionosphere, and the atmosphere is known as the global atmospheric electrical circuit. Lightning discharges 30,000 amperes, at up to 100 million volts, and emits light, radio waves, X-rays and even gamma rays. Plasma temperatures in lightning can approach 28,000 kelvins and electron densities may exceed 10/m³.\n\nSection::::Atmospheric tide.\n", "Intra-cloud lightning most commonly occurs between the upper anvil portion and lower reaches of a given thunderstorm. This lightning can sometimes be observed at great distances at night as so-called \"sheet lightning\". In such instances, the observer may see only a flash of light without hearing any thunder.\n", "The actual phenomenon that is sometimes called heat lightning is simply cloud-to-ground lightning that occurs very far away, with thunder that dissipates before it reaches the observer. At night, it is possible to see the flashes of lightning from very far distances, up to 100 miles (160 kilometres), but the sound does not carry that far. In Florida, this type of lightning is often seen over the water at night, the remnants of storms that formed during the day along a sea breeze front coming in from the opposite coast.\n", "Runaway electrons are the core element of the runaway breakdown based theory of lightning propagation. Since C.T.R. Wilson's work in 1925, research has been conducted to study the possibility of runaway electrons, cosmic ray based or otherwise, initiating the processes required to generate lightning.\n\nSection::::Extraterrestrial Occurrence.\n\nElectron runaway based lightning may be occurring on the four jovian planets in addition to earth. Simulated studies predict runaway breakdown processes are likely to occur on these gaseous planets far more easily on earth, as the threshold for runaway breakdown to begin is far smaller.\n\nSection::::High Energy Plasma.\n", "Lightning is usually produced by cumulonimbus clouds, which have bases that are typically 1–2 km (0.6–1.25 miles) above the ground and tops up to in height.\n", "Lightning (DC Comics)\n\nLightning is a fictional superhero appearing in American comic books published by DC Comics. 
Not pinpointed with direct reference, Lightning first appears in the miniseries \"Kingdom Come\" in 1996, written by Mark Waid and illustrated by Alex Ross. The character is given official introduction in \"Justice Society of America\" vol. 3 #12 (March 2008), written by Geoff Johns and illustrated by Dale Eaglesham in the Modern Age of Comic Books.\n", "BULLET::::- The International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, Florida typically uses rocket triggered lightning in their research studies.\n\nBULLET::::- Laser-triggered\n\nBULLET::::- Since the 1970s, researchers have attempted to trigger lightning strikes by means of infrared or ultraviolet lasers, which create a channel of ionized gas through which the lightning would be conducted to ground. Such triggering of lightning is intended to protect rocket launching pads, electric power facilities, and other sensitive targets.\n", "Atmospheric electricity involves both thunderstorms, which create lightning bolts to rapidly discharge huge amounts of atmospheric charge stored in storm clouds, and the continual electrification of the air due to ionization from cosmic rays and natural radioactivity, which ensure that the atmosphere is never quite neutral.\n\nSection::::History.\n", "Along with comic books, Lightning has made appearances in various television shows and the character is portrayed by China Anne McClain in the live action series \"Black Lightning\".\n\nSection::::Publication history.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01317
Why is your voice deeper when you wake up?
Your vocal cords aren't stretched yet. Think of it like a rubber band: pluck it when it isn't completely taut and it gives a lower pitch; stretch it out and pluck it again and the pitch is much higher.
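The rubber-band analogy can be made slightly more concrete with the ideal vibrating-string relation f = (1/(2L)) * sqrt(T/mu): for a fixed length and mass per unit length, more tension means a higher fundamental frequency. Vocal folds are not ideal strings, and every number in this sketch is a made-up illustration value chosen only so the outputs land in a speech-like range.

```python
import math

# Ideal string: f = (1 / (2 * L)) * sqrt(T / mu). All values below are made up.
def string_fundamental(length_m, tension_n, mass_per_length_kg_m):
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / mass_per_length_kg_m)

relaxed = string_fundamental(length_m=0.016, tension_n=0.2, mass_per_length_kg_m=0.01)
taut = string_fundamental(length_m=0.016, tension_n=0.8, mass_per_length_kg_m=0.01)
print(f"slack: ~{relaxed:.0f} Hz, stretched: ~{taut:.0f} Hz")  # tension x4 -> pitch x2
```

Quadrupling the tension doubles the fundamental in this toy model, which is the direction of the effect the answer describes: relaxed, slightly swollen folds after sleep vibrate at a lower pitch than folds under their usual daytime tension.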
[ "Preferences for voice pitch change across the cycle. When seeking a short term mating partner, women may prefer a male with a low voice pitch, particularly during the fertile phase. During the late follicular phase, it is common for women demonstrate a preference for mates with a masculine, deep voice. Research has also been conducted on the attractiveness of the female voice throughout the cycle. During their most fertile phase of the menstrual cycle, there is some evidence that female voices are rated as significantly more attractive. This effect is not found with women on the birth control pill.\n", "During an interview with Stevie Rennie on October 28, 2014, Shadows mentioned that he on purpose had changed his voice to become less raspy and distorted while touring in 2014. The change was due to longer live shows lasting from one and a half to two hours in support of \"Hail to the King\". Shadows went on to say that \"you better be taking care of yourself or you're gonna be cancelling shows\" and \"I wanted to hit notes over the rasps\".\n\nSection::::Other projects.\n", "Airway resistance increases by about 230% during NREM sleep. Elastic and flow resistive properties of the lung do not change during NREM sleep. The increase in resistance comes primarily from the upper airway in the retroepiglottic region. Tonic activity of the pharyngeal dilator muscles of the upper airway decreases during the NREM sleep, contributing to the increased resistance, which is reflected in increased esophageal pressure swings during sleep. The other ventilatory muscles compensate for the increased resistance, and so the airflow decreases much less than the increase in resistance.\n\nSection::::Normal.:Steady NREM (Non-REM) sleep.:Arterial blood gases.\n", "Upper airway resistance is expected to be highest during REM sleep because of atonia of the pharyngeal dilator muscles and partial airway collapse. Many studies have shown this, but not all. Some have shown unchanged airway resistance during REM sleep, others have shown it to increase to NREM levels.\n\nSection::::Normal.:Steady REM Sleep.:Arterial blood gases.\n\nHypoxemia due to hypoventilation is noted in REM sleep but this is less well studied than NREM sleep. These changes are equal to or greater than NREM sleep\n\nSection::::Normal.:Steady REM Sleep.:Pulmonary arterial pressure.\n\nPulmonary arterial pressure fluctuates with respiration and rises during REM sleep.\n", "Shadows was specifically looking to add a more gritty, raspy tone to his voice and worked with Anderson for several months on this before \"City of Evil\" was recorded. This change resulted in newly established vocal contributions from each band member during live performances, and remained prevalent on every record the band has released since 2005.\n", "Section::::Physiological process.\n\nIn the modal register, the length, the tension, and the mass of the vocal folds are in a state of flux which causes the frequency of vibration of the vocal folds to vary. As pitch rises, the vocal folds increase in length and in tension, and their edges become thinner. 
If a speaker or singer holds any of the three factors constant and interferes with the progressive state of change, the laryngeal function of the voice becomes static and eventually breaks occur resulting in obvious changes in vocal quality.\n", "Section::::Clinical significance.:Reinke’s edema.\n\nA voice pathology called Reinke’s edema, swelling due to abnormal accumulation of fluid, occurs in the superficial lamina propria or Reinke’s space. This causes the vocal fold mucosa to appear floppy with excessive movement of the cover that has been described as looking like a loose sock. The greater mass of the vocal folds due to increased fluid lowers the fundamental frequency (\"f\") during phonation.\n\nSection::::Clinical significance.:Wound healing.\n", "BULLET::::- \"air humidity\" - dry air is thought to increase the stress experienced in the vocal folds, however, this has not been proven\n\nBULLET::::- \"hydration\" - dehydration may increase effects of stress inflicted on the vocal folds\n\nBULLET::::- \"background noise\" - people tend to speak louder when background noise is present, even when it isn't necessary. Increasing speaking volume increases stress inflicted on the vocal folds\n\nBULLET::::- \"pitch\" - Using a higher or lower pitch than normal will also increase laryngeal stress.\n", "Rumors were spread that Shadows had lost his ability to scream due to throat surgery needed after Warped Tour 2003. However, producer Andrew Murdock put down these rumors by saying: \"When I met the band after \"Sounding the Seventh Trumpet…\" Matt handed me the CD, and he said to me, 'This record's screaming. The record we want to make is going to be half-screaming and half-singing. I don't want to scream anymore… the record after that is going to be all singing.'\"\n", "If the vocal folds are held slightly further apart than in modal voicing, they produce phonation types like breathy voice (or murmur) and whispery voice. The tension across the vocal ligaments (vocal cords) is less than in modal voicing allowing for air to flow more freely. Both breathy voice and whispery voice exist on a continuum loosely characterized as going from the more periodic waveform of breathy voice to the more noisy waveform of whispery voice. Acoustically, both tend to dampen the first formant with whispery voice being more extreme deviations. \n", "The modal voice is the usual register for speaking and singing, and the vast majority of both are done in this register. As pitch rises in this register, the vocal folds are lengthened, tension increases, and their edges become thinner. A well-trained singer or speaker can phonate two octaves or more in the modal register with consistent production, beauty of tone, dynamic variety, and vocal freedom. This is possible only if the singer or speaker avoids static laryngeal adjustments and allows the progression from the bottom to the top of the register to be a carefully graduated continuum of readjustments.\n", "Signs that consolidation may have occurred include:\n\nBULLET::::- Expansion of the thorax on inspiration is reduced on the affected side\n\nBULLET::::- Vocal fremitus is increased on the affected side\n\nBULLET::::- Percussion is dull in the affected area\n\nBULLET::::- Breath sounds are bronchial\n\nBULLET::::- Possible medium, late, or pan-inspiratory crackles\n\nBULLET::::- Vocal resonance is increased. 
Here, the patient's voice (or whisper, as in whispered pectoriloquy) can be heard more clearly when there is consolidation, as opposed to the healthy lung where speech sounds muffled.\n\nBULLET::::- A pleural rub may be present.\n", "BULLET::::- \"sac-like\" appearance of the vocal folds\n\nBULLET::::- Hoarseness and deepening of the voice\n\nBULLET::::- Trouble speaking (Dysphonia)\n\nBULLET::::- Reduced vocal range with diminished upper limits\n\nBULLET::::- Stretching of the mucosa (Distension)\n\nBULLET::::- Shortness of breath (Dyspnoea)\n", "Secondly, an increase in the hoarseness and strain of a voice can often be heard. Unfortunately, both properties are difficult to measure objectively, and only perceptual evaluations can be performed.\n\nSection::::Voice care.\n", "In a study of 19 healthy adults, the minute ventilation in NREM sleep was 7.18 ± 0.39(SEM) liters/minute compared to 7.66 ± 0.34 liters/minute when awake.\n\nSection::::Normal.:Steady NREM (Non-REM) sleep.:Rib cage and abdominal muscle contributions.\n\nRib cage contribution to ventilation increases during NREM sleep, mostly by lateral movement, and is detected by an increase in EMG amplitude during breathing. Diaphragm activity is little increased or unchanged and abdominal muscle activity is slightly increased during these sleep stages.\n\nSection::::Normal.:Steady NREM (Non-REM) sleep.:Upper airway resistance.\n", "The vocal cords consist of five layers of cells:\n\nBULLET::::- Squamous epithelium\n\nBULLET::::- Superficial lamina propria (Reinke's space)\n\nBULLET::::- Intermediate lamina propria\n\nBULLET::::- Deep lamina propria\n\nBULLET::::- Vocalis muscle\n", "Increase EMG activity of the diaphragm 150%, increased activity of upper airway dilating muscles 250%, increased airflow and tidal volume 160% and decreased upper airway resistance.\n\nSection::::Normal.:Steady REM Sleep.\n\nSection::::Normal.:Steady REM Sleep.:Ventilation.\n", "James L. Brooks (\"Terms of Endearment\") relates his love for screenwriting: \"I never knew anybody who ever got a Writers Guild card who didn’t have a hard time when somebody said, 'What do you do for a living?' saying, 'I'm a writer.' Your—your voice always catches on 'a writer.' I think it takes about 14 years to not have the catch in your voice if you’re very aggressive. It takes longer if you're not. Because ... so many of us have dreamt about it forever as a dream that could not be realized.\"\n", "Unlike many desktop PCs, the Surface Studio supports Microsoft's Modern Standby (formerly known as InstantGo) specification, enabling background tasks to operate while the computer is sleeping. A firmware update was released in April 2017 that enabled Cortana to be summoned via a \"Hey, Cortana\" voice command from sleep, provided the Studio is running the Creators Update.\n\nSection::::Features.:Accessories.\n", "In July 2013 US Court of Appeals from the Fourth Circuit ruled that Risen must testify in the trial of Jeffrey Sterling. 
The court wrote \"so long as the subpoena is issued in good faith and is based on a legitimate need of law enforcement, the government need not make any special showing to obtain evidence of criminal conduct from a reporter in a criminal proceeding.\" Judge Roger Gregory dissented, writing \"The majority exalts the interests of the government while unduly trampling those of the press, and in doing so, severely impinges on the press and the free flow of information in our society.\"\n", "Section::::The transition.\n\nSection::::The transition.:Intermediate and deep layers of the lamina propria.\n\nThe intermediate layer of the lamina propria is primarily made up of elastic fibers while the deep layer of the lamina propria is primarily made up of collagenous fibers. These fibers run roughly parallel to the vocal fold edge and these two layers of the lamina propria comprise the vocal ligament. The transition layer is primarily structural, giving the vocal fold support as well as providing adhesion between the mucosa, or cover, and the body, the thyroarytenoid muscle.\n\nSection::::The body.\n\nSection::::The body.:The thyroarytenoid muscle.\n", "Section::::LSVT – BIG.\n", "The facial bones begin to grow as well. Cavities in the sinuses, the nose, and the back of the throat grow bigger, thus creating more space within the head to allow the voice to resonate. Occasionally, voice change is accompanied by unsteadiness of vocalization in the early stages of untrained voices. Due to the significant drop in pitch to the vocal range, people may unintentionally speak in head voice or even strain their voices using pitches which were previously chest voice, the lowest part of the modal voice register.\n\nSection::::History.\n", "Intercostal muscle activity decreases in REM sleep and contribution of rib cage to respiration decreases during REM sleep. This is due to REM related supraspinal inhibition of alpha motoneuron drive and specific depression of fusimotor function. Diaphraghmatic activity correspondingly increases during REM sleep. Although paradoxical thoracoabdominal movements are not observed, the thoracic and abdominal displacements are not exactly in phase. This decrease in intercostal muscle activity is primarily responsible for hypoventilation that occurs in patients with borderline pulmonary function.\n\nSection::::Normal.:Steady REM Sleep.:Upper airway function.\n", "In addition to the stretching of the vocal folds and the increasing tension on them as the pitch rises, the opposing surfaces of the folds, which may be brought into contact, becomes smaller and smaller, as the edges of the folds become thinner. The basic vibratory or phonatory pattern remains the same, with the whole vocal fold still involved in the action, but the vertical excursions are not as large and the rolling motion is not as apparent as it was on the lower pitches of the modal register.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03670
Why are cars seldom offered with diesel-hybrid powertrains?
There are a few reasons at play here. First and foremost, diesel engines and electric motors both deliver lots of bottom-end torque, whereas a petrol engine delivers its torque at the top end. Normally with a petrol hybrid the electric motor handles the low-end power and the petrol engine kicks in higher up. With a diesel hybrid you would end up with lots of bottom-end torque but nothing at the top end. There's also the problem that diesel engines are usually more expensive to manufacture. Bolt a few-thousand-pound electric motor onto one and you're making a much more expensive car, so the fuel savings have to be much greater to offset the increase in purchase price. Petrol cars also have the benefit of much lower emissions to begin with, which can be made even lower with the addition of an electric motor; a diesel engine with an electric motor may only be brought down to the emission levels of a very good petrol engine. There are a few diesel hybrids I know of offered here in the UK: Mercedes offer a Bluetec hybrid, and Citroen offer their e-HDi engines, a diesel hybrid used on the DS5 and maybe other models as well.
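To put the purchase-price argument in numbers, a hedged back-of-the-envelope example (all figures are invented for illustration and do not come from the answer above): suppose the diesel-hybrid drivetrain adds £3,000 to the price and saves about 1 litre of fuel per 100 km at roughly £1.50 per litre. The saving is then £1.50 per 100 km, so the break-even distance is

$$\frac{£3{,}000}{£1.50 / 100\,\text{km}} = 200{,}000\ \text{km},$$

far more than most private buyers drive before selling the car, which helps explain why the extra cost rarely pays for itself.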
[ "Diesel-electric HEVs use a diesel engine for power generation. Diesels have advantages when delivering constant power for long periods of time, suffering less wear while operating at higher efficiency. The diesel engine's high torque, combined with hybrid technology, may offer substantially improved mileage. Most diesel vehicles can use 100% pure biofuels (biodiesel), so they can use but do not need petroleum at all for fuel (although mixes of biofuel and petroleum are more common). If diesel-electric HEVs were in use, this benefit would likely also apply. Diesel-electric hybrid drivetrains have begun to appear in commercial vehicles (particularly buses); , no light duty diesel-electric hybrid passenger cars are available, although prototypes exist. Peugeot is expected to produce a diesel-electric hybrid version of its 308 in late 2008 for the European market.\n", "Ferdinand Porsche developed the Lohner-Porsche in 1901. But hybrid electric vehicles did not become widely available until the release of the Toyota Prius in Japan in 1997, followed by the Honda Insight in 1999.Initially, hybrid seemed unnecessary due to the low cost of gasoline. Worldwide increases in the price of petroleum caused many automakers to release hybrids in the late 2000s; they are now perceived as a core segment of the automotive market of the future.\n", "BULLET::::- Some vehicles have been modified to use another fuel source if it is available, such as cars modified to run on autogas (LPG) and diesels modified to run on waste vegetable oil that has not been processed into biodiesel.\n\nBULLET::::- Power-assist mechanisms for bicycles and other human-powered vehicles are also included (see Motorized bicycle).\n\nSection::::Engine type.:Fluid power hybrid.\n", "\" Gasoline is the main expense of the cars, practically on a weekly basis, or even more frequently if we use the car very often, we will have to go to our nearest gas station to refuel and make the consequent cost\".\n\nBULLET::::- Diesel\n", "For some users, this type of vehicle may also be financially attractive so long as the electrical energy being used is cheaper than the petrol/diesel that they would have otherwise used. Current tax systems in many European countries use mineral oil taxation as a major income source. This is generally not the case for electricity, which is taxed uniformly for the domestic customer, however that person uses it. Some electricity suppliers also offer price benefits for off-peak night users, which may further increase the attractiveness of the plug-in option for commuters and urban motorists.\n", "The Volvo V60 Plug-in Hybrid, the world's first diesel plug-in hybrid, was released in Sweden by late 2012. Deliveries in the rest of Europe started in 2013. 
Almost 8,000 units were sold in 2013.\n\nSection::::2013.\n", "BULLET::::- †Bedford CA\n\nBULLET::::- †Bedford CF\n\nBULLET::::- †Bedford Chevanne\n\nBULLET::::- Vauxhall Combo see Opel\n\nBULLET::::- Vauxhall Corsavan\n\nBULLET::::- †Vauxhall Astravan\n\nBULLET::::- †Vauxhall Rascal\n\nBULLET::::- Vauxhall Vivaro\n\nBULLET::::- Vauxhall Movano\n\nVolkswagen Commercial Vehicles\n\nBULLET::::- Volkswagen Caddy\n\nBULLET::::- Volkswagen Routan\n\nBULLET::::- †(T4) Transporter / Kombi / Caravelle / Eurovan / Mutlivan\n\nBULLET::::- (T5) Transporter / Eurovan / Kombi / Caravelle / Mutlivan\n\nBULLET::::- Volkswagen California\n\nBULLET::::- Volkswagen LT\n\nBULLET::::- Volkswagen Crafter\n\nBULLET::::- Volkswagen Type 2 (\"VW Bus\")\n\nThis is not a complete list\n\nSection::::Alternative propulsion.\n\nSince light trucks are often operated in city traffic, hybrid electric models are very useful:\n", "A total of 37,215 hybrids were registered in 2014, and while petrol-electric hybrids increased 32.6% from 2013, diesel-electric hybrids declined 12.6%. Hybrid registrations totaled a record of 44,580 units in 2015, consisting of 40,707 petrol-powered hybrids and 3,873 powered by diesel; the latter experienced a 36.3% increase from 2014, while petrol-powered hybrid grew by 18.1%. The hybrid segment market shared reached 1.69% of new car registrations in the UK that year.\n\nBULLET::::- France\n", "On the other hand, series hybrids, also been referred to as extended-range electric vehicles (EREV) or range-extended electric vehicles (REEV), are designed to be run mostly by the battery, but have a gasoline or diesel generator to recharge the battery when going on long trips. The Chevrolet Volt, Fisker Karma and the upcoming Cadillac ELR are series plug-in hybrids.\n\nBULLET::::- Chevrolet Volt\n", "In the automobile industry, diesel engines in combination with electric transmissions and battery power are being developed for future vehicle drive systems. Partnership for a New Generation of Vehicles was a cooperative research program between the U.S. 
government and \"The Big Three\" automobile manufacturers (DaimlerChrysler, Ford Motor Company, and General Motors Corporation) that developed diesel hybrid cars.\n\nBULLET::::- \"Third-Millennium Cruiser\", an attempt to commercialize a diesel–electric automobile in the very early 1980s.\n\nBULLET::::- General Motors Precept\n\nBULLET::::- Ford Prodigy\n\nBULLET::::- Dodge Intrepid ESX\n\nBULLET::::- Ford Reflex is a diesel hybrid concept car.\n\nBULLET::::- Zytek develops a diesel hybrid powertrain\n", "BULLET::::- Chevrolet Spark EV (limited production)\n\nBULLET::::- Fiat 500e (limited production)\n\nBULLET::::- Porsche Panamera S E-Hybrid\n\nBULLET::::- Cadillac ELR (limited production)\n\nBULLET::::- 2014\n\nBULLET::::- BMW i3\n\nBULLET::::- Porsche 918 Spyder (limited edition)\n\nBULLET::::- Mercedes-Benz B-Class Electric Drive\n\nBULLET::::- BMW i8\n\nBULLET::::- Volkswagen e-Golf\n\nBULLET::::- Kia Soul EV\n\nBULLET::::- Porsche Cayenne S E-Hybrid\n\nBULLET::::- 2015\n\nBULLET::::- Mercedes-Benz S 500 Plug-in Hybrid\n\nBULLET::::- Volvo XC 90 PHEV\n\nBULLET::::- Tesla Model X\n\nBULLET::::- Bolloré Bluecar (available only for the BlueIndy carsharing fleet)\n\nBULLET::::- Chevrolet Volt (second generation) (production ended in 2019)\n\nBULLET::::- BMW X5 xDrive40e\n\nBULLET::::- Hyundai Sonata PHEV\n", "BULLET::::- Honda Civic Hybrid CVT transmission models only, AT-PZEV available in certain states\n\nBULLET::::- Honda Civic GX Natural Gas\n\nBULLET::::- Honda CR-Z (AT-PZEV)\n\nBULLET::::- Toyota Prius\n\nBULLET::::- Ford Focus SULEV\n\nBULLET::::- BMW SULEV 128i, 328i, 325i, 325Ci, and 325iT\n\nBULLET::::- Subaru PZEV Vehicles beginning with 2008 year models including Forester, Outback, Impreza and Legacy\n\nBULLET::::- Chevrolet Volt\n\nBULLET::::- Hyundai Elantra\n\nBULLET::::- Lexus CT200h\n\nBULLET::::- Honda Clarity PHEV 2018 - LEV3-SULEV20\n\nBULLET::::- Kia Forte\n\nBULLET::::- Volkswagen Jetta\n\nBULLET::::- Mini Cooper Hardtop 4-Door\n\nBULLET::::- Toyota RAV4 Hybrid\n\nBULLET::::- Pontiac Grand Prix, 3800 V6 equipped vehicles beginning with the 2006 model year\n", "Other hybrids released in the U.S. during 2012 are the Audi Q5 Hybrid, BMW 5 Series ActiveHybrid, BMW 3 series Hybrid, Ford C-Max Hybrid, Acura ILX Hybrid. Also during 2012 were released the next generation of Toyota Camry Hybrid and the Ford Fusion Hybrid, both of which offer significantly improved fuel economy in comparison with their previous generations. The 2013 models of the Toyota Avalon Hybrid and the Volkswagen Jetta Hybrid were released in the U.S. in December 2012.\n", "Section::::Engine type.:Fluid power hybrid.:Petro-hydraulic hybrid.\n\nPetro-hydraulic configurations have been common in trains and heavy vehicles for decades. The auto industry recently focused on this hybrid configuration as it now shows promise for introduction into smaller vehicles.\n", "Robert Bosch GmbH is supplying hybrid diesel-electric technology to diverse automakers and models, including the Peugeot 308.\n\nSo far, production diesel-electric engines have mostly appeared in mass transit buses.\n\nFedEx, along with Eaton Corp. 
in the US and Iveco in Europe, has begun deploying a small fleet of Hybrid diesel electric delivery trucks.\n\nAs of October 2007, Fedex operates more than 100 diesel electric hybrids in North America, Asia and Europe.\n\nBULLET::::- Liquefied petroleum gas\n\nHyundai introduced in 2009 the Hyundai Elantra LPI Hybrid, which is the first mass production hybrid electric vehicle to run on liquefied petroleum gas (LPG).\n", "Gasoline engines are used in most hybrid electric designs and will likely remain dominant for the foreseeable future. While petroleum-derived gasoline is the primary fuel, it is possible to mix in varying levels of ethanol created from renewable energy sources. Like most modern ICE powered vehicles, HEVs can typically use up to about 15% bioethanol. Manufacturers may move to flexible fuel engines, which would increase allowable ratios, but no plans are in place at present.\n", "BULLET::::- 1917: Woods Dual Power Car had a driveline similar to the current GMC/Chevrolet Silverado hybrid pickup truck.\n\nSection::::Automobiles.:2014 and beyond.\n\nBULLET::::- Land Rover Range Rover Hybrid concept, diesel-electric engine (under development) in conjunction with new aluminum body\n\nBULLET::::- Fiat Nuova 500 Hybrid\n\nBULLET::::- Volvo announced the launching of series production of diesel-ey 2012.\n\nBULLET::::- Cadillac SRX Plug-In\n\nBULLET::::- Toyota announced the launching of RAV4 Hybrid as 2012 model in the spring of 2012.\n\nBULLET::::- Porsche Panamera Plug-In\n\nBULLET::::- Ё-mobile (Russia).\n\nSection::::Automobiles.:Unknown Date.\n\nBULLET::::- Nissan Altima Hybrid 2nd generation, 2014 model; 2nd gen drivetrain 100% Nissan\n", "The plug-in-electric-vehicle (PEV) is becoming more and more common. It has the range needed in locations where there are wide gaps with no services. The batteries can be plugged into house (mains) electricity for charging, as well being charged while the engine is running.\n\nSection::::Engine type.:Continuously outboard recharged electric vehicle (COREV).\n", "BULLET::::- 2014\n", ", over 25 models of highway-capable plug-in hybrids have been launched in several markets since December 2008, including the BYD F3DM (out of production), the Chevrolet Volt and its siblings Opel/Vauxhall Ampera and Holden Volt, Prius Plug-in Hybrid (out of production), Fisker Karma (out of production), Ford C-Max Energi, Volvo V60 Plug-in Hybrid, Honda Accord Plug-in Hybrid (out of production), Mitsubishi Outlander P-HEV, Ford Fusion Energi, McLaren P1 (limited production), Porsche Panamera S E-Hybrid, Cadillac ELR, BYD Qin, Volkswagen XL1 (limited production), BMW i8, Porsche Cayenne S E-Hybrid, Volkswagen Golf GTE, Audi A3 e-tron, Porsche 918 Spyder (limited edition), Mercedes-Benz S 500 Plug-in Hybrid, SAIC Roewe 550 PHEV, Mercedes-Benz C 350e Plug-in Hybrid, Volvo S60L PHEV, BYD Tang, Volkswagen Passat GTE, Volvo XC90 T8, BMW X5 xDrive40e and Hyundai Sonata PHEV.\n", "Section::::Advantages and disadvantages.\n\nCompared to a full hybrid vehicle, however, mild hybrids may provide some of the benefits of the application of hybrid technologies, with less of the cost–weight penalty that is incurred by installing a full hybrid series-parallel drivetrain. Fuel savings would generally be lower than expected with use of a full hybrid design, as the design does not facilitate high levels of regenerative braking or necessarily promote the use of smaller, lighter, more efficient internal combustion engines. 
\n\nSection::::Examples.\n\nSection::::Examples.:General Motors.\n", "During 2012, the Toyota Prius Plug-in Hybrid, Ford C-Max Energi, and Volvo V60 Plug-in Hybrid were released. The following models were launched during 2013 and 2015: Honda Accord Plug-in Hybrid, Mitsubishi Outlander P-HEV, Ford Fusion Energi, McLaren P1 (limited edition), Porsche Panamera S E-Hybrid, BYD Qin, Cadillac ELR, BMW i3 REx, BMW i8, Porsche 918 Spyder (limited production), Volkswagen XL1 (limited production), Audi A3 Sportback e-tron, Volkswagen Golf GTE, Mercedes-Benz S 500 e, Porsche Cayenne S E-Hybrid, Mercedes-Benz C 350 e, BYD Tang, Volkswagen Passat GTE, Volvo XC90 T8, BMW X5 xDrive40e, Hyundai Sonata PHEV, and Volvo S60L PHEV.\n", "BULLET::::- Audi A3 Sportback e-tron\n\nBULLET::::- 2016\n\nBULLET::::- BMW 330e iPerformance\n\nBULLET::::- Mercedes-Benz GLE 550e Plug-in Hybrid\n\nBULLET::::- Toyota Prius Prime (second generation Prius PHEV)\n\nBULLET::::- Chevrolet Bolt EV\n\nBULLET::::- BMW 740e iPerformance\n\nBULLET::::- Mercedes-Benz C 350e Plug-in Hybrid\n\nBULLET::::- 2017\n\nBULLET::::- Chrysler Pacifica Hybrid\n\nBULLET::::- BMW 530e iPerformance\n\nBULLET::::- Tesla Model 3\n\nBULLET::::- Kia Optima PHEV\n\nBULLET::::- Honda Clarity Electric\n\nBULLET::::- Honda Clarity Plug-in Hybrid\n\nBULLET::::- Volvo XC60 Plug-in Hybrid\n\nBULLET::::- Mini Cooper S E ALL4\n\nBULLET::::- Hyundai Ioniq Electric\n\nBULLET::::- Cadillac CT6 Plug-in Hybrid\n\nBULLET::::- Volvo S90 T8 Plug-in Hybrid\n\nBULLET::::- Mitsubishi Outlander P-HEV\n", "BULLET::::- BYD Qin plug-in hybrid\n\nBULLET::::- Audi A1\n\nBULLET::::- Toyota Sienna Hybrid (3rd generation)\n\nBULLET::::- Peugeot 307 CC Hybride HDi, produced in very small number of units, ceased production in 2007.\n\nBULLET::::- Cadillac Urban Luxury Concept\n\nBULLET::::- Daihatsu Hijet Cargo Hybrid a commercial microvan (659 cc) (in Japan, not yet in production)\n\nBULLET::::- Ford hybrid car\n\nBULLET::::- Honda S2000/Honda S3000\n\nBULLET::::- Honda HSV-010 - supposedly the successor for Honda NSX.\n\nBULLET::::- Hyundai Accent Unknown date of production\n\nBULLET::::- Kia Rio Originally for 2007, now delayed along with Hyundai Accent hybrid \"(concept model was shown at the 2007 Geneva Auto Show)\"\n", "General Motors began deliveries of the Chevrolet Volt in the United States in December 2010, and its sibling, the Opel Ampera, was released in Europe by early 2012. , other plug-in hybrids available in several markets were the Fisker Karma, Toyota Prius Plug-in Hybrid and Ford C-Max Energi.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-02362
Why do magnets repel water?
Well, yes and no. Water is slightly diamagnetic. Wikipedia article about diamagnetism: URL_1 Video experiment about magnets repelling water: URL_0
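For a sense of how weak the repulsion is, a hedged order-of-magnitude estimate (the susceptibility and field values are assumed typical figures, not taken from the answer or its links): water's volume susceptibility is about $\chi \approx -9\times10^{-6}$, so the depression of the water surface above a strong magnet with $B \approx 1\ \text{T}$ is roughly

$$\Delta h \approx \frac{|\chi|\,B^{2}}{2\mu_{0}\,\rho g} \approx \frac{9\times10^{-6}\times 1}{2\times 4\pi\times10^{-7}\times 1000 \times 9.8} \approx 4\times10^{-4}\ \text{m},$$

i.e. a dimple of only about 0.4 mm, which is why the effect is invisible with everyday magnets.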
[ "There are related non-chemical devices based on a variety of physical phenomena which have been marketed for over 50 years with similar claims of scale inhibition. Whilst some are effective, such as electrolytic devices, most do not work.\n\nBULLET::::- Electrolysis: \"Electrolytic scale inhibitors\" - two metals such as copper and zinc are used\n\nBULLET::::- Electrostatic: \"Electronic water conditioners\"\n\nBULLET::::- Electromagnetic: fluctuating electromagnetic fields are created\n\nBULLET::::- Catalytic\n\nBULLET::::- Mechanical\n\nBULLET::::- Other devices combine these different methods\n\nOther uses of magnetic devices:\n", "Section::::Claimed mechanisms of action.:Changes to water structure.\n\nSome magnetic products claim that they \"change the molecular structure of water\", a pseudoscientific claim with no real scientific basis. There is no such thing as \"magnetized water\". Water is not paramagnetic, so its water molecules do not align in the presence of a magnetic field. Water is weakly diamagnetic (so it is repelled by magnets), but only to an extent so small that it is undetectable to most instruments.\n\nSection::::Claimed mechanisms of action.:Special detergent.\n", "If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by its reflection.\n\nSection::::Demonstrations.:Levitation.\n", "Magnetic water treatment\n\nMagnetic water treatment (also known as anti-scale magnetic treatment or AMT) is a method of supposedly reducing the effects of hard water by passing it through a magnetic field as a non-chemical alternative to water softening. Magnetic water treatment is regarded as unproven and unscientific.\n\nThere is a lack of peer-reviewed laboratory data, mechanistic explanations, and documented field studies to support its effectiveness. Erroneous conclusions about their efficacy are based on applications with uncontrolled variables. There are, however, some studies which have claimed significant effects and proposed possible mechanisms for the observed decrease in water scale.\n\nSection::::Effectiveness.\n", "Vendors of magnetic water treatment devices frequently use pictures and testimonials to support their claims, but omit quantitative detail and well-controlled studies. Advertisements and promotions generally omit system variables, such as corrosion or system mass balance analyticals, as well as measurements of post-treatment water such as concentration of hardness ions or the distribution, structure, and morphology of suspended particles.\n\nSection::::Hypothesized mechanisms.\n", "In 2008, the Department of Primary Industries and Fisheries (DPI&F) and James Cook University, Australia, reported success with permanent magnets in captive studies with grey reef sharks, hammerheads, sharp-nosed sharks, blacktip sharks, sawfish and the critically endangered spear tooth shark.\n", "Digital magnetofluidics\n\nDigital magnetofluidics is a method for moving, combining, splitting, and controlling drops of water or biological fluids using magnetic fields. This is accomplished by adding superparamagnetic particles to a drop placed on a superhydrophobic surface. Normally this type of surface would exhibit a lotus effect and the drop of water would roll or slide off. 
But by using magnetic fields, the drop is stabilized and its movements and structure can be controlled.\n", "An area of current research in FO involves direct removal of draw solutes, in this case by means of a magnetic field. Small (nanoscale) magnetic particles are suspended in solution creating osmotic pressures sufficient for the separation of water from a dilute feed. Once the draw solution containing these particles has been diluted by the FO water flux, they may be separated from that solution by use of a magnet (either against the side of a hydration bag, or around a pipe in-line in a steady state process).\n", "This phenomenon was first discovered by Dorn in 1879. He observed that a vertical electric field had developed in a suspension of glass beads in water, as the beads were settling. This was the origin of sedimentation potential, which is often referred to as the Dorn effect.\n", "where \"χ\" and \"ρ\" are the magnetic susceptibility and density of the liquid respectively, B is the magnetic field, \"g\" is the gravity acceleration, and \"μ\" is the magnetic permittivity of vacuum. Actually, the shape of the near surface well depends also on the surface tension of the liquid. The Moses effect enables trapping of floating diamagnetic particles and formation of micro-patterns. The application of a magnetic field (\"B\"≅0.5 T) on diamagnetic liquid/vapor interfaces enables the driving of floating diamagnetic bodies and soap bubbles.\n", "Similar claims for magnetic water treatment are not considered to be valid. For instance, no reduction of scale formation was found when such a magnet device was scientifically tested.\n\nSection::::Health effects.\n\nThe CDC recommends limiting daily total sodium intake to 2,300 mg per day, though the average American consumes 3,500 mg per day. Because the amount of sodium present in drinking water—even after softening—does not represent a significant percentage of a person's daily sodium intake, the EPA considers sodium in drinking water to be unlikely to cause adverse health effects.\n", "Section::::Microfluidics.:Biomedical applications.\n", "MR fluid is different from a ferrofluid which has smaller particles. MR fluid particles are primarily on the micrometre-scale and are too dense for Brownian motion to keep them suspended (in the lower density carrier fluid). Ferrofluid particles are primarily nanoparticles that are suspended by Brownian motion and generally will not settle under normal conditions. 
As a result, these two fluids have very different applications.\n\nSection::::How it works.\n\nThe magnetic particles, which are typically micrometer or nanometer scale spheres or ellipsoids, are suspended within the carrier oil and distributed randomly in suspension under normal circumstances, as below.\n", "BULLET::::- Settling of ferro-particles can be a problem for some applications.\n\nCommercial applications do exist, as mentioned, but will continue to be few until these problems (particularly cost) are overcome.\n\nSection::::Advances in the 2000s.\n", "Thanks to the easy separation by applying a magnetic field and the very large surface to volume ratio, magnetic nanoparticles have a potential for treatment of contaminated water.\n", "When a magnetic field is applied, however, the microscopic particles (usually in the 0.1–10 µm range) align themselves along the lines of magnetic flux, see below.\n\nSection::::Material behavior.\n\nTo understand and predict the behavior of the MR fluid it is necessary to model the fluid mathematically, a task slightly complicated by the varying material properties (such as yield stress). \n", "Magnetic water softeners claim that their magnetic fields can help remove scale from the washing machine and pipes, and prevent new limescale from adhering. Some companies claim to remove hardness ions from hard water, or to precipitate the molecules in the water so they won't \"stick\" to the pipes, or to reduce the surface tension of water. The claims are dubious, the scientific basis is unclear, the working mechanism is vaguely defined and understudied, and high-quality studies report negative results. The reputation of these products is further damaged by the pseudoscientific explanations that promoters keep putting forward.\n", "Section::::Microfluidics.:Devices.\n", "The set of parameters used to model water or aqueous solutions (basically a force field for water) is called a water model. Water has attracted a great deal of attention due to its unusual properties and its importance as a solvent. Many water models have been proposed; some examples are TIP3P, TIP4P, SPC, flexible simple point charge water model (flexible SPC), ST2, and mW.\n\nSection::::Popular force fields.:Post-translational modifications and unnatural amino acids.\n", "Another method using NMR techniques measures the magnetic field distortion around a sample immersed in water inside an MR scanner. This method is highly accurate for diamagnetic materials with susceptibilities similar to water.\n\nSection::::Tensor susceptibility.\n\nThe magnetic susceptibility of most crystals is not a scalar quantity. Magnetic response is dependent upon the orientation of the sample and can occur in directions other than that of the applied field . In these cases, volume susceptibility is defined as a tensor\n", "So, when a liquid metal moves across magnetic field lines, the interaction of the magnetic field (which are either produced by a current-carrying coil or by a permanent magnet) with the induced eddy currents leads to a Lorentz force (with density formula_1) which brakes the flow. The Lorentz force density is roughly\n", "The first instance it was considered to initialise the electrolysis of water was from the perspective of magnetolysis in 1985, where high strength magnets, or in this case electromagnets, are used in conjunction with homopolar propellers. 
Ghoroghichian and Bockris conducted this experimental research to determine how a pulsed current can impact the rate of hydrogen production and provide economic advantages. A current density ratio of 2.07 was observed, demonstrating, for the first time, that a pulsed current can double the production of hydrogen, in comparison to a steady state current.\n", "Section::::Modes of operation and applications.\n\nAn MR fluid is used in one of three main modes of operation, these being flow mode, shear mode and squeeze-flow mode. These modes involve, respectively, fluid flowing as a result of pressure gradient between two stationary plates; fluid between two plates moving relative to one another; and fluid between two plates moving in the direction perpendicular to their planes. In all cases the magnetic field is perpendicular to the planes of the plates, so as to restrict fluid in the direction parallel to the plates.\n\nSection::::Modes of operation and applications.:Squeeze-flow mode.\n", "The induction model would only apply to marine animals because as a surrounding medium with high conductivity only salt water is feasible. evidence for this model has been lacking.\n", "Industrial MHD problems can be modeled using the open-source software EOF-Library. Two simulation examples are 3D MHD with a free surface for electromagnetic levitation melting, and liquid metal stirring by rotating permanent magnets.\n\nSection::::Applications.:Magnetic drug targeting.\n" ]
[ "Magnets repel water. " ]
[ "Magnets don't always repel water. " ]
[ "false presupposition" ]
[ "Magnets repel water. ", "Magnets repel water. " ]
[ "normal", "false presupposition" ]
[ "Magnets don't always repel water. ", "Magnets don't always repel water. " ]
2018-15654
After seeing a cross section of tree rings, I’m wondering how trees produce new layers outward from the center?
The living part is the outer layer under the bark. Think of it as one layer that keeps growing larger, leaving the "dead" layers behind as it expands outward.
[ "Ground layering or mound layering is the typical propagation technique for the popular Malling-Merton series of clonal apple rootstocks, in which the original plants are set in the ground with the stem nearly horizontal, which forces side buds to grow upward. After these are started, the original stem is buried up to some distance from the tip. At the end of the growing season, the side branches will have rooted, and can be separated while the plant is dormant. Some of these will be used for grafting rootstocks, and some can be reused in the nursery for the next growing season's crop.\n", "As the stem ages and grows, changes occur that transform the surface of the stem into the bark. The epidermis is a layer of cells that cover the plant body, including the stems, leaves, flowers and fruits, that protects the plant from the outside world. In old stems the epidermal layer, cortex, and primary phloem become separated from the inner tissues by thicker formations of cork. Due to the thickening cork layer these cells die because they do not receive water and nutrients. This dead layer is the rough corky bark that forms around tree trunks and other stems.\n\nSection::::Periderm.\n", "BULLET::::- \"Canopy structure\" – individuals located in the understory in a stand with multiple vertically-stratified stories will have a lessened amount of total stemflow due to the interception of dominant and codominant individuals\n\nOther\n\nBULLET::::- \"Seasonality\" – in the case of deciduous or mixed forests, stemflow rates are \"slightly\" higher in the dormant season when no leaves are present and evapotranspiration is reduced; this effect becomes more pronounced as the stem diameter increases\n\nBULLET::::- \"Diurnality\" – variations in branch weight influence the amount of stemflow; branches are heavier in the morning (with dew) and lighter in the afternoon\n", "Within the periderm are lenticels, which form during the production of the first periderm layer. Since there are living cells within the cambium layers that need to exchange gases during metabolism, these lenticels, because they have numerous intercellular spaces, allow gaseous exchange with the outside atmosphere. As the bark develops, new lenticels are formed within the cracks of the cork layers.\n\nSection::::Rhytidome.\n", "Ground layering is used in the formation of visible surface roots, known as \"nebari\", on bonsai trees. \n\nSection::::Air layering.\n", "Section::::Chronology of notable practitioners.:Nirandr Boonnetr.\n", "BULLET::::- Reduction reduces the size of a tree, often for clearance for utility lines. Reducing the height or spread of a tree is best accomplished by pruning back the leaders and branch terminals to lateral branches that are large enough to assume the terminal roles (at least one-third the diameter of the cut stem). Compared to topping, reduction helps maintain the form and structural integrity of the tree.\n", "Because this growth usually ruptures the epidermis of the stem or roots, plants with secondary growth usually also develop a cork cambium. The cork cambium gives rise to thickened cork cells to protect the surface of the plant and reduce water loss. If this is kept up over many years, this process may produce a layer of cork. In the case of the cork oak it will yield harvestable cork.\n\nSection::::In nonwoody plants.\n\nSecondary growth also occurs in many nonwoody plants, e.g. tomato, potato tuber, carrot taproot and sweet potato tuberous root. 
A few long-lived leaves also have secondary growth.\n", "From the outside to the inside of a mature woody stem, the layers include:\n\nBULLET::::1. Bark\n\nBULLET::::1. Periderm\n\nBULLET::::1. Cork (phellem or suber), includes the rhytidome\n\nBULLET::::2. Cork cambium (phellogen)\n\nBULLET::::3. Phelloderm\n\nBULLET::::2. Cortex\n\nBULLET::::3. Phloem\n\nBULLET::::2. Vascular cambium\n\nBULLET::::3. Wood (xylem)\n\nBULLET::::1. Sapwood (alburnum)\n\nBULLET::::2. Heartwood (duramen)\n\nBULLET::::4. Pith (medulla)\n\nIn young stems, which lack what is commonly called bark, the tissues are, from the outside to the inside:\n\nBULLET::::1. Epidermis, which may be replaced by periderm\n\nBULLET::::2. Cortex\n\nBULLET::::3. Primary and secondary phloem\n\nBULLET::::4. Vascular cambium\n\nBULLET::::5. Secondary and primary xylem.\n", "Section::::Chronology of notable practitioners.:Arthur Wiechula.\n", "Section::::Chronology of notable practitioners.:Dan Ladd.\n", "In many vascular plants, secondary growth is the result of the activity of the two lateral meristems, the cork cambium and vascular cambium. Arising from \"lateral\" meristems, secondary growth increases the girth of the plant root or stem, rather than its length. As long as the lateral meristems continue to produce new cells, the stem or root will continue to grow in diameter. In woody plants, this process produces wood, and shapes the plant into a tree with a thickened trunk.\n", "BULLET::::- \"Branch angle\" – stemflow potential heightens as the angle of the branches and twigs increases\n\nBULLET::::- \"Flow path obstructions\" – abnormalities on the flow path, such as detached pieces of bark or scars, on the underside of the branch can divert water from stemflow and become a component in throughfall\n\nBULLET::::- \"Bark\" – stemflow is affected by the degree of absorptive ability and smoothness of the bark alongside the branch and stem\n\nStand Characteristics\n", "During fieldwork in 1878, De Geer noticed that the appearance of laminated sediments deposited in glacial lakes at the margin of the retreating Scandinavian ice sheet at the end of the last ice age, closely resembled tree-rings. In his best known work \"Geochronologia Sueccia\", published in 1940, De Geer wrote \"From the obvious similarity with the regular, annual rings of the trees I got at once the impression that both ought to be annual deposits\" (1940, p. 13).\n", "The rhytidome is the most familiar part of bark, being the outer layer that covers the trunks of trees. It is composed mostly of dead cells and is produced by the formation of multiple layers of suberized periderm, cortical and phloem tissue. The rhytidome is especially well developed in older stems and roots of trees. In shrubs, older bark is quickly exfoliated and thick rhytidome accumulates. It is generally thickest and most distinctive at the trunk or bole (the area from the ground to where the main branching starts) of the tree.\n\nSection::::Chemical composition.\n", "A low-growing stem is bent down to touch a hole dug in the ground, then pinned in place using something shaped like a clothes hanger hook and covered over with soil. However, a few inches of leafy growth must remain above the ground for the bent stem to grow into a new plant. Removing a section of skin from the lower-facing stem part before burying may help the rooting process. If using rooting hormone, the stem should be cut just beneath a node. 
The resultant notch should be wedged open with a toothpick or similar piece of wood and the hormone applied before burying.\n", "Horizontal cross sections cut through the trunk of a tree can reveal growth rings, also referred to as \"tree rings\" or \"annual rings\". Growth rings result from new growth in the vascular cambium, a layer of cells near the bark that botanists classify as a lateral meristem; this growth in diameter is known as secondary growth. Visible rings result from the change in growth speed through the seasons of the year; thus, critical for the title method, one ring generally marks the passage of one year in the life of the tree. Removal of the bark of the tree in a particular area may cause deformation of the rings as the plant overgrows the scar.\n", "Most woody plants native to colder climates have distinct growth rings produced by each year's production of new vascular tissue. Only the outer handful of rings contain living tissue (the cambium, xylem, pholem, and sapwood). Inner layers have heartwood, dead tissue that serves merely as structural support.\n", "After a disturbance, there are several ways in which regeneration can occur. One way, termed the advance regeneration pathway, is when the primary understory already contains seedlings and saplings. This method is most common in the Neotropics when faced when small scale disturbances. The next pathway is from tree remains, or any growth from bases or roots, and is common in small disturbance gaps. The third route is referred to as the soil seed bank, and is the result of germination of seeds already found in the soil. The final regeneration pathway is the arrival of new seeds via animal dispersal or wind movement. The most critical components of the regeneration are seed distribution, germination, and survival.\n", "The horticultural layering process typically involves wounding the target region to expose the inner stem and optionally applying rooting compounds. In ground layering or simple layering, the stem is bent down and the target region buried in the soil. This is done in plant nurseries in imitation of natural layering by many plants such as brambles which bow over and touch the tip on the ground, at which point it grows roots and, when separated, can continue as a separate plant. In either case, the rooting process may take from several weeks to a year.\n", "Like trees and woody plants, perennial herbs have a growth zone called vascular cambium between the root bark and the root xylem. The vascular cambium ring is active during growing season and produces a new layer of xylem tissue or growth ring every year. This addition of a new lateral layer each year is called secondary growth and is exactly the same as in woody plants. Each individual growth ring consists of earlywood tissue that is formed at the beginning of the growing season and latewood tissue formed in summer and fall. Earlywood tissue is characterized by wide vessels or denser arrangement of vessels, whereas latewood tissue shows narrower vessels and/or lower vessel density.\n", "The effect of rate of growth on the qualities of chestnut wood is summarized by the same authority as follows:\n\nSection::::Physical properties.:Earlywood and latewood.:In diffuse-porous woods.\n\nIn the diffuse-porous woods, the demarcation between rings is not always so clear and in some cases is almost (if not entirely) invisible to the unaided eye. 
Conversely, when there is a clear demarcation there may not be a noticeable difference in structure within the growth ring.\n", "This depth will increase slowly as elements are added to the tree, but an increase in the overall depth is infrequent, and results in all leaf nodes being one more node farther away from the root.\n", "Often a secondary covering called the periderm forms on small woody stems and many non-woody plants, which is composed of cork (phellem), the cork cambium (phellogen), and the phelloderm. The periderm forms from the phellogen which serves as a lateral meristem. The periderm replaces the epidermis, and acts as a protective covering like the epidermis. Mature phellem cells have suberin in their walls to protect the stem from desiccation and pathogen attack. Older phellem cells are dead, as is the case with woody stems. The skin on the potato tuber (which is an underground stem) constitutes the cork of the periderm.\n", "Section::::Applications.:Forestry.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-14141
Why does the body only produce vit D from the sun and not from other light sources like candles or lamps?
Vitamin D3 photosynthesis requires UV light. Since UV light is harmful to humans, most lamps are either designed not to emit it or come equipped with filters, and candles don't get hot enough to emit a significant amount of UV. In other words, it doesn't work with other light sources because those sources don't emit the right wavelengths.
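A short numerical check of the "right wavelengths" point (the 290–315 nm range comes from the passages below; the photon-energy arithmetic is added here for illustration): a UVB photon at 297 nm carries

$$E = \frac{hc}{\lambda} = \frac{6.63\times10^{-34}\times 3.0\times10^{8}}{297\times10^{-9}}\ \text{J} \approx 6.7\times10^{-19}\ \text{J} \approx 4.2\ \text{eV},$$

while visible-light photons from a candle or an ordinary lamp carry only about 1.6–3.1 eV, which is too little energy to drive the photochemical step that converts 7-dehydrocholesterol in the skin into previtamin D3, no matter how bright the source is.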
[ "When the skin is exposed to UV-B light, cholesterol in the skin is transformed into vitamin D3. In general the skin does not need much UV-B energy to generate vitamin D3, and 15 minutes of strong sunshine every day is usually considered enough.\n\nIn Northern European countries especially in the winter when sunlight is scarce, pregnant women may receive UVB light in clinics to assure that their babies have an adequate amount of vitamin D3 when born.\n\nAnimals need UV-B light to produce vitamin D3 and strong bones. \n", "The active UVB wavelengths are present in sunlight, and sufficient amounts of cholecalciferol can be produced with moderate exposure of the skin, depending on the strength of the sun. Time of day, season, and altitude affect the strength of the sun, and pollution, cloud cover or glass all reduce the amount of UVB exposure. Exposure of face, arms and legs, averaging 5–30 minutes twice per week, may be sufficient, but the darker the skin, and the weaker the sunlight, the more minutes of exposure are needed. Vitamin D overdose is impossible from UV exposure; the skin reaches an equilibrium where the vitamin degrades as fast as it is created.\n", "Vitamin D is produced photochemically from 7-dehydrocholesterol in the skin of most vertebrate animals, including humans. The precursor of vitamin D, 7-dehydrocholesterol is produced in relatively large quantities. 7-Dehydrocholesterol reacts with UVB light at wavelengths of 290–315 nm. These wavelengths are present in sunlight, as well as in the light emitted by the UV lamps in tanning beds (which produce ultraviolet primarily in the UVA spectrum, but typically produce 4% to 10% of the total UV emissions as UVB). Exposure to light through windows is insufficient because glass almost completely blocks UVB light.\n", "Type II Photosensitivity is caused by inborn errors in the metabolism of certain biological pigments. In the absence of some key metabolic enzymes, the products of intermediary metabolism accumulate. They are either eliminated through the urine and body fluids or are deposited in some body tissue, such as bone and teeth. A common condition seen in animals is congenital porphyria due to the accumulation of Uroporphyrin, which is deposited in the teeth and bones, giving them a pink discolouration, or excreted through the urine, exhibiting a pinkish fluorescence under ultraviolet light.\n\nSection::::Classification of photosensitivity reactions.:Type III Photosensitivity.\n", "There are many sources of light. A body at a given temperature emits a characteristic spectrum of black-body radiation. A simple thermal source is sunlight, the radiation emitted by the chromosphere of the Sun at around peaks in the visible region of the electromagnetic spectrum when plotted in wavelength units and roughly 44% of sunlight energy that reaches the ground is visible. Another example is incandescent light bulbs, which emit only around 10% of their energy as visible light and the remainder as infrared. A common thermal light source in history is the glowing solid particles in flames, but these also emit most of their radiation in the infrared, and only a fraction in the visible spectrum.\n", "The sun's rays can be used to produce electrical energy. The direct user of sunlight is the solar cell or photovoltaic cell, which converts sunlight directly into electrical energy without the incorporation of a mechanical device. This technology is simpler than the fossil-fuel-driven systems of producing electrical energy. 
A solar cell is formed by a light-sensitive p-n junction semiconductor, which when exposed to sunlight is excited to conduction by the photons in light. When light, in the form of photons, hits the cell and strikes an atom, photo-ionisation creates electron-hole pairs. The electrostatic field causes separation of these pairs, establishing an electromotive force in the process. The electric field sends the electron to the p-type material, and the hole to the n-type material. If an external current path is provided, electrical energy will be available to do work. The electron flow provides the current, and the cell's electric field creates the voltage. With both current and voltage the silicon cell has power. The greater the amount of light falling on the cell's surface, the greater is the probability of photons releasing electrons, and hence more electric energy is produced.\n", "Section::::Artificial sources.:Incandescent lamps.\n\n'Black light' incandescent lamps are also made, from an incandescent light bulb with a filter coating which absorbs most visible light. Halogen lamps with fused quartz envelopes are used as inexpensive UV light sources in the near UV range, from 400 to 300 nm, in some scientific instruments. Due to its black-body spectrum a filament light bulb is a very inefficient ultraviolet source, emitting only a fraction of a percent of its energy as UV.\n\nSection::::Artificial sources.:Gas-discharge lamps.\n", "The evolution of early reproductive proteins and enzymes is attributed in modern models of evolutionary theory to ultraviolet radiation. UVB causes thymine base pairs next to each other in genetic sequences to bond together into thymine dimers, a disruption in the strand that reproductive enzymes cannot copy. This leads to frameshifting during genetic replication and protein synthesis, usually killing the cell. Before formation of the UV-blocking ozone layer, when early prokaryotes approached the surface of the ocean, they almost invariably died out. The few that survived had developed enzymes that monitored the genetic material and removed thymine dimers by nucleotide excision repair enzymes. Many enzymes and proteins involved in modern mitosis and meiosis are similar to repair enzymes, and are believed to be evolved modifications of the enzymes originally used to overcome DNA damages caused by UV.\n", "Humans with light skin pigmentation living in low sunlight environments experience increased vitamin D synthesis compared to humans with dark skin pigmentation due to the ability to absorb more sunlight. Almost every part of the human body, including the skeleton, the immune system, and brain requires vitamin D. Sunlight is necessary for the production of vitamin D. Vitamin D production in the skin begins when UV radiation penetrates the skin and interacts with a cholesterol-like molecule produce pre-vitamin D3. This reaction only occurs in the presence of medium length UVR, UVB. Most of the UVB and UVC rays are destroyed or reflected by ozone, oxygen, and dust in the atmosphere. 
UVB reaches the Earth’s surface in the highest amounts when its path is straight and goes through a little layer of atmosphere.\n", "BULLET::::- Seborrhoeic dermatitis\n\nBULLET::::- Autoimmune bullous diseases (immunobullous diseases)\n\nBULLET::::- Mycosis fungoides\n\nBULLET::::- Smith–Lemli–Opitz syndrome\n\nBULLET::::- Porphyria cutanea tarda\n\nAlso, many conditions are aggravated by strong light, including:\n\nBULLET::::- Systemic lupus erythematosus\n\nBULLET::::- Sjögren’s syndrome\n\nBULLET::::- Sinear Usher syndrome\n\nBULLET::::- Rosacea\n\nBULLET::::- Dermatomyositis\n\nBULLET::::- Darier’s disease\n\nBULLET::::- Kindler-Weary syndrome\n\nSection::::Fluorescent lamps.\n\nThe Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) in 2008 reviewed the connections between light from fluorescent lamps, especially from compact fluorescent lamp, and numerous human diseases, with results including:\n", "At the turn of the century it was discovered that human eyes contain a non-imaging photosensor that is the primary regulator of the human circadian rhythm. This photosensor is particularly affected by blue light, and when it observes light the pineal gland stops the secretion of melatonin. The presence of light at night in human dwellings (or for shift workers) makes going to sleep more difficult and reduces the overall level of melatonin in the bloodstream, and exposure to a low-level incandescent bulb for 39 minutes is sufficient to suppress melatonin levels to 50%. Because melatonin is a powerful anti-oxidant, it is hypothesized that this reduction can result in an increased risk of breast and prostate cancer.\n", "... the virgin knelt down with great veneration in an attitude of prayer, and her back was turned to the manger ... And while she was standing thus in prayer, I saw the child in her womb move and suddenly in a moment she gave birth to her son, from whom radiated such an ineffable light and splendour, that the sun was not comparable to it, nor did the candle that St. Joseph had put there, give any light at all, the divine light totally annihilating the material light of the candle ... I saw the glorious infant lying on the ground naked and shining. His body was pure from any kind of soil and impurity. Then I heard also the singing of the angels, which was of miraculous sweetness and great beauty ... \n", "The energy in sunlight is captured by plants, cyanobacteria, purple bacteria, green sulfur bacteria and some protists. This process is often coupled to the conversion of carbon dioxide into organic compounds, as part of photosynthesis, which is discussed below. The energy capture and carbon fixation systems can however operate separately in prokaryotes, as purple bacteria and green sulfur bacteria can use sunlight as a source of energy, while switching between carbon fixation and the fermentation of organic compounds.\n", "Blood levels of folate, a nutrient vital for fetal development, can be degraded by UV radiation, raising concerns about sun exposure for pregnant women. Lifespan and fertility can be adversely affected for individuals born during peaks of the 11-year solar cycle, possibly because of UV-related folate deficiency during gestation.\n\nSection::::Safe level of sun exposure.\n", "Prodigiosin is a secondary metabolite of \"Serratia marcescens\". Because it is easy to detect, it has been used as a model system to study secondary metabolism. Prodigiosin production has long been known to be enhanced by phosphate limitation. 
In low phosphate conditions, pigmented strains have been shown to grow to a higher density than unpigmented strains.\n\nSection::::Religious function.\n", "In modern science, it is generally accepted that most ignes fatui are caused by the oxidation of phosphine (PH), diphosphane (PH), and methane (CH). These compounds, produced by organic decay, can cause photon emissions. Since phosphine and diphosphane mixtures spontaneously ignite on contact with the oxygen in air, only small quantities of it would be needed to ignite the much more abundant methane to create ephemeral fires. Furthermore, phosphine produces phosphorus pentoxide as a by-product, which forms phosphoric acid upon contact with water vapor.\n\nSection::::See also.\n\nBULLET::::- Will-o'-the-wisp (Ghost lights)\n\nBULLET::::- Marfa lights\n\nBULLET::::- Brown Mountain Lights\n\nBULLET::::- Hessdalen light\n", "The hormones cortisol and melatonin are effected by the signals light sends through the body's nervous system. These hormones help regulate blood sugar to give the body the appropriate amount of energy that is required throughout the day. Cortisol is levels are high upon waking and gradual decrease over the course of the day, melatonin levels are high when the body is entering and exiting a sleeping status and are very low over the course of waking hours. The earth's natural light-dark cycle is the basis for the release of these hormones.\n", "Fats are catabolised by hydrolysis to free fatty acids and glycerol. The glycerol enters glycolysis and the fatty acids are broken down by beta oxidation to release acetyl-CoA, which then is fed into the citric acid cycle. Fatty acids release more energy upon oxidation than carbohydrates because carbohydrates contain more oxygen in their structures. Steroids are also broken down by some bacteria in a process similar to beta oxidation, and this breakdown process involves the release of significant amounts of acetyl-CoA, propionyl-CoA, and pyruvate, which can all be used by the cell for energy. \"M. tuberculosis\" can also grow on the lipid cholesterol as a sole source of carbon, and genes involved in the cholesterol use pathway(s) have been validated as important during various stages of the infection lifecycle of \"M. tuberculosis\".\n", ", and isomerizes to 13-cis upon illumination with light. Several models of the complete proteorhodopsin photocycle have been proposed, based on FTIR and UV–visible spectroscopy; they resemble established photocycle models for bacteriorhodopsin. Complete proteorhodopsin based photosystems have been discovered and expressed in E. coli, giving them additional light mediated energy gradient capability for ATP generation without external need for retinal or precursors; with the PR, gene five other proteins code for the photopigment biosynthetic pathyway.\n\nSection::::Genetic engineering.\n", "Exposure to light during the hours of melatonin production reduces melatonin production. Melatonin has been shown to mitigate the growth of tumors in rats. By suppressing the production of melatonin over the course of the night rats showed increased rates of tumors over the course of a four week period.\n\nArtificial light at night causing circadian disruption additionally impacts sex steroid production. Increased levels of progestagens and androgens was found in night shift workers as compared to \"working hour\" workers.\n", "The mechanism(s) of photoinhibition are under debate, several mechanisms have been suggested. 
Reactive oxygen species, especially singlet oxygen, have a role in the acceptor-side, singlet oxygen and low-light mechanisms. In the manganese mechanism and the donor side mechanism, reactive oxygen species do not play a direct role. Photoinhibited PSII produces singlet oxygen, and reactive oxygen species inhibit the repair cycle of PSII by inhibiting protein synthesis in the chloroplast.\n\nSection::::Molecular mechanism(s).:Acceptor-side photoinhibition.\n", "Melatonin synthesis is also regulated by the nervous system. Nerve fibers in the retinohypothalamic tract connect the retina to the suprachiasmatic nucleus (SCN). The SCN stimulates the release of Norepinephrine from sympathetic nerve fibers from the superior cervical ganglia that synapse with the pinealocytes. Norepinephrine causes the production of melatonin in the pinealocytes by stimulating the production of cAMP. Because the release of norepinephrine from the nerve fibers occurs at night, this system of regulation maintains the body’s circadian rhythms.\n\nSection::::Melatonin.:Synthesis.\n", "Section::::Mechanism.\n\nLight first passes into a mammal's system through the retina, then takes one of two paths: the light gets collected by rod cells and cone cells and the retinal ganglion cells (RGCs), or it is directly collected by these RGCs.\n", "In case of the fungus \"Neurospora crassa\", the circadian clock is controlled by two light-sensitive domains, known as the white-collar-complex (WCC) and the LOV domain vivid (VVD-LOV). WCC is primarily responsible for the light-induced transcription on the control-gene frequency (FRQ) under day-light conditions, which drives the expression of VVD-LOV and governs the negative feedback loop onto the circadian clock. By contrast, the role of VVD-LOV is mainly modulatory and does not directly affect FRQ. \n\nSection::::Gene expression.\n\nLOV domains have been found to control gene expression through DNA binding and\n", "After the news spread of his wife leaving her with another woman, Chandin gave up his religion, his God, and began to drink heavily. Then one night, he raped Pohpoh, his eldest daughter. Every night he would call one of his daughters into bed with him. During the day, the children went to school like other children but, at night, they lived under the sexual tyranny of their father. Pohpoh had a childhood admirer and friend, whom she called her Boyie. One day, she seduced him in his mother’s house but stopped right before sexual intercourse. Back in the nursing home, Mala begins to have visitors, Otoh and his father Ambrose Mohanty. Ambrose was Mala’s Boyie.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-13572
Why, in a lot of movies that contain aliens, is humanity always weaker, yet (normally) ends up defeating them?
ENDER'S GAME (decent movie, book was better) has humans stronger than the aliens in it. It bugs me too when a clearly weaker force gets to win; there's no way throwing a few extra jets at something should defeat invading aliens. Usually humans get lucky, find one little loophole or weak point, and exploit it, and that's how we win. Realistically we would be completely wiped out in most movie scenarios.
[ "The film crew teams up with the Welsh Williams brothers to fight off the aliens, with a great deal of blood and gore. One highlight features Ricky running down some aliens in a combine harvester, to the tune of \"Combine Harvester (Brand New Key)\" by The Wurzels.\n", "The president of Fox's marketing department felt the film was an \"extremely difficult movie to market\", that its story of two species evolving from enemies to friends made the science fiction picture less about the technology used to film it and more \"along the lines of brotherhood.\" This was epitomized by the film's tagline: \"Enemies because they were taught to be, allies because they had to be, brothers because they dared to be.\"\n", "\"Logan's Run\" depicted a futuristic swingers' utopia that practiced euthanasia as a form of population control and \"The Stepford Wives\" anticipated a reaction to the women's liberation movement. \"Enemy Mine\" demonstrated that the foes we have come to hate are often just like us, even if they appear alien.\n", "In his review of \"Alien Resurrection\", Roger Ebert wrote \"I lost interest [in \"Alien 3\"], when I realized that the aliens could at all times outrun and outleap the humans, so all the chase scenes were contrivances.\" Ebert later stated in his review of \"Fight Club\" that he considered \"Alien 3\" \"one of the best-looking bad movies he's ever seen\".\n", "A DVD Verdict review says, \"Alien Siege isn't horrible and plays with some interesting ideas but my interest wavered throughout. It's one of the better SciFi originals, but that's not saying much.\" A Geeks of Doom review says, \"Shamelessly ripping off \"V: The Final Battle\" and \"Independence Day\", and that is just for starters, Robert Stadd’s made-for-television movie is a thoroughly enjoyable slice of sci-fi pie that knows exactly the demographic it is aiming for (if you perked up a little with the V name-drop, congratulations, you are the target) and hits the bull’s eye pretty much dead center.\" A DVD Pub review says, \"So there’s some good ideas being volleyed around in \"ALIEN SIEGE\", but after a while the movie just does the typical save-my-daughter-and-save-the-world thing.\"\n", "The visual style of science fiction film can be characterized by a clash between alien and familiar images. This clash is implemented when alien images become familiar, as in \"A Clockwork Orange\", when the repetitions of the Korova Milkbar make the alien decor seem more familiar. As well, familiar images become alien, as in the films \"Repo Man\" and \"Liquid Sky\". For example, in \"Dr. Strangelove\", the, distortion of the humans make the familiar images seem more alien. Finally, alien and familiar images are juxtaposed, as in \"The Deadly Mantis\", when a giant praying mantis is shown climbing the Washington Monument.\n", "From \"Under the Yoke\" onwards, Draka equipment takes an even more science-fiction turn. Genetically modified baboon shock troops (containing the DNA of certain species of dogs and humans as well), combat spacecraft and pain-inducing irremovable bracelets for troublesome slaves make appearances. While in \"The Stone Dogs\", the Protracted Struggle is still primarily a Cold War-esque arms race of nuclear capability, modified, genetically-targeted diseases and advanced computer viruses also make an appearance.\n", "The extraterrestrial species referred to as \"Aliens\" (technically known as \"Xenomorphs\") are the primary, titular antagonists of the \"Alien\" franchise. 
Introduced in the first film, Aliens are laid as eggs by a queen. This produces a facehugger, which latches onto and impregnates its prey with an embryo. This in turn produces an Alien with some characteristics of its host which ejects itself from the host's rib cage, killing it in the process. Described as \"pure\" by the android Ash, the Alien's motivation is to ensure the survival of its species; this commonly entails the elimination of creatures, such as humans, who pose a threat. With rudimentary intelligence, the Aliens are difficult to kill.\n", "Most novels of the 1950s and 1960s tended to overestimate near-term progress in space and weapons technologies, while neglecting the rise of computer technology. In this version of the 1990s, characters wield laser weapons, which can kill or stun depending on how focussed the beam is. In the opening sequence the attack on Gregson's flying craft involves both lasers and .50 caliber machine guns, which elicits the comment \"Something old, something new\" from Wellford.\n", "Gravity is fairly low on some levels, and the correct application of the flamethrower or alien weapon allows the player to hover. \"Hopping\" with the grenade launcher or rockets can be used, but usually involves a fair amount of damage to the character.\n", "The war machines are crab-like \"walkers\" with six legs. A Heat Ray is built into the machine's \"head\", and is fired from a single eye. The fighting machines do not appear to have protection against modern artillery (avoiding the \"invisible shields\" seen in the 1953 film version and Steven Spielberg's 2005 film), leaving their ability to conquer unexplained. The aliens do have a substance similar to the black smoke, but is more of a dense green toxic gas unable to rise above ground level, allowing survivors to escape by getting to high places.\n", "On Earth, U.S. President David Coffey receives an offer of conditional surrender from the Fithp. Coffey is willing to let the Fithp withdraw into space, and is reluctant to destroy their technology and cargo of females and children. He is opposed by his advisors, who feel that by allowing the Fithp to escape and regroup, he risks the whole of humanity. When Coffey seemingly folds under the pressure, National Security Adviser Admiral Carrell stages a bloodless \"coup d'etat\", circumventing the President and communicating the rejection of the aliens' terms. An act of sabotage by the humans aboard the alien vessel disables the Fithp engines, allowing the \"Michael\" to inflict heavy damage, which forces the Fithp to accept humanity as the stronger species and surrender themselves to become part of the human \"herd\". 
In the final scene, the Fithp leader lies down on his back in a submissive gesture, and allows former captive Congressman Wes Dawson to place his foot on his chest, this being the formal Fithp gesture of surrender.\n", "The infeasibility of the H-bomb approach was published by four postgraduate physics students in 2011 and then reported by \"The Daily Telegraph\" in 2012:\n\nIn the commentary track, Ben Affleck says he \"asked Michael why it was easier to train oil drillers to become astronauts than it was to train astronauts to become oil drillers, and he told me to shut the fuck up, so that was the end of that talk.\"\n\nSection::::Reception.:Accolades.\n", "In order to provide subject matter to which audiences can relate, the large majority of intelligent alien races presented in films have an anthropomorphic nature, possessing human emotions and motivations. In films like \"Cocoon\", \"My Stepmother Is an Alien\", \"Species\", \"Contact\", \"The Box\", \"Knowing\", \"The Day the Earth Stood Still\", and \"The Watch\", the aliens were nearly human in physical appearance, and communicated in a common earth language. However, the aliens in \"Stargate\" and \"Prometheus\" were human in physical appearance but communicated in an alien language. A few films have tried to represent intelligent aliens as something utterly different from the usual humanoid shape (e.g. An intelligent life form surrounding an entire planet in \"Solaris\", the ball shaped creature in \"Dark Star\", microbial-like creatures in \"The Invasion\", shape-shifting creatures in \"Evolution\"). Recent trends in films involve building-size alien creatures like in the movie \"Pacific Rim\" where the CGI has tremendously improved over the previous decades as compared in previous films such as \"Godzilla\".\n", "Section::::Themes, imagery, and visual elements.:Disaster films.\n\nA frequent theme among science fiction films is that of impending or actual disaster on an epic scale. These often address a particular concern of the writer by serving as a vehicle of warning against a type of activity, including technological research. In the case of alien invasion films, the creatures can provide as a stand-in for a feared foreign power.\n\nDisaster films typically fall into the following general categories:\n", "Examples of science fiction violence in stand-alone films is \"Saturn 3\" which tells the story of a powerful robot that goes out of control and causes damage and inflicts injury on living beings, resulting in the death of one dog and at least one human with the other dying while destroying the killing-machine.\n", "\"Unknown Origin\" tells the story of an alien organism that falls on Earth and penetrates an underwater submarine. It then begins attacking the crew members, killing many of them and is mostly resistant to their firepower. The human crew members attempt to contain and destroy the organism and prevent it from reaching a dry surface and reproducing itself.\n\n\"Red Planet\" starring Val Kilmer and Terence Stamp is about a crew of humans on a mission to Mars and also bring with them a robot \"Amy\" which goes out of control and frequently attacks them, violently.\n", "The global peace agreement brings great humour to the emissary. The aliens were, in fact, seeking a \"greater\" talent for war, as they had genetically seeded thousands of planets to breed warriors to fight for them across the galaxy. 
Humanity's \"small talent\" for war (crude weapons, petty bickering over borders) is not significant enough to be of any use to them. And he laughingly states that—worst of all—the people of Earth long for peace. As the ambassador calls down his fleet to destroy the Earth, he thanks the Security Council for an amusing day and their \"delightful sense of the absurd.\" His parting comment is \"...as one of your fine Earth actors, Edmund Gwenn, once said: \"Dying is easy—comedy is hard.\"\"\n", "Science fiction writers from the end of World War II onwards have examined the morality and consequences of space warfare. With Heinlein's \"Starship Troopers\" are A. E. van Vogt's \"War against the Rull\" (1959) and Fredric Brown's \"Arena\" (1944). Opposing them are Murray Leinster's \"First Contact\" (1945), Barry Longyear's \"Enemy Mine,\" Kim Stanley Robinson's \"The Lucky Strike,\" Connie Willis' \"Schwarzchild Radius,\" and John Kessel's \"Invaders.\" In Orson Scott Card's \"Ender's Game\", the protagonist wages war remotely, with no realization that he is doing so.\n", "Pitted against another team of players (riding on a separate vehicle), the recruits board their training vehicles equipped with laser guns, called S4 Alienators (\"Jumbo Judy\") and proceed into the training room, blasting at cardboard cutouts and crudely drawn images of aliens amid flashing red lights. Soon, however, MIB Director Zed (played by Rip Torn) informs the trainees that an alien prison ship has crash landed in the middle of New York City. The guns are then \"set to full power\" as the trainees are instantly launched into the heart of New York, attempting to score as many points as they can by shooting the aliens in their vulnerable areas (the eyes and shoulders). Aliens vary from large, plain-in-sight creatures to small ones hiding in windows and bushes. Certain aliens will fire back causing the cart to spin out of control.\n", "Cameron drew inspiration for the \"Aliens\" story from the Vietnam War, a situation in which a technologically superior force was mired in a hostile foreign environment: \"Their training and technology are inappropriate for the specifics, and that can be seen as analogous to the inability of superior American firepower to conquer the unseen enemy in Vietnam: a lot of firepower and very little wisdom, and it didn't work.\" The attitude of the Colonial Marines was influenced by the Vietnam War; they are portrayed as cocky and confident of their inevitable victory, but when they find themselves facing a technologically inferior but more determined enemy, the outcome is not what they expect. Cameron listed Robert A. Heinlein's novel \"Starship Troopers\" as a major influence that led to the incorporation of various themes and phrases, such as the terms \"the drop\" and \"bug hunt\" as well as the cargo-loader exoskeleton.\n", "The second mechanical method by which violence is discouraged is simply the deadliness of combat in the game. Whilst Story Points act as a buffer helping prevent the actual deaths of a character, combat in the \"Doctor Who\" game tends to be swift and brutal, with many alien weapons simply doing \"Lethal\" damage, rather than a damage number. 
Characters that seek violent solutions to problems are at grave risk of injury, with damage being removed directly from character attributes.\n", "Moving quickly back towards their shuttle (Carthage), with Trollenberg's head, the heroes quickly escape before the space station explodes from the breach, while the super soldier boards a smaller science vessel/escape pod, taking with it some alien chestbursters. As the space station explodes, Ripley has a bad feeling that the super soldier survived and, in the 'science pod', the super soldier commences work on a second super soldier who was also near completion.\n", "Destruction of planets and stars has been a frequently used aspect of interstellar warfare since the \"Lensman\" series. This is not a realistic capability, as it has been calculated that a force on the order of 10 joules of energy, or roughly the total output of the sun in a week, would be required to overcome the gravity that holds together an Earth-sized planet. The destruction of Alderaan in \"Star Wars Episode IV: A New Hope\" is estimated to require 1.0 × 10 joules of energy, millions of times more than would be necessary to break the planet apart at a slower rate.\n", "Milton Krims's script turns the source story on its head. He introduces an actual alien presence, in place of a human scapegoat, in the form of a jagged light pattern and changes the genuine long space voyage into a simulated voyage that all the participants know from the start is only a simulation that can be terminated at any time by pressing a panic button. The alien then causes trouble between the passengers instead of diverting trouble toward itself as a scapegoat. When Joe Dix will not permit anyone to press the panic button the jagged Antheon light pattern occupies a plant specimen, turning into a human sized monster. After stating it's concerns about the violent behavior of humans then giving a warning to stay away from the planet Antheon, it forces Joe Dix to hit the panic button, thus ending the \"flight.\"\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00169
Topology of a hollow sphere? Can I deform it to a disc?
That pinhole does in fact count as changing the object's shape -- this is the whole premise of topology. In topology, we distinguish objects by the properties that survive continuous deformation (stretching and bending, but no tearing or gluing), in a spirit similar to graph theory, which cares only about how things connect. So, yes, if you poke a hole in the sphere, it is no longer the same topological shape. Even if it still _looks_ like a sphere, it's technically just a distorted disc as soon as you poke the hole in it.
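One way to make "puncture a sphere, get a disc" precise is stereographic projection, and the Euler characteristic is the invariant that shows an unpunctured sphere can never be a disc. A minimal sketch in LaTeX notation, using only textbook facts:

    % Stereographic projection from the north pole N = (0,0,1) is a homeomorphism
    %   S^2 \setminus \{N\} \to \mathbb{R}^2, and \mathbb{R}^2 is homeomorphic to an open disc:
    \pi(x, y, z) = \left( \frac{x}{1-z}, \; \frac{y}{1-z} \right)
    % Euler characteristics separate the unpunctured cases:
    %   \chi(S^2) = 2, \qquad \chi(D^2) = 1
    % Homeomorphic spaces share \chi, so a sphere with no hole is not a disc.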
[ "Sometimes we drop the condition that \"S\" be compressible. If \"D\" were to bound a disk inside \"S\" (which is always the case if \"S\" is incompressible, for example), then compressing \"S\" along \"D\" would result in a disjoint union of a sphere and a surface homeomorphic to \"S\". The resulting surface with the sphere deleted might or might not be isotopic to \"S\", and it will be if \"S\" is incompressible and \"M\" is irreducible.\n\nSection::::Algebraically incompressible surfaces.\n", "Given a compressible surface \"S\" with a compressing disk \"D\" that we may assume lies in the interior of \"M\" and intersects \"S\" transversely, one may perform embedded 1-surgery on \"S\" to get a surface that is obtained by compressing \"S\" along \"D\". There is a tubular neighborhood of \"D\" whose closure is an embedding of \"D\" × [-1,1] with \"D\" × 0 being identified with \"D\" and with\n\nThen\n\nis a new properly embedded surface obtained by compressing \"S\" along \"D\".\n", "One of the most notorious pathologies in topology is the Alexander horned sphere, a counterexample showing that topologically embedding the sphere \"S\" in R may fail to separate the space cleanly. As a counter-example, it motivated the extra condition of \"tameness\" which suppresses the kind of \"wild\" behaviour the horned sphere exhibits.\n", "Descartes' theorem on the \"total defect\" of a polyhedron states that if the polyhedron is homeomorphic to a sphere (i.e. topologically equivalent to a sphere, so that it may be deformed into a sphere by stretching without tearing), the \"total defect\", i.e. the sum of the defects of all of the vertices, is two full circles (or 720° or 4π radians). The polyhedron need not be convex.\n", "BULLET::::- Problem 132 asks for the volume of a sphere with a cylindrical hole drilled through it, but does not note the invariance of the problem under changes of radius.\n\nBULLET::::- . Levi argues that the volume depends only on the height of the hole based on the fact that the ring can be swept out by a half-disk with the height as its diameter.\n", "The above examples of mono-monostatic objects are necessarily inhomogeneous, that is, the density of their material varies across their body. The question of whether it is possible to construct a three-dimensional body which is mono-monostatic but also homogeneous and convex was raised by Russian mathematician Vladimir Arnold in 1995. The requirement of being convex is essential as it is trivial to construct a mono-monostatic non-convex body (an example would be a ball with a cavity inside it). Convex means that a straight line between any two points on a body lies inside the body, or, in other words, that the surface has no sunken regions but instead bulges outward (or is at least flat) at every point. It was already well known, from a geometrical and topological generalization of the classical four-vertex theorem, that a plane curve has at least four extrema of curvature, specifically, at least two local maxima and at least two local minima (see right figure), meaning that a (convex) mono-monostatic object does not exist in two dimensions. Whereas a common anticipation was that a three-dimensional body should also have at least four extrema, Arnold conjectured that this number could be smaller.\n", "These applications follow from the theories of contact, umbilical points, ridges, germs, and cusps. Porteous has suggestions for readers wanting to know more about singularity theory. 
The underlying theme is the study of critical points of appropriate distance-squared functions. A second edition was published in 2001, where the author was able to report on related work by Vladimir Arnold on spherical curves. In fact, Porteous had translated Arnold's paper from the Russian.\n\nSection::::Death and legacy.\n", "BULLET::::- (a) formula_44: If we remove a cylinder from the 2-sphere, we are left with two disks. We have to glue back informula_42 – that is, two disks - and it's clear that the result of doing so is to give us two disjoint spheres. (Fig. 2a)\n", "For example, a light ray grazing the boundary of a Kottler/Schwarzschild void will not be bended by the lens mass condensation (i.e., does not feel the gravitational potential of the embedded lens) and travels along a straight line path in a flat background universe.\n\nSection::::Properties.\n\nIn order to be an analytical solution of the Einstein's field equation, the embedded lens has to satisfy the following conditions:\n\nBULLET::::1. The mass of the embedded lens (point mass or distributed), should be the same as that from the removed sphere.\n\nBULLET::::2. The mass distribution within the void should be spherically symmetric.\n", "BULLET::::- For the sphere the curvatures of all normal sections are equal, so every point is an umbilic. The sphere and plane are the only surfaces with this property.\n\nBULLET::::5. \"The sphere does not have a surface of centers.\"\n", "Note that 2-spheres are excluded since they have no nontrivial compressing disks by the Jordan-Schoenflies theorem, and 3-manifolds have abundant embedded 2-spheres. Sometimes one alters the definition so that an incompressible sphere is a 2-sphere embedded in a 3-manifold that does not bound an embedded 3-ball. Such spheres arise exactly when a 3-manifold is not irreducible. Since this notion of incompressibility for a sphere is quite different from the above definition for surfaces, often an incompressible sphere is instead referred to as an essential sphere or a reducing sphere.\n\nSection::::Compression.\n", "BULLET::::4. Repeat steps 2–3 on each new disc.\n\nWe can represent these skeletons by rooted trees such that each point is joined to only a finite number of other points: the tree has a point for each disc, and a line joining points if the corresponding discs intersect in the skeleton. \n", "Section::::Mathematical definitions.:Centers of maximal disks (or balls).\n\nA disk (or ball) \"B\" is said to be \"maximal\" in a set \"A\" if\n\nBULLET::::- formula_1, and\n\nBULLET::::- If another disc \"D\" contains \"B\", then formula_2.\n\nOne way of defining the skeleton of a shape \"A\" is as the set of centers of all maximal disks in \"A\".\n\nSection::::Mathematical definitions.:Centers of bi-tangent circles.\n", "In addition, all bodies have the same coefficient of dilatation so every body shrinks and expands in similar proportion as they move about the sphere. To finish the story, Poincaré states that the index of refraction will also vary with the distance \"r\", in inverse proportion to formula_1.\n\nHow will this world look to inhabitants of this sphere? \n", "Some models of gene gun also use a rupture disc, but not as a safety device. Instead, their function is part of the normal operation of the device, allowing for precise pressure-based control of particle application to a sample. 
In these devices, the rupture disc is designed to fail within an optimal range of gas pressure that has been empirically associated with successful particle integration into tissue or cell culture. Different disc strengths can be available for some gene gun models.\n\nSection::::External links.\n\nBULLET::::- Rupture disc sizing calculator to calculate discharge area requirement of a simple system.\n", "be the standard embedding; then there is a regular homotopy of immersions\n\nsuch that \"ƒ\" = \"ƒ\" and \"ƒ\" = −\"ƒ\".\n\nSection::::History.\n\nAn existence proof for crease-free sphere eversion was first created by .\n\nIt is difficult to visualize a particular example of such a turning, although some digital animations have been produced that make it somewhat easier. The first example was exhibited through the efforts of several mathematicians, including Arnold S. Shapiro and Bernard Morin, who was blind. On the other hand, it is much easier to prove that such a \"turning\" exists and that is what Smale did.\n", "The singularities can also studied topologically. Then, for example, there are no topological singularities of codimension 2. In a 3-dimensional polyhedral space without a boundary (faces not glued to other faces) any point has a neighborhood homeomorphic either to an open ball or to a cone over the projective plane. In the former case, the point is necessarily a codimension 3 metric singularity. The general problem of topologically classifying singularities in polyhedral spaces is largely unresolved (apart from simple statements that e.g. any singularity is locally a cone over a spherical polyhedral space one dimension less and we can study singularities there).\n", "It is tempting to think that every non-convex polyhedron must have some vertices whose defect is negative, but this need not be the case. Two counterexamples to this are the small stellated dodecahedron and the great stellated dodecahedron, which have twelve convex points each with positive defects. \n", "Let the surface of the sphere be \"S\". The volume of the cone with base area \"S\" and height \"r\" is formula_15, which must equal the volume of the sphere: formula_16. Therefore, the surface area of the sphere must be formula_17, or \"four times its largest circle\". Archimedes proves this rigorously in On the Sphere and Cylinder.\n\nSection::::Curvilinear shapes with rational volumes.\n", "Section::::Diamond: Unindexed Search for High-dimensional Data.\n", "The importance of this determinant condition shows the following statement:\n\nBULLET::::- A ruled surface formula_47 is \"developable\" into a plane, if for any point the Gauss curvature vanishes. This is exactly the case if\n\nThe generators of any ruled surface coalesce with one family of its asymptotic lines. Also forming one family of its lines of curvature. It can be shown that \"any developable\" surface is a cone, a cylinder or a surface formed by all tangents of a space curve.\n\nSection::::Further examples.\n\nBULLET::::- Conoid\n\nBULLET::::- Catalan surface\n\nBULLET::::- Oloid\n\nSection::::Application and History of developable surfaces.\n", "Smale's graduate adviser Raoul Bott at first told Smale that the result was obviously wrong . \n", "Although individual ball implants would present too many problems due to migration, flat premade square silicone \"character\" sheets with pre-positioned dots would solve the readability problems. 
These individual square sheets could be preformed and implanted with the desired effect.\n\nSection::::Aftercare.\n", "The skeleton of a shape \"A\" can also be defined as the set of centers of the discs that touch the boundary of \"A\" in two or more locations. This definition assures that the skeleton points are equidistant from the shape boundary and is mathematically equivalent to Blum's medial axis transform.\n\nSection::::Mathematical definitions.:Ridges of the distance function.\n", "Topological defects have not been observed by astronomers; however, certain types are not compatible with current observations. In particular, if domain walls and monopoles were present in the observable universe, they would result in significant deviations from what astronomers can see.\n" ]
[]
[]
[ "normal" ]
[ "A hollow sphere with a hole in it is still a sphere." ]
[ "false presupposition", "normal" ]
[ "A hollow sphere with a hole in it is no longer a sphere." ]
2018-03502
If tattoos fade because white blood cells are constantly eating the ink, then how do tattoos not weaken the immune system if so many WBCs are focused on ink rather than infections?
Weakened is a relative term, and areas of the body are "soft segmented" (they interact, but have their own points of cell origin). Since all your bone marrow (some bones more than others) produces your blood cells, you sort of have a "great wall of defense" beside every bone, reinforced by near-bone garrisons. Your WBCs near the tattoo are seeing conflict, but not all of them, and not all of your body will immediately respond (it "knows" from evolution that WBCs are needed all over; moving the entire army to one area would really weaken the system). White blood cells react to little chemical messages dropped by invaders (from their movement or metabolism), and a tattoo mostly results in messages dropped by the dermis (skin) cells near the WBCs, and ink is far less motile than a pathogen. This means there is a limited range over which WBCs will react to those messages, even though it is quite a large distance from a WBC's point of view. So ultimately, the tattoo doesn't attract enough attention or cause enough damage to trigger a complete meltdown. Your immune system may be attacking it, but it attacks a lot of things all the time without you knowing, so it's not much different anyway.
[ "In amateur tattooing, such as that practiced in prisons, however, there is an elevated risk of infection. Infections that can theoretically be transmitted by the use of unsterilized tattoo equipment or contaminated ink include surface infections of the skin, fungal infections, some forms of hepatitis, herpes simplex virus, HIV, staph, tetanus, and tuberculosis.\n", "Tattoo removal is most commonly performed using lasers that break down the ink particles in the tattoo into smaller particles. Dermal macrophages are part of the immune system, tasked with collecting and digesting cellular debris. In the case of tattoo pigments, macrophages collect ink pigments, but have difficulty breaking them down. Instead, they store the ink pigments. If a macrophage is damaged, it releases its captive ink, which is taken up by other macrophages. This can make it particularly difficult to remove tattoos. When treatments break down ink particles into smaller pieces, macrophages can more easily remove them.\n", "There are a number of factors that determine how many treatments will be needed and the level of success one might experience. Age of tattoo, ink density, color and even where the tattoo is located on the body, all play an important role in how many treatments will be needed for complete removal. However, a rarely recognized factor of tattoo removal is the role of the client’s immune response. The normal process of tattoo removal is fragmentation followed by phagocytosis which is then drained away via the lymphatics. Consequently, it’s the inflammation resulting from the actual laser treatment and the natural stimulation of the hosts’s immune response that ultimately results in removal of tattoo ink; thus variations in results are enormous.\n", "While iNKT cells are not very numerous, their unique properties makes them an important regulatory cell that can influence how the immune system develops. They are known to play a role in chronic inflammatory diseases like autoimmune disease, asthma and metabolic syndrome. In human autoimmune diseases, their numbers are decreased in peripheral blood. It is not clear whether this is a cause or effect of the disease. Absence of microbe exposure in early development led to increased iNKT cells and immune morbidity in a mouse model.\n\nSection::::Function.\n", "The amount of ink that remains in the skin throughout the healing process determines how the final tattoo will look. If a tattoo becomes infected or the flakes fall off too soon (e.g., if it absorbs too much water and sloughs off early or is picked or scraped off) then the ink will not be properly fixed in the skin and the final image will be negatively affected.\n", "In 2017, researchers from the European Synchrotron Radiation Facility in France say the chemicals in tattoo ink can travel in the bloodstream and accumulate in the lymph nodes, obstructing their ability to fight infections. However, the authors noted in their paper that most tattooed individuals including the donors analyzed do not suffer from chronic inflammation.\n\nTattoo artists frequently recommend sun protection of skin to prevent tattoos from fading and to preserve skin integrity to make future tattooing easier.\n\nSection::::Removal.\n", "The role of the immune system in response to the presence of a virus has both beneficial and detrimental effects on the cardiac system. The arrival of natural killer cells (NK cells) at the site of infection limits viral proliferation in myocytes. 
Conversely, while certain cytokines released from immune cells have protective effects, others such as tumor necrosis factor-alpha (TNFα) have deleterious effects on heart cells. Moreover, peak concentrations of T cells in the myocardium during days 7-14 play important roles in both viral clearance and immune mediated cardiac damage. T-cells not only lyse and destroy infected myocytes, but due to molecular mimicry, they also destroy normal, healthy cardiac cells, further driving the heart towards dilated cardiomyopathy.\n", "Mayo Clinic researchers found a three-fold increased incidence of cutaneous NTM infection between 1980 and 2009 in a population-based study of residents of Olmsted County, Minnesota. The most common species were \"M. marinum\", accounting for 45% of cases and \"M. chelonae\" and \"M. abscessus\", together accounting for 32% of patients. \"M. chelonae\" infection outbreaks, as a consequence of tattooing with infected ink, have been reported in the United Kingdom and the United States.\n\nRapidly growing NTMs are implicated in catheter infections, post-LASIK, skin and soft tissue (especially post-cosmetic surgery) and pulmonary infections.\n\nSection::::Pathogenesis.\n", "Some pigment migrates from a tattoo site to lymph nodes, where large particles may accumulate. When larger particles accumulate in the lymph nodes, inflammation may occur. Smaller particles, such as those created by laser tattoo treatments, are small enough to be carried away by the lymphatic system and not accumulate.\n\nSection::::Other adverse effects.:Interference with melanoma diagnosis.\n\nLymph nodes may become discolored and inflamed with the presence of tattoo pigments, but discoloration and inflammation are also visual indicators of melanoma; consequently, diagnosing melanoma in a patient with tattoos is made difficult, and special precautions must be taken to avoid misdiagnoses.\n", "Currently, there are five major distinct iNKT cell subsets. These subset cells produce a different set of cytokines once activated. The subtypes iNKT1, iNKT2 and iNKT17 mirror Th Cell subsets in cytokine production. In addition there are subtypes specialized in T follicular helper-like function and Il-10 dependent regulatory functions. Once activated iNKT cells can impact the type and strength of an immune response. They engage in cross talk with other immune cells, like dendritic cells, neutrophils and lymphocytes. Activation occurs by engagement with their invariant TCR. iNKT cells can also be indirectly activated through cytokine signaling.\n", "IM and ID DNA delivery initiate immune responses differently. In the skin, keratinocytes, fibroblasts and Langerhans cells take up and express antigens and are responsible for inducing a primary antibody response. Transfected Langerhans cells migrate out of the skin (within 12 hours) to the draining lymph node where they prime secondary B- and T-cell responses. In skeletal muscle striated muscle cells are most frequently transfected, but seem to be unimportant in immune response. Instead, IM inoculated DNA “washes” into the draining lymph node within minutes, where distal dendritic cells are transfected and then initiate an immune response. Transfected myocytes seem to act as a “reservoir” of antigen for trafficking professional APCs.\n", "Dermatologists have observed rare but severe medical complications from tattoo pigments in the body, and have noted that people acquiring tattoos rarely assess health risks \"prior\" to receiving their tattoos. 
Some medical practitioners have recommended greater regulation of pigments used in tattoo ink. The wide range of pigments currently used in tattoo inks may create unforeseen health problems.\n\nSection::::Infection.\n", "Not all nipple-areola tattoos are successful. There is a chance that the tattoo could get infected just as any tattoo can. In a study from 1988—1993 of 103 patients who received nipple-areola tattoos, 5 patients reported getting an infection, 1 patient reported getting a rash, one reported getting slough, and 19 patients had to have their tattoo touched up due to the pigment diminishing from the tattoo through the healing process.\n", "Natural killer cells, one of member ILCs, are lymphocytes and a component of the innate immune system which does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as \"missing self.\" This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex)—a situation that can arise in viral infections of host cells. They were named \"natural killer\" because of the initial notion that they do not require activation in order to kill cells that are \"missing self.\" For many years it was unclear how NK cells recognize tumor cells and infected cells. It is now known that the MHC makeup on the surface of those cells is altered and the NK cells become activated through recognition of \"missing self\". Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors (KIR) which essentially put the brakes on NK cells.\n", "From 2009, research has been conducted in Australia and the United States which indicates the presence of an immune cycle.\n\nSection::::Rationale.\n\nThere are multiple rationales proposed for how Coley's toxins affect the patient.\n\nSection::::Rationale.:Macrophages.\n", "Almost all the B cell progenitors in the bursa of 4-day-old chickens express IgM on their cell surface. Studies have shown that B cells of 4 – 8 week old birds are derived from 2 – 4 allotypically committed precursor cells in each follicle. Bursal follicles are colonized by 2-5 pre-bursal stem cells and these undergo extensive proliferation after they are committed to an allotype. Expression of IgM is controlled by a biological clock as opposed to the bursal microenvironment. Moreover, the source of all B cells in adult birds was determined to be a population of self-renewing sIg+ B cells.\n", "Natural killer cells (NK cells) are a component of the innate immune system that does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as \"missing self.\" This term describes cells with abnormally low levels of a cell-surface marker called MHC I (major histocompatibility complex) - a situation that can arise in viral infections of host cells. They were named \"natural killer\" because of the initial notion that they do not require activation in order to kill cells that are \"missing self.\" For many years, it was unclear how NK cell recognize tumor cells and infected cells. It is now known that the MHC makeup on the surface of those cells is altered and the NK cells become activated through recognition of \"missing self\". 
Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors (KIR) that, in essence, put the brakes on NK cells. The NK-92 cell line does not express KIR and is developed for tumor therapy.\n", "The ability to generate memory cells following a primary infection and the consequent rapid immune activation and response to succeeding infections by the same antigen is fundamental to the role that T and B cells play in the adaptive immune response. For many years, NK cells have been considered to be a part of the innate immune system. However, recently increasing evidence suggests that NK cells can display several features that are usually attributed to adaptive immune cells (e.g. T cell responses) such as dynamic expansion and contraction of subsets, increased longevity and a form of immunological memory, characterized by a more potent response upon secondary challenge with the same antigen.\n", "In 2006, the CDC reported 3 clusters with 44 cases of methicillin-resistant staph infection traced to unlicensed tattooists.\n\nSection::::Reactions to inks.\n\nPerhaps due to the mechanism whereby the skin's immune system encapsulates pigment particles in fibrous tissue, tattoo inks have been described as \"remarkably nonreactive histologically\". However, some allergic reactions have been medically documented. No estimate of the overall incidence of allergic reactions to tattoo pigments exists. Allergies to latex are apparently more common than to inks; many artists will use non-latex gloves when requested.\n", "There are three types of viral infections that can be considered under the topic of viral transformation. These are cytocidal, persistent, and transforming infections. Cytocidal infections can cause fusion of adjacent cells, disruption of transport pathways including ions and other cell signals, disruption of DNA, RNA and protein synthesis, and nearly always leads to cell death. Persistent infections involve viral material that lays dormant within a cell until activated by some stimulus. This type of infection usually causes few obvious changes within the cell but can lead to long chronic diseases. Transforming infections are also referred to as malignant transformation. This infection causes a host cell to become malignant and can be either cytocidal (usually in the case of RNA viruses) or persistent (usually in the case of DNA viruses). Cells with transforming infections undergo immortalization and inherit the genetic material to produce tumors. Since the term cytocidal, or cytolytic, refers to cell death, these three infections are not mutually exclusive. Many transforming infections by DNA tumor viruses are also cytocidal.\n", "BULLET::::2. during initial HAART immune recovery, with pro-inflammatory signaling by antigen-presenting cells without an effector response; and\n\nBULLET::::3. at IRIS, a cytokine storm with a predominant type-1 helper T-cell interferon-gamma response.\n\nThree clinical predictors of cryptococcal-related paradoxical IRIS risk include:\n\nBULLET::::1. lack of initial CSF pleocytosis (i.e. low CSF white blood cell count);\n\nBULLET::::2. elevated C-reactive protein;\n\nBULLET::::3. failure to sterilize the CSF before immune recovery.\n", "Health effects of tattoos\n\nA variety of health effects can result from tattooing. 
Because it requires breaking the skin barrier, tattooing carries inherent health risks, including infection and allergic reactions. Modern tattooists reduce such risks by following universal precautions, working with single-use disposable needles, and sterilising equipment after each use. Many jurisdictions require tattooists to undergo periodic bloodborne pathogen training, such as is provided through the Red Cross and the U.S. Occupational Safety and Health Administration.\n", "All transgene vectors have the risk of causing moderate to severe side effects with respect to the immune system, and lentiviral vectors are no exception. In the laboratory or clinical trials, one indication of an immune reaction to the vector is a drop in transgene expression. Often, this sudden loss of transgene expression is not due to a simple silencing of a transgene or loss of the vector from the cell, but loss of the cell itself. The body has multiple methods of targeting and ridding itself of any cells infected with the lentivirus, all of them falling under either activity by the innate immune system or adaptive immune system. In the cases of some HIV-1-derived lentiviral vectors, both immune responses can occur.\n", "Since tattoo instruments come in contact with blood and bodily fluids, diseases may be transmitted if the instruments are used on more than one person without being sterilised. However, infection from tattooing in clean and modern tattoo studios employing single-use needles is rare. With amateur tattoos, such as those applied in prisons, however, there is an elevated risk of infection. To address this problem, a programme was introduced in Canada as of the summer of 2005 that provides legal tattooing in prisons, both to reduce health risks and to provide inmates with a marketable skill. Inmates were to be trained to staff and operate the tattoo parlours once six of them opened successfully.\n", "Unlike adaptive and innate immunity, which must sense the infection to be turned on (and can take weeks to become effective in the case of adaptive immunity) intrinsic immune proteins are constitutively expressed and ready to shut down infection immediately following viral entry. This is particularly important in retroviral infections since viral integration into the host genome occurs quickly after entry and reverse transcription and is largely irreversible.\n\nBecause the production of intrinsic immune mediating proteins cannot be increased during infection, these defenses can become saturated and ineffective if a cell is infected with a high level of virus.\n" ]
[ "WBCs are focused on attacking the tattoo." ]
[ "WBCs are only weakly attacking the tattoo and are always in some sort of action state so it is not noticed. " ]
[ "false presupposition" ]
[ "WBCs are focused on attacking the tattoo." ]
[ "false presupposition" ]
[ "WBCs are only weakly attacking the tattoo and are always in some sort of action state so it is not noticed. " ]
2018-04843
Why do some sounds (e.g. chewing) sound louder with earphones plugged in?
Bone conduction is sound being conducted to the inner ear through the skull. Tapping on your head, chewing, things like that. When you block your ears, you effectively block out everything else and only hear what is conducted through the bones.
[ "Some users have noted that ambient noise can be a problem. The built-in microphone on the pen can pick up small amounts of noise from writing on paper, and adjacent ambient noise is often louder than a far away speaker. However, the included headphones have embedded microphones that reduce this ambient noise. (Headphones for the Echo must be purchased separately).\n", "The World Health Organization warns that increasing use of headphones and earphones puts 1.1 billion teenagers and young adults at risk of hearing loss due to unsafe use of personal audio devices. Many smartphones and personal media players are sold with earphones that do a poor job of blocking ambient noise, leading some users to turn up the volume to the maximum level to drown out street noise. People listening to their media players on crowded commutes sometimes play music at high volumes feel a sense of separation, freedom and escape from their surroundings.\n", "According to the Scientific Committee on Emerging and Newly Identified Health Risks, the risk of hearing damage from digital audio players depends on both sound level and listening time. The listening habits of most users are unlikely to cause hearing loss, but some people are putting their hearing at risk, because they set the volume control very high or listen to music at high levels for many hours per day. Such listening habits may result in temporary or permanent hearing loss, tinnitus, and difficulties understanding speech in noisy environments.\n", "While ambient electrical noise is a serious issue when dealing with millivolt and microvolt signals such as those typically originating from microphones and other input transducers, it is highly unlikely that such induced noise would significantly audibly affect high level output signals in low impedance systems such as the speaker wires connecting an amplifier to a speaker, unless of course, the cables are unusually long, the location is unusually noisy or the listener has an exceptionally good ear.\n", "Wearing earmuffs makes it difficult to communicate because it blocks speech noise which may make speech sound muffled or unintelligible. It also makes it difficult to localize sound.\n\nSection::::Hearing protection.:Specific considerations for hearing protection for workers with hearing loss.\n", "During the amount of time an individual wears earmuffs, the device can be jostled and displaced from the proper position that allows for the highest attenuation. This can be common in the workplace, as many individuals are in motion during the time they are wearing the hearing protection device. Moving the jaw while chewing or talking and perspiration are examples of ways in which readjustment can occur, causing the seal to be broken between the earcup and skin and allowing sound to leak in.\n\nSection::::Hearing protection.:Barriers to effectiveness.:Deterioration.\n", "The vuvuzelas have the potential to cause noise-induced hearing loss. Prof James Hall III, Dr Dirk Koekemoer, De Wet Swanepoel and colleagues at the University of Pretoria found that vuvuzelas can have a negative effect when a listener's eardrums are exposed to the instrument's high-intensity sound. The vuvuzelas produce an average sound pressure of 113 dB(A) at from the device opening. The study finds that subjects should not be exposed to more than 15 minutes per day at an intensity of 100 dB(A). 
The study assumes that if a single vuvuzela emits a sound that is dangerously loud to subjects within a radius, and numerous vuvuzelas are typically blown together for the duration of a match, it may put spectators at a significant risk of hearing loss. Hearing loss experts at the U.S. National Institute for Occupational Safety and Health (NIOSH) recommend that exposure at the 113 dB(A) level not exceed 45 seconds per day. A newer model has a modified mouthpiece that reduces the volume by 20 dB.\n", "One simple method for checking earmuff fit is to lift one or both muffs away from the head while in a noisy environment. If the noise is considerably louder with the adjustment, then the earmuffs are providing at least some degree of noise reduction.\n", "Some models have adjustable gain on the microphone itself to be able to accommodate different level sources, such as loud instruments or quiet voices. Adjustable gain helps to avoid clipping and maximize signal to noise ratio.\n\nSome models have adjustable squelch, which silences the output when the receiver does not get a strong or quality signal from the microphone, instead of reproducing noise. When squelch is adjusted, the threshold of the signal quality or level is adjusted.\n\nSection::::Products.\n", "Active earmuffs have an electronic component and microphones that allow the user to control their access to communication while attenuating background noise. When in loud, hazardous settings, the wearer may still be required to listen to outside sources, such as machinery work, their supervisor's commands, or talk to their colleagues. While the material and design of the muff allows for a reasonable attenuation (roughly 22 dBNRR), the user has the option to allow some sounds in that are necessary for their job. These earmuffs incorporate a volume control to increase and decrease the attenuation.\n", "The noise reduction of passive earplugs varies with frequency but is largely independent of level (soft noises are reduced as much as loud noises). As a result, while loud noises are reduced in level, protecting hearing, it can be difficult to hear low level noises. Active electronic earplugs exist, where loud noises are reduced more than soft noises, and soft sounds may even be amplified, providing dynamic range compression. This is done by having a standard passive earplug, together with a microphone/speaker pair (microphone on outside, speaker on inside; formally a pair of transducers), so sound can be transmitted without being attenuated by the earplug. When external sounds exceed an established threshold (typically 82 dBA SPL), the amplification of the electronic circuit is reduced. At very high levels, the amplification is turned off automatically and you receive the full attenuation of the earplug just as if it were turned off and seated in the ear canal. This protects hearing, but allows one to hear normally when sounds are in safe ranges – for example, have a normal conversation in a low-noise situation, but be protected from sudden loud noises, for example at a construction site or a while hunting.\n", "The usual way of limiting sound volume on devices driving headphones is by limiting output power. 
This has the additional undesirable effect of being dependent on the efficiency of the headphones; a device producing the maximum allowed power may not produce adequate volume when paired with low-efficiency, high-impedance equipment, while the same amount of power can reach dangerous levels with very efficient earphones.\n", "The energy density of sound waves decreases as they spread out, so that increasing the distance between the receiver and source results in a progressively lower intensity of sound at the receiver. In a normal three-dimensional setting, with a point source and point receptor, the intensity of sound waves will be attenuated according to the inverse square of the distance from the source.\n\nSection::::Damping.\n", "Section::::Digital signal processing.:Sound around mode.\n\nSound around mode allows for real-time overlapping of music and the sounds surrounding the listener in her environment, which are captured by a microphone and mixed into the audio signal. As a result, the user may hear the music playing and external sounds of the environment at the same time. This can increase user safety (especially in big cities and busy streets), as a user can hear a mugger following her or hear an oncoming car.\n\nSection::::Controversy.\n", "The amount of masking will vary depending on the characteristics of both the target signal and the masker, and will also be specific to an individual listener. While the person in the example above was able to detect the cat scratching at 26 dB SPL, another person may not be able to hear the cat scratching while the vacuum was on until the sound level of the cat scratching was increased to 30 dB SPL (thereby making the amount of masking for the second listener 20 dB).\n\nSection::::Simultaneous masking.\n", "In Frank Herbert's science fiction novel \"Dune\"—first serialized in \"Analog\" from 1963 to 1965 and then published independently in August 1965—the Baron Harkonnen employs a \"cone of silence\" when having a private discussion with Count Fenring. In the novel's glossary, Herbert describes the device as the sound-deadening \"field of a distorter that limits the carrying power of the voice or any other vibrator by damping the vibrations with an image-vibration 180 degrees out of phase\". Used for privacy, the field does not visually obscure lip movement. Herbert had previously mentioned the cone of silence, on a much smaller scale, in his 1955 short story \"Cease Fire\".\n", "Section::::Varieties.:Passive vs. active.\n\nThere are two different types of earmuffs used to protect the user from loud sounds based on the acoustical properties and materials used to create them: passively attenuating and actively attenuating earmuffs.\n\nThe ability of a passive earmuff to attenuate a signal is based on the materials used. The material and structure of the earmuff device are used to decrease the sound level that is transmitted through the ear canal. Materials, such as a cupped foam coated in hard plastic, will block sound due to the thick and dampening properties of the foam.\n", "Some electronic HPDs, known as Hearing Enhancement Protection Systems, provide hearing protection from high-level sounds while allowing transmission of other sounds like speech. Some also have the ability to amplify low-level sounds. This type may be beneficial for users who are in noisy environments, but still need access to lower level sounds. 
For example, soldiers who need to protect their hearing but also need to be able to identify enemy forces and communicate in noise, hunters who rely on detecting and localizing soft sounds of wildlife but still wish to protect their hearing from recreational firearm blasts, as well as users with pre-existing hearing loss who are in noisy environments may all benefit from Hearing Enhancement Protection Systems.\n", "The \"unmasked threshold\" is the quietest level of the signal which can be perceived without a masking signal present. The \"masked threshold\" is the quietest level of the signal perceived when combined with a specific masking noise. The amount of masking is the difference between the masked and unmasked thresholds.\n", "Recent research on users of bone-anchored upper and lower limb prostheses showed that this osseoperception is not only mediated by mechanoreceptors but also by auditory receptors. This means that, rather than just feeling mechanical influences on the device, users also hear the movements of their prosthesis. This joint mechanical and auditory sensory perception is likely responsible for the improved environment perception of users of osseointegrated prostheses compared to traditional socket suspended devices. It is not clear, however, to what extent this implicit sensory feedback actually influences prosthesis users in everyday life.\n\nSection::::Applications.\n", "Because bone conduction headphones transmit sound to the inner ear through the bones of the skull, users can consume audio content while maintaining situational awareness.\n\nSection::::Safety.:Use in the 21st century.\n\nThe Google Glass device employs bone conduction technology for the relay of information to the user through a transducer that sits beside the user's ear. The use of bone conduction means that any vocal content that is received by the Glass user is nearly inaudible to outsiders.\n", "A person with normal hearing can experience this by sticking their fingers into their ears and talking. Otherwise, this effect is often experienced by hearing aid users who only have a mild to moderate high-frequency hearing loss, but use hearing aids which block the entire ear canal.\n", "Section::::Amplitude difference.\n", "Contrasting with the company's usual method for designing motherboards, the first components that Asus placed on the device's PCB were the speakers. This was done to ensure the other components did not force speaker placement towards one side, which would harm sound quality. The design saw the implementation of two microphones to ensure the user's hand placement on the device would not muffle sound during videoconferencing, while the headphone jack was moved to the bottom of the device, preventing the headphone wire from draping across the screen.\n", "Broadcasting organisations experienced difficulties with their presentations. Television and radio audiences often heard only the sound of vuvuzelas. The BBC, RTÉ, ESPN and BSkyB have examined the possibility of filtering the ambient noise while maintaining game commentary.\n" ]
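The exposure limits quoted in the vuvuzela passage above (no more than 15 minutes per day at 100 dB(A), and no more than about 45 seconds per day at 113 dB(A)) are consistent with the standard NIOSH criterion of 85 dB(A) over an 8-hour day with a 3 dB exchange rate. Below is a minimal sketch of that arithmetic in Python; treating this formula as the exact basis of the cited recommendations is an assumption, since the passages do not spell it out.

    # Allowed daily exposure under the NIOSH criterion: 85 dB(A) for 8 hours,
    # with the allowed time halving for every 3 dB increase in level.
    def allowed_minutes(level_dba, criterion=85.0, exchange_rate=3.0, base_minutes=480.0):
        """Recommended maximum daily exposure, in minutes, at a given sound level."""
        return base_minutes / 2 ** ((level_dba - criterion) / exchange_rate)

    for level in (85, 100, 113):
        minutes = allowed_minutes(level)
        print(f"{level} dB(A): {minutes:.2f} min (about {minutes * 60:.0f} s)")
    # 100 dB(A) -> 15.00 min, matching the 15 minutes per day cited above.
    # 113 dB(A) -> roughly 0.74 min, i.e. about 45 seconds per day.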
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-19548
How do escalators maintain the same speed regardless of how many people are on them?
Whenever electricity is involved it can't be truly ELI5, but I'll try by skipping over the details. I'm a winder electrician, which means I build motors and motor systems. Escalators are driven by induction motors that always turn at a constant speed linked to the 60 Hz frequency of the electric lines. Every time the electricity changes polarity, the motor moves on to the next magnetic pole (simplified). Typical speeds are 1200/1800/3600 RPM for 6/4/2-pole motors respectively. As the load increases, the motor will draw more power up until it stalls; until then it won't slow down much. Real speeds are actually a bit lower due to "slip" of the rotor, but true synchronous motors do exist and are used as generators and in heavy industry. The only motors whose speed varies greatly with load are the brushed sort usually found in tools and small appliances. For reasons I won't go into, they are not linked to the frequency of the power and can often run on either AC or DC power.
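A worked example of the speeds mentioned in the answer above: the synchronous speed of an AC motor is set by the line frequency and the number of magnetic poles (n = 120 * f / p), and a loaded induction motor runs only a few percent below that because of slip. The formula is standard; the 3% slip figure used below is an illustrative assumption, not a value taken from the answer. A minimal sketch in Python:

    # Synchronous speed of an AC motor in RPM: n_sync = 120 * f / p,
    # where f is the line frequency in Hz and p is the number of poles.
    def synchronous_rpm(frequency_hz, poles):
        return 120.0 * frequency_hz / poles

    def loaded_rpm(frequency_hz, poles, slip=0.03):
        # An induction motor runs slightly below synchronous speed; the slip
        # grows only modestly as the load on the escalator increases.
        return synchronous_rpm(frequency_hz, poles) * (1.0 - slip)

    for poles in (2, 4, 6):
        n_sync = synchronous_rpm(60, poles)  # 3600, 1800, 1200 RPM, as cited above
        print(f"{poles}-pole motor at 60 Hz: {n_sync:.0f} RPM synchronous, "
              f"about {loaded_rpm(60, poles):.0f} RPM under load")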
[ "Many public transport systems handle a high directional flow of passengers— often traveling to work in a city in the morning rush hour and away from the said city in the late afternoon. To increase the passenger throughput, many systems can be reconfigured to change the direction of the optimized flow. A common example is a railway or metro station with more than two parallel escalators, where the majority of the escalators can be set to move in one direction. This gives rise to the measure of the peak-flow rather than a simple average of half of the total capacity.\n", "The direction of escalator movement (up or down) can be permanently set, controlled manually depending on the predominant flow of the crowd, or controlled automatically. In some setups, the direction is controlled by whoever arrives first.\n\nSection::::Design, components, and operation.:Design and layout considerations.\n", "BULLET::::- In the Park Pobedy station of the Moscow Metro, the escalators are or 740 steps long, and high. It takes three minutes to transit.\n\nBULLET::::- The longest escalator in Prague, and in the European Union, is at the Náměstí Míru station at long and high.\n\nBULLET::::- The longest escalators in Western Europe are in the Elbphilharmonie in Hamburg with a length of , in Helsinki Koivusaari Metro Station (), in Helsinki Airport Railway Station (), and at Stockholm Metro station Västra skogen ().\n", "Most countries require escalators to have moving handrails that keep pace with the movement of the steps as a safety measure. This helps riders steady themselves, especially when stepping onto the moving stairs. Occasionally a handrail moves at a slightly different speed from the steps, causing it to \"creep\" slowly forward or backward relative to the steps; it is only slippage and normal wear that causes such losses of synchronicity, and is not by design.\n", "BULLET::::- The tallest escalator on the London Underground system is at Angel station on the Northern line with a length of , and a vertical rise of .\n\nBULLET::::- The longest wooden escalators in the United Kingdom are at the Tyne Cyclist and Pedestrian Tunnel, with a length of . (See above)\n\nBULLET::::- The longest escalator of a European shopping mall is at MyZeil, Frankfurt, Germany, with a length of .\n", "Temporal traffic patterns must be anticipated. Some escalators need only to move people from one floor to another, but others may have specific requirements, such as funneling visitors towards exits or exhibits. The visibility and accessibility of the escalator to traffic is relevant. Designers need to account for the projected traffic volumes. For example, a single-width escalator traveling at about per second can move about 2000 people per hour, assuming that passengers ride single file. The carrying capacity of an escalator system is typically matched to the expected peak traffic demand. For example, escalators at transit stations must be designed to cater for the peak traffic flow discharged from a train, without excessive bunching at the escalator entrance.\n", "Design factors include physical requirements, location, traffic patterns, safety considerations and aesthetics. Physical factors such as the distance to be spanned determine the length and pitch of the escalator, while factors such as the infrastructure's ability to provide support and power must be considered. 
How upward and downward traffic is separated and load/unload areas are other important considerations.\n", "BULLET::::- Central-Mid-Levels escalator, : in Hong Kong, tens of thousands of commuters travel each work day between Central and the Mid-levels, a residential district hundreds of feet uphill, using this long distance system of escalators and moving walkways. It is the world's longest outdoor escalator \"system\" (not a single escalator span). It goes only one way at a time; the direction reverses depending on rush hour traffic direction.\n", "BULLET::::- The longest escalator in Bangkok, Thailand and Southeast Asia is in the MRT's Si Lom Station. It connects the concourse level with platform 1 which in turn connects to Hua Lam Phong. It is in length and in depth.\n\nSection::::Notable examples.:Longest individual escalators.:Australia.\n\nBULLET::::- The longest set of single-span uninterrupted escalators in the Southern Hemisphere is at Parliament underground railway station in Melbourne.\n", "BULLET::::- The largest \"single truss escalator\" is in the Bentall Centre in Kingston upon Thames in Greater London, UK. It connects the ground floor with the second floor with top and bottom supports.\n\nSection::::Notable examples.:Longest individual escalators.:North and South America.\n\nBULLET::::- The longest set of single-span uninterrupted escalators in the Western Hemisphere is at Wheaton station on the Washington Metro Red Line. They are long with a vertical rise of , and take what is variously described as 2 minutes and 45 seconds or nearly three-and-a-half minutes, to ascend or descend without walking.\n", "In the state capital, Athens, members of the Presidential Guard, provide a 24-hour honor guard, with an hourly guard change, at the Presidential Mansion and at the Tomb of the Unknown Soldier, off Syntagma Square at the foot of the Hellenic Parliament. The Changing the Guard at the Tomb of the Unknown Soldier in particular has become a tourist attraction, with many people marvelling at the guards, who stand motionless for two 20-minute intervals, during their 1-hour shifts.\n", "BULLET::::- The longest escalators in the world are installed in deep underground stations of the Saint Petersburg Metro. The Ploshchad Lenina, Chernyshevskaya, and Admiralteyskaya stations have escalators up to long and high.\n\nBULLET::::- The longest \"freestanding\" (supported only at the ends) escalator in the world is inside CNN Center’s atrium in Atlanta. It rises 8 stories and is long. Originally built as the entrance to the amusement park \"The World of Sid and Marty Krofft\", the escalator is now used for CNN studio tours.\n\nSection::::Notable examples.:Longest individual escalators.:Asia.\n", "BULLET::::- The total length of the escalators is .\n\nBULLET::::- The arch has a cross-sectional diameter greater than that of a cross-channel Eurostar train.\n\nSection::::Stadium.:Pitch.\n\nThe pitch size, as lined for association football, is long by wide, slightly narrower than the old Wembley, as required by the UEFA stadium categories for a category four stadium, the top category.\n", "Escalators typically rise at an angle of about 30 degrees from the ground. They move at per second (like moving walkways) and may traverse vertical distances in excess of . 
Most modern escalators have single-piece aluminum or stainless steel steps that move on a system of tracks in a continuous loop.\n", "BULLET::::- In December 2011, a network of six escalators of length, equivalent to 28 stories high, was opened in Medellín, Colombia, offering the 12,000 residents of Comuna 13 a six-minute ride to the city center compared to the previous 35-minute climb on foot.\n\nBULLET::::- Cascade, Yerevan: an escalator system of length and height.\n\nBULLET::::- Ocean Park, Hong Kong: a long escalator system connecting two parts of the Park, with an overall length of .\n\nSection::::Notable examples.:Longest individual escalators.\n\nSection::::Notable examples.:Longest individual escalators.:World.\n", "An electric motor scoops up a ball every minute. Every five minutes, the top rail will dump and deposit a ball on the second rail. Every hour, the upper and middle rails dump and one ball is transferred to the bottom rail to increment the hours. At 1:00 all three rails dump their balls to the feed rail at the bottom.\n\nSection::::Variations.\n", "Escalators are typically configured in one of three ways: \"parallel\" (up and down escalators adjacent or nearby, often seen in metro stations and multilevel movie theaters), \"multiple parallel\" (banks of more than one escalator going in the same direction parallel to banks going the other direction), or \"crisscross\" (escalators going in one direction \"stacked\" with escalators going the opposite direction oriented adjacent but perpendicular, frequently used in department stores or shopping centers).\n", "Section::::Just In Sequence is Just In Time.:Displacement of buffers upwards to suppliers.\n", "BULLET::::- In March 2017 eighteen people suffered injuries at a Hong Kong's Langham Place shopping mall when an escalator maintained by Otis reversed direction from up to down.\n", "same speed as the rotor and vanes, but there is a very slight 'walking behind' of the vane contact area and there is a very slight speed slippage which results in the inner belt wear being spread out and this results in much longer belt life. Also, the belt set now confines the pressure and speed-squared forces like a pressure vessel and the potential speed of operation is very much higher. The result of all this is to raise both the operating pressure and the operating speed and this amounts to a 10 times increase in hydraulic packaging density and similar decrease in weight per unit power.\n", "Some fans attempted to escape by utilizing the median to either slide down, or get to the other escalator. After the escalator was stopped, a photo was released of the escalator stairs crumpled at the bottom, exposing jagged and exposed metal plates.\n\nSection::::Victims.\n\nSeven people were reported to have been seriously injured, with varying accounts of the total amount of injured. Some of the injured were trapped between the metal plates of the steps at the bottom of the escalator.\n\nSection::::Aftermath.\n", "Escalators have the capacity to move large numbers of people. They can be placed in the same physical space as a staircase. They have no waiting interval (except during very heavy traffic). They can be used to guide people toward main exits or special exhibits. They may be weatherproofed for outdoor use. 
A nonfunctional escalator can function as a normal staircase, whereas many other methods of transport become useless when they break down or lose power.\n\nSection::::Design, components, and operation.\n\nSection::::Design, components, and operation.:Operation and layout.\n", "The track has a complex height profile: Immediately after passing the bridge behind \"Sportivnaya\" is a nearly long incline with two sharp radii. There is also an approximately long tunnel, a cable-stayed bridge and another bridge.\n\nSection::::Rolling stock.\n", "Section::::Design, components, and operation.:Components.:Steps.\n", "The ride uses four trains, each consisting of two rows, each with four across seating and can carry eight people per train. The trains use standard Euro-Fighter over-the-shoulder restraints.\n\nSection::::Ride Experience.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04169
Why sometimes when you hit your head it either hurts like hell or you don't feel a thing
It depends on where you hit your head. It's kind of like anywhere else on the body: it won't hurt much if someone hits your hand, for example, but if you accidentally hit your "funny bone" on something, it'll hurt like hell. It's the same on the head. The frontal lobe, which is responsible for thinking, logic, judgment, and so on, is the most vulnerable part of the brain, and injuries there are extremely sensitive and dangerous.
[ "Contusions are identified with two forms of diagnosis: acceleration of the brain and direct trauma. A direct trauma injury is much more severe than an acceleration injury (in most cases) and requires much more intensive diagnosis and testing. The full extent of the injury may not be known until testing done in a hospital is complete.\n", "If another blow to the head occurs after a concussion but before its symptoms have gone away, there is a slight risk of developing the serious second-impact syndrome (SIS). In SIS, the brain rapidly swells, greatly increasing intracranial pressure. People who have repeated mild head injuries over a prolonged period, such as boxers and Gridiron football players, are at risk for chronic traumatic encephalopathy (or the related variant dementia pugilistica), a severe, chronic disorder involving a decline in mental and physical abilities.\n\nSection::::Epidemiology.\n", "Even in the absence of an impact, significant acceleration or deceleration of the head can cause TBI; however in most cases a combination of impact and acceleration is probably to blame. Forces involving the head striking or being struck by something, termed \"contact\" or \"impact loading\", are the cause of most focal injuries, and movement of the brain within the skull, termed \"noncontact\" or \"inertial loading\", usually causes diffuse injuries. The violent shaking of an infant that causes shaken baby syndrome commonly manifests as diffuse injury. In impact loading, the force sends shock waves through the skull and brain, resulting in tissue damage. Shock waves caused by penetrating injuries can also destroy tissue along the path of a projectile, compounding the damage caused by the missile itself.\n", "Second-impact syndrome, in which the brain swells dangerously after a minor blow, may occur in very rare cases. The condition may develop in people who receive second blow days or weeks after an initial concussion before its symptoms have gone away. No one is certain of the cause of this often fatal complication, but it is commonly thought that the swelling occurs because the brain's arterioles lose the ability to regulate their diameter, causing a loss of control over cerebral blood flow. As the brain swells, intracranial pressure rapidly rises. The brain can herniate, and the brain stem can fail within five minutes. Except in boxing, all cases have occurred in athletes under age 20. Due to the very small number of documented cases, the diagnosis is controversial, and doubt exists about its validity. A 2010 \"Pediatrics\" review article stated that there is debate whether the brain swelling is due to two separate hits or to just one hit, but in either case, catastrophic football head injuries are three times more likely in high school athletes than in college athletes.\n", "In sports, most cerebral contusions are caused when the brain is either suddenly accelerated, decelerated, or strikes an immovable object. When the blow happens, brain tissue can be damaged, sometimes resulting in the need for hospitalization and surgery. A resection of the contused tissue is needed within surgery pending the severity of the incident. The highest rates of contusions occur in men between the ages of 15 to 24, somewhat due to their aggressive nature. 
If a person sustains a contusion one time, they are more likely to sustain a repeated one.\n\nSection::::Cerebral contusions.:Signs and symptoms in sports.\n", "Section::::Causes.\n\nBrain injuries can result from a number of conditions including:\n\nBULLET::::- trauma; multiple traumatic injuries can lead to chronic traumatic encephalopathy. A coup-contrecoup injury occurs when the force impacting the head is not only strong enough to cause a contusion at the site of impact, but also able to move the brain and cause it to displace rapidly into the opposite side of the skull, causing an additional contusion.\n\nBULLET::::- open head injury\n\nBULLET::::- closed head injury\n", "SIS is a potential complication from an athlete returning to a game before symptoms from a minor head injury have subsided. Such symptoms include headache, cognitive difficulties, or visual changes.\n", "Damage may occur directly under the site of impact, or it may occur on the side opposite the impact (coup and contrecoup injury, respectively). When a moving object impacts the stationary head, coup injuries are typical, while contrecoup injuries are usually produced when the moving head strikes a stationary object.\n\nSection::::Mechanism.:Primary and secondary injury.\n", "Diffuse axonal injury, or DAI, usually occurs as the result of an acceleration or deceleration motion, not necessarily an impact. Axons are stretched and damaged when parts of the brain of differing density slide over one another. Prognoses vary widely depending on the extent of damage.\n\nSection::::Signs and symptoms.\n\nThree categories used for classifying the severity of brain injuries are mild, moderate or severe.\n\nSection::::Signs and symptoms.:Mild brain injuries.\n", "BULLET::::- Basilar skull fractures, those that occur at the base of the skull, are associated with Battle's sign, a subcutaneous bleed over the mastoid, hemotympanum, and cerebrospinal fluid rhinorrhea and otorrhea.\n\nBecause brain injuries can be life-threatening, even people with apparently slight injuries, with no noticeable signs or complaints, require close observation; They have a chance for severe symptoms later on. The caretakers of those patients with mild trauma who are released from the hospital are frequently advised to rouse the patient several times during the next 12 to 24 hours to assess for worsening symptoms.\n", "The initial injury may be a concussion, or it may be another, more severe, type of head trauma, such as cerebral contusion. However, the first concussion need not be severe for the second impact to cause SIS. Also, the second impact may be very minor, even a blow such as an impact to the chest that causes the head to jerk, thereby transmitting forces of acceleration to the brain. Loss of consciousness during the second injury is not necessary for SIS to occur. Both injuries may take place in the same game.\n", "Section::::Classification.:Intracranial bleeding.:Cerebral contusion.\n\nCerebral contusion is bruising of the brain tissue. The piamater is not breached in contusion in contrary to lacerations. The majority of contusions occur in the frontal and temporal lobes. Complications may include cerebral edema and transtentorial herniation. The goal of treatment should be to treat the increased intracranial pressure. The prognosis is guarded.\n\nSection::::Classification.:Intracranial bleeding.:Diffuse axonal injury.\n", "Measures that prevent head injuries in general also prevent SIS. 
Thus athletes are advised to use protective gear such as helmets, though helmets do not entirely prevent the syndrome.\n", "Changes indicative of SIS may begin occurring in the injured brain within 15 seconds of the second concussion. Pathophysiological changes in SIS can include a loss of autoregulation of the brain's blood vessels, which causes them to become congested. The vessels dilate, greatly increasing their diameter and leading to a large increase in cerebral blood flow. Progressive cerebral edema may also occur. The increase of blood and brain volume within the skull causes a rapid and severe increase in intracranial pressure, which can in turn cause uncal and cerebellar brain herniation, a disastrous and potentially fatal condition in which the brain is squeezed past structures within the skull.\n", "Section::::Signs and symptoms.:Symptoms in children.\n\nSymptoms observed in children include changes in eating habits, persistent irritability or sadness, changes in attention, disrupted sleeping habits, or loss of interest in toys.\n", "Brain injury can occur at the site of impact, but can also be at the opposite side of the skull due to a \"contrecoup\" effect (the impact to the head can cause the brain to move within the skull, causing the brain to impact the interior of the skull opposite the head-impact). If the impact causes the head to move, the injury may be worsened, because the brain may ricochet inside the skull causing additional impacts, or the brain may stay relatively still (due to inertia) but be hit by the moving skull (both are contrecoup injuries).\n", "Head injuries can be caused by a large variety of reasons. All of these causes can be put into two categories used to classify head injuries; those that occur from impact (blows) and those that occur from shaking. Common causes of head injury due to impact are motor vehicle traffic collisions, home and occupational accidents, falls, assault, and sports related accidents. Head injuries from shaking are most common amongst infants and children.\n", "DAI can occur across the spectrum of traumatic brain injury (TBI) severity, wherein the burden of injury increases from mild to severe. Concussion may be a milder type of diffuse axonal injury.\n\nSection::::Mechanism.\n\nDAI is the result of traumatic shearing forces that occur when the head is rapidly accelerated or decelerated, as may occur in car accidents, falls, and assaults. Vehicle accidents are the most frequent cause of DAI; it can also occur as the result of child abuse such as in shaken baby syndrome.\n", "Section::::Symptoms.\n\nIf symptoms of a head injury are seen after an accident, medical care is necessary to diagnose and treat the injury. Without medical attention, injuries can progress and cause further brain damage, disability, or death.\n\nSection::::Symptoms.:Common symptoms.\n\nBecause the brain swelling that produces these symptoms is often a slow process, these symptoms may not surface for days to weeks after the injury.\n\nCommon symptoms of a closed-head injury include:\n\nBULLET::::- headache\n\nBULLET::::- dizziness\n\nBULLET::::- nausea\n\nBULLET::::- vomiting\n\nBULLET::::- slurred speech\n\nSection::::Symptoms.:Severe injury symptoms.\n", "Studies on animals have shown that the brain may be more vulnerable to a second concussive injury administered shortly after a first. 
In one such study, a mild impact administered within 24 hours of another one with minimal neurological impairment caused massive breakdown of the blood brain barrier and subsequent brain swelling. Loss of this protective barrier could be responsible for the edema found in SIS.\n\nAnimal studies have shown that the immature brain may be more vulnerable to brain trauma; these findings may provide clues as to why SIS primarily affects people under age 18.\n\nSection::::Diagnosis.\n", "By one estimate, the syndrome kills four to six people under the age of 18 per year. According to the Centers for Disease Control, about 1.5 people die each year from concussion in the US; in most of these cases, the person had received another concussion previously.\n\nIn part due to the poor documentation of the initial injury and continuing symptoms in recorded cases, some professionals think that the condition is over-diagnosed, and some doubt the validity of the diagnosis altogether.\n\nSection::::History.\n", "Often caused by a blow to the head, contusions commonly occur in coup or contre-coup injuries. In coup injuries, the brain is injured directly under the area of impact, while in contrecoup injuries it is injured on the side opposite the impact.\n", "Section::::Diagnosis.\n", "Head Injuries (band)\n\nHead Injuries is an American pop punk band from Fort Collins, Colorado formed in 2012. The group consists of Jared Russel (lead vocals, Guitar), Zack Hill (guitar, vocals), Coty Eikenberg (bass) and Nate Rodriguez (drums).\n", "The band formed in October 2012 and recorded the \"Boogie Nights EP\" with Brandon Carlisle of Teenage Bottlerocket that same month. In 2012 the band headed out on \"The Mission To Del Taco Tour\". And later that year, the group recorded its debut self-titled album at Black In Bluhm Studio, in Denver, Colorado; the album was released in January 2013. In July 2013, the band supported its debut album on \"The Mission To Get Buck Tour\". The group recorded its next album, \"Bail\", at The Blasting Room, in Fort Collins, Colorado.\n\nSection::::History.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-05067
If the big 3 religions and their denominations all worship the same God, how can there be so much violence and disagreement between them (and even within them between denominations)? Considering that they have the biggest part in common (the deity), can't they agree to disagree on other stuff?
The Abrahamic religions only worship the same God insofar as they all believe in the Old Testament, but they have drastically different views on what was divinely revealed after that. Christians believe Jesus came down as the son of God, and thus that God is a trinity and that you need to worship Jesus. Muslims believe that Jesus was just a prophet, that Muhammad was the final prophet you need to follow, and that God is not a trinity, which is to say that they don't really have the same idea of God. And Jews don't believe any of that is legitimate. Aside from this, they all have substantially different religious practices, hierarchies, and attitudes toward evangelism. Because each religion takes its own doctrines so seriously, they all view one another as false religions, and they also fight internally over specific doctrines. To top it off, they have a long history of fighting over the same lands, especially in the Middle East and Europe, and thus tend to see each other as enemy factions.
[ "BULLET::::- Reuven Firestone, a Jewish Rabbi writes about the \"tension\" between the \"particularity\" of one's \"own religious experience\" and the \"universality of the divine reality\" that as expressed in history has led to verbal and violent conflict. So, although this tension may never be \"fully resolved,\" Firestone says that \"it is of utmost consequence for leaders in religion to engage in the process of dialogue.\"\n\nThe Interfaith Amigos\n", "These conflicts are among the most difficult to resolve, particularly when both sides believe that God is on their side and that he has endorsed the moral righteousness of their claims. One of the most infamous quotes associated with religious fanaticism was uttered in 1209 during the siege of Béziers, a Crusader asked the Papal Legate Arnaud Amalric how to tell Catholics from Cathars when the city was taken, to which Amalric replied:\n\n\"\"Caedite eos. Novit enim Dominus qui sunt eius,\"\" or \"Kill them all; God will recognize his.\"\n\nSection::::Ritual violence.\n", "In the twentieth century, Ian Ramsey developed the theory of analogy, a development later cited in numerous works by Alister McGrath. He argued that various models of God are provided in religious writings that interact with each other: a range of analogies for salvation and the nature of God. Ramsey proposed that the models used modify and qualify each other, defining the limits of other analogies. As a result, no one analogy on its own is sufficient, but the combination of every analogy presented in Scripture gives a full and consistent depiction of God. The use of other analogies may then be used to determine if any one model of God is abused or improperly applied.\n", "The Talmud warns against causing an idolater to take oaths. The commentators living in Christian Germany in the 12th century, called Tosafists, permitted Jews to bring a Christian partner to court in partnership during a breakup even though the Christian would take an oath by God, which to Christians would include Jesus, by saying that so long as another deity is not mentioned explicitly, there is no forbidden oath taking place, but only an association. Although all of the Tosafists agreed that partnerships that may lead to such an oath may not be entered into originally, they disagree as to once such a partnership exists whether or not one may go to court in order to not to lose his portion of the partnership and even though such an oath is a side-effect. In a terse comment, they wrote:\n", "Rovelli discusses his religious views in several articles and in his book on Anaximander. He argues that the conflict between rational/scientific thinking and structured religion may find periods of truce (\"there is no contradiction between solving Maxwell's equations and believing that God created Heaven and Earth\"), but it is ultimately unsolvable because (most) religions demand the acceptance of some unquestionable truths while scientific thinking is based on the continuous questioning of any truth. 
Thus, for Rovelli the source of the conflict is not the pretense of science to give answers—the universe, for Rovelli, is full of mystery and a source of awe and emotions—but, on the contrary, the source of the conflict is the acceptance of our ignorance at the foundation of science, which clashes with religions' pretense to be depositories of certain knowledge.\n", "In practice, the predominant position of Modern Orthodoxy on this issue is based on the position of Rabbi Joseph Soloveitchik in an essay entitled \"Confrontation\". He held that Judaism and Christianity are \"two faith communities (which are) intrinsically antithetic\". In his view \"the language of faith of a particular community is totally incomprehensible to the man of a different faith community. Hence the confrontation should occur not at a theological, but at a mundane human level... the great encounter between man and God is a holy, personal and private affair, incomprehensible to the outsider...\" As such, he ruled that theological dialogue between Judaism and Christianity was not possible.\n", "The theological foundations of interreligious dialogue have also been critiqued on the grounds that any interpretation of another faith tradition will be predicated on a particular cultural, historical and anthropological perspective\n", "The most prominent event in the way of dialogue between religions has arguably been the 1986 Peace Prayer in Assisi to which Pope John Paul II, against considerable resistance also from within the Roman Catholic church, invited representatives of all world religions. John Paul II’s remarks regarding Christian denominations were found in his Ut unum sint address. This initiative was taken up by the Community of Sant'Egidio, who, with the support of John Paul II, organized yearly peace meetings of religious representatives. These meetings, consisting of round tables on different issues and of a common time of prayer has done much to further understanding and friendship between religious leaders and to further concrete peace initiatives. In order to avoid the reproaches of syncretism that were leveled at the 1986 Assisi meeting where the representatives of all religions held one common prayer, the follow-up meetings saw the representatives of the different religions pray in different places according to their respective traditions.\n", "The strategy for developing these foundational relationships, according to Munayer, was to bring both sides together. \"I came to the conclusion that the theology of reconciliation was the best theology to deal with all these issues, and that more than anything else, the Jewish and Palestinian believers needed to be brought together, face to face. Anything less would not work, because of the dehumanization and demonization going on from both sides.\"\n", "Section::::Buddhism.\n\nThe earliest reference to Buddhist views on religious pluralism in a political sense is found in the Edicts of Emperor Ashoka:\n\nAll religions should reside everywhere, for all of them desire self-control and purity of heart. Rock Edict Nb7 (S. Dhammika)\n\nContact (between religions) is good. One should listen to and respect the doctrines professed by others. Beloved-of-the-Gods, King Piyadasi, desires that all should be well-learned in the good doctrines of other religions. Rock Edict Nb12 (S. Dhammika)\n\nWhen asked, \"Don’t all religions teach the same thing? 
Is it possible to unify them?\" the Dalai Lama said:\n", "Sulzer repeatedly reassured Zanchi that he and the other mediators were there in Strasbourg to draw up a common statement “which would end the controversy and reconcile the two sides, not to reach a final agreement on the doctrinal issues. At the official ceremony of reconciliation, Zanchi was unwilling to shake hands with Marbach, who still condemned Zanchi's teachings... At this point Sulzer took Zanchi aside and told him that the handshake did not mean that the two parties agreed on doctrine; such agreement could only be reached at a general synod. Instead, the handshake would signify two things: that Zanchi accepted the Consensus' formulation of doctrine, and that he sincerely forgave the other party for wrongs committed against him in the course of the controversy.” \n", "Christians believe that Jesus is the Christian Messiah, Savior of the World and the divine Son of God; Jews and Muslims do not. Similarly, Muslims believe that the \"Qur'an\" was divinely authored, while Jews and Christians do not. There are many examples of such contrasting views, indeed, opposing fundamental beliefs (schisms) exist even within each major religion. Christianity, for example, has many subsets (denominations), which differ greatly on issues of doctrine. Hinduism, with its conception of multiple avatars being expressions of one Supreme God, is more open to the possibility that other religions might be correct for their followers, but this same principle requires the rejection of the exclusivity demanded by each of the Abrahamic religions.\n", "BULLET::::- All three of the Godhead co-exist simultaneously (John 14:16-17; Eph. 3:14-17; 2 Cor. 13:14); and\n\nBULLET::::- All three of the Godhead coinhere, that is, they mutually indwell one another (John 14:10-11; 17:21, 23).\n", "\"Human beings ought to treat each other with respect and hold each other dear independently of theological dialogues, Biblical studies, and independently of what they believe about each other's religion. I am free to reject any religion as humbug if that is what I think of it; but I am duty-bound to respect the dignity of every human being no matter what I may think of his religion. It is not inter-religious understanding that mankind needs but inter-human understanding – an understanding based on our common humanity and wholly independent of any need for common religious beliefs and theological principles.\" (“Judaism in the Post-Christian Era”, Judaism 15:1, Winter 1966, p. 82)\n", "For the Catholic Church, there has been a move at reconciliation not only with Judaism, but also Islam. The Second Vatican Council states that salvation includes others who acknowledge the same creator, and explicitly lists Muslims among those (using the term Mohammedans, which was the word commonly used among non-Muslims at the time). The official Catholic position is therefore that Jews, Muslims and Christians (including churches outside of Rome's authority) all acknowledge the same God, though Jews and Muslims have not yet received the gospel while other churches are generally considered deviant to a greater or lesser degree.\n", "Peter Donovan criticises the language-games approach for failing to recognise that religions operate in a world containing other ideas and that many religious people make claims to truth. 
He notes that many religious believers not only believe their religion to be meaningful and true in its own context, but claim that it is true against all other possible beliefs; if the language games analogy is accepted, such a comparison between beliefs is impossible. Donovan proposes that debates between different religions, and the apologetics of some, demonstrates that they interact with each other and the wider world and so cannot be treated as isolated language games.\n", "BULLET::::- Signatories\n\nSection::::Contents.:Quotations.\n\n\"Muslims and Christians together make up well over half of the world's population. Without peace and justice between these two religious communities, there can be no meaningful peace in the world. The future of the world depends on peace between Muslims and Christians.\"\n\n\"The basis for this peace and understanding already exists. It is part of the very foundational principles of both faiths: love of the One God, and love of the neighbour. These principles are found over and over again in the sacred texts of Islam and Christianity.\"\n", "BULLET::::- recognize each other as churches in which the gospel is rightly preached and the sacraments rightly administered according to the Word of God;\n\nBULLET::::- withdraw any historic condemnation by one side or the other as inappropriate for the life and faith of our churches today;\n\nBULLET::::- continue to recognize each other's Baptism and authorize and encourage the sharing of the Lord's Supper among their members;\n\nBULLET::::- recognize each others' various ministries and make provision for the orderly exchange of ordained ministers of Word and Sacrament;\n", "Section::::Abrahamic religions.\n\nHector Avalos argues that, because religions claim to have divine favor for themselves, over and against other groups, this sense of self-righteousness leads to violence because conflicting claims of superiority, based on unverifiable appeals to God, cannot be objectively adjudicated.\n\nSimilarly, Eric Hickey writes, \"the history of religious violence in the West is as long as the historical record of its three major religions, Judaism, Christianity, and Islam, with their mutual antagonisms and their struggles to adapt and survive despite the secular forces that threaten their continued existence.\"\n", "I do not deny for a moment that the truth of God has reached others through other channels - indeed, I hope and pray that it has. So while I have a special attachment to one mediator, I have respect for them all. (p. 12)\n\nThe Church of Jesus Christ of Latter-day Saints also teaches a form of religious pluralism, that there is at least some truth in almost all religions and philosophies.\n\nSection::::Christianity.:Classical Christian views.\n", "BULLET::::1. Know, in the first place, that mankind agree in essence, as they do in their limbs and senses.\n\nBULLET::::2. Mankind differ as much in essence as they do in form, limbs, and senses – and only so, and not more\".\n\nTo these points Blake has annotated \"This is true Christian philosophy far above all abstraction.\"\n", "Other Christians have held that there can be truth value and salvific value in other faith traditions. 
John Macquarrie, described in the \"Handbook of Anglican Theologians\" (1998) as \"unquestionably Anglicanism's most distinguished systematic theologian in the second half of the twentieth century\", wrote that \"there should be an end to proselytizing but that equally there should be no syncretism of the kind typified by the Baha'i movement\" (p. 2). In discussing 9 founders of major faith traditions (Moses, Zoroaster, Lao-zu, Buddha, Confucius, Socrates, Krishna, Jesus, and Muhammad), which he called \"mediators between the human and the divine\", Macquarrie wrote that:\n", "Section::::Analogies of games.\n\nThe analogy of a game was first proposed by Hans-Georg Gadamer in an attempt to demonstrate the epistemic unity of language. He suggested that language is like a game which everyone participates in and is played by a greater being. Gadamer believed that language makes up the fundamental structure of reality and that human language participates in a greater language; Christianity teaches this to be the divine word which created the world and was incarnate in Jesus Christ.\n", "Section::::Dimensions of the conflict.:Knowledge of the nature of God.\n\nAndrew Louth writes that \"[t]he controversy between St Gregory Palamas and Barlaam the Calabrian is now seen by some scholars as less a conflict between Western influences (represented by Barlaam) and authentic Orthodox spirituality, as a conflict within Greek Christianity about the true meaning of Dionysian language about the nature of God: Barlaam interpreting his apophatic theology as intellectual dialectic, and Gregory seeing it as concerned with the ineffable experience of God.\n\nSection::::Dimensions of the conflict.:Scholasticism.\n", "The concept of mutual exclusivity of different religions itself (as opposed to religious pluralism) is primarily associated with Abrahamic faiths; pagan religions, historically the most common forms of worship, were typically polytheistic and compatible with each other. The roots of the mutual exclusivity may be seen in the \"Torah\", where Jews are ordered to worship the God of Israel to the exclusion of all others.\n\nSection::::Appearances.\n" ]
[ "Because the three main religions worship the same God, they should be able to put the other differences aside. " ]
[ "The three main religions are unable to put their differences aside because they are very large contradictions within the beliefs of the three religions, ones that cannot be overlooked." ]
[ "false presupposition" ]
[ "Because the three main religions worship the same God, they should be able to put the other differences aside. ", "Because the three main religions worship the same God, they should be able to put the other differences aside. " ]
[ "normal", "false presupposition" ]
[ "The three main religions are unable to put their differences aside because they are very large contradictions within the beliefs of the three religions, ones that cannot be overlooked.", "The three main religions are unable to put their differences aside because they are very large contradictions within the beliefs of the three religions, ones that cannot be overlooked." ]
2018-01879
How storage companies like Dropbox keep files safe if all of your files are stored in one gigantic cluster that other users use
Imagine Dropbox is a giant storage facility, like the ones where you can rent a storage locker and put all of the things from your attic. To prevent people from trying to break into others' storage units, customers don't actually get to enter. If you want to store stuff you just walk up to the front gate and hand them a box, along with your ID, and they store it for you. Later when you want to access it, you give them your ID and ask for the box by its name, and they retrieve it for you. With this model, they don't really need separate storage lockers for every user. They just need to do a really good job of keeping track of who owns which box, and never give a box to someone who isn't authorized to open it. The boxes could be in one giant room, organized alphabetically by name. All that matters is that the boxes are tagged with names and owner IDs. In this analogy, Amazon is simply leasing them the land where they bought the storage lockers. That has no effect on their security.
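The analogy above boils down to a shared object store with an ownership check on every retrieval: everyone's boxes can sit in one pool as long as each box is tagged with its owner and the front gate verifies the requester. Below is a minimal, hypothetical sketch in Python; the class and method names are illustrative only and are not Dropbox's actual design.

    class SharedObjectStore:
        """All customers' objects live in one shared pool, keyed by (owner, name)."""

        def __init__(self):
            self._objects = {}  # (owner_id, name) -> bytes

        def put(self, owner_id, name, data):
            # Every stored "box" is tagged with the ID of its owner.
            self._objects[(owner_id, name)] = data

        def get(self, requester_id, name):
            # A box is only handed back to the ID that owns it.
            try:
                return self._objects[(requester_id, name)]
            except KeyError:
                raise PermissionError("no such object for this requester")

    store = SharedObjectStore()
    store.put("alice", "taxes.pdf", b"...")
    print(store.get("alice", "taxes.pdf"))   # the owner gets her box back
    try:
        store.get("bob", "taxes.pdf")         # bob is never given alice's box
    except PermissionError as err:
        print("denied:", err)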
[ "BULLET::::- Dropbox : By default, Dropbox saves a history of all deleted and earlier versions of files for 30 days for all Dropbox accounts.\n\nBULLET::::- Dropmysite : Provides incremental backups with the ability to download every snapshot.\n\nBULLET::::- ElephantDrive : Any number of versions can be kept for any amount of time.\n\nBULLET::::- Google Drive : Old versions of files are kept for 30 days or 100 revisions. Revisions can be set not to be automatically deleted.\n", "When a file or folder is deleted, users can recover it within 30 days. For Dropbox Plus users, this recovery time can be extended to one year, by purchasing an \"Extended Version History\" add-on.\n\nDropbox accounts that are not accessed or emails not replied in a year are automatically deleted.\n\nDropbox also offers a LAN sync feature, where, instead of receiving information and data from the Dropbox servers, computers on the local network can exchange files directly between each other, potentially significantly improving synchronization speeds.\n", "On an HDFS cluster, a file is split into one or more equal-size blocks, except for the possibility of the last block being smaller. Each block is stored on multiple DataNodes, and each may be replicated on multiple DataNodes to guarantee availability. By default, each block is replicated three times, a process called \"Block Level Replication\".\n", "Convergent encryption derives the key from the file content itself and means an identical file encrypted on different computers result in identical encrypted files. This enables the cloud storage provider to de-duplicate data blocks, meaning only one instance of a unique file (such as a document, photo, music or movie file) is actually stored on the cloud servers but made accessible to all uploaders. A third party who gained access to the encrypted files could thus easily determine if a user has uploaded a particular file simply by encrypting it themselves and comparing the outputs.\n", "Data protection is formed using Reed–Solomon error correction coding. When a file is written it is spread across several nodes using parity calculated by which level you set the whole or parts of the cluster to.\n", "In order to conserve resources, cut costs, and maintain efficiency, cloud service providers often store more than one customer's data on the same server. As a result, there is a chance that one user's private data can be viewed by other users (possibly even competitors). To handle such sensitive situations, cloud service providers should ensure proper data isolation and logical storage segregation.\n", "Section::::Technology.:Functionality.\n\nIn enterprise infrastructures NFS is mainly used by Linux systems whereas Windows systems are using SMB. Object storage needs data in form of objects rather than files. For all cloud storage gateways it is mandatory to cache the incoming files and destage them to object storage on a later step. The time of destaging is subject to the gateway and a policy engine allows functions like\n\nBULLET::::- pinning = bind specific files to the cache and destage them only for mirroring purpose\n", "HA clusters often also use quorum witness storage (local or cloud) to avoid this scenario. 
A witness device cannot be shared between two halves of a split cluster, so in the event that all cluster members cannot communicate with each other (e.g., failed heartbeat), if a member cannot access the witness, it cannot become active.\n\nSection::::Application design requirements.\n", "To facilitate fault tolerance, each chunk is replicated onto multiple (default, three) chunk servers. A chunk is available on at least one chunk server. The advantage of this scheme is simplicity. The master is responsible for allocating the chunk servers for each chunk and is contacted only for metadata information. For all other data, the client has to interact with the chunk servers.\n", "The Dropbox Plus subscription (named Dropbox Pro prior to March 2017) gives users 2 terabytes of storage space, as well as additional features, including:\n\nBULLET::::- Advanced sharing controls: When sharing a link to a file or folder, users can set passwords and expiration limits.\n\nBULLET::::- Remote wipe: If a device is stolen or lost, users can remotely wipe the Dropbox folder from the device the next time it comes online.\n\nBULLET::::- \"Extended Version History\": An available add-on, it makes Dropbox keep deleted and previous versions of files for one year, a significant extension of the default 30-day recovery time.\n", "BULLET::::- Tahoe-LAFS is an open source secure, decentralized, fault-tolerant filesystem utilizing encryption as the basis for a least-authority replicated design.\n\nBULLET::::- A FAT12 and FAT16 (and FAT32) extension to support automatic file distribution across nodes with extra attributes like \"local\", \"mirror on update\", \"mirror on close\", \"compound on update\", \"compound on close\" in IBM 4680 OS and Toshiba 4690 OS. The distribution attributes are stored on a file-by-file basis in special entries in the directory table.\n\nSection::::Distributed file systems.:Distributed parallel file systems.\n", "Dropbox uses SSL transfers for synchronization and stores the data via Advanced Encryption Standard (AES)-256 encryption.\n\nThe functionality of Dropbox can be integrated into third-party applications through an application programming interface (API).\n\nDropbox prevents sharing of copyrighted data, by checking the hash of files shared in public folders or between users against a blacklist of copyrighted material. This only applies to files or folders shared with other users or publicly, and not to files kept in an individual's Dropbox folder that are not shared.\n\nSection::::Mailbox.\n", "Third-party cloud-based companies provide solutions which can be used to manage mobile files but are not controlled by corporate IT organizations. Companies that utilize Mobile Device Management solutions can also secure content on mobile devices, but usually cannot provide direct access and connection to a corporate file server.\n", "Multiple instances of access methods can be opened on a file at the same time, each serving a single client. If a file is opened for update access, conflicts can occur when the same record is being accessed by multiple clients. To prevent such conflicts, a lock can be obtained on an entire file. Also, if a file is opened for \"update\" a lock is obtained on a record by the first client to read it and released when that client updates it. All other clients must wait for the lock's release.\n\nSection::::Inside DDM.:DDM file models.:Stream-oriented files.\n", "Files hosted in the cloud are fragmented and encrypted before leaving the local machine. 
They are then distributed randomly using a load balancing and geo-distribution algorithm to other nodes in the cooperative. Users can add an additional layer of security and reduce storage space by compressing and encrypting files before they are copied to the cloud.\n\nSection::::Data redundancy.\n", "All MDM products are built with an idea of Containerization. The MDM Container is secured using the latest cryptographic techniques (AES-256 or more preferred). Corporate data such as email, documents, and enterprise applications are encrypted and processed inside the container. This ensures that corporate data is separated from user’s personal data on the device. Additionally, encryption for the entire device and/or SD Card can be enforced depending on MDM product capability.\n", "Section::::Architectures.:Cluster-based architectures.:Design principles.\n\nSection::::Architectures.:Cluster-based architectures.:Design principles.:Goals.\n\nGoogle File System (GFS) and Hadoop Distributed File System (HDFS) are specifically built for handling batch processing on very large data sets.\n\nFor that, the following hypotheses must be taken into account:\n\nBULLET::::- High availability: the cluster can contain thousands of file servers and some of them can be down at any time\n\nBULLET::::- A server belongs to a rack, a room, a data center, a country, and a continent, in order to precisely identify its geographical location\n", "Data centers feature fire protection systems, including passive and Active Design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage.\n\nTwo water-based options are\n\nBULLET::::- sprinkler\n\nBULLET::::- mist\n\nBULLET::::- No water - some of the benefits of using chemical suppression (clean agent fire suppression gaseous system).\n\nSection::::Data center design.:Security.\n", "The FlexOS-based operating systems IBM 4680 OS and IBM 4690 OS support unique distribution attributes stored in some bits of the previously reserved areas in the directory entries:\n\nBULLET::::1. Local: Don't distribute file but keep on local controller only.\n\nBULLET::::2. Mirror file on update: Distribute file to server only when file is updated.\n\nBULLET::::3. Mirror file on close: Distribute file to server only when file is closed.\n\nBULLET::::4. Compound file on update: Distribute file to all controllers when file is updated.\n\nBULLET::::5. Compound file on close: Distribute file to all controllers when file is closed.\n", "Multiple instances of the Stream access method can be opened on a file at the same time, each serving a single client. If a file is opened for \"update\" access, conflicts can occur when the same sub-stream is being accessed by multiple clients. To prevent such conflicts, a lock can be obtained on an entire file. Also, if a file is opened for \"update\" a lock is obtained on a sub-stream by the first client to \"read\" it and released when that client \"updates\" it. All other clients must wait for the lock's release.\n\nSection::::Inside DDM.:DDM file models.:Hierarchical directories.\n", "BULLET::::- Intrusion detection systems monitor traffic and collect data for offline analysis for security anomalies. 
Because IDSs unlike firewalls do not filter packets in real-time, they traditionally are capable of more complex inspection than firewalls which must make an accept/reject decision about each packet as it arrives.\n", "As such, snapshots are also an easy way to avoid the impact of ransomware.\n\nSection::::Terminology and storage structure.:Resizing of vdevs, pools, datasets and volumes.\n", "A pool can be expanded into unused space, and the datasets and volumes within a pool can be likewise expanded to use any unused pool space. Datasets do not need a fixed size and can dynamically grow as data is stored, but volumes, being block devices, need to have their size defined by the user, and must be manually resized as required (which can be done 'live').\n\nResizing example:\n\nSection::::Features.\n\nSection::::Features.:Data integrity.\n", "BULLET::::- Xsan from Apple Inc. Available for macOS. Asymmetric. Interoperable with StorNext File System.\n\nBULLET::::- VMFS from VMware/EMC Corporation. Available for VMware ESX Server. Symmetric.\n\nSection::::Distributed file systems.\n\nDistributed file systems are also called network file systems. Many implementations have been made, they are location dependent and they have access control lists (ACLs), unless otherwise stated below.\n\nBULLET::::- 9P, the Plan 9 from Bell Labs and Inferno distributed file system protocol. One implementation is v9fs. No ACLs.\n\nBULLET::::- Amazon S3\n", "\"Distributed file systems\" do not share block level access to the same storage but use a network protocol. These are commonly known as \"network file systems\", even though they are not the only file systems that use the network to send data. Distributed file systems can restrict access to the file system depending on access lists or capabilities on both the servers and the clients, depending on how the protocol is designed.\n" ]
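The passages above describe cooperative cloud storage in which files are fragmented and encrypted on the local machine before being distributed across other nodes. As a rough illustration of that general idea only — not the implementation of any system named in the passages — here is a minimal Python sketch. The chunk size, node names, and hash-modulo placement rule are invented for the example (a stand-in for a real load-balancing and geo-distribution algorithm), and it relies on the third-party `cryptography` package for encryption.

```python
# Minimal sketch of "fragment, encrypt, then distribute" -- illustrative only.
# Assumptions: a hypothetical 4 MiB fragment size, made-up node names, and
# hash-modulo placement as a crude stand-in for real load balancing.
# Requires the third-party `cryptography` package.
import hashlib
from cryptography.fernet import Fernet

CHUNK_SIZE = 4 * 1024 * 1024  # hypothetical fragment size


def fragment_encrypt_distribute(data: bytes, nodes: list, key: bytes) -> dict:
    """Split data into fragments, encrypt each fragment locally,
    and assign every encrypted fragment to a storage node."""
    cipher = Fernet(key)
    placement = {node: [] for node in nodes}
    for index, offset in enumerate(range(0, len(data), CHUNK_SIZE)):
        fragment = data[offset:offset + CHUNK_SIZE]
        encrypted = cipher.encrypt(fragment)  # encrypted before it leaves the local machine
        digest = hashlib.sha256(encrypted).hexdigest()
        node = nodes[int(digest, 16) % len(nodes)]  # crude placement rule
        placement[node].append((index, encrypted))
    return placement


if __name__ == "__main__":
    key = Fernet.generate_key()
    plan = fragment_encrypt_distribute(b"example payload" * 1000,
                                       ["node-a", "node-b", "node-c"], key)
    for node, fragments in plan.items():
        print(node, len(fragments), "fragment(s)")
```

A real system of the kind described above would also add redundancy, for example by replicating each fragment onto several nodes, as the data-redundancy and chunk-replication passages note.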
[ "Files are not safe if they are all stored in one spot." ]
[ "Files can be organized and only given to authorized people regardless of where it is stored." ]
[ "false presupposition" ]
[ "Files are not safe if they are all stored in one spot." ]
[ "false presupposition" ]
[ "Files can be organized and only given to authorized people regardless of where it is stored." ]
2018-03986
Why does animal breath not stink as badly as humans' breath does, even though animals don't brush their teeth?
A modern diet, with cooked food, high protein/carb content, and much less roughage than we'd get eating raw food, isn't the diet that our teeth/mouth/saliva developed to cope with. The flip side of radically better nutrition via cooking is that it's hard on our teeth. Other animals, on the other hand, are eating exactly what they've developed to eat.
[ "Dental disease or mouth ulcers can produce rotten smelling breath (halitosis). Dental calculus harbors numerous bacteria which produce odor and foul breath. Dental disease can also lead to excessive drooling, and the skin around the mouth can become infected, leading to more odor production. Dogs can also acquire foul smelling breath as a result of coprophagia, the practice of eating their own feces or the feces of other animals. Commercially prepared food additives can be purchased which, when added to a dog's food, impart a bitter flavor to their feces thereby reducing the tendency towards consuming their own feces.\n", "The human mouth contains around 500 to 1,000 different types of bacteria with various functions as part of the human flora and oral microbiology. About 100 to 200 species may live in them at any given time. Individuals that practice oral hygiene have 1,000 to 100,000 bacteria living on each tooth surface, while less clean mouths can have between 100 million and 1 billion bacteria on each tooth. While some of the bacteria in our mouths are harmful and can cause serious illness, much of our oral bacteria are actually beneficial in preventing disease. Streptococci make up a large part of oral bacteria.\n", "In addition, human OR genes lack motifs that are highly conserved in mouse OR genomes, implicating that not all human OR genes encode functional OR proteins. These differences are explained by the reduced reliance of smell in humans in comparison to rodents. It is still unclear whether the extensive OR repertoires of mice enable them to detect a larger range of odorants than humans. When human OR sequences are analyzed phylogenetically, intact human genes are found in most OR subfamilies. Assuming that various OR subfamilies bind to different odorant classes, it is likely that humans are able to detect a wide range of smell similarly to mice.\n", "BULLET::::- Ornate shrew, \"Sorex ornatus\" (ssp. \"relictus\": )\n\nBULLET::::- Pacific shrew, \"Sorex pacificus\"\n\nBULLET::::- American water shrew, \"Sorex palustris\"\n\nBULLET::::- Preble's shrew, \"Sorex preblei\"\n\nBULLET::::- Olympic shrew, \"Sorex rohweri\" (formerly in \"Sorex cinereus\")\n\nBULLET::::- Fog shrew, \"Sorex sonomae\"\n\nBULLET::::- Inyo shrew, \"Sorex tenellus\"\n\nBULLET::::- Trowbridge's shrew, \"Sorex trowbridgii\"\n\nBULLET::::- Tundra shrew, \"Sorex tundrensis\" (Alaska only)\n\nBULLET::::- Barren ground shrew, \"Sorex ugyunak\" (Alaska only)\n\nBULLET::::- Vagrant shrew, \"Sorex vagrans\"\n\nBULLET::::- Alaska tiny shrew, \"Sorex yukonicus\" (Alaska only)\n\nBULLET::::- Family: Talpidae (moles)\n\nBULLET::::- Subfamily: Scalopinae\n\nBULLET::::- Tribe: Condylurini\n\nBULLET::::- Star-nosed mole, \"Condylura cristata\"\n\nBULLET::::- Tribe: Scalopini\n\nBULLET::::- Hairy-tailed mole, \"Parascalops breweri\"\n", "Zinc compounds, more specifically zinc ascorbate, also play a role in preventing plaque accumulation due to antimicrobial activity. Zinc salts inhibit bacterial growth by binding to sulfur to control plaque formation, as well as reduce foul oral odours. 
However, research has only been performed on cats, so the same evidence may not be directly applicable to dogs.\n", "The mesowear method or tooth wear scoring method is a quick and inexpensive process of determining the lifelong diet of a taxon (grazer or browser) and was first introduced in the year 2000.\n\nThe mesowear technique can be extended to extinct and also extant animals.\n", "Dental caries (non-human)\n\nDental caries, also known as tooth decay, is uncommon among companion animals. The bacteria \"Streptococcus mutans\" and \"Streptococcus sanguis\" cause dental caries by metabolising sugars.\n\nThe term \"feline cavities\" is commonly used to refer to feline odontoclastic resorptive lesions, however, sacchrolytic acid-producing bacteria (the same responsible for Dental plaque) are not involved in this condition.\n\nSection::::In dogs.\n", "Skunks generally do not drink a great deal of water, but clean water should always be available.\n\nSection::::Skunk care.:Veterinary care.\n", "Section::::Tartar and dental abrasion.\n", "BULLET::::- Baird's shrew, \"Sorex bairdi\"\n\nBULLET::::- Marsh shrew, \"Sorex bendirii\"\n\nBULLET::::- Cinereus shrew, \"Sorex cinereus\"\n\nBULLET::::- Maryland shrew, \"Sorex cinereus fontinalis\"\n\nBULLET::::- Long-tailed shrew, \"Sorex dispar\"\n\nBULLET::::- Gaspé shrew, \"Sorex gaspensis\"\n\nBULLET::::- Smoky shrew, \"Sorex fumeus\"\n\nBULLET::::- Prairie shrew, \"Sorex haydeni\"\n\nBULLET::::- American pygmy shrew, \"Sorex hoyi\"\n\nBULLET::::- Pribilof Island shrew, \"Sorex pribilofensis\" (Alaska only)\n\nBULLET::::- Saint Lawrence Island shrew, \"Sorex jacksoni\" (Alaska only)\n\nBULLET::::- Southeastern shrew, \"Sorex longirostris\"\n\nBULLET::::- Mount Lyell shrew, \"Sorex lyelli\"\n\nBULLET::::- Merriam's shrew, \"Sorex merriami\"\n\nBULLET::::- Montane shrew, \"Sorex monticolus\"\n\nBULLET::::- Dwarf shrew, \"Sorex nanus\"\n\nBULLET::::- New Mexico shrew, \"Sorex neomexicanus\"\n", "Whale meat products from certain species have been shown to contain pollutants such as PCBs, mercury, and dioxins. Levels of pollutants in toothed-whale products are significantly higher than those of baleen whales, reflecting the fact that toothed whales feed at a higher trophic level than baleen whales in the food chain (other high-up animals such as sharks, swordfish and large tuna show similarly high levels of mercury contamination). Organochloride pesticides HCH and HCB are also at higher levels in toothed species, while minke whales show lower levels than most other baleens.\n", "The several reasons for the popularity of gerbils as household pets include: The animals are typically not aggressive, and they rarely bite unprovoked or without stress. They are small and easy to handle, since they are sociable creatures that enjoy the company of humans and other gerbils. Gerbils also have adapted their kidneys to produce a minimum of waste to conserve body fluids, which makes them very clean with little odor.\n\nSection::::Health concerns.\n\nSection::::Health concerns.:Teeth problems.\n", "Domestic dogs often roll in odoriferous substances, choosing items such as cow manure, a road kill, or rotten fish.\n\nSection::::Canines.:Wolves.\n\nCaptive wolves will scent roll in a wide range of substances including animal feces, carrion (elk, mouse, pig, badger), mint extract, perfume, animal repellant, fly repellent, etc.\n\nSection::::Bears.\n", "A number of animals have been used to measure varying kinds of air pollution. 
These include honey bees for air pollution, bivalve molluscs for online water-quality survey and pigeons for atmospheric lead. Bats and swallows have been used to monitor pesticide contamination due to their diet of insects that may have been affected by the chemicals.\n", "Section::::Interactions with other organisms.\n\nDespite their apparent simplicity, bacteria can form complex associations with other organisms. These symbiotic associations can be divided into parasitism, mutualism and commensalism. Due to their small size, commensal bacteria are ubiquitous and grow on animals and plants exactly as they will grow on any other surface. However, their growth can be increased by warmth and sweat, and large populations of these organisms in humans are the cause of body odour.\n\nSection::::Interactions with other organisms.:Predators.\n", "BULLET::::- Least chipmunk, \"Tamias minimus\"\n\nBULLET::::- California chipmunk, \"Tamias obscurus\"\n\nBULLET::::- Yellow-cheeked chipmunk, \"Tamias ochrogenys\"\n\nBULLET::::- Palmer's chipmunk, \"Tamias palmeri\"\n\nBULLET::::- Panamint chipmunk, \"Tamias panamintinus\"\n\nBULLET::::- Long-eared chipmunk, \"Tamias quadrimaculatus\"\n\nBULLET::::- Colorado chipmunk, \"Tamias quadrivittatus\"\n\nBULLET::::- Red-tailed chipmunk, \"Tamias ruficaudus\"\n\nBULLET::::- Hopi chipmunk, \"Tamias rufus\"\n\nBULLET::::- Allen's chipmunk, \"Tamias senex\"\n\nBULLET::::- Siskiyou chipmunk, \"Tamias siskiyou\"\n\nBULLET::::- Sonoma chipmunk, \"Tamias sonomae\"\n\nBULLET::::- Lodgepole chipmunk, \"Tamias speciosus\"\n\nBULLET::::- Eastern chipmunk, \"Tamias striatus\"\n\nBULLET::::- Townsend's chipmunk, \"Tamias townsendii\"\n\nBULLET::::- Uinta chipmunk, \"Tamias umbrinus\"\n\nBULLET::::- Family: Geomyidae\n\nBULLET::::- Desert pocket gopher, \"Geomys arenarius\"\n\nBULLET::::- Attwater's pocket gopher, \"Geomys attwateri\"\n", "Bart Knols of Wageningen Agricultural University in the Netherlands received a 2006 \"Ig Nobel Prize\" for demonstrating that the female \"Anopheles gambiae\" mosquito, known for transmitting malaria, is \"attracted equally to the smell of Limburger cheese and to the smell of human feet\". Fredros Okumu, of the Ifakara Health Institute in Tanzania, received grants in 2009 and 2011 to develop mosquito attractants and traps to combat malaria. 
He used a blend of eight chemicals four times more effective than actual human secretions.\n\nSection::::Prevention.\n", "BULLET::::- Genus \"Gerbilliscus\"\n\nBULLET::::- \"Gerbilliscus afra\" (Cape gerbil)\n\nBULLET::::- \"Gerbilliscus boehmi\" (Boehm's gerbil)\n\nBULLET::::- \"Gerbilliscus brantsii\" (highveld gerbil)\n\nBULLET::::- \"Gerbilliscus guineae\" (Guinean gerbil)\n\nBULLET::::- \"Gerbilliscus inclusus\" (Gorongoza gerbil)\n\nBULLET::::- \"Gerbilliscus kempi\" (northern savanna gerbil)\n\nBULLET::::- \"Gerbilliscus leucogaster\" (bushveld gerbil)\n\nBULLET::::- \"Gerbilliscus nigricaudus\" (black-tailed gerbil)\n\nBULLET::::- \"Gerbilliscus phillipsi\" (Phillips's gerbil)\n\nBULLET::::- \"Gerbilliscus robustus\" (fringe-tailed gerbil)\n\nBULLET::::- \"Gerbilliscus validus\" (southern savanna gerbil)\n\nBULLET::::- Genus \"Gerbillurus\"\n\nBULLET::::- \"Gerbillurus paeba\" (hairy-footed gerbil)\n\nBULLET::::- \"Gerbillurus setzeri\" (Setzer's hairy-footed gerbil)\n\nBULLET::::- \"Gerbillurus tytonis\" (dune hairy-footed gerbil)\n\nBULLET::::- \"Gerbillurus vallinus\" (bushy-tailed hairy-footed gerbil)\n\nBULLET::::- Genus \"Gerbillus\"\n\nBULLET::::- Subgenus \"Dipodillus\"\n\nBULLET::::- \"Gerbillus simoni\"\n\nBULLET::::- \"Gerbillus zakariai\"\n", "In a study of the odours most likely to attract mosquitos, smelly socks were found to be the most effective, topping the list along with Limburger cheese. Their strong odour will also attract other dangerous wild animals such as bear.\n", "BULLET::::- American water shrew, \"Sorex palustris\"\n\nBULLET::::- Fog shrew, \"Sorex sonomae\"\n\nBULLET::::- Vagrant shrew, \"Sorex vagrans\"\n\nBULLET::::- Cinereus shrew, \"Sorex cinereus\"\n\nBULLET::::- Maryland shrew, \"Sorex cinereus fontinalis\"\n\nBULLET::::- Olympic shrew, \"Sorex rohweri\" (formerly in \"Sorex cinereus\")\n\nBULLET::::- Prairie shrew, \"Sorex haydeni\"\n\nBULLET::::- Saint Lawrence Island shrew, \"Sorex jacksoni\"\n\nBULLET::::- Southeastern shrew, \"Sorex longirostris\"\n\nBULLET::::- Mount Lyell shrew, \"Sorex lyelli\"\n\nBULLET::::- Preble's shrew, \"Sorex preblei\"\n\nBULLET::::- Pribilof Island shrew, \"Sorex pribilofensis\"\n\nBULLET::::- Barren ground shrew, \"Sorex ugyunak\"\n\nBULLET::::- Alaska tiny shrew, \"Sorex yukonicus\"\n\nBULLET::::- Arctic shrew, \"Sorex arcticus\"\n\nBULLET::::- Maritime shrew, \"Sorex maritimensis\"\n\nBULLET::::- Tundra shrew, \"Sorex tundrensis\"\n", "The sense of smell is less developed in the catarrhine primates, and nonexistent in cetaceans, which compensate with a well-developed sense of taste. In some strepsirrhines, such as the red-bellied lemur, scent glands occur atop the head. In many species, olfaction is highly tuned to pheromones; a male silkworm moth, for example, can sense a single molecule of bombykol.\n", "BULLET::::- American badger, \"Taxidea taxus\"\n\nBULLET::::- North American river otter, \"Lontra canadensis\"\n\nBULLET::::- Sea otter, \"Enhydra lutris\" (ssp. \"nereis\" and \"kenyoni\": , ssp. \"nereis\" also )\n\nBULLET::::- Family: Otariidae (eared seals, sealions)\n\nBULLET::::- Northern fur seal, \"Callorhinus ursinus\"\n\nBULLET::::- Guadalupe fur seal, \"Arctocephalus townsendi\"\n\nBULLET::::- Steller sea lion, \"Eumetopias jubatus\" (except west of 144° W, where ) (ssp. 
\"monteriensis\": )\n\nBULLET::::- California sea lion, \"Zalophus californianus\"\n\nBULLET::::- Family: Odobenidae\n\nBULLET::::- Walrus, \"Odobenus rosmarus\" (Alaska only)\n\nBULLET::::- Family: Phocidae (earless seals)\n\nBULLET::::- Hooded seal, \"Cystophora cristata\"\n\nBULLET::::- Bearded seal, \"Erignathus barbatus\"\n\nBULLET::::- Ribbon seal, \"Histriophoca fasciata\" (Alaska almost only)\n", "BULLET::::- Gray-collared chipmunk, \"Tamias cinereicollis\"\n\nBULLET::::- Cliff chipmunk, \"Tamias dorsalis\"\n\nBULLET::::- Merriam's chipmunk, \"Tamias merriami\"\n\nBULLET::::- Least chipmunk, \"Tamias minimus\"\n\nBULLET::::- California chipmunk, \"Tamias obscurus\"\n\nBULLET::::- Yellow-cheeked chipmunk, \"Tamias ochrogenys\"\n\nBULLET::::- Palmer's chipmunk, \"Tamias palmeri\"\n\nBULLET::::- Panamint chipmunk, \"Tamias panamintinus\"\n\nBULLET::::- Long-eared chipmunk, \"Tamias quadrimaculatus\"\n\nBULLET::::- Colorado chipmunk, \"Tamias quadrivittatus\"\n\nBULLET::::- Red-tailed chipmunk, \"Tamias ruficaudus\"\n\nBULLET::::- Hopi chipmunk, \"Tamias rufus\"\n\nBULLET::::- Allen's chipmunk, \"Tamias senex\"\n\nBULLET::::- Siskiyou chipmunk, \"Tamias siskiyou\"\n\nBULLET::::- Sonoma chipmunk, \"Tamias sonomae\"\n\nBULLET::::- Lodgepole chipmunk, \"Tamias speciosus\"\n\nBULLET::::- Eastern chipmunk, \"Tamias striatus\"\n\nBULLET::::- Townsend's chipmunk, \"Tamias townsendii\"\n\nBULLET::::- Uinta chipmunk, \"Tamias umbrinus\"\n\nSection::::Rodents.:Muroidea.\n", "Generally, cooked and marinated foods should be avoided, as well as sauces and gravies, which may contain ingredients that, although well tolerated by humans, may be toxic to animals. Xylitol, an alternative sweetener found in chewing gum and baked goods designed for diabetics, is highly toxic to cats, dogs, and ferrets.\n\nSection::::Labeling and regulation.\n", "Rats and mice are considered to be \"whisker specialists\", but marine mammals may make even greater investment in their vibrissal sensory system. Seal whiskers, which are similarly arrayed across the mystacial region, are each served by around 10 times as many nerve fibres as those in rats and mice, so that the total number of nerve cells innervating the mystacial vibrissae of a seal has been estimated to be in excess of 300,000. Manatees, remarkably, have around 600 vibrissae on or around their lips.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-21617
What determines if a virus goes airborne? Why are some immediately airborne?
There are four modes of viral transmission: direct contact, fomite, respiratory, and water-borne. Direct contact is self-explanatory. Fomite is when an infected person touches an object (like a doorknob) and then an uninfected person touches that same object. Respiratory is what you're looking for: when an infected person, say, coughs in the vicinity of uninfected people. And water-borne is when bodies of water (like a well) are shared between people and one person contaminates it. Source: currently taking a Virology course
[ "For example, secondary person-to-person spread may occur after a common source exposure or an environmental vectors may spread a zoonotic diseases agent.\n\nSection::::Transmission.\n\nBULLET::::- Airborne transmission: Airborne transmission is the spread of infection by droplet nuclei or dust in the air. Without the intervention of winds or drafts the distance over which airborne infection takes place is short, say 10 to 20 feet.\n\nBULLET::::- Arthropod transmission: Arthropod transmission takes place by an insect, either mechanically through a contaminated proboscis or feet, or biologically when there is growth or replication of an organism in the arthropod.\n", "Typically, influenza is transmitted from infected mammals through the air by coughs or sneezes, creating aerosols containing the virus, and from infected birds through their droppings. Influenza can also be transmitted by saliva, nasal secretions, feces and blood. Healthy individuals can become infected if they breathe in a virus-laden aerosol directly, or if they touch their eyes, nose or mouth after touching any of the aforementioned bodily fluids (or surfaces contaminated with those fluids). Flu viruses can remain infectious for about one week at human body temperature, over 30 days at 0 °C (32 °F), and indefinitely at very low temperatures (such as lakes in northeast Siberia). Most influenza strains can be inactivated easily by disinfectants and detergents.\n", "Airborne diseases include any that are caused via transmission through the air. Many airborne diseases are of great medical importance. The pathogens transmitted may be any kind of microbe, and they may be spread in aerosols, dust or liquids. The aerosols might be generated from sources of infection such as the bodily secretions of an infected animal or person, or biological wastes such as accumulate in lofts, caves, garbage and the like. Such infected aerosols may stay suspended in air currents long enough to travel for considerable distances, though the rate of infection decreases sharply with the distance between the source and the organism infected.\n", "Many common infections can spread by airborne transmission at least in some cases, including: Anthrax (inhalational), Chickenpox, Influenza, Measles, Smallpox, Cryptococcosis, and Tuberculosis.\n\nAirborne diseases can also affect non-humans. For example, Newcastle disease is an avian disease that affects many types of domestic poultry worldwide which is transmitted via airborne contamination.\n", "Airborne transmission of disease depends on several physical variables endemic to the infectious particle. Environmental factors influence the efficacy of airborne disease transmission; the most evident environmental conditions are temperature and relative humidity. 
The sum of all the factors that influence temperature and humidity, either meteorological (outdoor) or human (indoor), as well as other circumstances influencing the spread of the droplets containing the infectious particles, as winds, or human behavior, sum up the factors influencing the transmission of airborne diseases.\n", "Influenza can be spread in three main ways: by direct transmission (when an infected person sneezes mucus directly into the eyes, nose or mouth of another person); the airborne route (when someone inhales the aerosols produced by an infected person coughing, sneezing or spitting) and through hand-to-eye, hand-to-nose, or hand-to-mouth transmission, either from contaminated surfaces or from direct personal contact such as a handshake. The relative importance of these three modes of transmission is unclear, and they may all contribute to the spread of the virus. In the airborne route, the droplets that are small enough for people to inhale are 0.5 to 5µm in diameter and inhaling just one droplet might be enough to cause an infection. Although a single sneeze releases up to 40,000 droplets, most of these droplets are quite large and will quickly settle out of the air. How long influenza survives in airborne droplets seems to be influenced by the levels of humidity and UV radiation, with low humidity and a lack of sunlight in winter aiding its survival.\n", "The relative importance of these three modes of transmission is unclear, and they may all contribute to the spread of the virus. In the airborne route, the droplets that are small enough for people to inhale are 0.5 to 5 µm in diameter and inhaling just one droplet might be enough to cause an infection. Although a single sneeze releases up to 40,000 droplets, most of these droplets are quite large and will quickly settle out of the air. How long influenza survives in airborne droplets seems to be influenced by the levels of humidity and UV radiation: with low humidity and a lack of sunlight in winter probably aiding its survival.\n", "BULLET::::- Shedding: The viruses must spread to sites where shedding into the environment can occur. The respiratory, alimentary and urogenital tracts and the blood are the most frequent sites of shedding.\n\nSection::::Factors that Affect these Pathogenic Mechanisms are:.\n\nBULLET::::- How accessible the host tissues are to the virus: The degree to which the tissues of the body and organs are accessible. Accessibility is affected by physical barriers (for example: tissue barriers and mucus). It is also impacted by the distance to be traveled through the body and by the natural defense mechanisms.\n", "Section::::Routes.:Airborne.\n\n\"Airborne transmission refers to infectious agents that are spread via droplet nuclei (residue from evaporated droplets) containing infective microorganisms. These organisms can survive outside the body and remain suspended in the air for long periods of time. They infect others via the upper and lower respiratory tracts.\" Diseases that are commonly spread by coughing or sneezing include bacterial meningitis, chickenpox, common cold, influenza, mumps, strep throat, tuberculosis, measles, rubella, whooping cough, SARS and leprosy. \n", "BULLET::::- Dispersal: The replicated viruses must spread to target organs (disease sites) throughout the body. The most common route of spread from the portal of entry is the circulatory system, which the virus reaches via the lymphatic system. 
Viruses can access target organs from the blood capillaries by multiplying inside endothelial cells, moving through gaps, or by being carried inside the organ on leukocytes. Some viruses, such as Herpes, rabies and polio viruses, can also disseminate via nerves.\n", "Section::::Replication cycle.\n\nTypically, influenza is transmitted from infected mammals through the air by coughs or sneezes, creating aerosols containing the virus, and from infected birds through their droppings. Influenza can also be transmitted by saliva, nasal secretions, feces and blood. Infections occur through contact with these bodily fluids or with contaminated surfaces. Out of a host, flu viruses can remain infectious for about one week at human body temperature, over 30 days at , and indefinitely at very low temperatures (such as lakes in northeast Siberia). They can be inactivated easily by disinfectants and detergents.\n", "Viruses spread in many ways; viruses in plants are often transmitted from plant to plant by insects that feed on plant sap, such as aphids; viruses in animals can be carried by blood-sucking insects. These disease-bearing organisms are known as vectors. Influenza viruses are spread by coughing and sneezing. Norovirus and rotavirus, common causes of viral gastroenteritis, are transmitted by the faecal–oral route and are passed from person to person by contact, entering the body in food or water. HIV is one of several viruses transmitted through sexual contact and by exposure to infected blood. The variety of host cells that a virus can infect is called its \"host range\". This can be narrow, meaning a virus is capable of infecting few species, or broad, meaning it is capable of infecting many.\n", "BULLET::::2. Endocytosis: The host cell takes in the viral particle through the process of endocytosis, essentially engulfing the virus like it would a food particle.\n\nBULLET::::3. Viral Penetration: The viral capsid or genome is injected into the host cell's cytoplasm.\n\nThrough the use of green fluorescent protein (GFP), virus entry and infection can be visualized in real-time. Once a virus enters a cell, replication is not immediate and indeed takes some time (seconds to hours).\n\nSection::::Reducing cellular proximity.:Entry via Membrane Fusion.\n", "There are many ways in which viruses spread from host to host but each species of virus uses only one or two. Many viruses that infect plants are carried by organisms; such organisms are called vectors. Some viruses that infect animals, including humans, are also spread by vectors, usually blood-sucking insects. However, direct transmission is more common. Some virus infections, such as norovirus and rotavirus, are spread by contaminated food and water, hands and communal objects and by intimate contact with another infected person, while others are airborne (influenza virus). Viruses such as HIV, hepatitis B and hepatitis C are often transmitted by unprotected sex or contaminated hypodermic needles. It is important to know how each different kind of virus is spread to prevent infections and epidemics.\n", "The cell from which the virus itself buds will often die or be weakened and shed more viral particles for an extended period. The lipid bilayer envelope of these viruses is relatively sensitive to desiccation, heat, and detergents, therefore these viruses are easier to sterilize than non-enveloped viruses, have limited survival outside host environments, and typically must transfer directly from host to host. 
Enveloped viruses possess great adaptability and can change in a short time in order to evade the immune system. Enveloped viruses can cause persistent infections.\n\nSection::::Enveloped examples.\n\nClasses of enveloped viruses that contain human pathogens:\n", "Viruses spread in many ways. Just as many viruses are very specific as to which host species or tissue they attack, each species of virus relies on a particular method for propagation. Plant viruses are often spread from plant to plant by insects and other organisms, known as \"vectors\". Some viruses of animals, including humans, are spread by exposure to infected bodily fluids. Viruses such as influenza are spread through the air by droplets of moisture when people cough or sneeze. Viruses such as norovirus are transmitted by the faecal–oral route, which involves the contamination of hands, food and water. Rotavirus is often spread by direct contact with infected children. The human immunodeficiency virus, HIV, is transmitted by bodily fluids transferred during sex. Others, such as the Dengue virus, are spread by blood-sucking insects.\n", "Viruses may reach the lung by a number of different routes. Respiratory syncytial virus is typically contracted when people touch contaminated objects and then they touch their eyes or nose. Other viral infections occur when contaminated airborne droplets are inhaled through the mouth or nose. Once in the upper airway, the viruses may make their way in the lungs, where they invade the cells lining the airways, alveoli, or lung parenchyma. Some viruses such as measles and herpes simplex may reach the lungs via the blood. The invasion of the lungs may lead to varying degrees of cell death. When the immune system responds to the infection, even more lung damage may occur. Primarily white blood cells, mainly mononuclear cells, generate the inflammation. As well as damaging the lungs, many viruses simultaneously affect other organs and thus disrupt other body functions. Viruses also make the body more susceptible to bacterial infections; in this way, bacterial pneumonia can occur at the same time as viral pneumonia.\n", "Prior to entry, a virus must attach to a host cell. Attachment is achieved when specific proteins on the viral capsid or viral envelope bind to specific proteins called receptor proteins on the cell membrane of the target cell. A virus must now enter the cell, which is covered by a phospholipid bilayer, a cell's natural barrier to the outside world. The process by which this barrier is breached depends upon the virus. Types of entry are:\n\nBULLET::::1. Membrane Fusion or Hemifusion State: The cell membrane is punctured and made to further connect with the unfolding viral envelope.\n", "The three species of rhinovirus (A, B, and C) include around 160 recognized types of human rhinoviruses that differ according to their surface proteins (serotypes). They are lytic in nature and are among the smallest viruses, with diameters of about 30 nanometers. By comparison, other viruses, such as smallpox and vaccinia, are around 10 times larger at about 300 nanometers; while flu viruses are around 80–120 nm.\n\nSection::::Transmission and epidemiology.\n\nThere are two modes of transmission: via aerosols of respiratory droplets and from fomites (contaminated surfaces), including direct person-to-person contact.\n", "The mechanisms for infection, proliferation, and persistence of a virus in cells of the host are crucial for its survival. 
For example, some diseases such as measles employ a strategy whereby it must spread to a series of hosts. In these forms of viral infection, the illness is often treated by the body's own immune response, and therefore the virus is required to disperse to new hosts before it is destroyed by immunological resistance or host death. In contrast, some infectious agents such as the Feline leukemia virus, are able to withstand immune responses and are capable of achieving long-term residence within an individual host, whilst also retaining the ability to spread into successive hosts.\n", "Three of the four types of influenza viruses affect humans: Type A, Type B, and Type C. Type D has not been known to infect humans, but is believed to have the potential to do so. Usually, the virus is spread through the air from coughs or sneezes. This is believed to occur mostly over relatively short distances. It can also be spread by touching surfaces contaminated by the virus and then touching the mouth or eyes. A person may be infectious to others both before and during the time they are showing symptoms. The infection may be confirmed by testing the throat, sputum, or nose for the virus. A number of rapid tests are available; however, people may still have the infection even if the results are negative. A type of polymerase chain reaction that detects the virus's RNA is more accurate.\n", "Viruses are by far the most abundant biological entities on Earth and they outnumber all the others put together. They infect all types of cellular life including animals, plants, bacteria and fungi. Different types of viruses can infect only a limited range of hosts and many are species-specific. Some, such as smallpox virus for example, can infect only one species—in this case humans, and are said to have a narrow host range. Other viruses, such as rabies virus, can infect different species of mammals and are said to have a broad range. The viruses that infect plants are harmless to animals, and most viruses that infect other animals are harmless to humans. The host range of some bacteriophages is limited to a single strain of bacteria and they can be used to trace the source of outbreaks of infections by a method called phage typing.\n", "BULLET::::- Local Replication and Local Spread: Local replication and spread of the virus follows implantation. Replicated virus from the initially infected cell has the capability to disperse to neighboring extracellular fluids or cells. Spread occurs by the neighboring cell being infected or the virus being released into extracellular fluid.\n\nBULLET::::- Replication: The invading virus must reproduce itself in large numbers. It usually does this intracellularly.\n\nBULLET::::- Dissemination in Nerves: The spread of virus through the nerves is less common than the spread through the bloodstream.\n", "Like other pathogens, viruses use these methods of transmission to enter the body, but viruses differ in that they must also enter into the host's actual cells. Once the virus has gained access to the host's cells, the virus' genetic material (RNA or DNA) must be introduced to the cell. Replication between viruses is greatly varied and depends on the type of genes involved in them. 
Most DNA viruses assemble in the nucleus while most RNA viruses develop solely in cytoplasm.\n", "BULLET::::- Abortive infection: This kind of infection occurs when a virus successfully invades a host cell but is unable to complete its full replication cycle and produce more infectious viruses.\n\nBULLET::::- Acute infection: Many common viral infections fallow this pattern. Acute infections are brief since they are often completely eliminated by the immune system. Acute infection is frequently associated with epidemics since most of virus replication happens before the onset of symptoms.\n\nBULLET::::- Chronic infection: These infections have a prolonged course and are hard to eliminate since the virus stays in the host for a significant period.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01304
How are vape juice flavors extracted?
When you drink a Coke, or eat a donut, or an apple, or anything else for that matter, you are tasting combinations of different chemicals that make something taste a certain way. Some flavorings (such as a citrus oil) may be extracted from a natural source, or some may be made up of a chemical or combinations of chemicals that are combined to replicate a certain flavor. An example of this is ethyl maltol, which tastes faintly of cotton candy. For a better idea of what I'm talking about, take a look at [this page]( URL_0 ) and click on "list" next to one of those flavorings. A lot of people hear "chemicals" and freak out, but virtually everything in the universe is made of chemicals. Companies that make flavorings have vast amounts of experience in designing things to taste a certain way. If you vape, you've surely heard of all the controversy surrounding ingredients like diacetyl, acetyl propionyl, and acetoin. These are/were common ingredients in many flavorings, particularly ones which are intended to have a rich, creamy or buttery flavor. Alternatives to these ingredients exist, but they simply do not produce the same results. It's also worth pointing out that e-liquids are not necessarily always created out of a single "glazed donut" (or whatever else) flavoring. While this may be the case sometimes, other times flavors are made from a combination of many different flavorings. To add to that, many flavorings which companies make available for use in e-liquids are simply combinations of already existing flavorings.
[ "BULLET::::- oranges, 0.5–3.5%\n\nBULLET::::- carrots 1.4%\n\nBULLET::::- citrus peels, 30%\n\nThe main raw materials for pectin production are dried citrus peels or apple pomace, both by-products of juice production. Pomace from sugar beets is also used to a small extent.\n", "Liquid brem is made from fermented mash of black/ white glutinous rice (known as \"Ketan\") using a dry-starter called \"Ragi tape\". Glutinous rice is soaked and drained, steamed for 1 hour, and then cooled down. The cooled rice is then inoculated with \"Ragi tape\" and amylolysis begins. A honey-like rice syrup settles in the bottom of the malting vessel. Following 3 days of conversion from rice starch to sugar, a yeast culture is added and alcoholic fermentation begins. Alcoholic fermentation typically goes on for two weeks.\n\nSection::::See also.\n\nBULLET::::- Alcohol in Indonesia\n\nBULLET::::- Sake\n\nBULLET::::- Tapai\n", "Section::::Recent developments.\n", "After the infusion is extracted, a second set of spirits is added to the fruit and allowed to rest for a few weeks. After this second infusion is drawn off, the remaining fruit is pressed to obtain the natural sugars and juice. The fruit-infused spirits and juices from the final pressing are then combined, and finally, the berry infusion is married with a proprietary blend of cognac, natural vanilla extract, black raspberries, citrus peel, honey, and herbs and spices. The liqueur is 16.5% alcohol by volume.\n\nSection::::Bottle.\n", "Usually during this maceration, the fermentation process starts with ambient yeasts in the cellar. Often the wine will have fermented to the point where the grape spirits are added before maceration has ended and the wine is pressed off the skins, a process known as \"mutage sur grains\". The added alcohol during maceration allows for more ethanol-soluble phenolics and flavor compounds to be extracted. After pressing, the wine is left to settle in concrete vats over the winter before it is racked into barrels or other containers.\n", "Belvedere also produces flavored vodkas which are produced using a maceration process. The brand's flavored variants are produced by combining pure spirit with a macerated fruit, which is then distilled at a low temperature to form a macerate concentrate. The concentrated mixture is blended with the distillery's artesian well water to bottling strength and bottled. These flavors are not charcoal filtered, and are filtered through cellulose particle filters prior to bottling. This is done with the intention of ensuring essential oils that carry fruit flavors and give mouth feel are retained. The Belvedere Maceration product line includes Mango Passion, Lemon Tea, Bloody Mary, Pink Grapefruit, Black Raspberry, Orange, Citrus and Ginger Zest.\n", "In the oak casks, a process of \"maceration\", supposedly unique to Noilly Prat, takes place over a period of three weeks. A blend of some twenty herbs and spices is added by hand every day. The exact mix of herbs and spices that goes into Noilly Prat is a closely guarded secret, but includes camomile, bitter orange peel, nutmeg, centaury (Yellow Gentian), coriander, and cloves. After a further six weeks, the finished product is ready for bottling and is shipped in tankers to Beaucaire, Gard, where it is bottled by Martini & Rossi.\n\nSection::::Variants.\n", "The zest is then macerated in alcohol, to allow the oils of the skin to impregnate the flavors and aromas characteristic of this limoncello. 
After a few days, the infusion is ready to be strained and filtered several times, at a constant temperature, and mixed with purified water and sugar, and prepared over a certain time, before bottled. \n\nSection::::Certificates & identifications.\n\nBULLET::::- Kosher (only for Israel)\n\nBULLET::::- P.G.I .lemons (Protected Geographical Indication)\n\nBULLET::::- Gluten free\n\nBULLET::::- Suitable for vegan\n\nBULLET::::- Premium sugar beet based alcohol\n\nBULLET::::- Allergen (ex dir. 2003/89/CE) free\n\nBULLET::::- Pure refined Italian sugar\n", "The basic manufacture process of NCS involves juice extraction, physical elimination of impurities and clarification of the juice, evaporation of the water content of the juice, crystallization, eventually drying and packaging.\n\nThe cane juice is generally extracted from cleaned and eventually shredded cane stalks by mechanical processes, commonly with simple crushers consisting of three metal rollers. It is filtered to separate bagasse particles and/or allowed to settle so to eliminate solid impurities.\n", "Mavrodaphni is initially vinified in large vats exposed to the sun. Once the wine reaches a certain level of maturity, fermentation is stopped by adding distillate prepared from previous vintages. Then the Mavrodaphni distillate and the wine, still containing residual sugar, is transferred to the underground cellars to complete its maturation. There it is \"educated\" by contact with older wine using the solera method of serial blending. Once aged, the wine is bottled and sold as a dessert wine under the Mavrodaphni Protected designation of origin.\n\nSection::::Wine.\n", "The amount of sugar in the \"liqueur d'expédition\" determines the sweetness of the Champagne, the sugar previously in the wine having been consumed in the second fermentation. Generally, sugar is added to balance the high acidity of the Champagne, rather than to produce a sweet taste. Brut Champagne will only have a little sugar added, and Champagne called \"nature\" or \"zéro dosage\" will have no sugar added at all. A cork is then inserted, with a capsule and wire cage (muselet) securing it in place.\n", "BULLET::::2. \"Star Anis\" – 3:28\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::3. \"Lemon Grass\" – 4:21\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::4. \"Four Thieves Vinegar\" – 3:34\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::5. \"Galangal Root\" – 2:33\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::6. \"Spikenard\" – 3:33\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::7. \"Cinquefoil\" – 2:59\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::8. \"Hyssop\" – 3:33\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::9. \"Agrimony\" – 2:06\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::10. \"Arabic Gum\" – 2:50\n\nBULLET::::- Produced by Metal Fingers\n\nBULLET::::11. \"Benzoin Gum\" – 2:47\n\nBULLET::::- Produced by Metal Fingers\n", "BULLET::::- the method by gasification: a \"liqueur de dosage\" is added to the wine then carbon dioxide is injected into the vat. The wine is bottled under pressure. This is the method of production for flavoured sparkling wines.\n\nThe top ten producers of Sparkling wine in the world\n", "Due to its concentrated sugar and yeast content, the captured liquid naturally and immediately ferments into a mildly alcoholic drink called \"toddy\", \"tuak\", or occasionally \"palm wine\". 
Within a few hours after collection, the toddy is poured into large wooden vats, called \"wash backs\", made from the wood of teak or Berrya cordifolia. The natural fermentation process is allowed to continue in the wash backs until the alcohol content reaches 5-7% and deemed ready for distillation.\n", "Section::::Production.\n\nAkvavit is distilled from either grain or potatoes. After distillation, it is flavoured with herbs, spices, or fruit oil. Commonly seen flavours are caraway, cardamom, cumin, anise, fennel, and lemon or orange peel. Dill and grains of paradise are also used. The Danish distillery Aalborg makes an akvavit distilled with amber.\n", "Clarification is carried out to coagulate the particulates, which come to the surface during boiling and are skimmed off. A variety of materials are used, such as plant material, ash, etc. With the aim of neutralizing the juice, which facilitates the formation of sugar crystals, lime or sulfur dioxide are added. In some of the larger factories the juice is filtered and chemically clarified.\n", "After the primary fermentation of red grapes the free run wine is pumped off into tanks and the skins are pressed to extract the remaining juice and wine. The press wine is blended with the free run wine at the winemaker's discretion. The wine is kept warm and the remaining sugars are converted into alcohol and carbon dioxide.\n", "POM branded products are produced from fruit obtained from their own corporate orchards, and other orchards in the same area. The company employs a proprietary process in their own facilities to mechanically extract juice for various pomegranate based products.\n\nSection::::Sponsorship of research.\n", "BULLET::::- Wine and Craft Beverage\n\nBULLET::::- Wine and Spirit World Updates\n\nBULLET::::- Alcohol and Tobacco Tax and Trade Bureau AVA definition\n\nBULLET::::- Alcohol and Tobacco Tax and Trade Bureau Appellation definition\n\nBULLET::::- Wine Institute US AVA's\n\nBULLET::::- AVA definition\n\nBULLET::::- New York Wine and Grape Foundation AVA's\n\nBULLET::::- Northern Cross Vineyard\n\nBULLET::::- Upper Hudson Wine Trail\n\nBULLET::::- Times Union of Albany Upper Hudson Wine Trail press release\n\nBULLET::::- News 10 Mary Wilson - Upper Hudson Harvest\n\nBULLET::::- Upper Hudson Wine Region Map\n\nBULLET::::- Upper Hudson Wine Trail Blog\n\nBULLET::::- Upper Hudson AVA Petition Kane's Beverage News Daily\n", "Caffeine can also be extracted from coffee beans and tea leaves using a direct organic extraction. The beans or leaves can be soaked in ethyl acetate which favorably dissolves the caffeine, leaving a majority of the coffee or tea flavor remaining in the initial sample.\n\nSection::::Techniques.:Multistage countercurrent continuous processes.\n", "Various additives are combined into the shredded tobacco product mixtures, with humectants such as propylene glycol or glycerol, as well as flavoring products and enhancers such as cocoa solids, licorice, tobacco extracts, and various sugars, which are known collectively as \"casings\". The leaf tobacco is then shredded, along with a specified amount of small laminate, expanded tobacco, BL, RL, ES, and IS. A perfume-like flavor/fragrance, called the \"topping\" or \"toppings\", which is most often formulated by , is then blended into the tobacco mixture to improve the consistency in flavor and taste of the cigarettes associated with a certain brand name. Additionally, they replace lost flavors due to the repeated wetting and drying used in processing the tobacco. 
Finally, the tobacco mixture is filled into cigarette tubes and packaged.\n", "Openvape\n\nOpenvape (also spelled O.penVAPE) manufactures and distributes personal vaporizer devices for use with herbal extract oil-filled cartridges. Founded in 2012, the company is headquartered in Denver, Colorado, and sells products at 1,200+ retail locations across a distribution network of licensed affiliates in Colorado, California, Oregon, Arizona, Nevada, Connecticut, Maine, Massachusetts, Vermont, New Mexico, Jamaica, Czech Republic, France, the Netherlands, United Kingdom, Canada, Poland, Ireland, Scotland, and South Africa.\n\nO.penVAPE licenses its intellectual property to eleven distribution partners in ten states and Jamaica. Licensees employ O.penVAPE’s Organa Labs technology and proprietary processes to manufacture cannabis oil using supercritical CO2 extraction.\n", "Immediately after disgorging but before final corking, the liquid level is topped up with \"liqueur d'expédition\", commonly a little sugar, a practice known as \"dosage.\" The \"liqueur d'expédition\" is a mixture of the base wine and sucrose, plus 0.02 to 0.03 grams of sulfur dioxide as a preservative. Some \"maisons de Champagne\" (Champagne brands) claim to have secret recipes for this, adding ingredients such as old Champagne wine and candi sugar. In the \"Traité théorique et pratique du travail des vins\" (1873), Maumené lists the additional ingredients \"usually present in the \"liqueur d'expédition\"\": port wine, cognac, elderberry wine, kirsch, framboise wine, alum solutions, tartaric acid, and tannins.\n", "The process is almost always used in conjunction with red wine production since some of the flavor compounds produced by volatile phenols tend to form undesirable flavors with white wine grape varieties.\n\nSection::::Wine production.:Other techniques.\n", "The Upper Hudson Wine Trail legislation was passed by the New York State Senate and Assembly during the 2017 Legislative session. For the 2016-17 legislative session Assemblywoman Carrie Woerner, Senator Kathy Marchione and Wine Trail President Andrew Weber worked to get passage of the bill. It was signed into law by Governor Cuomo on August 21, 2017. The process began in the fall of 2015 after the AVA petition had been accepted as perfected by TTB. During the 2015-16 New York legislative session the Upper Hudson Wine Trail was introduced as A10609 and S8052 where it was passed by the Senate.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-04227
How does anesthesia work and is a person rendered asleep or unconscious?
The person is unconscious, not asleep. Sleep is a particular pattern of neuronal firing rate, direction, and cycling in specific regions of the brain, like the ascending reticular system. Anesthesia rests on three components: unconsciousness, analgesia, and muscle paralysis. For gas anesthesia, we actually don't know how it works at the molecular level. For total intravenous anesthesia, we do. For that kind of anesthesia, the most common drugs are propofol for unconsciousness, which acts on chloride channels in neurons, making them hard to fire upon a stimulus, like auditory or visual stimulation. For analgesia we use remifentanil, which works on opioid receptors to make neurons hard to fire on noxious (pain) stimuli. And for muscle relaxation we use rocuronium, which blocks the synapses between neurons and muscles by binding the receptors on the muscle side of the synapse. With gas anesthesia using sevoflurane, we really don't know how the gas blocks pain, produces unconsciousness, and gives some muscle relaxation. Several theories speculate that it modifies neurons in some way that makes them harder to fire on a stimulus. There is a beautiful example challenging this theory on YouTube, where scientists use anesthetic gases to numb a plant that contracts its leaves on touch. Since plants don't have neurons, how it works remains a big mystery. I'm an anesthesiologist.
[ "Section::::Anaesthesia.:Obstetric anaesthesia.\n", "Section::::Techniques.:General anesthesia.:Monitoring.\n", "BULLET::::- \"Tumescent anesthesia\": a large amount of very dilute local anesthetics are injected into the subcutaneous tissues during liposuction.\n\nBULLET::::- \"Systemic local anesthetics\": local anesthetics are given systemically (orally or intravenous) to relieve neuropathic pain\n\nSection::::Techniques.:Regional anesthesia.:Nerve blocks.\n", "Anesthesia\n\nAnesthesia or anaesthesia (from Greek \"without sensation\") is a state of controlled, temporary loss of sensation or awareness that is induced for medical purposes. It may include analgesia (relief from or prevention of pain), paralysis (muscle relaxation), amnesia (loss of memory), or unconsciousness. A patient under the effects of anesthetic drugs is referred to as being anesthetized. \n\nAnesthesia enables the painless performance of medical procedure that would otherwise cause severe or intolerable pain to an unanesthetized patient, or would otherwise be technically unfeasible. Three broad categories of anesthesia exist:\n", "Section::::Techniques.\n", "Section::::Techniques.:General anesthesia.:Equipment.\n", "Section::::Techniques.:General anesthesia.\n", "Anesthesia (disambiguation)\n\nAnesthesia or anaesthesia has traditionally meant the condition of having the perception of pain and other sensations blocked. \n\nIn some countries, the term is also used to mean anesthesiology.\n\nAnesthesia may also refer to:\n\nBULLET::::- Veterinary anesthesia\n\nSection::::Music.\n\nBULLET::::- \"Anesthesia\" (album) , a 1995 album by Fun People\n\nBULLET::::- \"Anesthesia\" , a 1992 album by Premature Ejaculation\n\nBULLET::::- \"Anesthesia\", the sixth track on the album \"Against the Grain\" (1990) by Bad Religion\n\nBULLET::::- \"Anesthesia\", the thirteenth track on the album \"Life Is Killing Me\" (2003) by Type O Negative\n", "Anesthesia is a combination of the endpoints (discussed above) that are reached by drugs acting on different but overlapping sites in the central nervous system. General anesthesia (as opposed to sedation or regional anesthesia) has three main goals: lack of movement (paralysis), unconsciousness, and blunting of the stress response. In the early days of anesthesia, anesthetics could reliably achieve the first two, allowing surgeons to perform necessary procedures, but many patients died because the extremes of blood pressure and pulse caused by the surgical insult were ultimately harmful. Eventually, the need for blunting of the surgical stress response was identified by Harvey Cushing, who injected local anesthetic prior to hernia repairs. This led to the development of other drugs that could blunt the response leading to lower surgical mortality rates.\n", "BULLET::::- Regional and local anesthesia, which blocks transmission of nerve impulses from a specific part of the body. Depending on the situation, this may be used either on its own (in which case the patient remains conscious), or in combination with general anesthesia or sedation. Drugs can be targeted at peripheral nerves to anesthetize an isolated part of the body only, such as numbing a tooth for dental work or using a nerve block to inhibit sensation in an entire limb. 
Alternatively, epidural, spinal anesthesia, or a combined technique can be performed in the region of the central nervous system itself, suppressing all incoming sensation from nerves outside the area of the block.\n", "BULLET::::- \"Infiltrative anesthesia\": a small amount of local anesthetic is injected in a small area to stop any sensation (such as during the closure of a laceration, as a continuous infusion or \"freezing\" a tooth). The effect is almost immediate.\n\nBULLET::::- \"Peripheral nerve block\": local anesthetic is injected near a nerve that provides sensation to particular portion of the body. There is significant variation in the speed of onset and duration of anesthesia depending on the potency of the drug (e.g. Mandibular block).\n", "Anesthesia is administered to prevent pain from an incision, tissue manipulation and suturing. Based on the procedure, anesthesia may be provided locally or as general anesthesia. Spinal anesthesia may be used when the surgical site is too large or deep for a local block, but general anesthesia may not be desirable. With local and spinal anesthesia, the surgical site is anesthetized, but the patient can remain conscious or minimally sedated. In contrast, general anesthesia renders the patient unconscious and paralyzed during surgery. The patient is intubated and is placed on a mechanical ventilator, and anesthesia is produced by a combination of injected and inhaled agents.\n", "Anesthesia dolorosa or anaesthesia dolorosa or deafferentation pain is pain felt in an area (usually of the face) which is completely numb to touch. The pain is described as constant, burning, aching or severe. It can be a side effect of surgery involving any part of the trigeminal system, and occurs after 1–4% of peripheral surgery for trigeminal neuralgia. No effective medical therapy has yet been found. Several surgical techniques have been tried, with modest or mixed results. The value of surgical interventions is difficult to assess because published studies involve small numbers of mixed patient types and little long term follow-up.\n", "Section::::Safety.\n", "Section::::Agents and Doses.\n", "The first hospital anesthesia department was established at the Massachusetts General Hospital in 1936, under the leadership of Henry K. Beecher (1904–1976). Beecher, who received his training in surgery, had no previous experience in anesthesia.\n", "Blood or blood expanders may be administered to compensate for blood lost during surgery. Once the procedure is complete, sutures or staples are used to close the incision. Once the incision is closed, the anesthetic agents are stopped or reversed, and the patient is taken off ventilation and extubated (if general anesthesia was administered).\n\nSection::::Description of surgical procedure.:Post-operative care.\n", "As an example sequence of induction drugs:\n\nBULLET::::1. Pre-oxygenation to fill lungs with oxygen to permit a longer period of apnea during intubation without affecting blood oxygen levels\n\nBULLET::::2. Lidocaine for sedation and systemic analgesia for intubation\n\nBULLET::::3. Fentanyl for systemic analgesia for intubation\n\nBULLET::::4. Propofol for sedation for intubation\n\nBULLET::::5. 
Switching from oxygen to a mixture of oxygen and inhalational anesthetic\n\nLaryngoscopy and intubation are both very stimulating and induction blunts the response to these maneuvers while simultaneously inducing a near-coma state to prevent awareness.\n\nSection::::Induction.:Physiologic monitoring.\n", "BULLET::::- General anesthesia suppresses central nervous system activity and results in unconsciousness and total lack of sensation. A patient receiving general anesthesia can lose consciousness with either intravenous agents or inhalation agents.\n\nBULLET::::- Sedation suppresses the central nervous system to a lesser degree, inhibiting both anxiety and creation of long-term memories without resulting in unconsciousness.\n", "Horace Wells conducted the first public demonstration of the inhalational anesthetic at the Massachusetts General Hospital in Boston in 1845. However, the nitrous oxide was improperly administered and the patient cried out in pain. On 16 October 1846, Boston dentist William Thomas Green Morton gave a successful demonstration using diethyl ether to medical students at the same venue. Morton, who was unaware of Long's previous work, was invited to the Massachusetts General Hospital to demonstrate his new technique for painless surgery. After Morton had induced anesthesia, surgeon John Collins Warren removed a tumor from the neck of Edward Gilbert Abbott. This occurred in the surgical amphitheater now called the Ether Dome. The previously skeptical Warren was impressed and stated, \"Gentlemen, this is no humbug.\" In a letter to Morton shortly thereafter, physician and writer Oliver Wendell Holmes, Sr. proposed naming the state produced \"anesthesia\", and the procedure an \"anesthetic\".\n", "Some sedation is sometimes provided to help the patient relax and pass the time during the procedure, but with a successful spinal anaesthetic the surgery can be performed with the patient wide awake.\n\nSection::::Technique.:Anatomy.\n", "Over the past 100 years, the study and administration of anesthesia has become more complex. Historically anesthesia providers were almost solely utilized during surgery to administer general anesthesia in which a person is placed in a pharmacologic coma. This is performed to permit surgery without the individual responding to pain (analgesia) during surgery or remembering (amnesia) the surgery.\n", "When pain is blocked from a part of the body using local anesthetics, it is generally referred to as regional anesthesia. There are many types of regional anesthesia either by injecting into the tissue itself, a vein that feeds the area or around a nerve trunk that supplies sensation to the area. The latter are called nerve blocks and are divided into peripheral or central nerve blocks.\n\nThe following are the types of regional anesthesia:\n", "Section::::Neurological theories of action.\n\nThe full mechanism of action of volatile anaesthetic agents is unknown and has been the subject of intense debate. \"Anesthetics have been used for 160 years, and how they work is one of the great mysteries of neuroscience,\" says anaesthesiologist James Sonner of the University of California, San Francisco. Anaesthesia research \"has been for a long time a science of untestable hypotheses,\" notes Neil L. Harrison of Cornell University.\n", "Section::::History of anesthesia.:History of ether's application.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-12328
When it comes to bullet calibers, what do the numbers indicate? E.g. 7.62x39mm, 5.56x45mm, 9x19.
Example: 5.56x45mm. The 5.56 is the diameter of the bullet in millimetres, and the 45 is the length of the cartridge casing in millimetres.
[ "BULLET::::- was a United States Navy \"Admirable\"-class minesweeper during World War II\n\nBULLET::::- was a United States Navy during World War II\n\nBULLET::::- was a United States Navy \"Edsall\"-class destroyer escort during World War II\n\nBULLET::::- was a United States Navy \"General G. O. Squier\"-class transport ship during World War II\n\nBULLET::::- was a United States Navy \"Nightingale\"-class coastal minesweeper during World War II\n\nBULLET::::- was a United States Navy \"Trefoil\"-class concrete barge during World War II\n\nSection::::In transportation.\n\nBULLET::::- The Alfa Romeo 149 car\n\nBULLET::::- The Detroit Diesel 149 series of diesel engines of the 1960s\n", "BULLET::::- Mf. \"Munitionsfabrik\" - The portions of Spandau arsenal deicated to making cartridge cases and bullets, assembling full cartridges, and packing them into cartons and crates. This would be found in the middle of the first line of the ammo carton label, followed by F1, F2 or F3 (the number of the assembly line that assembled the ammunition).\n\nBULLET::::- P \"Pulverfabrik\" - The portions of Spandau arsenal dedicated to manufacturing propellants. The code \"P.\" would be followed by the propellant batch number, the letter \"L.\" (for \"Lieferung\" \"Shipment\") and the 2-digit year of manufacture.\n", "BULLET::::- .32 H&R Magnum - the only revolver cartridge in this caliber which is in wide use today, mostly in small-frame revolvers. This is an extended version of the much earlier .32 S&W long, which is an extended version of the .32 S&W.\n\nBULLET::::- .327 Federal Magnum - a new cartridge developed jointly by Ruger and Federal. This cartridge is an extended version of the .32 H&R Magnum\n\nSection::::Rifle cartridges in 7.62 mm caliber.\n\nThe most common and historical rifle cartridges in this caliber are:\n\nBULLET::::- .30 Carbine, used in the M1/M2/M3 carbines, sometimes called the 7.62×33mm\n", "BULLET::::- Length 104 .1 (41.0 in.),\n\nBULLET::::- Width 40.6 cm (16.0 in.),\n\nBULLET::::- Height 273.7 cm (9.0 in.)\n\nBULLET::::- Inventory Number: A19500094005\n", "BULLET::::- Semitrailer, 7-Ton, panal cargo, Gramm Motor and Trailer Co.\n\nBULLET::::- G596\n\nBULLET::::- Semitrailer, 7-Ton, Cargo, Highway Trailer Co.\n\nBULLET::::- G597\n\nBULLET::::- Semitrailer, 7-Ton, Cargo, Carter\n\nBULLET::::- G598\n\nBULLET::::- Semitrailer, 7-Ton, Cargo, Whitehead\n\nBULLET::::- G599\n\nBULLET::::- Semitrailer, 11-Ton, Refer, Hyde model KR-20\n\nSection::::G600 to G699.\n\nBULLET::::- G600\n\nBULLET::::- Semitrailer, 7½-Ton, Low Platform,\n\nBULLET::::- G601\n\nBULLET::::- Semitrailer, 10-Ton, stake, Fruehauf trailer co.\n\nBULLET::::- G602\n\nBULLET::::- Semitrailer, 10-Ton, low bed, Highway trailer Co.\n\nBULLET::::- G603\n\nBULLET::::- Semitrailer, 12½-Ton, van, Fruehauf trailer co.\n\nBULLET::::- G604\n\nBULLET::::- Semitrailer, 22½-Ton, Low Platform, Trailer Co. 
of America\n\nBULLET::::- G605\n\nBULLET::::- trailer 1/2-Ton, public address van.\n", "BULLET::::- Semitrailer, 3½-Ton, Combination Stake and Platform, Checker\n\nBULLET::::- G561\n\nBULLET::::- semitrailer, 3-Ton, van, Gramm model DF-40\n\nBULLET::::- G562\n\nBULLET::::- semitrailer, 3½-Ton, Combination Stake and Platform, Checker model C-4\n\nBULLET::::- G563\n\nBULLET::::- semitrailer, 3½-Ton, Combination Stake and Platform, Dorsey model D-S\n\nBULLET::::- G564\n\nBULLET::::- Semitrailer, 3½-Ton, Combination Stake and Platform, Hobbs, model 5-DF\n\nBULLET::::- G565\n\nBULLET::::- Semitrailer, 6-Ton, Combination Stake and Platform, Kingham Trailer Co. Model H-308\n\nBULLET::::- G566\n\nBULLET::::- semitrailer, 3-Ton,\n\nBULLET::::- G567\n\nBULLET::::- Semitrailer, 3½-Ton, Combination Stake and Platform, Utility Trailer Manufacturing Company\n\nBULLET::::- G568\n\nBULLET::::- Semitrailer, 6-Ton, Combination Stake and Platform, Winter-Wiess\n\nBULLET::::- G569\n", "Section::::Current GRAU indices.:Misconceptions.\n\nSeveral common misconceptions surround the scope and originating body of these indices. The GRAU designation is not an industrial designation, nor is it assigned by the design bureau. In addition to its GRAU designation, a given piece of equipment could have a design name, an industrial name and a service designation.\n\nFor example, one of the surface-to-air missiles in the S-25 Berkut air defense system had at least four domestic designations:\n\nBULLET::::- design name: La-205\n\nBULLET::::- GRAU index: 5V7\n\nBULLET::::- industry name: Article 205 ()\n\nBULLET::::- Soviet military designation: V-300\n", "BULLET::::- M1074 Truck, Palletized load system, 10 × 10 with Material Handling Crane and 20K winch\n\nBULLET::::- M1075 truck, Palletized load system,\n\nBULLET::::- M1076 trailer, Palletized load system, (PLST)\n\nBULLET::::- M1077 truck, flatrack, Palletized load system, (PLST)\n\nBULLET::::- M1078 2.5-ton Cargo Truck, (LMTV)\n\nBULLET::::- M1079 2.5-ton Van\n\nBULLET::::- M1080 2.5-ton Chassis\n\nBULLET::::- M1081 2.5-ton Cargo Truck LVAD LAPES/AD\n\nBULLET::::- M1082 2.5-ton Trailer\n\nBULLET::::- M1083 5-ton Cargo Truck\n\nBULLET::::- M1084 5-ton Cargo Truck with MHE\n\nBULLET::::- M1085 5-ton Long-wheelbase Cargo Truck\n\nBULLET::::- M1086 5-ton Long-wheelbase Cargo Truck with MHE\n\nBULLET::::- M1087 5-ton Expansible Van\n\nBULLET::::- M1088 5-ton Tractor\n", "BULLET::::- M1089 5-ton Wrecker\n\nBULLET::::- M1090 5-ton Dump Truck\n\nBULLET::::- M1091 5-ton Fuel Truck\n\nBULLET::::- M1092 Truck, Chassis 5-ton\n\nBULLET::::- M1093 5-ton Cargo Truck LVAD LAPES/AD\n\nBULLET::::- M1094 5-ton Dump Truck LVAD LAPES/AD\n\nBULLET::::- M1095 5-ton Trailer\n\nBULLET::::- M1096 5-ton Long-wheelbase Chassis\n\nBULLET::::- M1097A1 Truck, HMMWV variant, heavy, 1¼-ton, 4 × 4,\n\nBULLET::::- M1097A2 Truck, HMMWV, maintenance, heavy, 1¼-ton, 4 × 4,\n\nBULLET::::- M1097 Avenger, short-range air defense system\n\nBULLET::::- M1098 5000-gallon semitrailer\n\nSection::::M1100 to M1199.\n\nBULLET::::- M1100 trailer, for M120 120 mm mortar\n\nBULLET::::- M1101 trailer, cargo, light, (for HMMWV)\n\nBULLET::::- M1102 trailer, cargo, heavy, (for HMMWV)\n", "BULLET::::- delivered as BLS B 831\n\nBULLET::::- renumbered with UIC number 50 63 20-33 801-5\n\nBULLET::::- rebuilt 1991 and renumbered 50 63 20-33 705-4\n\nBULLET::::- registered as 50 85 20-35 451-7 CH-BLS\n\nSection::::Standard practices.:United Kingdom.\n", "BULLET::::- was a T2 tanker during World War II\n\nBULLET::::- was a 
\"Barracuda\"-class submarine during World War II\n\nBULLET::::- was an \"Alamosa\"-class cargo ship during World War II\n\nBULLET::::- was an \"Admirable\"-class minesweeper during World War II\n\nBULLET::::- was a \"Trefoil\"-class concrete barge during World War II\n\nBULLET::::- was a during World War II\n\nBULLET::::- was a during World War II\n\nBULLET::::- was a yacht during World War I\n\nBULLET::::- was a during World War II\n\nSection::::In sports.\n\nBULLET::::- Baseball Talk was a set of 164 talking baseball cards released by Topps Baseball Card Company in 1989\n\nSection::::In transportation.\n", "BULLET::::- M941 truck, chassis, 5-ton, 6 × 6 - M939 series 5-ton 6x6 truck\n\nBULLET::::- M942 truck, chassis, 5-ton, 6 × 6 (XLWB) - M939 series 5-ton 6x6 truck\n\nBULLET::::- M943 truck, chassis, 5-ton, 6 × 6 (XLWB W/winch) - M939 series 5-ton 6x6 truck\n\nBULLET::::- M944 truck, chassis, 5-ton, 6 × 6 - M939 series 5-ton 6x6 truck\n\nBULLET::::- M944A1 truck, chassis, 5-ton, 6 × 6 Mobile shop equipped\n\nBULLET::::- M945 truck, chassis, 5-ton, 6 × 6 - M939 series 5-ton 6x6 truck\n\nBULLET::::- M963 truck, cargo, 2.5-ton, 6 × 6,\n", "BULLET::::- M408 truck, -ton, 6 × 6\n\nBULLET::::- M409 truck, 10-ton, 8 × 8\n\nBULLET::::- XM410 truck, -ton, 8 x 8, Chrysler\n\nBULLET::::- M411 truck shop van MGM-18 Lacrosse\n\nBULLET::::- M412 truck shop van MGM-18 Lacrosse\n\nBULLET::::- M416 trailer, cargo, -ton, 2-wheeled (G857) (1962)\n\nBULLET::::- M416A1 Trailer, cargo, -ton, 2-wheeled, (1976)\n\nBULLET::::- M416B1 Trailer, cargo, -ton, 2-wheeled\n\nBULLET::::- M417 trailer, cargo, 1-ton (G875)\n\nBULLET::::- M420 trailer, MGR-3 Little John rocket\n\nBULLET::::- M422 'Mighty Mite' Truck, utility, lightweight, -ton, 4 × 4 (G843) (1959)\n\nBULLET::::- M422A1 'Mighty Mite' Truck, utility, lightweight, -ton, 4 × 4, (1960), 6 inch longer\n", "BULLET::::- 178th Airlift Squadron unit of the North Dakota Air National Guard\n\nBULLET::::- Blohm & Voss P 178 was an experimental jet-powered dive bomber during World War II\n\nBULLET::::- Panhard 178 was an advanced French reconnaissance 4×4 armoured car during World War II\n\nBULLET::::- was a U.S. Navy miscellaneous auxiliary, bathymetric Survey Ship during World War II\n\nBULLET::::- was a U.S. Navy during World War II\n\nBULLET::::- was a U.S. Navy \"Alamosa\"-class cargo ship during World War II\n\nBULLET::::- was a U.S. Navy during World War II\n\nBULLET::::- was a U.S. 
Navy during World War II\n", "BULLET::::- M819 Truck, tractor wrecker, medium w/5th wheel, diesel engine, 5-ton, 6 × 6 (G908)– M809 series 5-ton 6x6 truck\n\nBULLET::::- M820 Truck, van, 5-ton, 6 × 6, expandable (G908)– M809 series 5-ton 6x6 truck\n\nBULLET::::- M821 Truck, bridging, 5-ton, 6 × 6 (G908)– M809 series 5-ton 6x6 truck\n\nBULLET::::- M822 semitrailer, electronic van, 10-ton,\n\nBULLET::::- M823 semitrailer, electronic van, 10-ton,\n\nBULLET::::- M824 semitrailer, electronic van, 10-ton,\n\nBULLET::::- M825 Truck, recoilless rifle, 106 mm, -ton, 4 × 4, (1970)\n\nBULLET::::- M829 dolly set, includes M830, and M831\n\nBULLET::::- M830 dolly set, front\n\nBULLET::::- M831 dolly set, rear\n\nBULLET::::- M832 trailer dolly\n", "BULLET::::- M927 Truck, cargo, 5-ton, 6 × 6 XLWB- M939 series 5-ton 6x6 truck\n\nBULLET::::- M928 Truck, cargo, 5-ton, 6 × 6 XLWB (w/winch) - M939 series 5-ton 6x6 truck\n\nBULLET::::- M929 Truck, dump, 5-ton, 6 × 6 - M939 series 5-ton 6x6 truck\n\nBULLET::::- M930 Truck, dump, 5-ton, 6 × 6 (w/winch) - M939 series 5-ton 6x6 truck\n\nBULLET::::- M931 Truck, tractor, 5-ton, 6 × 6 - M939 series 5-ton 6x6 truck\n\nBULLET::::- M932 Truck, tractor, 5-ton, 6 × 6 (w/winch) - M939 series 5-ton 6x6 truck\n", "BULLET::::- ISO 233 Information and documentation – Transliteration of Arabic characters into Latin characters\n\nBULLET::::- ISO 234 Files and rasps\n\nBULLET::::- ISO 234-1:1983 Part 1: Dimensions\n\nBULLET::::- ISO 234-2:1982 Part 2: Characteristics of cut\n\nBULLET::::- ISO 235:2016 Parallel shank jobber and stub series drills and Morse taper shank drills\n\nBULLET::::- ISO 236 Reamers\n\nBULLET::::- ISO 236-1:1976 Hand reamers\n\nBULLET::::- ISO 236-2:2013 Part 2: Long fluted machine reamers with Morse taper shanks\n\nBULLET::::- ISO 237:1975 Rotating tools with parallel shanks - Diameters of shanks and sizes of driving squares\n", "BULLET::::- Chinese Type 95 / QBZ-95 assault rifle\n\nBULLET::::- Chinese Type 95B / QBZ-95B carbine\n\nBULLET::::- Chinese Type 95 LSW / Type 95 SAW / QBB-95 light support weapon / squad automatic weapon\n\nBULLET::::- Chinese Type 88 / QBU-88 sniper rifle\n\nBULLET::::- Chinese Type 03 / QBZ-03 assault rifle\n\nBULLET::::- Chinese Type 88 / QJY-88 light machine gun\n\nBULLET::::- Chinese integrated combat system QTS-11\n\nSection::::See also.\n\nBULLET::::- 5 mm caliber\n\nBULLET::::- .22 Savage Hi-Power\n\nBULLET::::- 6 mm SAW\n\nBULLET::::- .243 Winchester\n\nBULLET::::- 6.5×54mm Mannlicher–Schönauer\n\nBULLET::::- 6mm BR\n\nBULLET::::- List of rifle cartridges\n\nBULLET::::- Table of handgun and rifle cartridges\n", "BULLET::::- PMAG 7.62 AC: 5-round (PMAG 5 7.62 AC) and 10-round (PMAG 10 7.62 AC) magazines for short-action cartridges built on a 0.470\" case head and overall length of up to 2.86\", such as 7.62x51mm NATO/.308 Winchester, 7mm-08 Remington, 6.5mm Creedmoor, .260 Remington, .243 Winchester, etc.\n\nBULLET::::- PMAG AC L, Standard: 5-round magazines for long-action cartridges built on a 0.470\" case head and overall length of up to 3.50\", such as .30-06, .25-06 Remington, .270 Winchester, .280 Remington, 8mm-06, etc.\n", "BULLET::::- Semitrailers 5- to 6-ton\n\nBULLET::::- G774\n\nBULLET::::- Semitrailers, 10- to 11-ton\n\nBULLET::::- G775\n\nBULLET::::- Trailers 1-ton\n\nBULLET::::- G776\n\nBULLET::::- Trailers, 1.1/2-ton\n\nBULLET::::- G777\n\nBULLET::::- Trailers 2- and 2.1/2-ton\n\nBULLET::::- G778\n\nBULLET::::- Trailers, 3- and 3.1/2-ton\n\nBULLET::::- G779\n\nBULLET::::- Trailers, 5- and 
6-ton\n\nBULLET::::- G780\n\nBULLET::::- power units, willys engine type, model CJ-3A.\n\nBULLET::::- G781\n\nBULLET::::- trailer, laundry, 2-wheel, 2-trailer,\n\nBULLET::::- G782\n\nBULLET::::- M271 trailer, 3.5 ton 1-axle, pole hauler\n\nBULLET::::- V-13 trailer\n\nBULLET::::- G783\n\nBULLET::::- ambulance 3/4-ton, metropolitan, Cadillac 5186, (1952)\n\nBULLET::::- G789\n\nBULLET::::- M242 trailer, van radar dish mount, for M33 fire control system, Nike (rocket)\n", "BULLET::::- M989 trailer, ammunition, 11-ton, heavy expanded mobility\n\nBULLET::::- M990 semitrailer, van, 6-ton,\n\nBULLET::::- M991 semitrailer, van, repair facility,\n\nBULLET::::- M992 carrier, ammunition, (FAASV), (M109A2 chassis)\n\nBULLET::::- M993 M270 Multiple Launch Rocket System\n\nBULLET::::- M995 semitrailer, van, test station,\n\nBULLET::::- M996 truck, ambulance, 4 × 4, armored, 2-litter,\n\nBULLET::::- M997 Truck, ambulance, 4-litter, armor, 1¼-ton, 4 × 4 (HMMWV)\n\nBULLET::::- M997A1 Truck, ambulance, 4-litter, armor, 1¼-ton, 4 × 4 (HMMWV)\n\nBULLET::::- M998 Truck, cargo, personnel, 1¼-ton, 4 × 4, w/o winch (HMMWV)\n\nBULLET::::- M998A1 Truck, cargo, personnel, 1¼-ton, 4 × 4, w/o winch (HMMWV)\n\nSection::::M1000 to M1099.\n", "BULLET::::- * * \"Hirtenberger Patronenfabrik\" - Hirtenberg, Baden bei Wien, Lower Austria, Austria. \"Clean\" export headstamp used by Hirtenberger - with the stars at 3 o'clock and 6 o'clock, the 2-digit year at 12 o'clock, and the caliber at 6 o'clock. The marks are either two 5-point stars, two 6-point asterisks, or a 5-point star and a 6-point asterisk.\n\nBULLET::::- B \"Wöllersdorfer Werke\" - Berndorf, Bezirk Baden, Lower Austria, Austria.\n", "BULLET::::- M767 truck, chassis, -ton, 6 × 6,\n\nBULLET::::- M768 truck, chassis, -ton, 6 × 6,\n\nBULLET::::- M769 truck, chassis, -ton, 6 × 6,\n\nBULLET::::- M770 truck, cargo, -ton, 6 × 6,\n\nBULLET::::- M771 truck, cargo, -ton, 6 × 6,\n\nBULLET::::- M772 truck, cargo, -ton, 6 × 6,\n\nBULLET::::- M773 truck, cargo, -ton, 6 × 6,\n\nBULLET::::- M774 truck, cargo, -ton, 6 × 6,\n\nBULLET::::- M775 truck, cargo, -ton, 6 × 6,\n\nBULLET::::- M776 truck, tanker, -ton, 6 × 6,\n\nBULLET::::- M777 truck, chassis, -ton, 6 × 6,\n\nBULLET::::- M778 truck, cargo, dropside, -ton, 6 × 6,\n", "6.5mm\n\n6.5mm or 6.5mm gauge may refer to:\n\nSection::::Rail transport modelling.\n\nBULLET::::- Z gauge, 1:220 scale with rails 6.5 mm apart, representing standard gauge\n\nBULLET::::- Nn3 gauge, 1:160 scale with rails 6.5 mm apart, representing metre/3-foot gauge\n\nBULLET::::- H0f gauge, 1:87 scale with rails 6.5 mm apart, representing narrow gauge gauge\n\nSection::::Firearms.\n\nSection::::Firearms.:Pistol cartridges.\n\nBULLET::::- 6.5mm Bergmann, centerfire cartridge\n\nBULLET::::- 6.5×25mm CBJ, pistol cartridge\n\nSection::::Firearms.:Rifle cartridges.\n\nBULLET::::- 6.5mm Grendel, cartridge designed for the AR-15\n\nBULLET::::- 6.5mm Creedmoor, centerfire rifle cartridge\n\nBULLET::::- 6.5mm Remington Magnum, belted bottlenecked cartridge\n\nBULLET::::- 6.5×47mm Lapua, smokeless powder rimless bottlenecked rifle cartridge\n\nBULLET::::- 6.5×50mmSR Arisaka, Japanese military cartridge\n", "BULLET::::- M813A1 Truck, Cargo Dropside, 5-ton, 6 × 6 (G908)– M809 series 5-ton 6x6 truck\n\nBULLET::::- M814 Truck, cargo, 5-ton, 6 × 6, XLWB (G908)– M809 series 5-ton 6x6 truck\n\nBULLET::::- M815 Truck, bolster, 5-ton, 6 × 6 (G908)– M809 series 5-ton 6x6 truck\n\nBULLET::::- M816 Truck, Wrecker, medium, 5-ton, 6 × 6, Cummins 250 Engine\n\nBULLET::::- M817 
Truck, dump, 5-ton, 6 × 6 (G908)– M809 series 5-ton 6x6 truck\n\nBULLET::::- M818 Truck, tractor, 5-ton, 6 × 6 (G908)– M809 series 5-ton 6x6 truck\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-12315
While in America, do non-US citizens have the right to free speech?
Yes, foreign nationals still have free speech in the US. The principle is that free speech includes the spread of ideas, and the government does not have the power to regulate ideas regardless of who is expressing them. You probably *don't* have the right to free speech with respect to immigration. For example, if you have just landed at the airport and you spend all your time at passport control saying "The USA is evil, and every person in your government is going right to hell," you may be denied entry. This is not considered a punishment, but merely border control.
[ "In the United States, freedom of expression is protected by the First Amendment to the United States Constitution, and by precedents set in various legal cases. There are several common-law exceptions, including\n", "The following list is partially composed of the respective countries' government claims and does not necessarily reflect the \"de facto\" situation.\n\nSection::::International law.\n\nThe United Nations Universal Declaration of Human Rights, adopted in 1948, provides, in Article 19, that:\n\nEveryone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.\n", "Canada has had a string of high-profile court cases in which writers and publishers have been the targets of human rights complaints for their writings, in both magazines and web postings. The human rights process in Canada is civil in nature, not criminal. Most of those complaints were withdrawn or dismissed.\n", "Section::::Europe.:European Union.:Sweden.\n\nFreedom of speech is regulated in three parts of the Constitution of Sweden:\n\nBULLET::::- \", Chapter 2 (Fundamental Rights and Freedoms) protects personal freedom of expression \"whether orally, pictorially, in writing, or in any other way\".\n", "In adopting the United Nations Universal Declaration of Human Rights, Ireland, Italy, Luxembourg, Monaco, Australia and the Netherlands insisted on reservations to Article 19 insofar as it might be held to affect their systems of regulating and licensing broadcasting.\n\nSection::::Africa.\n\nThe majority of African constitutions provide legal protection for freedom of speech, with the extent and enforcement varying from country to country.\n\nSection::::Africa.:Eritrea.\n\nEritrea allows no independent media and uses draft evasion as a pretext to crack down on dissent, spoken or otherwise. In Eritrea since 2001, fourteen journalists have been imprisoned in unknown places without a trial.\n\nSection::::Africa.:South Africa.\n", "Section::::Europe.:Switzerland.\n", "Section::::Asia.:Saudi Arabia.\n\nBlasphemy against Islam is illegal in Saudi Arabia, under punishment of death.\n\nSection::::Asia.:South Korea.\n\nThe South Korean constitution guarantees freedom of speech, press, petition and assembly for its nationals. However, behaviors or speeches in favor of the North Korean regime or communism can be punished by the National Security Law, though in recent years prosecutions under this law have been rare.\n", "Section::::Europe.:European Union.:France.\n\nThe \"Declaration of the Rights of Man and of the Citizen\", of constitutional value, states, in its article 11:\n\nIn addition, France adheres to the European Convention on Human Rights and accepts the jurisdiction of the European Court of Human Rights.\n", "Section::::Asia.:Japan.\n\nFreedom of speech is guaranteed by Chapter III, Article 21 of the Japanese constitution. There are few exemptions to this right and a very broad spectrum of opinion is tolerated by the media and authorities.\n\nArticle 21:\n\nSection::::Asia.:Malaysia.\n", "Historically, local communities and governments have sometimes sought to place limits upon speech that was deemed subversive or unpopular. There was a significant struggle for the right to free speech on the campus of the University of California at Berkeley in the 1960s. 
And, in the period from 1906 to 1916, the Industrial Workers of the World, a working class union, found it necessary to engage in free speech fights intended to secure the right of union organizers to speak freely to wage workers. These free speech campaigns were sometimes quite successful, although participants often put themselves at great risk.\n", "The government may not criminally punish immigrants based on speech that would be protected if said by a citizen. On entry across borders, the government may bar non-citizens from the United States based on their speech, even if that speech would have been protected if said by a citizen. Speech rules as to deportation, on the other hand, are unclear. Lower courts are divided on the question, while the leading cases on the subject are from the Red Scare.\n\nSection::::See also.\n\nBULLET::::- Gag orders in the United States\n\nBULLET::::- Prior restraint\n\nBULLET::::- \"Virginia v. Black\"\n", "Section::::North America.\n\nSection::::North America.:Canada.\n\nSection::::North America.:Canada.:Constitutional guarantees.\n\nFreedom of expression in Canada is guaranteed by section 2(b) of the Canadian Charter of Rights and Freedoms:\n\nSection 1 of the Charter establishes that the guarantee of freedom of expression and other rights under the Charter are not absolute and can be limited under certain situations:\n\nOther laws that protect freedom of speech in Canada, and did so, to a limited extent, before the Charter was enacted in 1982, include the Implied Bill of Rights, the \"Canadian Bill of Rights\" and the Saskatchewan Bill of Rights.\n\nSection::::North America.:Canada.:Supreme Court decisions.\n", "In response to libel tourism, in 2010 the United States enacted the SPEECH Act making foreign defamation judgments unenforceable in U.S. courts unless those judgments are compliant with the First Amendment.\n\nSection::::South America.\n\nSection::::South America.:Brazil.\n\nIn Brazil, freedom of expression is a Constitutional right. Article Five of the Constitution of Brazil establishes that the \"expression of thought is free, anonymity being forbidden\". Furthermore, the \"expression of intellectual, artistic, scientific, and communications activities is free, independently of censorship or license\".\n", "The Indian Constitution guarantees freedom of speech to every citizen, but itself allows significant restrictions. In India, citizens are free to criticize government, politics, politicians, bureaucracy and policies. However, speech can be restricted on grounds of security, morality, and incitement. There have been landmark cases in the Indian Supreme Court that have affirmed the nation's policy of allowing free press and freedom of expression to every citizen, with other cases in which the Court has upheld restrictions on freedom of speech and of the press. Article 19 of the Indian constitution states that:\n\nAll citizens shall have the right —\n", "Section::::Asia.:Bangladesh.\n\nUnder chapter III of the Fundamental rights in Bangladesh\n\nThe Bangladesh constitution ostensibly guarantees\n\nfreedom of speech to every citizen according to PART III of the Laws in Bangladesh.\n\nBangladesh constitution states that:\n\nAll the citizens shall have the following right\n\nBULLET::::- 39. 
(1) Freedom of thought and conscience\n\nis guaranteed.\n\nBULLET::::- (2) Subject to any reasonable restrictions\n\nimposed by law in the interests of the\n\nsecurity of the State, friendly relations with\n\nforeign states, public order, decency or\n\nmorality, or in relation to contempt of court,\n\ndefamation or incitement to an offence–\n", "The Constitution of the Republic of China (Taiwan) guarantees freedom of speech, teaching, writing, publishing, assembly and association for its nationals under Articles 11 and 14. These rights were suspended under martial law and Article 100 of the Criminal Code, which were lifted and abolished in July 15, 1987 and March 2, 1991 respectively. In 2018, Reporters Without Borders ranked Taiwan 42nd in the world, citing concerns about media independence due to economic pressure from China.\n\nSection::::Asia.:Thailand.\n", "Section::::Private actors, private property, private companies.\n", "Freedom of speech is the concept of the inherent human right to voice one's opinion publicly without fear of censorship or punishment. \"Speech\" is not limited to public speaking and is generally taken to include other forms of expression. The right is preserved in the United Nations Universal Declaration of Human Rights and is granted formal recognition by the laws of most nations. Nonetheless the degree to which the right is upheld in practice varies greatly from one nation to another. In many nations, particularly those with authoritarian forms of government, overt government censorship is enforced. Censorship has also been claimed to occur in other forms (see propaganda model) and there are different approaches to issues such as hate speech, obscenity, and defamation laws.\n", "It is also illegal under Section 269/C of the penal code and punishable with three years of imprisonment, to publicly \"deny, question, mark as insignificant, attempt to justify the genocides carried out by the National Socialist and Communist regimes, as well as the facts of other crimes against humanity.\"\n\nSection::::Europe.:European Union.:Ireland.\n", "Freedom of speech is restricted by the National Security Act of 1980 and in the past, by the Prevention of Terrorism Ordinance (POTO) of 2001, the Terrorist and Disruptive Activities (Prevention) Act (TADA) from 1985 to 1995, and similar measures. Freedom of speech is also restricted by Section 124A of the Indian Penal Code, 1860 which deals with sedition and makes any speech or expression which brings contempt towards government punishable by imprisonment extending from three years to life. In 1962 the Supreme Court of India held this section to be constitutionally valid in the case \"Kedar Nath Singh vs State of Bihar\".\n", "BULLET::::- In February 2006, Calgary Sufi Muslim leader Syed Soharwardy filed a human rights complaint against \"Western Standard\" publisher Ezra Levant. Levant was compelled to appear before the Alberta Human Rights Commission to discuss his intention in publishing the Muhammad cartoons. Levant posted a video of the hearing on YouTube. Levant questioned the competence of the Commission to take up the issue, and challenged it to convict him, \"and sentence me to the apology\", stating that he would then take \"this junk into the real courts, where eight hundred years of common law\" would come to his aid. 
In February 2008, Soharwardy dropped the complaint noting that \"most Canadians see this as an issue of freedom of speech, that that principle is sacred and holy in our society.\"\n", "Unlike what has been called a strong international consensus that hate speech needs to be prohibited by law and that such prohibitions override, or are irrelevant to, guarantees of freedom of expression, the United States is perhaps unique among the developed world in that under law, some hate speech is protected.\n", "BULLET::::- (a) the right of every citizen to freedom of\n\nspeech and expression; and\n\nBULLET::::- (b) freedom of the press, are guaranteed.\n\nSection::::Asia.:Hong Kong.\n\nUnder \"Chapter III: Fundamental Rights and Duties of the Residents\" () of the Hong Kong Basic Law:\n\nSection::::Asia.:India.\n", "Other laws or exceptions related to freedom of expression in Sweden concern high treason, war mongering, espionage, unauthorized handling of classified information, recklessness with classified information, Insurgency, treason, recklessness that damages the nation, rumour mongering that hurts national security, inciting crime, crimes that obstruct civil liberties, illegal depictions of violence, libel, insults, illegal threats, threats towards police officers or security guards and abuse during legal proceedings.\n\nSection::::Europe.:European Union.:United Kingdom.\n", "Section::::Restrictions based on special capacity of government.\n\nSection::::Restrictions based on special capacity of government.:As employer.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-02290
How do trees, which are full of moisture, burn down?
Forest fires are extremely hot. Look up "flash point" and it may help you understand. A wet log needs a hot fire to burn: if you are just starting a fire or only have a small fire in the fireplace, a wet log won't burn well because the fire simply isn't hot enough. The fact that a forest fire can get so hot is the reason why wet trees burn.
[ "BULLET::::- Wetting (\"hygriscence\")\n\nBULLET::::- Warming by the sun (\"soliscence\")\n\nBULLET::::- Drying atmospheric conditions (\"xyriscence\")\n\nBULLET::::- Fire (\"pyriscence\") — this is the most common and best studied case, and the term \"serotiny\" is often used where \"pyriscence\" is intended.\n\nBULLET::::- Fire followed by wetting (\"pyrohydriscence\")\n\nSome plants may respond to more than one of these triggers. For example, \"Pinus halepensis\" exhibits primarily fire-mediated serotiny, but responds weakly to drying atmospheric conditions. Similarly, Sierras sequoias and some \"Banksia\" species are strongly serotinous with respect to fire, but also release some seed in response to plant or branch death.\n", "To conceptualize the allocation of growing space, imagine a party where there is one pizza: if there are ten guests, each gets one slice and wants more, but if there are only one or two guests they are well fed. Trees get their energy from the sun, and there is only so much sunlight falling on a given area. If that area is occupied by a very large number of trees, each receives a small portion. There may be a lot of energy being captured, but it is being distributed between so many stems that none grow very quickly. \n", "Section::::Death and regeneration.\n\nWoody material, often referred to as coarse woody debris, decays relatively slowly in many forests in comparison to most other organic materials, due to a combination of environmental factors and wood chemistry (see lignin). Trees growing in arid and/or cold environments do so especially slowly. Thus, tree trunks and branches can remain on the forest floor for long periods, affecting such things as wildlife habitat, fire behavior, and tree regeneration processes.\n\nSection::::Water.\n", "Section::::Physical properties.:Water content.\n\nWater occurs in living wood in three locations, namely:\n\nBULLET::::- in the cell walls,\n\nBULLET::::- in the protoplasmic contents of the cells\n\nBULLET::::- as free water in the cell cavities and spaces, especially of the xylem\n\nIn heartwood it occurs only in the first and last forms. Wood that is thoroughly air-dried retains 8–16% of the water in the cell walls, and none, or practically none, in the other forms. Even oven-dried wood retains a small percentage of moisture, but for all except chemical purposes, may be considered absolutely dry.\n", "BULLET::::- Researchers from the Technion - Israel Institute of Technology and a number of US institutions studied the combined effects of drought and heat stress on \"Arabidopsis thaliana\". Their research suggests that the combined effects of heat and drought stress cause sucrose to serve as the major osmoprotectant.\n\nBULLET::::- Plant physiologists from The University of Putra Malaysia and The University of Edinburgh investigated the relative effects of tree age and tree size on the physiological attributes of two broadleaf species. A photosynthetic system was used to measure photosynthetic rate per unit of leaf mass.\n", "Trees transpire large amounts of water through pores in their leaves called stomata, and through this process of evaporative cooling, forests interact with climate at local and global scales.\n", "Through evapotranspiration, forests reduce water yield, except in unique ecosystems called cloud forests. Trees in cloud forests collect the liquid water in fog or low clouds onto their surface, which drips down to the ground. 
These trees still contribute to evapotranspiration, but often collect more water than they evaporate or transpire.\n", "The first chapter, \"Birth\", begins with lightning starting a forest fire. The heat dries the Douglas-fir cones enough for their scales to spread and release winged seeds. Rain water transports one seed to a sunlit area with well-drained soil. Rodents and insectivores, whose food stashes were destroyed in the fire, eat truffles, which survived underground, and leave feces containing nitrogen-fixing bacteria in the soil. Following one dormant winter stage, the seed begins to germinate.\n", "Section::::Wood–water relationships.\n\nThe timber of living trees and fresh logs contains a large amount of water which often constitutes over 50% of the wood's weight. Water has a significant influence on wood. Wood continually exchanges moisture or water with its surroundings, although the rate of exchange is strongly affected by the degree to which wood is sealed.\n\nWood contains water in three forms:\n", "Section::::Biotic responses and adaptations.:Plants.:Fire resistance.\n\nFire-resistant plants suffer little damage during a characteristic fire regime. These include large trees whose flammable parts are high above surface fires. Mature ponderosa pine (\"Pinus ponderosa\") is an example of a tree species that suffers virtually no crown damage under a naturally mild fire regime, because it sheds its lower, vulnerable branches as it matures.\n\nSection::::Biotic responses and adaptations.:Animals, birds and microbes.\n", "Wildfires occur when all the necessary elements of a fire triangle come together in a susceptible area: an ignition source is brought into contact with a combustible material such as vegetation, that is subjected to enough heat and has an adequate supply of oxygen from the ambient air. A high moisture content usually prevents ignition and slows propagation, because higher temperatures are needed to evaporate any water in the material and heat the material to its fire point. Dense forests usually provide more shade, resulting in lower ambient temperatures and greater humidity, and are therefore less susceptible to wildfires. Less dense material such as grasses and leaves are easier to ignite because they contain less water than denser material such as branches and trunks. Plants continuously lose water by evapotranspiration, but water loss is usually balanced by water absorbed from the soil, humidity, or rain. When this balance is not maintained, plants dry out and are therefore more flammable, often a consequence of droughts.\n", "Trees, and plants in general, affect the water cycle significantly:\n\nBULLET::::- their canopies intercept a proportion of precipitation, which is then evaporated back to the atmosphere (canopy interception);\n\nBULLET::::- their litter, stems and trunks slow down surface runoff;\n\nBULLET::::- their roots create macropores – large conduits – in the soil that increase infiltration of water;\n\nBULLET::::- they contribute to terrestrial evaporation and reduce soil moisture via transpiration;\n\nBULLET::::- their litter and other organic residue change soil properties that affect the capacity of soil to store water.\n", "For instance, lignin is a component of wood, which is relatively resistant to decomposition and can in fact only be decomposed by certain fungi, such as the black-rot fungi. Wood decomposition is a complex process involving fungi which transport nutrients to the nutritionally scarce wood from outside environment. 
Because of this nutritional enrichment the fauna of saproxylic insects may develop and in turn affect dead wood, contributing to wood decomposition and nutrient cycling in the forest floor. Lignin is one such remaining product of decomposing plants with a very complex chemical structure causing the rate of microbial breakdown to slow. Warmth increases the speed of plant decay, by the same amount regardless of the composition of the plant\n", "According to \"The Bioenergy Knowledge Centre\", the energy content of wood is more closely related to its moisture content than its species. The energy content improves as moisture content decreases.\n\nIn 2008, wood for fuel cost $15.15 per 1 million BTUs (0.041 EUR per kWh).\n\nSection::::Environmental impacts.\n\nSection::::Environmental impacts.:Combustion by-products.\n\nAs with any fire, burning wood fuel creates numerous by-products, some of which may be useful (heat and steam), and others that are undesirable, irritating or dangerous.\n", "\"Acacia oncinocarpa\" and \"Eucalyptus miniata\", for example, and perennial herbs all have adaptive mechanisms that enable them to live in fire-prone areas of Australia. Both the acacia (a small spreading shrub) and eucalyptus (an overstorey tree) can regenerate from seeds and vegetatively regenerate new shoots from buds that escape fire. Reproduction and seed fall occur during the eight dry months. Due to the area's frequent fires, the seeds are usually released onto a recently burnt seed bed.\n", "Trees can also grow back after a fire, due to the lignotubers and epicormic buds protected by the thick bark. In some cases, where trees are tightly packed, the fire destroys only the main branches leaving the underground portions and protected trunks of the plants to survive. As plants grow back after a fire, other species can take advantage of the light gaps created, leading to a thick mixture of tea-tree, cutting grass and species such as \"Bauera\".\n\nSection::::Ecology.:Pollination.\n", "One type of air-dried log is \"dead standing,\" which refers to trees which have died from natural causes (bug kill, virus, fire etc.) and cut down after they died. Standing dead trees may be cut one month or several decades after they died, so the term \"dead standing\" does not necessarily mean the logs have dried down to equilibrium moisture content. Dead standing logs can be green, or more-or-less dry.\n", "BULLET::::- Drought response is similar to the leaf-fall response of drought-deciduous trees; however, leafy shoots are shed in place of leaves. Western red cedar (\"Thuja plicata\") provides an example, as do other members of the family Cupressaceae.\n\nBULLET::::- In tropical forests, infestation of tree canopies by woody climbers or lianas can be a serious problem. Cladoptosis – by giving a clean bole with no support for climbing plants – may be an adaptation against lianas, as in the case of \"Castilla\".\n\nSection::::See also.\n\nBULLET::::- Abscission\n\nBULLET::::- Marcescence: the opposite phenomenon – withered branches (or leaves) stay on\n\nSection::::External links.\n", "For example, in Alaskan boreal forests, a paper birch stand 20 years after a fire may have , but after 60 to 90 years, the number of trees will decrease to as spruce replaces the birch. 
After approximately 75 years, the birch will start dying and by 125 years, most paper birch will have disappeared unless another fire burns the area.\n", "Section::::Some examples of fire in different ecosystems.:Shrublands.\n\nShrub fires typically concentrate in the canopy and spread continuously if the shrubs are close enough together. Shrublands are typically dry and are prone to accumulations of highly volatile fuels, especially on hillsides. Fires will follow the path of least moisture and the greatest amount of dead fuel material. Surface and below-ground soil temperatures during a burn are generally higher than those of forest fires because the centers of combustion lie closer to the ground, although this can vary greatly. Common plants in shrubland or chaparral include manzanita, chamise and Coyote Brush.\n", "BULLET::::- White Asphodel (\"Asphodelus albus\")\n\nFor some species of pine, such as Aleppo Pine (\"Pinus halepensis\"), European Black Pine (\"Pinus nigra\") and Lodgepole Pine (\"Pinus contorta\"), the effects of fire can be antagonistic: if moderate, it helps pine cone bursting, seed dispersion and the cleaning of the underwoods; if intense, it destroys these resinous trees.\n\nSection::::Active pyrophytes.\n\nSome trees and shrubs such as the Eucalyptus of Australia actually encourage the spread of fires by producing inflammable oils, and are dependent on their resistance to fire which keeps other species of tree from invading their habitat.\n\nSection::::Pyrophile plants.\n", "Section::::Fire ecology.:Uses.\n", "BULLET::::- their leaves control the humidity of the atmosphere by transpiring. 99% of the water absorbed by the roots moves up to the leaves and is transpired.\n\nAs a result, the presence or absence of trees can change the quantity of water on the surface, in the soil or groundwater, or in the atmosphere. This in turn changes erosion rates and the availability of water for either ecosystem functions or human services. Deforestation on lowland plains moves cloud formation and rainfall to higher elevations.\n", "When occasional wildfires burn down tall woody trees surrounding \"Rubus flagellaris\", the resulting burning has a positive effect on population growth for the species. Other research has also shown that occasional wildfires are beneficial to the population's growth. The plant has a high tolerance to hedging from livestock or wildlife browsing.\n\nSection::::Distribution and habitat.:Ecology.:Pollinators.\n", "The total (harmful) air emissions produced by wood kilns, including their heat source, can be significant. Typically, the higher the temperature the kiln operates at, the larger amount of emissions are produced (per pound of water removed). This is especially true in the drying of thin veneers and high-temperature drying of softwoods.\n\nSection::::See also.\n\nBULLET::::- Shakes (timber)\n\nSection::::Further reading.\n\nBULLET::::- ABARE (2000). National Plantation Inventory, March, 2000. 4p.\n" ]
[ "Things that are wet cannot burn.", "Because trees are full of moisture, they shouldn't be able to burn down. " ]
[ "Things that are wet can burn from a hot enough fire.", "Forest fires are extremely hot, and actually hot enough to burn the trees down completely despite them being full of moisture." ]
[ "false presupposition" ]
[ "Things that are wet cannot burn.", "Because trees are full of moisture, they shouldn't be able to burn down. " ]
[ "false presupposition", "false presupposition" ]
[ "Things that are wet can burn from a hot enough fire.", "Forest fires are extremely hot, and actually hot enough to burn the trees down completely despite them being full of moisture." ]
2018-04944
Why is it so easy to eat or drink when you are hungry or thirsty, but so hard when you are full or hydrated?
Psychological. Your body is hungry or thirsty, and so the brain imparts that desire you feel to eat or drink to get you to actually do it. It's sorta why stuff you might not even like tastes good when you're hungry/thirsty. It's an incentive from the brain.
[ "Section::::Detection.:Decreased volume.:Renin-angiotensin system.\n", "Section::::Detection.:Decreased volume.\n", "However, this theory has been questioned, since it implies synapsids were necessarily less advantaged in water retention, that synapsid decline coincides with climate changes or archosaur diversity (neither of which has been tested) and the fact that desert-dwelling mammals are as well adapted in this department as archosaurs, and some cynodonts like \"Trucidocynodon\" were large-sized predators.\n", "The median preoptic nucleus and the subfornical organ receive signals of decreased volume and increased osmolite concentration. Finally, the signals are received in cortex areas of the forebrain where ultimately the conscious craving arises. The subfornical organ and the organum vasculosum of the lamina terminalis contribute to regulating the overall bodily fluid balance by signalling to the hypothalamus to form vasopressin, which is later released by the pituitary gland. \n", "However, this theory has been questioned, since it implies synapsids were necessarily less advantaged in water retention, that synapsid decline coincides with climate changes or archosaur diversity (neither of which tested) and the fact that desert dwelling mammals as well adapted in this department as archosaurs, and some cynodonts like \"Trucidocynodon\" were large sized predators.\n\nSection::::Main forms.\n", "It is vital for organisms to be able to maintain their fluid levels in very narrow ranges. The goal is to keep the interstitial fluid, the fluid outside the cell, at the same concentration as the intracellular fluid, fluid inside the cell. This condition is called isotonic and occurs when the same level of solutes are present on either side of the cell membrane so that the net water movement is zero. If the interstitial fluid has a higher concentration of solutes than the intracellular fluid it will pull water out of the cell. This condition is called hypertonic and if enough water leaves the cell it will not be able to perform essential chemical functions. If the interstitial fluid becomes less concentrated the cell will fill with water as it tries to equalize the concentrations. This condition is called hypotonic and can be dangerous because it can cause the cell to swell and rupture. One set of receptors responsible for thirst detects the concentration of interstitial fluid. The other set of receptors detects blood volume.\n", "BULLET::::- Modern mammals excrete urea, which requires a relatively high urinary rate to keep it from leaving the urine by diffusion in the kidney tubules. Their skins also contain many glands, which also lose water. Assuming that early synapsids had similar features, e.g., as argued by the authors of \"Palaeos\", they were at a disadvantage in a mainly arid world. 
The same well-respected site points out that \"for much of Australia's Plio-Pleistocene history, where conditions were probably similar, the largest terrestrial predators were not mammals but gigantic varanid lizards (\"Megalania\") and land crocs.\"\n", "Come and quench this thirsting of my soul.br \n\nBread of heaven, feed me till I want no more.br\n\nFill my cup, fill it up and make me whole!br \n\nbr \n\nThere are millions in this world who are cravingbr \n\nThe pleasures earthly things afford.br \n\nBut none can match the wondrous treasurebr \n\nThat I find in Jesus Christ my Lord.br \n\nbr\n\nFill my cup Lord, I lift it up, Lord!br \n\nCome and quench this thirsting of my soul.br \n\nBread of heaven, feed me till I want no more.br \n", "Adjunctive behaviour has been used as evidence of animal welfare problems. Pregnant sows are typically fed only a fraction of the amount of food they would consume by choice, and they remain hungry for almost the whole day. If a water dispenser is available, some sows will drink two or three times their normal daily intake, and under winter conditions, warming this amount of cold water to body temperature, only to discharge it as dilute urine, involves an appreciable caloric cost. However, if such sows are given a bulky high-fibre food (which under typical circumstances would result in an increase in water intake), they spend much longer eating, and the excessive drinking largely disappears. In this case, much of the sows’ water intake appeared to be adjunctive drinking that was not linked to thirst.\n", "One day on his travels, he passed by a group of fishermen while he was saying \"not much\". The fishermen could not catch any fish and were very angry at him. He asked them what he should be saying instead. They told him to say \"Get it full\".\n", "There are receptors and other systems in the body that detect a decreased volume or an increased osmolite concentration. They signal to the central nervous system, where central processing succeeds. Some sources, therefore, distinguish \"extracellular thirst\" from \"intracellular thirst\", where extracellular thirst is thirst generated by decreased volume and intracellular thirst is thirst generated by increased osmolite concentration. Nevertheless, the craving itself is something generated from central processing in the brain, no matter how it is detected.\n\nSection::::Detection.\n", "Section::::Detection.:Decreased volume.:Others.\n\nBULLET::::- Arterial baroreceptors sense a decreased arterial pressure, and signals to the central nervous system in the area postrema and nucleus tractus solitarii.\n\nBULLET::::- Cardiopulmonary receptors sense a decreased blood volume, and signal to area postrema and nucleus tractus solitarii as well.\n\nSection::::Detection.:Cellular dehydration and osmoreceptor stimulation.\n", "Thirst quenching varies among animal species, with dogs, camels, sheep, goats, and deer replacing fluid deficits quickly when water is available, whereas humans and horses may need hours to restore fluid balance.\n\nSection::::Neurophysiology.\n\nThe areas of the brain that contribute to the sense of thirst are mainly located in the midbrain and the hindbrain. Specifically, the hypothalamus appears to play a key role in the regulation of thirst.\n", "Section::::Chart performance.\n", "Thirst\n\nThirst is the craving for potable fluids, resulting in the basic instinct of animals to drink. It is an essential mechanism involved in fluid balance. 
It arises from a lack of fluids or an increase in the concentration of certain osmolites, such as salt. If the water volume of the body falls below a certain threshold or the osmolite concentration becomes too high, the brain signals thirst.\n", "Section::::Short-term regulation of hunger and food intake.:Nutrient signals.\n\nBlood levels of glucose, amino acids, and fatty acids provide a constant flow of information to the brain that may be linked to regulating hunger and energy intake. Nutrient signals that indicate fullness, and therefore inhibit hunger include rising blood glucose levels, elevated blood levels of amino acids, and blood concentrations of fatty acids.\n\nSection::::Short-term regulation of hunger and food intake.:Hormone signals.\n", "Section::::Critical reception.\n", "Palatability\n\nPalatability is the hedonic reward (i.e., pleasure) provided by foods or fluids that are agreeable to the \"palate\", which often varies relative to the homeostatic satisfaction of nutritional, water, or energy needs. The palatability of a food or fluid, unlike its flavor or taste, varies with the state of an individual: it is lower after consumption and higher when deprived. It has increasingly been appreciated that this can create a hunger that is independent of homeostatic needs.\n\nSection::::Brain mechanism.\n", "Osmometric thirst occurs when the solute concentration of the interstitial fluid increases. This increase draws water out of the cells, and they shrink in volume. The solute concentration of the interstitial fluid increases by high intake of sodium in diet or by the drop in volume of extracellular fluids (such as blood plasma and cerebrospinal fluid) due to loss of water through perspiration, respiration, urination and defecation. The increase in interstitial fluid solute concentration causes water to migrate from the cells of the body, through their membranes, to the extracellular compartment, by osmosis, thus causing cellular dehydration.\n", "One of Socrates' lessons was that men should abstain from foods that might provoke a man to eat when he has no hunger, and drinks that might provoke him to drink when he has no thirst. He went on to say that the best sauce in the world is to be hungry.\n", "This behavior is thought to have been developed millions of years ago when plants began their journey onto dry land. While this migration led to much easier consumption of CO, it greatly reduced the amount of water readily available to the plants. Thus, strong evolutionary pressure was put on the ability to find more water.\n\nSection::::Mechanism.\n", "Electrolyte and volume homeostasis is a complex mechanism that balances the body's requirements for blood pressure and the main electrolytes sodium and potassium. In general, electrolyte regulation precedes volume regulation. When the volume is severely depleted, however, the body will retain water at the expense of deranging electrolyte levels.\n", "This is one of two types of thirst and is defined as thirst caused by loss of blood volume (hypovolemia) without depleting the intracellular fluid. This can be caused by blood loss, vomiting, and diarrhea. This loss of volume is problematic because if the total blood volume falls too low the heart cannot circulate blood effectively and the eventual result is hypovolemic shock. The vascular system responds by constricting blood vessels thereby creating a smaller volume for the blood to fill. 
This mechanical solution however has definite limits and usually must be supplemented with increased volume. The loss of blood volume is detected by cells in the kidneys and triggers thirst for both water and salt via the renin-angiotensin system.\n", "Section::::Interpersonal variability.\n", "BULLET::::- Stimulate the expression of cocaine and amphetamine regulated transcript (CART).\n\nThough rising blood levels of leptin do promote weight loss to some extent, its main role is to protect the body against weight loss in times of nutritional deprivation. Other factors also have been shown to effect long-term hunger and food intake regulation including insulin.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-15503
Why does bacon cook to a brown/red while other pork cuts cook to white?
It's the [Maillard reaction]( URL_0 ) that gives bacon its brown/red hue. This is caused by a reaction between the amino acids (proteins) and the sugars found in bacon. Regardless of the presence of nitrates, any bacon cooked at high temperatures will show this effect. The reaction can be elicited by searing most, if not all, meats. For instance, take a steak and place it in a hot cast-iron pan. Flip it after a few minutes, and you'll notice the same brown/red crust formed wherever meat met pan.
[ "The bacon used for the meal can vary somewhat depending on individual preference. Usually back bacon is used for the recipe, but other cuts of bacon are sometimes preferred. However, the bacon used is almost always cured. The traditional curing process is a long process which involves storing the bacon in salt, however, in modern times, mass-produced bacon is cured using brine which is less frequently injected into the meat to speed-up the process. The bacon can also be smoked which adds a depth of flavour which some people prefer. In Ireland, one can also purchase what is known as \"home-cured\" or \"hard-cured\" which is bacon cured over a long period and then stored for another long spell, wrapped in paper. This makes the bacon very salty, hard in texture and yellowish in colour.\n", "Bacon is distinguished from other salt-cured pork by differences in the cuts of meat used and in the brine or dry packing. Historically, the terms \"ham\" and \"bacon\" referred to different cuts of meat that were brined or packed identically, often together in the same barrel. Today, ham is defined as coming from the hind portion of the pig and brine specifically for curing ham includes a greater amount of sugar, while bacon is less sweet, though ingredients such as brown sugar or maple syrup are used for flavor. Bacon is similar to salt pork, which in modern times is often prepared from similar cuts, but salt pork is never smoked, and has a much higher salt content.\n", "BULLET::::- Back bacon contains meat from the loin in the middle of the back of the pig. It is a leaner cut, with less fat compared to side bacon. Most bacon consumed in the United Kingdom and Ireland is back bacon.\n\nBULLET::::- Collar bacon is taken from the back of a pig near the head.\n\nBULLET::::- Cottage bacon is made from the lean meat from a boneless pork shoulder that is typically tied into an oval shape.\n", "Bacon is cured through either a process of injecting with or soaking in brine, known as wet curing, or using plain crystal salt, known as dry curing. Bacon brine has added curing ingredients, most notably sodium nitrite (or less often, potassium nitrate), which speed the curing and stabilize color. Fresh bacon may then be dried for weeks or months in cold air, or it may be smoked or boiled. Fresh and dried bacon are typically cooked before eating, often by pan frying. Boiled bacon is ready to eat, as is some smoked bacon, but they may be cooked further before eating. Differing flavours can be achieved by using various types of wood, or less common fuels such as corn cobs or peat. This process can take up to eighteen hours, depending on the intensity of the flavour desired. \"The Virginia Housewife\" (1824), thought to be one of the earliest American cookbooks, gives no indication that bacon is ever \"not\" smoked, though it gives no advice on flavouring, noting only that care should be taken lest the fire get too hot. In early American history, the curing and smoking of bacon (like the making of sausage) seems to have been one of the few food-preparation processes not divided by gender.\n", "Pork is particularly common as an ingredient in sausages. Many traditional European sausages are made with pork, including chorizo, fuet, Cumberland sausage and salami. Many brands of American hot dogs and most breakfast sausages are made from pork. Processing of pork into sausages and other products in France is described as charcuterie.\n\nHam and bacon are made from fresh pork by curing with salt (pickling) or smoking. 
Shoulders and legs are most commonly cured in this manner for Picnic shoulder and ham, whereas streaky and round bacon come from the side (round from the loin and streaky from the belly).\n", "BULLET::::- Side bacon, or streaky bacon, comes from the pork belly. It has long alternating layers of fat and muscle running parallel to the rind. This is the most common form of bacon in the United States.\n\nBULLET::::- Pancetta is an Italian form of side bacon, sold smoked or unsmoked (\"aqua\"). It is generally rolled up into cylinders after curing, and is known for having a strong flavour.\n", "The so-called \"Iowa Chop\" is a thick center cut; the term was coined in 1976 by the Iowa Pork Producers Association. A \"Bacon Chop\" is cut from the shoulder end and leaves the pork belly meat attached. Pork chops are sometimes sold marinated to add flavor; marinades such as a chili sauce or a barbecue sauce are common. As pork is often cooked more thoroughly than beef, thus running the risk of drying out the meat, pork chops can be brined to maintain moistness. One could also wrap their pork chops in bacon to add further moistness during the cooking process.\n", "BULLET::::- Jowl bacon is cured and smoked cheeks of pork. Guanciale is an Italian jowl bacon that is seasoned and dry cured but not smoked.\n\nThe inclusion of skin with a cut of bacon, known as the 'bacon rind', varies, though is less common in the English-speaking world.\n\nSection::::Around the world.\n\nBacon is often served with eggs and sausages as part of a full breakfast.\n\nSection::::Around the world.:Australia and New Zealand.\n", "In the US and Europe, bacon is commonly used as a condiment or topping on other foods, often in the form of bacon bits. Streaky bacon is more commonly used as a topping in the US on such items as pizza, salads, sandwiches, hamburgers, baked potatoes, hot dogs, and soups. In the US, sliced smoked back bacon is used less frequently than the streaky variety, but can sometimes be found on pizza, salads, and omelettes.\n", "Bacon is defined as any of certain cuts of meat taken from the sides, belly or back that have been cured or smoked. In continental Europe, it is used primarily in cubes (lardons) as a cooking ingredient valued both as a source of fat and for its flavour. In Italy, besides being used in cooking, bacon (\"pancetta\") is also served uncooked and thinly sliced as part of an \"antipasto\". Bacon is also used for barding roasts, especially game birds. Bacon is often smoked with various wood fuels for up to ten hours. Bacon is eaten fried, baked, or grilled.\n", "The Bacon Explosion is made of bacon, sausage, barbecue sauce and barbecue seasoning or rub. The bacon is assembled in a weave to hold the sausage, sauce and crumbled bacon. Once rolled, the Bacon Explosion is cooked, basted, cut and served. The Bacon Explosion's creators produced a cookbook featuring the recipes which ultimately won the 2010 Gourmand World Cookbook Awards for \"Best Barbecue Book in the World\". The Bacon Explosion also won at the 2013 Blue Ribbon Bacon Festival.\n\nSection::::History and origin.\n", "The meat for turkey bacon comes from the whole turkey and can be cured or uncured, smoked, chopped, and reformed into strips that resemble bacon. Turkey bacon is cooked by pan-frying. Cured turkey bacon made from dark meat can be 90% fat free. 
The low fat content of turkey bacon means it does not shrink while being cooked and has a tendency to stick to the pan.\n\nSection::::Alternatives.:Macon.\n", "The third chapter details the industry outside the United Kingdom. The fourth chapter discusses the current practices of the bacon factory, including the stages in which the pigs are received, killed, branded and processed. The usage of the entire carcass is covered, from the blood to the fat and hair of the pig. Chapter five details the distribution and wholesale centers of the industry and the terms and regulations used. Chapter six details the selection and grading of the cuts, beginning with the most popular Wiltshire cut. Chapter seven and eight details the retail distribution of the bacon, and dividing the Wiltshire cut into different cuts and pricing. Chapter nine concludes with the retail distribution of the American and Canadian cuts. The book includes fold-out anatomical charts that were popular during the time.\n", "The USDA defines bacon as \"the cured belly of a swine carcass\", while other cuts and characteristics must be separately qualified (e.g. \"smoked pork loin bacon\"). \"USDA Certified\" bacon means that it has been treated for \"Trichinella\".\n\nThe canned meat Spam is made of chopped pork shoulder meat and ham.\n\nSection::::Industrial raw material.\n", "Wiltshire cure\n\nThe Wiltshire cure is a traditional English technique for curing bacon and ham. The technique originated in the 18th century in Calne, Wiltshire; it was developed by the Harris family. Originally it was a dry cure method that involved applying salt to the meat for 10–14 days. Storing the meat in cold rooms meant that less salt was needed. The Wiltshire cure has been a wet cure, soaking the meat in brine for 4–5 days, since the First World War. Smoking is not part of the process, although bacon is often smoked after being cured.\n", "Section::::Modernisation.\n\nProduction methods moved from the traditional \"dry-curing\" process of rubbing salt, spices and sugar into the bacon to the less labour-intensive \"wet-curing\" process in which the bacon is left to soak in brine. Wet curing can also be used to increase the water content of the meat to add bulk and to add sodium nitrate and phosphates to shorten the process, which can then take as little as six hours compared to 2–3 days for dry curing.\n", "Section::::Pork products.\n\nPork may be cooked from fresh meat or cured over time. Cured meat products include ham and bacon. The carcass may be used in many different ways for fresh meat cuts, with the popularity of certain cuts and certain carcass proportions varying worldwide.\n\nSection::::Pork products.:Fresh meat.\n", "The clams, bacon, and other ingredients are cooked in various ways depending on the recipe, and then added with breading to half the clam shell and baked or broiled (grilled from above) to a golden brown.\n\nThere are many variations on the dish, but the constant factor is the bacon: \"Bacon remains the major key to its success\", with some chefs recommending smoked bacon for its salty flavor and others advocating an unsmoked variety.\n\nSection::::History.\n", "Turkey bacon can be cooked by pan-frying or deep-frying. Cured turkey bacon made from dark meat can be 90% fat free. 
It can be used in the same manner as bacon (such as in a BLT sandwich), but the low fat content of turkey bacon means it does not shrink while being cooked and has a tendency to stick to the pan, thus making deep-frying a faster and more practical option.\n\nSection::::Alternative to pork bacon.\n", "The bacon martini was invented independently by Sang Yoon, owner of the gastropub Father's Office in Santa Monica, California, and P. Moss, owner of the Double Down Saloon in Las Vegas, Nevada. Sang Yoon made his bacon martini in 1998, inspired by the Bacon of the Month Club run by the Grateful Palate in Fairfield, California. P. Moss appears to have concocted his bacon martini the same year. Sang Yoon's version of the drink uses juniper-cured bacon, while P. Moss's method calls for hickory-smoked bacon.\n\nSection::::Preparation.\n", "The introduction contains a guide to \"the international world of bacon\", and Villas compares dishes Salt Pork and Pancetta; and Paprikaspeck and Bauchspeck. The introduction also includes a list of places to receive mail-order products. The book goes over the history of curing bacon, discussing the various international traditions. Smokehouses listed as resources in the book include Benton's in Tennessee, Newsom's in Kentucky, Edwards & Sons in Virginia, Nueske in Wisconsin, and Lazy H Smokehouse in Kirbyville, Texas. Villas instructs the reader in techniques of smoking bacon, how to buy and store it, and how to utilize bacon fat for cooking purposes.\n", "Lardons may be prepared from different cuts of pork, including pork belly and fatback, or from cured cuts such as bacon or salt pork. According to food writer Regina Schrambling, when the lardon is salt-cured but not smoked in the style of American bacon, \"the flavor comes through cleanly, more like ham but richer because the meat is from the belly of the pig, not the leg\". One food writer takes this as evidence that the French \"do bacon right\". The meat (fat) is usually cut into small strips or cubes about one centimeter ( inch) wide, then blanched or fried.\n", "Some of the meanings of bacon overlap with the German-language term \"Speck\". Germans use the term \"bacon\" explicitly for \"Frühstücksspeck\" ('breakfast \"Speck\"') which are cured or smoked pork slices. Traditional German cold cuts favor ham over bacon, however \"Wammerl\" (grilled pork belly) remains popular in Bavaria.\n", "Research has also found that bacon is treated with a chemical called sodium nitrite. This chemical preserves the red colour of the meat, keeping it looking fresh as opposed to turning grey. However, this chemical has been thought to lead to a number of health risks, including being a carcinogen. On the other hand, sodium nitrite has also been found to prevent botulism by limiting bacterial growth.\n", "Section::::Regional variations.:United States.\n\nIn American cuisine, bacon is most often made from pork bellies. Salt pork is made from pork bellies also, which is commonly used for making soups and stews.\n\nSection::::Futures.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-22991
How can the “value of the dollar” drop? Like, how can the value of currencies change?
The value of products increases, making them cost more money, which means people need to get paid more money; as such, the value of money has decreased. The population increasing at the rate it is has probably contributed more than anything, because the demand for everything increases.
[ "Both the 1930s episode and the outbreak of competitive devaluation that began in 2009 occurred during global economic downturns. An important difference with the 2010s period is that international traders are much better able to hedge their exposures to exchange rate volatility due to more sophisticated financial markets. A second difference is that during the later period devaluations have invariably been effected by nations expanding their money supplies—either by creating money to buy foreign currency, in the case of direct interventions, or by creating money to inject into their domestic economies, with quantitative easing. If all nations try to devalue at once, the net effect on exchange rates could cancel out leaving them largely unchanged, but the expansionary effect of the interventions would remain. So while there has been no collaborative intent, some economists such as Berkeley's Barry Eichengreen and Goldman Sachs's Dominic Wilson have suggested the net effect will be similar to semi-coordinated monetary expansion which will help the global economy.\n", "BULLET::::- Inflation levels and trends: Typically a currency will lose value if there is a high level of inflation in the country or if inflation levels are perceived to be rising. This is because inflation erodes purchasing power, thus demand, for that particular currency. However, a currency may sometimes strengthen when inflation rises because of expectations that the central bank will raise short-term interest rates to combat rising inflation.\n", "In recent years changes in the money supply have historically taken a long time to show up in the price level, with a rule of thumb lag of at least 18 months. More recently Alan Greenspan cited the time lag as taking between 12 and 13 quarters. Bonds, equities and commodities have been suggested as reservoirs for buffering changes in money supply.\n\nSection::::Causes and corresponding types.:Credit deflation.\n", "The demand for money means demand to hold real cash balances. If the money supply is increased to an amount beyond which the public desires to hold (from MS to MS'), this is interpreted as a movement from O to A, as illustrated in our figure. With the increase in supply of money, people find themselves with larger money balances than they wish to hold and thus reside temporarily at point A. If we assume that there has been no change in the demand for money, these excesses will be spent on goods, services or financial assets thereby increasing their prices, leading to a movement from point A to new equilibrium point B. The increase in the aggregate price level (P* P') is associated with excess supplies of money which reflects these individual increases. The price level continues to rise with the increase in spending of the excess money balances and eventually reaches point B where the higher nominal supply of money is held at higher price (1/P', where P' P*).\n", "A market-based exchange rate will change whenever the values of either of the two component currencies change. A currency becomes more valuable whenever demand for it is greater than the available supply. It will become less valuable whenever demand is less than available supply (this does not mean people no longer want money, it just means they prefer holding their wealth in some other form, possibly another currency).\n", "The decline in the value of the U.S. dollar corresponds to price inflation, which is a rise in the general level of prices of goods and services in an economy over a period of time. 
A consumer price index (CPI) is a measure estimating the average price of consumer goods and services purchased by households. The United States Consumer Price Index, published by the Bureau of Labor Statistics, is a measure estimating the average price of consumer goods and services in the United States. It reflects inflation as experienced by consumers in their day-to-day living expenses. A graph showing the U.S. CPI relative to 1982–1984 and the annual year-over-year change in CPI is shown at right.\n", "In addition to the trade deficit, the U.S. dollar's decline was linked to a variety of other factors, including a major spike in oil prices. Economists such as Alan Greenspan suggested that another reason for the decline of the dollar was its decreasing role as a major reserve currency. Chinese officials signaled plans to diversify the nation's $1.9 trillion reserve in response to a falling U.S. currency which also set the dollar under pressure.\n", "After the cancellation of the conversion of dollars into gold, the United States forced foreign central banks to buy United States treasuries that are used to finance the federal deficit and large military. In exchange for providing a net surplus of assets, commodities, debt financing, goods and services, foreign countries are forced to hold an equal amount of United States treasuries. It drives United States interest rates down, which drives down the dollar's foreign exchange rate.\n", "Section::::Competitive devaluation in 2009.\n\nFollowing the financial crisis of 2008 widespread concern arose among advanced economies concerning the size of their deficits; they increasingly joined emerging economies in viewing export-led growth as their ideal strategy. In March 2009, even before international cooperation reached its peak with the 2009 G-20 London Summit Economist Ted Truman became one of the first to warn of the dangers of competitive devaluation breaking out. He also coined the phrase \"competitive non-appreciation\".\n\nOn 27 September 2010, Brazilian Finance Minister Guido Mantega said that the world is \"in the midst of an international currency war.\"\n", "Section::::Prospects.\n\nIn recent years there has been a revival of interest in lending in domestic currency, especially in Latin America, and there is evidence that public debt has in fact become less dollarized. This revival may represent an attempt to \"lean against the wind\" in the face of expectations of currency appreciation as well as a response to the collapse of Argentina's Convertibility regime, which illustrated the macroeconomic risks of extensive DLD.\n", "When home prices went down, the Federal Reserve kept its loose monetary policy and lowered interest rates; the attempt to slow price declines in one asset class, e.g. real estate, may well have caused prices in other asset classes to rise, e.g. commodities.\n\nSection::::Link with inflation.:Rates of growth.\n\nIn terms of percentage changes (to a close approximation, under low growth rates), the percentage change in a product, say XY, is equal to the sum of the percentage changes %ΔX + %ΔY). So, denoting all percentage changes as per unit of time, \n\nThis equation rearranged gives the basic inflation identity:\n", "For example, suppose a government has set 10 units of its currency equal to one US dollar. To revalue, the government might change the rate to 9.9 units per dollar. This would result in that currency being slightly more expensive to people buying that currency with U.S. 
dollars than previously and the US dollar costing slightly less to those buying it with foreign currency.\n\nSection::::Causes.\n", "By 2009 some of the conditions required for a currency war had returned, with a severe economic downturn seeing global trade in that year decline by about 12%. There was a widespread concern among advanced economies about the size of their deficits; they increasingly joined emerging economies in viewing export led growth as their ideal strategy. In March 2009, even before international co-operation reached its peak with the 2009 G-20 London Summit , economist Ted Truman became one of the first to warn of the dangers of competitive devaluation. He also coined the phrase \"competitive non-appreciation\".\n", "Section::::Theories.:Coordination games.\n\nMathematical approaches to modeling financial crises have emphasized that there is often positive feedback between market participants' decisions (see strategic complementarity). Positive feedback implies that there may be dramatic changes in asset values in response to small changes in economic fundamentals. For example, some models of currency crises (including that of Paul Krugman) imply that a fixed exchange rate may be stable for a long period of time, but will collapse suddenly in an avalanche of currency sales in response to a sufficient deterioration of government finances or underlying economic conditions.\n", "Other commentators including world statesmen such as Manmohan Singh and Guido Mantega suggested a \"currency war\" was indeed underway and that the leading participants are China and the US, though since 2009 many other states have been taking measures to either devalue or at least check the appreciation of their currencies. The US does not acknowledge that it is practicing competitive devaluation and its official policy is to let the dollar float freely. While the US has taken no direct action to devalue its currency, there is close to universal consensus among analysts that its quantitative easing programmes exert downwards pressure on the dollar.\n", "Currency risk is the risk that foreign exchange rates or the implied volatility will change, which affects, for example, the value of an asset held in that currency. Currency fluctuations in the marketplace can have a drastic impact on an international firm's value because of the price effect on domestic and foreign goods, as well as the value of foreign currency denominate assets and liabilities. When a currency appreciates or depreciates, a firm can be at risk depending on where they are operating and what currency denominations they are holding. The fluctuation in currency markets can have effects on both the imports and exports of an international firm. For example, if the euro depreciates against the dollar, the U.S. exporters take a loss while the U.S. importers gain. This is because it takes less dollars to buy a euro and vice versa, meaning the U.S. wants to buy goods and the EU is willing to sell them; it's to expensive for the EU to import from U.S. at this time.\n", "A change in exchange rates can be a cause of loss (or gain) in international trade. For example, now, we suppose that the euro to U.S. dollar exchange rate is 1.00 (€1.00 buys one dollar). In addition, we suppose that an importer of U.S. buys goods for €1000 from a European exporter and the U.S. importer will pay the money 1 month later. If the euro to U.S. dollar exchange rate is 2.00 one month later, the U.S. 
importer must prepare €1000 with $2000 in the foreign exchange market for the payment. On the other hand, If the euro to U.S. dollar exchange rate is 0.50 one month later, the U.S. importer is able to prepare €1000 with $500 for the payment.\n", "The 'first generation' of models of currency crises began with Paul Krugman's adaptation of Stephen Salant and Dale Henderson's model of speculative attacks in the gold market. In his article, Krugman argues that a sudden speculative attack on a fixed exchange rate, even though it appears to be an irrational change in expectations, can result from rational behavior by investors. This happens if investors foresee that a government is running an excessive deficit, causing it to run short of liquid assets or \"harder\" foreign currency which it can sell to support its currency at the fixed rate. Investors are willing to continue holding the currency as long as they expect the exchange rate to remain fixed, but they flee the currency \"en masse\" when they anticipate that the peg is about to end.\n", "A nation's current account balance is influenced by numerous factors – its trade policies, exchange rate, competitiveness, forex reserves, inflation rate and others.\n\nSince the trade balance (exports minus imports) is generally the biggest determinant of the current account surplus or deficit, the current account balance often displays a cyclical trend. During a strong economic expansion, import volumes typically surge; if exports are unable to grow at the same rate, the current account deficit will widen. Conversely, during a recession, the current account deficit will shrink if imports decline and exports increase to stronger economies.\n", "Currency War of 2009–11\n\nThe Currency War of 2009–2011 was an episode of competitive devaluation which became prominent in the financial press in September 2010. Competitive devaluation involves states competing with each other to achieve a relatively low valuation for their own currency, so as to assist their domestic industry. With the financial crises of 2008 the export sectors of many emerging economies have experienced declining orders, and from 2009 several states began or increased their levels of intervention to push down their currencies.\n", "\"The volatility and interest rates found its way into commodity inputs and all sectors of the world economy.\"\n\nHence, in the case of an economic crisis commodities prices follow the trends in exchange rate (coupled) and its prices decrease in case there are downward trends of diminishing money supply.\n\nForeign exchange impacts commodities prices and so does money supply: the advent of a crisis will pull commodities prices down.\n\nSection::::Boom.\n", "Monetary-disequilibrium is a short-run phenomenon as it contains within itself the process by which a new equilibrium is established i.e. through changes in the price level. If the demand for real balances changes, either the nominal money supply or price level can adjust to monetary equilibrium in the long run as seen from the figure.\n", "Several countries use a crawling peg model, wherein currency is devalued at a fixed rate relative to the dollar. For example, the Nicaraguan córdoba is devalued by 5% per annum.\n\nBelarus, on the other hand, pegged its currency, the Belarusian ruble, to a basket of foreign currencies (U.S. dollar, euro and Russian ruble) in 2009. 
In 2011 this led to a currency crisis when the government became unable to honor its promise to convert Belarusian rubles to foreign currencies at a fixed exchange rate. BYR exchange rates dropped by two thirds, all import prices rose and living standards fell.\n", "Reuters suggested that both China and the United States were \"winning\" the currency war, holding down their currencies while pushing up the value of the Euro, the Yen, and the currencies of many emerging economies.\n", "By late February Bloomberg reported that talk of currency war had subsided, with several emerging economies choosing to allow currency appreciation as a way to combat inflation.\n\nFebruary saw the US dollar fall to its lowest level since 1973, based on comparison against a weighted basket composed of the currencies of its major trading partners and analysts began to converge on the view that the Feds QE2 program would begin to wind up by the middle of the year.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-15817
How does water boarding work? Why is it so traumatic? Are you actually being deprived of oxygen? If you know they don’t want to kill you, could you just wait it out?
Go into your shower. Wet your washcloth fully. Place it over your face. Try to breathe. Now make that 100 times worse. It's horrifying.
[ "Recommended equipment common to these tables includes:\n\nBULLET::::- a means of securely holding the casualty at a measured depth, such as a harness and 20 metre lazy shot line with a 20 kg lead weight at the bottom and a buoy at the top of at least 40 litres buoyancy\n\nBULLET::::- a means of allowing the casualty to ascend slowly, such as loops in the line to which the harness could be clipped\n\nBULLET::::- full face diving masks for the casualty and for an in-water attendant diver with two-way communication to the surface and an umbilical gas supply system\n", "Section::::Curriculum.:Resistance and escape.\n\nTraining on how to survive and resist the enemy in the event of capture is largely based on the experiences of past U.S. prisoners of war.\n\nSection::::Curriculum.:Water survival.\n\nHow to survive in water is taught at a separate Professional Military Education (PME) course; it takes three days and is typically attended after the main SERE course. In addition to training in the use of aquatic survival gear, more academic skills include first aid tailored to an aquatic environment, communication protocols, ocean ecology, and equipment maintenance.\n\nSection::::Curriculum.:Code of conduct.\n", "Surface water rescue\n\nSurface Water Rescue is defined as the rescue of a patient who is afloat on the surface of a body of water.\n", "Holden testified that \"six soldiers came into the cubicle where I was being held and grabbed me. They held me down on the floor and one of them placed a towel over my face, and they got water and they started pouring the water through the towel all round my face, very slowly. After a while you can't get your breath but you still try to get your breath, so when you were trying to breathe in through your mouth you are sucking the water in, and if you try to breathe in through your nose, you are sniffing the water in. It was continual, a slow process, and at the end of it you basically feel like you are suffocating.\"\n", "BULLET::::- a means of securely holding the casualty at a measured depth, such as a harness and 20 metre line with a 20 kg lead weight at the bottom and 40 litre buoy at the top\n\nBULLET::::- a means of allowing the casualty to ascend slowly, such as loops in the line to which the harness could be clipped\n\nBULLET::::- full face diving masks for the casualty and for an in-water tending diver including two-way communication to the surface and an umbilical gas supply\n\nBULLET::::- surface supplied breathing gases including pure oxygen and air delivered to the casualty by umbilical\n", "Boot Camp - Lee completed boot camp from Orlando Florida company C246. This tragedy started before his entry in SAR school. During basic swimming tests, it became clear to EVERYONE that Lee was afraid of the water. It took at least 5 tries to get him to pass the tread water requirements. The company commander, BMC Treacy kept commenting \"and you are going to be a rescue swimmer my a**. Lee told me, his rack mate, that he was terrified of the water and knew he would \"never\" make it to the Rescue Swimmer and would have to reclass. \n", "BULLET::::- Cierre de Houdini (Houdini's Escape): Participantes are handcuffed and zipped inside a large plastic bag, which is then submerged. Participantes must release one wrist from a cuff using a key tied to the cuff, unzip oneself from the bag, resurface, and swim to the edge of the pool. They must also signal the safety divers in case they want to quit from the stunt. 
The one with the slowest time or spent the least time underwater before quitting must leave for the Philippines.\n", "BULLET::::- Cofre en el Agua (Coffin in Water): Participantes lay down inside a transparent glass coffin being submerged underwater. At the signal of a pilot light, the Participante must open a lock using one of three keys to release themselves from the coffin and swim toward the edge of the pool marked with the logo. The two who finished the stunt the fastest would be exempted from further competing in the round. In case no one finished the stunt, the lengths of time the Participantes lasted underwater will also be assessed.\n", "BULLET::::- Hypothermia\n\nBULLET::::- Adequacy of Equipment in Remote Areas,\n\nBULLET::::- Seasickness,\n\nBULLET::::- Operator Expertise and Training,\n\nBULLET::::- Safety of the Diving Attendant and the Boat Tenders,\n\nBULLET::::- Requirement for Medical Supervision,\n\nBULLET::::- Transport Availability,\n\nBULLET::::- Misuse of Equipment,\n\nBULLET::::- Pulmonary Barotrauma Cases.\n\nIn 2018, a group of diving medical experts issued a consensus guideline on pre-hospital decompression sickness management and concluded that IWR is only appropriate in groups that have been trained in IWR techniques.\n\nSection::::Equipment.\n\nSome of the equipment needed includes:\n", "Dry-boarding\n\nDry-boarding is a torture method that induces the first stages of death by asphyxiation. Unlike waterboarding, where water is poured on a wet cloth placed over a supine subject's airways, so their breathing slowly fills their lungs with water, dryboarding induces asphyxiation through stuffing the subject's airways with rags, then taping shut his mouth and nose. It is among techniques used by the United States during its war on terror: CIA and military agents under the Bush administration described this as among enhanced interrogation techniques. It has since legally been defined by US courts as torture.\n", "Water Safety Instructor\n\nThe Water Safety Instructor (Commonly referred to as \"WSI\") program is an aquatics program, specific to swim instructing, regulated and certified primarily through the Canadian Red Cross. It is also recognized by the American Red Cross.\n", "BULLET::::- Forced water ingestion: The prisoner was strapped to a table and forced to drink large amounts of water. Guards then jump on a board laid on the swollen stomach to force the water out.\n\nBULLET::::- Immersion in water: A plastic bag was placed over the prisoner's head and he was submerged in water for long periods of time.\n", "BULLET::::- Various First Aid Skills - Used to treat various injuries and sudden illnesses that can occur. Some of the various in-water skills taught are:\n\nBULLET::::- Active-Victim Rescue - An active victim rescue is designed to quickly remove and calm a victim from the water. Depending on whether the victim is facing you or facing away, changes the rescue.\n", "Potable water diving\n\nPotable water diving is diving in a tank for potable water. This is usually done for inspection and cleaning tasks. A person who is trained to do this work may be described as a potable water diver. The risks to the diver associated with potable water diving are related to the access, confined spaces and outlets for the water. 
The risk of contamination of the water is managed by isolating the diver in a clean dry-suit and helmet or full-face mask which are decontaminated before the dive.\n\nSection::::Scope.\n", "Sarah Polley, who was nine years old at the time of filming, described it as a traumatic experience. \"[I]t definitely left me with a few scars ... It was just so dangerous. There were so many explosions going off so close to me, which is traumatic for a kid whether it's dangerous or not. Being in freezing cold water for long periods of time and working endless hours. It was physically grueling and unsafe.\"\n", "Dr. Jerald Ogrisseg, former head of Psychological Services for the Air Force SERE School has stated in testimony before the U.S. Senate's Committee on Armed Services that there are fundamental differences between SERE training and what occurs in real world settings. Dr. Ogrisseg further states that his experience is limited to SERE training, but that he did not believe waterboarding to be productive in either setting.\n\nJane Mayer wrote for The New Yorker:\n\nand continues to report:\n", "In the most common method of waterboarding, the captive's face is covered with cloth or some other thin material, and the subject is immobilized on their back at an incline of 10 to 20 degrees. Torturers pour water onto the face over the breathing passages, causing an almost immediate gag reflex and creating a drowning sensation for the captive.\n", "Personnel directly involved with support should be qualified to a minimum of an operations level while everyone else working in and around the scene should hold a minimum of an awareness qualification.\n\nAs with any rescue discipline, the knowledge and skill required to perform a rescue is not neatly packaged. For example, while performing a surface water rescue, a rescue team may utilize many skills that include search techniques, rope-work and rigging, emergency patient care, and a functional knowledge of confined space, swift-water, and dive recovery. Therefore, an effective rescue team will be trained with multiple technical disciplines.\n", "Most hyperbaric treatment is done in hyperbaric chambers where environmental hazards can be controlled, but occasionally treatment is done in the field by in-water recompression when a suitable chamber cannot be reached in time. The risks of in-water recompression include maintaining gas supplies for multiple divers and people able to care for a sick patient in the water for an extended period of time.\n\nSection::::Background.\n\nRecompression of diving casualties presenting symptoms of decompression sickness has been the treatment of choice since the late 1800s. This acceptance was primarily based on clinical experience.\n", "...they picked up the plank to which I was still attached and carried me into the kitchen. ... fixed a rubber tube to the metal tap which shone just above my face. He wrapped my head in a rag... When everything was ready, he said to me: 'When you want to talk, all you have to do is move your fingers.' And he turned on the tap. The rag was soaked rapidly. Water flowed everywhere: in my mouth, in my nose, all over my face. But for a while I could still breathe in some small gulps of air. I tried, by contracting my throat, to take in as little water as possible and to resist suffocation by keeping air in my lungs for as long as I could. But I couldn’t hold on for more than a few moments. I had the impression of drowning, and a terrible agony, that of death itself, took possession of me. 
In spite of myself, all the muscles of my body struggled uselessly to save me from suffocation. In spite of myself, the fingers of both my hands shook uncontrollably. ‘That’s it! He’s going to talk,’ said a voice.\n", "Within Canada, any significant sized body of water, whether in mid-summer or winter, is considered cold water. Although multiple agencies respond to such rescues, including police, fire department, and Emergency medical services, their functions, responsibilities, and level of training for such a technical rescue are quite different. As such, a best-practice will identify and adopt industry standards that include specific training and equipment. This supports the opinion that any individual entering the water for the purpose of rescue should be trained to the level of a Rescue Technician.\n", "BULLET::::- Decompression comprises an approximated continuous ascent with stops every 2 fsw as shown in the graphic profile, with a stop at 4 fsw for 4 hours to avoid inadvertent loss of pressure due to seal failure at low pressure differences.\n\nSection::::Hyperbaric chamber treatment schedules.:US Navy Treatment Table 8.\n\nUse: Mainly for treating deep uncontrolled ascents when more than 60 minutes of decompression have been omitted.\n\nBULLET::::- Treatment table 8 is included in the US Navy Diving Manual Revision 6 and is currently authorized for use.\n\nBULLET::::- Adapted from Royal Navy Treatment Table 65.\n", "This three weeks training is especially to waterborne special forces such as STAR and PASKAL. Trainees who pass the Pre-Basic Commando Course will be trained in the physical aspect of land and water. This training is the final selection process before trainees sent to the Basic Commando Course.\n\nTrainees need to pass two fitness test and one 'drown-proofing' test:\n\nBULLET::::- Fitness Test (Land)\n\nBULLET::::- run under 34 minutes, 50 push-ups under 90 seconds. 50 sit-ups under 90 seconds, rope climbing etc.\n\nBULLET::::- Fitness Test (Water)\n", "The South African Truth and Reconciliation Commission received testimony from Charles Zeelie and Jeffrey Benzien, officers of the South African Police under Apartheid, that they used waterboarding, referred to as \"tubing\", or the \"wet bag technique\" on political prisoners as part of a wide range of torture methods to extract information. Specifically, a cloth bag was wet and placed over victim's heads, to be removed only when they were near asphyxiation; the procedure was repeated several times. The TRC concluded that the act constituted torture and a gross human rights violation, for which the state was responsible.\n", "I talked with one soldier who lay shivering in a bunk in the hospital coach. He had no visible sign of injury but his face was a ghastly green shade. He wanted more blankets and a cigarette, and I gave him both. An hour later, I helped move his body to the other coach.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-07018
Why do buildings only show the elevators' locations when you're on the ground floor but not when you're on other floors?
A floor indicator adds to the cost, so builders just put one at the most heavily used location -- the lobby.
[ "There are some cases, especially in shopping malls in the Philippines, that the floor numbering in the elevator does not align with the floor numbering that is created by the management. However, in order to avoid confusion from mall visitors, the usage of the management's floor numbering in advertising is more prevalent than the one posted in the elevators.\n", "An arrangement often found in high rise public housing blocks, particularly those built in the United Kingdom during the 1960s and 1970s, is that elevators would only call at half the total number of floors, or at an intermediate level between a pair of floors; for example a lift of a 24-storey building would only stop at 12 levels, with staircases used to access the \"upper\" or \"lower\" level from each intermediate landing. This halves any building costs associated with elevator shaft doors. Where the total traffic necessitates a second lift the alternate floors strategy is sometimes still applied, not only for the doorway reduction but also, provisionally upon the passengers preferring no particular floor beyond capacity, it tends toward halving the total delay imposed by the stops en-route.\n", "In a real building, there are complicated factors such as: the tendency of elevators to be frequently required on the ground or first floor, and to return there when idle; lopsided demand where everyone wants to go down at the end of the day; people on the lower floors being more willing to take the stairs; or the way full elevators ignore external floor-level calls. These factors tend to shift the frequency of observed arrivals, but do not eliminate the paradox entirely. In particular, a user very near the top floor will perceive the paradox even more strongly, as elevators are infrequently present or required above their floor.\n", "Elevators may feature talking devices as an accessibility aid for the blind. In addition to floor arrival notifications, the computer announces the direction of travel (OTIS is well known for this in some of their GEN2 model elevators), and notifies the passengers before the doors are to close.\n", "The same destination scheduling concept can also be applied to public transit such as in group rapid transit.\n\nSection::::Special operating modes.\n\nSection::::Special operating modes.:Anti-crime protection.\n\nThe anti-crime protection (ACP) feature will force each car to stop at a pre-defined landing and open its doors. This allows a security guard or a receptionist at the landing to visually inspect the passengers. The car stops at this landing as it passes to serve further demand.\n\nSection::::Special operating modes.:Up peak.\n", "In addition to the call buttons, elevators usually have floor indicators (often illuminated by LED) and direction lanterns. The former are almost universal in cab interiors with more than two stops and may be found outside the elevators as well on one or more of the floors. Floor indicators can consist of a dial with a rotating needle, but the most common types are those with successively illuminated floor indications or LCDs. Likewise, a change of floors or an arrival at a floor is indicated by a sound, depending on the elevator.\n", "To prevent this problem, in one implementation of destination control, every user is given an RFID card, for identification and tracking, so that the system knows every user call and can cancel the first call if the passenger decides to travel to another destination, preventing empty calls. 
The newest invention knows even where people are located and how many on which floor because of their identification, either for the purposes of evacuating the building or for security reasons. Another way to prevent this issue is to treat everyone travelling from one floor to another as one group and to allocate only one car for that group.\n", "It can also improve accessibility, as a mobility-impaired passenger can move to his or her designated car in advance.\n\nInside the elevator there is no call button to push, or the buttons are there but they cannot be pushed — except door opening and alarm button — they only indicate stopping floors.\n\nThe idea of destination control was originally conceived by Leo Port from Sydney in 1961, but at that time elevator controllers were implemented in relays and were unable to optimise the performance of destination control allocations.\n", "Many tall buildings use elevators in a non-standard configuration to reduce their footprint. Buildings such as the former World Trade Center Towers and Chicago's John Hancock Center use sky lobbies, where express elevators take passengers to upper floors which serve as the base for local elevators. This allows architects and engineers to place elevator shafts on top of each other, saving space. Sky lobbies and express elevators take up a significant amount of space, however, and add to the amount of time spent commuting between floors.\n", "In Spain, the level above ground level (the mezzanine) is sometimes called \"entresuelo\" (\"entresòl\" in Catalan, etc., which literally means \"interfloor\"), and elevators may skip it. The next level is sometimes called \"principal\". The \"first floor\" can therefore be three levels above ground level. In Italy, in the ancient palaces the first floor is called \"piano nobile\" (\"noble floor\"), since the noble owners of the palace lived there.\n", "Elevators that reach the top tenant floor also require overhead machine rooms; those are sometimes put into full-size mechanical floors but most often into a mechanical penthouse, which can also contain communications gear and window-washing equipment. On most building designs this is a simple \"box\" on the roof, on others it is concealed inside a decorative spire. A consequence of this is that if the topmost mechanical floors are counted in the total, there can be no such thing as a true \"top-floor office\" in a skyscraper with this design.\n\nSection::::Mechanical concerns.\n", "Section::::Special operating modes.:Riot mode.\n\nIn the event of civil disturbance, insurrection, or rioting, management can prevent elevators from stopping at the lobby or parking areas, preventing undesired persons from using the elevators while still allowing the building tenants to use them within the rest of the building.\n\nSection::::Special operating modes.:Emergency power operation.\n", "Because the flooring tiles are rarely removed once equipment has been installed, the space below them is seldom cleaned, and fluff and other debris settles, making working on cabling underneath the flooring a dirty job. Smoke detectors under the raised floor can be triggered by workers disturbing the dust, resulting in false alarms.\n\nSection::::Cooling load implications.\n", "Mechanical floors are generally counted in the building's floor numbering (this is required by some building codes) but are accessed only by service elevators. 
Some zoning regulations exclude mechanical floors from a building's maximum square footage calculation permitting a significant increase in building sizes; this is the case in New York City. Sometimes buildings are designed with a mechanical floor located on the thirteenth floor, to avoid problems in renting the space due to superstitions about that number.\n\nSection::::Structural concerns.\n", "Before the widespread use of elevators, most residential buildings were limited to about seven stories. The wealthy lived on lower floors, while poorer residents—required to climb many flights of stairs—lived on higher floors. The elevator reversed this social stratification, exemplified by the modern penthouse suite.\n\nEarly users of elevators sometimes reported nausea caused by abrupt stops while descending, and some users would use stairs to go down. In 1894, a Chicago physician documented \"elevator sickness\".\n", "Elevators are typically controlled from the outside by a call box, which has up and down buttons, at each stop. When pressed at a certain floor, the button (also known as a \"hall call\" button) calls the elevator to pick up more passengers. If the particular elevator is currently serving traffic in a certain direction, it will only answer calls in the same direction unless there are no more calls beyond that floor.\n", "This layout is usually reflected in the internal elevator zoning. Since nearly all elevators require machine rooms above the last floor they service, mechanical floors are often used to divide shafts that are stacked on top of each other to save space. A transfer level or skylobby is sometimes placed just below those floors.\n", "Buildings and office furniture are often designed with cable management in mind; for instance, desks sometimes have holes to pass cables, and dropped ceilings, raised floors, and In Floor Cellular Raceway Systems provide easy access. Some cables have requirements for minimum bend radius or proximity to other cables, particularly power cables, to avoid crosstalk or interference. Power cables often need to be grouped separately and suitably apart from data cables, and only cross at right angles which minimizes electromagnetic interference.\n", "The Equitable Life Building completed in 1870 in New York City was thought to be the first office building to have passenger elevators. However Peter Ellis, an English architect, installed the first elevators that could be described as paternoster elevators in Oriel Chambers in Liverpool in 1868.\n", "Some taller buildings may have the Sabbath elevator alternate floors in order to save time and energy; for example, an elevator may stop at only even-numbered floors on the way up, and then the odd-numbered floors on the way down.\n\nSection::::Special operating modes.:Independent service.\n", "In Hawaii, the Hawaiian-language floor label uses the British System, but the English-language floor label uses the American system. For example, \"Papa akolu\" (P3) = Level 4 (4 or L4).\n\nSection::::Lift/elevator buttons.\n\nIn most of the world, elevator buttons for storeys above the ground level are usually marked with the corresponding numbers. In many countries, modern elevators also have Braille numbers—often mandated by law.\n\nSection::::Lift/elevator buttons.:European scheme.\n", "Elevator paradox\n\nThe elevator paradox is a paradox first noted by Marvin Stern and George Gamow, physicists who had offices on different floors of a multi-story building. 
Gamow, who had an office near the bottom of the building noticed that the first elevator to stop at his floor was most often going down, while Stern, who had an office near the top, noticed that the first elevator to stop at his floor was most often going up.\n", "In more modern buildings, elevator operators are still occasionally encountered. For example, they are commonly seen in Japanese department stores such as Sogo and Mitsukoshi in Japan and Taiwan, as well as high speed elevators in skyscrapers, as seen in Taipei 101, and at the Lincoln Center for the Performing Arts. Some monuments, such as the Space Needle in Seattle, the Eiffel Tower in Paris and the CN Tower in Toronto employ elevator operators to operate specialized or high-speed elevators, discuss the monument (or the elevator technology) and to help direct crowd traffic. \n\nSection::::Remaining examples.:New York City Subway stations.\n", "The invention of the elevator was a precondition for the invention of skyscrapers, given that most people would not (or could not) climb more than a few flights of stairs at a time. The elevators in a skyscraper are not simply a necessary utility like running water and electricity, but are in fact closely related to the design of the whole structure. A taller building requires more elevators to service the additional floors, but the elevator shafts consume valuable floor space. If the service core (which contains the elevator shafts) becomes too big, it can reduce the profitability of the building. Architects must therefore balance the value gained by adding height against the value lost to the expanding service core. Many tall buildings use elevators in a non-standard configuration to reduce their footprint. Buildings such as the former World Trade Center Towers and Chicago's John Hancock Center use sky lobbies, where express elevators take passengers to upper floors which serve as the base for local elevators. This allows architects and engineers to place elevator shafts on top of each other, saving space. Sky lobbies and express elevators take up a significant amount of space and add to the amount of time spent commuting between floors. Other buildings such as the Petronas Towers use double-deck elevators allowing more people to fit in a single elevator and reaching two floors at every stop. It is possible to use even more than two levels on an elevator although this has yet to be tried. The main problem with double-deck elevators is that they cause everyone in the elevator to stop when only people on one level need to get off at a given floor.\n", "BULLET::::- Marburg, Germany – some parts of the historic city core built on higher ground (Uppertown, \"Oberstadt\" in German) are accessible from the lower street level by elevators. These elevators are unique in servicing also various buildings partially embedded in the steep-sloping terrain\n\nBULLET::::- Monaco – seven elevators\n\nBULLET::::- Nagasaki, Japan – Skyway\n\nBULLET::::- Naples, Italy – three public elevators\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-01560
How do semitruck drivers know what route to take to avoid overpasses that are too low for the truck?
Interstates must have a clearance of no less than 16 feet. In cities it can be 14 feet, but at least one route through the city must still provide 16 feet. States generally also have these standards, or something very close, and the handful of exceptions on state routes are listed online, in trucking atlases, and on truck GPS systems. So if you've got a normal semi, and you use the highways and observe signs about truck restrictions, you're good to go. If you've got something taller than that, or an oversize load of any kind, it's a special case, and you've probably worked the route out in advance before you start driving. If there's an accident or something that forces you to detour, you just have to be really careful.
[ "There is no federal height limit, and states may set their own limits which range from (mostly on the east coast) to (west coast)., As a result, the majority of trucks are somewhere between and high. Truck drivers are responsible for checking bridge height clearances (usually indicated by a warning sign) before passing underneath an overpass or entering a tunnel. Not having enough vertical clearance can result in a \"top out\" or \"bridge hit,\" causing considerable traffic delays and costly repairs for the bridge or tunnel involved.\n", "Other obvious obstacles would be mountains and canyons. Truck prohibited routes sometimes create this same phenomenon, requiring a driver to drive several truck legal routes and approaching a destination from behind (essentially driving a fish hook shaped route), because the most direct route cannot accommodate heavy truck traffic.\n", "For overwidth or overheight trucks one escort vehicle will drive 2500' to 1 miles ahead of the truck to ensure the road ahead can accommodate the truck's oversize dimensions. This lead vehicle is usually equipped with a long pole (high-pole) that extends upward from the front bumper; its length is adjusted six to eight inches above the height of the truck's load or the tallest part of the load within a convoy. If the pole strikes any overhanging objects such as bridges, overhead signs, or power lines, the truck or convoy can be alerted and stopped or diverted long before an accident occurs. \n", "It has long been recognized that travel demand is influenced by network supply. The example of a new bridge opening where none was before inducing additional traffic has been noted for centuries. Much research has gone into developing methods for allowing the forecasting system to directly account for this phenomenon. Evans (1974) published a doctoral dissertation on a mathematically rigorous combination of the gravity distribution model with the equilibrium assignment model. The earliest citation of this integration is the work of Irwin and Von Cube, as related by Florian et al. (1975), who comment on the work of Evans:\n", "Since off-road versions do not have to drive on roads at highway speeds, a typical top speed is just . It is rare for these vehicles to be on highways, so it was very unusual when a pedestrian was accidentally struck and dragged by a yard truck at an intersection in Bellevue, Washington, in February 2014.\n", "Separated roadways for trucks are uncommon. One example is the New Jersey Turnpike, the northern portion of which features completely separated dual roadways, one reserved for passenger cars only, and the other open to both commercial and non-commercial traffic. Access ramps are provided to both roadways at major interchanges (Figure 12). Light trucks are considered as eligible vehicles on some HOV lanes if they carry the requisite persons. Restricted geometrics on many existing concurrent leftmost median lanes limit opportunities to serve large commercial trucks, and sight distance and other freeway lane prohibitions typically mean these vehicles cannot use leftmost lane treatments unless a separate roadway is provided with a minimum of two travel lanes. There are truck lanes on European motorways leading in and out of the ports in Rotterdam in the Netherlands. In the US, dedicated roadways for trucks are being studied and in at least several cases proposed, but no freeway examples currently exist in the US. 
Missouri is currently considering using dedicated roadways for trucks on I-70 across the state, and several U.S. port cities are examining truck lanes and roadways. Climbing lanes for trucks typically are built to improve safe operations on grades by separating slow moving heavy vehicles from the rest of traffic.\n", "Usually, the lowest bridge classification number (regardless of vehicle type or conditions of traffic flow) sets the load classification of a route. If no bridge is located on the route, the worst section of road governs the route's classification. Vehicles having higher load classifications than a particular route are sometimes able to use that route if a recon overlay or a special recon shows that a change in traffic control, such as making a bridge a single-flow crossing, would permit use of the route by heavier traffic.\n", "Interchange bypass lanes for trucks have been implemented in Southern California and Portland, Oregon, to improve safety by routing trucks around a major interchange typically containing left hand ramps. This design approach improves the merge condition affecting traffic operations at the interchange. Similar ramp options are provided for trucks on this separate roadway system as are provided for the mainlanes.\n", "Truck-mounted VMSes (also called Portable Changeable Message Signs or PCMS) are sometimes dispatched by highway agencies such as Caltrans to warn traffic of incidents such as accidents in areas where permanent VMSes aren't available or near enough as a preventive measure for reducing secondary accidents. They are often deployed in pairs so that the second VMS truck can take over when the traffic queue overtakes the first truck, requiring the first truck to reposition further upstream from the queue, to be effective. An optional third truck, the team leader, may be utilized for driving by and monitoring the incident itself, traffic patterns and delay times, to make strategic decisions for minimizing delays—analogous to spotter planes used in fighting forest fires.\n", "Load path analysis may be performed using the concept of a load transfer index, U*.. In a structure, the main portion of the load is transferred through the stiffest route. The U* index represents the internal stiffness of every point within the structure. Consequently, the line connecting the highest U* values is the main load path. In other words, the main load path is the ridge line of the U* distribution (contour) This method of analysis has been verified in physical experimentation.\n\nSection::::Load path calculation using U* index.\n", "Section::::Precursor steps.\n\nAlthough not identified as steps in the UTP process, a lot of data gathering is involved in the UTP analysis process. Census and land use data are obtained, along with home interview surveys and journey surveys. Home interview surveys, land use data, and special trip attraction surveys provide the information on which the UTP analysis tools are exercised.\n", "Although controlled traffic farming is still in its infancy as far as adoption is concerned (partially because the enabling technology of satellite guidance is still relatively new), there is a better engineering solution that would reduce tracked areas to less than 10%. 
This is not a recent concept, having been pioneered by Alexander Halkett in the 1850s and David Dowler in the 1970s, but the concept of a wide span vehicle is becoming increasingly attractive because of the other advantages it brings.\n", "Routable street centerlines take into account differences between northbound and southbound lanes on a freeway or turnpike. For example, to reach a point in the southbound lanes of a turnpike, service vehicles may need to drive north to the next exit then return on the southbound side. The analysis of a routable street network takes this into account so long as the event location is accurately reported. Routable systems account for barriers like lakes by calculating the distance of the driven route rather than a straight-line distance. It is assumed the service vehicle driver knows the shortest path or that all drivers make similar numbers of wrong turns.\n", "Position data is collected by antennas at locations in addition to fee collection locations. The New York State Department of Transportation (NYSDOT), for example, collect transponder information to provide real-time estimates of travel times between common destinations. By subtracting the time when vehicles pass under the first sign from the current time, the sign can display the expected travel time between the sign and the destination point ahead. This information is also used to determine the best times to schedule maintenance-related lane closures and for other traffic management purposes. According to NYSDOT, the individual tag information is encrypted, is deleted as soon as the vehicle passes the last reader, and is never made available to the Department.\n", "Generally, the Lead Pilot Car Operator functions to maintain an assured clear distance - - ahead of the Load. The Lead Pilot Car Operator frequently transmits his lane position, road name, lack of overhead obstruction up to and including a landmark or building and direction of travel. The Lead Pilot Car Operator is expected to observe any obstruction or complication that may require the Load to alter its path and transmit clear instructions to the Load and other necessary vehicles. Hearing the transmission from the Lead Pilot car, the Rear Pilot Car Operator ensures that the desired lane is clear and moves the Rear Pilot Car to operate in the now-clear lane. The Rear Pilot Car Operator transmit the clear lane availability to the Load. After passing the obstruction, the Lead Pilot car Operator informs the Load that the Lead Pilot car is now operating in a specific lane as it passes a specific landmark or building. The Rear Pilot Car Operator ensures and transmits that the desired lane is currently clear and moves the Rear Pilot Car to operate in the now-clear lane.\n", "CITE comprises District 7 of the Institute of Transportation Engineers, which consists of transportation professionals in more than 70 countries who are responsible for the safe and efficient movement of people and goods on streets, highways and transit systems.\n\nSection::::Software Tools.\n", "Some loads are weighed at the point of origin and the driver is responsible for ensuring weights conform to maximum allowed standards. This may involve using on-board weight gauges (load pressure gauges), knowing the empty weight of the transport vehicle and the weight of the load, or using a commercial weight scale. In route weigh stations check that gross vehicle weights do not exceed the maximum weight for that particular jurisdiction and will include individual axle weights. 
This varies by country, states within a country, and may include federal standards. The United States uses FMCSA federal standards that include bridge law formulas. Many states, not on the national road system, use their own road and bridge standards. Enforcement scales may include portable scales, scale houses with low speed scales or weigh-in-motion (WIM) scales.\n", "In this way, travellers can accurately assess their location, and road authorities can identify each bridge uniquely.\n\nSometimes, houses with RAPID numbering can also be used to determine the position. For example, house number 1530 is 15.3 km from the start of the highway.\n\nSection::::Safety.\n", "These are solutions that alert forklift drivers of the people found in its vicinity. Pedestrians must carry a radio frequency device (electronic tags) which, emit a signal when a truck detects them, alerting the driver of the potential risk of an accident. It detects both in the front and at the back and it differentiates between people and the usual obstacles found in warehouses. For this reason, the driver is only alerted when there is a pedestrian near the truck. \n\nThere are different solutions on the market:\n\nBULLET::::- Pedestrian Alert System PAS\n\nSection::::Manufacturers' worldwide ranking.\n", "In the U.S., a great deal of this research has been accomplished through partnerships between government agencies and government-supported research institutes housed at universities. Because U.S. regulation of occupational road safety is largely limited to the safety of large trucks and buses, these institutes have focused on providing the evidence base to support changes to the FMCSA safety regulations that cover these larger vehicles. Two research institutes that conduct much of the research on heavy-vehicle safety are the University of Michigan Transportation Research Institute (UMTRI) and the Virginia Tech Transportation Institute (VTTI).\n", "The primary algorithm used by the Ministry is known as the McMaster algorithm, designed by Professor Fred Hall of McMaster University, in Hamilton, Ontario. Incident Detection algorithms have also been widely used throughout the COMPASS-enabled area.\n", "BULLET::::- Height range – The maximum recommended incline of a yard ramp is 7 degrees or 1 in 8, though some yard ramps are capable of raising beyond this angle.\n\nBULLET::::- Movement of the yard ramp – Yard ramps are typically moved around using a simple tow bar which is pinned into the tow hitch on the back of most standard fork trucks, though some designs offer alternative methods, such as pushing the ramp around using pockets which accept the standard forks of a fork truck.\n", "Despite the conjecture, there is reduced chance of pedestrians using the truck apron as a sidewalk if there is a large vehicle approaching. Even when a truck apron is not present, most pedestrians will step back from the edge of the road when there is a large vehicle or semi-truck approaching for two reasons: to avoid the blast of air that accompanies large vehicles traveling at speed; because most people feel uncomfortable within close proximity to a large vehicle and will attempt to create distance by backing from the roadway as the vehicle approaches.\n", "Following increased pressure from \"The Times\" \"Cities Fit For Cycling\" campaign and from other media in Spring 2012, warning signs are now displayed on the backs of many HGVs. 
These signs are directed against a common type of accident which occurs when the large vehicle turns left at a junction: a cyclist trying to pass on the nearside can be crushed against the HGV's wheels, especially if the driver cannot see the cyclist. The signs, such as the winning design of the InTANDEM road safety competition launched in March 2012, advocate extra care when passing a large vehicle on the nearside.\n", "Truckers also use their four-way flashers going up a steep hill, on mountain roads and on ramps on expressways to let others know that they are traveling at a slow speed and to be cautious when approaching them.\n\nSection::::Visual signaling.:Greeting.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-04915
Why do hot and cold water make different noises when travelling through a pipe?
Water's density and viscosity change with temperature, so the flow through the pipe behaves slightly differently, which changes the noise it makes.
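As a rough back-of-the-envelope sketch (using approximate textbook property values for water, which are not part of the original answer), the viscosity change alone makes the flow in a household pipe noticeably more or less turbulent, which is one reason the sound differs:

# Rough illustration only: same pipe and flow speed, cold vs. hot water.
# Property values are approximate textbook figures, assumed for this sketch.
def reynolds_number(velocity_m_s, diameter_m, density_kg_m3, viscosity_pa_s):
    # Higher Reynolds number means more turbulent (and typically noisier) flow.
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

v, d = 1.5, 0.015  # ~1.5 m/s through a 15 mm pipe
cold = reynolds_number(v, d, 999.7, 1.31e-3)  # water at about 10 degrees C
hot = reynolds_number(v, d, 983.2, 0.47e-3)   # water at about 60 degrees C
print(round(cold), round(hot))  # roughly 17000 vs 47000: both turbulent, but far from equal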
[ "Heat pipes must be tuned to particular cooling conditions. The choice of pipe material, size, and coolant all have an effect on the optimal temperatures at which heat pipes work.\n", "The result is that the top pipe which received hot water, now has cold water leaving it at 20 °C, while the bottom pipe which received cold water, is now emitting hot water at close to 60 °C. In effect, most of the heat was transferred.\n\nSection::::Three current exchange systems.:Countercurrent flow—almost full transfer.:Conditions for higher transfer results.\n\nNearly complete transfer in systems implementing countercurrent exchange, is only possible if the two flows are, in some sense, \"equal\".\n", "formula_23\n\nwhere:\n\nformula_24\n\nformula_25\n\nSection::::Thermal entrance length.:Heat transfer.\n\nThe development of the temperature profile in the flow is driven by heat transfer determined conditions on the inside surface of the pipe and the fluid. Heat transfer may be a result of a constant heat flux or constant surface temperature. Constant heat flux may be caused by joule heating from a heat source, like heat tape, wrapped around the pipe. Constant temperature conditions may be produced by a phase transition, such as condensation of saturated steam on a pipe surface.\n", "Working fluids are chosen according to the temperatures at which the heat pipe must operate, with examples ranging from liquid helium for extremely low temperature applications (2–4 K) to mercury (523–923 K), sodium (873–1473 K) and even indium (2000–3000 K) for extremely high temperatures. The vast majority of heat pipes for room temperature applications use ammonia (213–373 K), alcohol (methanol (283–403 K) or ethanol (273–403 K)) or water (298–573 K) as the working fluid. Copper/water heat pipes have a copper envelope, use water as the working fluid and typically operate in the temperature range of 20 to 150 °C. Water heat pipes are sometimes filled by partially filling with water, heating until the water boils and displaces the air, and then sealed while hot.\n", "Temperature profiles for the pipes are formula_3 and formula_4 where \"x\" is the distance along the pipe. Assume a steady state, so that the temperature profiles are not functions of time. Assume also that the only transfer of heat from a small volume of fluid in one pipe is to the fluid element in the other pipe at the same position, i.e., there is no transfer of heat along a pipe due to temperature differences in that pipe. By Newton's law of cooling the rate of change in energy of a small volume of fluid is proportional to the difference in temperatures between it and the corresponding element in the other pipe:\n", "Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common types of heat exchangers include shell and tube, double pipe, extruded finned pipe, spiral fin pipe, u-tube, and stacked plate. Each type has certain advantages and disadvantages over other types.\n", "An oscillating heat pipe, also known as a pulsating heat pipe, is only partially filled with liquid working fluid. The pipe is arranged in a serpentine pattern in which freely moving liquid and vapor segments alternate. 
Oscillation takes place in the working fluid; the pipe remains motionless.\n\nSection::::Heat transfer.\n", "Section::::Initiation.:Flux.\n", "This is a popular arrangement where higher flow rates are required for limited periods. Water is heated in a pressure vessel that can withstand a hydrostatic pressure close to that of the incoming mains supply. A pressure reducing valve is sometimes employed to limit the pressure to a safe level for the vessel. In North America, these vessels are called \"hot water tanks\", and may incorporate an electrical resistance heater, a heat pump, or a gas or oil burner that heats water directly.\n", "In this example, hot water at 60 °C enters the top pipe. It warms water in the bottom pipe which has been warmed up along the way, to almost 60 °C. A minute but existing heat difference still exists, and a small amount of heat is transferred, so that the water leaving the bottom pipe is at close to 60 °C. Because the hot input is at its maximum temperature of 60 °C, and the exiting water at the bottom pipe is nearly at that temperature but not quite, the water in the top pipe can warm the one in the bottom pipe to nearly its own temperature. At the cold end—the water exit from the top pipe, because the cold water entering the bottom pipe is still cold at 20 °C, it can extract the last of the heat from the now-cooled hot water in the top pipe, bringing its temperature down nearly to the level of the cold input fluid (21 °C).\n", "Section::::Types of hydronic system.:Piping arrangements.\n\nHydronic systems may be divided into several general piping arrangement categories:\n\nBULLET::::- Single or one-pipe\n\nBULLET::::- Two pipe steam (direct return or reverse return)\n\nBULLET::::- Three pipe\n\nBULLET::::- Four pipe\n\nBULLET::::- Series loop\n\nSection::::Single-pipe steam.\n", "Hot water service piping can also be traced, so that a circulating system is not needed to provide hot water at outlets. The combination of trace heating and the correct thermal insulation for the operating ambient temperature maintains a thermal balance where the heat output from the trace heating matches the heat loss from the pipe. Self-limiting or regulating heating tapes have been developed and are very successful in this application.\n", "Thermally developed flow results in reduced heat transfer compared to developing flow because the difference between the surface temperature of the pipe and the mean temperature of the flow is greater than the temperature difference between surface temperature of the pipe and the temperature of the fluid near the pipe boundary.\n\nSection::::Concentration entrance length.\n", "Flow patterns in pipes are governed my the diameter of the pipe, the physical properties of the fluids and their flow rates. As velocity and gas-liquid ratio is increased, \"bubble flow\" transitions into \"mist flow\". At high liquid-gas ratios, liquid forms the continuous phase an at low values it forms the disperse phase. In plug and slug flow, gas flows faster than the liquid and the liquid forms a 'slug' which becomes detached and velocity decreases until the next liquid slug catches up.\n", "Water and gas taps have adjustable flow: gate valves are more progressive; ball valves more coarse, typically used in on-off applications. Turning a valve knob or lever adjusts flow by varying the aperture of the control device in the valve assembly. The result when opened in any degree is a choked flow. 
Its rate is independent of the viscosity or temperature of the fluid or gas in the pipe, and depends only weakly on the supply pressure, so that flow rate is stable at a given setting. At intermediate flow settings the pressure at the valve restriction drops nearly to zero from the Venturi effect; in water taps, this causes the water to boil momentarily at room temperature as it passes through the restriction. Bubbles of cool water vapor form and collapse at the restriction, causing the familiar hissing sound. At very low flow settings, the viscosity of the water becomes important and the pressure drop (and hissing noise) vanish; at full flow settings, parasitic drag in the pipes becomes important and the water again becomes silent.\n", "BULLET::::- Bursting discs, restriction orifices, strainers and filters, steam traps, moisture traps, sight-glasses, silencers, flares and vents, flame arrestors, vortex breakers, eductors\n\nBULLET::::- Process piping, sizes and identification, including:\n\nBULLET::::- Pipe classes and piping line numbers\n\nBULLET::::- Flow directions\n\nBULLET::::- Interconnections references\n\nBULLET::::- Permanent start-up, flush and bypass lines\n\nBULLET::::- Pipelines and flowlines\n\nBULLET::::- Blinds and spectacle blinds\n\nBULLET::::- Insulation and heat tracing\n\nBULLET::::- Process control instrumentation and designation (names, numbers, unique tag identifiers), including:\n\nBULLET::::- Valves and their types and identifications (e.g. isolation, shutoff, relief and safety valves, valve interlocks)\n", "This system can be difficult to balance due to the supply line being a different length than the return; the further the heat transfer device is from the boiler, the more pronounced the pressure difference. Because of this, it is always recommended to: minimize the distribution piping pressure drops; use a pump with a , include balancing and flow-measuring devices at each terminal or branch circuit; and use control valves with a at the terminals.\n\nSection::::Two-pipe reverse return system.\n", "BULLET::::- Tube Layout: refers to how tubes are positioned within the shell. There are four main types of tube layout, which are, triangular (30°), rotated triangular (60°), square (90°) and rotated square (45°). The triangular patterns are employed to give greater heat transfer as they force the fluid to flow in a more turbulent fashion around the piping. Square patterns are employed where high fouling is experienced and cleaning is more regular.\n", "Section::::Sound.\n\nUsually sound is understood in terms of pressure variations accompanied by an oscillating motion of a medium (gas, liquid or solid). In order to understand thermoacoustic machines, it is of importance to focus on the temperature-position variations rather than the usual pressure-velocity variations.\n", "Heat pipes have an envelope, a wick, and a working fluid. Heat pipes are designed for very long term operation with no maintenance, so the heat pipe wall and wick must be compatible with the working fluid. Some material/working fluids pairs that appear to be compatible are not. For example, water in an aluminum envelope will develop large amounts of non-condensable gas over a few hours or days, preventing normal operation of the heat pipe.\n", ", where formula_8 is the thermal energy per unit length and γ is the thermal connection constant per unit length between the two pipes. This change in internal energy results in a change in the temperature of the fluid element. 
The time rate of change for the fluid element being carried along by the flow is:\n\nwhere formula_11 is the \"thermal mass flow rate\". The differential equations governing the heat exchanger may now be written as:\n", "Section::::Initiation.\n", "and \"A\" and \"B\" are two as yet undetermined constants of integration. Let formula_22 and formula_23 be the temperatures at x=0 and let formula_24 and formula_25 be the temperatures at the end of the pipe at x=L. Define the average temperatures in each pipe as:\n\nUsing the solutions above, these temperatures are:\n\nChoosing any two of the temperatures above eliminates the constants of integration, letting us find the other four temperatures. We find the total energy transferred by integrating the expressions for the time rate of change of internal energy per unit length:\n", "Section::::Structure, design and construction.:Different types of heat pipes.\n\nIn addition to standard, Constant Conductance Heat Pipes (CCHPs), there are a number of other types of heat pipes, including:\n\nBULLET::::- Vapor Chambers (planar heat pipes), which are used for heat flux transformation, and isothermalization of surfaces\n\nBULLET::::- Variable Conductance Heat Pipes (VCHPs), which use a Non-Condensable Gas (NCG) to change the heat pipe effective thermal conductivity as power or the heat sink conditions change\n", "Section::::Circulator pump potential side effects.\n\nIt is important to take note of the increased heat in the piping system, which in turn increases system pressure. Piping that is sensitive to the water condition (i.e., copper, and soft water) will be adversely affected by the continual flow. Although water is conserved, the parasitic heat loss through the piping will be greater as a result of the increased heat passing through it.\n\nSection::::Quantitative measures of function.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-20833
How does a gas fridge keep things cold by making heat?
Gas-powered refrigerators are [absorption refrigerators]( URL_0 ), which are actually an older design than the compression refrigerators powered by electricity which we're used to. The single-pressure variant uses three different refrigerants (water, ammonia, and hydrogen) and depends on somewhat complex changes of the partial pressures between these in different parts of the system. But the main idea is that the heat from burning the gas (usually propane) is used to heat up an ammonia-water mixture to extract the ammonia. This goes through a heat exchanger, dumping its heat to the outside and condensing. The now liquid ammonia can be used to cool down the inside of the fridge, after which it is absorbed by water. Then it returns to the gas-powered boiler.
[ "Section::::Principle of operation.\n\nFigure 1 represents the Stirling-type single-orifice Pulse-Tube Refrigerator (PTR), which is filled with a gas, typically helium at a pressure varying from 10 to 30 bar. From left to right the components are:\n\nBULLET::::- a compressor, with a piston moving back and forth at room temperature \"T\"\n\nBULLET::::- a heat exchanger X where heat is released to the surroundings at room temperature\n", "In the tube the gas is thermally isolated (adiabatic), so the temperature of the gas in the tube vary with the pressure.\n\nAt the cold end of the tube, the gas enters the tube via X when the pressure is high with temperature \"T\" and return when the pressure is low with a temperature below \"T\", hence taking up heat from X : this gives the desired cooling effect at X.\n", "Refrigerators and freezers may be free-standing, or built into a kitchen.\n\nThree distinct classes of refrigerator are common:\n", "A typical run-around coil system comprises two or more multi-row finned tube coils connected to each other by a pumped pipework circuit. The pipework is charged with a heat exchange fluid, normally water, which picks up heat from the exhaust air coil and gives up heat to the supply air coil before returning again. Thus heat from the exhaust air stream is transferred through the pipework coil to the circulating fluid, and then from the fluid through the pipework coil to the supply air stream.\n", "The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but had no practical application at that time.\n", "Absorption cooling was invented by the French scientist Ferdinand Carré in 1858. The original design used water and sulphuric acid.\n\nIn 1922 Baltzar von Platen and Carl Munters, while they were still students at the Royal Institute of Technology in Stockholm, Sweden, enhanced the principle with a 3-fluid configuration. This \"Platen-Munters\" design can operate without a pump.\n", "In the late 19th century, the most common phase change refrigerant material for absorption cooling was a solution of ammonia and water. Today, the combination of lithium bromide and water is also in common use. One end of the system of expansion/condensation pipes is heated, and the other end gets cold enough to make ice. Originally, natural gas was used as a heat source in the late 19th century. Today, propane is used in recreational vehicle absorption chiller refrigerators. Innovative hot water solar thermal energy collectors can also be used as the modern \"free energy\" heat source.\n", "The thermal mass used in Michael Reynolds' design is a combination of a liquid (i.e. water or beer) together with concrete mass. Concrete's temperature can be decreased quickly, while a liquid's (such as beer or water) with its higher thermal mass requires more energy to change temperature, holding the cold for longer. In Michael Reynolds' design, the liquid is added in the form of beer cans, placed in the back of the refrigerator.\n", "BULLET::::- Absorption refrigerators may be used in caravans and trailers, and dwellings lacking electricity, such as farms or rural cabins, where they have a long history. They may be powered by any heat source: gas (natural or propane) or kerosene being common. 
Models made for camping and RV use often have the option of running (inefficiently) on 12 volt battery power.\n", "The pure ammonia gas then enters the condenser. In this heat exchanger, the hot ammonia gas transfers its heat to the outside air, which is below the boiling point of the full-pressure ammonia, and therefore condenses. The condensed (liquid) ammonia flows down to be mixed with the hydrogen gas released from the absorption step, repeating the cycle.\n\nSection::::See also.\n\nBULLET::::- Adsorption refrigeration\n\nBULLET::::- Icyball\n\nBULLET::::- Quantum absorption refrigerator\n\nBULLET::::- RV Fridge\n\nSection::::External links.\n\nBULLET::::- Absorption Heat Pumps (Office of Energy Efficiency and Renewable Energy).\n\nBULLET::::- Arizona Energy Explanation with diagrams\n", "BULLET::::- 1926 – General Electric Company introduced the first hermetic compressor refrigerator\n\nBULLET::::- 1929 - David Forbes Keith of Toronto, Ontario, Canada received a patent for the Icy Ball which helped hundreds of thousands of families through the Dirty Thirties.\n\nBULLET::::- 1933 – William Giauque and others – Adiabatic demagnetization refrigeration\n\nBULLET::::- 1937 – Pyotr Leonidovich Kapitsa, John F. Allen, and Don Misener discover superfluidity using helium-4 at 2.2 K\n\nBULLET::::- 1937 – Frans Michel Penning invents a type of cold cathode vacuum gauge known as Penning gauge\n\nBULLET::::- 1944 – Manne Siegbahn, the Siegbahn pump\n", "BULLET::::- From d to a. The lp valve is closed and the hp valve opened with fixed position of the displacer. The gas, now in the hot end of the cold head, is compressed and heat is released to the surroundings. In the end of this step we are back in position a.\n\nSection::::Pulse-tube refrigerators.\n", "The first gas absorption refrigeration system using gaseous ammonia dissolved in water (referred to as \"aqua ammonia\") was developed by Ferdinand Carré of France in 1859 and patented in 1860. Carl von Linde, an engineering professor at the Technological University Munich in Germany, patented an improved method of liquefying gases in 1876. His new process made possible the use of gases such as ammonia (NH), sulfur dioxide (SO) and methyl chloride (CHCl) as refrigerants and they were widely used for that purpose until the late 1920s.\n\nSection::::History.:Domestic refrigerator.\n", "The Crosley Icyball was an example of a gas-absorption refrigerator, as can be found today in recreational vehicles or campervans. Unlike most refrigerators, the Icyball had no moving parts, and instead of operating continuously, was manually cycled. Typically it would be charged in the morning, and provide cooling throughout the heat of the day.\n", "Section::::Refrigerator mother theory.\n", "In the 1890s gold miners in Australia developed the Coolgardie safe, based on the same principles.\n", "The first gas absorption refrigeration system using gaseous ammonia dissolved in water (referred to as \"aqua ammonia\") was developed by Ferdinand Carré of France in 1859 and patented in 1860. Carl von Linde, an engineering professor at the Technological University Munich in Germany, patented an improved method of liquefying gases in 1876. 
His new process made possible the use of gases such as ammonia, sulfur dioxide, and methyl chloride (CHCl) as refrigerants and they were widely used for that purpose until the late 1920s.\n\nSection::::See also.\n\nBULLET::::- Absorption refrigerator\n\nBULLET::::- Einstein refrigerator\n\nBULLET::::- Air conditioning\n\nBULLET::::- Flash evaporation\n\nBULLET::::- Heat pump\n", "BULLET::::- a regenerator consisting of a porous medium with a large specific heat (which can be stainless steel wire mesh, copper wire mesh, phosphor bronze wire mesh or lead balls or lead shot or rare earth materials to produce very low temperature) in which the gas flows back and forth\n\nBULLET::::- a heat exchanger X, cooled by the gas, where the useful cooling power formula_1 is delivered at the low temperature \"T\", taken from the object to be cooled\n\nBULLET::::- a tube in which the gas is pushed and pulled\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-12020
Do taller people have a slower reaction time than short or average height people due to longer nerves?
Yes, technically they do, but the difference is so small that it would not be remotely relevant to anything humans do.
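As a rough worked estimate (assuming textbook nerve conduction speeds on the order of 50-100 m/s; the figures below are illustrative and not from the original answer), even a generously long extra nerve path adds only a few milliseconds to a reaction that takes around 200 ms overall:

# Back-of-the-envelope estimate of the extra conduction delay for a much taller person.
# Both the extra path length and the conduction velocity are assumed, typical-order values.
extra_path_m = 0.3       # ~30 cm of additional nerve path for a noticeably taller person
velocity_m_s = 60.0      # large myelinated fibers conduct at roughly 50-100 m/s
typical_reaction_ms = 200.0

extra_delay_ms = extra_path_m / velocity_m_s * 1000  # = 5 ms
print(f"extra delay ~ {extra_delay_ms:.0f} ms, about {extra_delay_ms / typical_reaction_ms:.1%} of a typical reaction time")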
[ "Section::::Weightlifting.\n\nIn weightlifting shorter levers are advantageous and taller than average competitors usually compete in the + group. Short people also have a lower consumption of ATP and glycogen than a tall person to make the same proportional effort. So at professional level, it is more advantageous to have short stature.\n", "Section::::Taekwondo.\n\nIn taekwondo, a taller height gives the advantage of having a longer range and increased chance to strike a target when sparring. However, due to the length of the kicks, combinations and reflexes will not be as quick when compared with a fighter standing at a shorter height. A shorter height will also increase a lower centre of gravity giving a fighter better balance. \n\nSection::::Tennis.\n", "Lance has a very high tolerance of pain. He sometimes practices his heightening on himself. However, despite his famous reputation as the demonic Ignitor, he only kills when he is forced to.\n\nSection::::Characters.:Reide.\n\nLast name is currently unknown. \n\nAlso known as: The Viper\n\nAge: 23\n\nHeight: 180 cm\n\nTrigger: Repetition of a sequence of notes. (exact notes will be added)\n\nHeightening: (Amplificatory) Dramatically heightened brain assimilation. Results in faster judgement and reaction time. Average (heightened) reaction time: 0.06 seconds.\n", "In judo, height can work both ways. The shorter person tends to have an advantage (both offensively and defensively) in hip throws, dropping techniques i.e. drop seoi nage and wrestling style pickups such as ura nage. A low center of gravity allows better position for these throws. Taller players have an advantage in sweeps because they can often use these maneuvers before getting in range of the shorter person's kuzushi. They can also use their long legs to their advantage with many ashi waza techniques and throws with a significance on legs such as uchi mata and harai goshi. In ne waza (groundwork) having shorter limbs makes it more difficult to be submitted but also more difficult to use your legs to escape from hold downs. In contrast being taller can help with escaping hold downs and certain submissions i.e. sangaku jime. The 2013 judo rules somewhat shift or even out the advantages that shorter fighters had by no longer allowing leg grabs and no longer being allowed to prevent your opponent from taking grips.\n", "However, there are some advantages a being a shorter fencer. Shorter fencers generally have less target area to defend. Furthermore, it has been noted that if footwork abilities are equal, a smaller fencer finds it easier to co-ordinate their footwork when moving in and out of distance than taller opponents due to their lower center of balance. 
In certain instances they may have a slight advantage when infighting as they can retract and replace their weapon point with more ease.\n", "A recent study showed reduced activity in the TPJ of adolescents compared to adults during an extinction task, suggesting a role for the TPJ in anxiety disorders.\n\nSection::::Disorders.:Future of possible treatments.\n", "In fencing, it is generally advantageous (especially in épée) to be taller because a longer arm span allows one's weapon to reach one's opponent's body from a further distance as well as in some instances being easier to make more angulated attacks.\n", "A 1999 study that was conducted on a sample on 32,887 Swedish men, aged 18, free of growth defects showed that, by and large, shorter men (with 2 standard deviations below the mean) demonstrated poorer physical and psychological performance in the context of military service, with increased risk of musculoskeletal diagnoses. Additionally, increased height showed a relationship with increased mean intellectual performance and, under conditions of stress, shorter men showcased demonstrably worse leadership capability and psychological function.\n", "In Wistar rats, it was found that cell size is the crucial property in determining neuronal recruitment. Motor neurons of different sizes have similar voltage thresholds. Smaller neurons have higher membrane resistance and require lower depolarizing current to reach spike threshold. The cell size contribution to recruitment in motor neurons during postnatal development is investigated in this experiment. Experiments were done on 1- to 7-day-old Wistar rats and 20- to 30-day-old Wistar rats as well. The 1- to 7-day-old Wistar rats were selected because early after birth, the rats show an increase in cell size. In 20- to 30-day-old Wistar rats, the physiological and anatomical features of oculomotor nucleus motor neurons remain unchanged. Rat oculomotor nucleus motor neurons were intracellularly labelled and tested using electrophysical properties. The size principle applies to the recruitment order in neonatal motor neurons and also in the adult oculomotor nucleus. The increase in size of motor neurons led to a decrease in input resistance with a strong linear relationship in both age groups.\n", "Human locomotion is often examined from the perspective of the gait cycle. Cutaneous reflexes demonstrate variations in the muscles activated and the timing at which they are activated depending on which portion of the gait cycle the stimulation occurs. This variation suggests a functional role for the reflex to provide us with a smooth gait alteration when encountering or anticipating obstacles and challenging terrain. The major muscles impacted involve four (4) motions important to locomotion:\n\nSection::::Functional role.:Superficial fibular nerve (SF).\n", "There are generally two types of syndromes that cause short stature. One is disproportionate limb size on a normal size torso. The second is proportionate, where they are generally small for their average age. There are a variety of causes including skeletal dysplasia, chondrodysrophy, and growth hormone deficiencies. 
Short stature can cause a number of other disabilities including eye problems, joint defects, joint dislocation or limited range of movement.\n\nSection::::Disability groups.:Spinal cord injuries.\n\nPeople with spinal cord injuries compete in this class, including F1, F2 sportspeople.\n\nSection::::Disability groups.:Spinal cord injuries.:F1.\n", "In 2008, Hill, Hanton, Matthews, and Fleming studied sub-optimal performance in sports, also known as the phenomenon of \"choking\". They determined that when individuals were worried about negative evaluations by the audience, and performing tasks that they were not familiar with, they often would perform at a lower level than when they did without an audience.\n\nIn 2011, Anderson-Hanley, Snyder, Nimon, and Arciero found that older adults riding \"cybercycles\", virtual-reality enhanced stationary bikes with interactive competitions, exercised at higher rates than adults riding stationary bikes.\n", "Section::::Causes of conduction velocity deviations.:Anthropometric and other individualized factors.:Height.\n\nConduction velocities in both the Median sensory and Ulnar sensory nerves are negatively related to an individual's height, which likely accounts for the fact that, among most of the adult population, conduction velocities between the wrist and digits of an individual's hand decrease by 0.5 m/s for each inch increase in height. As a direct consequence, impulse latencies within the Median, Ulnar, and Sural nerves increases with height.\n\nThe correlation between height and the amplitude of impulses in the sensory nerves is negative.\n", "Reflexes can be very simple, as in the monosynaptic reflex, which only contains one synapse, or more complicated, as in the polysynaptic reflex, which involves more than one synapse. The knee jerk reflex is a common example of a monosynaptic reflex when one is looking at the quadriceps motor response of kicking your leg out. It can also be used as an example of a polysynaptic reflex when looking at the involvement of inhibitory interneurons to relax the hamstrings. The complexity of the reflex can be estimated by examining the time delay, or latency, between the electrical stimulation of the sensory neuron and the corresponding motor response, as measured by EMG (electromyography). Most reflexes can be categorized in one of three groups depending on the latency of EMG response. The short-latency reflex (SLR) is the fastest (~40-50 ms) and involves a mono-synaptic pathway. The medium-latency reflex (MLR) utilizes interneurons within the spinal cord and is typically ~80-90 ms. The long-latency reflex (LLR) is ~120-140 ms, suggesting that it is mediated by additional supraspinal input from the brain.\n", "The challenge and threat hypothesis states that people perform worse on complex tasks and better on simple tasks when in the presence of others because of the type of cardio-vascular response to the task. When performing a simple task in the presence of others, people show a normal cardiovascular response. However, when performing a complex task in the presence of others, the cardiovascular response is similar to that of a person in a threatening position. 
The normal cardiovascular response serves to improve performance, but the threat-like cardiovascular response serves to impede performance.\n\nSection::::Major theoretical approaches.:Evaluation approach.\n", "There is a large body of research in psychology, economics, and human biology that has assessed the relationship between several seemingly innocuous physical features (e.g., body height) and occupational success. The correlation between height and success was explored decades ago. Shorter people are considered to have an advantage in certain sports (e.g., gymnastics, race car driving, etc.), whereas in many other sports taller people have a major advantage. In most occupational fields, body height is not relevant to how well people are able to perform; nonetheless several studies found that success was positively correlated with body height, although there may be other factors such as gender or socioeonomic status that are correlated with height which may account for the difference in success.\n", "In opposite to many other established measurements methods like Chair Rising Test, Stand-up and Go test and others the maximum power output relative to body weight during a jump of maximum height measured by Mechanography is a much better reproducible and does not have a training effect even when repeated more frequently.\n", "When such discrepancies are taken into account in comparing two or more registers of patients with cerebral palsy and also the extent to which children with mild cerebral palsy are included, the incidence rates still converge toward the average rate of 2:1000.\n", "The capability of moving around without falling is necessary for activities of daily living (ADL's) . Patients exhibiting delay in the reaction time, decreased movement velocity, restricted LoS boundary or cone of stability, or uncontrolled CoG movement are at a higher risk of falling. A delayed reaction time suggests that the individual might have problems in cognitive processing. Reduced movement velocities are indicate high-level of central nervous system deficits. Reduced Endpoint excursions, excessively larger maximum excursions and poor directional control are all indicative of motor control abnormalities.\n", "Studies also show that the variation of limb stiffness is important when hopping, and that different people may control this stiffness variation in different ways. One study showed that adults had more feedforward neural control, muscle reflexes, and higher relative leg stiffness than their juvenile counterparts when performing a hopping task. This indicates that the control of stiffness may vary from person to person.\n\nSection::::Stiffness modulation.:Movement accuracy.\n", "The pressure drag is related to cross section size, the friction drag is related to total skin surface (which is generally higher in tall people), and the wave-making drag \"decreases\" with body length because a longer body will generally generate less waves due to a decreasing Froude number. Studies have found that total drag does not increase as swimmer height increases, mostly due to the decrease in wave-making drag. Since taller swimmers tend to have bigger muscles and bigger hands and feet to propel them, then they are generally at an advantage.\n", "A 2004 report citing a 2003 UNICEF study on the effects of malnutrition in North Korea, due to \"successive famines,\" found young adult males to be significantly shorter. 
In contrast South Koreans \"feasting on an increasingly Western-influenced diet,\" without famine, were growing taller. The height difference is minimal for Koreans over forty years old, who grew up at a time when economic conditions in the North were roughly comparable to those in the South, while height disparities are most acute for Koreans who grew up in the mid-1990s – a demographic in which South Koreans are about taller than their North Korean counterparts – as this was a period during which the North was affected by a harsh famine where hundreds of thousands, if not millions, died of hunger. A study by South Korean anthropologists of North Korean children who had defected to China found that eighteen-year-old males were 5 inches (13 cm) shorter than South Koreans their age due to malnutrition.\n", "In 2007, research by the University of Central Lancashire suggested that the Napoleon complex (described in terms of the theory that shorter men are more aggressive to dominate those who are taller than they are) is likely to be a myth. The study discovered that short men were less likely to lose their temper than men of average height. The experiment involved subjects dueling each other with sticks, with one subject deliberately rapping the other's knuckles. Heart monitors revealed that the taller men were more likely to lose their tempers and hit back. University of Central Lancashire lecturer Mike Eslea commented that \"when people see a short man being aggressive, they are likely to think it is due to his size, simply because that attribute is obvious and grabs their attention.\"\n", "Using electromyography (EMG), the neural strategies of muscle activation can be measured. Ramp-force threshold refers to an index of motor neuron size in order to test the size principle. This is tested by determining the recruitment threshold of a motor unit during isometric contraction in which the force is gradually increased. Motor units recruited at low force (low-threshold units) tend to be small motor units, while high-threshold units are recruited when higher forces are needed and involve larger motor neurons. These tend to have shorter contraction times than the smaller units. The number of additional motor units recruited during a given increment of force declines sharply at high levels of voluntary force. This suggests that, even though high threshold units generate more tension, the contribution of recruitment to increase voluntary force declines at higher force levels.\n", "Section::::Determinants of reaction time.:Visual location.:Refinements and improvements.\n\nThe reverse scenario was tested in a 1954 experiment by Richard L. Deninger and Paul Fitts, in which it was demonstrated that subjects responded more quickly when the stimulus and response were compatible. Solid evidence that S-R compatibility impacted the response planning phase was not found until 1995, when Bernhard Hommel demonstrated that modifying stimuli in ways unrelated to S-R compatibility, such as the size of the objects on the computer screen, did not increase reaction time.\n\nSection::::Determinants of reaction time.:Auditory location.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-03359
Why does spit get stringy after using mouthwash?
I think it's because mouthwash has alcohols in it that start to degrade the proteins in your saliva. Because denatured proteins are long chains of organic matter, they start associating with each other and clump together, as opposed to being more discrete entities beforehand.
[ "However, mouthing may also be iconic, as in the word for (of food or drink) in ASL, UtCbf\", where the mouthing suggests something hot in the mouth and does not correspond to the English word \"hot\".\n\nMouthing is an essential element of cued speech and simultaneous sign and speech, both for the direct instruction of oral language and to disambiguate cases where there is not a one-to-one correspondence between sign and speech. However, mouthing does not always reflect the corresponding spoken word; when signing 'thick' in Auslan (Australian Sign Language), for example, the mouthing is equivalent to spoken \"fahth\".\n", "Spit hood\n\nA spit hood, spit mask, mesh hood or spit guard is a restraint device intended to prevent someone from spitting or biting.\n\nProponents, often including police unions and associations, say the spit hoods can help protect personnel from exposure to risk of serious infection like hepatitis and that in London, 59% of injecting drug users test positive for Hepatitis C.\n", "Section::::Scaffolds.\n", "With a 0.1-0.7 mm thick mucus layer, the oral cavity serves as an important route of administration for mucoadhesive dosages. Permeation sites can be separated into two groups: sublingual and buccal, in which the former is much more permeable than the latter. However, the sublingual mucosa also produces more saliva, resulting in relatively low retention rates. Thus, sublingual mucosa is preferable for rapid onset and short duration treatments, while the buccal mucosa is more appropriate for longer dosage and onset times. Because of this dichotomy, the oral cavity is suitable for both local and systemic administration. Some common dosage forms for the oral cavity include gels, ointments, patches, and tablets. Depending on the dosage form, some drug loss can occur due to swallowing of saliva. This can be minimized by layering the side of the dosage facing the oral cavity with an impermeable coating(,) commonly seen in patches.\n", "Senator Nick Xenophon described the cigarette industry as \"\"parasitic\"\" and urged the Government to cancel the event, but Substance Abuse Minister Jane Lomax Smith said she would not \"interfere\" with the party. \"\"While we are making life tougher for cigarette companies, we wouldn't interfere in the affairs of a legitimate business running a private function in a no-smoking venue.\"\" she said.\n", "Fasting spittle\n\nFasting spittle – saliva produced first thing in the morning, before breakfast – has been used to treat a wide variety of diseases for many hundreds of years. Spittle cures are usually considered to be more effective if fasting spittle is used.\n\nAn early recorded use of spittle as a cure comes from the Gospel of St Mark, believed to have been written in about 70 AD:\n", "The device has to be worn at night as well as one hour during daytime. The effect can be increased by doing bite exercises during this time. The wearing comfort is pretty high, however talking is completely impossible. In order to wear the device the patient must be able to breathe through the nose. During the first days, it can fall out of the mouth.\n", "Section::::Research.\n\nChew and Spit has not received much attention in the research industry regarding treatment, long term effects of chewing and spitting, and its associations with other behaviors and eating disorders. 
More research is needed on his topic to further understand the effect this behavior has on individuals physically and psychologically.\n", "We would also like to thank all the bands we have toured with, to our booking agent Ian Armstrong and all the promoters who made it all happen for us. We would like to thank the producers we have worked with (Pete Miles and Charlie Hugall) and to everyone who has helped and supported us over the years by putting us up, putting up with us and driving and feeding us. It is amazing what we have achieved doing things DIY and it would not have been possible without you.\n", "He later added it back, claiming the bit's rhythm does not work without it. In his comedy routine, Carlin would make fun of each word; for example, he would say that \"tits\" should not be on the list because it sounds like a nickname of a snack (\"New Nabisco Tits! ... corn tits, cheese tits, tater tits!\").\n\nSection::::Availability.\n\nCarlin performed the routine many times and included it, in whole or in part on several of his records and HBO specials. Parts or all of the performance appear on the following releases:\n", "Section::::Cleaning.\n\nIt is important to clean the voice prosthesis regularly, as the silicone material is exposed to yeast (candida) and bacteria in the food pipe, which is normally present in these areas. If yeast begins growing on or in the area of the valve flap of the voice prosthesis, it may not close well enough anymore. When this happens fluid starts to leak into the windpipe when eating or drinking.\n\nSection::::Cleaning.:Brushing.\n", "Dry mouth, if severe to the point of causing difficulty speaking or swallowing, may be managed by dosage reduction or temporary discontinuation of the drug. Patients may also chew sugarless gum or suck on sugarless candy in order to increase the flow of saliva. Some artificial saliva products may give temporary relief. \n", "As with any intra-oral appliance, wearing it should not cause sore spots on the gingiva, as this could lead to permanent tissue damage. If it does cause sores, the parts of the device causing the damage have to be cut off as soon as possible.\n\nSection::::History.\n", "Mouthing often originates from oralist education, where sign and speech are used together. Thus mouthing may preserve an often abbreviated rendition of the spoken translation of a sign. In educated Ugandan Sign Language, for example, where both English and Ganda are influential, the word for , Av\", is accompanied by the mouthed syllable \"nyo\", from Ganda \"nnyo\" 'very', and , jO*[5]v\", is accompanied by \"vu\", from Ganda \"onvuma\". Similarly, the USL sign , t55bf, is mouthed \"fsh\", an abbreviation of English \"finish\", and , }HxU, is mouthed \"df\". \n", "BULLET::::- Washington Senator R. Lorraine Wojahn noted that her mother washed out her mouth with soap when she was five years old, for trying some of her father's chewing tobacco.\n\nBULLET::::- Former president George W. 
Bush recalled that his mother had washed his mouth out with soap for \"getting fresh\" with her.\n\nBULLET::::- Following Toledo, Ohio mayor Carty Finkbeiner's use of profanity in a news conference in 1998, presidential candidate Ralph Nader sent him a bar of soap with which to wash out his mouth.\n\nSection::::See also.\n\nBULLET::::- Hotsaucing\n\nBULLET::::- Castor oil\n\nBULLET::::- Grounding (discipline technique)\n\nBULLET::::- Spoiled child\n", "In the last couple of years various members of Mouthwash have performed/recorded with The Awful Crew, Herbert Wrecking Crew, Florence and the Machine, Crystal Fighters, Paloma Faith, Jack Penate, Saints of insanity, Mr. Exhaust, The Murderhunks and Wonk Unit to name but a few.\n", "Mouthwash should not be used immediately after brushing the teeth so as not to wash away the beneficial fluoride residue left from the toothpaste. Similarly, the mouth should not be rinsed out with water after brushing. Patients were told to \"spit don't rinse\" after toothbrushing as part of a National Health Service campaign in the UK.\n", "BULLET::::- When Lois wipes off Stewie's fake pencil mustache, Stewie compares the saliva being cleaned on his upper lip to the time he had dinner with Martin Landau. A cutaway shows Martin Landau having a distinct speech pattern by not chewing up his food as he speaks.\n", "In order to correct the deepening of the nasolabial fold more accurately, the deep plane facelift was developed. Differing from the SMAS lift by freeing cheek fat and some muscles from their bone implement. This technique has a higher risk at damaging the facial nerve. The SMAS lift is an effective procedure to reposition the platysma muscle; however, the nasolabial fold is according to some surgeons better addressed by a deep plane facelift or composite facelift.\n\nSection::::Procedure.:Composite facelift.\n", "In any case, lexical recognition likely contributes significantly to speech segmentation through the contextual clues it provides, given that it is a heavily probabilistic system—based on the statistical likelihood of certain words or constituents occurring together. For example, one can imagine a situation where a person might say \"I bought my dog at a ____ shop\" and the missing word's vowel is pronounced as in \"net\", \"sweat\", or \"pet\". While the probability of \"netshop\" is extremely low, since \"netshop\" isn't currently a compound or phrase in English, and \"sweatshop\" also seems contextually improbable, \"pet shop\" is a good fit because it is a common phrase and is also related to the word \"dog\".\n", "The statistical distribution of phonemes within the lexicon of a language is uneven. While there are clusters of words which are phonemically similar to each other ('lexical neighbors', such as spit/sip/sit/stick...etc.), others are unlike all other words: they are 'unique' in terms of the distribution of their phonemes ('umbrella' may be an example). Skilled users of the language bring this knowledge to bear when interpreting speech, so it is generally harder to identify a heard word with many lexical neighbors than one with few neighbors. Applying this insight to seen speech, some words in the language can be unambiguously lip-read even when they contain few visemes - simply because no other words could possibly 'fit'.\n", "In sign language, mouthing is the production of visual syllables with the mouth while signing. That is, signers sometimes say or mouth a word in a spoken language at the same time as producing the sign for it. 
Mouthing is one of the many ways in which the face and mouth is used while signing. Although not present in all sign languages, and not in all signers, where it does occur it may be an essential (that is, phonemic) element of a sign, distinguishing signs which would otherwise be homophones; in other cases a sign may seen to be flat and incomplete without mouthing even if it is unambiguous. Other signs use a combination of mouth movements and hand movements to indicate the sign; for example, the ASL sign for includes a mouth gesture where the mouth is slightly open. In such cases, mouthing is not available.\n", "BULLET::::- The alveolar lateral approximant is velarised in pre-pausal and preconsonantal positions and often also in morpheme-final positions before a vowel. There have been some suggestions that onset is also velarised, although that needs to be further researched. Some speakers vocalise preconsonantal, syllable-final and syllabic instances of to a close back vowel similar to , so that \"milk\" can be pronounced and \"noodle\" . This is more common in South Australia than elsewhere.\n\nBULLET::::- Yod-dropping and coalescence\n", "There are currently no biochemical diagnostic tests clinically available, as no sensitive diagnostic test has yet been found that can detect reversible changes before this is clinically visible and detectable. There are many salivary biomarkers and biomarkers in the crevicular fluid surrounding implants that are present in much higher levels when there is peri-implant mucositis or peri-implant disease but all these present after or at the same time as clinical signs and symptoms. Therefore, there is currently no benefit to assessing the peri-implant fluid or analysing the saliva. Research continues in this field, though there is also no biochemical diagnostic test clinically available to detect the progression of gingivitis or periodontitis as of yet.\n", "BULLET::::- Convicted murderer Steven W. Bowman was alleged to have washed out his girlfriend's mouth with soap in July 2000, when she mentioned her other romantic partner's name; before murdering him.\n\nBULLET::::- A teacher in Rochester, New York was suspended in 2004 for washing out the mouth of a student for using vulgar language. Following her suspension, parents and family members of her students signed a petition supporting her actions and requesting her reinstatement.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-06820
Why does breathing into a paper bag help people who are hyperventilating?
Breathing into a bag lets you re-breathe the carbon dioxide in your exhaled breath. This lowers the pH of your blood, which hyperventilation has made too alkaline, and that helps you resume breathing normally. It also changes brain chemistry, reducing feelings of panic.
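A rough way to see the pH effect described here is the Henderson-Hasselbalch relation for the blood bicarbonate buffer. The sketch below is illustrative only, assuming textbook round numbers (pKa of 6.1, CO2 solubility of 0.03 mmol/L per mmHg, bicarbonate held at 24 mmol/L), not figures taken from this thread or the quoted sources.

```python
import math

def blood_ph(pco2_mmhg, hco3_mmol_per_l=24.0):
    """Henderson-Hasselbalch relation for the bicarbonate buffer in plasma.

    pH = 6.1 + log10([HCO3-] / (0.03 * pCO2)), with pCO2 in mmHg and
    0.03 mmol/L/mmHg as the assumed solubility coefficient of CO2.
    """
    return 6.1 + math.log10(hco3_mmol_per_l / (0.03 * pco2_mmhg))

# Bicarbonate is held fixed for simplicity; only pCO2 changes.
print(round(blood_ph(40), 2))  # ~7.40: normal breathing
print(round(blood_ph(20), 2))  # ~7.70: hyperventilation blows off CO2 (alkalosis)
print(round(blood_ph(35), 2))  # ~7.46: re-breathing CO2 pulls pH back toward normal
```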
[ "The respiratory centers try to maintain an arterial pressure of 40 mm Hg. With intentional hyperventilation, the content of arterial blood may be lowered to 10–20 mm Hg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving.\n\nSection::::See also.\n\nBULLET::::- Arterial blood gas\n\nBULLET::::- Bosch reaction\n\nBULLET::::- Bottled gas\n\nBULLET::::- Carbon dioxide sensor\n\nBULLET::::- Carbon sequestration\n", "The original traditional treatment of breathing into a paper bag to control psychologically based hyperventilation syndrome (which is now almost universally known and often shown in movies and TV dramas) was invented by New York City physician (later radiologist), Alexander Winter, M.D. [1908-1978], based on his experiences in the U.S. Army Medical Corps during World War II and published in the Journal of the American Medical Association in 1951. Because other medical conditions can be confused with hyperventilation, namely asthma and heart attacks, most medical studies advise against using a paper bag since these conditions worsen when CO levels increase.\n", "The respiratory centers try to maintain an arterial CO pressure of 40 mm Hg. With intentional hyperventilation, the CO content of arterial blood may be lowered to 10–20 mm Hg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving.\n\nSection::::Nitric oxide.\n", "BULLET::::1. Controlled hyperventilation: The first phase involves 30 cycles of breathing. Each cycle goes as follows: take a powerful breath in, fully filling the lungs. Breathe out by passively releasing the breath, but not actively exhaling. Repeat this cycle at a steady pace thirty times. Hof says that this form of hyperventilation may lead to tingling sensations or light-headedness.\n\nBULLET::::2. Exhalation: After completion of the 30 cycles of controlled hyperventilation, take another deep breath in, and let it out completely. Hold the breath for as long as possible.\n", "Although breathing into a paper bag was a common recommendation for short-term treatment of symptoms of an acute panic attack, it has been criticized as inferior to measured breathing, potentially worsening the panic attack and possibly reducing needed blood oxygen. While the paper bag technique increases needed carbon dioxide and so reduces symptoms, it may excessively lower oxygen levels in the blood stream.\n\nCapnometry, which provides exhaled CO levels, may help guide breathing.\n\nSection::::Treatment.:Therapy.\n", "While traditional intervention for an acute episode has been to have the patient breathe into a paper bag, causing rebreathing and restoration of CO₂ levels, this is not advised. The same benefits can be obtained more safely from deliberately slowing down the breathing rate by counting or looking at the second hand on a watch. 
This is sometimes referred to as \"7-11 breathing\", because a gentle inhalation is stretched out to take 7 seconds (or counts), and the exhalation is slowed to take 11 seconds. This in-/exhalation ratio can be safely decreased to 4-12 or even 4-20 and more, as the O₂ content of the blood will easily sustain normal cell function for several minutes at rest when normal blood acidity has been restored.\n", "BULLET::::- \"Mapelson F\" systems are also used for children, and consist of an adapted Mapelson E system to which a reservoir bag has been added to the tubing - this is called the \"Jackson-Rees modification\", after Gordon Jackson Rees. This allows both spontaneous and controlled ventilation, as well as the application of continuous positive airway pressure.\n", "Airway, breathing, and circulation, therefore work in a cascade; if the patient's airway is blocked, breathing will not be possible, and oxygen cannot reach the lungs and be transported around the body in the blood, which will result in hypoxia and cardiac arrest. Ensuring a clear airway is therefore the first step in treating any patient; once it is established that a patient's airway is clear, rescuers must evaluate a patient's breathing, as many other things besides a blockage of the airway could lead to an absence of breathing.\n\nSection::::Medical use.:CPR.\n", "To understand how changes in respiration might affect blood pH, consider the effects of ventilation on P in the lungs. If one were to hold his or her breath (or breathe very slowly, as in the case of respiratory depression), the blood would continue delivering carbon dioxide to the alveoli in the lungs, and the amount of carbon dioxide in the lungs would increase. On the other hand, if one were to hyperventilate, then fresh air would be drawn into the lungs and carbon dioxide would rapidly be blown out. In the first case, because carbon dioxide is accumulating in the lungs, alveolar P would become very high. In the second case, because carbon dioxide is rapidly exiting the lungs, alveolar P would be very low. Note that these two situations, those of respiratory depression and hyperventilation, produce effects that are immediately analogous to the experiment described previously, in which the partial pressures of carbon dioxide were varied and the resulting changes in pH observed. As indicated by the Davenport diagram, respiratory depression, which results in a high P, will lower blood pH. Hyperventilation will have the opposite effects. A decrease in blood pH due to respiratory depression is called respiratory acidosis. An increase in blood pH due to hyperventilation is called respiratory alkalosis (Fig. 11).\n", "Section::::Treatment.\n", "Then, the therapist may start giving suggestions on how to test the belief. She may suggest, \"Why don't you try hyperventilating into this plastic bag? If you show signs of having a heart attack, I have training in CPR and I'll be able to help you while waiting for the authorities.\" After some initial apprehension, the patient may agree with the experiment and start breathing into a plastic bag while the therapist watches. 
Since the patient with panic disorder most likely will not have a heart attack while hyperventilating, he will be less likely to believe in the original thought, even though he may have been scared of testing the belief at first.\n", "Even with quiet breathing, the inspiratory flow rate at the nares of an adult usually exceeds 12 liters a minute, and can exceed 30 liters a minute for someone with mild respiratory distress. Traditional oxygen therapy is limited to six liters a minute and does not begin to approach the inspiratory demand of an adult and therefore the oxygen is then diluted with room air during inspiration.\n", "Factors that may induce or sustain hyperventilation include: physiological stress, anxiety or panic disorder, high altitude, head injury, stroke, respiratory disorders such as asthma, pneumonia or hyperventilation syndrome, cardiovascular problems such as pulmonary embolisms, anemia, an incorrectly calibrated medical respirator and adverse reactions to certain drugs. \n\nHyperventilation can also be induced intentionally to achieve an altered state of consciousness such as in the choking game, during holotrophic breathwork, or in an attempt to extend a breath-hold dive.\n\nSection::::See also.\n\nBULLET::::- List of terms of lung size and activity\n\nBULLET::::- Control of respiration\n", "In individuals with chronic obstructive pulmonary disease who receive supplemental oxygen, carbon dioxide accumulation may occur through two main mechanisms:\n\nBULLET::::- Ventilation/perfusion matching: under-ventilated lung usually has a low oxygen content which leads to localised vasoconstriction limiting blood flow to that lung tissue. Supplemental oxygen abolishes this constriction, leading to poor ventilation/perfusion matching. This redistribution of blood to areas of the lung with poor ventilation reduces the amount of carbon dioxide eliminated from the system.\n", "BULLET::::- 1918: Oxygen masks are used to treat combat-induced pulmonary edema.\n\nSection::::Twentieth Century (1900s).:1920-1940.\n\nBULLET::::- 1928: Phillip Drinker develops the \"iron lung\" negative pressure ventilator.\n\nBULLET::::- 1935: Carl Matthes invented the first noninvasive oximeter employing an ear probe.\n\nSection::::Twentieth Century (1900s).:1940-1960.\n\nBULLET::::- 1943: Dr. Edwin R. Levine, MD began training technicians in basic inhalation therapy for post-surgical patients.\n\nBULLET::::- 1946: (US) Dr Levine and his technicians formed the Inhalation Therapy Association.\n\nBULLET::::- 1954: (US) March 16, 1954 the ITA is renamed the American Association of Inhalation Therapists (AAIT).\n", "Section::::History.:Additional Findings.\n", "BULLET::::- The Haldane effect: most carbon dioxide is carried by the blood as bicarbonate, and deoxygenated hemoglobin promotes the production of bicarbonate. Increasing the amount of oxygen in the blood by administering supplemental oxygen reduces the amount of deoxygenated hemoglobin, and thus reduces the capacity of blood to carry carbon dioxide.\n\nSection::::Prevention.\n\nIn people with chronic obstructive pulmonary disease, carbon dioxide toxicity can be prevented by careful control of the supplemental oxygen. Just enough oxygen is given to maintain an oxygen saturation of 88 - 92%.\n", "The hyperventilation is self-promulgating as rapid breathing causes carbon dioxide levels to fall below healthy levels, and respiratory alkalosis (high blood pH) develops. 
This makes the symptoms worse, which causes the person to breathe even faster, which then, further exacerbates the problem.\n\nThe respiratory alkalosis leads to changes in the way the nervous system fires and leads to the paresthesia, dizziness, and perceptual changes that often accompany this condition. Other mechanisms may also be at work, and some people are physiologically more susceptible to this phenomenon than others.\n\nSection::::Causes.\n", "Heliox generates less airway resistance than air and thereby requires less mechanical energy to ventilate the lungs. \"Work of Breathing\" (WOB) is reduced. It does this by two mechanisms:\n\nBULLET::::1. increased tendency to laminar flow;\n\nBULLET::::2. reduced resistance in turbulent flow.\n", "If these homeostats are compromised, then a respiratory acidosis, or a respiratory alkalosis will occur. In the long run these can be compensated by renal adjustments to the H and HCO concentrations in the plasma; but since this takes time, the hyperventilation syndrome can, for instance, occur when agitation or anxiety cause a person to breathe fast and deeply thus causing a distressing respiratory alkalosis through the blowing off of too much CO from the blood into the outside air.\n", "Ventilation is normally unconscious and automatic, but can be overridden by conscious alternative patterns. Thus the emotions can cause yawning, laughing, sighing (etc.), social communication causes speech, song and whistling, while entirely voluntary overrides are used to blow out candles, and breath holding (to swim, for instance, underwater). Hyperventilation may be entirely voluntary or in response to emotional agitation or anxiety, when it can cause the distressing hyperventilation syndrome. The voluntary control can also influence other functions such as the heart rate as in yoga practices and meditation.\n", "Respiratory alkalosis (\"Pa\" CO < 35 mmHg) occurs when there is too little carbon dioxide in the blood. This may be due to hyperventilation or else excessive breaths given via a mechanical ventilator in a critical care setting. The action to be taken is to calm the person and try to reduce the number of breaths being taken to normalize the pH. The respiratory pathway tries to compensate for the change in pH in a matter of 2–4 hours. If this is not enough, the metabolic pathway takes place.\n\nUnder normal conditions, the Henderson–Hasselbalch equation will give the blood pH\n", "Bicarbonate ions are crucial for regulating blood pH. A person's breathing rate influences the level of CO in their blood. Breathing that is too slow or shallow causes respiratory acidosis, while breathing that is too rapid leads to hyperventilation, which can cause respiratory alkalosis.\n\nAlthough the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing. Rather, breathing is stimulated by higher carbon dioxide levels.\n", "Acute cardiogenic pulmonary edema often responds rapidly to medical treatment. Positioning upright may relieve symptoms. A loop diuretic such as furosemide (Lasix®) is administered, often together with morphine to reduce respiratory distress. 
Both diuretic and morphine may have vasodilator effects, but specific vasodilators may be used (particularly intravenous glyceryl trinitrate or ISDN) provided the blood pressure is adequate.\n\nContinuous positive airway pressure and bilevel positive airway pressure (BIPAP/NIPPV) has been demonstrated to reduce the need of mechanical ventilation in people with severe cardiogenic pulmonary edema, and may reduce mortality.\n", "BULLET::::- The Buteyko method focuses on nasal breathing, relaxation and reduced breathing. These techniques provide the lungs with more NO and thus dilate the airways and should prevent the excessive exhalation of CO2 and thus improve oxygen metabolism.\n\nBULLET::::- Coherent Breathing is a method that involves breathing at the rate of five breaths per minute with equal periods of inhalation and exhalation and conscious relaxation of anatomical zones.\n\nSection::::Applications.\n\nSection::::Applications.:Meditation.\n\nConscious breathing in meditation usually does not change the depth or rhythm of breathing, but uses breathing as an anchor for concentration and awareness.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-03780
How do you figure out what percentage a battery is at?
The voltage of a battery cell decreases slightly as it drains. The phone/device can measure this, and based on how it drained in the past, calculate how much charge is left given the current voltage.
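As a toy illustration of that voltage-lookup idea, here is a minimal sketch. The discharge-curve points are made-up placeholder values for a generic lithium-ion cell, not data from any real device; real fuel gauges also correct for load current and temperature.

```python
def soc_from_voltage(voltage, curve=None):
    """Estimate state of charge (%) by linear interpolation on a stored
    open-circuit-voltage vs. charge-level curve."""
    # (open-circuit voltage in volts, state of charge in %); illustrative numbers only.
    curve = curve or [(3.0, 0), (3.5, 10), (3.7, 40), (3.8, 60), (3.9, 80), (4.2, 100)]
    if voltage <= curve[0][0]:
        return 0.0
    if voltage >= curve[-1][0]:
        return 100.0
    for (v0, s0), (v1, s1) in zip(curve, curve[1:]):
        if v0 <= voltage <= v1:
            return s0 + (s1 - s0) * (voltage - v0) / (v1 - v0)

print(soc_from_voltage(3.75))  # 50.0
```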
[ "A BMS may monitor the state of the battery as represented by various items, such as:\n\nBULLET::::- Voltage: total voltage, voltages of individual cells, minimum and maximum cell voltage or voltage of periodic taps\n\nBULLET::::- Temperature: average temperature, coolant intake temperature, coolant output temperature, or temperatures of individual cells\n\nBULLET::::- State of charge (SOC) or depth of discharge (DOD), to indicate the charge level of the battery\n\nBULLET::::- State of health (SOH), a variously-defined measurement of the remaining capacity of the battery as % of the original capacity\n", "Both ammeters and voltmeters individually or together can be used to assess the operating state of an automobile battery and charging system.\n\nSection::::Electronic devices.\n\nA battery indicator is a feature of many electronic devices. In mobile phones, the battery indicator usually takes the form of a bar graph - the more bars that are showing, the better the battery's state of charge.\n\nSection::::Computers.\n", "battery desires to be charged and discharged in constant rate such as Coulomb-counting. This method gives precise estimation of battery SoC, but they are protracted, costly, and interrupt main battery performance. Therefore, researchers are looking for some online techniques. In general there are five methods to determine SoC indirectly:\n\nBULLET::::- chemical\n\nBULLET::::- voltage\n\nBULLET::::- current integration\n\nBULLET::::- Kalman filtering\n\nBULLET::::- pressure\n\nSection::::Determining SoC.:Chemical method.\n", "In addition, the designer of the battery management system defines an arbitrary weight for each of the parameter's contribution to the SoH value. The definition of how SoH is evaluated can be a trade secret.\n\nSection::::SOH threshold.\n\nAs stated before, the method by which the battery management system evaluates the SoH of a battery is arbitrary.\n", "Section::::Integrated battery testers.\n\nThere are many types of integrated battery testers, each one corresponding to a specific condition testing procedure, according to the type of battery being tested, such as the “421” test for lead-acid vehicle batteries. Their common principle is based on the empirical fact that after having applied a given current for a given number of seconds to the battery, the resulting voltage output is related to the battery's overall condition, when compared to a healthy battery's output.\n\nSection::::External links.\n\nBULLET::::- Power Equipment Engine Technology By Edward Abdo\n\nBULLET::::- Automotive Technology: A Systems Approach By Jack Erjavec\n", "BULLET::::- First, a battery management system evaluates the SoH of the battery under its management and reports it.\n\nBULLET::::- Then, the SoH is compared to a threshold (typically done by the application in which the battery is used), to determine the suitability of the battery to a given application.\n\nKnowing the SoH of a given battery and the SoH threshold of a given application:\n\nBULLET::::- a determination can be made whether the present battery conditions make it suitable for that application\n\nBULLET::::- an estimate can be made of the battery's useful lifetime in that application\n\nSection::::SoH evaluation.:Parameters.\n", "The battery's open-circuit voltage can also be used to gauge the state of charge. 
If the connections to the individual cells are accessible, then the state of charge of each cell can be determined which can provide a guide as to the state of health of the battery as a whole, otherwise the overall battery voltage may be assessed.\n\nSection::::Voltages for common usage.\n", "At least in some battery technologies such as lead-acid AGM batteries there is a correlation between the Depth of discharge and the Cycle life of the battery.\n\nDepth of Discharge (DOD) is defined as: Capacity in Ampere Hours (Ah) that is discharged from a fully charged battery, divided by battery nominal capacity (C20). DoD is normally presented in percent (%).\n\nExample: if a 100Ah battery is discharged for 20 minutes at 50A, the Depth of Discharge is: 50*(20/60)/100= 16.7%\n\nSection::::See also.\n\nBULLET::::- Battery balancer\n\nBULLET::::- Battery monitoring\n\nBULLET::::- Battery charger\n\nBULLET::::- Deep cycle battery\n\nBULLET::::- State of health\n", "Section::::Capacity and discharge.:C rate.\n\nThe C-rate is a measure of the rate at which a battery is being charged or discharged. It is defined as the current through the battery divided by the theoretical current draw under which the battery would deliver its nominal rated capacity in one hour. It has the units h. \n", "As SoH does not correspond to a particular physical quality, there is no consensus in the industry on how SoH should be determined.\n\nThe designer of a battery management system may use any of the following parameters (singly or in combination) to derive an arbitrary value for the SoH.\n\nBULLET::::- Internal resistance / impedance / conductance\n\nBULLET::::- Capacity\n\nBULLET::::- Voltage\n\nBULLET::::- Self-discharge\n\nBULLET::::- Ability to accept a charge\n\nBULLET::::- Number of charge–discharge cycles\n\nBULLET::::- Age of the battery\n\nBULLET::::- Temperature of battery during its previous uses\n\nBULLET::::- Total energy charged and discharged\n", "Depth of discharge\n\nDepth of discharge (DoD) is an alternate method to indicate a battery's state of charge (SoC). The DoD is the complement of SoC: as one increases, the other decreases. While the SoC units are percent points (0% = empty; 100% = full), DoD can use Ah units (e.g.: 0 = full, 50 Ah = empty) or percent points (100% = empty; 0% = full). As a battery may actually have higher capacity than its nominal rating, it is possible for the DoD value to exceed the full value (e.g.: 55 Ah or 110%).\n", "In fact, it is a stated goal of battery design to provide a voltage as constant as possible no matter the SoC, which makes this method difficult to apply.\n\nSection::::Determining SoC.:Current integration method.\n\nThis method, also known as \"coulomb counting\", calculates the SoC by measuring the battery current and integrating it in time.\n", "SOC, or state of charge, is the equivalent of a fuel gauge for a battery. SOC cannot be determined by a simple voltage measurement, because the terminal voltage of a battery may stay substantially constant until it is completely discharged. In some types of battery, electrolyte specific gravity may be related to state of charge but this is not measurable on typical battery pack cells, and is not related to state of charge on most battery types. Most SOC methods take into account voltage and current as well as temperature and other aspects of the discharge and charge process to in essence count up or down within a pre-defined capacity of a pack. 
More complex state of charge estimation systems take into account the Peukert effect which relates the capacity of the battery to the discharge rate.\n", "Section::::Determining SoC.:Voltage method.\n\nThis method converts a reading of the battery voltage to SoC, using the known discharge curve (voltage vs. SoC) of the battery. However, the voltage is more significantly affected by the battery current (due to the battery's electrochemical kinetics) and temperature. This method can be made more accurate by compensating the voltage reading by a correction term proportional to the battery current, and by using a look-up table of battery's open circuit voltage vs. temperature.\n", "The higher the discharge rate, the lower the capacity. The relationship between current, discharge time and capacity for a lead acid battery is approximated (over a typical range of current values) by Peukert's law:\n\nwhere\n", "For example, for a battery with a capacity of 500 mAh, a discharge rate of 5000 mA (i.e., 5 A) corresponds to a C-rate of 10 (per hour), meaning that such a current can discharge 10 such batteries in one hour. Likewise, for the same battery a charge current of 250 mA corresponds to a C-rate of 1/2 (per hour), meaning that this current will increase the state of charge of this battery by 50% in one hour.\n", "Batteries that are part of a system, such as computer batteries, can have their properties checked and logged in operation to assist in determining remaining charge. A real battery can be modeled as an ideal battery with a specified EMF, in series with an internal resistance. As a battery discharges, the EMF may drop or the internal resistance increase; in many cases the EMF remains more or less constant during most of the discharge, with the voltage drop across the internal resistance determining the voltage supplied. Determining the charge remaining in many battery types not connected to a system that monitors battery use is not reliably possible with a voltmeter. In battery types where EMF remains approximately constant during discharge, but resistance increases, voltage across battery terminals is not a good indicator of capacity. A meter such as an equivalent series resistance meter (ESR meter) normally used for measuring the ESR of electrolytic capacitors can be used to evaluate internal resistance. ESR meters fitted with protective diodes cannot be used, a battery will simply destroy the diodes and damage itself. An ESR meter known not to have diode protection will give a reading of internal resistance for a rechargeable or non-rechargeable battery of any size down to the smallest button cells which gives an indication of the state of charge. 
To use it, measurements on fully charged and fully discharged batteries of the same type can be used to determine resistances associated with those states.\n", "Additionally, a BMS may calculate values based on the above items, such as:\n\nBULLET::::- Maximum charge current as a charge current limit (CCL)\n\nBULLET::::- Maximum discharge current as a discharge current limit (DCL)\n\nBULLET::::- Energy [kWh] delivered since last charge or charge cycle\n\nBULLET::::- Internal impedance of a cell (to determine open circuit voltage)\n\nBULLET::::- Charge [Ah] delivered or stored (sometimes this feature is called Coulomb counter)\n\nBULLET::::- Total energy delivered since first use\n\nBULLET::::- Total operating time since first use\n\nBULLET::::- Total number of cycles\n\nSection::::Functions.:Communication.\n", "To overcome the shortcomings of the voltage method and the current integration method, a Kalman filter can be used. The battery can be modeled with an electrical model which the Kalman filter will use to predict the over-voltage, due to the current. In combination with coulomb counting, it can make an accurate estimation of the state of charge. The strength of a Kalman filter is that it is able to adjust its trust of the battery voltage and coulomb counting in real time.\n\nSection::::Determining SoC.:Pressure method.\n", "BULLET::::- This review of current research includes chapters by Nadeen L. Kaufman, Elizabeth O. Lichtenberger, Jennie Kaufman Singer, Elaine Fletcher-Janzen, Nancy Mather, Kyle Bassett, Thomas Oakland, Jack A. Naglieri, Samuel O. Ortiz, Dawn P. Flanagan, Robert J. Sternberg, Randy W. Kamphaus, Cecil R. Reynolds, Jason C. Cole, Claire Énéa-Drapeau, Michèle Carlier, Toshinori Ishikuma, Jan Alm, R. Steve McCallum, and Bruce A. Bracken.\n", "Depth of discharge (DOD) is normally stated as a percentage of the nominal ampere-hour capacity; 0% DOD means no discharge. As the usable capacity of a battery system depends on the rate of discharge and the allowable voltage at the end of discharge, the depth of discharge must be qualified to show the way it is to be measured. Due to variations during manufacture and aging, the DOD for complete discharge can change over time or number of charge cycles. Generally a rechargeable battery system will tolerate more charge/discharge cycles if the DOD is lower on each cycle.\n", "Battery manufacturers' technical notes often refer to voltage per cell (VPC) for the individual cells that make up the battery. For example, to charge a 12 V lead-acid battery (containing 6 cells of 2 V each) at 2.3 VPC requires a voltage of 13.8 V across the battery's terminals.\n\nSection::::Charging and discharging.:Damage from cell reversal.\n", "BULLET::::- Capacity in mAh: mAh stands for milli Ampere-hour and measures the amount of power flow that can be supplied by a certain power bank at a specific voltage. Many manufacturers rate their products at 3.7 V, the voltage of cell(s) inside. Since USB outputs at 5 V, calculations at this voltage will yield a lower mAh number. For example, a battery pack advertised with a 3000 mAh capacity (at 3.7 V) will produce 2220 mAh at 5 V. Power losses due to efficiency of the charging circuitry also occur.\n", "As with lead-acid batteries to maximize the life of AGM battery is important to follow charging specifications and a voltage regulated charger is recommended. 
and also there is a correlation between the depth of discharge (DOD) and the Cycle life of the battery, with differences between 500 and 1300 cycles depending on depth of discharge.\n\nSection::::Gel battery.\n", "Battery indicator\n\nA battery indicator (also known as a battery gauge) is a device which gives information about a battery. This will usually be a visual indication of the battery's state of charge. It is particularly important in the case of a battery electric vehicle.\n\nSection::::Automobiles.\n\nSome automobiles are fitted with a battery condition meter to monitor the starter battery. This meter is, essentially, a voltmeter but it may also be marked with coloured zones for easy visualization. \n" ]
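Several of the passages above mention the "coulomb counting" (current integration) method. Below is a minimal sketch of that idea under assumed values: a hypothetical 2000 mAh pack and made-up current samples. A real battery management system would also correct for temperature, self-discharge and sensor drift.

```python
def coulomb_count(samples, capacity_mah, soc_start=100.0):
    """Track state of charge by integrating measured current over time.

    samples: list of (current_ma, duration_h) tuples; positive current = discharge.
    """
    soc = soc_start
    for current_ma, duration_h in samples:
        soc -= 100.0 * (current_ma * duration_h) / capacity_mah
    return max(0.0, min(100.0, soc))

# Hypothetical 2000 mAh pack: draw 500 mA for 1 h, then 200 mA for 2 h.
print(coulomb_count([(500, 1.0), (200, 2.0)], capacity_mah=2000))  # 55.0
```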
[]
[]
[ "normal" ]
[]
[ "normal" ]
[]
2018-09933
Shouldn't the earth's core cool down over time?
This was actually a major discrepancy when the first calculations of Earth's age were made in the 1800s. Lord Kelvin (a pioneer in thermodynamics) based his estimate on how long it would take a molten ball of rock to cool and came up with under a billion years. It wasn't until around 1898, when Marie Curie and others were studying radioactive decay, that the issue with his calculations was discovered. Radioactive decay produces heat. This keeps the molten ball churning for much longer than it otherwise would.
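For a sense of where an "under a billion years" figure comes from, here is a back-of-the-envelope sketch of a Kelvin-style conductive-cooling estimate. It uses the standard half-space cooling result (surface gradient = T0 / sqrt(pi * kappa * t), so t = T0^2 / (pi * kappa * G^2)); the initial temperature, diffusivity and near-surface gradient plugged in are assumed round numbers of the kind Kelvin worked with, not figures from this thread.

```python
import math

def kelvin_age_years(t0_kelvin, kappa_m2_s, gradient_k_per_m):
    """Conductive half-space cooling: how long an initially uniform hot body
    takes to develop the observed near-surface temperature gradient.

    dT/dz at the surface = T0 / sqrt(pi * kappa * t)  =>  t = T0**2 / (pi * kappa * G**2)
    """
    seconds = t0_kelvin**2 / (math.pi * kappa_m2_s * gradient_k_per_m**2)
    return seconds / (365.25 * 24 * 3600)

# Assumed inputs: ~3900 K initial excess temperature, kappa ~1.2e-6 m^2/s,
# near-surface gradient ~0.037 K/m (roughly 1 degree F per 50 ft).
print(f"{kelvin_age_years(3900, 1.2e-6, 0.037):.2e} years")  # ~9e7, i.e. under 100 million
```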
[ "Earth's internal heat powers most geological processes and drives plate tectonics. Despite its geological significance, this heat energy coming from Earth's interior is actually only 0.03% of Earth's total energy budget at the surface, which is dominated by 173,000 TW of incoming solar radiation. The insolation that eventually, after reflection, reaches the surface penetrates only several tens of centimeters on the daily cycle and only several tens of meters on the annual cycle. This renders solar radiation minimally relevant for internal processes.\n\nSection::::Heat and early estimate of Earth's age.\n", "Neutrino flux measurements from the Earth's core (see kamLAND) show the source of about two-thirds of the heat in the inner core is the radioactive decay of K, uranium and thorium. This has allowed plate tectonics on Earth to continue far longer than it would have if it were simply driven by heat left over from Earth's formation; or with heat produced from gravitational potential energy, as a result of physical rearrangement of denser portions of the Earth's interior toward the center of the planet (i.e., a type of prolonged falling and settling).\n\nSection::::See also.\n", "If this is true, the time when Earth finished its transition from having a hot, molten surface and atmosphere full of carbon dioxide, to being very much like it is today, can be roughly dated to about 4.0 billion years ago. The actions of plate tectonics and the oceans trapped vast amounts of carbon dioxide, thereby reducing the greenhouse effect and leading to a much cooler surface temperature and the formation of solid rock, and possibly even life.\n\nSection::::See also.\n\nBULLET::::- – the first sections describe the formation of the Earth\n\nSection::::Further reading.\n", "Based on calculations of Earth's cooling rate, which assumed constant conductivity in the Earth's interior, in 1862 William Thomson (later made Lord Kelvin) estimated the age of the Earth at 98 million years, which contrasts with the age of 4.5 billion years obtained in the 20th century by radiometric dating. As pointed out by John Perry in 1895 a variable conductivity in the Earth's interior could expand the computed age of the Earth to billions of years, as later confirmed by radiometric dating. Contrary to the usual representation of Kelvin's argument, the observed thermal gradient of the Earth's crust would not be explained by the addition of radioactivity as a heat source. More significantly, mantle convection alters how heat is transported within the Earth, invalidating Kelvin's assumption of purely conductive cooling.\n", "Many possible triggering mechanisms could account for the beginning of a snowball Earth, such as the eruption of a supervolcano, a reduction in the atmospheric concentration of greenhouse gases such as methane and/or carbon dioxide, changes in Solar energy output, or perturbations of Earth's orbit. Regardless of the trigger, initial cooling results in an increase in the area of Earth's surface covered by ice and snow, and the additional ice and snow reflects more Solar energy back to space, further cooling Earth and further increasing the area of Earth's surface covered by ice and snow. This positive feedback loop could eventually produce a frozen equator as cold as modern Antarctica.\n", "The mantle remained hotter than modern day temperatures throughout the Archean. 
Over time the Earth began to cool as planetary accretion slowed and heat stored within the magma ocean was lost to space through radiation.\n", "BULLET::::- After radioactive decay was discovered, it was realized it would release heat inside the planet. This undermines the cooling effect upon which the shrinking planet theory is based.\n\nBULLET::::- Identical fossils have been found thousands of kilometres apart, showing the planet was once a single continent which broke apart because of plate tectonics.\n\nSection::::Current status.\n", "When the Archean began, the Earth's heat flow was nearly three times as high as it is today, and it was still twice the current level at the transition from the Archean to the Proterozoic (2,500 million years ago). The extra heat was the result of a mix of remnant heat from planetary accretion, from the formation of the metallic core, and from the decay of radioactive elements.\n", "One of the ways to estimate the age of the inner core is by modeling the cooling of the Earth, constrained by a minimum value for the heat flux at the core–mantle boundary (CMB). That estimate is based on the prevailing theory that the Earth's magnetic field is primarily triggered by convection currents in the liquid part of the core, and the fact that a minimum heat flux is required to sustain those currents. The heat flux at the CMB at present time can be reliably estimated because it is related to the measured heat flux at Earth's surface and to the measured rate of mantle convection.\n", "The Earth's internal heat comes from a combination of residual heat from planetary accretion, heat produced through radioactive decay, latent heat from core crystallization, and possibly heat from other sources. The major heat-producing isotopes in the Earth are potassium-40, uranium-238, uranium-235, and thorium-232. At the center of the planet, the temperature may be up to 7,000 K and the pressure could reach 360 GPa (3.6 million atm). Because much of the heat is provided by radioactive decay, scientists believe that early in Earth history, before isotopes with short half-lives had been depleted, Earth's heat production would have been much higher. Heat production was twice that of present-day at approximately 3 billion years ago, resulting in larger temperature gradients within the Earth, larger rates of mantle convection and plate tectonics, allowing the production of igneous rocks such as komatiites that are no longer formed.\n", "Most paleoclimatologists think the cold episodes were linked to the formation of the supercontinent Rodinia. Because Rodinia was centered on the equator, rates of chemical weathering increased and carbon dioxide (CO) was taken from the atmosphere. Because CO is an important greenhouse gas, climates cooled globally.\n", "Not only do increasing carbon dioxide concentrations lead to increases in global surface temperature, but increasing global temperatures also cause increasing concentrations of carbon dioxide. This produces a positive feedback for changes induced by other processes such as orbital cycles. Five hundred million years ago the carbon dioxide concentration was 20 times greater than today, decreasing to 4–5 times during the Jurassic period and then slowly declining with a particularly swift reduction occurring 49 million years ago.\n", "The iron-rich core region of the Earth is divided into a radius solid inner core and a radius liquid outer core. 
The rotation of the Earth creates convective eddies in the outer core region that cause it to function as a dynamo. This generates a magnetosphere about the Earth that deflects particles from the solar wind, which prevents significant erosion of the atmosphere from sputtering. As heat from the core is transferred outward toward the mantle, the net trend is for the inner boundary of the liquid outer core region to freeze, thereby releasing thermal energy and causing the solid inner core to grow. This iron crystallization process has been ongoing for about a billion years. In the modern era, the radius of the inner core is expanding at an average rate of roughly per year, at the expense of the outer core. Nearly all of the energy needed to power the dynamo is being supplied by this process of inner core formation.\n", "From this equation, it is inferred that carbon dioxide is consumed during chemical weathering and thus lower concentrations of the gas will be present in the atmosphere as long as chemical weathering rates are high enough.\n\nSection::::Climate-driven tectonism.\n", "The early formation of the Earth's dense core could have caused superheating and rapid heat loss, and the heat loss rate would slow once the mantle solidified. Heat flow from the core is necessary for maintaining the convecting outer core and the geodynamo and Earth's magnetic field, therefore primordial heat from the core enabled Earth's atmosphere and thus helped retain Earth's liquid water.\n\nSection::::Heat flow and tectonic plates.\n", "Recent studies may have again complicated the idea of a snowball earth. In October 2011, a team of French researchers announced that the carbon dioxide during the last speculated \"snowball earth\" may have been lower than originally stated, which provides a challenge in finding out how Earth was able to get out of its state and if it were a snowball or slushball.\n\nSection::::Transitions.\n\nSection::::Transitions.:Causes.\n", "Section::::Primordial heat.\n\nPrimordial heat is the heat lost by the Earth as it continues to cool from its original formation, and this is in contrast to its still actively-produced radiogenic heat. The Earth core's heat flow—heat leaving the core and flowing into the overlying mantle—is thought to be due to primordial heat, and is estimated at 5–15 TW. Estimates of mantle primordial heat loss range between 7 and 15 TW, which is calculated as the remainder of heat after removal of core heat flow and bulk-Earth radiogenic heat production from the observed surface heat flow.\n", "Earth's internal heat comes from a combination of residual heat from planetary accretion (about 20%) and heat produced through radioactive decay (80%). The major heat-producing isotopes within Earth are potassium-40, uranium-238, and thorium-232. At the center, the temperature may be up to , and the pressure could reach . Because much of the heat is provided by radioactive decay, scientists postulate that early in Earth's history, before isotopes with short half-lives were depleted, Earth's heat production was much higher. At approximately , twice the present-day heat would have been produced, increasing the rates of mantle convection and plate tectonics, and allowing the production of uncommon igneous rocks such as komatiites that are rarely formed today.\n", "The mean heat loss from Earth is , for a global heat loss of . 
A portion of the core's thermal energy is transported toward the crust by mantle plumes, a form of convection consisting of upwellings of higher-temperature rock. These plumes can produce hotspots and flood basalts. More of the heat in Earth is lost through plate tectonics, by mantle upwelling associated with mid-ocean ridges. The final major mode of heat loss is through conduction through the lithosphere, the majority of which occurs under the oceans because the crust there is much thinner than that of the continents.\n", "By comparison, Earth's present global equilibrium temperature is 255 K (−18 °C), which is raised to 288 K (15 °C) by greenhouse effects. However, when life evolved early in Earth's history, the Sun's energy output is thought to have been only about 75% of its current value, which would have correspondingly lowered Earth's equilibrium temperature under the same albedo conditions. Yet Earth maintained equable temperatures in that era, perhaps with a more intense greenhouse effect, or a lower albedo, than at present.\n", "The initiation of a snowball Earth event would involve some initial cooling mechanism, which would result in an increase in Earth's coverage of snow and ice. The increase in Earth's coverage of snow and ice would in turn increase Earth's albedo, which would result in positive feedback for cooling. If enough snow and ice accumulates, run-away cooling would result. This positive feedback is facilitated by an equatorial continental distribution, which would allow ice to accumulate in the regions closer to the equator, where solar radiation is most direct.\n", "There are also longer-term cycles, the mini ice-age that preceded the medieval warm period may have been a transition to an ice age, the last ice-age lasted from ~130,000 years ago until the onset of the Holocene. This ice-age may have been aborted by other factors including global warming. Such a stalling of long-term cycles is believed to be a factor in the Dryas period, a warming interrupted by surface impacts of extraterrestrial origin may have occurred over hundreds of years. But the anthropogenic greenhouse effects and changing insolation patterns may have unpredictable long-term effects. Reductions of glacial ice on land masses can cause isotatic rebounds and may affect earthquakes and volcanism over a wide range. Rising sea levels can also affect patterns, and was seen in Indonesia, simply drilling a gas well in the wrong place may have touched off a mud volcano and there are some signs that this may precede a new caldera formation for a volcano. Over the very long term, the change in temperature of the Earth's crust on geothermal and volcanic processes is unknown. How this plays into climate-forcing events with magnitudes that are unpredictable is unknown.\n", "The \"New Core Paradox\" posits that the new upward revisions to the empirically measured thermal conductivity of iron at the pressure and temperature conditions of Earth's core imply that the dynamo is thermally stratified at present, driven solely by compositional convection associated with the solidification of the inner core. However, wide spread paleomagnetic evidence for a geodynamo older than the likely age of the inner core (~1 Gyr) creates a paradox as to what powered the geodynamo prior to inner core nucleation. Recently it has been proposed that a higher core cooling rate and lower mantle cooling rate can resolve the paradox in part. 
However, the paradox remains unresolved.\n", "However, Earth's energy balance and heat fluxes depend on many factors, such as atmospheric composition (mainly aerosols and greenhouse gases), the albedo (reflectivity) of surface properties, cloud cover and vegetation and land use patterns.\n\nChanges in surface temperature due to Earth's energy budget do not occur instantaneously, due to the inertia of the oceans and the cryosphere. The net heat flux is buffered primarily by becoming part of the ocean's heat content, until a new equilibrium state is established between radiative forcings and the climate response.\n\nSection::::Energy budget.\n", "Geoneutrino detectors can detect the decay of U and Th and thus allow estimation of their contribution to the present radiogenic heat budget, while U and K are not thus detectable. Regardless, K is estimated to contribute 4 TW of heating. However, due to the short half-lives the decay of U and K contributed a large fraction of radiogenic heat flux to the early Earth, which was also much hotter than at present. Initial results from measuring the geoneutrino products of radioactive decay from within the Earth, a proxy for radiogenic heat, yielded a new estimate of half of the total Earth internal heat source being radiogenic, and this is consistent with previous estimates.\n" ]
[ "The earth's core should cool down over time." ]
[ "Radioactive decay produces heat and keeps the earth's core hot." ]
[ "false presupposition", "normal" ]
[ "The earth's core should cool down over time.", "The earth's core should cool down over time." ]
[ "false presupposition", "normal" ]
[ "Radioactive decay produces heat and keeps the earth's core hot.", "Radioactive decay produces heat and keeps the earth's core hot." ]
2018-00518
What prevents some parasites from infecting us, while other animals get infected?
Hygiene. Humans get lots of parasites: worms, lice, bedbugs, ticks, leeches and whatnot. What keeps them away is hygiene, like not drinking from puddles, not eating sand, wearing clean clothes, etc. Also, many parasites such as lice are highly specialized sub-species that only infect one particular animal (this is also the case with some worms, but not, for example, ticks, since they just wait for any warm-blooded animal). The lice your dog gets can't survive on you. Humans have two lice species: the ones that infect your head and the other ones that -ahem- infect you in the genital region. Genetic studies have shown that these two species once emerged from one and the same species and were closely related to the lice species that is specific to other primates. When humans lost large parts of their body hair during evolution, the lice could no longer move from the lower part of the body to the upper part, slowly evolving into two different sub-species that are now perfectly adapted to their different living conditions.
[ "Other risks that can lead people to acquire parasites are walking with barefeet, inadequate disposal of feces, lack of hygiene, close contact with someone carrying specific parasites, and eating undercooked foods, unwashed fruits and vegetables or foods from contaminated regions.\n\nParasites can also be transferred to their host by the bite of an insect vector, i.e. mosquito, bed bug, fleas.\n\nSection::::Treatment.\n\nParasitic infections can usually be treated with antiparasitic drugs.\n", "Parasites in fish are a common natural occurrence. Parasites can provide information about host population ecology. In fisheries biology, for example, parasite communities can be used to distinguish distinct populations of the same fish species co-inhabiting a region. Additionally, parasites possess a variety of specialized traits and life-history strategies that enable them to colonize hosts. Understanding these aspects of parasite ecology, of interest in their own right, can illuminate parasite-avoidance strategies employed by hosts.\n", "Just as humans are subject to infections by the apicomplexans \"Plasmodium\" and \"Cryptosporidium\", animals are also subject to infection by apicomplexans including \"Toxoplasma\", \"Babesia\", \"Neospora\", and \"Eimeria\". It is said anecdotally, that almost every animal on earth has one or more species of apicomplexan parasite that challenge it. The economic burden from apicomplexan parasites is estimated in the billions of dollars, (see also Malaria) on top of the human and animal costs of these organisms. An increased understanding of the evolutionary roles and functions of apicoplasts and apical complexes can impact on research about the apicomplexan parasites of livestock animals, making \"C. velia\" of interest in an agricultural context as well as in the medical and ecological fields.\n", "Some parasites modify host behaviour in order to increase their transmission between hosts, often in relation to predator and prey (parasite increased trophic transmission). For example, in the California coastal salt marsh, the fluke \"Euhaplorchis californiensis\" reduces the ability of its killifish host to avoid predators. This parasite matures in egrets, which are more likely to feed on infected killifish than on uninfected fish. Another example is the protozoan \"Toxoplasma gondii\", a parasite that matures in cats but can be carried by many other mammals. Uninfected rats avoid cat odors, but rats infected with \"T. gondii\" are drawn to this scent, which may increase transmission to feline hosts. The malaria parasite modifies the skin odour of its human hosts, increasing their attractiveness to mosquitoes and hence improving the chance that the parasite will be transmitted.\n", "Below are some life cycles of fish parasites that can infect humans:\n\nSection::::See also.\n\nBULLET::::- Bath treatment (fishkeeping)\n\nBULLET::::- Cyanotoxin\n\nBULLET::::- Diseases and parasites in cod\n\nBULLET::::- European Community Reference Laboratory for Fish Diseases\n\nBULLET::::- Fish farming\n\nBULLET::::- Fish kill\n\nBULLET::::- Fish toxins\n\nBULLET::::- Fish medicine\n\nBULLET::::- List of aquarium diseases\n\nBULLET::::- Mathematical modelling of infectious disease\n\nBULLET::::- Red tide\n\nBULLET::::- Veterinary parasitology\n\nSection::::References.\n\nBULLET::::- U.S. 
Food and Drug Administration (FDA) (2001) Compliance Regulatory Information: Fish and Fisheries Products Hazards and Controls Guidance Third edition.\n\nBULLET::::- Rohde, Klaus (2005) \"Marine Parasitology\" Csiro Publishing. .\n", "Persistent infections cause millions of deaths globally each year. Chronic infections by parasites account for a high morbidity and mortality in many underdeveloped countries.\n\nSection::::Pathophysiology.:Transmission.\n\nFor infecting organisms to survive and repeat the infection cycle in other hosts, they (or their progeny) must leave an existing reservoir and cause infection elsewhere. Infection transmission can take place via many potential routes:\n", "More than 40 species of parasites may reside on the skin and internally of the ocean sunfish, motivating the fish to seek relief in a number of ways. \n", "Obligate intracellular parasites of humans include:\n\nBULLET::::- Viruses\n\nBULLET::::- Certain bacteria, including:\n\nBULLET::::- \"Chlamydia\", and closely related species.\n\nBULLET::::- \"Rickettsia\"\n\nBULLET::::- \"Coxiella\"\n\nBULLET::::- Certain species of \"Mycobacterium\" such as \"Mycobacterium leprae\" and \"Mycobacterium tuberculosis\"\n\nBULLET::::- Certain protozoa, including:\n\nBULLET::::- Apicomplexans (\"Plasmodium\" spp., \"Toxoplasma gondii\" and \"Cryptosporidium parvum\")\n\nBULLET::::- Trypanosomatids (\"Leishmania\" spp. and \"Trypanosoma cruzi\")\n\nBULLET::::- Certain fungi\n\nBULLET::::- \"Pneumocystis jirovecii\"\n\nThe mitochondria in eukaryotic cells may also have originally been such parasites, but ended up forming a mutualistic relationship (endosymbiotic theory).\n", "Parasitic disease\n\nA parasitic disease, also known as parasitosis, is an infectious disease caused or transmitted by a parasite. Many parasites do not cause diseases as it may eventually lead to death of both organism and host. Parasitic diseases can affect practically all living organisms, including plants and mammals. The study of parasitic diseases is called parasitology.\n\nSome parasites like \"Toxoplasma gondii\" and \"Plasmodium\" spp. can cause disease directly, but other organisms can cause disease by the toxins that they produce.\n\nSection::::Signs and symptoms.\n", "Human parasite\n\nHuman parasites include various protozoa and worms that may infect humans that cause parasitic diseases.\n\nHuman parasites are divided into endoparasites, which cause infection inside the body, and ectoparasites, which cause infection superficially within the skin.\n", "However, not all parasites want to keep their hosts alive, and there are parasites with multistage life cycles who go to some trouble to kill their host. For example, some tapeworms make some fish behave in such a way that a predatory bird can catch it. The predatory bird is the next host for the parasite in the next stage of its life cycle. Specifically, the tapeworm \"Schistocephalus solidus\" turns infected threespine stickleback white, and then makes them more buoyant so that they splash along at the surface of the water, becoming easy to see and easy to catch for a passing bird.\n", "Section::::Effects of parasitic worms.:Negative effects.:HIV.\n\nBecause the two diseases are abundant in developing countries, there are many patients with both HIV (Human immunodeficiency virus) and parasites, and specifically bloodflukes. In his article, Dr. Kamal relates the findings that those infected with parasites are more likely to be infected by HIV. 
However, it is disputed whether or not the viral infection is more severe because of the parasites.\n\nSection::::Effects of parasitic worms.:Negative effects.:Tuberculosis.\n", "Usually parasites (and pathogens) need to avoid killing their hosts, since extinct hosts can mean extinct parasites. Evolutionary constraints may operate so parasites avoid killing their hosts, or the natural variability in host defensive strategies may suffice to keep host populations viable. Parasite infections can impair the courtship dance of male threespine sticklebacks. When that happens, the females reject them, suggesting a strong mechanism for the selection of parasite resistance.\"\n", "Parasites provide an opportunity for the transfer of genetic material between species. On rare, but significant, occasions this may facilitate evolutionary changes that would not otherwise occur, or that would otherwise take even longer.\n\nBelow are some life cycles of fish parasites:\n\nSection::::Cleaner fish.\n", "Among the behavioral changes caused by parasites is carelessness, making their hosts easier prey. The protozoan \"Toxoplasma gondii\", for example, infects small rodents and causes them to become careless and attracted to the smell of feline urine, which increases their risk of predation and the parasite's chance of infecting a cat, its definitive host.\n\nParasites may alter the host's behavior by infecting the host's central nervous system, or by altering its neurochemical communication, studied in neuro-parasitology.\n\nSection::::Behavioral change.\n\nSection::::Behavioral change.:Types.\n", "Trophically transmitted parasites are transmitted by being eaten by a host. They include trematodes (all except schistosomes), cestodes, acanthocephalans, pentastomids, many round worms, and many protozoa such as \"Toxoplasma\". They have complex life cycles involving hosts of two or more species. In their juvenile stages, they infect and often encyst in the intermediate host. When this animal is eaten by a predator, the definitive host, the parasite survives the digestion process and matures into an adult; some live as intestinal parasites. Many trophically transmitted parasites modify the behaviour of their intermediate hosts, increasing their chances of being eaten by a predator. Like directly transmitted parasites, the distribution of trophically transmitted parasites among host individuals is aggregated. Coinfection by multiple parasites is common. Autoinfection, where (by exception) the whole of the parasite's life cycle takes place in a single primary host, can sometimes occur in helminths such as \"Strongyloides stercoralis\".\n", "Holistic and interdisciplinary approaches to the study of human disease have revealed a reciprocal relationship between humans and parasites. The variety of parasites found within the human body often reflects the diversity of the environment in which that individual resides. For instance, Bushmen and Australian Aborigines have half as many intestinal parasites as African and Malaysian hunter-gatherers living in a species-rich tropical rainforest. Infectious diseases can be either chronic or acute, and epidemic or endemic, impacting the population in any given community to different extents. Thus, human-mediated disturbance can either increase or decrease species diversity in a landscape, causing a corresponding change in pathogenic diversity.\n", "Parasites can provide information about host population ecology. 
In fisheries biology, for example, parasite communities can be used to distinguish distinct populations of the same fish species co-inhabiting a region. Additionally, parasites possess a variety of specialized traits and life-history strategies that enable them to colonize hosts. Understanding these aspects of parasite ecology, of interest in their own right, can illuminate parasite-avoidance strategies employed by hosts.\n\nSection::::Fields.:Conservation biology of parasites.\n", "Disease is a prime agent affecting fish mortality, especially when fish are young. Fish can limit the impacts of pathogens and parasites with behavioural or biochemical means, and such fish have reproductive advantages. Interacting factors result in low grade infection becoming fatal diseases. In particular, things that causes stress, such as natural droughts or pollution or predators, can precipitate outbreak of disease.\n\nDisease can also be particularly problematic when pathogens and parasites carried by introduced species affect native species. An introduced species may find invading easier if potential predators and competitors have been decimated by disease.\n", "Broadcast 14 November 1990, this episode focuses on those species that co-operate and depend on (or exploit) others. Spotted deer follow langur monkeys as they travel from tree to tree, eating any leaves that get dropped from above. In return, the deer serve as a lookout when the primates are feeding on the ground. Underwater, a hermit crab is shown adding sea anemones to its shell in order to protect itself from attack by an octopus, and a goby assists a virtually blind shrimp. Fleas, lice and mites are parasites: they share no mutual partnership and instead take advantage of creatures for food or shelter. However, parasites have their predators, and an example are the finches of the Galápagos Islands that clear the resident giant tortoises of their ticks, and oxpeckers, which do the same for giraffes in Africa (and even use its fur to line their nests). Some fish regularly clean others, and wrasse and shrimp appear to specialise in this regard, as do remora, which permanently hang on to their hosts. One parasite that grows inside its host is the fluke, and one is shown gestating inside a snail, having previously been unknowingly eaten. Because it needs to transfer to a bird's gut to develop further, it causes the snail to advertise its presence to allow itself to be consumed — thus completing the circle. However, some microscopic creatures inhabit the stomachs of large herbivores in order to break down the cellulose of their diet, thereby aiding their digestion.\n", "The parasitologist F.E.G. Cox noted that \"Humans are hosts to nearly 300 species of parasitic worms and over 70 species of protozoa, some derived from our primate ancestors and some acquired from the animals we have domesticated or come in contact with during our relatively short history on Earth\".\n", "Infection with \"Eimeria\" results in life-long immunity to that particular parasite species, but does not give cross protection against other species. For these reasons, vaccines for control seem promising, of which live attenuated vaccines are most effective. However, the search for highly immunogenic antigens and overcoming antigenic variation of the parasites remains a challenge. Immunity to the parasite varies depending on parasite and host species, as well as the site of invasion. CD4+ T cells and interferon gamma (γ) are crucial components of natural immunity to infection. 
Humoral immunity is thought to play little role in protection, and is most likely mediated through secretory IgA antibodies.\n", "While some parasites exploit their hosts' typical immune responses, others seem to alter the immune response itself. For example, the typical immune response in rodents is characterized by heightened anxiety. Infection with \"Toxoplasma gondii\" inhibits this response, increasing the risk of predation by \"T. gondii\"s subsequent hosts. Research suggests that the inhibited anxiety-response could be the result of immunological damage to the limbic system.\n\nSection::::Mechanisms.:Altered neurochemical communication.\n", "Conservation biology of parasites\n\nA large proportion of living species on Earth live a parasitic way of life. Parasites have traditionally been seen as targets of eradication efforts, and they have often been overlooked in conservation efforts. In the case of parasites living in the wild – and thus harmless to humans and domesticated animals – this view is changing.\n\nSection::::Endangered parasite species.\n", "Veterinary parasitology studies both external and internal parasites in animals. External parasites, such as fleas, mites, ticks and mosquitoes can cause skin irritation and are often carriers of other diseases or of internal parasites.\n\nSection::::Diseases.:Infectious diseases.:Parasites.:External parasites.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-17592
When animals feel pain (e.g. when lobsters are put in a boiling pot of water), is it the mental equivalent of how humans feel pain?
They feel pain via their nervous system, but supposedly it isn't connected to an emotional response the way it is in humans.
[ "Continuing into the 1990s, discussions were further developed on the roles that philosophy and science had in understanding animal cognition and mentality. In subsequent years, it was argued there was strong support for the suggestion that some animals (most likely amniotes) have at least simple conscious thoughts and feelings and that the view animals feel pain differently to humans is now a minority view.\n\nSection::::Background.:Scientific investigation.\n\nIn the 20th and 21st centuries, there were many scientific investigations of pain in non-human animals.\n\nSection::::Background.:Scientific investigation.:Mammals.\n\nAt the turn of the century, studies were published showing that arthritic rats self-select analgesic opiates.\n", "Continuing into the 1990s, discussions were further developed on the roles that philosophy and science had in understanding animal cognition and mentality. In subsequent years, it was argued there was strong support for the suggestion that some animals (most likely amniotes) have at least simple conscious thoughts and feelings and that the view animals feel pain differently to higher primates is now a minority view.\n\nSection::::Background.:Scientific investigation.\n\nIn the 20th- and 21st-century, there were many scientific investigations of pain in non-human animals.\n\nSection::::Background.:Scientific investigation.:Mammals.\n\nAt the turn of the century, studies were published showing that arthritic rats self-select analgesic opiates.\n", "Pain in cephalopods\n\nPain in cephalopods is a contentious issue. Pain is a complex mental state, with a distinct perceptual quality but also associated with suffering, which is an emotional state. Because of this complexity, the presence of pain in non-human animals, or another human for that matter, cannot be determined unambiguously using observational methods, but the conclusion that animals experience pain is often inferred on the basis of likely presence of phenomenal consciousness which is deduced from comparative brain physiology as well as physical and behavioural reactions.\n", "Continuing into the 1990s, discussions were further developed on the roles that philosophy and science had in understanding animal cognition and mentality. In subsequent years, it was argued there was strong support for the suggestion that some animals (most likely amniotes) have at least simple conscious thoughts and feelings and that the view animals feel pain differently to humans is now a minority view.\n\nSection::::Background.:Scientific investigation.\n\nIn the 20th and 21st centuries, there were many scientific investigations of pain in non-human animals.\n\nSection::::Background.:Scientific investigation.:Mammals.\n\nAt the turn of the century, studies were published showing that arthritic rats self-select analgesic opiates.\n", "The second component is the experience of \"pain\" itself, or suffering – the internal, emotional interpretation of the nociceptive experience. Again in humans, this is when the withdrawn finger begins to hurt, moments after the withdrawal. Pain is therefore a private, emotional experience. Pain cannot be directly measured in other animals, including other humans; responses to putatively painful stimuli can be measured, but not the experience itself. To address this problem when assessing the capacity of other species to experience pain, argument-by-analogy is used. 
This is based on the principle that if an animal responds to a stimulus in a similar way to ourselves, it is likely to have had an analogous experience.\n", "Sometimes a distinction is made between \"physical pain\" and \"emotional\" or \"psychological pain\". Emotional pain is the pain experienced in the absence of physical trauma, e.g. the pain experienced by humans after the loss of a loved one, or the break-up of a relationship. It has been argued that only primates and humans can feel \"emotional pain\", because they are the only animals that have a neocortex – a part of the brain's cortex considered to be the \"thinking area\". However, research has provided evidence that monkeys, dogs, cats and birds can show signs of emotional pain and display behaviours associated with depression during painful experience, i.e. lack of motivation, lethargy, anorexia, unresponsiveness to other animals.\n", "Continuing into the 1990s, discussions were further developed on the roles that philosophy and science had in understanding animal cognition and mentality. In subsequent years, it was argued there was strong support for the suggestion that some animals (most likely amniotes) have at least simple conscious thoughts and feelings and that the view animals feel pain differently to humans is now a minority view.\n\nSection::::Background.:Scientific investigation.\n\nIn the 20th and 21st centuries, there were many scientific investigations of pain in non-human animals.\n\nSection::::Background.:Scientific investigation.:Mammals.\n\nIn 2001 studies were published showing that arthritic rats self-select analgesic opiates.\n", "Other researchers also believe that animal consciousness does not require a neocortex, but can arise from homologous subcortical brain networks. It has been suggested that brainstem circuits can generate pain. This includes research with anencephalic children who, despite missing large portions of their cortex, express emotions. There is also evidence from activation studies showing brainstem mediated feelings in normal humans and foetal withdrawal responses to noxious stimulation but prior to development of the cortex.\n", "Pain is a complex mental state, with a distinct perceptual quality but also associated with suffering, which is an emotional state. Because of this complexity, the presence of pain in non-human animals cannot be determined unambiguously using observational methods, but the conclusion that animals experience pain is often inferred on the basis of likely presence of phenomenal consciousness which is deduced from comparative brain physiology as well as physical and behavioural reactions.\n", "The idea that non-human animals might not feel pain goes back to the 17th-century French philosopher, René Descartes, who argued that animals do not experience pain and suffering because they lack consciousness. In 1789, the British philosopher and social reformist, Jeremy Bentham, addressed in his book \"An Introduction to the Principles of Morals and Legislation\" the issue of our treatment of animals with the following often quoted words: \"The question is not, Can they reason? nor, can they talk? but, Can they suffer?\"\n", "Sometimes a distinction is made between \"physical pain\" and \"emotional\" or \"psychological pain\". Emotional pain is the pain experienced in the absence of physical trauma, e.g. the pain experienced after the loss of a loved one, or the break-up of a relationship. 
It has been argued that only primates can feel \"emotional pain\", because they are the only animals that have a neocortex – a part of the brain's cortex considered to be the \"thinking area\". However, research has provided evidence that monkeys, dogs, cats and birds can show signs of emotional pain and display behaviours associated with depression during painful experience, i.e. lack of motivation, lethargy, anorexia, unresponsiveness to other animals.\n", "There is controversy about whether cephalopods have the capability to experience pain. This mainly relates to differences between the nervous systems of different taxa. Reviews have been published arguing that fish cannot feel pain because they lack a neocortex in the brain. If true, this would also rule out pain perception in most mammals, all birds, reptiles and cephalopods. However, the \"Cambridge Declaration on Consciousness\" published in 2012, states that the absence of a neocortex does not appear to preclude an organism from experiencing affective states.\n", "The nervous system of cephalopods is the most complex of all the invertebrates and their brain-to-body-mass ratio falls between that of endothermic and ectothermic vertebrates. The brain is protected in a cartilaginous cranium.\n\nThe possibility that non-human animals may be capable of perceiving pain has a long history. Initially, this was based around theoretical and philosophical argument, but more recently has turned to scientific investigation.\n\nSection::::Background.:Philosophy.\n", "Invertebrate nervous systems are very unlike those of vertebrates and this dissimilarity has sometimes been used to reject the possibility of a pain experience in invertebrates. In humans, the neocortex of the brain has a central role in pain and it has been argued that any species lacking this structure will therefore be incapable of feeling pain. However, it is possible that different structures may be involved in the pain experience of other animals in the way that, for example, crustacean decapods have vision despite lacking a human visual cortex.\n", "In 2014, the veterinary \"Journal of Small Animal Practice\" published an article on the recognition of pain which started – \"The ability to experience pain is universally shared by all mammals...\" and in 2015, it was reported in the science journal \"Pain\", that several mammalian species (rat, mouse, rabbit, cat and horse) adopt a facial expression in response to a noxious stimulus that is consistent with the expression of humans in pain.\n\nSection::::Background.:Scientific investigation.:Birds.\n", "The idea that non-human animals might not feel pain goes back to the 17th-century French philosopher, René Descartes, who argued that animals do not experience pain and suffering because they lack consciousness. In 1789, the British philosopher and social reformist, Jeremy Bentham, addressed in his book \"An Introduction to the Principles of Morals and Legislation\" the issue of our treatment of animals with the following often quoted words: \"The question is not, Can they reason? nor, can they talk? but, Can they suffer?\"\n", "The second component is the experience of \"pain\" itself, or suffering – the internal, emotional interpretation of the nociceptive experience. This is when the withdrawn finger begins to hurt, moments after the withdrawal. Pain is therefore a private, emotional experience. 
Pain cannot be directly measured in other animals; responses to putatively painful stimuli can be measured, but not the experience itself. To address this problem when assessing the capacity of other species to experience pain, argument-by-analogy is used. This is based on the principle that if an animal responds to a stimulus in a similar way, it is likely to have had an analogous experience.\n", "The idea that animals might not feel pain as human beings feel it traces back to the 17th-century French philosopher, René Descartes, who argued that animals do not experience pain and suffering because they lack consciousness. Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, writes that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. In his interactions with scientists and other veterinarians, he was regularly asked to \"prove\" that animals are conscious, and to provide \"scientifically acceptable\" grounds for claiming that they feel pain. Carbone writes that the view that animals feel pain differently is now a minority view. Academic reviews of the topic are more equivocal, noting that although the argument that animals have at least simple conscious thoughts and feelings has strong support, some critics continue to question how reliably animal mental states can be determined. However, some canine experts are stating that, while intelligence does differ animal to animal, dogs have the intelligence of a two to two-and-a-half year old. This does support the idea that dogs, at the very least, have some form of consciousness. The ability of invertebrates to experience pain and suffering is less clear, however, legislation in several countries (e.g. U.K., New Zealand, Norway) protects some invertebrate species if they are being used in animal testing.\n", "At the same time as the investigations using arthritic rats, studies were published showing that birds with gait abnormalities self-select for a diet that contains carprofen, a human analgesic. In 2005, it was written \"Avian pain is likely analogous to pain experienced by most mammals\" and in 2014, \"...it is accepted that birds perceive and respond to noxious stimuli and that birds feel pain\"\n\nSection::::Background.:Scientific investigation.:Reptiles and amphibians.\n\nVeterinary articles have been published stating both reptiles and amphibians experience pain in a way analogous to humans, and that analgesics are effective in these two classes of vertebrates.\n", "Section::::The experience of pain.:Physical pain.\n\nA definition of pain widely accepted and used by scientists is \"an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage\". \n\nThe nerve impulses of the nociception response may be conducted to the brain thereby registering the location, intensity, quality and unpleasantness of the stimulus. This subjective component of pain involves conscious awareness of both the sensation and the unpleasantness (the aversive, negative affect). The brain processes underlying conscious awareness of the unpleasantness (suffering), are not well understood.\n", "Sometimes a distinction is made between \"physical pain\" and \"emotional\" or \"psychological pain\". Emotional pain is the pain experienced in the absence of physical trauma, e.g. 
the pain experienced by humans after the loss of a loved one, or the break-up of a relationship. It has been argued that only primates and humans can feel \"emotional pain\", because they are the only animals that have a neocortex – a part of the brain's cortex considered to be the \"thinking area\". However, research has provided evidence that monkeys, dogs, cats and birds can show signs of emotional pain and display behaviours associated with depression during painful experience, i.e. lack of motivation, lethargy, anorexia, unresponsiveness to other animals.\n", "In 2014, the veterinary \"Journal of Small Animal Practice\" published an article on the recognition of pain which started – \"The ability to experience pain is universally shared by all mammals...\".\n\nSection::::Background.:Scientific investigation.:Birds.\n\nAt the same time as the investigations using arthritic rats, studies were published showing that birds with gait abnormalities self-select for a diet that contains carprofen, a human analgesic. In 2005, it was written \"Avian pain is likely analogous to pain experienced by most mammals\" and in 2014, \"it is accepted that birds perceive and respond to noxious stimuli and that birds feel pain.\"\n", "Section::::The experience of pain.:Reflex response to painful stimuli.\n\nNociception usually involves the transmission of a signal along nerve fibers from the site of a noxious stimulus at the periphery to the spinal cord. Although this signal is also transmitted on to the brain, a reflex response, such as flinching or withdrawal of a limb, is produced by return signals originating in the spinal cord. Thus, both physiological and behavioral responses to nociception can be detected, and no reference need be made to a conscious experience of pain. Based on such criteria, nociception has been observed in all major animal taxa.\n", "The second component is the experience of \"pain\" itself, or suffering – the internal, emotional interpretation of the nociceptive experience. Again in humans, this is when the withdrawn finger begins to hurt, moments after the withdrawal. Pain is therefore a private, emotional experience. Pain cannot be directly measured in other animals, including other humans; responses to putatively painful stimuli can be measured, but not the experience itself. To address this problem when assessing the capacity of other species to experience pain, argument-by-analogy is used. This is based on the principle that if an animal responds to a stimulus in a similar way to ourselves, it is likely to have had an analogous experience.\n", "Scientists have proposed that in conjunction with argument-by-analogy, criteria of physiology or behavioural responses can be used to assess the possibility that non-human animals can perceive pain. In 2015, Lynne Sneddon, Director of Bioveterinary Science at the University of Liverpool, published a review of the evidence gathered investigating the suggestion that cephalopods can experience pain. The review included the following summary table -\n\nIn the table, indicates positive evidence and not yet denotes it has not been tested or there is insufficient evidence.\n\nSection::::Societal implications.\n" ]
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-00158
Why do rates of growth use exponents and rates of decay use Euler's number?
They both actually use the same formula: N = a(1 + r)^t, where a = initial value, r = growth rate (negative for decay), t = time. However, this formula assumes discrete growth/decay, like interest compounded monthly. When you have a bacterial culture, the bacteria don't wait until the end of each hour to divide; they are doing so continuously. You can try to simulate this by expressing the rate over increasingly smaller intervals of time. Let's say each hour, 10% of the bacteria in your culture will reproduce, and you want to know how much the culture will grow over 10 hours. We could use an interval of 10 hours, naively multiplying the growth rate by 10: (1 + 10 * 0.1)^1 = 2. This assumes the bacteria wait until the end of the 10 hours and all of the growth happens in one step, which of course is wrong. They are dividing continuously, and some of the new ones will produce new ones, compounding our growth. So let's try 1 hour instead: (1 + 1 * 0.1)^10 = 2.59. That's quite a bit more. Better, but they don't wait until the end of the hour either, so let's try minutes: (1 + 1/60 * 0.1)^(10 * 60) = 2.716. And seconds: (1 + 1/3600 * 0.1)^(10 * 3600) = 2.7182. Is that number starting to look familiar? Of course it does: it is e, Euler's number (it comes out to e exactly here because r * t = 0.1 * 10 = 1). In fact, this is one of the ways e is defined, as the limit of (1 + 1/n)^n as n approaches infinity. This gives us our formula for continuous exponential growth: N = a * e^(rt). **TL;DR** It isn't about growth vs. decay, it is discrete vs. continuous.
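The convergence described above is easy to check numerically. The short Python sketch below (an illustration added here, not part of the quoted answer) compares discrete compounding at progressively finer intervals with the continuous formula a * e^(rt), using the same 10%-per-hour, 10-hour example; flipping the sign of r shows that decay behaves exactly the same way.

```python
# Illustrative sketch only: compare discrete compounding with continuous
# exponential growth/decay.  The rate r = 0.1 per hour and t = 10 hours
# mirror the bacteria example in the answer above; r = -0.1 shows decay.
import math

def discrete(a, r, t, steps_per_hour):
    # Apply the per-step rate r/steps_per_hour over t*steps_per_hour steps.
    n = steps_per_hour
    return a * (1 + r / n) ** (n * t)

def continuous(a, r, t):
    # The limit of the discrete formula as the step count goes to infinity.
    return a * math.exp(r * t)

a, t = 1.0, 10
for r in (0.1, -0.1):
    for n in (1, 60, 3600):          # hourly, per-minute, per-second compounding
        print(f"r={r:+.1f}, n={n:>4}: {discrete(a, r, t, n):.5f}")
    print(f"r={r:+.1f}, continuous: {continuous(a, r, t):.5f}  (a * e^(r*t))")
```

With r = +0.1 the discrete values climb from 2.59374 toward e ≈ 2.71828, and with r = -0.1 they move from 0.34868 toward 1/e ≈ 0.36788, which is the point of the TL;DR: the difference is discrete vs. continuous compounding, not growth vs. decay.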
[ "Since there may also be voltage dependence in other factors in a Fowler-Nordheim-type equation, in particular in the notional emission area \"A\" and in the local work-function, it is not necessarily expected that \"κ\" for CFE from a metal of local work-function 4.5 eV should have the value \"κ\" = 1.23, but there is certainly no reason to expect that it will have the original Fowler-Nordheim value \"κ\" = 2.\n", "In addition, there is an inflection point in the graph of the generalized logistic function when\n\nand one in the graph of the Gompertz function when\n\nSection::::Applications.:Gomp-ex law of growth.\n", "For example, if an initial population of formula_12bacteria doubles every twenty minutes, then at time interval formula_13 it is given by the equationbrbr\n\nformula_14,brbr\n\nwhere formula_13 is the number of twenty-minute intervals that have passed. However, we usually prefer to measure time in hours or minutes, and it is not difficult to change the units of time. For example, since 1 hour is 3 twenty-minute intervals, the population in one hour is formula_16. The hourly growth factor is 8, which means that for every 1 at the beginning of the hour, there are 8 by the end. Indeed, brbr\n\nformula_17brbr\n", "BULLET::::- Edward Sang, \"On the precautions to be taken in recording and using the records of original computations\", Proceedings of the Royal Society of Edinburgh, volume 9, 1878, 349–352 link\n\nBULLET::::- Edward Sang, \"On the tabulation of all fractions having their values between two prescribed limits\", Transactions of the Royal Society of Edinburgh, volume 28, 1878, 287–298\n\nBULLET::::- Edward Sang, \"A New table of seven-place logarithms of all numbers continuously up to 200000\", Edinburgh, 1878 (second edition)\n", "Strictly, if the barrier field in Fowler-Nordheim 1928 theory is exactly proportional to the applied voltage, and if the emission area is independent of voltage, then the Fowler-Nordheim 1928 theory predicts that plots of the form (log(\"i\"/\"V\") vs. 1/\"V\") should be exact straight lines. However, contemporary experimental techniques were not good enough to distinguish between the Fowler-Nordheim theoretical result and the Millikan-Lauritsen experimental result.\n\nThus, by 1928 basic physical understanding of the origin of CFE from bulk metals had been achieved, and the original Fowler-Nordheim-type equation had been derived.\n", "Since that time, the proliferation of modern high-speed desktop computers has made it possible for amateurs, with the right hardware, to compute trillions of digits of \"e\".\n\nSection::::In computer culture.\n\nIn contemporary internet culture, individuals and organizations frequently pay homage to the number .\n", "As mentioned above, the units for k are inverse time. If we were to take the reciprocal of this, we would be left with units of time. For this reason, we often state that the lifetime of a species which undergoes first order decay is equal to the reciprocal of \"k\". Consider, now, what would happen if we were to set the time, \"t\", to the reciprocal of the rate constant, \"k\" such that \"t\" = 1/\"k\". 
This would yield \n", "For example, Boltzmann's standard definition of entropy \"S\" = \"k\" ln \"W\" (where \"W\" is the number of ways of arranging a system and \"k\" is Boltzmann's constant) can also be written more simply as just \"S\" = Log(\"W\"), where \"Log\" here denotes the indefinite logarithm, and we let \"k\" = [log e]; that is, we identify the physical entropy unit \"k\" with the mathematical unit [log e]. This identity works because\n", "where \"x\" is the value of \"x\" at time 0. This formula is transparent when the exponents are converted to multiplication. For instance, with a starting value of 50 and a growth rate of per interval, the passage of one interval would give ; two intervals would give ; and three intervals would give . In this way, each increase in the exponent by a full interval can be seen to increase the previous total by another five percent. (The order of multiplication does not change the result based on the associative property of multiplication.)\n", "In the 1920s, empirical equations were used to find the power of \"V\" that appeared in the exponent of a semi-logarithmic equation assumed to describe experimental CFE results. In 1928, theory and experiment were brought together to show that (except, possibly, for very sharp emitters) this power is \"V\". It has recently been suggested that CFE experiments should now be carried out to try to find the power (\"κ\") of \"V\" in the pre-exponential of the following empirical CFE equation:\n\nwhere \"B\", \"C\" and \"κ\" are treated as constants.\n\nFrom eq. (42) it is readily shown that\n", "In 1951 he was invited to join a sub-committee of the S.M.A. which was looking into the feasibility of making it possible to use a particular M.K.S system in school science teaching. The sub-committee's report, which was published in 1954 under the title \"The Teaching of Electricity with special reference to the use of M.K.S units\", included a large section on experiments and the making of the necessary apparatus. In his preface, the chairman of the sub-committee wrote:\n", "For the last \"k\" epoch of a given era, it should be taken into account that at \"u\" = \"x\" 1 the greatest power is \"p\"(\"x\") (not \"p\"(\"x\") ). Therefore, for the density increase over the whole era one obtains\n\nTherefore, even at not very great \"k\" values, formula_72. During the next era (with a length \"k\" ' ) density will increase faster because of the increased starting amplitude \"A\"': formula_73, etc. These formulae illustrate the steep increase in matter density.\n\nSection::::Metric evolution.:Statistical analysis near the singularity.\n", "The number can be represented as a real number in a variety of ways: as an infinite series, an infinite product, a continued fraction, or a limit of a sequence. 
The chief among these representations, particularly in introductory calculus courses is the limit\n\ngiven above, as well as the series\n\ngiven by evaluating the above power series for at .\n\nLess common is the continued fraction\n\nwhich written out looks like\n\nThis continued fraction for converges three times as quickly:\n\nMany other series, sequence, continued fraction, and infinite product representations of have been developed.\n\nSection::::Representations.:Stochastic representations.\n", "and that formula_3 is the time at which the population of the assembly is reduced to 1/\"e\" ≈ 0.367879441 times its initial value.\n\nFor example, if the initial population of the assembly, \"N\"(0), is 1000, then the population at time formula_3, formula_9, is 368.\n\nA very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than \"e\". In that case the scaling time is the \"half-life\".\n\nSection::::Measuring rates of decay.:Half-life.\n", "Exponential decay occurs in the same way when the growth rate is negative. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay, the function values forming a geometric progression. In either exponential growth or exponential decay, the ratio of the rate of change of the quantity to its current size remains constant over time.\n\nThe formula for exponential growth of a variable \"x\" at the growth rate \"r\", as time \"t\" goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is\n", "If \"p\" is the unit of time the quotient \"t\"/\"p\" is simply the number of units of time. Using the notation \"t\" for the (dimensionless) number of units of time rather than the time itself, \"t\"/\"p\" can be replaced by \"t\", but for uniformity this has been avoided here. In this case the division by \"p\" in the last formula is not a numerical division either, but converts a dimensionless number to the correct quantity including unit.\n\nA popular approximated method for calculating the doubling time from the growth rate is the rule of 70,\n\ni.e. formula_9.\n", "For instance, in the equation below, the growth of population formula_1 is a function of the minimum of three Michaelis-Menten terms representing limitation by factors formula_2, formula_3 and formula_4.\n\nThe use of the equation is limited to a situation where there are steady state ceteris paribus conditions, and factor interactions are tightly controlled.\n\nSection::::Applications.:Protein nutrition.\n", "Section::::Work.:Individual growth model.\n\nThe individual growth model published by Ludwig von Bertalanffy in 1934 is widely used in biological models and exists in a number of permutations.\n\nIn its simplest version the so-called Bertalanffy growth equation is expressed as a differential equation of length (\"L\") over time (\"t\"):\n\nformula_1\n\nwhen formula_2 is the Bertalanffy growth rate and formula_3 the ultimate length of the individual. This model was proposed earlier by August Friedrich Robert Pūtter (1879-1929), writing in 1920.\n", "BULLET::::- Edward Sang, \"On the extension of Brouncker's method to the comparison of several magnitudes\", 1872 (Transactions of the Royal Society of Edinburgh, volume 26, part 1, pp. 
59–67)\n\nBULLET::::- Edward Sang, \"Account of the extension of the seven-place logarithmic tables, from 100,000 to 200,000\", Proceedings of the Royal Society of Edinburgh 7, 1872, 395 link\n\nBULLET::::- Edward Sang, \"On a singular case of rectification in lines of the fourth order\", Proceedings of the Royal Society of Edinburgh 7, 1872, 613-614 link\n", "Erdős in 1948 showed that the constant \"E\" is an irrational number. Later, Borwein provided an alternative proof.\n\nDespite its irrationality, the binary representation of the Erdős–Borwein constant may be calculated efficiently.\n\nSection::::Applications.\n\nThe Erdős–Borwein constant comes up in the average case analysis of the heapsort algorithm, where it controls the constant factor in the running time for converting an unsorted array of items into a heap.\n", "or, \n\nsee exercises 7-а and 37 respectively. By the way, Malmsten's integrals are also found to be closely connected to the Stieltjes constants.\n\nIn 1842, Malmsten also evaluated several important logarithmic series, among which we can find these two series\n\nand\n\nThe latter series was later rediscovered in a slightly different form by Ernst Kummer, who derived a similar expression\n", "Table 1 shows the jellium model calculation for van der Waals constant \"C\" and dynamical image plane \"Z\" of rare gas atoms on various metal surfaces. The increasing of \"C\" from He to Xe for all metal substrates is caused by the larger atomic polarizability of the heavier rare gas atoms. For the position of the dynamical image plane, it decreases with increasing dielectric function and is typically on the order of 0.2 Å.\n\nSection::::Physisorption potential.\n", "The first correction to the free electron model for jelium is from the Fock exchange contribution to electron-electron interactions. Adding this in, one has a total energy of\n\nwhere the negative term is due to exchange: exchange interactions lower the total energy. Higher order corrections to the total energy are due to electron correlation and if one decides to work in a series for small formula_13, one finds\n\nThe series is quite accurate for small formula_13 but of dubious value for formula_13 values found in actual metals.\n", "The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms calculated from the constant. It is assumed that the table was written by William Oughtred. The discovery of the constant itself is credited to Jacob Bernoulli in 1683, who attempted to find the value of the following expression (which is in fact ):\n", "Let m be the mass of the substance at any moment. We know that the rate of change of mass of substance at any moment is proportional to mass of substance at that moment. Thus, we can write\n\nThus, we can write\n\nThus lnm = kt + lnc\n\nwhere lnc is the constant of proportionality\n\nNow mass of substance initially(t=0) was M.Substituting this information in equation (1) and solving,we get c=M.Thus\n" ]
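Several of the decay-related passages above rely on the same standard first-order relations: a quantity with rate constant k decays as N(t) = N(0) * e^(-k*t), its mean lifetime is tau = 1/k (the time after which 1/e ≈ 0.368 of the initial amount remains), and its half-life is ln(2)/k. The minimal Python sketch below checks those relations numerically; the starting amount of 1000 echoes the example quoted above, while the value of k is an arbitrary assumption chosen only for illustration.

```python
# Minimal sketch of first-order exponential decay, matching the relations
# quoted in the passages above: N(t) = N0 * exp(-k*t), mean lifetime 1/k,
# half-life ln(2)/k.  The rate constant k is an assumed example value.
import math

N0 = 1000.0          # initial amount (as in the quoted example)
k = 0.05             # assumed decay constant, per unit time

tau = 1.0 / k                 # mean lifetime: time to fall to N0/e
half_life = math.log(2) / k   # time to fall to N0/2

def N(t):
    return N0 * math.exp(-k * t)

print(f"N(tau)       = {N(tau):.1f}  (N0/e = {N0 / math.e:.1f})")
print(f"N(half_life) = {N(half_life):.1f}  (N0/2 = {N0 / 2:.1f})")
```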
[ "Rates of growth and rates of decay are calculated using different methods.", "Growth uses exponents and decay uses Eulers number?" ]
[ "They both use the same formula.", "Exponents are used in approximations where the compounding is discrete instead of continuous. " ]
[ "false presupposition" ]
[ "Rates of growth and rates of decay are calculated using different methods.", "Growth uses exponents and decay uses Eulers number?" ]
[ "false presupposition", "false presupposition" ]
[ "They both use the same formula.", "Exponents are used in approximations where the compounding is discrete instead of continuous. " ]
2018-08394
Why did Egyptians worship the eye and the ability to pay attention?
They did not really worship the eye or the ability to pay attention. The eye symbol in Egypt was a representation of "The Eye of Horus", which was the symbol of Horus, who was the God of the Sky. Horus was also the primary enemy of Set, who was the God of the Desert, violence, and disorder. Horus was also the Patron God of the Pharaohs, and his symbol was a sign of royal authority. URL_0
[ "Likewise, cobra goddesses often represented the Eye. Among them was Wadjet, a tutelary deity of Lower Egypt who was closely associated with royal crowns and the protection of the king. Other Eye-associated cobra goddesses include the fertility deity Renenutet, the magician goddess Weret-hekau, and Meretseger, the divine protector of the burial grounds near the city of Thebes.\n", "The eyes of Egyptian deities, although they are aspects of the power of the gods who own them, sometimes take active roles in mythology, possibly because the word for \"eye\" in Egyptian, \"jrt\", resembles another word meaning \"do\" or \"act\". The presence of the feminine suffix \"-t\" in \"jrt\" may explain why these independent eyes were thought of as female. The Eye of Ra, in particular, is deeply involved in the sun god's creative actions.\n", "Section::::Worship.\n\nThe Eye of Ra was invoked in many areas of Egyptian religion, and its mythology was incorporated into the worship of many of the goddesses identified with it.\n", "The concept of the solar Eye as mother, consort, and daughter of a god was incorporated into royal ideology. Pharaohs took on the role of Ra, and their consorts were associated with the Eye and the goddesses equated with it. The sun disks and uraei that were incorporated into queens' headdresses during the New Kingdom reflect this mythological tie. The priestesses who acted as ceremonial \"wives\" of particular gods during the Third Intermediate Period (c. 1059–653 BC), such as the God's Wife of Amun, had a similar relationship with the gods they served. Amenhotep III even dedicated a temple at Sedeinga in Nubia to his wife, Tiye, as a manifestation of the Eye of Ra, paralleling the temple to Amenhotep himself at nearby Soleb.\n", "The Eye's importance extends to the afterlife as well. Egyptian funerary texts associate deceased souls with Ra in his nightly travels through the Duat, the realm of the dead, and with his rebirth at dawn. In these texts the Eye and its various manifestations often appear, protecting and giving birth to the deceased as they do for Ra. A spell in the \"Coffin Texts\" states that Bastet, as the Eye, illuminates the Duat like a torch, allowing the deceased to pass safely through its depths.\n", "The Egyptians associated many gods who took felid form with the sun, and many lioness deities, like Sekhmet, Menhit, and Tefnut, were equated with the Eye. Bastet was depicted as both a domestic cat and a lioness, and with these two forms she could represent both the peaceful and violent aspects of the Eye. Yet another goddess of the solar Eye was Mut, the consort of the god Amun, who was associated with Ra. Mut was first called the Eye of Ra in the late New Kingdom, and the aspects of her character that were related to the Eye grew increasingly prominent over time. Mut, too, could appear in both leonine and cat form.\n", "Section::::Society.:Religion.\n\nThe findings in the Eye Temple indicate that Tell Brak is among the earliest sites of organized religion in northern Mesopotamia. It is unknown to which deity the Eye Temple was dedicated, and the \"Eyes\" figurines appears to be votive offerings to that unknown deity. Michel Meslin hypothesized that the temple was the center of the Sumerian Innana or the Semitic Ishtar, and that the \"Eyes\" figurines were a representation of an all-seeing female deity.\n", "The Eye of Ra was involved in many areas of ancient Egyptian religion, including in the cults of the many goddesses who are equated with it. 
Its life-giving power was celebrated in temple rituals, and its dangerous aspect was invoked in the protection of the pharaoh, of sacred places, and of ordinary people and their homes.\n\nSection::::Roles.\n\nSection::::Roles.:Solar.\n", "The ancient Egyptian concept of the pharaonic soul was complicated and divided into several aspects. The \"ka\" aspect was a spirit double, created at birth, with the physical appearance of the person. After death, the \"ka\" would leave the body and travel through the underworld at night and return to its physical representation (the mummy, statue, or artworks) each morning. Pharaonic mortuary temples, which faced east toward the dawn sun and were affixed to the side of pyramids during the pyramid building era, featured a darkened inner sanctuary with a \"ka\" statue of the pharaoh. When the sun rose on the horizon, its rays would travel down the central passageway of the temple, and project the image of the sun onto the \"ka\" and, conversely, reflect the \"ka\" back to the sun, uniting the two. This is the conceptual leap that underpins Ancient Egyptian culture — the pharaoh’s soul road on a beam of light.\n", "The dual nature of the Eye goddess shows, as Graves-Brown puts it, that \"the Egyptians saw a double nature to the feminine, which encompassed both extreme passions of fury and love.\" This same view of femininity is found in texts describing human women, such as the \"Instruction of Ankhsheshonq\", which says a man's wife is like a cat when he can keep her happy and like a lioness when he cannot.\n\nSection::::Manifestations.\n", "The characteristics of the Eye of Ra were an important part of the Egyptian conception of female divinity in general, and the Eye was equated with many goddesses, ranging from very prominent deities like Hathor to obscure ones like Mestjet, a lion goddess who appears in only one known inscription. \n", "When a good person dies, they continue to live in the City of Light for the dead in Akhetaten. The conditions are the same after death.\n\nThe explanation as to why Aten could not be fully represented was that Aten was beyond creation. Thus the scenes of gods carved in stone previously depicted animals and human forms, now showed Aten as an orb above with life-giving rays stretching toward the royal figure. The king was depicted singularly in relation to divine power. This power transcended human or animal form.\n", "The violent form of the Eye was also invoked in religious ritual and symbolism as an agent of protection. The uraeus on royal and divine headdresses alludes to the role of the Eye goddesses as protectors of gods and kings. For similar reasons, uraei appear in rows atop shrines and other structures, surrounding and symbolically guarding them against hostile powers. Many temple rituals called upon Eye goddesses to defend the temple precinct or the resident deity. Often, the texts of such rituals specifically mention a set of four defensive uraei. These uraei are sometimes identified with various combinations of goddesses associated with the Eye, but they can also be seen as manifestations of \"Hathor of the Four Faces\", whose protection of the solar barque is extended in these rituals to specific places on earth.\n", "The deities associated with the Eye were not restricted to feline and serpent forms. Hathor's usual animal form is a cow, as is that of the closely linked Eye goddess Mehet-Weret. Nekhbet, a vulture goddess, was closely connected with Wadjet, the Eye, and the crowns of Egypt. 
Many Eye goddesses appear mainly in human form, including Neith, a sometimes warlike deity sometimes said to be the mother of the sun god, and Satet and Anuket, who were linked with the Nile cataracts and the inundation. Other such goddesses include Sothis, the deified form of the star of the same name, and Maat, the personification of cosmic order, who was connected with the Eye because she was said to be the daughter of Ra. Even Isis, who is usually the companion of Osiris rather than Ra, or Astarte, a deity of fertility and warfare who was imported from Canaan rather than native to Egypt, could be equated with the solar Eye.\n", "The Egyptians used to have a whole list of Gods who were worshipped during this period of time. These Gods were always represented in the form of humans or animals or as animal-headed humans. Some of these gods were specific to certain places or cities for example while others were more general and were worshipped on a larger scale. From early periods of the Egyptian kingdom solar gods such as Re had played a very important role in Egyptian state religion. This is mainly because the idea of the sun as a distant yet universal power was close enough to the prevailing ideas of the supreme power of the king both inside Egypt and beyond its borders. In the New Kingdom, the prominence and power of solar gods increased again. One of the most popular ones was the Aten, the visible sun-disk which can be seen traversing the sky each day. Akhenaten raised the Aten to the position of 'sole god', and it was represented as a disk with rays of light terminating in hands which reach out to the royal family, it was usually perceived as if it was offering the hieroglyphic sign for life. Akhenaten and his family are usually displayed while worshipping the Aten or even indulging in everyday activities beneath the disk that represents Aten. Art and test during this period always stressed on the idea of the tie between the king and the God. The king represents the link between the god and ordinary people. As a result, ordinary people had to focus on worshipping Akhenaten and the royal family instead of the Aten. And with worshipping the king they are getting closer to the god Aten. It is highly probable that Akhenaten's religion is not monotheistic. Even though the Aten is actually the only God worshipped and provided with temples during this period, other gods still existed and are sometimes mentioned in inscriptions. However, these solar gods are personifications of abstract concepts. This might be well understood as sometimes even the names of the Aten, which are written in cartouches like king's names, consist of a statement describing the Aten in terms of other gods. The idea of traditional gods was not highly tolerated by the majority of ordinary people however, teams of workmen were sent around the temples of Egypt where they called out the names and images of these gods wherever they occurred. A number of hymns to the Aten were composed during Akhenaten's reign in order to praise the Aten. These hymns provide an overview of what James Allen has described as the 'natural philosophy' of Akhenaten's religion. The wonders of the natural world are described to praise the universal power of the sun. All creatures celebrate when the sun rises and the nature becomes beautiful. While nasty things happen = at night when the sun is not present. Akhenaten decided that the worship of the Aten required a new location which is uncontaminated by the cults of traditional gods. 
That's why; he chose a site in Middle Egypt for a new capital city which he called Akhetaten, 'Horizon of the Aten'. It is a faraway site in the desert surrounded on three sides by cliffs and to the west by the Nile and is known today as el-Amarna. In the cliffs surrounding the city the king left a series of monumental inscriptions in which he explained his reasons for the move to this new site and his architectural intentions for the city in the form of lists of buildings that he either built or intended to build.\n", "The Egyptians often referred to the sun and the moon as the \"eyes” of particular gods. The right eye of the god Horus, for instance, was equated with the sun, and his left eye equated with the moon. At times the Egyptians called the lunar eye the \"Eye of Horus\", a concept with its own complex mythology and symbolism, and called the solar eye the \"Eye of Ra\"—Ra being the preeminent sun god in ancient Egyptian religion. However, in Egyptian belief, many terms and concepts are fluid, so the sun could also be called the \"Eye of Horus\".\n", "Section::::Ancient Egyptian funerary monuments.\n\nThe Ancient Egyptians may have also witnessed—and recorded—camera obscura effects. Ancient Egyptian texts, known collectively as the Books of the Netherworld, describe pharaonic funerary complexes as sun-powered resurrection machines that somehow affected a merger of the pharaoh's soul with the sun god. The continuation of the entire cosmos was dependent upon the sun god who was regenerated in darkness.\n", "Inverted images of the outside world, viewed by people located in the depths of a megalithic cairn, would very effectively generate a sense of \"otherness\". Furthermore, people in the Neolithic did not share present-day knowledge of optical physics, and would have had their own understandings. Even from a scientific perspective, however, the act of witnessing the projection of the sun’s disc, or human figures emerging from a stone wall, is rather suggestive of an encounter with the supernatural. It has been suggested that projections of the sun may have been interpreted as the manifestation of a Neolithic solar deity.\n", "In the third phase of the story, Horus competes with Set for the kingship. Their struggle encompasses a great number of separate episodes and ranges in character from violent conflict to a legal judgment by the assembled gods. In one important episode, Set tears out one or both of Horus' eyes, which are later restored by the healing efforts of Thoth or Hathor. For this reason, the Eye of Horus is a prominent symbol of life and well-being in Egyptian iconography. Because Horus is a sky god, with one eye equated with the sun and the other with the moon, the destruction and restoration of the single eye explains why the moon is less bright than the sun.\n", "A myth about the Eye, known from allusions in the \"Coffin Texts\" from the Middle Kingdom (c. 2055–1650 BC) and a more complete account in the Bremner-Rhind Papyrus from the Late Period (664–332 BC), demonstrates the Eye's close connection with Ra and Atum and her ability to act independently. The myth takes place before the creation of the world, when the solar creator—either Ra or Atum—is alone. Shu and Tefnut, the children of this creator god, have drifted away from him in the waters of Nu, the chaos that exists before creation in Egyptian belief, so he sends out his Eye to find them. 
The Eye returns with Shu and Tefnut but is infuriated to see that the creator has developed a new eye, which has taken her place. The creator god appeases her by giving her an exalted position on his forehead in the form of the uraeus, the emblematic cobra that appears frequently in Egyptian art, particularly on royal crowns. The equation of the Eye with the uraeus and the crown underlines the Eye's role as a companion to Ra and to the pharaoh, with whom Ra is linked. Upon the return of Shu and Tefnut, the creator god is said to have shed tears, although whether they are prompted by happiness at his children's return or distress at the Eye's anger is unclear. These tears give rise to the first humans. In a variant of the story, it is the Eye that weeps instead, so the Eye is the progenitor of humankind.\n", "In one myth, when Set and Horus were fighting for the throne after Osiris's death, Set gouged out Horus's left eye. The majority of the eye was restored by either Hathor or Thoth. When Horus's eye was recovered, he offered it to his father, Osiris, in hopes of restoring his life. Hence, the eye of Horus was often used to symbolise sacrifice, healing, restoration, and protection.\n\nSection::::As hieroglyph and symbol.\n", "The Eye's flight from and return to Egypt was a common feature of temple ritual in the Ptolemaic and Roman periods (305 BC – AD 390), when the new year and the Nile flood that came along with it were celebrated as the return of the Eye after her wanderings in foreign lands. The Egyptians built shrines along the river containing images of animals and dwarfs rejoicing at the goddess' arrival. Scholars do not know how well developed the myth and the corresponding rituals were in earlier times. One of the oldest examples is Mut's return to her home temple in Thebes, which was celebrated there annually as early as the New Kingdom. At the temple of Montu at Medamud, in a festival that may date back to the late Middle Kingdom, it was Montu's consort Raet-Tawy who was equated with Hathor and the Eye of Ra. The return of this Eye goddess, in fertile, moisture-bearing form, set the stage for her subsequent marriage to Montu and the birth of their mythological child, a form of Horus. The temple's new year festival celebrated her homecoming with drinking and dancing, paralleling the goddess' inebriated state after her pacification. In other cities, two goddesses were worshipped as the belligerent and peaceful forms of the Eye, as with Ayet and Nehemtawy at Herakleopolis or Satet and Anuket at Aswan.\n", "Frequently, two Eye-related goddesses appear together, representing different aspects of the Eye. The juxtaposed deities often stand for the procreative and aggressive sides of the Eye's character, as Hathor and Sekhmet sometimes do. Wadjet and Nekhbet can stand for Lower and Upper Egypt, respectively, along with the Red Crown and White Crown that represent the two lands. Similarly, Mut, whose main cult center was in Thebes, sometimes served as an Upper Egyptian counterpart of Sekhmet, who was worshipped in Memphis in Lower Egypt.\n", "In another temple ritual, the pharaoh played a ceremonial game in honor of the Eye goddesses Hathor, Sekhmet, or Tefnut, in which he struck a ball symbolizing the Eye of Apep with a club made from a type of wood that was said to have sprung from the Eye of Ra. 
The ritual represents, in a playful form, the battle of Ra's Eye with its greatest foe.\n", "This enigmatic deity is named for the narrow band that runs along the side of its face through its almond-shaped eye with its round iris. Like many other supernaturals, the Banded-eye God has a cleft head and a downturned mouth. Unlike others, the Banded-eye God is known only from its profile; these renditions are generally concentrated on bowls from the Valley of Mexico, although the Banded-eye God is one of the five supernaturals shown on Las Limas Monument 1 from the Olmec heartland.\n" ]
[ "Egyptians worshipped the eye.", "Egyptians worshipped the eye." ]
[ "While \"The eye of Horus\" was an Egyptian symbol denoting the God of the Sky, Egyptians did not worship the eye itself.", "Egyptians didn't worship the eye; they worshipped \"The Eye of Horus\", and Horus was a God." ]
[ "false presupposition" ]
[ "Egyptians worshipped the eye.", "Egyptians worshipped the eye." ]
[ "false presupposition", "false presupposition" ]
[ "While \"The eye of Horus\" was an Egyptian symbol denoting the God of the Sky, Egyptians did not worship the eye itself.", "Egyptians didn't worship the eye; they worshipped \"The Eye of Horus\", and Horus was a God." ]
2018-00172
Why is the content of iron supplements so much higher than the recommended daily allowance?
Iron supplementation may need to be higher than the general population's requirements if someone needs their iron stores replaced, or if they require more iron due to ongoing medical problems. Additionally, not all of the iron taken is absorbed anyway (unabsorbed iron is what commonly causes dark stools in those taking iron supplements).
[ "Many other substances decrease the rate of non-heme iron absorption. Examples are tannins from foods, such as tea and saw palmetto, phytic acid, and roughage. Vegetarians and especially vegans are at increased risk of iron deficiency due to the combination of limited amounts of iron in the diet in a form that is poorly absorbed alongside compounds that further limit absorption.\n", "The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women the PRI is 13 mg/day for ages 15–17 years, 16 mg/day for women ages 18 and up who are premenopausal and 11 mg/day postmenopausal. For pregnancy and lactation, 16 mg/day. For men the PRI is 11 mg/day for ages 15 and older. For children ages 1 to 14 the PRI increases from 7 to 11 mg/day. The PRIs are higher than the U.S. RDAs, with the exception of pregnancy. The EFSA reviewed the same safety question and did not establish a UL.\n", "\"Iron sucrose\" has an occurrence of allergic reactions of less than 1 in 1000. A common side effect is taste changes, especially a metallic taste, occurring in between 1 in 10 and 1 in 100 treated patients. It has a maximum dose of 200 mg on each occasion according to the SPC, but it has been given in doses of 500 mg. Doses can be given up to 3 times a week.\n", "For U.S. food and dietary supplement labeling purposes the amount in a serving is expressed as a percent of Daily Value (%DV). For iron labeling purposes 100% of the Daily Value was 18 mg, and remained unchanged at 18 mg. A table of all of the old and new adult Daily Values is provided at Reference Daily Intake. The original deadline to be in compliance was July 28, 2018, but on September 29, 2017 the U.S. Food and Drug Administration released a proposed rule that extended the deadline to January 1, 2020 for large companies and January 1, 2021 for small companies.\n", "Elevation in ferritin concentration without elevation in transferrin saturation does not rule out an iron overload disorder. This combination can be observed in loss-of-function ferroportin mutation and in aceruloplasminemia. An elevated ferritin concentration can be observed in an acute or chronic inflammatory process without pathologic iron overload.\n\nA ferritin level above 200 ng/mL (449 pmol/L) in women or 300 ng/mL (674 pmol/L) in men who have no signs of inflammatory disease needs additional testing. Transferrin saturation above the normal range in males and females also needs additional testing.\n", "It is important to use both the imaging techniques and the serum ferritin level as indicators to start the therapy of iron overload. The serum level and the imaging techniques can be used as markers for treatment progress.\n\nSection::::Treatment.\n", "Since then, supplemental iron products, including Geritol, have been contraindicated because of concerns over hemochromatosis, and serious questions have been raised in studies for men, postmenopausal women, and nonanemic patients with liver disease, heart disease, type 2 diabetes, or cancer.\n\nSection::::Media sponsorships.\n", "The National Academy of Medicine updated Estimated Average Requirements and Recommended Dietary Allowances in 2001. The current EAR for iron for women ages 14–18 is 7.9 mg/day, 8.1 for ages 19–50 and 5.0 thereafter (post menopause). For men the EAR is 6.0 mg/day for ages 19 and up. 
The Recommended Dietary Allowance is 15.0 mg/day for women ages 15–18, 18.0 for 19–50 and 8.0 thereafter. For men, 8.0 mg/day for ages 19 and up. (Recommended Dietary Allowances are higher than Estimated Average Requirements so as to identify amounts that will cover people with higher than average requirements.) The Recommended Dietary Allowance for pregnancy is 27 mg/day, and for lactation, 9 mg/day. For children ages 1–3 years it is 7 mg/day, 10 for ages 4–8 and 8 for ages 9–13. The European Food Safety Authority refers to the collective set of information as Dietary Reference Values, with Population Reference Intakes instead of Recommended Dietary Allowances, and Average Requirements instead of Estimated Average Requirements. For women the Population Reference Intake is 13 mg/day for ages 15–17 years, 16 mg/day for women ages 18 and up who are premenopausal and 11 mg/day postmenopausal. For pregnancy and lactation, 16 mg/day. For men the Population Reference Intake is 11 mg/day for ages 15 and older. For children ages 1 to 14 the Population Reference Intake increases from 7 to 11 mg/day. The Population Reference Intakes are higher than the US Recommended Dietary Allowances, with the exception of pregnancy.\n", "Commonly observed side-effects, occurring in 1 to 10% of cases, include stool discoloration, diarrhea, nausea, and dyspepsia. Uncommon side-effects, occurring in 0.1 to 1% of cases, include constipation, vomiting, stomachache, tooth discoloration, itchiness, and headache. As very rare side-effects, occurring in 0.1 to 0.01% of cases, allergic reactions have been observed.\n\nSection::::Dosage and administration.\n\nThe preparation is available in various galenic formulations: syrup, drops, drinkable solution, film-coated tablets, and chewable tablets. The syrup, drops, or drinkable solution are preferable for children. The preparation is dosed according to age. The following general dosage guidelines apply (for iron deficiency with anemia):\n", "BULLET::::- For niacin and magnesium there appears to be a contradiction inherent in the information in the table, as the amounts recommended for daily consumption can be more than the amounts identified as the safe upper limits. For both nutrients, the ULs identify the amounts which will not increase the risk of adverse effects when the nutrients are consumed as a serving of a dietary supplement. Magnesium above the UL may cause diarrhea. Niacin above the UL may cause flushing of the face and a sensation of body warmth. Each country or regional regulatory agency decides on a safety margin below the level at which symptoms may occur, so the ULs can differ.\n", "Dietary minerals in foods are numerous and diverse, with many required for the body to function, while other trace elements can be hazardous if consumed in excessive amounts. Bulk minerals with a Reference Daily Intake (RDI, formerly Recommended Daily Allowance (RDA)) of more than 200 mg/day are calcium, magnesium, and potassium, while important trace minerals (RDI less than 200 mg/day) are copper, iron, and zinc. These are found in many foods, but can also be taken in dietary supplements.\n\nSection::::Colour.\n", "The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for iron in 2001. The current EAR for iron for women ages 14–18 is 7.9 mg/day, 8.1 for ages 19–50 and 5.0 thereafter (post menopause). For men the EAR is 6.0 mg/day for ages 19 and up. The RDA is 15.0 mg/day for women ages 15–18, 18.0 for 19–50 and 8.0 thereafter. 
For men, 8.0 mg/day for ages 19 and up. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. RDA for pregnancy is 27 mg/day and, for lactation, 9 mg/day. For children ages 1–3 years 7 mg/day, 10 for ages 4–8 and 8 for ages 9–13. As for safety, the IOM also sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of iron the UL is set at 45 mg/day. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes.\n", "These examples demonstrate that to properly understand a value for TIBC, one also must know the serum iron, the percent transferrin saturation, and the individual clinical situation. In modern laboratory testings, serum ferritin levels are generally accepted as reliable single indicators of the presence of iron deficiency.\n\nSection::::Usual values.\n\nLaboratories often use different units of measurement and \"normal\" may vary by population and the laboratory techniques used. Look at the individual laboratory reference values to interpret a specific test (for instance, your own). Example reference ranges are:\n\nBULLET::::- Serum iron: Male 65–177 μg/dL (11.6–31.7 μmol/L); Female 50–170 μg/dL (9.0–30.4 μmol/L)\n", "Section::::Medical uses.:Athletes.\n\nAthletes may be at elevated risk of iron deficiency and so benefit from supplementation, but the circumstances vary between individuals and dosage should be based on tested ferritin levels, since in some cases supplementation may be harmful.\n\nSection::::Side effects.\n\nSide effects of therapy with oral iron are most often diarrhea or constipation and epigastric abdominal discomfort. Taken after a meal, side effects decrease, but there is an increased risk of interaction with other substances. Side effects are dose-dependent, and the dose may be adjusted.\n", "Section::::Mechanisms of iron regulation.:Cellular iron regulation.:The storage iron pool.\n\nIron can be stored in ferritin as ferric iron due to the ferroxidase activity of the ferritin heavy chain. Dysfunctional ferritin may accumulate as hemosiderin, which can be problematic in cases of iron overload. The ferritin storage iron pool is much larger than the labile iron pool, ranging in concentration from 0.7 mM to 3.6 mM.\n\nSection::::Mechanisms of iron regulation.:Cellular iron regulation.:Iron export.\n", "After administration, the maximum absorption capacity is reached already after 30 minutes and continuously increasing absorption can be observed over 24 hours. The non-absorbed iron is excreted via the stool.\n\nSection::::Pharmacology.:Effectiveness, safety.\n\nThe effectiveness and safety have been investigated and documented in numerous clinical studies and in various patient populations. This has included children, young people, adults, and the elderly, in addition to pregnant women and breastfeeding mothers.\n\nSection::::Pharmacology.:Effectiveness, safety.:Important clinical studies.\n\nSection::::Pharmacology.:Effectiveness, safety.:Important clinical studies.:Study in pregnant women.\n", "As always, laboratory values have to be interpreted with the lab's reference values in mind and considering all aspects of the individual clinical situation.\n\nSerum ferritin can be elevated in inflammatory conditions; so a normal serum ferritin may not always exclude iron deficiency, and the utility is improved by taking a concurrent C-reactive protein (CRP). The level of serum ferritin that is viewed as \"high\" depends on the condition. 
For example, in inflammatory bowel disease the threshold is 100, whereas in chronic heart failure (CHF) the levels are 200.\n\nSection::::Treatment.\n", "The utilization and absorption by the erythrocytes of the iron administered orally in the form of iron polymaltose complex is correlated with the absorption in the intestines, whereby the relative absorption decreases with increased dosage; the more severe the iron deficiency, the greater the absorption. As with all oral iron preparations, only about 10-15% of the iron is absorbed. A dose of 100 mg of iron is thus necessary in order for 10 mg to be absorbed.\n\nSection::::Pharmacology.:Pharmacokinetics.\n", "It is very difficult to measure or estimate the actual human consumption of these substances. Highly unsaturated omega-3 rich oils such as fish oil are being sold in pill form so that the taste of oxidized or rancid fat is not apparent. The health food industry's dietary supplements are self-regulated and outside of FDA regulations. To properly protect unsaturated fats from oxidation, it is best to keep them cool and in oxygen-free environments.\n\nSection::::Mechanism.\n", "Phosphorus is a deleterious contaminant because it makes steel brittle, even at concentrations of as little as 0.6%. Phosphorus cannot be easily removed by fluxing or smelting, and so iron ores must generally be low in phosphorus to begin with.\n\nSection::::Smelting.:Trace elements.:Aluminium.\n", "Section::::Signs and symptoms.\n\nSymptoms can vary from one person to another. They depend on the extent of the accumulation and on its location in the body. African iron overload can be considered in patients with some of these conditions. \n\nSection::::Mechanism.\n", "The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 15 and older the AI is set at 3.0 mg/day. The AI for pregnancy and lactation is 3.0 mg/day. For children ages 1–14 years the AIs increase with age from 0.5 to 2.0 mg/day. The adult AIs are higher than the U.S. RDAs. The EFSA reviewed the same safety question and decided that there was insufficient information to set a UL.\n", "Mangels et al. write that, because of the lower bioavailability of iron from plant sources, the Food and Nutrition Board of the National Academy of Sciences established a separate RDA for vegetarians and vegans of 14 mg (¼gr) for vegetarian men and postmenopausal women, and 33 mg (½gr) for premenopausal women not using oral contraceptives. Supplements should be used with caution after consulting a physician, because iron can accumulate in the body and cause damage to organs. This is particularly true of anyone with hemochromatosis, a relatively common condition that can remain undiagnosed.\n", "Iron poisoning may result in mortality or short-term and long-term morbidity.\n\nSection::::Side effects.:Infection risk.\n\nBecause one of the functions of elevated ferritin (an acute phase reaction protein) in acute infections is thought to be to sequester iron from bacteria, it is generally thought that iron supplementation (which circumvents this mechanism) should be avoided in patients who have active bacterial infections. 
Replacement of iron stores is seldom such an emergency situation that it cannot wait for any such acute infection to be treated.\n", "In setting human nutrient guidelines, government organizations do not necessarily agree on amounts needed to avoid deficiency or maximum amounts to avoid the risk of toxicity. For example, for vitamin C, recommended intakes range from 40 mg/day in India to 155 mg/day for the European Union. The table below shows U.S. Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for vitamins and minerals, PRIs for the European Union (same concept as RDAs), followed by what three government organizations deem to be the safe upper intake. RDAs are set higher than EARs to cover people with higher than average needs. Adequate Intakes (AIs) are set when there is not sufficient information to establish EARs and RDAs. Governments are slow to revise information of this nature. For the U.S. values, with the exception of calcium and vitamin D, all of the data date to 1997-2004.\n" ]
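The passages above note that only about 10-15% of an oral iron dose is absorbed, and that the labeling Daily Value for iron is 18 mg, which together suggest why supplement doses sit well above the recommended intake. Below is a minimal sketch of that arithmetic, assuming only the 10-15% absorption range quoted above; the 65 mg example dose is purely illustrative and not taken from the passages:

```python
# Illustrative arithmetic only. The 10-15% absorption range and the
# "100 mg taken -> ~10 mg absorbed" example come from the passage above;
# the 65 mg supplement dose is a made-up example value.

def absorbed_iron_mg(oral_dose_mg: float, absorption_fraction: float) -> float:
    """Estimate the elemental iron absorbed from a single oral dose."""
    return oral_dose_mg * absorption_fraction

if __name__ == "__main__":
    # The passage's own example: ~10% absorption of a 100 mg dose.
    print(absorbed_iron_mg(100, 0.10))  # -> 10.0 mg

    # A hypothetical 65 mg dose evaluated at the quoted 10-15% range.
    low = absorbed_iron_mg(65, 0.10)
    high = absorbed_iron_mg(65, 0.15)
    print(f"~{low:.1f}-{high:.1f} mg absorbed from a 65 mg dose")
```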
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]
2018-10659
Why are static and kinetic coefficients of friction different? Is it possible for a kinetic coefficient to be higher, and what about the materials involved makes the difference?
The most common explanation is that objects not in motion begin to form chemical bonds with the surface they are resting on. Thus you need a larger force to start moving something, and the coefficient of static friction is larger than the kinetic one. That's only a first-year physics explanation, however; I'm sure someone with more knowledge will have a better answer.
[ "Usually, the value of both coefficients does not exceed unity and can be considered constant only within certain ranges of forces and velocities, outside of which there are extreme conditions that modify these coefficients.\n\nThe following table shows the values of the static and dynamic friction coefficients for common materials:\n\nSection::::Physics.:Friction.:Rolling Friction.\n", "Once the block has been put into motion, a friction force with a lesser intensity than that of the static friction force formula_2 acts on it: this is the dynamic friction force formula_8. In this case it is necessary to take into account not only the first two laws of Amontons, but also the law of Coulomb, so as to be able to affirm that the relationship between the dynamic friction force formula_8, the coefficient of dynamic friction k and the normal force N is the following:\n\nformula_10\n\nSection::::Physics.:Friction.:Static and dynamic friction coefficient.\n", "New models are beginning to show how kinetic friction can be greater than static friction. Kinetic friction is now understood, in many cases, to be primarily caused by chemical bonding between the surfaces, rather than interlocking asperities; however, in many other cases roughness effects are dominant, for example in rubber to road friction. Surface roughness and contact area affect kinetic friction for micro- and nano-scale objects where surface area forces dominate inertial forces.\n", "where, in this case, K depends on the elastic properties of the materials. Also for elastic bodies the tangential force depends on the coefficient c seen above, and it will be\n\nformula_35\n\nand therefore a fairly exhaustive description of the friction coefficient can be obtained\n\nformula_36\n\nSection::::Physics.:Friction.:Friction measurements.\n", "In many practical engineering applications, the fluid flow is more rapid, and therefore turbulent rather than laminar. Under turbulent flow, the friction loss is found to be roughly proportional to the square of the flow velocity and inversely proportional to the pipe diameter, that is, the friction loss follows the phenomenological Darcy–Weisbach equation in which the \"hydraulic slope\" \"S\" can be expressed\n\nwhere we have introduced the Darcy friction factor \"f\" (but see \"Confusion with the Fanning friction factor\");\n", "BULLET::::- In laminar flow, losses are proportional to fluid velocity, \"V\"; that velocity varies smoothly between the bulk of the fluid and the pipe surface, where it is zero. The roughness of the pipe surface influences neither the fluid flow nor the friction loss.\n", "The two fundamental ‘laws’ of friction were first published (in 1699) by Guillaume Amontons, with whose name they are now usually associated; they state that:\n\nBULLET::::1. the force of friction acting between two sliding surfaces is proportional to the load pressing the surfaces together\n\nBULLET::::2. the force of friction is independent of the apparent area of contact between the two surfaces.\n", "BULLET::::- Objects intruded into the fluid flow.\n\nFor the purposes of calculating the total friction loss of a system, the sources of form friction are sometimes reduced to an equivalent length of pipe.\n\nSection::::Measurements.\n\nBecause of the importance of friction loss in civil engineering and in industry, it has been studied extensively for over a century.\n\nBULLET::::- Cited by Moody, L. F. (1944)\n\nBULLET::::- Cited by Moody, L. F. 
(1944)\n\nBULLET::::- Exhibits Nikuradse data.\n\nBULLET::::- Large amounts of field data on commercial pipes. The Colebrook–White equation was found inadequate over a wide range of flow conditions.\n", "If at this point the two surfaces are sliding relative to each other, a resistance to shear stress \"t\" is observed, given by the presence of adhesive bonds, which were created precisely because of the plastic deformations, and therefore the frictional force will be given by\n\nformula_25\n\nAt this point, since the coefficient of friction is the ratio between the intensity of the frictional force and that of the applied load, it is possible to state that\n\nformula_26\n", "By studying the behavior at the limits, it can be seen that for c = 0, t = 0, and for c = 1 it returns to the condition in which the surfaces are directly in contact and there is no third body present. Keeping in mind what has just been said, it is possible to correct the friction coefficient formula as follows:\n\nformula_33\n\nIn conclusion, the case of elastic bodies in interaction with each other is considered.\n\nSimilarly to what we have just seen, it is possible to define an equation of the type \n\nformula_34\n", "BULLET::::- Shows friction factor in the smooth flow region for 1  Re  10 from two very different measurements.\n\nSection::::Surface roughness.\n\nThe roughness of the surface of the pipe or duct affects the fluid flow in the regime of turbulent flow. Usually denoted by ε, the values used for calculations of water flow, for some representative materials, are:\n\nValues used in calculating friction loss in ducts (for, e.g., air) are:\n\nSection::::Calculating friction loss.\n\nSection::::Calculating friction loss.:Hagen–Poiseuille.\n", "Kinetic friction, also known as dynamic friction or sliding friction, occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as \"μ\", and is usually less than the coefficient of static friction for the same materials. However, Richard Feynman comments that \"with dry metals it is very hard to show any difference.\"\n\nThe friction force between two surfaces after sliding begins is the product of the coefficient of kinetic friction and the normal force: formula_18.\n", "In a design problem, one may select pipe for a particular hydraulic slope \"S\" based on the candidate pipe's diameter \"D\" and its roughness ε. 
\n\nWith these quantities as inputs, the friction factor \"f\" can be expressed in closed form in the Colebrook–White equation or other fitting function, and the flow volume \"Q\" and flow velocity \"V\" can be calculated therefrom.\n", "BULLET::::- Is the response the same everywhere (homogeneity of the material)?\n\nBULLET::::- Do any boundaries or interfaces have to be taken into account?\n\nBULLET::::- Is the response linear with respect to the field, or are there nonlinearities?\n\nThe relationship between the electric field E and the dipole moment M gives rise to the behavior of the dielectric, which, for a given material, can be characterized by the function F defined by the equation:\n", "Section::::Laws of dry friction.\n\nThe elementary properties of sliding (kinetic) friction were discovered by experiment in the 15th to 18th centuries and were expressed as three empirical laws:\n\nBULLET::::- Amontons' First Law: The force of friction is directly proportional to the applied load.\n\nBULLET::::- Amontons' Second Law: The force of friction is independent of the apparent area of contact.\n\nBULLET::::- Coulomb's Law of Friction: Kinetic friction is independent of the sliding velocity.\n\nSection::::Dry friction.\n", "BULLET::::- The point of departure from smooth flow occurs at a Reynolds number roughly inversely proportional to the value of the relative roughness: the higher the relative roughness, the lower the Re of departure. The range of Re and ε / \"D\" between smooth pipe flow and rough pipe flow is labeled \"transitional\". In this region, the measurements of Nikuradse show a decline in the value of \"f\" with Re, before approaching its asymptotic value from below, although Moody chose not to follow those data in his chart, which is based on the Colebrook–White equation.\n", "This friction factor is one-fourth of the Darcy friction factor, so attention must be paid to note which one of these is meant in the \"friction factor\" chart or equation consulted. Of the two, the Fanning friction factor is the more commonly used by chemical engineers and those following the British convention.\n\nThe formulas below may be used to obtain the Fanning friction factor for common applications.\n\nThe Darcy friction factor can also be expressed as\n\nformula_15\n\nwhere:\n\nBULLET::::- formula_3 is the shear stress at the wall\n\nBULLET::::- formula_9 is the density of the fluid\n", "The precise form of the traction bound is the so-called local friction law. For this, Coulomb's (global) friction law is often applied locally: formula_13, with formula_14 the friction coefficient. More detailed formulae are also possible, for instance with formula_14 depending on temperature formula_16, local sliding velocity formula_17, etc.\n\nSection::::Solutions for static cases.\n\nSection::::Solutions for static cases.:Rope on a bollard, the capstan equation.\n", "BULLET::::- In the rough pipe domain, friction loss is dominated by the relative roughness and is insensitive to Reynolds number.\n\nBULLET::::- In the transition domain, friction loss is sensitive to both.\n\nBULLET::::- For Reynolds numbers 2000 < Re < 4000, the flow is unstable, varying with time as vortices within the flow form and vanish randomly. 
This domain of flow is not well modeled, nor are the details well understood.\n\nSection::::Characterizing friction loss.:Form friction.\n\nFactors other than straight pipe flow induce friction loss; these are known as “minor loss”:\n\nBULLET::::- Fittings, such as bends, couplings, valves, or transitions in hose or pipe diameter, or\n", "The force of rolling friction depends, therefore, on the small deformations suffered by the supporting surface and by the wheel itself, and can be expressed as formula_19, where it is possible to express \"b\" in relation to the sliding friction coefficient formula_3 as formula_21, with \"r\" being the wheel radius.\n\nSection::::Physics.:Friction.:The surfaces.\n\nGoing even deeper, it is possible to study not only the most external surface of the metal, but also the immediately more internal states, linked to the history of the metal, its composition and the manufacturing processes undergone by the latter.\n", "BULLET::::- First Law of Amontons – Friction is independent of the apparent area of contact;\n\nBULLET::::- Second Law of Amontons – The frictional force is directly proportional to the normal load;\n\nBULLET::::- Third Law of Coulomb – Dynamic friction is independent of the relative sliding speed.\n\nSection::::Physics.:Friction.:Static friction.\n", "BULLET::::2. Dynamic friction, which occurs between surfaces in relative motion.\n\nThe study of friction phenomena is a predominantly empirical study and does not allow to reach precise results, but only to useful approximative conclusions. This inability to obtain a definite result is due to the extreme complexity of the phenomenon. If it's studied more closely it presents new elements, which, in turn, make the global description even more complex.\n\nSection::::Physics.:Friction.:Laws of friction.\n\nAll the theories and studies on friction can be simplified into three main laws, which are valid in most cases:\n", "BULLET::::1. First of all, there is the deformation of the separate bodies in reaction to loads applied on their surfaces. This is the subject of general continuum mechanics. It depends largely on the geometry of the bodies and on their (constitutive) material behavior (e.g. elastic vs. plastic response, homogeneous vs. layered structure etc.).\n", "The normal force is defined as the net force compressing two parallel surfaces together, and its direction is perpendicular to the surfaces. In the simple case of a mass resting on a horizontal surface, the only component of the normal force is the force due to gravity, where formula_7. In this case, the magnitude of the friction force is the product of the mass of the object, the acceleration due to gravity, and the coefficient of friction. However, the coefficient of friction is not a function of mass or volume; it depends only on the material. For instance, a large aluminum block has the same coefficient of friction as a small aluminum block. However, the magnitude of the friction force itself depends on the normal force, and hence on the mass of the block.\n", "One of the most popular models for describing friction is the Coulomb friction model. This model defines coefficients of static friction formula_68 and dynamic friction formula_69 such that formula_70. These coefficients describe the two types of friction forces in terms of the reaction forces acting on the bodies. More specifically, the static and dynamic friction force magnitudes formula_71 are computed in terms of the reaction force magnitude formula_72 as follows\n" ]
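The Coulomb model described in the last passage (a static limit proportional to the normal force that must be overcome before sliding starts, then a kinetic friction force that is usually smaller once sliding begins) can be illustrated in a few lines. This is a minimal sketch under that model only; the mass and coefficient values below are made-up example numbers, not data from the sources:

```python
# Minimal sketch of the Coulomb friction model described above:
# the block stays put while the applied force is below mu_s * N,
# and once sliding starts the friction force drops to mu_k * N.
# The coefficients and mass used here are illustrative values only.

G = 9.81  # gravitational acceleration, m/s^2

def friction_force(applied_force_n: float, mass_kg: float,
                   mu_static: float, mu_kinetic: float) -> tuple[float, bool]:
    """Return (friction force in N, is_sliding) for a block on a horizontal surface."""
    normal = mass_kg * G                # N = m * g on a horizontal surface
    static_limit = mu_static * normal   # maximum static friction force
    if applied_force_n <= static_limit:
        # Static friction exactly balances the applied force; no motion.
        return applied_force_n, False
    # Sliding: kinetic friction, usually smaller than the static limit.
    return mu_kinetic * normal, True

if __name__ == "__main__":
    for f in (10.0, 30.0, 60.0):
        force, sliding = friction_force(f, mass_kg=10.0,
                                        mu_static=0.5, mu_kinetic=0.4)
        state = "sliding" if sliding else "static"
        print(f"applied {f:5.1f} N -> friction {force:5.1f} N ({state})")
```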
[]
[]
[ "normal" ]
[]
[ "normal", "normal" ]
[]