id (string) | question (string) | comment (string) | passages (sequence) | presuppositions (sequence) | corrections (sequence) | labels (sequence) | raw_presuppositions (sequence) | raw_labels (sequence) | raw_corrections (sequence)
---|---|---|---|---|---|---|---|---|---|
2018-11846 | What is that weird tightness in the back of the jaw when eating tart foods? | I believe it is the muscles around your salivary glands suddenly contracting to force a bunch of saliva into your mouth to dilute the acid in sour food. It's basically a tiny muscle spasm, which is why it hurts. | [
"Za'atar is used as a seasoning for meats and vegetables or sprinkled onto hummus. It is also eaten with labneh (yogurt drained to make a tangy, creamy cheese), and bread and olive oil for breakfast, most commonly in Jordan, Palestine, Israel, Syria, and Lebanon, as well as other places in the Arab world. The Lebanese speciality \"shanklish\", dry-cured balls of labneh, can be rolled in za'atar to form its outer coating.\n",
"The French word \"tarte\" can be translated to mean either pie or tart, as both are mainly the same with the exception of a pie usually covering the filling in pastry, while flans and tarts leave it open.\n",
"Tart\n\nA tart is a baked dish consisting of a filling over a pastry base with an open top not covered with pastry. The pastry is usually shortcrust pastry; the filling may be sweet or savoury, though modern tarts are usually fruit-based, sometimes with custard. Tartlet refers to a miniature tart; an example would be egg tarts. The categories of \"tart\", \"flan\", \"quiche\", and \"pie\" overlap, with no sharp distinctions.\n\nSection::::History.\n",
"Tartaric acid\n\nTartaric acid is a white, crystalline organic acid that occurs naturally in many fruits, most notably in grapes, but also in bananas, tamarinds, and citrus. Its salt, potassium bitartrate, commonly known as cream of tartar, develops naturally in the process of winemaking. It is commonly mixed with sodium bicarbonate and is sold as baking powder used as a leavening agent in food preparation. The acid itself is added to foods as an antioxidant E334 and to impart its distinctive sour taste. \n",
"Remains of a second, unnamed species of \"Azibius\", cf. \"Azibius\" sp., have been discovered in the HGL-50 layer at Gour Lazib. It is known for a few upper and lower teeth. These teeth are three times larger than those of \"A. trerki\". A larger right talus has also been found, and is assumed to belong to this new species.\n\nSection::::Anatomy and physiology.\n",
"Liverpool Tart\n\nThe earliest known mention of a Liverpool Tart is 1897, when it was hand-written into a family cookbook, which was recently included in the village website for Evershot, in Dorset.\n",
"Section::::Description.:Palate.\n",
"Despite his unusual diet, Tarrare was slim and of average height. At the age of 17, he weighed only (). He was described as having unusually soft fair hair and an abnormally wide mouth, in which his teeth were heavily stained and on which the lips were almost invisible. When he had not eaten, his skin would hang so loosely that he could wrap the fold of skin from his abdomen around his waist. When full, his abdomen would distend \"like a huge balloon\". The skin of his cheeks was wrinkled and hung loosely, and when stretched out, he could hold twelve eggs or apples in his mouth. \n",
"There are two stories of how this well known school Dessert came into being. The first, is that it was made by accident, by a School Chef in Leeds, United Kingdom named Peter Turner. Turner, then a young man in the 1950’s, left a batch of custard tarts in the school kitchen oven for too long, baking the custard hard.\n",
"Tarts are thought to have either come from a tradition of layering food, or to be a product of Medieval pie making. Enriched dough (i.e. short crust) is thought to have been first commonly used in 1550, approximately 200 years after pies. In this period, they were viewed as high-cuisine, popular with nobility, in contrast to the view of a commoners pie. While originally savoury, with meat fillings, culinary tastes led to sweet tarts to prevail, filling tarts instead with fruit and custard.Early medieval tarts generally had meat fillings, but later ones were often based on fruit and custard.\n",
"Another species identified as \"wild za'atar\" (Arabic:\"za'atar barri\") is \"Origanum vulgare\", commonly known as European oregano, oregano, pot marjoram, wild marjoram, winter marjoram, or wintersweet. This species is also extremely common in Lebanon, Syria, Jordan, Israel, and the Palestine, and is used by peoples of the region to make one local variety of the spice mixture.\n",
"Much information given about \"Tarchia\" in older work refers to PIN 3142/250 (which was generally referred to \"Saichania\" until it was named as \"T. teresae\" in 2016). In 2001, it was stated that, in \"Tarchia\", wear facets indicative of tooth-to-tooth occlusion are present; this likely does not refer to the holotype specimen, since in the holotype no teeth are preserved.\n\nSection::::Phylogeny.\n",
"An early tart was the Italian crostata, dating to at least the mid-15th century. It has been described as a \"rustic free-form version of an open fruit tart\".\n\nSection::::Description.\n\nTarts are typically free-standing with firm pastry base consisting of dough, itself made of flour, thick filling, and perpendicular sides while pies may have softer pastry, looser filling, and sloped sides, necessitating service from the pie plate.\n\nSection::::Varieties.\n",
"Tarte Tatin\n\nThe tarte Tatin (), named after the hotel serving it as its signature dish, is a pastry in which the fruit (usually apples) is caramelised in butter and sugar before the tart is baked. It originated in France but has spread to other countries over the years.\n\nSection::::History.\n",
"Section::::Modern versions.\n\nModern custard tarts are usually made from shortcrust pastry, eggs, sugar, milk or cream, and vanilla, sprinkled with nutmeg and baked. Unlike egg tarts, custard tarts are normally served at room temperature. They are available either as individual tarts, generally around across, or as larger tarts intended to be divided into slices.\n\nSection::::Modern versions.:Britain and Commonwealth.\n",
"The seed pods contain a sweet and sour pulp which is eaten raw in Mexico and India as an accompaniment to various meat dishes and used as a base for drinks with sugar and water ('agua de guamúchil'). \n",
"Viken Sassouni developed Sassouni analysis which indicates that patient's with long face syndrome have 4 of their bony planes (mandibular plane, occlusal plane, palatal plane, SN plane) steep to each other.\n\nSection::::Types.:Dental open bite.\n",
"Custard tart\n\nCustard tarts or flans pâtissier are a pastry consisting of an outer pastry crust filled with egg custard and baked.\n\nSection::::History.\n\nThe development of custard is so intimately connected with the custard tart or pie that the word itself comes from the old French \"croustade\", meaning a kind of pie. Some other names for varieties of custard tarts in the Middle Ages were \"doucettes\" and \"darioles\". In 1399, the coronation banquet prepared for Henry IV included \"doucettys\".\n",
"An open bite is a condition characterised by a complete lack of overlap and occlusion between the upper and lower incisors. In children, open bite can be caused by prolonged thumb sucking. Patients often present with impaired speech and mastication.\n\nSection::::Classification.:Angle's classification method.\n",
"The French title of the film refers to a \"grain of couscous\" and to mullet, a type of small fish, both popular in Tunisian cuisine. The two ingredients constitute both the staple of his extended family's diet and the menu on which he plans to establish his restaurant.\n\nSection::::Plot.\n",
"Section::::Characteristics.\n",
"In Iran, the fruits of \"Crataegus\" (including \"Crataegus azarolus\" var. \"aronia\", as well as other species) are known as \"zâlzâlak\" and eaten raw as a snack, or made into a jam known by the same name.\n\nSection::::Uses.:Research.\n",
"Recently the isolated has been used online as an emoticon, because it resembles a smiling face.\n\nSection::::Arabic tāʼ.:Tāʼ marbūṭah.\n\nAn alternative form called ' (, \"bound '\") is used at the end of words to mark feminine gender for nouns and adjectives. In Modern Standard Arabic, it denotes or . Regular ', to distinguish it from ', is referred to as ' (, \"open '\").\n",
"Qutab\n\nQutab is an Azerbaijani dish made from thinly rolled dough that is cooked briefly on a convex griddle known as saj.\n\nSection::::Composition.\n",
"In 2006 a tooth from France found in Charente, specimen CHEm03.537, was referred to a \"Nuthetes\" sp.\n\nSection::::Classification.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-11515 | Why do recent scratches, cuts, scrapes, etc. sting/burn when in warm water? | Because the subcutaneous (under-the-skin) nerves are more exposed and are more sensitive now that their protective layer is gone. | [
"The hunting reaction is one out of four possible responses to immersion of the finger in cold water. The other responses observed in the fingers after immersion in cold water are a continuous state of vasoconstriction, slow steady and continuous rewarming and a proportional control form in which the blood vessel diameter remains constant after an initial phase of vasoconstriction. However, the vast majority of the vascular responses to immersion of the finger in cold water can be classified as the hunting reaction.\n",
"Stings are most common in the hours before and after low tide (especially at springs), so one possible precaution is to avoid bathing or paddling at these times. Weever stings have been known to penetrate wet suit boots even through a rubber sole (if thin), and bathers and surfers should wear sandals, \"jelly shoes\", or wetsuit boots with relatively hard soles, and avoid sitting or \"rolling\" in the shallows. Stings also increase in frequency during the summer (to a maximum in August), but this is probably the result of the greater number of bathers.\n",
"In those hospitalized from scalds or fire burns, 310% are from assault. Reasons include: child abuse, personal disputes, spousal abuse, elder abuse, and business disputes. An immersion injury or immersion scald may indicate child abuse. It is created when an extremity, or sometimes the buttocks are held under the surface of hot water. It typically produces a sharp upper border and is often symmetrical, known as \"sock burns\", \"glove burns\", or \"zebra stripes\" - where folds have prevented certain areas from burning. Deliberate cigarette burns are preferentially found on the face, or the back of the hands and feet. Other high-risk signs of potential abuse include: circumferential burns, the absence of splash marks, a burn of uniform depth, and association with other signs of neglect or abuse.\n",
"Section::::Research findings.:Peripheral nervous system.\n\nSection::::Research findings.:Peripheral nervous system.:Receptors.\n\nCrayfish (\"Procambarus clarkii\") respond quickly and strongly to high temperatures, however, they show no response to low temperature stimuli, or, when stimulated with capsaicin or isothiocyanate (both are irritants to mammals). Noxious high temperatures are considered to be a potentially ecologically relevant noxious stimulus for crayfish that can be detected by sensory neurons, which may be specialized nociceptors.\n",
"BULLET::::- Countershading: body colouration which is dark above and lighter below\n\nBULLET::::- Crenulate: having the edge slightly scalloped\n\nBULLET::::- Cutaneous: pertaining to the skin\n\nBULLET::::- Ctenoid scale: rough-edged scale\n\nBULLET::::- Cycloid scale: smooth-edged scale\n\nSection::::D.\n\nBULLET::::- Deciduous: temporary, falling off\n\nBULLET::::- Demersal: living on or near the sea bed\n\nBULLET::::- Dendritic: resembling a tree or shrub\n\nBULLET::::- Denature: the \"unfolding\" of a protein resulting in a lessening of its biological properties. In the case of some fish toxins, denaturing with hot water can lessen painful symptoms.\n\nBULLET::::- Dentate: with tooth-like projections\n",
"BULLET::::- Placing crustaceans in slowly heated water to the boiling point\n\nBULLET::::- Placing crustaceans directly into boiling water\n\nBULLET::::- Placing marine crustaceans in fresh water\n\nBULLET::::- Unfocused microwaving of the body as opposed to focal application to the head\n",
"BULLET::::- Itchy, burning skin. Irritant contact dermatitis tends to be more painful than itchy, while allergic contact dermatitis often itches.\n\nWhile either form of contact dermatitis can affect any part of the body, irritant contact dermatitis often affects the hands, which have been exposed by resting in or dipping into a container (sink, pail, tub, swimming pools with high chlorine) containing the irritant.\n\nSection::::Causes.\n",
"A fish kill can occur with rapid fluctuations in temperature or sustained high temperatures. Generally, cooler water has the potential to hold more oxygen, so a period of sustained high temperatures can lead to decreased dissolved oxygen in a body of water. An August, 2010, fish kill in Delaware Bay was attributed to low oxygen as a result of high temperatures.\n",
"Keratolysis exfoliativa normally appears during warm weather. Due to excessive sweating and friction, in for example athletic shoes, the skin can start to exfoliate. Other factors that can cause exfoliation are detergents and solvents.\n\nAnother very common cause has been reported from salt water fishermen, who often suffer from these symptoms. It is not sure whether it is from the salt water or whether it is from some bacteria from fish.\n\nSection::::Treatment.\n",
"Thermal testing is a common and traditional way used to detect pulp necrosis. These tests can exist in the form of a cold or hot test, which aims to stimulate nerves in the pulp by the flow of dentine liquid at changes in temperature. The liquid flow leads to movement of the odontoblast processes and mechanical stimulation of pulpal nerves.\n",
"Humans usually become infected after swimming in lakes or other bodies of slow-moving fresh water. Some laboratory evidence indicates snails shed cercariae most intensely in the morning and on sunny days, and exposure to water in these conditions may therefore increase risk. Duration of swimming is positively correlated with increased risk of infection in Europe and North America, and shallow inshore waters may harbour higher densities of cercariae than open waters offshore. Onshore winds are thought to cause cercariae to accumulate along shorelines. Studies of infested lakes and outbreaks in Europe and North America have found cases where infection risk appears to be evenly distributed around the margins of water bodies as well as instances where risk increases in endemic swimmer's itch \"hotspots\". Children may become infected more frequently and more intensely than adults but this probably reflects their tendency to swim for longer periods inshore, where cercariae also concentrate. Stimuli for cercarial penetration into host skin include unsaturated fatty acids, such as linoleic and linolenic acids. These substances occur naturally in human skin and are found in sun lotions and creams based on plant oils.\n",
"BULLET::::- Many inorganic gases (although NO, HS and SF can be monitored using TD)\n\nBULLET::::- Methane\n\nBULLET::::- Compounds that are thermally unstable\n\nBULLET::::- Compounds heavier than n-CH, didecyl phthalate or 6-ring polycyclic aromatic hydrocarbons boiling above 525 °C.\n\nSection::::Applications.\n\nApplications of thermal desorption were originally restricted to occupational health monitoring, but have since extended to cover a much wider range. Some of the most important are mentioned below – where available, examples of early reports, and more recent citations (including those of widely used standard methods) have been given:\n\nBULLET::::- Outdoor environmental monitoring\n\nBULLET::::- Workplace/occupational health monitoring\n",
"(Illustrated by a still lake, where the surface water can be comfortably warm for swimming but deeper layers be so cold as to represent a danger to swimmers, the same effect as gives rise to notices in London's city docks warning 'Danger Cold Deep Water).\n",
"BULLET::::- Repeated application of hot water bottles, heating blankets or heat pads to treat chronic pain—e.g., chronic backache.\n\nBULLET::::- Repeated exposure to heated car seats, space heaters, or fireplaces. Repeated or prolonged exposure to a heater is a common cause of this condition in elderly individuals.\n\nBULLET::::- Occupational hazards of silversmiths and jewellers (face exposed to heat), bakers and chefs (arms)\n",
"Most human stings are inflicted by the lesser weever, which habitually remains buried in sandy areas of shallow water and is thus more likely to come into contact with bathers than other species (such as the greater weever, which prefers deeper water); stings from other species are generally limited to anglers and commercial fishermen. Even very shallow water (sometimes little more than damp sand) may harbour lesser weevers. The vast majority of injuries occur to the foot and are the result of stepping on buried fish; other common sites of injury are the hands and buttocks. \n",
"BULLET::::- Chilblains: condition caused by repeated exposure of skin to temperatures just above freezing. The cold causes damage to small blood vessels in the skin. This damage is permanent and the redness and itching will return with additional exposure. The redness and itching typically occurs on cheeks, ears, fingers, and toes.\n\nBULLET::::- Frostbite: the freezing and destruction of tissue\n\nBULLET::::- Frostnip: a superficial cooling of tissues without cellular destruction\n\nBULLET::::- Trench foot or immersion foot: a condition caused by repetitive exposure to water at non-freezing temperatures\n",
"The pulp can respond (reversible pulpitis, irreversible pulpitis, partial necrosis, total necrosis) in a variety of ways to irritants. This response depends on the severity and duration of the irritant involved. If the irritant is severe or persists for a sustained amount of time it can cause the odontoblasts to die and cause initiation of an inflammatory response.\n\nSection::::Histopathology.:Odontoblasts.\n",
"The common brown shrimp \"Crangon crangon\" and the prawns \"Palaemon serratue\" and \"Palaemon elegana\" all exhibit a nociceptive sensitivity to both hot and cold temperatures. Both thermal sensitivity levels and nociceptive thresholds change with changes in acclimation temperature.\n\nSection::::Research findings.:Peripheral nervous system.:Nerve fibres.\n\nCrayfish have peripheral nerve fibres which are responsive to noxious stimuli.\n",
"Monfreid also wrote in several places about men of his crew suffering stingray wounds while standing and wading into Red Sea shallows to load or unload smuggled wares: he wrote that to \"save the man's life\", searing the wound with a red-hot iron was necessary.\n\nSection::::Fossils.\n",
"Scalds are generally more common in children, especially from the accidental spilling of hot liquids.\n\nSection::::Treatment.\n\nApplying first aid for scalds is the same as for burns. First, the site of the injury should be removed from the source of heat, to prevent further scalding. If the burn is at least second degree, remove any jewelry or clothing from the site, unless it is already stuck to the skin. Cool the scald for about 20 minutes with cool or lukewarm (not cold) water, such as water from a tap.\n",
"Sun scald\n\nSun scald is the freezing of bark following high temperatures in the winter season, resulting in permanent visible damage to bark. Fruits may also be damaged. In the northern hemisphere, it is also called southwest injury.\n\nSection::::Causes.\n",
"Warm and cold sensitive nerve fibers differ in structure and function. The cold-sensitive and warm-sensitive nerve fibers are underneath the skin surface. Terminals of each temperature-sensitive fiber do not branch away to different organs in the body. They form a small sensitive point which are unique from neighboring fibers. Skin used by the single receptor ending of a temperature-sensitive nerve fiber is small. There are 20 cold points per square centimeter in the lips, 4 in the finger, and less than 1 cold point per square centimeter in trunk areas. There are 5 times as many cold sensitive points as warm sensitive points.\n",
"The rash is caused by a type of cell-mediated hypersensitivity reaction; this type of hypersensitivity normally occurs in people who become sensitized to volatile organic compounds. Although in some instances several years may be required to develop sensitivity, this time period may vary greatly depending on the individual. In Dogger Bank itch, sensitivity is acquired after repeated handling of the sea chervils that become entangled in fishing nets.\n",
"The following are some examples of unsafe practices which could lead to electric injury (cannot cover every possible scenario):\n\nBULLET::::- Using electrical appliances while wet (showering, bathing, etc.) as plumbing is often connected to electrical ground, and wet skin loses much of its resistance. Exception for newer quality appliances \"intended for the bathroom\" when \"not\" simultaneously showering, bathing, being in a path of water going to plumbing, or touching bare concrete or sheet metal. Standing on a dry carpet or rug is ideal.\n",
"Section::::Dermatological manifestations.:Contact dermatitis.\n\nContact dermatitis occurs when the skin comes into contact with chemicals that cause a reaction, often redness, swelling or itchiness. Flood water often contains chemicals from industries or households that can cause such a reaction, these include pesticides, bleach and detergents.\n\nSection::::Dermatological manifestations.:Traumatic injuries.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-00295 | Why does corn pop when cooked one way, and go soft when cooked another? | Popcorn kernels are dried, so the shell is hard and the inside is 14-20% water. You heat them without water, so the temperature rises above the boiling point of water. The water inside turns to steam, the pressure builds up, and the shell ruptures. The water leaves them as steam, so they end up dry and hard. Boiling corn in water is usually done with fresh corn that has a lot of water inside. The temperature will never rise above the boiling point of water, so the kernels will not explode. They can absorb water from their surroundings and stay soft. If you put popcorn kernels in water and boil them, they will not pop. If you let them soak in water before cooking, you would likely get something more like fresh corn. | [
"Section::::Description.\n",
"The most common methods for cooking corn on the cob are frying, boiling, roasting, and grilling. Corn on the cob can be grilled directly in its husk, or it can be husked first and then wrapped in aluminum foil. When oven roasting, cooking the corn in the husk directly on the rack is recommended. When roasting or grilling corn on the cob, the cook can first peel the husk back to rub the corn with oil or melted butter, then re-secure the husk around the corn with a string. Corn on the cob can also be microwaved for 3 to 4 minutes still in its husk.\n",
"Section::::Behaviour.\n",
"Corn on the cob is normally eaten while still warm. It is often seasoned with salt and buttered before serving. Some diners use specialized skewers, thrust into the ends of the cob, to hold the ear while eating without touching the hot and sticky kernels.\n\nWithin a day of corn being picked it starts converting sugar into starch, which results in reduction in the level of natural sweetness. Corn should be cooked and served the same day it has been harvested, as it takes only a single day for corn to lose up to 25% of its sweetness.\n\nSection::::Preparation.\n",
"Section::::Description.:Voice.\n",
"In the first step of nixtamalization, kernels of dried maize are cooked in an alkaline solution at or near the mixture's boiling point. After cooking, the maize is steeped in the cooking liquid for a period. The length of time for which the maize is boiled and soaked varies according to local traditions and the type of food being prepared, with cooking times ranging from a few minutes to an hour, and soaking times from a few minutes to about a day.\n",
"Section::::Behaviour.:Breeding.\n",
"Popping results are sensitive to the rate at which the kernels are heated. If heated too quickly, the steam in the outer layers of the kernel can reach high pressures and rupture the hull before the starch in the center of the kernel can fully gelatinize, leading to partially popped kernels with hard centers. Heating too slowly leads to entirely unpopped kernels: the tip of the kernel, where it attached to the cob, is not entirely moisture-proof, and when heated slowly, the steam can leak out of the tip fast enough to keep the pressure from rising sufficiently to break the hull and cause the pop.\n",
"Chillcuring\n\nChillcuring is a grain ventilating process, especially of fresh-harvested shelled corn.\n\nSection::::Process.\n",
"The hard part at the center of the corn resembles a funnel with a broad raised top and a pointed bottom. Because of their shape, corns intensify the pressure at the tip and can cause deep tissue damage and ulceration. The scientific name for a corn is \"heloma\" (plural \"helomata\"). A hard corn is called a \"heloma durum\", while a soft corn is called a \"heloma molle\".\n",
"Section::::Predators and parasites.\n",
"As the oil and the water within the kernel are heated, they turn the moisture in the kernel into pressurized steam. Under these conditions, the starch inside the kernel gelatinizes, softens, and becomes pliable. The internal pressure of the entrapped steam continues to increase until the breaking point of the hull is reached: a pressure of approximately and a temperature of . The hull thereupon ruptures rapidly and explodes, causing a sudden drop in pressure inside the kernel and a corresponding rapid expansion of the steam, which expands the starch and proteins of the endosperm into airy foam. As the foam rapidly cools, the starch and protein polymers set into the familiar crispy puff. Special varieties are grown to give improved popping yield. Though the kernels of some wild types will pop, the cultivated strain is \"Zea mays everta,\" which is a special kind of flint corn.\n",
"Corn nut\n\nCorn nuts, also known as toasted corn, quico , or Cracker are a snack food made of roasted or deep-fried corn kernels. In parts of South America, including Peru and Ecuador, it is referred to as cancha.\n\nSection::::Preparation.\n\nCorn nuts are prepared by soaking whole corn kernels in water for three days, then deep-frying them in oil until they are hard and brittle.\n\nThe kernels are soaked because they shrink during the harvesting and cleaning process, and rehydration returns them to their original size.\n\nSection::::History.\n",
"Section::::Behaviour.:Feeding.\n",
"The \"eloteros\" also sell coal-grilled \"elotes (elotes asados)\". These \"elotes\" are splashed with salt water and grilled in the coals until the husks start to burn and the kernels reach a crunchy texture. In mesoamerica,it is custom to grill \"elote\" during the first harvest of the year --the end of June until the beginning of September. During this time women can be seen on the sides of the highway next to the cornfields selling grilled \"elote\" seasoned with lime juice and salt.\n\nSection::::See also.\n\nBULLET::::- Corn dog\n\nBULLET::::- Corn roaster\n\nBULLET::::- List of maize dishes\n\nBULLET::::- Maize\n\nBULLET::::- Sweet corn\n",
"A chamber is used at this section in order to mix the corn and water and let them temper for 10 to 30 minutes. For more efficient separation, differential moisture content between germ and endosperm is desired. Tempering of kernel leads to moisture uptake. Because of the differential swelling of germ and endosperm, the germ becomes more flexible and resilient during tempering while there is no movement of material out of kernel.\n\nSection::::Process overview.:Degermination.\n",
"Section::::In cooking.\n\nThere are a wide variety of different recipes for dishes involving corn flakes and crushed corn flakes can even be a substitute for bread crumbs.\n",
"Section::::Process steps.\n\nSection::::Process steps.:Cleaning.\n",
"Section::::Process steps.:Gluten recovery.\n",
"Section::::Process steps.:Steeping.\n",
"Popcorn will pop when freshly harvested, but not well; its high moisture content leads to poor expansion and chewy pieces of popcorn. Kernels with a high moisture content are also susceptible to mold when stored. For these reasons, popcorn growers and distributors dry the kernels until they reach the moisture level at which they expand the most. This differs by variety and conditions, but is generally in the range of 14–15% moisture by weight. If the kernels are over-dried, the expansion rate will suffer and the percentage of kernels that pop will decline.\n",
"Corn construction\n\nCorn construction refers to the use of corn (maize) in construction.\n\nThe tassel, leaf, silk, cob in husks, and the stalk are the parts of corn.\n",
"Corn wet-milling\n\nThe corn wet-milling is a process of breaking corn kernels into their component parts: corn oil, protein, corn starch, and fiber. It uses water and a series of steps to separate the parts to be used for various products.\n\nSection::::History.\n",
"The hard part at the center of the corn resembles a barley seed, that is like a funnel with a broad raised top and a pointed bottom. Because of their shape, corns intensify the pressure at the tip and can cause deep tissue damage and ulceration. Hard corns are especially problematic for people with insensitive skin due to damaged nerves (e.g., in people with diabetes mellitus). The scientific name for a corn is \"heloma\" (plural \"helomata\"). A hard corn is called a \"heloma durum\", while a soft corn is called a \"heloma molle\".\n",
"BULLET::::- One of the main things he noted was the composition of the endosperm of the maize kernels. He wrote: “The texture of the endosperm is one of the unique features of this maize. Cut in any direction it separates with a sort of cleavage, exposing a dull, smooth surface. The texture suggests that of the hardest waxes, though it is still harder and more crystalline. From this optical resemblance to wax the term cereous or waxy endosperm is suggested.” The moisture content of the kernel must be 16% or lower before the waxy trait can be recognised visually.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-01984 | Why does iTunes check for updates IMMEDIATELY AFTER installing updates? | When you're releasing a new version, you only test a certain number of versions back for upgrade paths. It takes resources to test more and more versions. As a result, you start dropping single-step upgrade support for older versions and rely on the update system to help the user get there over one or two upgrades. | [
"In 2012, Skype introduced automatic updates to better protect users from security risks but received some challenge from users of the Mac product, as the updates cannot be disabled from version 5.6 on, both on Mac OS and Windows versions, although in the latter, and only from version 5.9 on, automatic updating can be turned off in certain cases.\n",
"When iTunes restores or updates an iOS firmware, Apple has added many checkpoints before the iOS version is installed and on-device consolidation begins. At the first \"Verifying iPhone software\" iTunes communicates with \"gs.apple.com\" to verify that the IPSW file provided is still being signed. The TATSU server will give back a list of versions being signed. If the version is not being signed, then iBEC and iBoot will decline the image, giving an error of \"error 3194\" or \"declined to authorize the image\"\n",
"\"The Telegraph\" reported in November 2011 that Apple had been aware of a security vulnerability since 2008 that would let unauthorized third parties install \"updates\" to users' iTunes software. Apple fixed the issue prior to the \"Telegraph\"s report, and told the media that \"The security and privacy of our users is extremely important\", though this was questioned by security researcher Brian Krebs, who told the publication that \"A prominent security researcher warned Apple about this dangerous vulnerability in mid-2008, yet the company waited more than 1,200 days to fix the flaw\".\n\nSection::::Criticism.:Software bloat.\n",
"Major versions of iOS are released annually. The current version, iOS 12, was released on September 17, 2018. It is available for all iOS devices with 64-bit processors; the iPhone 5S and later iPhone models, the iPad (2017), the iPad Air and later iPad Air models, all iPad Pro models, the iPad Mini 2 and later iPad Mini models, and the sixth-generation iPod Touch. On all recent iOS devices, iOS regularly checks on the availability of an update, and if one is available, will prompt the user to permit its automatic installation.\n",
"iTunes will communicate with iBoot throughout the process of an update or restore ensuring the firmware has not been modified to a Custom Firmware (\"CFW\"). iTunes will not update or restore a device when it suspects the file has been modified.\n\nThis is a chain process, before the firmware has been installed iBoot has to verify iBoot, iBoot has to verify the bootloader, and so on. You cannot install unsigned \n\niOS versions, unless 1) you have SHSH2 blobs and exploits have been released or 2) you exploit the chain process.\n\nSection::::Exploits and countermeasures.\n",
"Soft updates allow only asynchronous metadata writes that do not render the on-disk file system inconsistent, or that the only inconsistency that ever happens is a storage space leak (space marked allocated when not used by any file). It avoids having to do ordered synchronous metadata writes by temporarily \"rolling back\" any part of a metadata block that depends on another potentially non-flushed or partially rolled-back block when writing it.\n",
"A number of tools have been created by independent software vendors which provide the ability for Windows Updates to be automatically downloaded for, or added to, an online or offline system. One common use for offline updates is to ensure a system is fully patched against security vulnerabilities before being connected to the Internet or another network. A second use is that downloads can be very large, but may be dependent on a slow or unreliable network connection, or the same updates may be needed for more than one machine. AutoPatcher, WSUS Offline Update, PortableUpdate, and Windows Updates Downloader are examples such tools.\n",
"BULLET::::- Built-in update: Mechanisms for installing updates are built into some software systems (or, in the case of some operating systems such as Linux, Android and iOS, into the operating system itself). Automation of these update processes ranges from fully automatic to user initiated and controlled. Norton Internet Security is an example of a system with a semi-automatic method for retrieving and installing updates to both the antivirus definitions and other components of the system. Other software products provide query mechanisms for determining when updates are available.\n",
"Apple File System is designed to avoid metadata corruption caused by system crashes. Instead of overwriting existing metadata records in place, it writes entirely new records, points to the new ones and then releases the old ones. This avoids corrupted records containing partial old and partial new data caused by a crash that occurs during an update. It also avoids having to write the change twice, as happens with an HFS+ journaled file system, where changes are written first to the journal and then to the catalog file.\n\nSection::::Design.:Space sharing.\n",
"Unlike its predecessor, Automatic Updates can download and install updates. Instead of the five-minute schedule used by its predecessor, Automatic Updates checks the Windows Update servers once a day. After Windows ME is installed, a notification balloon prompts the user to configure the Automatic Updates client. The user can choose from three notification schemes: Being notified before downloading the update, being notified before installing the update, or both.\n",
"BULLET::::- The ability to delay \"feature updates\" for up to 365 days.\n\nThese features were added in Windows 10 version 1511. They are intended for large organizations with lots of computers, so that they can logically group their computers for gradual deployment. Microsoft recommends a small set of pilot computers to receive the updates almost immediately, while the set of most critical computers to receive them after every other group has done so, and has experienced their effects.\n",
"BULLET::::- The actions that the file performs on your system\n\nBULLET::::- The level at which the file uses the resources of your computer\n\nBULLET::::- The performance impact that it has\n\nBULLET::::- The stability of the file for the specific operating system\n\nBULLET::::- The version of the file\n\nBULLET::::- Who developed the file?\n\nSection::::Issues.\n\nUpon release the Download Insight program would erroneously flags a downloaded file as having no Digital Signature and no version number and therefore a potential threat.\n\nSection::::Reception.\n",
"Isolation from networks makes automatic updating impossible, because the sheep dip computer is not able to make contact with the servers from which software updates and antivirus signatures are distributed. It is therefore normal for updates to be applied manually, after they have been downloaded by a separate network-connected computer and copied to a USB flash drive.\n",
"Data that is unlinked from the metadata dependency graph before writing it to disk has begun does not need to be written to disk at all. For example, creating a file, using it for a short period of time, and then deleting it may cause no disk activity at all.\n\nSoft updates require periodic flushing of the metadata to nonvolatile storage.\n\nSection::::Implementations.\n",
"Windows 10 Home is permanently set to download all updates automatically, including cumulative updates, security patches, and drivers, and users cannot individually select updates to install or not. Microsoft offers a diagnostic tool that can be used to hide updates and prevent them from being reinstalled, but only after they had been already installed, then uninstalled without rebooting the system. Tom Warren of \"The Verge\" felt that, given web browsers such as Google Chrome had already adopted such an automatic update system, such a requirement would help to keep all Windows10 devices secure, and felt that \"if you're used to family members calling you for technical support because they've failed to upgrade to the latest Windows service pack or some malware disabled Windows Update then those days will hopefully be over.\"\n",
"The latest iteration of the site was launched in August 2007, and at the time, only worked in the web browser Internet Explorer, version 6 and version 7. Before using the catalog, the user must install an ActiveX control so that they can search the updates available on the website. Searches can be saved as an RSS feed so that it can be monitored for new updates. On the Microsoft Update Catalog, downloads are accelerated with Microsoft's Background Intelligent Transfer Service, which downloads updates from the website asynchronously while attempting to use as little bandwidth as possible.\n",
"Windows Update Agent on Windows 10 supports peer to peer distribution of updates; by default, systems' bandwidth is used to distribute previously downloaded updates to other users, in combination with Microsoft servers. Users may optionally change Windows Update to only perform peer to peer updates within their local area network.\n",
"In Mac OS 9 and earlier versions of Mac OS X, Software Update was a standalone tool. The program was part of the CoreServices in OS X. It could automatically inform users of new updates (with new features and bug and security fixes) to the operating system, applications, device drivers, and firmware. All updates required the user to enter their administrative password and some required a system restart. It could be set to check for updates daily, weekly, monthly, or not at all; in addition, it could download and store the associated .pkg file (the same type used by Installer) to be installed at a later date, and it maintained a history of installed updates. Starting with Mac OS X 10.5 Leopard, updates that required a reboot logged out the user prior to installation and automatically restarted the computer when complete. In earlier versions of OS X, the updates were installed, but critical files were not replaced until the next system startup.\n",
"Starting with Windows 98, Microsoft included Windows Update that once installed and executed, would check for patches to Windows and its components, which Microsoft would release intermittently. With the release of Microsoft Update, this system also checks for updates for other Microsoft products, such as Microsoft Office, Visual Studio and SQL Server.\n\nEarlier versions of Windows Update suffered from two problems:\n\nBULLET::::1. Less-experienced users often remained unaware of Windows Update and did not install it. Microsoft countered this issue, in Windows ME with the Automatic Updates component, which displayed availability of updates, with the option of automatic installation.\n",
"After Leopard’s release, there were widely reported incidents of new Leopard installs hanging during boot on the blue screen that appears just before the login process starts. Apple attributed these problems to an outdated version of an unsupported add-on extension called Application Enhancer (APE), from Unsanity which had been incompatible with Leopard. Some users were unaware that APE had been silently installed during installation of Logitech mouse drivers. However, only the users who did not have the latest version of APE installed (2.0.3 at that time) were affected. Apple published a knowledge base article on how to solve this problem.\n",
"By default, this check occurs every five minutes, plus when Internet Explorer starts; however, the user could configure the next check to occur only at certain times of the day or on certain days of the week. The tool queries the Microsoft server for a file called \"codice_1\", which contained a list of all the critical updates released for the operating system. The tool then compares this list with the list of installed updates on its machine and displays an update availability notification. Once the check is executed, any custom schedule defined by the user is reverted to the default. Microsoft stated that this ensures that users received notification of critical updates in a timely manner.\n",
"The initial public release of iOS 10 on September 13, 2016 saw many iPhones and iPads temporarily disabled, or \"bricked\", by the over-the-air update, requiring bricked devices to be connected to a Mac or PC with iTunes in order to retry the update or restore the device to factory settings. Apple quickly released iOS 10.0.1, and issued a statement: \"We experienced a brief issue with the software update process, affecting a small number of users during the first hour of availability. The problem was quickly resolved and we apologize to those customers.\"\n\nSection::::Problems.:Local backup encryption issue.\n",
"A security flaw in Apple's iTunes allowed unauthorized third parties to use iTunes online update procedures to install unauthorized programs. Gamma International offered presentations to government security officials at security software trade shows where they described how to covertly install the FinFisher spy software on suspects' computers using iTunes' update procedures.\n",
"If the business application error occurred due to a work flow issue or human errors during data input, then the business users are notified. Business users then review their work flow and revise it if necessary. They also modify the user guide or user instructions to avoid such an error in the future.\n\nSection::::Application support.:Infrastructure issue correction.\n\nIf the business application error occurred due to infrastructure issues, then the specific infrastructure team is notified. The infrastructure team then implements permanent fixes for the issue and monitors the infrastructure to avoid the re-occurrence of the same error.\n",
"Windows Update Agent can be managed through a Control Panel applet, as well as Group Policy, Microsoft Intune and Windows PowerShell. It can also be set to automatically download and install both \"important\" and \"recommended\" updates. In prior versions of Windows, such updates were only available through the Windows Update web site. Additionally, Windows Update in Windows Vista supports downloading Windows Ultimate Extras, optional software for Windows Vista Ultimate Edition.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-04631 | Why is ADD / ADHD treated with stimulants instead of depressants? | You're right, it's counterintuitive. To understand it, you have to think of it not as hyperactivity but as the inability to focus or concentrate on a task, a bit like being fatigued. That's the attention deficit part of ADHD. The stimulants act to enable the person to concentrate on a task and not be distracted; this rebalances the attention deficit. | [
"The National Institute of Mental Health recommends stimulants for the treatment of ADHD, and states that, \"under medical supervision, stimulant medications are considered safe\". A 2007 drug class review found no evidence of any differences in efficacy or side effects in the stimulants commonly prescribed.\n",
"Stimulant medications are considered safe when used under medical supervision. Nonetheless, there are concerns that the long term safety of these drugs has not been adequately documented, as well as social and ethical issues regarding their use and dispensation. The U.S. FDA has added black-box warnings to some ADHD medications, warning that abuse can lead to psychotic episodes, psychological dependence, and that severe depression may occur during withdrawal from abusive use.\n",
"Psychostimulants, such as cocaine, amphetamines, methylphenidate, caffeine, and nicotine, produce improvements in physical and mental functioning, including increased energy and alertness. Stimulants tend to be most widely used by people suffering from ADHD, which can either be already diagnosed or yet undiagnosed within those individuals; it's because a significant amount of people having ADHD isn't getting diagnosed because most of them is simply unaware of the fact that they are suffering from this condition, so they are more likely than others prone to using stimulants like caffeine, nicotine or pseudoephedrine to mitigate their symptoms; it's worth noting that the unawareness concerning effects of use of illicit substances like cocaine, methamphetamine or mephedrone results in self-medication with these drugs by individuals affected with ADHD symptoms, that can effectively prevent them from getting diagnosed with ADHD and getting the right treatment for this disorder with stimulants like methylphenidate and amphetamines, and improving their QoL in general. Stimulants can be beneficial for individuals who experience depression, to reduce anhedonia and increase self-esteem., however in some cases depression may occur as a comorbid condition originating from the prolonged presence of negative symptoms of undiagnosed ADHD, which can impair executive functions, resulting in lack of motivation, focus and contentment with one's life, so stimulants may be found useful for treating treatment-resistant depression, especially in individuals thought to have ADHD. The SMH also hypothesizes that hyperactive and hypomanic individuals use stimulants to maintain their restlessness and heighten euphoria. Additionally, stimulants are useful to individuals with social anxiety by helping individuals break through their inhibitions. Some reviews suggest that students use psychostimulants recreationally to medicate for underlying conditions, such as ADHD, depression or anxiety, and that their chance of self-medicating of these drugs can be loosely predicted using a variety of risk factors including childhood parental monitoring, participating in a sports team, or through the DAST-10 (screening test).\n",
"Stimulants are the most effective medications available for the treatment of ADHD. Seven different formulations of stimulants have been approved by the U.S. Food and Drug Administration (FDA) for the treatment of ADHD: four amphetamine-based formulations, two methylphenidate-based formulations, and dextromethamphetamine hydrochloride. Atomoxetine, guanfacine and clonidine are the only non-controlled, non-stimulant FDA approved drugs for the treatment of ADHD.\n",
"The effects of amphetamine and methylphenidate on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. The long-term effects on the developing brain and on mental health disorders in later life of chronic use of methylphenidate is unknown. Despite this, between 0.51% to 1.23% of children between the ages of 2 and 6 years take stimulants in the US. Stimulant drugs are not approved for this age group.\n",
"This class of medicines is generally regarded as one unit; however, they affect the brain differently. Some investigations are dedicated to finding the similarities of children who respond to a specific medicine. The behavioral response to stimulants in children is similar regardless of whether they have ADHD or not.\n\nStimulant medication is an effective treatment for adult attention-deficit hyperactivity disorder although the response rate may be lower for adults than children.\n\nSome physicians may recommend antidepressant drugs as the first line treatment instead of stimulants although antidepressants have much lower treatment effect sizes than stimulant medication.\n\nSection::::Medications.:Stimulants.:Amphetamine.\n",
"There are a number of non-stimulant medications, such as atomoxetine, bupropion, guanfacine, and clonidine that may be used as alternatives, or added to stimulant therapy. There are no good studies comparing the various medications; however, they appear more or less equal with respect to side effects. Stimulants appear to improve academic performance while atomoxetine does not. Atomoxetine, due to its lack of addiction liability, may be preferred in those who are at risk of recreational or compulsive stimulant use. There is little evidence on the effects of medication on social behaviors. , the long-term effects of ADHD medication have yet to be fully determined. Magnetic resonance imaging studies suggest that long-term treatment with amphetamine or methylphenidate decreases abnormalities in brain structure and function found in subjects with ADHD. A 2018 review found the greatest short term benefit with methylphenidate in children and amphetamines in adults.\n",
"Although ADHD has most often been treated with medication, medications do not cure ADHD. They are used solely to treat the symptoms associated with this disorder and the symptoms will come back once the medication stops.\n\nSection::::Treatment.:Medication.\n\nStimulants are typically formulated in fast and slow-acting as well as short and long-acting formulations. The fast-acting amphetamine mixed salts (Adderall) and its derivatives, with short and long-acting formulations bind to the trace amine associated receptor and triggers the release of dopamine into the synaptic cleft. They may have a better cardiovascular disease profile than methylphenidate and potentially better tolerated.\n",
"In 1934, Benzedrine became the first amphetamine medication approved for use in the United States. Methylphenidate was introduced in the 1950s, and enantiopure dextroamphetamine in the 1970s. The use of stimulants to treat ADHD was first described in 1937. Charles Bradley gave the children with behavioral disorders benzedrine and found it improved academic performance and behavior.\n",
"A 2008 review found that the use of stimulants improved teachers' and parents' ratings of behavior; however, it did not improve academic achievement. The same review also indicates growth retardation for children consistently medicated over three years, compared to unmedicated children in the study. Intensive treatment for 14 months has no effect on long-term outcomes 8 years later. No significant differences between the various drugs in terms of efficacy or side effects have been found.\n\nSection::::Treatment.:Stimulants.:School enforcement.\n",
"Assessment of the effects of stimulants is relevant given the large population currently taking stimulants. A systematic review of cardiovascular effects of prescription stimulants found no association in children, but found a correlation between prescription stimulant use and ischemic heart attacks. A review over a four-year period found that there were few negative effects of stimulant treatment, but stressed the need for longer term studies. A review of a year long period of prescription stimulant use in those with ADHD found that cardiovascular side effects were limited to transient increases in blood pressure only. Initiation of stimulant treatment in those with ADHD in early childhood appears to carry benefits into adulthood with regard to social and cognitive functioning, and appears to be relatively safe.\n",
"Combined medical management and behavioral treatment is the most effective ADHD management strategy, followed by medication alone, and then behavioral treatment. In terms of cost-effectiveness, management with medication has been shown to be the most cost-effective, followed by behavioral treatment, and combined treatment. The individually most effective and cost-efficient way is with stimulant medication. Additionally, long-acting medications for ADHD, in comparison to short-acting varieties, generally seem to be cost-effective. Comorbid (relating to two diseases that occur together, e.g. depression and ADHD) disorders makes finding the right treatment and diagnosis much more costly than when comorbid disorders are absent.\n\nSection::::Alternative medicine.\n",
"Amphetamine and its derivatives, prototype stimulants, are likewise available in immediate and long-acting formulations. Amphetamines act by multiple mechanisms including reuptake inhibition, displacement of transmitters from vesicles, reversal of uptake transporters and reversible MAO inhibition. Thus amphetamines actively increases the release of these neurotransmitters into the synaptic cleft. They may have a better side-effect profile than methylphenidate cardiovascularly and potentially better tolerated.\n",
"Parents of children with ADHD note that they usually display their symptoms at an early age. There have been few longitudinal studies on the long-term effects of stimulant use in children. The use of stimulant medication has not been approved by the FDA for children under the age of six. A growing trend is the diagnosis of younger children with ADHD. Prescriptions for children under the age of 5 rose nearly 50 percent from 2000 to 2003. Research on this issue has indicated that stimulant medication can help younger children with \"severe ADHD symptoms\" but typically at a lower dose than older children. It was also found that children at this age are more sensitive to side effects and should be closely monitored. Evidence suggests that careful assessment and highly individualized behavioural interventions significantly improve both social and academic skills, while medication only treats the symptoms of the disorder. \"One of the primary reasons cited for the growing use of psychotropic interventions was that many physicians realize that psychological interventions are costly and difficult to sustain.\"\n",
"Psychopharmacologists have also tried adding a stimulant, in particular, d-amphetamine. However, the use of stimulants in cases of treatment-resistant depression is relatively controversial. A review article published in 2007 found psychostimulants may be effective in treatment-resistant depression with concomitant antidepressant therapy, but a more certain conclusion could not be drawn due to substantial deficiencies in the studies available for consideration, and the somewhat contradictory nature of their results.\n\nSection::::History.\n",
"In individuals who experience sub-normal height and weight gains during stimulant therapy, a rebound to normal levels is expected to occur if stimulant therapy is briefly interrupted. The average reduction in final adult height from continuous stimulant therapy over a 3 year period is 2 cm.\n\nSection::::Treatment.:Stimulants.:Effectiveness.\n",
"Stimulants or \"uppers\", such as amphetamines or cocaine, which increase mental or physical function, have an opposite effect to depressants.\n\nSection::::Types.:Depressants.:Antihistamines.\n",
"Non-medical prescription stimulant use is high. A 2003 study found that non prescription use within the last year by college students in the US was 4.1%. A 2008 meta analysis found even higher rates of non prescribed stimulant use. It found 5% to 9% of grade school and high school children and 5% to 35% of college students used a nonprescribed stimulant in the last year.\n",
"In contrast, much larger doses of amphetamine are likely to impair cognitive function and induce rapid muscle breakdown. Substance dependence (i.e., addiction) is a serious risk of amphetamine abuse, but only rarely arises from proper medical use. Very high doses can result in a psychosis (e.g., delusions and paranoia), which very rarely occurs at therapeutic doses even during long-term use. As recreational doses are generally much larger than prescribed therapeutic doses, recreational use carries a far greater risk of serious side effects.\n\nSection::::Notable stimulants.:Caffeine.\n",
"Amphetamines-type stimulants are often used for their therapeutic effects. Physicians sometimes prescribe amphetamine to treat major depression, where subjects do not respond well to traditional SSRI medications, but evidence supporting this use is poor/mixed. Notably, two recent large phase III studies of lisdexamfetamine (a prodrug to amphetamine) as an adjunct to an SSRI or SNRI in the treatment of major depressive disorder showed no further benefit relative to placebo in effectiveness. Numerous studies have demonstrated the effectiveness of drugs such as Adderall (a mixture of salts of amphetamine and dextroamphetamine) in controlling symptoms associated with ADHD. Due to their availability and fast-acting effects, substituted amphetamines are prime candidates for abuse.\n",
"Tolerance to the therapeutic effects of stimulants can occur, and rebound of symptoms may occur when the dose wears off. Rebound effects are often the result of the stimulant dosage being too high or the individual not being able to tolerate stimulant medication. Signs that the stimulant dose is too high include irritability, feeling stimulated or blunting of affect and personality.\n",
"There is limited research on the association between stimulant treatment and presentation of manic symptoms. In a study of 34 adolescents hospitalized with mania, there was an association between earlier age of onset and previous stimulant use, independent of ADHD. In a retrospective study of 80 adolescents hospitalized with bipolar disorder, 35% of patients had previously used stimulants and 44% had used antidepressants, where stimulant use was associated with worse hospitalization course. However, there is mixed research on these relationships. A study conducted in 2008 of 245 bipolar adolescents found neither earlier age of onset nor severity of bipolar symptoms were associated with prior stimulant treatment.\n",
"Section::::Management.:Medication.\n\nStimulant medications are the pharmaceutical treatment of choice. They have at least some effect on symptoms, in the short term, in about 80% of people Methylphenidate appears to improve symptoms as reported by teachers and parents. Stimulants may also reduce the risk of unintentional injuries in children with ADHD.\n",
"Short-term clinical trials have shown medications to be effective for treating ADHD, but the trials usually use exclusion criteria, meaning knowledge of medications for ADHD is based on a small subset of the typical patients seen in clinical practice. They have not been found to improve school performance and data is lacking on long-term effectiveness and the severity of side effects. Stimulants, however, may reduce the risk of unintentional injuries in children with ADHD.\n",
"Reviews of clinical stimulant research have established the safety and effectiveness of long-term amphetamine use for ADHD. Controlled trials spanning two years have demonstrated continuous treatment effectiveness and safety. One review highlighted a 9-month randomized controlled trial of amphetamine in children that found an average increase of 4.5 IQ points and continued improvements in attention, disruptive behaviors, and hyperactivity.\n\nSection::::Medications.:Concerns regarding stimulants.:Withdrawal and rebound.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-01814 | Why and how do some animals (i.e. birds) move their heads in a quick way, almost as if snapping to an angle? | So I want you to hold both thumbs out in front of you, holding your hands apart. Look at one thumb, then look at the other. Chances are your eyes snapped from one to the other quickly. Eyes in general don't focus well when an image is moving too much, so they tend to try to keep things stable by "snapping" from image to image. Even when you're trying to read or look slowly across something, your eyes are really making lots of small jumps instead of truly moving smoothly most of the time (the only exception is really when your eye is tracking something in motion). Birds don't have muscles to move their eyes around, so their whole head snaps around to do the same job as our eye muscles do. They don't always do this; birds will sometimes move their heads slowly, but just like with our eyes and watching something that's moving, they'll move their whole head to follow a moving object they're tracking. | [
"To obtain steady images while flying or when perched on a swaying branch, birds hold the head as steady as possible with compensating reflexes. Maintaining a steady image is especially relevant for birds of prey. Because the image can be centered on the deep fovea of only one eye at a time, most falcons when diving use a spiral path to approach their prey after they have locked on to a target individual. The alternative of turning the head for a better view slows down the dive by increasing drag while spiralling does not reduce speeds significantly.\n\nSection::::Perception.:Edges and shapes.\n",
"In many animals, including human beings, the inner ear functions as the biological analogue of an accelerometer in camera image stabilization systems, to stabilize the image by moving the eyes. When a rotation of the head is detected, an inhibitory signal is sent to the extraocular muscles on one side and an excitatory signal to the muscles on the other side. The result is a compensatory movement of the eyes. Typically eye movements lag the head movements by less than 10 ms.\n\nSection::::See also.\n\nBULLET::::- Adaptive optics\n\nBULLET::::- Deblurring\n\nBULLET::::- Heligimbal\n\nBULLET::::- Hyperlapse\n\nBULLET::::- Motion compensation\n\nBULLET::::- Shaky camera\n",
"Section::::Birds.:Rhynchokinesis.\n\nRhynchokinesis is an ability possessed by some birds to flex their upper beak or rhinotheca. Rhynchokinesis involves flexing at a point some way along the upper beak - either upwards, in which case the upper beak and lower beak or gnathotheca diverge, resembling a yawn, or downwards, in which case the tips of the beaks remain together while a gap opens up between them at their midpoint.\n",
"There is a need for some mechanism that stabilises images during rapid head movements. This is achieved by the vestibulo-ocular reflex, which is a reflex eye movement that stabilises images on the retina by producing eye movements in the direction opposite to head movements, thus preserving the image on the centre of the visual field. For example, when the head moves to the right, the eyes move to the left, and vice versa. In many animals, including human beings, the inner ear functions as the biological analogue of an accelerometer in camera image stabilization systems, to stabilize the image by moving the eyes. When a rotation of the head is detected, an inhibitory signal is sent to the extraocular muscles on one side and an excitatory signal to the muscles on the other side. The result is a compensatory movement of the eyes. Typical human eye movements lag head movements by less than 10 ms.\n",
"Some animals - usually, but not always, prey animals - have their two eyes positioned on opposite sides of their heads to give the widest possible field of view. Examples include rabbits, buffaloes, and antelopes. In such animals, the eyes often move independently to increase the field of view. Even without moving their eyes, some birds have a 360-degree field of view.\n",
"BULLET::::- Maxillojugal Unit\n\nBULLET::::- Dentary-predentary\n\nBULLET::::- Quadratojugal\n\nBULLET::::- Quadrate\n\nAs the lower jaw closes, the maxillojugal units move laterally producing a power stroke. These motions were later proved by a microwear analysis on an Edmontosaurus jaw.\n\nSection::::Birds.\n\nBirds show a vast range of cranial kinetic hinges in their skulls. Zusi recognised three basic forms of cranial kinesis in birds,\n\nBULLET::::- Prokinesis, where the upper beak moves at the point where it is hinged with the bird's skull\n",
"The first example of cranial kinesis was in the chondrichthyans, such as sharks. There is no attachment between the hyomandibular and the quadrate, and instead the hyoid arch suspends the two sets of jaws like pendulums. This allows sharks to swing their jaws outwards and forwards over the prey, allowing the synchronous meeting of the jaws and avoiding deflecting the prey when it comes close.\n\nSection::::Fish.:Actinopterygian fish.\n",
"BULLET::::- \"Head toss:\" This behavior, shown by every observed dog, is a prompt for attention, food or a sign of frustration, expressed in varying degrees depending on the level of arousal. In the complete expression, the head is swept to one side, nose rotated through a 90° arc to midline, then rapidly returned to the starting position. The entire sequence takes 1–2 seconds. The mildest expression is a slight flick of the head to the side and back. During this behavior, the characteristic contrasting black and white chin markings are displayed.\n",
"In the late 1990s, however, experiments using animals whose heads were free to move showed clearly that the SC actually produces \"gaze shifts\", usually composed of combined head and eye movements, rather than eye movements \"per se\". This discovery reawakened interest in the full breadth of functions of the superior colliculus, and led to studies of multisensory integration in a variety of species and situations. Nevertheless, the role of the SC in controlling eye movements is understood in much greater depth than any other function.\n",
"Optocollic reflex\n\nOptocollic reflex is a gaze stabilization reflex that occurs in birds in response to visual (optokinetic) inputs, and leads to head movements that compensate for passive displacements and rotations of the animal. The reflex seems to be more prominent when the bird is flying (or at least held in a \"flying position\"). The brain systems involved in the reflex are the nucleus of the basal optic roots, the pretectal nucleus lentiformis mesencephali, the vestibular nuclei, and the cerebellum\n",
"Section::::Other animals.\n\nIn veterinary literature usually only the lateral bend of head and neck is termed torticollis, whereas the analogon to the rotatory torticollis in humans is called a head tilt.\n\nThe most frequently encountered form of torticollis in domestic pets is the head tilt, but occasionally a lateral bend of the head and neck to one side is encountered.\n\nSection::::Other animals.:Head tilt.\n\nCauses for a head tilt in domestic animals are either diseases of the central or peripheral vestibular system or relieving posture due to neck pain.\n\nKnown causes for head tilt in domestic animals include:\n",
"As the eyes of humans and other animals are in different positions on the head, they present different views simultaneously. This is the basis of stereopsis, the process by which the brain exploits the parallax due to the different views from the eye to gain depth perception and estimate distances to objects. Animals also use \"motion parallax\", in which the animals (or just the head) move to gain different viewpoints. For example, pigeons (whose eyes do not have overlapping fields of view and thus cannot use stereopsis) bob their heads up and down to see depth.\n",
"Many owl species, such as the barn owl, have asymmetrically positioned ears that enhance sound positioning.\n\nSection::::Fish.\n\nMany flatfish, such as flounders, have eyes placed asymmetrically in the adult fish. The fish has the usual symmetrical body structure when it is young, but as it matures and moves to living close to the sea bed, the fish lies on its side, and the head twists so that both eyes are on the top.\n",
"Section::::Fish.\n\nMany fish exhibit durophagous behaviour including the Triggerfish, some Teleosts and some cichlids.\n\nSection::::Fish.:Triggerfish (\"Balistidae\").\n\nTriggerfish have jaws that contain a row of four teeth on either side, the upper jaw containing an additional set of six plate-like pharyngeal teeth. Triggerfish do not have jaw protrusion and there are enlarged jaw adductor muscles for extra power to crush the protective shells and spines of their prey.\n\nSection::::Fish.:Teleost (Teleostei).\n",
"The neck of a bird is composed of 13–25 cervical vertebrae enabling birds to have increased flexibility. A flexible neck allows many birds with immobile eyes to move their head more productively and center their sight on objects that are close or far in distance. Most birds have about three times as many neck vertebrae as humans, which allows for increased stability during fast movements such as flying, landing, and taking-off. The neck plays a role in head-bobbing which is present in at least 8 out of 27 orders of birds, including Columbiformes, Galliformes, and Gruiformes. Head-bobbing is an optokinetic response which stabilizes a birds surroundings as they alternate between a thrust phase and a hold phase. Head-bobbing is synchronous with the feet as the head moves in accordance with the rest of the body. Data from various studies suggest that the main reason for head-bobbing in some birds is for the stabilization of their surroundings, although it is uncertain why some but not all bird orders show head-bob.\n",
"Section::::Hares.\n\nIn hares or \"jackrabbits\" (but not in their ancestors), there is a suture between regions in the fetal braincase that remains open in the adult, forming what is thought to be an intracranial joint, permitting relative motion between the anterior and posterior part of the braincase. It is thought that this helps absorb the force of impact as the hare strikes the ground.\n\nSection::::See also.\n\nBULLET::::- Snake skull\n\nSection::::References.\n\nBULLET::::- Notes\n\nBULLET::::- Bibliography\n\nBULLET::::- \"A functional and evolutionary analysis of rhynchokinesis in birds\" by Richard L Zusi, Smithsonian Institution Press, 1984.\n",
"Due to the nature of a horse's vision, head position may indicate where the animal is focusing attention. To focus on a distant object, a horse will raise its head. To focus on an object close by, and especially on the ground, the horse will lower its nose and carry its head in a near-vertical position. Eyes rolled to the point that the white of the eye is visible often indicates fear or anger.\n",
"Heterogeneous eyes have evolved at least nine times: four or more times in gastropods, once in the copepods, once in the annelids, once in the cephalopods, and once in the chitons, which have aragonite lenses. No extant aquatic organisms possess homogeneous lenses; presumably the evolutionary pressure for a heterogeneous lens is great enough for this stage to be quickly \"outgrown\".\n\nThis eye creates an image that is sharp enough that motion of the eye can cause significant blurring. To minimise the effect of eye motion while the animal moves, most such eyes have stabilising eye muscles.\n",
"Most vertebrates have some form of kinetic skull. Cranial kinesis, or lack thereof, is usually linked to feeding. Animals which must exert powerful bite forces, such as crocodiles, often have rigid skulls with little or no kinesis, for maximum strength. Animals which swallow large prey whole (snakes), which grip awkwardly shaped food items (parrots eating nuts), or, most often, which feed in the water via suction feeding often have very kinetic skulls, frequently with numerous mobile joints. In the case of mammals, which have akinetic skulls (except for perhaps hares), the lack of kinesis is most likely to be related to the secondary palate, which prevents relative movement. This in turn is a consequence of the need to be able to create a suction during suckling.\n",
"Pleurokinesis refers to the complex multiple jointing thought to occur in ornithopods, such as hadrosaurs. Ornithopod jaws are isognathic (meet simultaneously), working like a guillotine to slice plant material which can be manipulated with their teeth. However, because of the wedge shape of their teeth, the occlusional plane is tilted away from the centre of the head, causing the jaws to lock together and, due to the lack of a secondary palate, the force of this would not be braced. Because of this, Norman and Weishampel proposed a pleurokinetic skull. Here, there are four (or perhaps even more) kinetic parts of the skull,\n",
"Section::::Visual capacity of the horse.:Visual acuity and sensitivity to motion.\n\nThe horse has a \"visual streak\", or an area within the retina, linear in shape, with a high concentration of ganglion cells (up to 6100 cells/mm in the visual streak compared to the 150 and 200 cells/mm in the peripheral area). Horses have better acuity when the objects they are looking at fall in this region. They therefore will tilt or raise their heads, to help place the objects within the area of the visual streak.\n",
"The visual system in the brain is too slow to process that information if the images are slipping across the retina at more than a few degrees per second. Thus, to be able to see while we are moving, the brain must compensate for the motion of the head by turning the eyes. Another specialisation of visual system in many vertebrate animals is the development of a small area of the retina with a very high visual acuity. This area is called the fovea, and covers about 2 degrees of visual angle in people. To get a clear view of the world, the brain must turn the eyes so that the image of the object of regard falls on the fovea. Eye movement is thus very important for visual perception, and any failure can lead to serious visual disabilities. To see a quick demonstration of this fact, try the following experiment: hold your hand up, about one foot (30 cm) in front of your nose. Keep your head still, and shake your hand from side to side, slowly at first, and then faster and faster. At first you will be able to see your fingers quite clearly. But as the frequency of shaking passes about 1 Hz, the fingers will become a blur. Now, keep your hand still, and shake your head (up and down or left and right). No matter how fast you shake your head, the image of your fingers remains clear. This demonstrates that the brain can move the eyes opposite to head motion much better than it can follow, or pursue, a hand movement. When your pursuit system fails to keep up with the moving hand, images slip on the retina and you see a blurred hand.\n",
"Section::::Behaviour.:Feeding.\n\nThe ruff normally feeds using a steady walk and pecking action, selecting food items by sight, but it will also wade deeply and submerge its head. On saline lakes in East Africa it often swims like a phalarope, picking items off the surface. It will feed at night as well as during the day. It is thought that Ruff use both visual and auditory cues to find prey. When feeding, the ruff frequently raises its back feathers, producing a loose pointed peak on the back; this habit is shared only by the black-tailed godwit.\n",
"All dogs (and all living Canidae - wolves, foxes, and wild dogs) possess a similar ligament connecting the spinous process of their first thoracic (or chest) vertebrae to the back of the axis bone (second cervical or neck bone), which supports the weight of the head without active muscle exertion, thus saving energy. This ligament is analogous in function (but different in exact structural detail) to the nuchal ligament found in ungulates. This ligament allows dogs to carry their heads while running long distances, such as while following scent trails with their nose to the ground, without expending much energy.\n",
"Section::::Vision.:Extraocular muscles.\n\nEach eye has six muscles that control its movements: the lateral rectus, the medial rectus, the inferior rectus, the superior rectus, the inferior oblique, and the superior oblique. When the muscles exert different tensions, a torque is exerted on the globe that causes it to turn, in almost pure rotation, with only about one millimeter of translation. Thus, the eye can be considered as undergoing rotations about a single point in the center of the eye.\n\nSection::::Vision.:Rapid eye movement.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
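A note on the gaze-stabilisation passages in the record above: several of them describe the same compensation rule, namely that when the head rotates one way the eyes (or, in birds, the whole head) counter-rotate the other way with a lag of under 10 ms. The toy sketch below only illustrates that delayed counter-rotation; the 5 ms step size, the simple sign flip, and the function name are assumptions made for the example, not a model taken from any of the quoted sources.

LAG_S = 0.010   # compensation lag quoted in the passages (under ~10 ms)
DT = 0.005      # simulation step of 5 ms (an arbitrary choice for the demo)

def compensating_eye_angle(head_angles, lag=LAG_S, dt=DT):
    """Return eye angles that counter-rotate against the head, delayed by `lag`."""
    delay_steps = int(round(lag / dt))
    eye = []
    for i in range(len(head_angles)):
        j = max(0, i - delay_steps)      # look up the slightly older head angle
        eye.append(-head_angles[j])      # rotate in the opposite direction
    return eye

head = [0, 2, 4, 6, 8, 8, 8]             # head turning right, 2 degrees per step
print(compensating_eye_angle(head))      # eyes turn left, two steps (10 ms) behind

The printed list mirrors the sign of the head trace and lags it by two samples, which is the whole content of the reflex as the passages describe it; real vestibulo-ocular control adds gain adaptation and saturation that this sketch ignores.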
2018-00863 | Why is it greenscreen and not any other color? and how does it work? | It can be any colour you want. But you have to pick a colour that will not clash with the rest of the set, or the costumes of the actors. There has to be a decent contrast between the screen and the rest of the set. | [
"The web presenter technology involves using a green screen backdrop when filming so that the video can be edited using Chroma key compositing. This allows the video to appear as a transparent overlay onto any website using a single line of HTML (and often javascript) code. A green screen is used as the image sensors in digital video cameras are more sensitive to green than any other colour.\n",
"The GreenScreen List Translator is the first step in a GreenScreen Assessment. It is also used as a stand alone screening protocol by health and sustainability screening and certification programs. It is widely referenced in standards and certifications related to green building products, including the Health Product Declaration Standard (HPD), Portico, and the \"Building product disclosure and optimization - material ingredients\" credits in the US Green Building Council's LEED program.\n\nSection::::External links.\n\nBULLET::::- GreenScreen for Safer Chemicals home page for the GreenScreen Standard\n\nBULLET::::- Clean Production Action publisher of the GreenScreen\n",
"Section::::Process.\n\nThe principal subject is filmed or photographed against a background consisting of a single colour or a relatively narrow range of colours, usually blue or green because these colours are considered to be the furthest away from skin tone. The portions of the video which match the pre-selected color are replaced by the alternate background video. This process is commonly known as \"keying\", \"keying out\" or simply a \"key\".\n\nSection::::Process.:Processing a green backdrop.\n",
"The central tools of the List Translator are the GreenScreen Specified Lists and the GreenScreen List Translator Map. \n",
"If the foreground object was filmed close to the backing screen or with less than ideal lighting conditions, the foreground object will usually have ‘spill’ somewhere on it. This is most common when filming blonde people against a blue or greenscreen as their translucent hair will absorb the backing screen color. Such ‘colorspill’ can be removed and replaced with several options to achieve a more realistic result.\n",
"On August 19, 2013 Jim Jannard announced his retirement from RED, leaving Jarred Land the current president to take over in his absence.\n\nSection::::Cameras.\n\nSection::::Cameras.:Red One.\n",
"Green screen (disambiguation)\n\nGreen screen compositing, or more generally chroma key compositing, is a technique for combining two still images or video frames.\n\nGreen screen may also refer to:\n\nBULLET::::- Green-screen display, a monochrome CRT computer display\n\nBULLET::::- GreenScreen Interactive Software, a publisher of video games\n\nBULLET::::- Green screen of death, a failure mode on the TiVo digital video recorder and Xbox 360 console game system platforms\n\nBULLET::::- Green Screen film festival, a film festival in Germany\n\nBULLET::::- GreenScreen for Safer Chemicals a green chemicals assessment tool.\n",
"It is commonly used for weather forecast broadcasts, wherein a news presenter is usually seen standing in front of a large CGI map during live television newscasts, though in actuality it is a large blue or green background. When using a blue screen, different weather maps are added on the parts of the image where the color is blue. If the news presenter wears blue clothes, his or her clothes will also be replaced with the background video. Chroma keying is also common in the entertainment industry for visual effects in movies and video games.\n\nSection::::History.\n\nSection::::History.:Predecessors.\n",
"Section::::Workflow.\n\nPrimatte is usually activated on a foreground image with a person or other foreground object filmed or digitized against a solid colored background or backing screen; usually a bluescreen or a greenscreen. The solid colored background area is removed and replaced with transparency. This allows the user to replace the solid colored background with a background image of his choice.\n",
"Episodes that have been made with the games starting with \"Halo 3\" have used the theater mode camera. The Forge Mode from \"Halo 4\" onward also helped by providing a green screen and the creation of entire areas for certain scenes.\n",
"Reverse bluescreen\n\nReverse bluescreen is a special effects technique pioneered by Jonathan Erland of Apogee Inc.(John Dykstra's company)for shooting the flying sequences in the film \"Firefox\". Erland received Academy Awards for this technique.\n",
"Sometimes a shadow can be used to create a visual effect. Areas of the bluescreen or greenscreen with a shadow on them can be replaced with a darker version of the desired background video image, making it look like the person is casting a shadow on them. Any spill of the chroma key color will make the result look unnatural. A difference in the focal length of the lenses used can affect the success of chroma key.\n\nSection::::Tolerances.:Exposure.\n",
"Over the years, \"Red vs. Blue\" has attracted numerous notable guest stars, namely Elijah Wood, Christopher Sabat, Amber Benson, Dan Avidan, Arin Hanson, and Smosh.\n\nSection::::Production.\n\nSection::::Production.:Development history.\n",
"BULLET::::- Naganawanaland – Kelly Coffield plays a newly appointed U.S. ambassador to an obscure African nation, while David Alan Grier plays her interpreter whose translations don't quite match up to what the new ambassador is actually saying.\n",
"A studio shot taken in front of a green screen will naturally have ambient light the same color as the screen, due to its light scattering. This effect is known as \"spill\". This can look unnatural or cause portions of the characters to disappear, so must be compensated for, or avoided by using a larger screen placed far from the actors.\n\nSection::::Process.:Major factors.:Camera.\n\nThe depth of field used to record the scene in front of the colored screen should match that of the background. This can mean recording the actors with a larger depth of field than normal.\n\nSection::::Clothing.\n",
"The biggest challenge when setting up a bluescreen or greenscreen is even lighting and the avoidance of shadow, because it is best to have as narrow a color range as possible being replaced. A shadow would present itself as a darker color to the camera and might not register for replacement. This can sometimes be seen in low-budget or live broadcasts where the errors cannot be manually repaired. The material being used affects the quality and ease of having it evenly lit. Materials which are shiny will be far less successful than those that are not. A shiny surface will have areas that reflect the lights making them appear pale, while other areas may be darkened. A matte surface will diffuse the reflected light and have a more even color range. In order to get the cleanest key from shooting greenscreen it is necessary to create a value difference between the subject and the greenscreen. In order to differentiate the subject from the screen, a two-stop difference can be used, either by making the greenscreen two stops higher than the subject, or vice versa.\n",
"A so-called \"yellow screen\" is accomplished with a white backdrop. Ordinary stage lighting is used in combination with a bright yellow sodium lamp. The sodium light falls almost entirely in a narrow frequency band, which can then be separated from the other light using a prism, and projected onto a separate but synchronized film carrier within the camera. This second film is high-contrast black and white, and is processed to produce the matte.\n\nOccasionally, a magenta background is used, as in some software applications where the magenta or fuchsia is sometimes referred to as \"magic pink\".\n",
"The GreenScreen standard is developed, maintained and published by Clean Production Action (CPA), a non profit organization, based in the United States. CPA publishes the GreenScreen as an open standard which anyone can utilize. To make a public claim using a GreenScreen Benchmark, however, the GreenScreen assessment must be completed by a Profiler licensed by CPA.\n\nSection::::Related standards.\n",
"CineSpace\n\ncineSpace is a color management solution for the motion picture and video industry. It addresses the two major issues concerning color:\n\nBULLET::::1. Ensuring that all displays throughout the facility look the same; and\n\nBULLET::::2. Making those displays look like a selected output target, such as a particular film stock or a video standard like HD.\n\nThe cineSpace product was originally developed by Rising Sun Research. In 2008 Cine-tal Systems, Inc acquired the product. In 2011, THX acquired the product.\n\nSection::::References.\n",
"Compositing techniques known as chroma keying that remove all areas of a certain color from a recording - colloquially known as \"bluescreen\" or \"greenscreen\" after the most popular colors used - are probably the best-known and most widely used modern techniques for creating traveling mattes, although rotoscoping and multiple motion control passes have also been used in the past. Computer-generated imagery, either static or animated, is also often rendered with a transparent background and digitally overlaid on top of modern film recordings using the same principle as a matte - a digital image mask.\n\nSection::::History.\n",
"Just before principal photography was about to begin, the company purchased a pair of workstations dedicated to 3D graphics and enhanced their Avid mounting system. The film was shot in ten days, with a Sony HVR-Z1. The bluescreen technique, which involves acting against an otherwise blank backdrop with backgrounds filled in later, was used for all the scenes with Prati, done at Illusion's studios. The technique spares the production the cost and time of constructing a physical set, but, as Prati pointed out, it also means the actor has no points of reference.\n",
"Section::::Reception.:Impact on machinima.\n",
"Red, green, and blue light combined at full intensity on the black screen makes white; by lowering the intensity, it is possible to create different shades of grey.\n",
"BULLET::::- A video mixer, which combines the video from the camera with the video from the realtime rendering software to produce a final video output. One of the most common ways to mix the video to replace a chroma key background.\n\nA major difference between a virtual studio and the bluescreen special effects used in movies is that the computer graphics are rendered in realtime, removing the need for any post production work, and allowing it to be used in live television broadcasts.\n",
"Another challenge for bluescreen or greenscreen is proper camera exposure. Underexposing or overexposing a colored backdrop can lead to poor saturation levels. In the case of video cameras, underexposed images can contain high amounts of noise, as well. The background must be bright enough to allow the camera to create a bright and saturated image.\n\nSection::::Programming.\n\nThere are several different quality- and speed-optimized techniques for implementing color keying in software.\n"
] | [] | [] | [
"normal"
] | [
"It has to be a green screen."
] | [
"false presupposition",
"normal"
] | [
"It can be any color it just needs to be a color that isn't used in the rest of the set. "
] |
2018-03099 | Why does a sound wave look the way it does, especially in audio tracks in music software? What do the hills and valleys represent and what does it mean when it's above or below the line? | Above the line = speaker cone moves outwards. Below the line = speaker cone moves inwards. And that movement produces sound by pushing air around. | [
"Section::::Applications.\n\nSection::::Applications.:The Sound Around You Project.\n",
"It is not easy to identify what acoustic cues listeners are sensitive to when perceiving a particular speech sound:\n\n\"At first glance, the solution to the problem of how we perceive speech seems deceptively simple. If one could identify stretches of the acoustic waveform that correspond to units of perception, then the path from sound to meaning would be clear. However, this correspondence or mapping has proven extremely difficult to find, even after some forty-five years of research on the problem.\"\n",
"Section::::Sound map vision guides development.\n",
"Section::::Applications.:New York Sound Map.\n\nThe NYSoundmap is a project of The New York Society for Acoustic Ecology (NYSAE), a New York metropolitan chapter of the American Society for Acoustic Ecology, an organization dedicated to exploring the role of sound in natural habitats and human societies, and promoting public dialog concerning the identification, preservation, and restoration of natural and cultural sound environments.\n",
"In 2009, Atchley performed concert versions of \"turtle\". The primary sound of this work is generated by six sine wave tones in the frequency range from 261.63 Hz and 440.00 Hz. Attending video landscapes are generated by defining and displaying sets of points within a single, germinal image.\n",
"BULLET::::- For a \"sine wave\", the wave height \"H\" is twice the amplitude:\n\nBULLET::::- For a \"periodic wave\" it is simply the difference between the maximum and minimum of the surface elevation \"z\" = \"η\"(\"x\" – \"c\" \"t\"):\n",
"Using either approach, a grid of receivers must be defined in order to measure or calculate noise levels. When results are obtained, using GIS tools, spatial interpolation must be applied in order to give a continuous graphical representation of sound levels. According to the END five dBA ranges are used for this contour (isoline) representation.\n\nThe maps may be useful for planning stages, or for prior evaluation of action plans, determination of most polluted areas.\n",
"Section::::Vocals.\n",
"Section::::History / Background.\n",
"Section::::Geophotography as an educational tool.\n",
"For aliasing-free rendition in the entire audio range a distance of the single emitters below 2 cm would be necessary. But fortunately our ear is not particularly sensitive to spatial aliasing. A 10–15 cm emitter distance is generally sufficient. \n\nSection::::Challenges.:Truncation effect.\n",
"Section::::Mathematical description.\n\nThe below discussion is from Landau and Lifshitz. If the amplitude and the direction of propagation varies slowly over the distances of wavelength, then an arbitrary sound wave can be approximated locally as a plane wave. In this case, the velocity potential can be written as\n",
"BULLET::::- The spatial-domain double prediction is also called DMH (Directional Multi-Hypothesis), which is obtained by fusing two prediction points around the initial prediction point, and the initial point is located in the line between the two prediction points. In addition to the initial prediction point, there are 8 prediction points in total, to be fused only with the two prediction points located in the same straight line with the initial prediction point. Besides four different directions, the adjustment will also be conducted according to the distance, and the four modes with 1/2 pixel distance and 1/4 pixel distance will be respectively calculated, plus the initial prediction point, to work out 9 modes in total for comparison, thus to select out the optimal prediction mode.\n",
"A sound is often formed by the seas flooding a river valley. This produces a long inlet where the sloping valley hillsides descend to sea-level and continue beneath the water to form a sloping sea floor. The Marlborough Sounds in New Zealand are a good example of this type of formation.\n",
"Section::::Tonalism in Southern California.\n",
"Section::::Digital photography.\n\nForms of tone mapping long precede digital photography. The manipulation of film and development process to render high contrast scenes, especially those shot in bright sunlight, on printing paper with a relatively low dynamic range, is effectively a form of tone mapping, although it is not usually called that. Local adjustment of tonality in film processing is primarily done via dodging and burning, and is particularly advocated by and associated with Ansel Adams, as described in his book \"The Print;\" see also his Zone System.\n",
"The scale space is divided into a number of octaves, where an octave refers to a series of response maps of covering a doubling of scale. In SURF, the lowest level of the scale space is obtained from the output of the 9×9 filters.\n",
"Sound map\n\nSound maps are digital geographical maps that put emphasis on the sonic representation of a specific location. Sound maps are created by associating landmarks (streets in a city, train stations, stores, pathways, factories, oil pumps, etc.) and soundscapes.\n",
"The normal process of exposure compensation, brightening shadows and altering contrast applied globally to digital images as part of a professional or serious amateur workflow is also a form of tone mapping.\n",
"Section::::Linear noise source.\n",
"BULLET::::- Level Difference: Very close sound sources cause a different level between the ears.\n\nSection::::Sound localization by the human auditory system.:Signal processing.\n\nSound processing of the human auditory system is performed in so-called critical bands. The hearing range is segmented into 24 critical bands, each with a width of 1 Bark or 100 Mel. For a directional analysis the signals inside the critical band are analyzed together.\n",
"Section::::Perception of sound.:Spatial location.\n\nSpatial location (see: Sound localization) represents the cognitive placement of a sound in an environmental context; including the placement of a sound on both the horizontal and vertical plane, the distance from the sound source and the characteristics of the sonic environment. In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification. This is the main reason why we can pick the sound of an oboe in an orchestra and the words of a single person at a cocktail party.\n\nSection::::Sound pressure level.\n",
"Section::::Modern usage and techniques.\n\nSection::::Modern usage and techniques.:Field geophotography.\n",
"Section::::Data and Symbology.\n",
"In an interview, Dwight Yoakam defined the term \"Bakersfield sound\":\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
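The answer in the record above reads a waveform display directly as speaker-cone motion: samples above the centre line push the cone outward, samples below pull it inward. The short sketch below shows what an audio editor is actually plotting, a sequence of signed amplitude samples; the 8 kHz sample rate, the 440 Hz tone, and the crude text bars are arbitrary choices for the illustration, not anything taken from the quoted sources.

import math

SAMPLE_RATE = 8000      # samples per second (arbitrary for the demo)
FREQUENCY = 440.0       # a 440 Hz sine tone
AMPLITUDE = 1.0

def sine_samples(n):
    """Return the first n samples of the tone, each in the range [-1, 1]."""
    return [AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * i / SAMPLE_RATE)
            for i in range(n)]

for i, s in enumerate(sine_samples(16)):
    # Positive samples are drawn above the centre line (cone pushed out),
    # negative samples below it (cone pulled in).
    side = "above" if s >= 0 else "below"
    bar = "#" * int(20 * abs(s))
    print(f"sample {i:2d}: {s:+.3f} ({side}) {bar}")

The hills and valleys in an editor are exactly these positive and negative sample values drawn against time; zooming out far enough merges them into the familiar solid-looking envelope.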
2018-01481 | If light slows down in water, how does it speed back up again when it comes out? | There's a way to think about this problem that was--IIRC--in Stephen Hawking's *A Brief History of Time*. Imagine a big famous actor walking through a room. They travel at a constant speed that we'll call A. The actor always moves at speed A, no matter what. When the room is empty, they're able to walk into the room and out of it easily in a straight line. However, if the room is full of people then the actor can't walk in and out of the room in a straight line. They keep moving at A, but because of the people they have to bounce around and take a much more circuitous path to get out of the room. This means that despite remaining at A the entire time, it took them *longer* to get out of the full room than the empty room. The same is true with light. The light doesn't 'slow down' in water. The light's still moving at c. However, water is much more dense than air or a vacuum, so in order to make it through the water, the light has to take a much more circuitous path. Despite never changing speed, we as an outside observer perceive the light taking more time to cross through the same distance as a difference in speed, rather than what it actually is: the distance having changed. Then again, I'm not a physicist, so this could be wrong, but I think it serves as a good ELI5. Edit: I’m seeing a lot of comments that this explanation is either wrong, too oversimplified or some combination of the two. As I said, I’m not a physicist, and am only repeating what I’ve heard from what I believed to be a reputable source. I would encourage anyone reading this to also look at the discussion underneath this comment and in the rest of the comments as well. Just because I got the most upvotes doesn’t mean I’m right. | [
"The simplest picture of light given by classical physics is of a wave or disturbance in the electromagnetic field. In a vacuum, Maxwell's equations predict that these disturbances will travel at a specific speed, denoted by the symbol . This well-known physical constant is commonly referred to as the speed of light. The postulate of the constancy of the speed of light in all inertial reference frames lies at the heart of special relativity and has given rise to a popular notion that the \"speed of light is always the same\". However, in many situations light is more than a disturbance in the electromagnetic field.\n",
"In 1850, Hippolyte Fizeau and Léon Foucault independently established that light travels more slowly in water than in air, thus validating a prediction of Fresnel's wave theory of light and invalidating the corresponding prediction of Newton's corpuscular theory. The speed of light was measured in still water. What would be the speed of light in flowing water?\n",
"In 1998, Danish physicist Lene Vestergaard Hau led a combined team from Harvard University and the Rowland Institute for Science which succeeded in slowing a beam of light to about 17 meters per second, and researchers at UC Berkeley slowed the speed of light traveling through a semiconductor to 9.7 kilometers per second in 2004. Hau and her colleagues later succeeded in stopping light completely, and developed methods by which it can be stopped and later restarted. This was in an effort to develop computers that will use only a fraction of the energy of today's machines.\n",
"According to the theories prevailing at the time, light traveling through a moving medium would be a simple sum of its speed \"through\" the medium plus the speed \"of\" the medium. Contrary to expectation, Fizeau found that although light appeared to be dragged by the water, the magnitude of the dragging was much lower than expected. If formula_134 is the speed of light in still water, and formula_69 is the speed of the water, and formula_136 is the water-bourne speed of light in the lab frame with the flow of water adding to or subtracting from the speed of light, then\n",
"If a laser beam is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than \"c\". Similarly, a shadow projected onto a distant object can be made to move across the object faster than \"c\". In neither case does the light travel from the source to the object faster than \"c\", nor does any information travel faster than light. An analogy can be made to pointing a water hose in one direction and then quickly moving the hose to point the stream of water in another direction. At no point does the water leaving the hose ever increase in velocity, but the endpoint of the stream can be moved faster than the water in the stream itself.\n",
"Light that travels through transparent matter does so at a lower speed than \"c\", the speed of light in a vacuum. For example, photons engage in so many collisions on the way from the core of the sun that radiant energy can take about a million years to reach the surface; however, once in open space, a photon takes only 8.3 minutes to reach Earth. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polariton (other quasi-particles are phonons and excitons); this polariton has a nonzero effective mass, which means that it cannot travel at \"c\". Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering.\n",
"A recent theory of DVM, termed the Transparency Regulator Hypothesis, argues that water transparency is the ultimate variable that determines the exogenous factor (or combination of factors) that causes DVM behavior in a given environment. In less transparent waters, where fish are present and more food is available, fish tend to be the main driver of DVM. In more transparent bodies of water, where fish are less numerous and food quality improves in deeper waters, UV light can travel farther, thus functioning as the main driver of DVM in such cases.\n\nSection::::Unusual events.\n",
"That is, if \"n\" is the index of refraction of water, so that \"c/n\" is the velocity of light in stationary water, then the predicted speed of light \"w\" in one arm would be\n\nand the predicted speed in the other arm would be\n\nHence light traveling against the flow of water should be slower than light traveling with the flow of water.\n",
"BULLET::::- At the event horizon, formula_30 the speed of light shining outward away from the center of black hole is formula_31 It can not escape from the event horizon. Instead, it gets stuck at the event horizon. Since light moves faster than all others, matter can only move inward at the event horizon. Everything inside the event horizon is hidden from the outside world.\n",
"So-called superluminal motion is seen in certain astronomical objects, such as the relativistic jets of radio galaxies and quasars. However, these jets are not moving at speeds in excess of the speed of light: the apparent superluminal motion is a projection effect caused by objects moving near the speed of light and approaching Earth at a small angle to the line of sight: since the light which was emitted when the jet was farther away took longer to reach the Earth, the time between two successive observations corresponds to a longer time between the instants at which the light rays were emitted.\n",
"In 1851, Fizeau conducted an experiment to answer this question, a simplified representation of which is illustrated in Fig. 5‑1. A beam of light is divided by a beam splitter, and the split beams are passed in opposite directions through a tube of flowing water. They are recombined to form interference fringes, indicating a difference in optical path length, that an observer can view. The experiment demonstrated that dragging of the light by the flowing water caused displacement of the fringes, showing that the motion of the water had affected the speed of the light.\n",
"LAMP's founding Director was William \"Billy Ray\" Morris, who oversaw archaeological research and educational programs until his departure in 2005. In March 2006, underwater archaeologist Chuck Meide took over control of the organization as its new Director, with the assistance of then Director of Archaeology Dr. Sam Turner. Today, LAMP maintains four archaeologists on staff and works with a team of archaeological conservators, and regularly employs a large number of volunteers and student interns.\n",
"When light propagates through a material, it travels slower than the vacuum speed, . This is a change in the phase velocity of the light and is manifested in physical effects such as refraction. This reduction in speed is quantified by the ratio between and the phase velocity. This ratio is called the refractive index of the material. Slow light is a dramatic reduction in the group velocity of light, not the phase velocity. Slow light effects are not due to abnormally large refractive indices, as which will be explained below.\n",
"BULLET::::- The waves of visible light oscillate with a period (reciprocal frequency) of about 2 femtoseconds formula_1. The precise period depends on the energy of the photons, which determines their color. (\"See wave–particle duality\".) This time can be calculated by dividing the wavelength of the light by the speed of light (approximately 3 × 10 m/s) to determine the time required for light to travel that distance.\n\nBULLET::::- 1.3 fs – cycle time for 390-nanometer light, at the transition between violet visible light and ultraviolet\n",
"If a laser beam is swept quickly across a distant object, the spot of light can move faster than \"c\", although the initial movement of the spot is delayed because of the time it takes light to get to the distant object at the speed \"c\". However, the only physical entities that are moving are the laser and its emitted light, which travels at the speed \"c\" from the laser to the various positions of the spot. Similarly, a shadow projected onto a distant object can be made to move faster than \"c\", after a delay in time. In neither case does any matter, energy, or information travel faster than light.\n",
"Section::::Physical origin.:Cherenkov emission angle.\n\nIn the figure on the geometry, the particle (red arrow) travels in a medium with speed formula_1 such that \n\nwhere formula_3 is speed of light in vacuum, and formula_4 is the refractive index of the medium. If the medium is water, the condition is formula_5, since formula_6 for water at 20 °C.\n\nWe define the ratio between the speed of the particle and the speed of light as \n\nThe emitted light waves (blue arrows) travel at speed \n",
"Also, slow light can be used in optical quantum memory.\n\nSection::::In fiction.\n\nThe description of \"luminite\" in Maurice Renard's novel, \"Le maître de la lumière\" (\"The Master of Light\", 1933), might be one of the earliest mentions of slow light.\n\nSubsequent fictional works that address slow light are noted below.\n\nBULLET::::- The slow light experiments are mentioned in Dave Eggers's novel \"You Shall Know Our Velocity\" (2002), in which the speed of light is described as a \"Sunday crawl\".\n",
"The model identifies a difference between the information carried by the wave at its signal velocity \"c\", and the information about the wave front's apparent rate of change of position. If a light pulse is envisaged in a wave guide (glass tube) moving across an observer's field of view, the pulse can only move at \"c\" through the guide. If that pulse is also directed towards the observer, he will receive that wave information, at \"c\". If the wave guide is moved in the same direction as the pulse, the information on its position, passed to the observer as lateral emissions from the pulse, changes. He may see the rate of change of position as apparently representing motion faster than \"c\" when calculated, like the edge of a shadow across a curved surface. This is a different signal, containing different information, to the pulse and does not break the second postulate of special relativity. \"c\" is strictly maintained in all local fields.\n",
"The wavefront moves with speed formula_6, but at the same time the receiver moves away with speed formula_8 during a time formula_9, soformula_10where formula_11 is the speed of the receiver in terms of the speed of light, and where formula_12 is the period of light waves impinging on the receiver, \"as observed in the frame of the source.\" The corresponding frequency formula_13is:\n\nThus far, the equations have been identical to those of the classical Doppler effect with a stationary source and a moving receiver.\n",
"The Hartman effect is the tunneling effect through a barrier where the tunneling time tends to a constant for large barriers. This could, for instance, be the gap between two prisms. When the prisms are in contact, the light passes straight through, but when there is a gap, the light is refracted. There is a non-zero probability that the photon will tunnel across the gap rather than follow the refracted path. For large gaps between the prisms the tunnelling time approaches a constant and thus the photons appear to have crossed with a superluminal speed.\n",
"Once light passes through the lens, it is transmitted through a transparent liquid medium until it reaches the retina, containing the photoreceptors. Like other vertebrates, the photoreceptors are on the inside layer so light must pass through layers of other neurons before it reaches them. The retina contains rod cells and cone cells.\n\nSection::::The retina.\n",
"The effective velocity of light in various transparent substances containing ordinary matter, is less than in vacuum. For example, the speed of light in water is about 3/4 of that in vacuum.\n",
"As long wavelengths of light (i.e. red) do not reach the deep sea from the surface, many deep-sea organisms are insensitive to red wavelengths, and so to these creatures red-colored objects appear black. The red photophore of \"Malacosteus\" thus allows it to illuminate prey without being detected. These fishes exhibit a number of adaptations for feeding on large prey. The \"open\" structure of its jaws reduces water resistance, allowing them to be snapped shut more quickly, while large recurved teeth and powerful jaw closing muscles assure a secure hold on prey items. The connection between the head and the body is reduced, with unossified vertebrae, allowing the cranium to be tilted back and the jaws thrust forward for a wider gape. Finally, the gills are exposed to the outside, allowing the fish to continue respiring while slowly swallowing large prey.\n",
"Section::::Superluminal travel of non-information.:Closing speeds.\n\nThe rate at which two objects in motion in a single frame of reference get closer together is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame.\n",
"In classical physics, light is described as a type of electromagnetic wave. The classical behaviour of the electromagnetic field is described by Maxwell's equations, which predict that the speed \"c\" with which electromagnetic waves (such as light) propagate through the vacuum is related to the distributed capacitance and inductance of the vacuum, otherwise respectively known as the electric constant \"ε\" and the magnetic constant \"μ\", by the equation\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-01138 | Why is buttermilk used in so many cakes and breads, and what separates it from normal milk? | Buttermilk is slightly acidic. It reacts with the baking soda to produce carbon dioxide, which helps the batter/dough rise. Think of a vinegar/baking-soda volcano. The acidity also helps break down the proteins (gluten), so the result is less chewy. | [
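For readers who want the chemistry spelled out, the reaction the answer alludes to is an ordinary acid–base neutralization; taking lactic acid (the main acid in cultured buttermilk) as the representative acid, it is roughly:

    CH3CH(OH)COOH + NaHCO3  ->  CH3CH(OH)COONa + H2O + CO2
    (lactic acid + baking soda -> sodium lactate + water + carbon dioxide gas, which leavens the batter)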
"Cultured buttermilk was first commercially introduced in the United States in the 1920s. Commercially available cultured buttermilk is milk that has been pasteurized and homogenized, and then inoculated with a culture of \"Lactococcus lactis\" or \"Lactobacillus bulgaricus\" plus \"Leuconostoc citrovorum\" to simulate the naturally occurring bacteria in the old-fashioned product. The tartness of cultured buttermilk is primarily due to lactic acid produced by lactic acid bacteria while fermenting lactose, the primary sugar in milk. As the bacteria produce lactic acid, the pH of the milk decreases and casein, the primary milk protein, precipitates, causing the curdling or clabbering of milk. This process makes buttermilk thicker than plain milk. While both traditional and cultured buttermilk contain lactic acid, traditional buttermilk tends to be less viscous, whereas cultured buttermilk is more viscous. \n",
"Originally, buttermilk referred to the liquid left over from churning butter from cultured or fermented cream. Traditionally, before the advent of homogenization, the milk was left to sit for a period of time to allow the cream and milk to separate. During this time, naturally occurring lactic acid-producing bacteria in the milk fermented it. This facilitates the butter churning process, since fat from cream with a lower pH coalesces more readily than that of fresh cream. The acidic environment also helps prevent potentially harmful microorganisms from growing, increasing shelf-life.\n",
"When introduced, cultured buttermilk was popular among immigrants, and viewed as a food that could slow aging. It reached peak annual sales of in 1960. Buttermilk's popularity has declined since then, despite an increasing population, and annual sales in 2012 reached less than half that number. \n\nHowever, condensed buttermilk and dried buttermilk remain important in the food industry. Liquid buttermilk is used primarily in the commercial preparation of baked goods and cheese. Buttermilk solids are used in ice cream manufacturing, as well as being added to pancake mixes to make buttermilk pancakes.\n\nSection::::Acidified buttermilk.\n",
"Buttermilk\n\nButtermilk is a fermented dairy drink. Traditionally, it was the liquid left behind after churning butter out of cultured cream. However, most modern buttermilk is cultured. It is common in warm climates (including the Balkans, South Asia, the Middle East and the Southern United States) where unrefrigerated fresh milk sours quickly.\n",
"In the Levant this form is called \"boksum\" (Arabic: ) in Iraq and Syria or \"qurshalla\" (Arabic:قرشلة) in Jordan. It is made from flour, eggs, oil or butter, sugar, yeast or baking powder, and sometimes a small amount of cardamon. It is topped with roasted sesame seeds, black caraway seeds, or anise, and eaten as a dunking biscuit, especially with herbal tea.\n\nSection::::International variations.:Netherlands and Belgium (Flanders).\n",
"Traditional buttermilk is still common in many Indian, Nepalese, and Pakistani households, but rarely found in Western countries. In Nepal, buttermilk is called \"mohi\" and is a common drink in many Nepalese homes. It is served to family members and guests, and can be taken with meals or snacks. In many families, it is most popularly served with roasted maize.\n\nSection::::Cultured buttermilk.\n",
"\"Acidified buttermilk\" is a substitute made by adding a food-grade acid such as vinegar or lemon juice to milk. It can be produced by mixing 1 tablespoon () of acid with 1 cup () of milk and letting it sit until it curdles, about 10 minutes. Any level of fat content for the milk ingredient may be used, but whole milk is usually used for baking. In the process which is used to produce paneer, such acidification is done in the presence of heat.\n\nSection::::Nutrition.\n",
"In many African and Asian countries, butter is traditionally made from fermented milk rather than cream. It can take several hours of churning to produce workable butter grains from fermented milk.\n",
"The recipe of the traditional, homemade variant became standardized at the beginning of the 20th century. The ingredients are firmly specified and it is usually baked above cinders. The essential ingredients are exclusively: sugar, wheat flour, butter, milk, eggs, yeast and salt. Additional toppings are restricted to ground or chopped walnut, almond, cinnamon powder or vanilla sugar made from natural vanilla powder.\n",
"Buttermilk can be drunk straight, and it can also be used in cooking. In making Soda bread, the acid in buttermilk reacts with the raising agent, sodium bicarbonate, to produce carbon dioxide which acts as the leavening agent. Buttermilk is also used in marination, especially of chicken and pork, which the lactic acid helps to tenderize, retain moisture and allows added flavors to permeate the meat.\n\nSection::::Traditional buttermilk.\n",
"Melktert , Afrikaans for \"milk tart\", is a South African dessert consisting of a sweet pastry crust containing a creamy filling made from milk, flour, sugar and eggs. The ratio of milk to egg is higher than in a traditional Portuguese custard tart (Pastel de nata) or Chinese egg tart (\"dan ta\"), in which both was influenced by the Portuguese, resulting in a lighter texture and a stronger milk flavour. Some recipes require the custard to be baked in the crust, and others call for the custard to be prepared in advance, and then placed in the crust before serving. Cinnamon is often sprinkled over its surface. The milk used for the custard can also be infused with a cinnamon stick before preparation.\n",
"Today, baked milk is produced on an industrial scale. Like scalded milk, it is free of bacteria and enzymes and can be stored safely at room temperature for up to forty hours. Home-made baked milk is used for preparing a range of cakes, pies, and cookies.\n\nSection::::Fermented baked milk.\n\nRyazhenka and varenets are fermented baked milk products, a sort of traditional yoghurt. It is a common breakfast drink in Ukraine, Belarus, and Russia. \n",
"The invention of baking powder and other chemical leavening agents during the 19th century substantially increased the flexibility of this traditional pound cake by introducing the possibility of creating lighter, fluffier cakes using these traditional combinations of ingredients, and it is this transformation that brought about the modern butter cake.\n\nSection::::Ingredients and technique.\n",
"Butterscotch\n\nButterscotch is a type of confectionery whose primary ingredients are brown sugar and butter, but other ingredients are part of some recipes, such as corn syrup, cream, vanilla, and salt. The earliest known recipes, in mid-19th century Yorkshire, used treacle (molasses) in place of or in addition to sugar.\n",
"In New Orleans, sweetened condensed milk is commonly used as a topping on chocolate or similarly cream-flavored snowballs. In Scotland, it is mixed with sugar and butter then boiled to form a popular sweet candy called tablet or Swiss milk tablet, this recipe being very similar to another version of the Brazilian candy brigadeiro called \"branquinho\". In some parts of the Southern United States, condensed milk is a key ingredient in lemon ice box pie, a sort of cream pie. In the Philippines, condensed milk is mixed with some evaporated milk and eggs, spooned into shallow metal containers over liquid caramelized sugar, and then steamed to make a stiffer and more filling version of \"crème\" caramel known as \"leche flan\", also common in Brazil under the name \"pudim de leite\".\n",
"When potato is used as a major portion of the batter, the result is a \"potato pancake\". Commercially prepared pancake mixes are available in some countries. When buttermilk is used in place of or in addition to milk, the pancake develops a tart flavor and becomes known as a buttermilk pancake, which is common in Scotland and the US. Buckwheat flour can be used in a pancake batter, making for a type of buckwheat pancake, a category that includes Blini, Kaletez, Ploye, and Memil-buchimgae.\n",
"A variant of filmjölk called \"tätmjölk\", \"filtäte\", \"täte\" or \"långmjölk\" is made by rubbing the inside of a container with leaves of certain plants: sundew (\"Drosera\", ) or butterwort (\"Pinguicula\", ). Lukewarm milk is added to the container and left to ferment for one to two days. More \"tätmjölk\" can then be made by adding completed \"tätmjölk\" to milk. In \"Flora Lapponica\" (1737), Carl von Linné described a recipe for \"tätmjölk\" and wrote that any species of butterwort could be used to make \"tätmjölk\".\n",
"BULLET::::- residents being unable to afford \"sweet\" milk, or fresh milk, and instead drinking sour, older milk, which was cultured to add longevity and shelf life to the product in the era prior to modern refrigeration\n\nSection::::History.\n",
"Baked milk\n\nBaked milk (, , ) is a variety of boiled milk that has been particularly popular in Russia, Ukraine and Belarus. It is made by simmering milk on low heat for eight hours or longer.\n",
"Buttermilk koldskål\n\nButtermilk koldskål (, often simply koldskål – literally \"cold bowl\") is a sweet cold dairy beverage or dessert eaten in Denmark.\n\nKoldskål is made with buttermilk and other varying ingredients: eggs, sugar, cream and/or other dairy products, vanilla, and sometimes lemon. The dish arose when buttermilk became commonly available in Denmark in the early 1900s and was eaten chilled most days during the summer as a dessert or snack. Since 1979, there have been ready-made varieties on the Danish market, originally from Esbjerg Dairy, but now from a range of dairies, including Arla.\n",
"The cream cheese variant of the gooey butter cake recipe, while close enough to the original, is an approximation designed for easier preparation at home. Almost all bakeries in the greater St. Louis area, including those at local grocery chains Schnucks and Dierbergs, use a slightly different recipe based on corn syrup, sugar and powdered eggs; however, no cake mix or cream cheese is involved.\n\nSection::::Origin and popularity.\n",
"This variety of halva is usually made with wheat semolina, sugar or honey, and butter or vegetable oil. Raisins, dates, other dried fruits, or nuts such as almonds or walnuts are often added to semolina halva. The halva is very sweet, with a gelatinous texture similar to polenta; the added butter gives it a rich mouthfeel.\n",
"Panera Bread Company (original name: St. Louis Bread Company) makes a Danish with a gooey butter filling for the St. Louis market. More recently, Walgreens sells wrapped, individual slices of a version of St. Louis gooey butter cake as a snack alongside muffins, brownies, and cookies.\n\nGooey butter cake is now widely available outside of the St. Louis area, as Walmart has been marketing a version called Paula Deen Baked Goods Original Gooey Butter Cake. While Walmart still sells a gooey butter cake, they dropped the Paula Deen version. \n",
"Powdered milk is frequently used in the manufacture of infant formula, confectionery such as chocolate and caramel candy, and in recipes for baked goods where adding liquid milk would render the product too thin. Powdered milk is also widely used in various sweets such as the famous Indian milk balls known as gulab jamun and a popular Indian sweet delicacy (sprinkled with desiccated coconut) known as chum chum (made with skim milk powder). Many no-cook recipes that use nut butters use powdered milk to prevent the nut butter from turning liquid by absorbing the oil. \n",
"Different varieties are found around the world. \"Smen\" is a spiced Moroccan clarified butter, buried in the ground and aged for months or years. A similar product is \"maltash\" of the Hunza Valley, where cow and yak butter can be buried for decades, and is used at events such as weddings. Yak butter is a specialty in Tibet; \"tsampa\", barley flour mixed with yak butter, is a staple food. Butter tea is consumed in the Himalayan regions of Tibet, Bhutan, Nepal and India. It consists of tea served with intensely flavored—or \"rancid\"—yak butter and salt. In African and Asian developing nations, butter is traditionally made from sour milk rather than cream. It can take several hours of churning to produce workable butter grains from fermented milk.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-13140 | Why don't computer-generated faces actually look like real people? | Humans are **really** good at checking out other humans and “judging” them, in some sense. Many emotional cues are subtle, but it’s important for the brain to be able to understand them. Facial expressions are a big part of human communication/interaction. That’s just one reason we’re really sensitive to how faces look. Determining if something is “wrong” with a human historically kept us from catching diseases from sick people... and many widely accepted standards of beauty (symmetry in particular) are associated with health— which is why people have evolved to consider those when deciding whether to mate/raise children with somebody else. Basically, there’s little room for error before people start to notice something is wrong/weird about a face. As another comment mentions, many types of 3D animation keep faces intentionally “cartoonish”, which stops people from judging them like humans and getting creeped out (because they’re just a little different from humans). This is NOT universal but is true in some cases. | [
"Standard Poser characters have been extensively used by European and US based documentary production teams to graphically render the human body or virtual actors in digital scenes. Humanoids printed in several science and technology magazines around the US are often Poser rendered and postworked models.\n\nSection::::Library.\n",
"BULLET::::- In 2018 GDC Epic Games and Tencent Games demonstrated \"Siren\", a digital look-alike of the actress Bingjie Jiang. It was made possible with the following technologies: CubicMotion's computer vision system, 3Lateral's facial rigging system and Vicon's motion capture system. The demonstration ran in near real time at 60 frames per second in the Unreal Engine 4.\n",
"BULLET::::- In 2003 audience debut of photo realistic human-likenesses in the 2003 films \"The Matrix Reloaded\" in the burly brawl sequence where up-to-100 Agent Smiths fight Neo and in \"The Matrix Revolutions\" where at the start of the end showdown Agent Smith's cheekbone gets punched in by Neo leaving the digital look-alike unnaturally unhurt. The Matrix Revolutions bonus DVD documents and depicts the process in some detail and the techniques used, including facial motion capture and limbal motion capture, and projection onto models.\n",
"Human image synthesis\n\nHuman image synthesis can be applied to make believable and even photorealistic renditions of human-likenesses, moving or still. This has effectively been the situation since the early 2000s. Many films using computer generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material.\n\nSection::::Timeline of human image synthesis.\n",
"Computer based facial expression modelling and animation is not a new endeavour. The earliest work with computer based facial representation was done in the early-1970s. The first three-dimensional facial animation was created by Parke in 1972. In 1973, Gillenson developed an interactive system to assemble and edit line drawn facial images. in 1974, Parke developed a parameterized three-dimensional facial model.\n",
"BULLET::::- Human image synthesis since the early 2000s has improved beyond the point of human's inability to tell a real human imaged with a real camera from a simulation of a human imaged with a simulation of a camera.\n\nBULLET::::- 2D video forgery techniques were presented in 2016 that allow near real-time counterfeiting of facial expressions in existing 2D video.\n",
"Facial motion capture\n\nFacial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce CG (computer graphics) computer animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, it results in more realistic and nuanced computer character animation than if the animation were created manually.\n",
"BULLET::::- For believable results also the reflectance field must b.e captured or an approximation must be picked from the libraries to form a 7D reflectance model of the target.\n\nSection::::Synthesis.\n\nThe whole process of making digital look-alikes i.e. characters so lifelike and realistic that they can be passed off as pictures of humans is a very complex task as it requires photorealistically modeling, animating, cross-mapping, and rendering the soft body dynamics of the human appearance.\n",
"Section::::Techniques.\n\nSection::::Techniques.:Generating facial animation data.\n\nThe generation of facial animation data can be approached in different ways: 1.) marker-based motion capture on points or marks on the face of a performer, 2.) markerless motion capture techniques using different type of cameras, 3.) audio-driven techniques, and 4.) keyframe animation.\n",
"Section::::Face animation languages.\n\nMany face animation languages are used to describe the content of facial animation. They can be input to a compatible \"player\" software which then creates the requested actions. Face animation languages are closely related to other multimedia presentation languages such as SMIL and VRML. Due to the popularity and effectiveness of XML as a data representation mechanism, most face animation languages are XML-based. For instance, this is a sample from Virtual Human Markup Language (VHML):\n",
"Early computer-generated animated faces include the 1985 film \"Tony de Peltrie\" and the music video for Mick Jagger's song \"Hard Woman\" (from \"She's the Boss\"). The first actual human beings to be digitally duplicated were Marilyn Monroe and Humphrey Bogart in a March 1987 film \"Rendez-vous in Montreal\" created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Institute of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal, Quebec, Canada. The characters were rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands.\n",
"Synthesis with an actor and suitable algorithms is applied using powerful computers. The actor's part in the synthesis is to take care of mimicking human expressions in still picture synthesizing and also human movement in motion picture synthesizing. Algorithms are needed to simulate laws of physics and physiology and to map the models and their appearance, movements and interaction accordingly.\n\nOften both physics/physiology based (i.e. skeletal animation) and image-based modeling and rendering are employed in the synthesis part. Hybrid models employing both approaches have shown best results in realism and ease-of-use.\n",
"BULLET::::- Late 2017 and early 2018 saw the surfacing of the deepfakes controversy where porn videos were doctored utilizing deep machine learning so that the face of the actress was replaced by the software's opinion of what another persons face would look like in the same pose and lighting.\n",
"BULLET::::- In 2010 Walt Disney Pictures released a sci-fi sequel entitled \"\" with a digitally rejuvenated digital look-alike of actor Jeff Bridges playing the antagonist CLU.\n\nBULLET::::- In SIGGGRAPH 2013 Activision and USC presented a real time \"Digital Ira\" a digital face look-alike of Ari Shapiro, an ICT USC research scientist, utilizing the USC light stage X by Ghosh et al. for both reflectance field and motion capture. The end result both precomputed and real-time rendering with the modernest game GPU shown here and looks fairly realistic.\n",
"BULLET::::- Morph targets (also called \"blendshapes\") based systems offer a fast playback as well as a high degree of fidelity of expressions. The technique involves modeling portions of the face mesh to approximate expressions and visemes and then blending the different sub meshes, known as morph targets or blendshapes. Perhaps the most accomplished character using this technique was Gollum, from \"The Lord of the Rings\". Drawbacks of this technique are that they involve intensive manual labor and are specific to each character. Recently, new concepts in 3D modeling have started to emerge. Recently, a new technology departing from the traditional techniques starts to emerge, such as \"Curve Controlled Modeling\" that emphasizes the modeling of the movement of a 3D object instead of the traditional modeling of the static shape.\n",
"The common algorithms usually perform two steps: the first step generates global face image which keeps the characteristics of the face using probabilistic method maximum a posteriori (MAP). The second step produces residual image to compensate the result of the first step. Furthermore, all the algorithms are based on a set of high- and low-resolution training image pairs, which incorporates image super-resolution techniques into facial image synthesis.\n\nAny face hallucination algorithm must be based in three constraints:\n\nData constraint\n\nThe output image should be nearly to the original image when it is smoothed or down-sampled.\n\nGlobal constraint\n",
"Methods for simulating deformation, such as changes of shapes, of dynamic bodies involve intensive calculations, and several models have been developed. Some of these are known as \"free-form deformation\", \"skeleton-driven deformation\", \"dynamic deformation\" and \"anatomical modelling\". Skeletal animation is well known in computer animation and 3D character simulation. Because of the calculation insensitivity of the simulation, few interactive systems are available which realistically can simulate dynamic bodies in real-time. Being able to \"interact\" with such a realistic 3D model would mean that calculations would have to be performed within the constraints of a frame rate which would be acceptable via a user interface.\n",
"BULLET::::- Physiological models, such as skeletal muscle systems and physically based head models, form another approach in modeling the head and face. Here, the physical and anatomical characteristics of bones, tissues, and skin are simulated to provide a realistic appearance (e.g. spring-like elasticity). Such methods can be very powerful for creating realism but the complexity of facial structures make them computationally expensive, and difficult to create. Considering the effectiveness of parameterized models for communicative purposes (as explained in the next section), it may be argued that physically based models are not a very efficient choice in many applications. This does not deny the advantages of physically based models and the fact that they can even be used within the context of parameterized models to provide local details when needed.\n",
"At the beginning, the computer needs to know the shapes of the characters, even the detail of their hands or their thumbs. For example, a sculptor sculpted Marilyn's and Humphrey's hands by covering real human hands with plaster, a grid was drawn, photos from various angles were taken, and the information was digitized in 2D and the computer reconstituted the 3D information. For the heads and torsos, a sculptor created 3D plaster models and the process of digitizing is the same.\n",
"Since computer generated characters don't actually have muscles, different techniques are used to achieve the same results. Some animators create bones or objects that are controlled by the capture software, and move them accordingly, which when the character is rigged correctly gives a good approximation. Since faces are very elastic this technique is often mixed with others, adjusting the weights differently for the skin elasticity and other factors depending on the desired expressions.\n\nSection::::Facial expression capture.:Usage.\n\nSeveral commercial companies are developing products that have been used, but are rather expensive.\n",
"BULLET::::- In 2003 \"The Animatrix: Final Flight of the Osiris\" a state-of-the-art want-to-be human likenesses not quite fooling the watcher made by Square Pictures.\n\nBULLET::::- In 2003 digital likeness of Tobey Maguire was made for movies \"Spider-man 2\" and \"Spider-man 3\" by Sony Pictures Imageworks.\n",
"In the last two decades, a number of computer based facial composite systems have been introduced; amongst the most widely used systems are SketchCop FACETTE Face Design System Software, \"Identi-Kit 2000\", FACES, E-FIT and PortraitPad. In the U.S. the FBI maintains that hand-drawing is its preferred method for constructing a facial composite. Many other police agencies, however, use software, since suitable artistic talent is often not available.\n\nSection::::Methods.:Evolutionary systems.\n",
"BULLET::::- \"Computer Facial Animation\" by Frederic I. Parke, Keith Waters 2008\n\nBULLET::::- \"Data-driven 3D facial animation\" by Zhigang Deng, Ulrich Neumann 2007\n\nBULLET::::- \"Handbook of Virtual Humans\" by Nadia Magnenat-Thalmann and Daniel Thalmann, 2004\n\nSection::::External links.\n\nBULLET::::- Face/Off: Live Facial Puppetry - Realtime markerless facial animation technology developed at ETH Zurich\n\nBULLET::::- The \"Artificial Actors\" Project - Institute of Animation\n\nBULLET::::- iFACE\n\nBULLET::::- Animated Baldi\n\nBULLET::::- download of Carl-Herman Hjortsjö, Man's face and mimic language\" (the original Swedish title of the book is: \"Människans ansikte och mimiska språket\". The correct translation would be: \"Man's face and facial language\")\n",
"Section::::Techniques.:Applying facial animation to a character.\n\nThe main techniques used to apply facial animation to a character are: 1.) morph targets animation, 2.) bone driven animation, 3.) texture-based animation (2D or 3D), and 4.) physiological models.\n",
"Over 90% of the character is defined with only three sliders that control age (from 18 to 80 y.o.), body mass and body tone. The character is finished with other lab tools for body and face details, poses, skin and eye shaders, animation, poses, proxy, etc.\n\nSection::::Technology.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-00554 | How does a Reddit moderator moderate? | They basically use Reddit the same way as anyone else, but they have special permissions in the subreddit they moderate. So if they want to enforce subreddit rules on users, they can do so at will. | [
"Various types of Internet sites permit user comments, such as: Internet forums, blogs, and news sites powered by scripts such as phpBB, a Wiki, or PHP-Nuke. Depending on the site's content and intended audience, the webmaster will decide what kinds of user comments are appropriate, then delegate the responsibility of sifting through comments to lesser moderators. Most often, webmasters will attempt to eliminate trolling, spamming, or flaming, although this varies widely from site to site.\n",
"Also known as unilateral moderation, this kind of moderation system is often seen on Internet forums. A group of people are chosen by the webmaster (usually on a long-term basis) to act as delegates, enforcing the community rules on the webmaster's behalf. These moderators are given special privileges to delete or edit others' contributions and/or exclude people based on their e-mail address or IP address, and generally attempt to remove negative contributions throughout the community.\n\nSection::::Supervisor moderation.:Commercial content moderation (CCM).\n",
"In addition to the three manners proposed by Muller and colleagues in which moderated mediation can occur, Preacher, Rucker, and Hayes (2007) proposed that the independent variable A itself can moderate the effect of the mediator B on the outcome variable C. They also proposed that a moderator variable D could moderate the effect of A on B, while a different moderator E moderates the effect of B on C.\n\nSection::::Differences between moderated mediation and mediated moderation.\n",
"Discussion moderator\n\nA discussion moderator or debate moderator is a person whose role is to act as a neutral participant in a debate or discussion, holds participants to time limits and tries to keep them from straying off the topic of the questions being raised in the debate. Sometimes moderators may ask questions intended to allow the debate participants to fully develop their argument in order to ensure the debate moves at pace.\n",
"In panel discussions commonly held at academic conferences, the moderator usually introduces the participants and solicits questions from the audience. On television and radio shows, a moderator will often take calls from people having differing views, and will use those calls as a starting point to ask questions of guests on the show. Perhaps the most prominent role of moderators is in political debates, which have become a common feature of election campaigns. The moderator may have complete control over which questions to ask, or may act as a filter by selecting questions from the audience.\n\nSection::::History.\n",
"Moderated mediation relies on the same underlying models (specified above) as mediated moderation. The main difference between the two processes is whether there is overall moderation of the treatment effect of A on the outcome variable C. If there is, then there is mediated moderation. If there is no overall moderation of A on C, then there is moderated mediation.\n\nSection::::Testing for moderated mediation.\n",
"The \"moderators\" (short singular form: \"mod\") are users (or employees) of the forum who are granted access to the posts and threads of all members for the purpose of \"moderating discussion\" (similar to arbitration) and also keeping the forum clean (neutralizing spam and spambots etc.). Moderators also answer users' concerns about the forum, general questions, as well as respond to specific complaints. Common privileges of moderators include: deleting, merging, moving, and splitting of posts and threads, locking, renaming, stickying of threads, banning, unbanning, suspending, unsuspending, warning the members, or adding, editing, and removing the polls of threads. \"Junior Modding\", \"Backseat Modding\", or \"Forum copping\" can refer negatively to the behavior of ordinary users who take a moderator-like tone in criticizing other members.\n",
"Within a conference, discussions are managed as a collection of items. One person starts an item by entering the item text and giving it a title, called a header. Other people can then make their own responses to the initial entry. The item text and each response is signed with the name of its author. The item's author can specify the type of responses desired. The most common type of response is a discussion response where the response is simply text. Participants can make as many discussion responses to an item as they like. Another type of response is a vote where a numeric response followed by an optional text comment is given with only one response per item per participant. And still another response type is a dynamic value vote.\n",
"\"Staff Moderators\" are a special staff position created to ensure a better staff involvement with in-world issues. They oversee and assist the duties of both the Community Moderators and Supervisors, therefore assisting with the oversight of this branch of volunteers. They also provide a conduit between volunteers/members and higher staff as well as handle any in-world issues that need attention.\n\nSection::::The Concept.\n",
"Bootstrapping has also been suggested as a method of estimating the sampling distributions of a moderated mediation model in order to generate confidence intervals. This method has the advantage of not requiring that any assumptions be made about the shape of the sampling distribution.\n\nPreacher, Rucker and Hayes also discuss an extension of simple slopes analysis for moderated mediation. Under this approach, one must choose a limited number of key conditional values of the moderator that will be examined. As well, one can use the Johnson–Neyman technique to determine the range of significant conditional indirect effects.\n",
"To determine if the group is a good fit and to learn more about the norms, lurkers will read most if not all of the posts. By reading the posts, lurkers develop a better understanding about the topics being discussed and if this is a good fit for them. Lurkers will also examine email addresses and signatures with associated websites so get a better understanding of the other members of the group.\n",
"Architectures can also be oriented to give editorial control to a group or individual. Many email lists are worked in this fashion (e.g., Freecycle). In these situations, the architecture usually allows, but does not require that contributions be moderated. Further, moderation may take two different forms: reactive or proactive. In the reactive mode, an editor removes posts, reviews, or content that is deemed offensive after it has been placed on the site or list. In the proactive mode, an editor must review all contributions before they are made public.\n",
"Enrichment scenes \"belong\" to individual characters and players, though these can invite other players to participate in the scene, either as their character playing different non player characters. The hosting player has final say on what goes on in the scene, except that each scene will feature a conflict, with the stakes of each side defined by the hosting player and by their opposition, respectively. A player's opposition is the gamemaster, the gamemaster's is the players. Conflict scenes, on the other hand, is not the province of any one participant. One character \"picks a fight\" with another character, after which each player not already engaged in a conflict has a chance of picking a fight. A fight is between two characters only. If a character is engaged in combat with more than one opponent, he will have one separate \"page of conflict\" for each player. For each page of conflict, each player defines a set of stakes. The battle ends when one character is unable to best his opponents assault.\n",
"Each member could get a chance to speak through assignment of the floor and debate. Debate may be limited in the number of speeches and time and should be respectful to others at all times. Voting takes place to decide the course of action and it could be done in a multitude of ways, such as voice vote, standing vote, and ballot vote.\n",
"The bishop is not required to appoint a moderator of the curia and may exercise the office himself or delegate its functions to others. Usually, the vicar general, or one of them, is appointed to this office.\n",
"Each episode varies in format, with some recurring segments, including \"Yes, Yes, No\", in which Vogt and Goldman explain internet trivia to Alex Blumberg, co-founder of Gimlet Media, with occasional help from outside guests. In a variation on this segment called \"Sports, Sports, Sports\", Blumberg instead explains sports-related tweets to Vogt and Goldman. The segment debuted in Episode #106, \"Is that You, KD?\". In another recurring segment, called \"Super Tech Support\", the Reply All team—particularly Goldman, who previously worked as a network administrator—take on odd or especially complex tech support issues that the listeners or friends of the hosts have encountered.\n",
"Moderator\n\nModerator may refer to:\n\nSection::::Government.\n\nBULLET::::- Moderator (town official), elected official who presides over the Town Meeting form of government\n\nSection::::Internet.\n\nBULLET::::- Internet forum moderator, a person given special authority to enforce the rules on a forum\n\nBULLET::::- Game moderator\n\nBULLET::::- Moderator of a Usenet newsgroup\n\nBULLET::::- Google Moderator, an application to assist chairmen of online meetings\n\nSection::::Religion.\n\nBULLET::::- Moderator of the General Assembly, in Presbyterian and Reformed churches\n\nBULLET::::- Moderator of the curia, an administrative position in the Catholic church\n\nSection::::Nuclear engineering.\n",
"Section::::Production.\n\nSection::::Production.:Direction and writing.\n",
"Prof. Sean Garrity (Kevin Corrigan) is the theatrical drama instructor at Greendale. He gets involved in a conspiracy intrigue with Jeff, Annie and Dean Pelton, when he mysteriously poses as Jeff's fake Conspiracy Theories night school class teacher, \"Professor Professorson\" in the episode \"Conspiracy Theories and Interior Design.\" He later teaches Troy and Britta in an elective acting class and directs Troy in an all-black cast stage production of \"Fiddler on the Roof\", entitled, \"Fiddla \"Please\"\". In \"Introduction to Teaching\", he teaches a two-day course called \"Nicolas Cage: Good or Bad?\" that Abed finds very difficult.\n\nSection::::Recurring characters.:Faculty.:Prof. June Bauer.\n",
"Because there was no established procedure to analyze models with moderated mediation, Langfred (2004) first describes the different types of moderated mediation models that might exist, noting that there are two primary forms of moderated mediation. Type 1, in which the moderator operates on the relationship between the independent variable and the mediator, and Type 2, in which the moderator operates on the relationship between the mediator and the dependent variable. Langfred reviews the existing perspectives on moderated mediation (James and Brett, 1984), and notes that an accepted statistical approach already exists for Type 1 moderated mediation, as demonstrated by Korsgaard, Brodt, and Whitener (2002). Type 2 moderation, however, is more statistically difficult, so Langfred reviews three different possible approaches for the analysis, and ultimately recommends one of them as the correct technique.\n",
"Another mandatory process of the college is moderation. Moderation typically takes place in the fourth or fifth semester, as a way of choosing a major. Conditions vary from department to department and most require the completion of a certain set or a certain number of courses. To moderate, the student presents whatever work is required to a moderation board of three professors, and is subsequently interviewed, examined, and critiqued.\n",
"The party-directed mediation model was developed by Gregorio Billikopf of the University of California. One aspect of the mediation model focuses on listening, using the techniques of client centered therapy developed by Carl Rogers. The role of the mediator is primarily to be a good listener and coach, thereby allowing the parties involved to have free rein over the specific steps taken toward resolving a conflict or achieving a compromise.\n\nSection::::See also.\n\nBULLET::::- Alternative dispute resolution\n\nSection::::External links.\n",
"Forum rules are maintained and enforced by the moderation team, but users are allowed to help out via what is known as a report system. Most American forum software contains such a system. It consists of a small function applicable to each post (including one's own). Using it will notify all currently available moderators of its location, and subsequent action or judgment can be carried out immediately, which is particularly desirable in large or very developed boards. Generally, moderators encourage members to also use the \"private message\" system if they wish to report behavior. Moderators will generally frown upon attempts of moderation by non-moderators, especially when the would-be moderators do not even issue a report. Messages from non-moderators acting as moderators generally declare a post as against the rules or predict punishment. While not harmful, statements that attempt to enforce the rules are discouraged.\n",
"In order to test for moderated mediation, some recommend examining a series of models, sometimes called a piecemeal approach, and looking at the overall pattern of results. This approach is similar to the Baron and Kenny method for testing mediation by analyzing a series of three regressions. These researchers claim that a single overall test would be insufficient to analyze the complex processes at play in moderated mediation, and would not allow one to differentiate between moderated mediation and mediated moderation.\n",
"Essentially, it is the duty of the moderator to manage the day-to-day affairs of a forum or board as it applies to the stream of user contributions and interactions. The relative effectiveness of this user management directly impacts the quality of a forum in general, its appeal, and its usefulness as a community of interrelated users.\n\nSection::::Structure.:User groups.:Administrator.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-20394 | Why is it that our vision isn’t affected that much when our eyes are wide open versus squinting? | I could be wrong, but I'd say it's because you're not exposing any more of your pupil or iris by widening your eyes, whereas when you squint you obscure your pupil and iris. | [
"Squinting is most often practiced by people who suffer from refractive errors of the eye who either do not have or are not using their glasses. Squinting helps momentarily improve their eyesight by slightly changing the shape of the eye to make it more round, which helps light properly reach the fovea. Squinting also decreases the amount of light entering the eye, making it easier to focus on what the observer is looking at by removing rays of light which enter the eye at an angle and would need to otherwise be focused by the observer's faulty lens and cornea.\n",
"Pinhole glasses, which severely restrict the amount of light entering the cornea, have the same effect as squinting.\n\nIt is a common belief that squinting worsens eyesight. However, according to Robert MacLaren, a professor of ophthalmology at the University of Oxford, this is nothing more than an old wives' tale: the only damage that can be caused by squinting for long periods is a temporary headache due to prolonged contraction of the facial muscles.\n",
"Squinting is also a common involuntary reflex, especially among people with light colored eyes, during adaptation to a sudden change in lighting such as when one goes from a dark room to outdoors on a sunny day to avoid pain or discomfort of the eyes. The pupillary light reflex caused by adjustment to light takes around five minutes in people with healthy eyes, so squinting and pain after that could be a sign of photophobia.\n",
"Squint\n\nSquinting is the action of looking at something with partially closed eyes.\n",
"Squint (disambiguation)\n\nA squint is the action of tightening of the muscles around the eye. \n\nSquint may also refer to:\n\nBULLET::::- Squint, a term for strabismus (crossed eyes)\n\nBULLET::::- \"Squint\" (album), a 1993 album by Steve Taylor\n\nBULLET::::- Squint (antenna), an angle of transmission offset\n\nBULLET::::- Squint (opening) (hagioscope), an opening through the wall of a church in an oblique direction\n\nBULLET::::- Squint Entertainment, a record label\n\nBULLET::::- Squint Lake, a lake in Burnaby, British Columbia, Canada\n\nBULLET::::- Squint Phares (1915–1974), American basketball player\n\nBULLET::::- Squint Hunter, coach for the Saint Louis Billikens men's basketball team, 1926–1927\n",
"Section::::Background.\n",
"When images are acquired from locations with large differences in openness (for example, closed canopy locations and canopy gaps) it is essential to control camera exposure. If the camera is allowed to automatically adjust exposure (which is controlled by aperture and shutter speed), the result is that small openings in closed conditions will be bright, whereas openings of the same size in open conditions will be darker (for example, canopy areas around a gap). This means that during image analysis the same-sized holes will be interpreted as \"sky\" in a closed-canopy image and \"canopy\" in the open-canopy image. Without controlling exposure, the real differences between closed- and open-canopy conditions will be underestimated.\n",
"Section::::Release and promotion.\n",
"Section::::Critical reception.\n",
"Section::::In vision.\n\nIn normal vision, diffraction through eyelashes – and due to the edges of the eyelids if one is squinting – produce many diffractions spikes. If it is windy, then the motion of the eyelashes cause spikes that move around and scintillate. After a blink, the eyelashes may come back in a different position and cause the diffraction spikes to jump around. This is classified as an Entoptic phenomenon.\n\nSection::::Other uses of diffraction spikes.\n",
"Squint (antenna)\n\nIn a phased array or slotted waveguide antenna, squint refers to the angle that the transmission is offset from the normal of the plane of the antenna. In simple terms, it is the change in the beam direction as a function of operating frequency, polarization, or orientation. It is an important phenomenon that can limit the bandwidth in phased array antenna systems.\n\nThis deflection can be caused by:\n\nBULLET::::- Signal Frequency\n",
"The spread of the diffraction-limited PSF is approximated by the diameter of the first null of the Airy disk,\n",
"Section::::Neural mechanisms.\n",
"Presbyopia, like other focal imperfections, becomes less noticeable in bright sunlight when the pupil becomes smaller. As with any lens, increasing the focal ratio of the lens increases depth of field by reducing the level of blur of out-of-focus objects (compare the effect of aperture on depth of field in photography). Constricting the aperture may be achieved by forming a tiny hole with one's index finger and peering through it.\n",
"There is some confusion over how the focusing mechanism of the eye works. In the 1977 book, \"Eye and Brain\", for example, the lens is said to be suspended by a membrane, the 'zonula', which holds it under tension. The tension is released, by contraction of the ciliary muscle, to allow the lens to become more round, for close vision. This implies the ciliary muscle, which is outside the zonula, must be circumferential, contracting like a sphincter, to slacken the tension of the zonula pulling outwards on the lens. This is consistent with the fact that our eyes seem to be in the 'relaxed' state when focusing at infinity, and also explains why no amount of effort seems to enable a myopic person to see farther away.\n",
"Section::::Examples.:The human eye.\n\nThe fastest f-number for the human eye is about 2.1, corresponding to a diffraction-limited point spread function with approximately 1 μm diameter. However, at this f-number, spherical aberration limits visual acuity, while a 3 mm pupil diameter (f/5.7) approximates the resolution achieved by the human eye. The maximum density of cones in the human fovea is approximately 170,000 per square millimeter, which implies that the cone spacing in the human eye is about 2.5 μm, approximately the diameter of the point spread function at f/5.\n\nSection::::Examples.:Focused laser beam.\n",
"requires increasing the lens f-number to achieve the same DOF, and if the lens is stopped down\n\nsufficiently far, the reduction in defocus blur is offset by the increased\n\nblur from diffraction. See the Depth of field article for a more\n\ndetailed discussion.\n\nSection::::Circle of confusion diameter limit in photography.:Adjusting the circle of confusion diameter for a lens’s DoF scale.\n\nThe \"f\"-number determined from a lens DoF scale can be adjusted to reflect a CoC different from the one on which the DoF scale is based. It is shown in the Depth of field article that\n",
"BULLET::::1. in the case where the spread of the IRF is small with respect to the spread of the diffraction PSF, in which case the system may be said to be essentially diffraction limited (so long as the lens itself is diffraction limited).\n\nBULLET::::2. in the case where the spread of the diffraction PSF is small with respect to the IRF, in which case the system is instrument limited.\n\nBULLET::::3. in the case where the spread of the PSF and IRF are of the same order of magnitude, in which case both impact the available resolution of the system.\n",
"Section::::Near response.:Accommodation of the lens.\n\nChanging the curvature of the lens is carried out by the ciliary muscles surrounding the lens; this process is known as \"accommodation\". Accommodation narrows the inner diameter of the ciliary body, which actually relaxes the fibers of the suspensory ligament attached to the periphery of the lens, and also allows the lens to relax into a more convex, or globular, shape. A more convex lens refracts light more strongly and focuses divergent light rays from near objects onto the retina, allowing closer objects to be brought into better focus.\n\nSection::::Clinical significance.\n\nSection::::Clinical significance.:Eye care professionals.\n",
"In low light conditions the parallax suppression phenomenon is markedly better. The depth of field looking through the sight remains the same as in bright conditions. This is in contrast to open sights, where the eye's pupil will become wider in low light conditions, meaning a larger aperture and a blurrier target. The downside to this is that the image through an aperture sight is darker than with an open sight.\n",
"Because extension tubes do not have optics, they don't affect the optical quality of a lens. Because of their function, there are other effects: decrease of light; shallower depth of field; and loss of ability to focus at infinity. The longer the extension tube, the closer the lens can focus. The amount of light and depth of field will be equally reduced. If you are using auto exposure this is all corrected for you by the camera, but if you are not it has to be calculated and taken into account when setting exposure.\n",
"The blur circle, of diameter \"C\", in the focused object plane at distance \"S\", is an unfocused virtual image of the object at distance \"S\" as shown in the diagram. It depends only on these distances and the aperture diameter \"A\", via similar triangles, independent of the lens focal length:\n\nThe circle of confusion in the image plane is obtained by multiplying by magnification \"m\":\n\nwhere the magnification \"m\" is given by the ratio of focus distances:\n\nUsing the lens equation we can solve for the auxiliary variable \"f\":\n\nwhich yields\n",
"and express the magnification in terms of focused distance and focal length:\n\nwhich gives the final result:\n\nThis can optionally be expressed in terms of the f-number \"N\" = \"f/A\" as:\n\nThis formula is exact for a simple paraxial thin lens or a symmetrical lens, in which the entrance pupil and exit pupil are both of diameter \"A\". More complex lens designs with a non-unity pupil magnification will need a more complex analysis, as addressed in depth of field.\n",
"The optimum eye relief distance also varies with application. For example, a rifle scope needs a very long eye relief to prevent recoil from causing it to strike the observer.\n",
"the two white pixels closer together than a single photoreceptor.\n\nPupil Size Inversion\n\nWhen pupils are narrowed to around 1mm for reading fine print,\n\nthe size of the central \"Airy\" disk increases to a diameter of 10 photoreceptors.\n\nThe so-called \"blur\" is increased for reading.\n\nWhen pupils are widened for fight/flight response,\n\nthe size of the central \"Airy\" disk decreases to a diameter of about 1.5 photoreceptors.\n\nThe so-called \"blur\" is decreased in anticipation of large movements.\n\nNo published neuroanatomical model predicts that discrimination\n\nimproves when pupils are narrowed.\n\nPupil Shape Inversion\n\nEyes have pupils (apertures) that cause diffraction.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-04401 | Why do sharper knives cut better? | Sharpness is essentially the thinness of the edge of the blade: the thinner you make the edge, the "sharper" it is. The thinner a blade is, the more force is applied per square inch of cutting area. That is to say, if you apply one pound of force on the blade and its cutting area is 1 square inch, you are applying 1 psi of pressure. If the cutting area is only 0.1 square inches, you are applying 10 psi, and so on. The sharpest blades have cutting areas far smaller than anything measured in whole square inches, allowing for much higher pressures with less force applied. The object you are cutting has what is called a tensile strength, which is the stress (force per unit area) required to separate the elements of the object. The stronger the item, the more stress is required to cut it. A sharper knife allows you to use the limited force of your arm and hand to cut through tougher materials easily. | [
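The arithmetic in the answer is just pressure = force / area; a minimal Python restatement (the third area value is added only to show the trend):

    # Same force, smaller contact area -> higher pressure at the edge.
    force_lbf = 1.0                       # pounds of force applied to the blade

    for area_in2 in (1.0, 0.1, 0.001):    # square inches of cutting-edge contact area
        pressure_psi = force_lbf / area_in2
        print(area_in2, pressure_psi)     # 1 psi, 10 psi, 1000 psi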
"\"Biting\" sharpness is considered ideal for kitchen knives, but sharper blades are desired for shaving and surgical scalpels, which must cut without side-to-side slicing of the blade, and duller but tougher blades are more suitable for chiseling and chopping wood.\n\nFor testing the sharpness of a straight razor, a traditional though dangerous test is to \"place\" a moistened thumb on the edge, and feel if it catches. If a thumb is actually \"drawn along\" or \"across\" a properly sharpened straight razor it will cut into the skin, drawing blood.\n\nSection::::Grinding.\n",
"The extent to which this honing takes place depends upon the intended use of the tool or implement. For some applications an edge with a certain amount of \"jaggedness\" is acceptable, or even desirable, as this creates a serrated cutting edge. In other applications the edge must be as smooth as possible.\n\nSection::::Implements with essentially straight edges.:Steeling.\n",
"There are many different kinds of \"honing oils\" to suit different needs. It is important to use the appropriate solution for the job. In the case of knife sharpening, motor oil is too thick or \"heavy\" and can over-lubricate or clog a sharpening stone, whereas WD-40 is too \"light\" an oil and will not carry the metal filings plus stone dust (collectively known as \"swarf\") away from the stone, and clog it. Not using any oil at all will also clog or \"glaze\" the stone, again reducing its cutting power. Historically sperm whale oil, Neatsfoot oil, and other animal fats were popular.\n",
"The word \"honing\" is ambiguous, and may refer to either fine sharpening (step 1.2) or straightening (step 2).\n\nThe finest level of sharpening is done most frequently, while the coarser levels are done progressively more rarely, and sharpening methods differ between blades and applications.\n\nFor example, a straight razor used for shaving is stropped before each use, and may be stropped part-way \"through\" use, while it will be fine sharpened on a stone a few times per year, and re-ground on a rough stone after several years.\n",
"Blades may also be damaged by being corroded by acid (as when cutting lemons or tomatoes) or by high temperatures and corrosive chemicals in a dishwasher.\n\nIf a knife is used as a scraper, a pry-bar, or encounters hard particles in softer materials, there may be a sideways load at the tip, causing bending damage.\n\nBlade damage is avoided by:\n",
"BULLET::::- using an appropriate blade for the task – a thinner blade for more delicate work, and a thicker blade whenever a thinner blade is not required (e.g. a thinner blade might be used to cut fillets, butterfly steak or roast for stuffing, or perform Mukimono, while a thicker one might be used to slice or chop repeatedly, separate primal cuts of poultry or small game, or scrape and trim fat from meat or hide, as these actions would be more likely to cause unnecessary wear on a thinner blade.)\n\nBULLET::::- using a soft cutting surface,\n",
"Section::::Angles.\n",
"Section::::Steeling.\n",
"Different knives are sharpened differently according to grind (edge geometry) and application. For example, surgical scalpels are extremely sharp but fragile, and are generally disposed of, rather than sharpened, after use. Straight razors used for shaving must cut with minimal pressure, and thus must be very sharp with a small angle and often a hollow grind. Typically these are stropped daily or more often. Kitchen knives are less sharp, and generally cut by slicing rather than just pressing, and are steeled daily. At the other extreme, an axe for chopping wood will be less sharp still, and is primarily used to split wood by chopping, not by slicing, and may be reground but will not be sharpened daily. In general, but not always, the harder the material to be cut, the higher (duller) the angle of the edge.\n",
"As well as coarse grinding, sharpeners also typically 'dress' the cutting edges with a sharpening stone or honing steel, secure or replace loose handles and generally offer advice and assistance regarding best practice. Some also sell knives and related products.\n\nSection::::See also.\n\nBULLET::::- Blade\n\nBULLET::::- Grinding machine\n\nBULLET::::- Knife sharpening\n\nBULLET::::- Razor strop\n\nBULLET::::- Saw set\n\nBULLET::::- Scary sharp\n\nBULLET::::- Sharpening jig\n\nBULLET::::- Sharpening stone\n\nSection::::External links.\n\nBULLET::::- A Guide to Honing and Sharpening\n\nBULLET::::- https://scienceofsharp.wordpress.com/ True effects of various blade sharpening techniques, mostly on straight razors, shown by electron microscope.\n",
"By contrast, a kitchen knife is steeled before or after each use (and may be steeled during heavy use, as by butchers), and sharpened on a stone a few times per year.\n\nSection::::Method.:Blade damage.\n\nBlades are damaged primarily by buckling – compressive force, from being pressed \"into\" a hard object, such as bone, ice, or a hard cutting board – and by bending, from sideways pressure. Both of these tend to \"roll\" the edge of a blade, due to metal's ductile nature.\n",
"Section::::Other types of implements.\n\nDifferent techniques are required where the edges are not straight. Special tools and skills are more often required, and sharpening is often best done by a specialist rather than the user of the tool.\n\nExamples include:\n\nBULLET::::- Drill bits - twist drills used for wood or steel are usually sharpened on a grinding wheel or within a purpose made grinding jig to an angle of 60° from vertical (120° total) although sharper angles may be used for hard or brittle materials such as glass.\n",
"Section::::Tools.\n\nTurning tools are generally made from three different types of steel; carbon steel, high speed steel (HSS), and more recently powdered metal. Comparing the three types, high speed steel tools maintain their edge longer, requiring less frequent sharpening than carbon steel, but not as long as powdered metal tools. The harder the type of high speed steel used, the longer the edge will maintain sharpness. Powdered steel is even harder than HSS, but takes more effort to obtain an edge as sharp as HSS, just as HSS is harder to get as sharp as carbon steel.\n",
"Section::::Stropping.\n\nStropping a knife is a finishing step. This is often done with a leather strap, either clean or impregnated with abrasive compounds (e.g. chromium(III) oxide or diamond), but can be done on paper, cardstock, cloth, or even bare skin in a pinch. It removes little or no metal material, but produces a very sharp edge by either straightening or very slightly reshaping the edge. Stropping may bring a somewhat sharp blade to \"like new\" condition.\n\nSection::::External links.\n\nBULLET::::- Scienceofsharp, True effects of various blade sharpening techniques, mostly on straight razors, shown by electron microscope.\n",
"The stress generated by a cutting implement is directly proportional to the force with which it is applied, and inversely proportional to the area of contact. Hence, the smaller the area (i.e., the sharper the cutting implement), the less force is needed to cut something. It is generally seen that cutting edges are thinner for cutting soft materials and thicker for harder materials. This progression is seen from kitchen knife, to cleaver, to axe, and is a balance between the easy cutting action of a thin blade vs strength and edge durability of a thicker blade.\n\nSection::::Metal cutting.\n",
"Knife sharpening proceeds in several stages, in order from coarsest (most destructive) to finest (most delicate). These may be referred to either by the \"effect\" or by the \"tool\". Naming by effect, the stages are:\n\nBULLET::::1. sharpening: removing metal to form a \"new\" edge\n\nBULLET::::1. rough sharpening (using either water stones, oil stones, or medium grits of sandpaper in the scary sharp method of sharpening)\n\nBULLET::::2. fine sharpening (using the same tools as above, but in finer grits)\n\nBULLET::::2. straightening: straightening the \"existing\" metal on the blade, but not removing significant quantities of metal\n",
"BULLET::::- straight cutting, with no side-to-side movement,\n\nBULLET::::- immediate cleaning.\n\nBULLET::::- oiling (with food grade oil if appropriate)\n\nSection::::Method.:Inspection.\n\nBlade sharpness can be checked in multiple ways.\n\nVisually, a very sharp knife has an edge that is too small to see with the eye; it may even be hard or impossible to focus in a microscope. The shape near the edge can be highlighted by rotating the knife and watching changes in reflection. Nicks and rolled edges can also be seen, as the rolled edge provides a reflective surface, while a properly straightened edge will be invisible when viewed head-on.\n",
"The substance on the sharpening surface must be harder (hardness is measured on the Mohs scale) than the material being sharpened; diamond is extremely hard, making diamond dust very effective for sharpening, though expensive; less costly, but less hard, abrasives are available, such as synthetic and natural Japanese waterstones. Several cutlery manufacturers now offer electric knife sharpeners with multiple stages with at least one grinding stage. These electric sharpeners are typically used in the kitchen but have the ability to sharpen blades such as pocket or tactical knives. The main benefit of using an electric sharpener is speed with many models that can complete the sharpening process in one to two minutes. The disadvantage is that the sharpening angle is fixed so some specialized knives, like a Japanese style Santoku, may need additional attention to sharpen to the ideal angle.\n",
"A blade's sharpness may be tested by checking if it \"bites\"—begins to cut by being drawn across an object \"without pressure\". Specialized sticks exist to check bite, though one can also use a soft ballpoint pen, such as the common white Bic Stic. A thumbnail may be used at the risk of a cut, or the edge of a sheet of paper. For kitchen knives, various vegetables may be used to check bite, notably carrots, tomatoes, or cucumbers. In testing in this way, any nicks are felt as obstacles.\n",
"Perceived sharpness is a combination of both resolution and acutance: it is thus a combination of the captured resolution, which cannot be changed in processing, and of acutance, which can be so changed. \n\nProperly, perceived sharpness is the steepness of transitions (slope), which is change in output value divided by change in position – hence it is maximized for large changes in output value (as in sharpening filters) and small changes in position (high resolution).\n\nCoarse grain or noise can, like sharpening filters, increase acutance, hence increasing the perception of sharpness, even though they degrade the signal-to-noise ratio.\n",
"Sharpening these implements can be expressed as the creation of two intersecting planes which produce an edge that is sharp enough to cut through the target material. For example, the blade of a steel knife is ground to a bevel so that the two sides of the blade meet. This edge is then refined by honing until the blade is capable of cutting.\n",
"BULLET::::3. polishing (also called stropping): giving a mirror finish, but not significantly altering the edge.\n\nBULLET::::- polishing may also be achieved by buffing a blade: instead of moving the knife against a flat leather strop loaded with fine abrasive, the knife is held still and a powered circular cloth wheel is moved against the knife.\n\nNamed by tools, the same three stages are:\n\nBULLET::::1. grinding (on a grinding wheel) or whetting (on a whetstone)\n\nBULLET::::2. steeling, using a honing steel\n\nBULLET::::3. stropping, on a razor strop or buffing on a wheel\n",
"In times when swords were regularly used in warfare, they required frequent sharpening because of dulling from contact with rigid armor, mail, metal rimmed shields, or other swords, for example. Particularly, hitting the edge of another sword by accident or in emergency could chip away metal and even cause cracks through the blade. Soft-cored blades are more resistant to fracturing on impact.\n\nSection::::Physics.:Nail Pulls.\n",
"A sharp object works by concentrating forces which creates a high pressure due to the very small area of the edge, but high pressures can nick a thin blade or even cause it to roll over into a rounded tube when it is used against hard materials. An irregular material or angled cut is also likely to apply much more torque to hollow-ground blades due to the \"lip\" formed on either side of the edge. More blade material can be included directly behind the cutting edge to reinforce it, but during sharpening some proportion of this material must be removed to reshape the edge, making the process more time-consuming. Also, any object being cut must be moved aside to make way for this wider blade section, and any force distributed to the grind surface reduces the pressure applied at the edge.\n",
"BULLET::::- Laminated blades combine the advantages of a hard, but brittle steel which will hold a good edge but is easily chipped and damaged, with a tougher steel less susceptible to damage and chipping, but incapable of taking a good edge. The hard steel is sandwiched (laminated) and protected between layers of the tougher steel. The hard steel forms the edge of the knife; it will take a more acute grind than a less hard steel, and will stay sharp longer.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-00602 | why is the taste of water repulsive when you have a sore throat but other liquid are palatable? | Try adding just a pinch of salt to your water. The hypotonic solution is irritating your throat. | [
"Dogs have around 1,700 taste buds compared to humans with around 9,000. The sweet taste buds in dogs respond to a chemical called furaneol which is found in many fruits and in tomatoes. It appears that dogs do like this flavor and it probably evolved because in a natural environment dogs frequently supplement their diet of small animals with whatever fruits happen to be available. Because of dogs' dislike of bitter tastes, various sprays, and gels have been designed to keep dogs from chewing on furniture or other objects. Dogs also have taste buds that are tuned for water, which is something they share with other carnivores but is not found in humans. This taste sense is found at the tip of the dog's tongue, which is the part of the tongue that he curls to lap water. This area responds to water at all times, but when the dog has eaten salty or sugary foods the sensitivity to the taste of water increases. It is proposed that this ability to taste water evolved as a way for the body to keep internal fluids in balance after the animal has eaten things that will either result in more urine being passed or will require more water to adequately process. It certainly appears that when these special water taste buds are active, dogs seem to get an extra pleasure out of drinking water, and will drink copious amounts of it.\n",
"In order to further classify the extent of dysgeusia and clinically measure the sense of taste, gustatory testing may be performed. Gustatory testing is performed either as a whole-mouth procedure or as a regional test. In both techniques, natural or electrical stimuli can be used. In regional testing, 20 to 50 µL of liquid stimulus is presented to the anterior and posterior tongue using a pipette, soaked filter-paper disks, or cotton swabs. In whole mouth testing, small quantities (2-10 mL) of solution are administered, and the patient is asked to swish the solution around in the mouth.\n",
"Flavors have temperaments and accordingly would cause warmness, coldness, dryness and wetness in the body. One can simply decide the Mizaj of a food item or a drink by tasting them to a great extent.\n\nTasteless food items, also called watery, are cold and wet. Every insipid food item such as lettuce, dairy products such as yoghurt, or doogh (a yogurt-based beverage) and citrus fruits which are not too much sour or sweet are cold and wet.\n",
"Long before modern studies had established the germ theory of disease, or any advanced understanding of the nature of water as a vehicle for transmitting disease, traditional beliefs had cautioned against the consumption of water, rather favouring processed beverages such as beer, wine and tea. For example, in the camel caravans that crossed Central Asia along the Silk Road, the explorer Owen Lattimore noted, \"The reason we drank so much tea was because of the bad water. Water alone, unboiled, is never drunk. There is a superstition that it causes blisters on the feet.\"\n\nSection::::Socioeconomic impact.\n",
"Syldavians seem to be fond of mineral water, which does not go down well with the whisky-drinking Captain Haddock, one of Tintin's travelling companions.\n",
"The Gallic medical writer Marcellus of Bordeaux may offer another textual reference to Esus in his \"De medicamentis\", a compendium of pharmacological preparations written in Latin in the early 5th century and the sole source for several Celtic words. The work contains a magico-medical charm decipherable as Gaulish which appears to invoke the aid of Esus (spelled Aisus) in curing throat trouble.\n",
"Both consistency studies and fMRI scans have validated JIW’s lexical-gustatory synesthesia. A fMRI scan showed the bilateral activation of the Broca’s area 43 in the brain during JIW’s taste experiences. The Broca’s area 43 is a part of the primary gustatory cortex which is responsible for the perception of taste. Further studies of the underlying brain regions involved in synesthetes like JIW could aid in identifying the root physiological mechanisms involved in lexical-gustatory synesthesia.\n\nSection::::Experimental studies.:Case studies.:SC.\n",
"Pure water is usually described as tasteless and odorless, although humans have specific sensors that can feel the presence of water in their mouths, and frogs are known to be able to smell it. However, water from ordinary sources (including bottled mineral water) usually has many dissolved substances, that may give it varying tastes and odors. Humans and other animals have developed senses that enable them to evaluate the potability of water by avoiding water that is too salty or putrid.\n\nSection::::Chemical and physical properties.:Color and appearance.\n",
"BULLET::::- Food that look like water or have similar characteristics as water by being odorless, tasteless, colorless, and food that are watery and slimy such as soups, broths, and sour stews containing vegetables, water from boiled Kaleh pache (dish of boiled cow or sheep's feet and/or head) or tripe (edible lining from the stomachs of sheep), frozen food, raw food, or cooked food served cold, consuming too much rice without bran, potato, tomato, sauces particularly mayonnaise, salads, dairy products, water, fruits and vegetables with cold Mizaj (cooling characteristics) such as lettuce, cucumber, citrus and sour fruits could form or increase phlegm in the body.\n",
"\"I have established Poo-Ha-Bah for all the people. Poo-Ha-Bah in my language is a very important word--it's talking about Doctor Water. It's really important to have healing water here, not only as a human--a lot of animal life have used healing water, a lot of different ways. My people have always traveled for many miles to get into different kinds of healing waters. This is something that we all need, and this is one reason I have looked for healing water and I finally found one in Tecopa, California. I am pretty sure that we all will enjoy the healing water if we ask the healing water to help us with our illness of all different kinds. This is something that our people have talked about for many, many years.\"\n",
"Rose or mint water is a drink commonly added to Palestinian sweets and dishes. However, it is also a popular drink on its own, and is seen as refreshing in the heated summers. Herbs such as sage can also be boiled with water to create a drink that is sometimes used for medicinal purposes. A warm drink made from sweetened milk with salep garnished with walnuts, coconut flakes and cinnamon, is known as \"sahlab\" and is primarily served during the winter season.\n\nSection::::Beverages.:Coffee and tea.\n",
"The term \"Sweetwater\" is a name often given to freshwater which tastes good in regions where much of the water is bitter to the taste. The Spanish called the river \"Agua Dulce\", a name they applied to good clear water anywhere they lived.\n\nSection::::Course.\n",
"In people admitted to hospital, a bedside \"water swallow test\" is often performed to determine whether there might be need for more detailed swallowing assessment. The test is more reliable when larger amounts of fluid are used. When assessing the swallowing, the test is abnormal if there is coughing or choking, or if the voice changes because of aspirated fluid resting on the vocal cords.\n",
"In a study of 350 infants conducted in Puducherry, India, two-thirds of mothers of infants ages 1 to 6 months admitted administering gripe water to their children at least once a day. The mothers believed that gripe water helps in digestion and prevents stomach ache. However, infant colic, vomiting and constipation were significantly more common in gripe water administered infants compared to those who were exclusively breast fed. The study did not indicate the rate of constipation or vomiting prior to the use of gripe water to compare with during use. Constipation was reported for 19.5% of the infants who were given gripe water compared to 5.8% for those who were not.\n",
"In Japan, this plant's leaves are used as a vegetable - these are from the cultivar, not the wild type which has a far more pungent taste. Wild waterpepper produces oils that cause skin irritation, and the many acids in its tissues, including formic acid, make the plant unpalatable to livestock. Young red sprouts are used as a sashimi garnish, and are known as . Though livestock do not eat the wild type, some insects do, giving rise to the Japanese saying , which may be translated as “There is no accounting for taste.” or more narrowly “Some prefer nettles.”\n",
"Spenser Theyre-Smith's short play \"A Case for Eviction\" (1883) features the comically increasing demands of an unseen houseguest, Major O'Golly, who at one point is said by the uneducated servant Mary to have requested \"Polly Nary water\" with his whiskey.\n\nIn William Dean Howells's \"The Rise of Silas Lapham\" (1885), the Laphams attend a dinner party at the Coreys. After dinner, the men remain in the dining room smoking cigars, and one of the guests \"reached him a bottle of Apollinaris,\" filling a glass for Silas. \"He drank a glass, and then went on smoking.\"\n",
"Sour flavor is cold and dry and cause dryness and coldness in the body as well. Vinegar and pickled vegetables and fruits preserved in vinegar, sour fruits or sour juices, verjuice, and Qarehqurut or black kashk (fabricated from the liquid yoghurt) are all cold and dry.\n\nSalty, bitter and spicy flavors which are usually used to give foods a special taste are warm and dry. Although spicy foods are warmer and dryer than bitter and salty foods respectively.\n",
"Unlike osmotic pressure, tonicity is influenced only by solutes that cannot cross the membrane, as only these exert an effective osmotic pressure. Solutes able to freely cross the membrane do not affect tonicity because they will always equilibrate with equal concentrations on both sides of the membrane without net solvent movement. It is also a factor affecting imbibition.\n\nThere are three classifications of tonicity that one solution can have relative to another: \"hypertonic\", \"hypotonic\", and \"isotonic\".\n\nSection::::Hypertonic Solution.\n",
"Palatability\n\nPalatability is the hedonic reward (i.e., pleasure) provided by foods or fluids that are agreeable to the \"palate\", which often varies relative to the homeostatic satisfaction of nutritional, water, or energy needs. The palatability of a food or fluid, unlike its flavor or taste, varies with the state of an individual: it is lower after consumption and higher when deprived. It has increasingly been appreciated that this can create a hunger that is independent of homeostatic needs.\n\nSection::::Brain mechanism.\n",
"Physicians who specialize in treating OI agree that the single most important treatment is drinking more than two liters (eight cups) of fluids each day. A steady, large supply of water or other fluids reduces most, and for some patients all, of the major symptoms of this condition. Typically, patients fare best when they drink a glass of water no less frequently than every two hours during the day, instead of drinking a large quantity of water at a single point in the day.\n",
"Section::::In popular culture.\n\nSection::::In popular culture.:Books.\n\nTruong, Monique \"Bitter in the Mouth\" (2011). The book's main character, Linda, can taste words.\n\nSection::::In popular culture.:Movies.\n\nDisney / Pixar's \"Ratatouille\" (2007). A computer-animated comedy about a young Rat named Remy who has a dream to become a chef one day. Remy has a highly developed sense of taste and smell that are portrayed throughout the movie in synesthetic taste sequences.\n\nSection::::In popular culture.:Webcomics.\n\nThe character Terezi Pyrope from the webcomic Homestuck can smell and taste both colours and emotions.\n\nSection::::In popular culture.:Podcasts.\n",
"Medicinal tonic water originally contained only carbonated water and a large amount of quinine. However, most tonic water today contains less quinine and is used mostly for its flavor. As a result of the lower quinine content, it is less bitter, and is also usually sweetened, often with high-fructose corn syrup or sugar. Some manufacturers also produce diet (or slimline) tonic water, which may contain artificial sweeteners such as aspartame. Traditional-style tonic water with little more than quinine and carbonated water is less common but may be preferred by those who desire the bitter flavor.\n",
"De Cordi et al. studied thirty patients with lesions of the mouth and oropharynx (caused by various diseases). 83% of patients reported a reduction in pain, 13% remained the same and 3% showed initial improvement but then got worse. 83% showed a distinct improvement in functionality in the ability to take food, 7% remained the same, and 7% got worse, while 3% reported considerable improvement followed by slight worsening. 57% of patients reported an improvement in the grade of oral mucositis, 40% remained the same while 3% got worse.\n",
"Water can be treated in the wilderness through filtering, chemical disinfectants, a portable ultraviolet light device, pasteurizing or boiling. Factors in choice may include the number of people involved, space and weight considerations, the quality of available water, personal taste and preferences, and fuel availability.\n\nIn a study of long-distance backpacking, it was found that water filters were used more consistently than chemical disinfectants. Inconsistent use of iodine or chlorine may be due to disagreeable taste, extended treatment time or treatment complexity due to water temperature and turbidity.\n",
"There is an old wives tale that having a hot drink can help with common cold and influenza symptoms, including sore throat, but there is only limited evidence to support this idea. If the sore throat is unrelated to a cold and is caused by for example tonsillitis, a cold drink may be helpful.\n\nThere are also other medication like lozenges which can help people to cope with a sore throat.\n\nWithout active treatment, symptoms usually last two to seven days.\n\nSection::::Epidemiology.\n\nIn the United States there are about 2.4 million emergency department visits with throat-related complaints per year.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-02240 | If the ice is inside the cup, why does the water precipitate outside the cup? | There's water vapor in the air surrounding the cup. The cold cup cools the air, causing the vapor to condense and form little droplets that adhere to the cup. | [
"The cup consists of a line carved into the interior of the cup, and a small vertical pipe in the center of the cup that leads to the bottom. The height of this pipe is the same as the line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this fill line, fluid will overflow into the pipe in the center of the cup. Due to the drag that molecules exert on one another, the cup will be emptied.\n",
"The secondary flow along the floor of the bowl or cup can be seen by sprinkling heavy particles such as sugar, sand, rice or tea leaves into the water and then setting the water in circular motion by stirring with a hand or spoon. The boundary layer spirals inward and sweeps the heavier solids into a neat pile in the center of the bowl or cup. With water circulating in a bowl or cup, the primary flow is purely circular and might be expected to fling heavy particles outward to the perimeter. Instead, heavy particles can be seen to congregate in the center as a result of the secondary flow along the floor.\n",
"This Cup was used just once and by Kay Khosrow in his reign to find where Bizhan was, who had gone to the Turan border for hunting. Bizhan had become romantically involved with Manizheh, the daughter of Turanian king Afrasiab, after a brief encounter with her in the border of Iran and Turan. Manizhe clandestinely brought him to the palace of his father, and when Afrasiab found out he threw Bizhan into a pit and expelled Manizheh from the castle. Everyone in Iran thought that Bizhan was dead except for Kay Khosrow who saw him alive in the Cup. Kay Khosrow then sent Rostam to rescue Bizhan.\n",
"The frozen droplets on the surface of rimed crystals are hard to resolve and the topography of a graupel particle is not easy to record with a visible-wavelength microscope because of the limited resolution and depth of field in the instrument. However, observations of snow crystals with a low-temperature scanning electron microscope (LT-SEM) clearly show cloud droplets measuring up to on the surface of the crystals. The rime has been observed on all four basic forms of snow crystals, including plates, dendrites, columns and needles. As the riming process continues, the mass of frozen, accumulated cloud droplets obscures the identity of the original snow crystal, thereby giving rise to a graupel particle.\n",
"There is a pressure gradient from the perimeter of the bowl or cup toward the center. This pressure gradient provides the centripetal force necessary for the circular motion of each parcel of water. The pressure gradient also accounts for a \"secondary flow\" of the boundary layer in the water flowing across the floor of the bowl or cup. The slower speed of the water in the boundary layer is unable to balance the pressure gradient. The boundary layer spirals inward toward the axis of circulation of the water. On reaching the center the secondary flow is then upward toward the surface, progressively mixing with the primary flow. Near the surface there may also be a slow secondary flow outward toward the perimeter.\n",
"Cups for cold drinks could not be treated in the same way, as condensation forms on the outside, then soaks into the board, making the cup unstable. To remedy this, cup manufacturers developed the technique of spraying both the inside and outside of the cup with wax. Clay- and wax-coated cups disappeared with the invention of polyethylene (PE)-coated cups; this process covers the surface of the board with a very thin layer of PE, waterproofing the board and welding the seams together.\n",
"Nestor's Cup (mythology)\n\nIn Greek mythology Nestor's Cup is a legendary golden mixing cup which was owned by the hero Nestor. The cup is described in the \"Iliad\".\n\nSection::::Epic cycle.\n\nNestor's Cup is described in Book 11 of the \"Iliad\". Machaon, son of Asclepius, is injured by Paris, and taken back to the Greek camp by Nestor; a healing drink is prepared for him in the cup. The cup is described over six lines.\n",
"Sealed chromatography cartridges or columns work similarly except the sample and buffer is pumped into and through the resin by an external device such as a liquid chromatographic (LC) system, also requiring collection and monitoring of several fractions. Even though this method is often semi-automated, using chromatography cartridges is typically limited to processing one sample at a time and some sample dilution from the chase buffer is still likely to occur.\n",
"There are three proposed mechanisms that could account for a film of liquid water on the ice surface:\n\nBULLET::::- Pressure Melting: Since water expands upon freezing, ice can be melted by 'crushing' the solid structure with enough pressure. This interpretation was initially proposed by John Joly in 1886, basing it on an extrapolation from Le Chatelier's principle.\n\nBULLET::::- Premelting: Due to premelting effects there is always a thin film of liquid water on the ice surface.\n\nBULLET::::- Friction: Heat generated from the ice skates moving melts a small amount of ice under the blade.\n",
"A traditional \"recipe\" is to put a coin in the bottom of the cup, pour the coffee until it is no longer visible, and mixing with alcohol until it reappears. The recipe is often claimed to be a hoax, as the coin will not reappear in a cylindrical coffee cup. This phenomenon is explained by the Beer–Lambert law stating that the absorption of light is proportional to the concentration. As such in order for the recipe to work one will need a cup with a significantly wider top than bottom which will allow for the concentration to decrease faster than the column of liquid increases.\n",
"BULLET::::- US patent number 6039206 2000. This cup holder features legs which extend when the cup is placed into the holder.\n\nBULLET::::- US patent number D645,308 2011. This cup holder for a surface solves a lot of previous problems. It was shown shortly before being granted on U.S. television however has not been sold in the marketplace.\n\nIn Japan several patents were applied for, but they were not finalized. They have since been commercialized by other manufacturers. These are:\n\nBULLET::::- Number 2006314739.\n",
"Pure water is supercooled in a chiller to −2°C and released through a nozzle into a storage tank. On release it undergoes a phase transition forming small ice particles within 2.5% ice fraction. In the storage tank it is separated by the difference in density between ice and water. The cold water is supercooled and released again increasing the ice fraction in the storage tank.\n\nHowever a small crystal in the supercooled water or a nucleation cell on the surface may act as a seed for ice crystals and block the generator.\n\nSection::::See also.\n\nBULLET::::- Cold chain\n\nBULLET::::- Fishing industry\n",
"BULLET::::- \"The Story of Luran\", West Highlands, Scotland. Standard form of tale concerning a butler boy named Luran - in it the cup ends in the possession of Mingarry Castle until lost at sea. Similar tales also existed for Dunvegan Castle (see Dunvegan Cup), and at Raasay. Other folktales exist involving Luran.\n\nSection::::Archaeology.\n",
"BULLET::::- \"Thermal conductivity\": The container of hotter liquid may melt through a layer of frost that is acting as an insulator under the container (frost is an insulator, as mentioned above), allowing the container to come into direct contact with a much colder lower layer that the frost formed on (ice, refrigeration coils, etc.) The container now rests on a much colder surface (or one better at removing heat, such as refrigeration coils) than the originally colder water, and so cools far faster from this point on.\n",
"Every operational ice concentration algorithm is predicated on this\n\nprinciple or a slight variation.\n\nThe NASA team algorithm, for instance, works by taking the\n\ndifference of two channels and dividing by their sum.\n\nThis makes the retrieval slightly nonlinear, but with\n\nthe advantage that the influence of temperature is mitigated.\n\nThis is because brightness temperature varies roughly linearly\n\nwith physical temperature when all other things are equal—see emissivity—and because the sea ice emissivity at different microwave\n\nchannels is strongly correlated.\n\nAs the equation suggests, concentrations of multiple ice\n\ntypes can potentially be detected, with NASA team distinguishing between\n",
"Under certain conditions of temperature and humidity, ice can form on a refrigeration dehumidifier's evaporator coils. The ice buildup can impede airflow and eventually form a solid block encasing the coils. This buildup prevents the dehumidifier from operating effectively, and can cause water damage if condensed water drips off the accumulated ice and not into the collection tray. In extreme cases, the ice can deform or distort mechanical elements, causing permanent damage.\n",
"Later, Roderick Charles MacLeod transcribed the inscription; giving the woman's name as \"Katharina Nig Ry Neil\"—Katharina, daughter of King Neil. R.C. MacLeod declared that Macleod legend assigned the cup to Niall Glúndub, and that the cup might have passed down to her from him, or that the cup was attributed to him by his descendants. R.C. MacLeod later claimed that it was traditionally given that the wooden bowl dated from the 10th century, and that it was the property of Niall Glúndub, the 10th century Irish king of Cenél nEógain, R.C. MacLeod does not rule out the possibility of the ornamentation having been added to the cup at a later date; the silver work dates, at the earliest, from the 14th century and the dated inscription puts it at 1493.\n",
"A puzzling observation is the coexistence of and ions under anoxic conditions. No sulfide anions () are found in the system. This suggests an intricate and poorly understood interaction between the sulfur and the iron biochemical cycles.\n\nIn December 2014, scientists and engineers led by Mikucki returned to Taylor Glacier and used a probe called IceMole, designed by a German collaboration, to melt into the glacier and directly sample the salty water (brine) that feeds Blood Falls.\n",
"In many versions of the tale the vessel is in the shape of the horn; usually the vessel is gold(en), or of some other precious material. Tales often begin with a hole in the ground or similar opening up mysteriously, revealing 'little people' living underground. In some versions the fairy person offers a drink from the cup, which the protagonist refuses or discards - with the vessel's discarded liquid often acting corrosively Usually the vessel is stolen by the human protagonist of the tale, them then being consumed by fear and often chased by angry supernatural beings - the vessel is sometimes recorded as ending in the possession of nobility, or the church.\n",
"In addition to his bachelor's degree from Franklin & Marshall, Rupp was a Fulbright scholar with a master's degree from the Pennsylvania State University and a doctorate from the University of Pennsylvania. He went on to a long career as a faculty member at Millersville University where taught French and served as chairmen of the language department, retiring in 1982.\n",
"Soon afterwards, Stanley purchased what is frequently described as a decorative punch bowl, but which silver expert John Culme identified as a rose bowl, made in Sheffield, England, and sold by London silversmith G. R. Collis and Company (now Boodle and Dunthorne Jewellers), for ten guineas, equal to ten and a half pounds sterling, US$48.67, which is equal to $ in dollars. He had the words \"Dominion Hockey Challenge Cup\" engraved on one side of the outside rim, and \"From Stanley of Preston\" on the other side.\n",
"A Pythagorean cup looks like a normal drinking cup, except that the bowl has a central column in it, giving it a shape like a Bundt pan. The central column of the bowl is positioned directly over the stem of the cup and over the hole at the bottom of the stem. A small open pipe runs from this hole almost to the top of the central column, where there is an open chamber. The chamber is connected by a second pipe to the bottom of the central column, where a hole in the column exposes the pipe to (the contents of) the bowl of the cup.\n",
"The interpretation of the inscription depends on a lacuna in the first line: depending on how it is restored, the inscription may be contrasting the cup from Pithekoussai with the legendary cup of Nestor described in the \"Iliad\" (\"Iliad\" 11.632 ff.), or identifying the cup as one owned by Nestor. The original publication of the inscription accepted the first possibility; by 1976, P.A. Hansen wrote that \"no less than fifteen\" possible restorations of the first lacuna had been published. By the 1990s, it was generally thought that the cup is in fact claiming to be Nestor's. The restoration proposed by Yves Gerhard in 2011 once again argues that the inscription is contrasting the Pithekoussan cup with that of Nestor.\n",
"Section::::Toponymy.\n",
"When water in a circular bowl or cup is moving in circular motion the water displays free-vortex flow – the water at the center of the bowl or cup spins at relatively high speed, and the water at the perimeter spins more slowly. The water is a little deeper at the perimeter and a little more shallow at the center, and the surface of the water is not flat but displays the characteristic depression toward the axis of the spinning fluid. At any elevation within the water the pressure is a little greater near the perimeter of the bowl or cup where the water is a little deeper, than near the center. The water pressure is a little greater where the water speed is a little slower, and the pressure is a little less where the speed is faster, and this is consistent with Bernoulli's principle.\n"
] | [
"There is no water outside of a cup."
] | [
"The air surrounding the cup contains water vapor."
] | [
"false presupposition"
] | [
"There is no water outside of a cup."
] | [
"false presupposition"
] | [
"The air surrounding the cup contains water vapor."
] |
2018-01158 | how civilizations kept track of years before what we consider "year zero." What reference points did they have/use? | **Person Based** In many ancient societies, they named years after people. For example, X many years during the Reign of King Bob the First. Sometimes each year was named after a different person. Examples: * Assyria picked a new, appointed king each year. That year was named after that king. * Years named after Roman consuls/emperors. * The BC/AD, named after Jesus. * Birth of Kim Il-Sung **Event Based** Named after a notable singular event (such as a nation being conquered; founding of an empire) or recurring events (such as the Olympics). Examples: * The Olympics. * Rise of the Seleucid Empire. * Founding of Rome. * The year Rome conquered a given country. * Muhammed's flight from Mecca (Islamic Calendar) * Founding of the French First Republic. * Founding of the Republic of China. **Arbitrary** Basically a group of people picks a year as a start date and starts counting forward. Examples: * Roman tax periods. * Mayan calendar. * Religious calendars dated from the point of creation (early Christian Calendars, Hebrew Calendar) * Hindu calendar | [
"Dionysius did not use AD years to date any historical event. This began with the English cleric Bede (c. 672–735), who used AD years in his \"Historia ecclesiastica gentis Anglorum\" (731), popularizing the era. Bede also used a term similar to the English before Christ once, but that practice did not catch on until very much later. Bede did not sequentially number days of the month, weeks of the year, or months of the year. However, he did number many of the days of the week using a counting origin of one in Ecclesiastical Latin. Previous Christian histories used \"anno mundi\" (\"in the year of the world\") beginning on the first day of creation, or \"anno Adami\" (\"in the year of Adam\") beginning at the creation of Adam five days later (the sixth day of creation according to the Genesis creation narrative), used by Africanus, or \"anno Abrahami\" (\"in the year of Abraham\") beginning 3,412 years after Creation according to the Septuagint, used by Eusebius of Caesarea, all of which assigned \"one\" to the year beginning at Creation, or the creation of Adam, or the birth of Abraham, respectively. Bede continued this earlier tradition relative to the AD era.\n",
"Before the making of the Sun, dates are given in Valian Years, and not all events can be precisely dated. In such cases events are given in chronological order between known dates. Although all dates prior to the first sunrise have been given in Valian years, these can be converted to Years of the Lamps by subtracting 1900, Years of the Trees by subtracting 3500, or Years of the Trees in the First Age by subtracting 4550.\n",
"Section::::Ancient dating systems.:Indiction cycles.\n\nAnother common system was the indiction cycle (15 indictions made up an agricultural tax cycle in Roman Egypt, an indiction being a year in duration). Documents and events began to be dated by the year of the cycle (e.g., \"fifth indiction\", \"tenth indiction\") in the 4th century, and this system was used long after the tax ceased to be collected. It was used in Gaul, in Egypt until the Islamic conquest, and in the Eastern Roman Empire until its conquest in 1453. \n",
"Year Zero (political notion)\n\nThe term Year Zero ( \"chhnam saun\"), applied to the takeover of Cambodia in April 1975 by the Khmer Rouge, is an analogy to the Year One of the French Revolutionary Calendar. During the French Revolution, after the abolition of the French monarchy (September 20, 1792), the National Convention instituted a new calendar and declared the beginning of the Year I. The Khmer Rouge takeover of Phnom Penh was rapidly followed by a series of drastic revolutionary de-industrialization policies resulting in a death toll that vastly exceeded that of the French Reign of Terror.\n\nSection::::Concept.\n",
"BULLET::::- Bede began his history of the world with 3952 BC\n\nBULLET::::- In their ceremonial or commemorative proceedings, Freemasons add 4,000 years to the current Anno Domini calendar year and append \"Anno Lucis\" (\"Year of Light\") to the year (i.e., 2014 AD = 6014 AL). This alternative calendar era, which would designate 4000 BC as \"year zero\", was created in the 18th century (58th century AL) in deference to the Hebrew calendar's \"Anno mundi\" and other ideas regarding the year of creation at the time.\n",
"With the establishment of eponym lists, succinct statements about events were sometimes added in order to keep track of the sequence. The limmu lists themselves run from 911 through to 631 BC, and are dated with the aid of the Canon of Ptolemaeus, which coincides with dates from the Canon between 747 and 631 BC. According to one limmu list, a solar eclipse occurred in the tenth reigning year of the Assyrian king Aššur-dan II, in the month of Sivan (May–June on the Gregorian calendar), by Bur-Sagale. Using the Canon of Kings the tenth year can be dated to 763 BC, and modern astronomy dating has backed the Assyrian eclipse up as June 15, 763 BC. Other events can be dated from this establishment of fact, such as the taking of the Egyptian city of Thebes by the Assyrians in 664 BC, and to be able to determine the date of the minting of ancient coins.\n",
"Section::::Computing.\n\nProgramming libraries may implement a year zero, an example is the Perl CPAN module DateTime.\n\nSection::::Other traditions.\n\nSection::::Other traditions.:South Asian calendars.\n",
"Throughout the Roman and Byzantine periods, the Decapolis and other Hellenized cities of Syria and Palestine used the Pompeian era, counting dates from the Roman general Pompey's conquest of the region in 63 BC.\n\nSection::::Ancient dating systems.:Maya.\n",
"An example of a non-agricultural calendar is the \"Tzolk'in\" calendar of the Maya civilization of pre-Columbian Mesoamerica, which is a cycle of 260 days. This count is based on an earlier calendar and is found throughout Mesoamerica. This formed part of a more comprehensive system of Maya calendars which combined a series of astronomical observations and ritual cycles.\n",
"BULLET::::- 5000–4900 BC: The Older Peron transgression, a warm period that would dominate the 5th millennium, begins in this period\n\nBULLET::::- According to Early Anthropocene Hypothesis the early farming practises started to raise the atmospheric CO-levels to preindustrial levels\n\nBULLET::::- c. 4350 BC: Kikai Caldera in Japan forms in a massive VEI7 eruption\n\nSection::::Calendars and chronology.\n\nBULLET::::- 4713 BC: The epoch (origin) of the Julian Period described by Joseph Justus Scaliger occurred on January 1, the astronomical Julian day number zero\n",
"BULLET::::- 3114 BC – One version of the Mayan calendar, known as the Mesoamerican Long Count, uses the epoch of 11 or 13 August 3114 BC. The Maya Long Count calendar was first used approximately 236 BC (see Mesoamerican_Long_Count_calendar#Earliest_Long_Counts.\n\nBULLET::::- 3102 BC – According to calculations of Aryabhata (6th century), the Hindu Kali Yuga began at midnight on 18 February 3102 BC.\n\nBULLET::::- 3102 BC – Aryabhata dates the events of the Mahabharata to around 3102 BC. Other estimates range from the late 4th to the mid-2nd millennium BC.\n\nSection::::Centuries.\n\nBULLET::::- 40th century BC\n\nBULLET::::- 39th century BC\n",
"Section::::Science and technology.:Astronomy.\n\nFrom Sumerian times, temple priesthoods had attempted to associate current events with certain positions of the planets and stars. This continued to Assyrian times, when Limmu lists were created as a year by year association of events with planetary positions, which, when they have survived to the present day, allow accurate associations of relative with absolute dating for establishing the history of Mesopotamia.\n",
"Year zero\n\nYear zero does not exist in the Anno Domini (AD) system usually used to number years in the Gregorian calendar and in its predecessor, the Julian calendar. In this system, the year is followed by . However, there is a year zero in astronomical year numbering (where it coincides with the Julian year ) and in ISO 8601:2004 (where it coincides with the Gregorian year ) as well as in all Buddhist and Hindu calendars.\n\nSection::::Historical, astronomical and ISO year numbering systems.\n\nSection::::Historical, astronomical and ISO year numbering systems.:Historians.\n",
"Occasionally in Talmudic writings, reference was made to other starting points for eras, such as Destruction Era dating, being the number of years since the AD 70 destruction of the Second Temple, and the number of years since the Creation year based on the calculation in the Seder Olam Rabbah of Rabbi Jose ben Halafta in about AD 160. By his calculation, based on the Masoretic Text, Adam and Eve were created on 1st of Tishrei (Rosh Hashanah Day 1) in 3760 BC, later confirmed by the Muslim chronologist al-Biruni as 3448 years before the Seleucid era. An example is the c. 8th-century AD Baraita of Samuel.\n",
"Historians have never included a year zero. This means that between, for example, and , there are 999 years: 500 years BC, and 499 years AD preceding 500. In common usage \"anno Domini\" 1 is preceded by the year 1 BC, without an intervening year zero. Neither the choice of calendar system (whether Julian or Gregorian) nor the era (\"Anno Domini\" or Common Era) determines whether a year zero will be used. If writers do not use the convention of their group (historians or astronomers), they must explicitly state whether they include a year 0 in their count of years, otherwise their historical dates will be misunderstood.\n",
"The vague year, from \"annus vagus\" or wandering year, is an integral approximation to the year equaling 365 days, which wanders in relation to more exact years. Typically the vague year is divided into 12 schematic months of 30 days each plus 5 epagomenal days. The vague year was used in the calendars of Ethiopia, Ancient Egypt, Iran, Armenia and in Mesoamerica among the Aztecs and Maya. It is still used by many Zoroastrian communities.\n\nSection::::Astronomical years.:Heliacal year.\n",
"All eras used with Hindu and Buddhist calendars, such as the Saka era or the Kali Yuga, begin with the year 0. All these calendars use elapsed, expired, or complete years, in contrast with most other calendars which use current years. A complete year had not yet elapsed for any date in the initial year of the epoch, thus the number 1 cannot be used. Instead, during the first year the indication of 0 years (elapsed) is given in order to show that the epoch is less than 1 year old. This is similar to the Western method of stating a person's age – people do not reach age one until one year has elapsed since birth (but their age during the year beginning at birth is specified in months or fractional years, not as age zero). However, if ages were specified in years and months, such a person would be said to be, for example, 0 years and 6 months or 0.5 years old. This is analogous to the way time is shown on a 24-hour clock: during the first hour of a day, the time elapsed is 0 hours, \"n\" minutes.\n",
"Section::::History of the Astrological Ages.:Post-Hipparchus.:Mashallah ibn Athari.\n\nThe renowned Jewish astronomer and astrologer Masha’allah (c.740 – 815 CE) employed precession of the equinoxes for calculating the period “Era of the Flood” dated as 3360 BCE or 259 years before the Indian Kali Yuga, believed to have commenced in 3101 BCE.\n\nSection::::History of the Astrological Ages.:Post-Hipparchus.:Giovanni Pico della Mirandola.\n",
"In all the lands where the Persian calendar was used the \"epagemonai\" were placed at the end of the year. To offset the difference between the agricultural year and the calendar year (the tax-gathering season began after the harvest) the start of the \"araji\" (land-tax) year was delayed by one month every 120 years. A Roman historian, Quintus Curtius Rufus, describing a ceremony in 333 BC, writes:\n",
"BULLET::::- Northreckoning (NR): Used in the City of Waterdeep, Northreckoning dates from the year Ahghairon became the first Lord of Waterdeep. A more archaic system called Waterdeep Years (WY) dates from the supposed first use of Waterdeep as a trading post. This reckoning is now largely abandoned except in ancient texts.\n\nBULLET::::- Mulhorand Calendar (MC): One of the oldest calendars in use in the Realms, this ancient scheme of record-keeping dates from the founding of Skuld, the City of Shadows, reputedly by a Mulhorandi god.\n",
"BULLET::::- In Thailand in 1888 King Chulalongkorn decreed a National Thai Era dating from the founding of Bangkok on April 6, 1782. In 1912, New Year's Day was shifted to April 1. In 1941, Prime Minister Phibunsongkhram decided to count the years since 543 BC. This is the Thai solar calendar using the Thai Buddhist Era. Except for this era, it is the Gregorian calendar.\n",
"In 1627, the German astronomer Johannes Kepler first used an astronomical year which was to become year zero in his \"Rudolphine Tables\". He labeled the year \"Christi\" and inserted it between years labeled \"Ante Christum\" (BC) and \"Post Christum\" (AD) on the mean motion pages of the Sun, Moon, and planets. Then in 1702 the French astronomer Philippe de la Hire used a year he labeled at the end of years labeled \"ante Christum\" (BC), immediately before years labeled \"post Christum\" (AD) on the mean motion pages in his \"Tabulæ Astronomicæ\", thus adding the designation \"0\" to Kepler's \"Christi\". Finally, in 1740 the French astronomer Jacques Cassini , who is traditionally credited with the invention of year zero, completed the transition in his \"Tables astronomiques\", simply labeling this year \"0\", which he placed at the end of years labeled \"avant Jesus-Christ\" (BC), immediately before years labeled \"après Jesus-Christ\" (AD).\n",
"The system is so named due to its use in astronomy. Few other disciplines outside history deal with the time before year 1, some exceptions being dendrochronology, archaeology and geology, the latter two of which use 'years before the present'. Although the absolute numerical values of astronomical and historical years only differ by one before year 1, this difference is critical when calculating astronomical events like eclipses or planetary conjunctions to determine when historical events which mention them occurred.\n\nSection::::Year zero usage.\n",
"The Egyptians also devised a method of telling the time at night based on the heliacal risings of 36 decan stars, one for each 10° segment of the 360° circle of the zodiac and corresponding to the ten-day \"weeks\" of their civil calendar.\n",
"Dionysius Exiguus’ Anno Domini era (which contains only calendar years \"AD\") was extended by Bede to the complete Christian era (which contains, in addition all calendar years \"BC\", but no \"year zero\"). Ten centuries after Bede, the French astronomers Philippe de la Hire (in the year 1702) and Jacques Cassini (in the year 1740), purely to simplify certain calculations, put the Julian Dating System (proposed in the year 1583 by Joseph Scaliger) and with it an astronomical era into use, which contains a leap year zero, which precedes the year 1 (AD).\n\nSection::::Prehistoric chronologies.\n"
] | [] | [] | [
"normal"
] | [
"Years were always numerically tracked."
] | [
"false presupposition",
"normal"
] | [
"A long time ago years were tracked by names of kings or rulers. "
] |
2018-23547 | How does a human manage to die if they slit open their wrists but doesn’t when they lose their whole arm? | Yeah this all depends on context. If there is no intervention, you will die from either. If you are planning to cut off your arm, like a surgery amputation, they apply a tourniquet to cut off blood flow to the area. There is a certain way that if you slit your wrist it can be very hard to repair, which is why it's so dangerous, but most of the time with intervention you will be okay. | [
"On October 26, 2016, the Director of hand transplantation at UCLA, Dr. Kodi Azari, and his team, performed a hand transplant on 51-year-old entertainment executive from Los Angeles, Jonathan Koch at Ronald Reagan UCLA Medical Center. Koch underwent a 17-hour procedure to replace his left hand, which he lost to a mysterious, life-threatening illness that struck him in January 2015. On June 23, 2015, Koch had the amputation surgery, also performed by Dr. Kodi Azari, which was designed to prep him to receive a transplanted limb. This included severing the left hand closer to the wrist than the elbow. Azari kept all the nerves and tendons long and extended, which would give him plenty to work with later. Then he sutured them together and attached them to the stump of bone to keep them from retracting. This is the first known hand transplant case in which the hand was amputated in preparation for a hand transplant, as opposed to previous hand transplant patients who have undergone typical amputation surgeries. Azari's theory about prepping the hand for a transplant during the initial amputation surgery would later be supported by Koch when he was able to move his thumb only two hours after he woke up from the 17-hour transplant surgery and move his entire hand only two days after surgery. \n",
"Various research papers on cattle slaughter collected by Compassion In World Farming mention that \"after the throat is cut, large clots can form at the severed ends of the carotid arteries, leading to occlusion of the wound (or \"ballooning\" as it is known in the slaughtering trade). Nick Cohen wrote in the \"New Statesman\", \"Occlusions slow blood loss from the carotids and delay the decline in blood pressure that prevents the suffering brain from blacking out. In one group of calves, 62.5 percent suffered from ballooning. Even if the cut to the neck is clean, blood is carried to the brain by vertebral arteries and it keeps cattle conscious of their pain. \"Experiments carried out by the principal of the Swedish Veterinary Institute (\"Veterinärhögskolan\") by order of the Swedish government in 1925 and published in 1928 determined that the blood carried to the brain by the vertebral arteries in bovines is reduced after slaughter by the Jewish method shehitah from 1/30 to 1/40, and on the basis of this and one other experiment Professor Axel Sahlstedt declared the method humane and not cruel. However, on the basis of other experiments that had showed different results, Sahlstedt recommended post-stunning as standard.\n",
"Section::::Cultural references.:In podcasts.\n\nBULLET::::- In the 2007 podcast series \"Wormwood: A Serialized Mystery\", the main protagonist Dr. Xander Crowe has replaced his left hand with a Hand of Glory. Among other things it gives him the power to open any lock, and a running gag in the series is that it disassembles his cell phone while he sleeps.\n\nBULLET::::- The horror podcast NoSleep Podcast features a tale entitled \"The Hand of Glory\" by author Colin Harker about a drug addict who crafts a hand of glory from the corpse of an erotic asphyxiation victim. There are predictably gruesome consequences.\n",
"BULLET::::- Stab wounds to the chest at or below the clavicle–The radial nerve is the terminal branch of the posterior cord of the brachial plexus. A stab wound may damage the posterior cord and result in neurological deficits, including an inability to abduct the shoulder beyond the first 15 degrees, an inability to extend the forearm, reduced ability to supinate the hand, reduced ability to abduct the thumb and sensory loss to the posterior surface of the arm and hand.\n",
"In December 2012, a surgical team, led by W.P. Andrew Lee, M.D. of Johns Hopkins Hospital, performed the hospital's first bilateral arm transplant. The surgery took thirteen hours. The surgery connected the bones, blood vessels, muscles, tendons, nerves, and skin on both arms, extending his left arm from the elbow and his right from below the shoulder. The transplant was coupled with a treatment of the deceased donors bone marrow cells to help prevent rejection of the new limbs.\n",
"Injury to the median nerve proper occurs in 0.06% of cases. Risk of nerve injury has been found to be higher in patients undergoing endoscopic CTR compared with open, though most are temporary neurapraxias. The palmar cutaneous branch of the median nerve may be injured during superficial skin dissection or while releasing the proximal portion of the transverse carpal ligament with scissors or an endoscopic device. Nerve injury can lead to persistent paresthesias or painful neuroma formation.\n",
"Although suicide was officially accepted as the cause of death, some medical experts have raised doubts, suggesting that the evidence does not support this. The most detailed objection was provided in a letter from three medical doctors published in \"The Guardian\", reinforced by support from two other senior doctors in a later letter to the newspaper. These doctors argued that the post-mortem finding of a transected ulnar artery could not have caused a degree of blood loss that would kill someone, particularly when outside in the cold (where vasoconstriction would cause slow blood loss). Further, this conflicted with the minimal amount of blood found at the scene. They also contended that the amount of co-proxamol found was only about a third of what would normally be fatal. Dr Rouse, a British epidemiologist wrote to the \"British Medical Journal\" offering his opinion that the act of committing suicide by severing the wrist arteries is an extremely rare occurrence in a 59-year-old man with no previous psychiatric history. Nobody else died from that cause during the year.\n",
"In 2005, a baby dolphin (now named Winter) became entangled in the ropes of a crab trap. The rope cut off the supply of blood to her tail which resulted in her tail being amputated.\n",
"Symptoms vary depending on the severity and location of the trauma; however, common symptoms include wrist drop (the inability to extend the wrist upward when the hand is palm down); numbness of the back of the hand and wrist, specifically over the first web space which is innervated by the radial nerve; and inability to voluntarily straighten the fingers or extend the thumb, which is performed by muscles of the extensor group, all of which are primarily innervated by the radial nerve. Loss of wrist extension is due to paralysis of the posterior compartment of forearm muscles; although the elbow extensors are also innervated by the radial nerve, their innervation is usually spared because the compression occurs below, distal, to the level of the axillary nerve, which innervates the long head of the triceps, and the upper branches of the radial nerve that innervate the remainder of the Triceps..\n",
"The \"ULTRASEAL LAA\" device, from Cardia, is a percutaneous, transcatheter device intended to prevent thrombus embolization from the left atrial appendage in patients who have non-valvular atrial fibrillation.\n\nAs with all Cardia devices (such as: Atrial Septal Defect Closure Device or Patent Foramen Ovale Closure Device), the Ultraseal is fully retrievable and repositionable in the Cardia Delivery System used for deployment. The device can be retrieved and redeployed multiple times in a single procedure without replacing the device or delivery sheath.\n",
"The provision of MUA to an extremity joint is reserved for primary conditions thereof, such as a frozen articulation. The practice of applying MUA to an extremity joint that conjoins the spine (i.e., shoulder and/or hip), as a routine component or an extension of a spinal MUA procedure, is not supported by clinical investigation.\n\nSection::::Risk.\n",
"Most tissues and organs of the body can survive clinical death for considerable periods. Blood circulation can be stopped in the entire body below the heart for at least 30 minutes, with injury to the spinal cord being a limiting factor. Detached limbs may be successfully reattached after 6 hours of no blood circulation at warm temperatures. Bone, tendon, and skin can survive as long as 8 to 12 hours.\n",
"BULLET::::- \"Sensory deficit\": Loss of sensation or paresthesiae in ulnar half of the palm, and the medial 1½ digits on the palmar aspect of the hand, with dorsal sparing. The dorsal aspect of the hand is unaffected as the posterior cutaneous branch of the ulnar nerve is given off higher up in the forearm and does not reach the wrist.\n\nIn severe cases, surgery may be performed to relocate or \"release\" the nerve to prevent further injury.\n\nSection::::See also.\n\nBULLET::::- Axillary nerve\n\nBULLET::::- Median nerve\n\nBULLET::::- Musculocutaneous nerve\n\nBULLET::::- Radial nerve\n\nSection::::External links.\n\nBULLET::::- Cubital Tunnel Support Forums\n",
"Palmar branch of the median nerve\n\nThe palmar branch of the median nerve is a branch of the median nerve which arises at the lower part of the forearm.\n\nSection::::Branches.\n\nIt pierces the palmar carpal ligament, and divides into a lateral and a medial branch; \n\nBULLET::::- The \"lateral branch\" supplies the skin over the ball of the thumb, and communicates with the volar branch of the lateral antebrachial cutaneous nerve.\n\nBULLET::::- The \"medial branch\" supplies the skin of the palm and communicates with the palmar cutaneous branch of the ulnar.\n\nSection::::Clinical significance.\n",
"Since the nerve passes dorsally around the head of the radius, it is susceptible to traction or compression injuries when the elbow joint is injured, in particular, radial dislocation. Another area of potential entrapment is the arcade of Frohse, a fibrous arch formed from the proximal part of the superficial head of the supinator, under which the deep branch of the radial nerve passes. The passage for the nerve varies in size. In some cases of spontaneous paralysis of the nerve, releasing this fibrous band released pressure on the nerve and restored function \n",
"In December 2010 \"The Times\" reported that Kelly had a rare abnormality in the arteries supplying his heart; the information had been disclosed by the head of the Academic Unit of Pathology at Sheffield University Medical School, Professor Paul Ince, who noted that the post-mortem had found severe narrowing of the blood vessels, and said that heart disease was likely to have been a factor in Kelly's death as the cut to the wrist artery would not itself have been fatal. Vice-President of the British Cardiovascular Society Ian Simpson said that Kelly's artery anomaly could have contributed to his death.\n",
"In the case of a non-fatal suicide attempt, the person may experience injury of the tendons of the extrinsic flexor muscles, or the ulnar and median nerves which control the muscles of the hand, both of which can result in temporary or permanent reduction in the victim's sensory or motor ability or also cause chronic somatic or autonomic pain. As in any class IV hemorrhage, aggressive resuscitation is required to prevent death of the patient; standard emergency bleeding control applies for pre-hospital treatment.\n\nSection::::Dehydration.\n",
"Section::::Hutton Inquiry.:Fatality of ulnar artery cuts.\n",
"Injury of median nerve at different levels causes different syndromes with varying motor and sensory deficits.\n\nAbove the elbow\n\nBULLET::::- Common mechanism of injury: A supracondylar humerus fracture\n\nBULLET::::- Motor deficit:\n\nBULLET::::- Loss of pronation of forearm, weakness in flexion of the hand at the wrist, loss of flexion of radial half of digits and thumb, loss of abduction and opposition of thumb.\n\nBULLET::::- Presence of an ape hand deformity when the hand is at rest, due to an hyperextension of index finger and thumb, and an adducted thumb\n",
"American Ryan Boarman was bitten by a shark on his right elbow on 25 April 2016. After spending some time in Balinese hospitals, he was transferred to Singapore’s Raffles Hospital on 29 April 2016, where he went under the knife of orthopaedic surgeon Dr Lim Yeow Wai. The American had suffered a 360-degree laceration around the elbow, with the shark biting, pulling off and shearing away at least eight muscles and tendons and injuring one nerve and one ligament.\n\nSection::::Corporate affairs.\n\nSection::::Corporate affairs.:Financial performance.\n",
"The anterior interosseous nerve (a branch of the median nerve) and the anterior interosseous artery and vein pass downward on the front of the interosseous membrane between the flexor pollicis longus and flexor digitorum profundus.\n\nInjuries to tendons are particularly difficult to recover from due to the limited blood supply they receive.\n\nSection::::Human anatomy.:Actions.\n\nThe flexor pollicis longus is a flexor of the phalanges of the thumb; when the thumb is fixed, it assists in flexing the wrist.\n\nSection::::Human anatomy.:Innervation.\n\nThe flexor pollicis longus is supplied by the anterior interosseous(C8-T1) branch of the median nerve (C5-T1).\n\nSection::::Human anatomy.:Variations.\n",
"The extensor carpi ulnaris extends the wrist, but when acting alone inclines the hand toward the ulnar side; by its continued action it extends the elbow-joint.\n\nThe muscle is a minor extensor of the carpus in carnivores, but has become a flexor in ungulates. In this case it is described as \"ulnaris lateralis\".\n\nSection::::Innervation.\n\nDespite its name, the extensor carpi ulnaris is innervated by the posterior interosseous nerve (C7 and C8), the continuation of the deep branch of the radial nerve. It would therefore be paralyzed in an injury to the posterior cord of the brachial plexus.\n\nSection::::Injuries.\n",
"Section::::Long-term functionality.\n\nThe long-term functionality varies patient to patient and is affected by several factors including level of amputation and transplant and participation in occupational therapy post hand transplant surgery. Hand transplant recipient Jonathan Koch was able to pick up a napkin and a tennis ball with his newly transplanted hand 7 days after his 17-hour surgery and by day 9, he was able to pick up a bottle of water and take a drink. 3 months after surgery, Koch was able to use his transplanted hand to tie his shoe.\n\nSection::::Survival rates.\n",
"Section::::Judicial caning.:Medical treatment and the effects.\n\nA 2010 report by Amnesty International described the severity of judicial caning as follows, \"In Malaysian prisons specially trained caning officers tear into victims' bodies with a metre-long cane swung with both hands at high speed. The cane rips into the victim's naked skin, pulps the fatty tissue below, and leaves scars that extend to muscle fibre. The pain is so severe that victims often lose consciousness.\"\n",
"In regards to the outcome, Rosolie explained that the anaconda \"[got] my arm into a position where her force was fully on my exposed arm. I started to feel the blood drain out of my hand, and I felt the bone flex. And when I got to the point where I felt like it was going to snap, I had to tap out.\"\n\nSection::::Broadcast and reception.\n"
] | [
"A human deosn't die from a cut off arm. ",
"Humans don't die if they lose their whole arrm."
] | [
"A human will die from a cut off arm if no one intervenes. ",
"Humans can die if they lose their whole arm and there is no first aid or medical intervention."
] | [
"false presupposition"
] | [
"A human deosn't die from a cut off arm. ",
"Humans don't die if they lose their whole arrm."
] | [
"false presupposition",
"false presupposition"
] | [
"A human will die from a cut off arm if no one intervenes. ",
"Humans can die if they lose their whole arm and there is no first aid or medical intervention."
] |
2018-05041 | what’s physically different about USB versions? | For example, USB 3.0 has 9 wires while 2.0 has only 4. More wires = more data sent in a shorter time. Also, 3.0 uses a different method of sending the signal than 2.0. As for the C version, I don't know much about it. The only info I found is that it has 24 pins and that, in the 3.1 version, two wires can carry speeds of up to 10 Gbit/s. | [
"Unlike other data buses (such as Ethernet), USB connections are directed; a host device has \"downstream\" facing ports that connect to the \"upstream\" ports of devices. Only downstream facing ports provide power; this topology was chosen to easily prevent electrical overloads and damaged equipment. Thus, USB cables have different ends: A and B, with different physical connectors for each. Each format has a plug and receptacle defined for each of the A and B ends. USB cables have plugs, and the corresponding receptacles are on the computers or electronic devices. In common practice, the A end is usually the standard format, and the B side varies over standard, mini, and micro. The mini and micro formats also provide for USB On-The-Go with a hermaphroditic AB receptacle, which accepts either an A or a B plug. On-The-Go allows USB between peers without discarding the directed topology by choosing the host at connection time; it also allows one receptacle to perform double duty in space-constrained applications.\n",
"On a USB flash drive, one end of the device is fitted with a single Standard-A USB plug; some flash drives additionally offer a micro USB plug, facilitating data transfers between different devices.\n\nSection::::Technology.\n\nOn a USB flash drive, one end of the device is fitted with a single Standard-A USB plug; some flash drives additionally offer a micro USB plug, facilitating data transfers between different devices.\n",
"Inside the plastic casing is a small printed circuit board, which has some power circuitry and a small number of surface-mounted integrated circuits (ICs). Typically, one of these ICs provides an interface between the USB connector and the onboard memory, while the other is the flash memory. Drives typically use the USB mass storage device class to communicate with the host.\n\nSection::::Technology.:Flash memory.\n",
"Alternate Mode hosts and sinks can be connected with either regular full-featured USB-C cables, or converter cables/adapters:\n\nBULLET::::- USB 3.1 Type-C to Type-C full-featured cable: DisplayPort, Mobile High-Definition Link (MHL), HDMI and Thunderbolt (20Gbit/s, or 40Gbit/s with cable length up to 0.5 m) Alternate Mode USB-C ports can be interconnected with standard passive full-featured USB Type-C cables. These cables are only marked with standard \"trident\" SuperSpeed USB logo (for Gen 1 cables) or the SuperSpeed+ USB 10 Gbit/s logo (for Gen 2 cables) on both ends. Cable length should be 2.0m or less for Gen 1 and 1.0m or less for Gen 2.\n",
"Micro-USB connectors, which were announced by the USB-IF on 4 January 2007, have a similar width to Mini-USB, but approximately half the thickness, enabling their integration into thinner portable devices. The Micro-A connector is with a maximum overmold boot size of , while the Micro-B connector is with a maximum overmold size of .\n\nThe thinner Micro-USB connectors were introduced to replace the Mini connectors in devices manufactured since May 2007, including smartphones, personal digital assistants, and cameras. \n",
"BULLET::::- Anker PowerIQ\n\nSection::::Power.:Non-standard devices.\n",
"Full-featured USB-C 3.1 cables are electronically marked cables that contain a full set of wires and a chip with an ID function based on the configuration data channel and vendor-defined messages (VDMs) from the USB Power Delivery 2.0 specification. USB-C devices also support power currents of 1.5 A and 3.0 A over the 5 V power bus in addition to baseline 900 mA; devices can either negotiate increased USB current through the configuration line or they can support the full Power Delivery specification using both BMC-coded configuration line and legacy BFSK-coded V line.\n",
"Specifications listed on technology web sites (such as GSMArena, PDAdb.net, PhoneScoop, and others) can help determine compatibility. Using GSMArena as an example, one would locate the page for a given device, and examine the verbiage under \"Specifications → Comms → USB\". If \"USB Host\" is shown, the device should be capable of supporting OTG-type external USB accessories.\n\nIn many of the above implementations, the host device has only a micro-B receptacle rather than a micro-AB receptacle. Although non-standard, micro-B to micro-A receptacle adapters are widely available and used in place of the mandated micro-AB receptacle on these devices.\n\nSection::::Backward compatibility.\n",
"On the device side, a modified Micro-B plug (Micro-B SuperSpeed) is used to cater for the five additional pins required to achieve the USB 3.0 features (USB-C plug can also be used). The USB 3.0 Micro-B plug effectively consists of a standard USB 2.0 Micro-B cable plug, with an additional 5 pins plug \"stacked\" to the side of it. In this way, cables with smaller 5 pin USB 2.0 Micro-B plugs can be plugged into devices with 10 contact USB 3.0 Micro-B receptacles and achieve backward compatibility.\n",
"USB 2.0 uses two wires for power (V and GND), and two for differential serial data signals. Mini and micro connectors have their GND connections moved from pin #4 to pin #5, while their pin #4 serves as an ID pin for the On-The-Go host/client identification.\n\nUSB 3.0 provides two additional differential pairs (four wires, SSTx+, SSTx−, SSRx+ and SSRx−), providing full-duplex data transfers at \"SuperSpeed\", which makes it similar to Serial ATA or single-lane PCI Express.\n\nSection::::Connectors.:Connector properties.:Colors.\n",
"Because there are two separate controllers in each USB 3.0 host, USB 3.0 devices transmit and receive at USB 3.0 data rates regardless of USB 2.0 or earlier devices connected to that host. Operating data rates for earlier devices are set in the legacy manner.\n\nSection::::Device classes.\n\nThe functionality of a USB device is defined by a class code sent to a USB host. This allows the host to load software modules for the device and to support new devices from different manufacturers.\n\nDevice classes include:\n\nSection::::Device classes.:USB mass storage / USB drive.\n",
"USB OTG recognizes that a device can perform both master and slave roles, and so subtly changes the terminology. With OTG, a device can be either a host when acting as a link master, or a \"peripheral\" when acting as a link slave. The choice between host and peripheral roles is handled entirely by which end of the cable the device is connected to. The device connected to the \"A\" end of the cable at start-up, known as the \"A-device\", acts as the default host, while the \"B\" end acts as the default peripheral, known as the \"B-device\".\n",
"This article provides information about the physical aspects of Universal Serial Bus, USB: connectors, cabling, and power. The initial versions of the USB standard specified connectors that were easy to use and that would have acceptable life spans; revisions of the standard added smaller connectors useful for compact portable devices. Higher-speed development of the USB standard gave rise to another family of connectors to permit additional data paths. All versions of USB specify cable properties; version 3.X cables include additional data paths. The USB standard included power supply to peripheral devices; modern versions of the standard extend the power delivery limits for battery charging and devices requiring up to 100 watts. USB has been selected as the standard charging format for many mobile phones, reducing the proliferation of proprietary chargers.\n",
"USB mass storage device class\n\nThe USB mass storage device class (also known as USB MSC or UMS) is a set of computing communications protocols defined by the USB Implementers Forum that makes a USB device accessible to a host computing device and enables file transfers between the host and the USB device. To a host, the USB device acts as an external hard drive; the protocol set interfaces with a number of storage devices.\n\nSection::::Uses.\n\nDevices connected to computers via this standard include:\n\nBULLET::::- External magnetic hard drives\n",
"BULLET::::- SuperSpeed (SS) adds two additional pairs of shielded twisted wire (and new, mostly compatible expanded connectors). These are dedicated to full-duplex SuperSpeed operation. The half-duplex lines are still used for configuration.\n\nBULLET::::- SuperSpeed+ (SS+) uses increased data rate (Gen 2x1 mode) and/or the additional lane in the Type-C connector (Gen 1x2 and Gen 2x2 mode).\n\nA USB connection is always between a host or hub at the \"A\" connector end, and a device or hub's \"upstream\" port at the other end.\n\nSection::::Signaling (USB PHY).:Signaling state.\n",
"BULLET::::- SuperSpeed+ (SS+) uses increased data rate (Gen 2×1 mode) and/or the additional lane in the USB-C connector (Gen 1×2 and Gen 2×2 mode).\n\nA USB connection is always between a host or hub at the \"A\" connector end, and a device or hub's \"upstream\" port at the other end.\n\nSection::::Protocol layer.\n\nDuring USB communication, data is transmitted as packets. Initially, all packets are sent from the host via the root hub, and possibly more hubs, to devices. Some of those packets direct a device to send some packets in reply.\n\nSection::::Transactions.\n\nThe basic transactions of USB are:\n",
"Additionally, many devices have USB On-The-Go and support USB storage, in most cases using either a special USB micro-B flash drive or an adapter for a standard USB port. Such adapters can also be used with various other USB devices, such as hardware mice and keyboards.\n\nSection::::Chargers and external batteries.\n\nSmartphone chargers have gone through a diverse evolution that has included cradles, plug-in cords and obscure connectors. However, more recent devices generally use micro-USB. (Apple devices still use proprietary cables, though the form-factor of their 30-pin plug used on older devices has shown up elsewhere.\n",
"USB mass storage device class (MSC or UMS) standardizes connections to storage devices. At first intended for magnetic and optical drives, it has been extended to support flash drives. It has also been extended to support a wide variety of novel devices as many systems can be controlled with the familiar metaphor of file manipulation within directories. The process of making a novel device look like a familiar device is also known as extension. The ability to boot a write-locked SD card with a USB adapter is particularly advantageous for maintaining the integrity and non-corruptible, pristine state of the booting medium.\n",
"Various domains and application areas involve devices of many kinds and these devices may have different communication capabilities. To achieve interoperability in such an heterogeneous situation, the SIB supports multiple transport mechanisms, such as TCP/IP, HTTP, Bluetooth and NoTA. Depending on the actual operating environment the most suitable transport technology is selected.\n\nSection::::Notion of application.\n",
"USB OTG introduces the concept of a device performing both master and slave roles whenever two USB devices are connected and one of them is a USB OTG device, they establish a communication link. The device controlling the link is called the master or host, while the other is called the slave or peripheral.\n",
"superMHL can use a variety of source and sink connectors with certain limitations: micro-USB or proprietary connectors can be used for the source only, HDMI Type-A for the sink only, while the USB Type-C and the superMHL connectors can be used for the source or sink.\n\nSection::::Connectors.\n\nSection::::Connectors.:Micro-USB-to-HDMI (five-pin).\n\nThe first implementations use the most common mobile connection (Micro-USB) and the most common TV connection (HDMI). There are two types of connection, depending on whether the display device directly supports MHL.\n\nSection::::Connectors.:Micro-USB-to-HDMI (five-pin).:Passive cable.\n",
"BULLET::::- External optical drives, including CD and DVD reader and writer drives\n\nBULLET::::- Portable flash memory devices\n\nBULLET::::- Solid-state drives\n\nBULLET::::- Adapters between standard flash memory cards and USB connections\n\nBULLET::::- Digital cameras\n\nBULLET::::- Digital audio and portable media players\n\nBULLET::::- Card readers\n\nBULLET::::- PDAs\n\nBULLET::::- Mobile phones\n\nDevices supporting this standard are known as MSC (Mass Storage Class) devices. While MSC is the original abbreviation, UMS (Universal Mass Storage) has also come into common use.\n\nSection::::Operating system support.\n\nMost mainstream operating systems include support for USB mass storage devices; support on older systems is usually available through patches.\n",
"Manufacturers of personal electronic devices might not include a USB standard connector on their product for technical or marketing reasons. Some manufacturers provide proprietary cables that permit their devices to physically connect to a USB standard port. Full functionality of proprietary ports and cables with USB standard ports is not assured; for example, some devices only use the USB connection for battery charging and do not implement any data transfer functions.\n",
"OTG devices attached either to a peripheral-only B-device or a standard/embedded host have their role fixed by the cable, since in these scenarios it is only possible to attach the cable one way.\n\nSection::::Connectors.:Connector types.:USB-C.\n\nDeveloped at roughly the same time as the USB 3.1 specification, but distinct from it, the USB-C Specification 1.0 was finalized in August 2014 and defines a new small reversible-plug connector for USB devices. The USB-C plug connects to both hosts and devices, replacing various Type-A and Type-B connectors and cables with a standard meant to be future-proof.\n",
"BULLET::::- Parallel DB-25 – pitch, 26 pins, 2×13 (2 rows of 13 pins)\n\nBULLET::::- In some instances USB through version 2 on motherboards – pitch, 10 pins, 2×5 (2 rows of 5 pins)\n\nFor all of the above connectors, the computer manufacturer typically attaches a female IDC connector onto one end of a ribbon cable, and later slides that connector onto a matching male box header or pin header on the computer motherboard.\n\nSection::::See also.\n\nBULLET::::- Vampire tap\n\nBULLET::::- Wire wrap\n\nBULLET::::- Electrical connector\n\nBULLET::::- DC connector\n\nBULLET::::- Krone LSA-PLUS\n\nBULLET::::- Berg connector\n\nBULLET::::- JST connector\n\nBULLET::::- Molex connector\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal",
"normal"
] | [] |
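To put rough numbers on the USB answer above ("more wires = more data sent in a shorter time"), here is a small Python sketch comparing idealised transfer times at the nominal signalling rates (480 Mbit/s, 5 Gbit/s, 10 Gbit/s). Real-world throughput is noticeably lower because of line encoding and protocol overhead, so treat these as best-case figures:

```python
# Nominal signalling rates in bits per second (ideal, before encoding/protocol overhead).
RATES_BPS = {
    "USB 2.0 High Speed":  480e6,
    "USB 3.0 / 3.1 Gen 1": 5e9,
    "USB 3.1 Gen 2":       10e9,
}

def ideal_transfer_seconds(size_bytes, rate_bps):
    """Best-case time to move size_bytes at rate_bps, ignoring all overhead."""
    return size_bytes * 8 / rate_bps

one_gib = 1024 ** 3  # a 1 GiB file
for name, rate in RATES_BPS.items():
    t = ideal_transfer_seconds(one_gib, rate)
    print(f"{name:20s} ~{t:5.1f} s per GiB (ideal)")
```

With these numbers a 1 GiB file takes roughly 18 s, 1.7 s and 0.9 s respectively, which is the practical difference the extra SuperSpeed wire pairs and faster signalling buy.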
2018-04048 | How do viruses know what to do despite not being alive? | "Are they mechanical?" Basically. They run on pure chemistry and physics. It's not that they know what to do, but rather that that specific amalgamation of chemicals will react that way in the presence of certain things. | [
"Section::::Nonhuman.:Bacteria quorum sensing.\n\nCommunication is not a tool used only by humans, plants and animals, but it is also used by microorganisms like bacteria. The process is called quorum sensing. Through quorum sensing, bacteria are able to sense the density of cells, and regulate gene expression accordingly. This can be seen in both gram positive and gram negative bacteria.\n\nThis was first observed by Fuqua \"et al.\" in marine microorganisms like \"V. harveyi\" and \"V. fischeri\".\n\nSection::::Models.\n",
"As with the example of HBV, showing in figure 2, four open reading frames are encoded (ORFs), all ORFs are in the same direction, defining the minus- and plus-strands. And the virus has four known genes, which encode the core protein, the virus polymerase, surface antigens (preS1, preS2, and S) and the X protein.\n",
"Then the CRISPR–Cas prokaryotic immune system which holds a \"library\" of genome fragments from phages (proto-spacers) that have previously infected the host. Spacers from isolate microbial genomes with matches to metagenomic viral contigs (mVCs) were identified for 4.4% of the viral groups and 1.7% of singletons. The hypothesis was explored that viral transfer RNA (tRNA) genes originate from their host.\n",
"Section::::Plant chemoreceptors.\n\nPlants have various mechanisms to perceive danger in their environment. Plants are able to detect pathogens and microbes through surface level receptor kinases (PRK). Additionally, receptor-like proteins (RLPs) containing ligand binding receptor domains capture pathogen-associated molecular patterns (PAMPS) and damage-associated molecular patterns (DAMPS) which consequently initiates the plant's innate immunity for a defense response.\n",
"NNSVs can also bind to cellular receptors throughout the pro-inflammatory cytokine pathway to inhibit the immune response. By carrying accessory proteins that directly bind to pattern recognition receptors, the virus can use its accessory proteins to induce conformational changes throughout other immune response proteins and inhibit cellular responses. Generally, the pattern recognition receptors detect infection-associated molecules commonly associated with viruses, but some viruses carry accessory proteins that reconfigure the protein to inhibit its function and block the rest of the signaling cascade that would produce an immune response.\n",
"BULLET::::- For any bacterium to enter a host's cell, the cell must display receptors to which bacteria can adhere and be able to enter the cell. Some strains of \"E. coli\" are able to internalize themselves into a host's cell even without the presence of specific receptors as they bring their own receptor to which they then attach and enter the cell.\n\nBULLET::::- Under nutrient limitation, some bacteria transform into endospores to resist heat and dehydration.\n",
"Section::::Bacteria.:Examples.:\"Escherichia coli\".\n",
"Section::::Plant virus movement between cells.\n\nMost plant viruses move between plant cells via plasmodesmata, pores between plant cell walls that allow the plant cells to communicate with each other. Plasmodesmata usually only allow the passage of small diffusible molecules, such as various metabolites. Neither virus particles nor viral genomic nucleic acid can pass through plasmodesmata unaided.\n\nSection::::Function of movement proteins.\n",
"More recently, the Ploegh lab at the Whitehead Institute has been using a technique called “sortagging” to look at the pathways through which viruses are able to avoid detection by the immune system. Memory B cells are lymphocytes known to be produced to fight off secondary infection, yet the influenza virus is able to avoid the immune response generated by these cells. This method was used to tag the influenza virus, so that it could be observed, and it was found that the interaction of virus antigens with the B-cell receptor is required for infection.\n",
"A third and more specific example, is by simply attaching to the surface of the cell via receptors on the cell, and injecting only its genome into the cell, leaving the rest of the virus on the surface. This is restricted to viruses in which only the gene is required for infection of a cell (most positive-sense, single-stranded RNA viruses because they can be immediately translated) and further restricted to viruses that actually exhibit this behavior. The best studied example includes the bacteriophages; for example, when the tail fibers of the T2 phage land on a cell, its central sheath pierces the cell membrane and the phage injects DNA from the head capsid directly into the cell.\n",
"This basic idea extends to viruses that do not contain an envelope. Well studied examples are the viruses that infect bacteria, known as bacteriophages (or simply phages). Typical phages have long tails used to attach to receptors on the bacterial surface.\n\nSection::::Reducing cellular proximity.:Overview.\n",
"The life cycle of viruses, such as those used in neuronal tracing, is different from cellular organisms. Viruses are parasitic in nature and cannot proliferate on their own. Therefore, they must infect another organism and effectively hijack cellular machinery to complete their life cycle. The first stage of the viral life cycle is called viral entry. This defines the manner in which a virus infects a new host cell. In nature, neurotropic viruses are usually transmitted through bites or scratches, as in the case of Rabies virus or certain strains of Herpes viruses. In tracing studies, this step occurs artificially, typically through the use of a syringe. The next stage of the viral life cycle is called viral replication. During this stage, the virus takes over the host cell's machinery to cause the cell to create more viral proteins and assemble more viruses. Once the cell has produced a sufficient number of viruses, the virus enters the viral shedding stage. During this stage, viruses leave the original host cell in search of a new host. In the case of neurotropic viruses, this transmission typically occurs at the synapse. Viruses can jump across the relatively short space from one neuron to the next. This trait is what makes viruses so useful in tracer studies. Once the virus enters the next cell, the cycle begins anew. The original host cell begins to degrade after the shedding stage. In tracer studies, this is the reason the timing must be tightly controlled. If the virus is allowed to spread too far, the original microcircuitry of interest is degraded and no useful information can be retrieved. Typically, viruses can infect only a small number of organisms, and even then only a specific cell type within the body. The specificity of a particular virus for a specific tissue is known as its tropism. Viruses in tracer studies are all neurotropic (capable of infecting neurons).\n",
"Viral production without cell lysis has recently been observed in \"O. tauri\" cells. Thomas et al. (2011) found that in resistant host cells, the viral genome was replicated and viruses were released via a budding mechanism. This low rate of viral release through budding allows for prolonged survivability of the host and virus progeny, resulting in a stable co-existence.\n\nSection::::Encoded proteins.\n",
"For the bacteria to use quorum sensing constitutively, they must possess three characteristics: to secrete a signaling molecule, an autoinducer, to detect the change in concentration of signaling molecules, and to regulate gene transcription as a response. This process is highly dependent on the diffusion mechanism of the signaling molecules. QS Signaling molecules are usually secreted at a low level by individual bacteria. At low cell density, the molecules may just diffuse away. At high cell density, the local concentration of signaling molecules may exceed its threshold level, and trigger changes in gene expressions.\n\nSection::::Bacteria.:Mechanism.:Gram-positive Bacteria.\n",
"Invertebrates do not produce antibodies by the lymphocyte-based adaptive immune system that is central to vertebrate immunity, but they are capable of effective immune responses. Phagocytosis was first observed in invertebrates, and this and other innate immune responses are important in immunity to viruses and other pathogens. The hemolymph of invertebrates contains many soluble defence molecules, such as hemocyanins, lectins, and proteins, which protect these animals against invaders.\n",
"The major way bacteria defend themselves from bacteriophages is by producing enzymes that destroy foreign DNA. These enzymes, called restriction endonucleases, cut up the viral DNA that bacteriophages inject into bacterial cells. Bacteria also contain a system that uses CRISPR sequences to retain fragments of the genomes of viruses that the bacteria have come into contact with in the past, which allows them to block the virus's replication through a form of RNA interference. This genetic system provides bacteria with acquired immunity to infection.\n\nSection::::Infection in other species.:Archaeal viruses.\n",
"Bacteriophages are a common and diverse group of viruses and are the most abundant biological entity in aquatic environments—there are up to ten times more of these viruses in the oceans than there are bacteria, reaching levels of 250,000,000 bacteriophages per millilitre of seawater. These viruses infect specific bacteria by binding to surface receptor molecules and then entering the cell. Within a short amount of time, in some cases just minutes, bacterial polymerase starts translating viral mRNA into protein. These proteins go on to become either new virions within the cell, helper proteins, which help assembly of new virions, or proteins involved in cell lysis. Viral enzymes aid in the breakdown of the cell membrane, and, in the case of the T4 phage, in just over twenty minutes after injection over three hundred phages could be released.\n",
"In one study, mice treated with gentamycin via infusion pump displayed CNS and brain involvement during infection with \"Listeria\", indicating that the population of bacteria responsible for severe pathogenesis resided within cells and was protected from the circulating antibiotic. Macrophages infected with \"Listeria\" pass the infection on to neurons more easily through paracytophagy than through extracellular invasion by free bacteria. The mechanism which specifically targets these infected cells to the CNS is currently not known. This Trojan horse function is also observed and thought to be important in early stages of infection where gut-to-lymph node infection is mediated by infected dendritic cells.\n",
"Several viruses use a leaky scanning mechanism to produce vital proteins which implies that leaky scanning is not a consequence of inadequacy, but instead allows viruses to overcome the high selective pressures of competing with their hosts. Molecular biologists are narrowing the search of the ideal nucleotide environment for initiation of translation, and the mechanisms by which viruses replicate.\n\nSection::::Discovery.\n",
"In the case of the Cauliflower mosaic virus (CaMV), viroplasms improve the virus transmission by the aphid vector. Viroplasms also control release of virions when the insect stings an infected plant cell or a cell near the infected cells.\n\nSection::::Possible co-evolution with the host.\n",
"Section::::Bacteria.:Examples.:\"Pseudomonas aeruginosa\".\n",
"Section::::Microorganisms whose study is encompassed by microbial genetics.:Viruses.\n\nViruses are capsid-encoding organisms composed of proteins and nucleic acids that can self-assemble after replication in a host cell using the host's replication machinery. There is a disagreement in science about whether viruses are living due to their lack of ribosomes. Comprehending the viral genome is important not only for studies in genetics but also for understanding their pathogenic properties.\n",
"An important innovation in this field is the use of neurotropic viruses as tracers. These not only spread throughout the initial site of infection, but can jump across synapses. The use of a virus provides a self-replicating tracer. This can allow for the elucidation of neural microcircuitry to an extent that was previously unobtainable. \n",
"In animal cells, virus particles are gathered by the microtubule-dependent aggregation of toxic or misfolded protein near the microtubule organizing center (MTOC), so the viroplasms of animal viruses are generally localized near the MTOC. MTOCs are not found in plant cells. Plant viruses induce the rearrangement of membranes structures to form the viroplasm. This is mostly shown for plant RNA viruses.\n\nSection::::Functions.\n",
"Due to the lack of information of reticulons, scientists often study reticulon-like proteins. The genome \"Arabidopsis thaliana\" has at least 19 reticulon like proteins, and 15 of them have been explicitly identified. One study on \"Arabadopsis\" looks at transport between organelles and specific receptors. The regulation of receptor transport to the plasma membrane is important for the recognition of pathogens. Membrane associated proteins travel from the ER to the Golgi bodies, and eventually the plasma membrane. Immune receptors that are related to the plasma membrane are called pattern recognition receptors (PRRs). Through \"Arabidopsis\" protein microarrays the FLAGELIN-SENSITIVE2 (FLS2) receptor, a PRR, was tagged to identify reticulon-like protein RTNLB1 and its homolog RTNLB2. When manipulating the expression levels of RTNLB1 and RTNLB2, signaling of the FLS2 receptor was interrupted. A serine cluster at the N-terminal of the protein is important for the FLS2 interaction. Although there is not a direct interference, RTNLB1 and RTNLB2 interact with newly created FLS2 to facilitate transport to the plasma membrane. Through the RTNLB1 and RTNLB2 reticulon domain, their function is part of a larger protein system that moderates FLS2 secretion. Receptor trafficking is looked at through plant studies as an important process of receptor activity. The role of human reticulons which are involved in intracellular protein trafficking indicate the relationship between reticulons and plant RTNLBs.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-13259 | Why do large commercial ships use diesel engines to produce electricity, to then power the turbines? | Ships go fast and slow. A diesel engine is most efficient in a fairly narrow band of engine speeds and loads (ask the engineer). Keeping the diesel at its efficient point and using electricity to drive the propeller turns out to be a good idea. | [
"A turbine-electric system is also possible using gas turbine generators. Some yachts use only gas turbines for integrated electric propulsion without any diesel engines. If electric propulsion is used via electric motor on shaft, or integrated into the main reduction gear driving the shaft, greater power available is realized faster than using diesels. In addition, an on-shaft permanent magnet motor drive system also utilizing gas turbine Prime Movers on the main reduction gear, can also provide electricity when driven by the Prime Movers. The on-shaft permanent magnet electric motors provide propulsion at lower speeds via on-board electrical power generation gas turbine or diesel, at significant fuel savings. If a fleet wide usage is analyzed, significant logistical advantages are realized over time. Compared to diesel, it increases flexibility, versatility and efficiency, with capability of transforming to provide propulsion or electrical power more rapidly, which ever the situation dictates.\n",
"Gas turbines are also used for electrical power generation and some ships use a combination: \"Queen Mary 2\" has a set of diesel engines in the bottom of the ship plus two gas turbines mounted near the main funnel; all are used for generating electrical power, including those used to drive the propellers. This provides a relatively simple way to use the high-speed, low-torque output of a turbine to drive a low-speed propeller, without the need for excessive reduction gearing.\n\nSection::::Ships.:Submarines.\n",
"BULLET::::- Diesel-electric hybrid: There is a third potential use for a diesel auxiliary and that is to charge the batteries, when they suddenly start to wane far from shore in the middle of the night, or at anchor after some days of living aboard. In this case, where this kind of use is to be expected, perhaps on a larger cruising yacht, then a combined diesel-electric solution may be designed from the start. The diesel engine is installed with the prime purpose of charging the battery banks, and the electric motor with that of propulsion. There is some reduction in efficiency if motoring for long distances as the diesel's power is converted first to electricity and then to motion, but there is a balancing saving every time the wind-, sail- and solar-charged batteries are used for manoeuvring and for short journeys without starting the diesel. There is the flexibility of being able to start the diesel as a pure generator whenever required. The main losses are in weight and installation cost, but on the bigger cruising boats that may sit at anchor running large diesels for hours every day, these are not too big an issue, compared to the savings that can be made at other times. An example is the fishing boat Selfa El-Max 1099, with 135 kWh battery and 80 kW diesel generator. An LNG-powered supply vessel started operation in 2016 with a 653 kWh/1600 kW battery acting as spinning reserve during dynamic positioning, saving 15-30% fuel.\n",
"Ships often also employ diesel generators, sometimes not only to provide auxiliary power for lights, fans, winches etc., but also indirectly for main propulsion. With electric propulsion the generators can be placed in a convenient position, to allow more cargo to be carried. Electric drives for ships were developed before World War I. Electric drives were specified in many warships built during World War II because manufacturing capacity for large reduction gears was in short supply, compared to capacity for manufacture of electrical equipment. Such a diesel-electric arrangement is also used in some very large land vehicles such as railroad locomotives.\n",
"Medium-speed engines intended for marine applications are usually used to power (ro-ro) ferries, passenger ships or small freight ships. Using medium-speed engines reduces the cost of smaller ships and increases their transport capacity. In addition to that, a single ship can use two smaller engines instead of one big engine, which increases the ship's safety.\n\nSection::::Types.:By engine speeds.:Low-speed engines.\n",
"In the second half of the 20th century, rising fuel costs almost led to the demise of the steam turbine. Most new ships since around 1960 have been built with diesel engines. The last major passenger ship built with steam turbines was \"Fairsky\", launched in 1984. Similarly, many steam ships were re-engined to improve fuel efficiency. One high-profile example was the 1968 built \"Queen Elizabeth 2\" which had her steam turbines replaced with a diesel-electric propulsion plant in 1986.\n",
"In recent times, there is some renewed interest in commercial nuclear shipping. Nuclear-powered cargo ships could lower costs associated with carbon dioxide emissions and travel at higher cruise speeds than conventional diesel powered vessels.\n\nSection::::Power sources.:Reciprocating diesel engines.\n",
"For base loads diesel generators or gas engines are usually preferred, since they offer better fuel efficiency, however, such stationary engines have a lower power density and are built only up to about 10 MW power per unit.\n\nThe efficiency of larger gas turbines (50 MW or more) can be enhanced by using a combined cycle, where the remaining energy of hot exhaust gases is used to generate steam which drives another steam turbine on same shaft or a separate generator set.\n\nSection::::History.\n",
"Most new-build ships with steam turbines are specialist vessels such as nuclear-powered vessels, and certain merchant vessels (notably Liquefied Natural Gas (LNG) and coal carriers) where the cargo can be used as bunker fuel.\n\nSection::::Power sources.:Steam turbines.:LNG carriers.\n",
"The use of combined fuel and electric propulsion (\"combined diesel-electric or gas\", or CODLOG) has gradually been extended over the years to the extent that some modern liners such as the Queen Mary 2 use only electric motors for the actual propulsion, powered by diesel and gas turbine engines. The advantages include being able to run the fuel engines at an optimal speed at all times and being able to mount the electric motor in a pod which may be rotated by 360° for increased manoeuvrability. Note that this is not actually an \"electric boat\", but rather a variant of diesel-electric or turbine-electric propulsion, similar to the diesel or electric propulsion used on submarines since WWI.\n",
"In addition to their well known role as power supplies during power failures, diesel generator sets also routinely support main power grids worldwide in two distinct ways:\n\nSection::::Supporting main utility grids.:Grid support.\n",
"Many warships built since the 1960s have used gas turbines for propulsion, as have a few passenger ships, like the jetfoil. Gas turbines are commonly used in combination with other types of engine. Most recently, has had gas turbines installed in addition to diesel engines. Because of their poor thermal efficiency at low power (cruising) output, it is common for ships using them to have diesel engines for cruising, with gas turbines reserved for when higher speeds are needed. However, in the case of passenger ships the main reason for installing gas turbines has been to allow a reduction of emissions in sensitive environmental areas or while in port. Some warships, and a few modern cruise ships have also used steam turbines to improve the efficiency of their gas turbines in a combined cycle, where waste heat from a gas turbine exhaust is utilized to boil water and create steam for driving a steam turbine. In such combined cycles, thermal efficiency can be the same or slightly greater than that of diesel engines alone; however, the grade of fuel needed for these gas turbines is far more costly than that needed for the diesel engines, so the running costs are still higher.\n",
"In July 2000 the \"Millennium\" became the first cruise ship to be propelled by gas turbines, in a combined diesel and gas configuration. The liner RMS Queen Mary 2 uses a combined diesel and gas configuration.\n\nIn marine racing applications the 2010 C5000 Mystic catamaran Miss GEICO uses two Lycoming T-55 turbines for its power system.\n\nSection::::Advances in technology.\n",
"In World War II the United States built diesel–electric surface warships. Due to machinery shortages destroyer escorts of the and es were diesel–electric, with half their designed horsepower (The and es were full-power steam turbine–electric). The s, on the other hand, were designed for diesel–electric propulsion because of its flexibility and resistance to damage.\n\nSome modern diesel–electric ships, including cruise ships and icebreakers, use electric motors in pods called azimuth thrusters underneath to allow for 360° rotation, making the ships far more maneuverable. An example of this is \"Symphony of the Seas\", the largest passenger ship as of 2019.\n",
"In 2017 Scandlines began a project which is aiming for electrical power through huge batteries, in order not to emit greenhouse gases and other pollutants. The old oil (or diesel) burning engines will mainly be used to charge the batteries. The final intention however, is to abandon the old engines totally. The initial part will for instance reduce the carbon dioxide emissions by 50 percent.\n\nSection::::History.\n\nSection::::History.:Early history.\n",
"Recent advances in technology reliquefication plants to be fitted to vessels, allowing the boil off to be reliquefied and returned to the tanks. Because of this, the vessels' operators and builders have been able to contemplate the use of more efficient slow-speed Diesel engines (previously most LNG carriers have been steam turbine-powered). Exceptions are the LNG carrier \"Havfru\" (built as \"Venator\" in 1973), which originally had dual fuel diesel engines, and its sister-ship \"Century\" (built as \"Lucian\" in 1974), also built with dual fuel gas turbines before being converted to a diesel engine system in 1982.\n",
"Large generators are also used onboard ships that utilize a diesel-electric powertrain. Voltages and frequencies may vary in different installations.\n\nSection::::Applications.\n\nEngine-generators are used to provide electrical power in areas where utility (central station) electricity is unavailable, or where electricity is only needed temporarily. Small generators are sometimes used to provide electricity to power tools at construction sites. \n\nTrailer-mounted generators supply temporary installations of lighting, sound amplification systems, amusement rides etc. You can use a wattage chart to calculate the estimated power usage for different types of equipment to determine how many watts are necessary in a portable generator.\n",
"The first diesel motorship was also the first diesel–electric ship, the Russian tanker \"Vandal\" from Branobel, which was launched in 1903. Steam turbine–electric propulsion has been in use since the 1920s (s), using diesel–electric powerplants in surface ships has increased lately. The Finnish coastal defence ships \"Ilmarinen\" and \"Väinämöinen\" laid down in 1928–1929, were among the first surface ships to use diesel–electric transmission. Later, the technology was used in diesel powered icebreakers.\n",
"Proper sizing of diesel generators is critical to avoid low-load or a shortage of power. Sizing is complicated by the characteristics of modern electronics, specifically non-linear loads. In size ranges around 50 MW and above, an open cycle gas turbine is more efficient at full load than an array of diesel engines, and far more compact, with comparable capital costs; but for regular part-loading, even at these power levels, diesel arrays are sometimes preferred to open cycle gas turbines, due to their superior efficiencies.\n\nSection::::Diesel generator set.\n",
"The LM2500+ is an evolution of the LM2500, delivering up to or 28.6 MW of electric energy when combined with an electrical generator. Two of such turbo-generators have been installed in the superstructure near the funnel of \"Queen Mary 2\", the world's largest transatlantic ocean liner, for additional electric energy when the ship's four diesel-generators are working at maximum capacity or fail. Celebrity Cruises uses two LM2500+ engines in their \"Millennium\"-class ships in a COGAS cycle.\n\nThe LM2500 is license-built in Japan by IHI Corporation, in India by Hindustan Aeronautics Limited, and in Italy by Avio Aero.\n",
"Another use is to buffer extreme loads on the power system. For example, tokamak fusion devices impose very large peak loads, but relatively low average loads, on the electrical grid. The DIII-D tokamak at General Atomics, the Princeton Large Torus (PLT) at the Princeton Plasma Physics Laboratory, and the Nimrod synchrotron at the Rutherford Appleton Laboratory each used large flywheels on multiple motor–generator rigs to level the load imposed on the electrical system: the motor side slowly accelerated a large flywheel to store energy, which was consumed rapidly during a fusion experiment as the generator side acted as a brake on the flywheel. Similarly, the next generation U.S. Navy aircraft carrier Electromagnetic Aircraft Launch System (EMALS) will use a flywheel motor–generator rig to supply power instantaneously for aircraft launches at greater than the ship's installed generator capacity.\n",
"Such a propulsion system has a smaller footprint than a diesel-only power plant with the same maximal power output, since smaller engines can be used and the gas turbine and gearbox don't need that much additional space. Still, it retains the high fuel efficiency of diesel engines when cruising, allowing greater range and lower fuel costs than with gas turbines alone. On the other hand, a more complex, heavy and troublesome gearing is needed.\n\nTypical cruising speed of CODAG warships on diesel-power is and typical maximal speed with switched on turbine is .\n\nSection::::Turbines and diesels on separate shafts.\n",
"There are several operational and economical advantages to such electrical de-coupling of a ship's propulsion system, and it became a standard element of cruise ship design in the 1990s, over 30 years after \"Canberra\" entered service. However, diesel engine and gas turbine driven alternators are the primary power source for most modern electrically propelled ships.\n",
"BULLET::::- Towed generators are common on long-distance cruising yachts and can generate a lot of power when travelling under sail. If an electric boat has sails as well, and will be used in deep water (deeper than about ), then a towed generator can help build up battery charge while sailing (there is no point in trailing such a generator while under electric propulsion as the extra drag from the generator would waste more electricity than it generates). Some electric power systems use the free-wheeling drive propeller to generate charge through the drive motor when sailing, but this system, including the design of the propeller and any gearing, cannot be optimised for both functions. It may be better locked off or feathered while the towed generator's more efficient turbine gathers energy.\n",
"Cruise ships require electrical power, normally provided by diesel generators. When docked ships must run their generators continuously to power on-board facilities, unless they are capable of using onshore power, where available. Polluting emissions from the diesel engines can be equivalent to 700 lorries running their engines, and is harmful where ships dock in populated areas. Some cruise ships already support the use of shorepower, while others are being adapted to do so.\n\nModern cruise ships typically have some or all of the following facilities:\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
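The passages in the record above mention sizing a portable generator from a wattage chart and, more generally, matching generator capacity to load. A minimal Python sketch of that arithmetic, assuming made-up tool wattages and a 25% headroom margin (none of these figures come from the excerpts):

```python
# Rough generator-sizing sketch: sum the running watts of every load,
# allow for the largest surge (motor-starting) load, then add headroom.
loads = {
    # name: (running_watts, starting_watts), illustrative values only
    "circular saw": (1400, 2300),
    "work lights": (500, 500),
    "air compressor": (1600, 4500),
}

running_total = sum(run for run, _ in loads.values())
largest_surge_extra = max(start - run for run, start in loads.values())
peak_demand = running_total + largest_surge_extra

headroom = 1.25  # assumed margin so the generator is not run flat-out
print(f"Continuous load:  {running_total} W")
print(f"Peak demand:      {peak_demand} W")
print(f"Suggested rating: {peak_demand * headroom:.0f} W")
```

The same sum-plus-margin idea scales up to the diesel-array sizing discussed above, where part-load efficiency and non-linear loads add further constraints.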
2018-00841 | Why do LED lights look jittery or like they're strobing when you look at them quickly? | They look like they are strobing because (for most of them) on mains power they actually are! Much like when watching TV, however, something called "persistence of vision" smooths it all out for you. When blinking or looking away quickly your brain "preserves" what you saw in that instant and you can spot it. You can also see it when something is moving quickly across your vision. Similar stuff happens when you dim LEDs (like LED car taillights when the brakes aren't on), though much, much faster through something called PWM. The LEDs are switched on and off really quickly - when they are on for half the time they look half as bright. In theory PWM is too fast to be perceivable (when done right) but it seems a lot of people are actually sensitive to it! You can also get strobing from HID headlights because they often use AC to get the thousands of volts they need to ignite. *This bit goes a little beyond ELI5 but hopefully still helps. My inbox kinda exploded and I've tried to answer repeated questions in the edits.* Strobing is (historically) very common with LEDs driven from mains AC. You can often see the effect if you wave your hand back and forth while focusing on a stationary spot - instead of smooth motion blur you can see a series of hand images, like stills from a movie. Cheap camera phones also sometimes show it. So why does this happen with LEDs but not other lights? In your mains AC, the voltage alternates from positive to negative and back again 50 or 60 times per second. That means that 100 or 120 times a second the voltage is exactly **zero.** Zero voltage, zero power. In traditional incandescent lights there is a filament which is heated super hot to provide light. This filament takes time to cool down - much longer than the mains supply takes to go through zero - and so it can stay hot, keep putting out light, and there is (almost) no flicker. In LEDs, there is no filament to heat and they react *very* quickly. When the voltage to them starts to drop towards zero, the lights dim and turn off, coming back on again as the voltage goes back up. As this is happening at 100 or 120 Hz, most people won't notice it. Cheap or traditional triac-based dimming can seriously exacerbate the issue with mains strobing. In higher quality power supplies for LEDs, they use "smoothing capacitors" and/or purpose-designed LED drivers to help the LED stay lit through the low/zero volt bits and this reduces the strobing effect. Incidentally, fluorescents also strobe (though to a lesser degree) and most video cameras have special software to help hide this. Obviously with battery (DC) powered stuff, excluding dimming, there is no AC and so no strobing. **E:** typos **Late E2+:** Some battery powered things can use DC to DC transformers which can in turn cause strobing, so the above has caveats. LED car headlights may fall into this category. I have assumed above that we are talking about incandescent replacement globes which almost always have a full bridge rectifier. For single diode lights (Christmas lights, dim indicators, or other decorative lighting) it is half the frequency and more noticeable. The flicker many people mention in slow motion footage of car LED taillights is almost certainly PWM dimming for combo brakes/running lights. Brakes on, full power, running lights, dimmed. **Regarding strobing headlights**, chances are they are HID lights not LED. 
HIDs need thousands of volts and have transformers (called ballasts) to get this, in turn meaning almost certainly an AC voltage being produced. Much like fluorescent tubes, or arc lamps, there is no filament to help it ride the zero crossing in the AC signal and they strobe. If it is absolutely LED then I would suspect it has to do with being a fancy matrix LED configuration which automatically controls the beam pattern (PWM?). Might also be DC to DC transformers at play. I also found it really interesting how many people have issues with PWM lights. Common wisdom used to be anything above 1 kHz was impossible to see with the naked eye... the exact frequency used in PWM is kinda arbitrary though, apart from lower is easier. Nothing stopping someone using PWM at say 200 Hz instead, which might be where the issue lies. If strobing bothers you the good news seems to be that a lot of newer high-quality LED globes have switch-mode and/or smoothing built in, however it's not clear how to tell from the box. I did a search on Amazon and I couldn't find the right magic words. YMMV. If you have the chance to use them in person, at least one variety will stay on for a fraction of a second after you turn them off, so you might be able to look for this. Dimmable sorts might also be better. | [
"BULLET::::- Cycling: LEDs are ideal for uses subject to frequent on-off cycling, unlike incandescent and fluorescent lamps that fail faster when cycled often, or high-intensity discharge lamps (HID lamps) that require a long time before restarting.\n\nBULLET::::- Dimming: LEDs can very easily be dimmed either by pulse-width modulation or lowering the forward current. This pulse-width modulation is why LED lights, particularly headlights on cars, when viewed on camera or by some people, seem to flash or flicker. This is a type of stroboscopic effect.\n",
"LED lamps may flicker. The effect can be seen on a slow motion video of such a lamp. The extent of flicker is based on the quality of the DC power supply built into the lamp structure, usually located in the lamp base. Longer exposures to flickering light contribute to headaches and eye strain.\n",
"Many systems pulse LEDs on and off, by applying power periodically or intermittently. So long as the flicker rate is greater than the human flicker fusion threshold, and the LED is stationary relative to the eye, the LED will appear to be continuously lit. Varying the on/off ratio of the pulses is known as pulse-width modulation. In some cases PWM-based drivers are more efficient than constant current or constant voltage drivers.\n",
"White LEDs can be used as white holiday lights or to create any other color through the use of colored refractors and lenses similar to those used with incandescent bulbs. Color fading may occur due to the exposure of colored plastics to sunlight or heat, as with ordinary holiday lights. Yellowing may also occur in the epoxy body in which the LED is encased if left in the sun consistently.\n",
"LEDs do not intrinsically produce temporal modulations; they just reproduce the input current waveform very well, and any ripple in the current waveform is reproduced by a light ripple because LEDs have a fast response; therefore, compared to conventional lighting technologies (incandescent, fluorescent), for LED lighting more variety in the TLA properties is seen. Many types and topologies of LED driver circuits are applied; simpler electronics and limited or no buffer capacitors often result in larger residual current ripple and thus larger temporal light modulation.\n",
"BULLET::::- Light source technology: LEDs do not intrinsically produce temporal modulation; they just reproduce the input current waveform very well, and any ripple in the current waveform is reproduced by a light ripple because LEDs have a fast response; therefore compared to conventional lighting technologies (incandescent, fluorescent), for LED lighting more variety in the TLA properties is seen.\n\nBULLET::::- Power source technology (driver, electrical ballast): Many types and topologies of LED drivers and electrical ballasts are applied; simpler electronics and limited or no buffer capacitors often result in larger residual current ripple and thus larger temporal light modulation.\n",
"Some special effects, such as certain kinds of electronic glowsticks commonly seen at outdoor events, have the appearance of a solid color when motionless but produce a multicolored or dotted blur when waved about in motion. These are typically LED-based glow sticks. The variation of the duty cycle upon the LED(s), results in usage of less power while by the properties of flicker fusion having the direct effect of varying the brightness. When moved, if the frequency of duty cycle of the driven LED(s) is below the flicker fusion threshold timing differences between the on/off state of the LED(s) becomes evident, and the color(s) appear as evenly spaced points in the peripheral vision.\n",
"Further background and explanations on the different TLA phenomena are given in a recorded webinar “\"Is it all just flicker?\"”. Models for the visibility of flicker and stroboscopic effect from the temporal behavior of luminous output of LEDs are in the doctoral thesis of Perz.\n\nSection::::Root causes.\n\nThe root cause of TLAs is the variation of the light intensity of lighting equipment. Important factors that can contribute and that determine the magnitude and type of light modulation of lighting equipment are:\n",
"Because only a single set of LEDs, all having a common anode or cathode, can be lit simultaneously without turning on unintended LEDs, Charlieplexing requires frequent output changes, through a method known as multiplexing. When multiplexing is done, not all LEDs are lit quite simultaneously, but rather one set of LEDs is lit briefly, then another set, and eventually the cycle repeats. If it is done fast enough, they will appear to all be on, all the time, to the human eye because of persistence of vision. In order for a display to not have any noticeable flicker, the refresh rate for each LED must be greater than 50 Hz.\n",
"LED lamps generally do not benefit from flicker attenuation through phosphor persistence, the notable exception being white LEDs. Flicker at frequencies as high as 2000 Hz (2 kHz) can be perceived by humans during saccades and frequencies above 3 kHz have been recommended to avoid human biological effects.\n\nSection::::Visual phenomena.\n",
"BULLET::::- being purer i.e. white light being filtered to red, green, and blue is not as accurate as an LED producing that color natively\n\nBULLET::::- a larger, clearer image then is possible with an equivalently lumen-rated traditional projector.\n\nBy strobing the LEDs in sequence, the \"color wheel\" can be removed, making the new generation of DLP+LED projectors completely solid state, as well as reducing visual artifacts that put DLP at a disadvantage to LCD or LCOS, i.e. the \"rainbow effect\", in certain situations.\n",
"There are two types of LEDs: colored LEDs and white LEDs. Colored LEDs emit a specific color light (monochromatic light), regardless of the color of the transparent plastic lens that encases the LED's chip. The plastic may be colored for cosmetic reasons, but does not substantially affect the color of the light emitted. Holiday lights of this type do not suffer from color fading because the light is determined by the LED's chip rather than the plastic lens.\n",
"This way the LED junction is flooded and cleared of carriers as quickly as possible, basically by short circuit discharge. This pushes the speed of the LED to maximum, which makes the output optical signal fast enough so that the range/power ratio is the same as with the faster red HPWT-BD00-F4000 LED. The side effects of this brutal driving\n",
"BULLET::::- More accurate color rendering: The color rendering index is the ability of a light source to correctly reproduce the colors of the objects in comparison to an ideal light source. Improved color rendering makes it easier for drivers to recognize objects.\n\nBULLET::::- Quick turn on and off: Unlike fluorescent and high-intensity discharge (HID) lamps, such as mercury vapor, metal halide, and sodium vapor lamps, which take time to heat up once switched on, LEDs come on with full brightness instantly.\n",
"Some well-known HP-LEDs in this category are the Nichia 19 series, Lumileds Rebel Led, Osram Opto Semiconductors Golden Dragon, and Cree X-lamp. As of September 2009, some HP-LEDs manufactured by Cree now exceed 105 lm/W.\n\nExamples for Haitz's law—which predicts an exponential rise in light output and efficacy of LEDs over time—are the CREE XP-G series LED, which achieved 105 lm/W in 2009 and the Nichia 19 series with a typical efficacy of 140 lm/W, released in 2010.\n\nSection::::Types.:AC-driven.\n",
"Before the introduction of LED lamps, three types of lamps were used for the bulk of general (white) lighting: \n\nBULLET::::- Incandescent lights, which produce light with a glowing filament heated by electric current. These are very inefficient, having a luminous efficacy of 10-17 lumens/W, and also have a short lifetime of 1000 hours. They are being phased out of general lighting applications. Incandescent lamps produce a continuous black body spectrum of light similar to sunlight, and so produce high Color rendering index (CRI).\n",
"BULLET::::- There is progressive wear of layers of phosphor in white LEDs. The change in color slowly moves devices from one photobiological risk group to a higher one.\n\nSection::::Disadvantages of LED street lights.:Health concerns.\n\nBULLET::::- Exposure to the light of white LED bulbs suppresses melatonin by up to five times more than exposure to the light of pressure sodium bulbs. The fact that white light, emitting at wavelengths of 400-500 nanometers suppresses the production of melatonin produced by the pineal gland is known. The effect is disruption of a human being’s biological clock resulting in poor sleeping and rest periods.\n",
"BULLET::::- Thermal runaway: Parallel strings of LEDs will not share current evenly due to the manufacturing tolerance in their forward voltage. Running two or more strings from a single current source will likely result in LED failure as the devices warm up. A circuit is required to ensure even distribution of current between parallel strands.\n\nSection::::Applications.\n\nLED uses fall into four major categories:\n\nBULLET::::- Visual signals where light goes more or less directly from the source to the human eye, to convey a message or meaning\n",
"In these configurations, the relocated resistors make it possible to light multiple LEDs at the same time row-by-row, instead of requiring that they be lit individually. The row current capacity could be boosted by an NPN emitter follower instead of the typically much weaker I/O pin.\n\nSection::::Problems with Charlieplexing.\n\nSection::::Problems with Charlieplexing.:Refresh rate.\n",
"In the diagram above it can be seen that if LED 6 has a 4 V forward voltage, and LEDs 1 and 3 have forward voltages of 2 V or less, they will light when LED 6 is intended to, as their current path is shorter. This issue can easily be avoided by comparing forward voltages of the LEDs used in the matrix and checking for compatibility issues. Or, more simply, using LEDs that all have the same forward voltage.\n",
"Some incandescent or LED-based strings use a power supply transformer with lamps connected in parallel. These sets are much safer, but there is a voltage drop at the end of the string causing reduced brightness of the lamps at the end of the set. The reduced brightness is, however, less noticeable with LED-based sets than incandescent sets. Power supplies with integrated plugs may make the set difficult to connect in certain places.\n",
"The simple capacitive or resistive dropper power supply used by some cheaper bulbs will cause some flickering at twice the mains alternating current frequency, difficult to detect but possibly contributing to eyestrain and headaches. Also, due to the Ohm's law, the current delivered by capacitive power supplies increases with mains frequency, e.g., from 50Hz to 60Hz.\n",
"BULLET::::- The Rainbow Effect: This is an unwanted visual artifact that is described as flashes of colored light seen when the viewer looks across the display from one side to the other. This artifact is unique to single-chip DLP projectors. The Rainbow Effect is significant only in DLP displays that use a single white lamp with a \"color wheel\" that is synchronized with the display of red, green and blue components. LED illumination systems that use discrete red, green and blue LEDs in concert with the display of red, green and blue components at high frequency reduce, or altogether eliminate, the Rainbow effect.\n",
"LEDs use much less electricity (only 4 watts for a 70-light string) and have a much greater lifespan than incandescent lamps. Since they are constructed from solid state materials and have no metallic filaments to burn out or break, LEDs are much less susceptible to breakage from impact or rough handling.\n",
"Visible light communications (VLC) works by switching the current to the LEDs off and on at a very high speed, too quick to be noticed by the human eye, thus, it does not present any flickering. Although Li-Fi LEDs would have to be kept on to transmit data, they could be dimmed to below human visibility while still emitting enough light to carry data.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
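Several passages in the record above (and the answer itself) describe pulse-width modulation: average brightness tracks the on/off duty cycle, and the flicker is hidden only when the switching rate beats the eye's flicker-fusion threshold (the Charlieplexing passage quotes 50 Hz as a minimum for multiplexed displays). A small illustrative Python sketch of that relationship; the 90 Hz default threshold and the sample duty/frequency pairs are assumptions, since real thresholds vary with brightness, viewer, and eye movement:

```python
# PWM sketch: perceived brightness is roughly the duty cycle (on-time fraction),
# and flicker becomes noticeable when the pulse rate drops below a fusion threshold.

def average_brightness(duty_cycle: float) -> float:
    """Fraction of full brightness for a given on-time fraction in [0, 1]."""
    return max(0.0, min(1.0, duty_cycle))

def flicker_visible(pwm_hz: float, fusion_threshold_hz: float = 90.0) -> bool:
    """Very rough visibility check; the 90 Hz default is an assumed figure."""
    return pwm_hz < fusion_threshold_hz

# Mains-rectified LEDs pulse at 100 or 120 Hz; dimmed tail lights may use lower rates.
for duty, freq in [(1.0, 120.0), (0.5, 200.0), (0.2, 60.0)]:
    print(f"duty={duty:.0%} at {freq:g} Hz -> "
          f"~{average_brightness(duty):.0%} brightness, "
          f"flicker visible: {flicker_visible(freq)}")
```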
2018-01705 | What exactly is a commercial pilot doing in the cockpit during a flight? So many buttons, pedals, paperwork etc? | Over twenty years' experience, currently flying a 737. All the buttons and switches are basically used for lights, power, pumps and engine start-up procedures. Take-off and landing are basically the only times I'm controlling the aircraft by hand. There is paperwork; sometimes I'm evaluating a first officer, other times just chatting. There is no sleeping on the flight deck; there have to be two pilots awake at all times. Long hauls have more than two pilots onboard so they can rotate sleep. | [
"BULLET::::- Jim McLean, Winnipeg TCA Base Maintenance\n\nBULLET::::- Nelson Harvey, Winnipeg Airport Air Traffic Controller\n\nBULLET::::- Captain Hobson, pilot instructor (left hand seat)\n\nBULLET::::- Captain Tom Payton, pilot instructor (In passenger cabin)\n\nBULLET::::- Captain Rene Giguere, pilot manager/instructor (right seat)\n\nSection::::Production.\n",
"Section::::Flight instruments.:FMS.\n\nThe flight management system/control unit may be used by the pilot to enter and check for the following information: flight plan, speed control, navigation control, and so on.\n\nSection::::Flight instruments.:Back-up instruments.\n\nIn a less prominent part of the cockpit, in case of failure of the other instruments, there will be a battery-powered integrated standby instrument system along with a magnetic compass, showing essential flight information such as speed, altitude, attitude and heading.\n\nSection::::Aerospace industry technologies.\n",
"Section::::Permits and licences.:Flight engineer licence.\n\nSome airliners are flown by a third crew member in addition to the pilot and co-pilot, a flight engineer. The flight engineer is responsible for monitoring aircraft systems in flight and for inspecting the aircraft before and after each flight. The Boeing 747-300 is an example of an airliner that employs a flight engineer. Recent airliners from manufacturers such as Boeing and Airbus are designed for a two pilot crew with flight management functions previously the responsibility of a flight engineer now handled by automation. Many older airliners flying require a flight engineer.\n",
"(1) A sport, recreational, private, commercial, or airline transport pilot may log pilot in command flight time for flights-\n\n(i) When the pilot is the sole manipulator of the controls of an aircraft for which the pilot is rated, or has sport pilot privileges for that category and class of aircraft, if the aircraft class rating is appropriate;\n\n(ii) When the pilot is the sole occupant in the aircraft;\n",
"(iii) When the pilot, except for a holder of a sport or recreational pilot certificate, acts as pilot in command of an aircraft for which more than one pilot is required under the type certification of the aircraft or the regulations under which the flight is conducted; or\n\nFAR Part 91.3 Responsibility and authority of the pilot in command.\n\n(a) The pilot in command of an aircraft is directly responsible for, and is the final authority as to, the operation of that aircraft.\n",
"Unlike in the United States, even for VFR flights, pilots are required to file a flight plan or have a flight itinerary with a responsible person for any flight greater than 25 nm from the departure aerodrome. Also, in Canada, flight plans are opened automatically at the estimated time of departure (ETD). Flight information centres play a prominent role managing flight plans, collecting position reports from pilots en route, and initiating commsearch procedures to locate pilots who have not closed flight plans.\n",
"Members may be motivated to complete flights in order to qualify for awards or certificates, either from completing specific routes (commonly referred to as \"tours\" in this context), or from a total number of hours completed either overall or on a specific type of aircraft. In such circumstances, profiles are provided for pilots where others can see their accomplishments and an overall roster displays an individuals performance among others in the group.\n\nSection::::Operation.:Airline hubs.\n",
"Most modern commercial aircraft with auto-pilots use flight computers and so called flight management systems(FMS) that can fly the aircraft without the pilot's active intervention during certain phases of flight. Also under development or in production are unmanned vehicles: missiles and drones which can take off, cruise and land without airborne pilot intervention.\n",
"The typical ready room is equipped as follows:\n\nBULLET::::- armchair seats for the pilots, usually of airliner type\n\nBULLET::::- coffee and magazines\n\nBULLET::::- a loudspeaker, known as the \"bull horn\"\n\nBULLET::::- an illuminated ticker tape\n\nBULLET::::- a main board at the front of the room\n\nBULLET::::- a blackboard\n\nBULLET::::- a \"Ouija board\", used to track aircraft movements\n\nThe ready room personnel comprises:\n\nBULLET::::- the on-duty squadron pilots\n\nBULLET::::- the squadron commander or executive officer\n\nBULLET::::- the permanent duty officer\n\nBULLET::::- the squadron Air Combat Information officer\n\nBULLET::::- the \"talker\", an enlisted man who communicates with Air Plot\n",
"U.S. FAA FAR 121.533(e) gives broad and complete final authority to airline captains: \"Each pilot in command has full control and authority in the operation of the aircraft, without limitation, over other crewmembers and their duties during flight time, whether or not he holds valid certificates authorizing him to perform the duties of those crewmembers.\"\n\nICAO and other countries equivalent rules are similar.\n\nIn Annex 2, \"Rules of the Air\", under par. \"2.3.1 Responsibility of pilot-in-command\", ICAO declares:\n",
"Section::::Flight instruments.:PFD.\n",
"Section::::Difficulties.:Automation Bias.\n",
"(1) For the purpose of obtaining instrument experience in an aircraft (other than a glider), performed and logged under actual or simulated instrument conditions, either in flight in the appropriate category of aircraft for the instrument privileges sought or in a flight simulator or flight training device that is representative of the aircraft category for the instrument privileges sought—\n\n(i) At least six instrument approaches;\n\n(ii) Holding procedures; and\n\n(iii) Intercepting and tracking courses through the use of navigation systems.\n",
"In the 2012 film \"Ted\", the main character, John Bennett, tells the story of how he met Lori Collins. The flashback is a close recreation of the scene where Ted Striker met Elaine Dickinson in the disco.\n\nIn early 2014 Delta Air Lines began using a new on-board safety film with many 1980s references, featuring an ending with a cameo of Kareem Abdul-Jabbar reprising his role as co-pilot Roger Murdock.\n",
"On two-pilot flight deck airplanes, sensors and computers monitor and adjust systems automatically. There is no onboard technical expert and third pair of eyes. If a malfunction, abnormality or emergency occurs, it is displayed on an electronic display panel and the computer automatically initiates corrective action to rectify the abnormal condition. One pilot does the flying and the other pilot resolves the issue. Modern technological advancements in today's aircraft have reduced the dependence upon human control over systems.\n",
"Section::::General structure of certification.:Multi-crew pilot license.\n",
"Section::::Operations.:Staff.:Pilots.\n\nAir Evac's instrument-rated pilots are skilled aviators who become proficient air medical pilots by training under its proprietary and Federal Aviation Administration-approved program. Each certified pilot meets FAA standards and has flown, on average, more than 5,700 hours\n",
"The following excerpts from the CFR Title 14 Aeronautics and Space illustrate the Federal Aviation Regulations (FAR) pertaining to logging pilot-in-command and the flight experience required for maintaining IFR currency.\n\nSection::::FAA Regulations.:Logging PIC time.\n\nFAR Part 1 Defines pilot-in-command as follows:\n\nPilot in command means the person who:\n\n(1) Has final authority and responsibility for the operation and safety of the flight;\n\n(2) Has been designated as pilot in command before or during the flight; and\n\n(3) Holds the appropriate category, class, and type rating, if appropriate, for the conduct of the flight.\n\nFAR Part 61.51(e) Logging pilot-in-command flight time.\n",
"To encourage high traffic density and provide a variety of different situations for pilots, BVA often holds weekly events. These events range in focus from crowding aircraft into dense areas, closely simulating the amount of traffic the airports real world counterpart would receive, to an event that highlights a particular skill set or technique.\n",
"Section::::Traffic advisories.\n\nIn the United States, Canada, and Australia, a pilot operating under VFR outside Class B, C, D airspace can request \"flight following\" from ATC. This service is provided by ATC if workload permits, but it is an advisory service only. The responsibility for maintaining separation with other aircraft and proper navigation still remains with the pilot in command (PIC). In the United Kingdom, this is known as a \"Traffic Service\". In other countries it is known as \"Flight Information Service\".\n\nSection::::Pilot certifications.\n",
"There are some positions that have and will serve the same function in every vehicle's flight control team. The group of individuals serving in those positions may be different, but they will be called the same thing and serve the same function.\n\nSection::::Common flight control positions.:Flight director.\n\nLeads the flight control team. \"Flight\" has overall operational responsibility for missions and payload operations and for all decisions regarding safe, expedient flight. This person monitors the other flight controllers, remaining in constant verbal communication with them via intercom channels called \"loops\".\n\nSection::::Common flight control positions.:Flight operations directorate (FOD).\n",
"Following resolution of an earlier incident with a faulty pitot tube that lasted a few minutes, the pilot-in-command left to take a rest break, leaving control in the hands of the copilots. When the two copilots were operating the Airbus around 02:11:21, it was not clear which one of the two was in charge of the plane, nor did the copilots communicate with each other about who was in control of the plane.\n",
"CRM aviation training has gone by several names, including cockpit resource management, flightdeck resource management, and command, leadership, and resource management, but the current generic term, crew resource management, was widely adopted. When CRM techniques are applied to other arenas, they are sometimes given unique labels, such as maintenance resource management or maritime resource management.\n",
"Section::::Flight physicals.:Types of flight physicals.\n",
"Section::::Modern high-end flight simulators.:Disorientation training.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-03706 | Why can't electrons be bound together by nuclear force like protons in a nucleus? | Electrons are leptons and do not interact via the strong force. Only quarks and objects made of quarks (hadrons) interact via the strong force. ELI5: Legos (electromagnetism) and velcro (strong force) are two different ways of sticking things together. You can’t attach a lego piece (electron) to something velcro, unless that thing is a lego covered in velcro (proton). | [
"Unlike gravity or electrical forces, the nuclear force is effective only at very short distances. At greater distances, the electrostatic force dominates: the protons repel each other because they are positively charged, and like charges repel. For that reason, the protons forming the nuclei of ordinary hydrogen—for instance, in a balloon filled with hydrogen—do not combine to form helium (a process that also would require some protons to combine with electrons and become neutrons). They cannot get close enough for the nuclear force, which attracts them to each other, to become important. Only under conditions of extreme pressure and temperature (for example, within the core of a star), can such a process take place.\n",
"To disassemble a nucleus into unbound protons and neutrons requires work against the nuclear force. Conversely, energy is released when a nucleus is created from free nucleons or other nuclei: the nuclear binding energy. Because of mass–energy equivalence (i.e. Einstein's famous formula ), releasing this energy causes the mass of the nucleus to be lower than the total mass of the individual nucleons, leading to the so-called \"mass defect\".\n",
"The strong force only acts \"directly\" upon elementary particles. However, a residual of the force is observed between hadrons (the best known example being the force that acts between nucleons in atomic nuclei) as the nuclear force. Here the strong force acts indirectly, transmitted as gluons, which form part of the virtual pi and rho mesons, which classically transmit the nuclear force (see this topic for more). The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.\n\nSection::::Fundamental forces.:Weak nuclear.\n",
"The nuclear force has a spin-dependent component. The force is stronger for particles with their spins aligned than for those with their spins anti-aligned. If two particles are the same, such as two neutrons or two protons, the force is not enough to bind the particles, since the spin vectors of two particles of the same type must point in opposite directions when the particles are near each other and are (save for spin) in the same quantum state. This requirement for fermions stems from the Pauli exclusion principle. For fermion particles of different types, such as a proton and neutron, particles may be close to each other and have aligned spins without violating the Pauli exclusion principle, and the nuclear force may bind them (in this case, into a deuteron), since the nuclear force is much stronger for spin-aligned particles. But if the particles' spins are anti-aligned the nuclear force is too weak to bind them, even if they are of different types.\n",
"Although the nuclear force is weaker than strong interaction itself, it is still highly energetic: transitions produce gamma rays. The mass of nuclei is significantly different from the masses of the individual nucleons. This mass defect is due to the potential energy associated with the nuclear force. Differences between mass defects power nuclear fusion and nuclear fission.\n\nSection::::Unification.\n",
"The nuclear forces arising between nucleons are analogous to the forces in chemistry between neutral atoms or molecules called London forces. Such forces between atoms are much weaker than the attractive electrical forces that hold the atoms themselves together (i.e., that bind electrons to the nucleus), and their range between atoms is shorter, because they arise from small separation of charges inside the neutral atom. Similarly, even though nucleons are made of quarks in combinations which cancel most gluon forces (they are \"color neutral\"), some combinations of quarks and gluons nevertheless leak away from nucleons, in the form of short-range nuclear force fields that extend from one nucleon to another nearby nucleon. These nuclear forces are very weak compared to direct gluon forces (\"color forces\" or strong forces) inside nucleons, and the nuclear forces extend only over a few nuclear diameters, falling exponentially with distance. Nevertheless, they are strong enough to bind neutrons and protons over short distances, and overcome the electrical repulsion between protons in the nucleus.\n",
"There are two main reasons for this fact. First, the strong interaction acts essentially among the quarks forming the nucleons. The nucleon–nucleon interaction in vacuum is a mere \"consequence\" of the quark–quark interaction. While the latter is well understood in the framework of the Standard Model at high energies, it is much more complicated in low energies due to color confinement and asymptotic freedom. Thus there is yet no fundamental theory allowing one to deduce the nucleon–nucleon interaction from the quark–quark interaction. Furthermore, even if this problem were solved, there would remain a large difference between the ideal (and conceptually simpler) case of two nucleons interacting in vacuum, and that of these nucleons interacting in the nuclear matter. To go further, it was necessary to invent the concept of effective interaction. The latter is basically a mathematical function with several arbitrary parameters, which are adjusted to agree with experimental data.\n",
"To reduce the disruptive energy, the weak interaction allows the number of neutrons to exceed that of protons—for instance, the main isotope of iron has 26 protons and 30 neutrons. Isotopes also exist where the number of neutrons differs from the most stable number for that number of nucleons. If the ratio of protons to neutrons is too far from stability, nucleons may spontaneously change from proton to neutron, or neutron to proton.\n",
"of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force of\n\nformula_2\n",
"The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. This same force is much weaker \"between\" neutrons and protons, because it is mostly neutralized \"within\" them, in the same way that electromagnetic forces between neutral atoms (van der Waals forces) are much weaker than the electromagnetic forces that hold electrons in association with the nucleus, forming the atoms.\n",
"The net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons, corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are , , , and . Even though the nickel isotope, , is more stable, the iron isotope is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create through the alpha process.\n",
"Usually, the excitation of valence electrons (such as 3s for sodium) involves energies corresponding to photons of visible or ultraviolet light. The excitation of core electrons is possible, but requires much higher energies, generally corresponding to x-ray photons. This would be the case for example to excite a 2p electron of sodium to the 3s level and form the excited 1s2s2p3s configuration.\n\nThe remainder of this article deals only with the ground-state configuration, often referred to as \"the\" configuration of an atom or molecule.\n\nSection::::History.\n",
"The protons and neutrons that comprise an atomic nucleus behave almost identically within the nucleus. The approximate symmetry of isospin treats these particles as identical, but in a different quantum state. This symmetry is only approximate, however, and the nuclear force that binds nucleons together is a complicated function depending on nucleon type, spin state, electric charge, momentum, etc. and with contributions from non-central forces. The nuclear force is not a fundamental force of nature, but a consequence of the residual effects of the strong force that surround the nucleons. One consequence of these complications is that although deuterium, a bound state of a proton (p) and a neutron (n) is stable, exotic nuclides such as diproton or dineutron have no stability. The nuclear force is not sufficiently strong to form either p-p or n-n bound states, or equivalently, the nuclear force does not form a potential well deep enough to bind these identical nucleons.\n",
"Within each group (each periodic table column) of metals, reactivity increases with each lower row of the table (from a light element to a heavier element), because a heavier element has more electron shells than a lighter element; a heavier element's valence electrons exist at higher principal quantum numbers (they are farther away from the nucleus of the atom, and are thus at higher potential energies, which means they are less tightly bound).\n",
"In the case of electrons, the behavior depends on temperature and context. At low temperatures, with no positrons present, electrons cannot be created or destroyed. Therefore, there is an electron chemical potential that might vary in space, causing diffusion. At very high temperatures, however, electrons and positrons can spontaneously appear out of the vacuum (pair production), so the chemical potential of electrons by themselves becomes a less useful quantity than the chemical potential of the conserved quantities like (electrons minus positrons).\n",
"The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force, which combines together protons and neutrons, and the Coulomb force, which causes protons to repel each other. Protons are positively charged and repel each other by the Coulomb force, but they can nonetheless stick together, demonstrating the existence of another, short-range, force referred to as nuclear attraction. Light nuclei (or nuclei smaller than iron and nickel) are sufficiently small and proton-poor allowing the nuclear force to overcome repulsion. This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up nuclei from lighter nuclei by fusion releases the extra energy from the net attraction of particles. For larger nuclei, however, no energy is released, since the nuclear force is short-range and cannot continue to act across longer nuclear length scales. Thus, energy is not released with the fusion of such nuclei; instead, energy is required as input for such processes.\n",
"Nuclei are bound together by the residual strong force (nuclear force). The residual strong force is a minor residuum of the strong interaction which binds quarks together to form protons and neutrons. This force is much weaker \"between\" neutrons and protons because it is mostly neutralized within them, in the same way that electromagnetic forces \"between\" neutral atoms (such as van der Waals forces that act between two inert gas atoms) are much weaker than the electromagnetic forces that hold the parts of the atoms together internally (for example, the forces that hold the electrons in an inert gas atom bound to its nucleus).\n",
"In hydrogen, or any other atom in group 1A of the periodic table (those with only one valence electron), the force on the electron is just as large as the electromagnetic attraction from the nucleus of the atom. However, when more electrons are involved, each electron (in the \"n\"-shell) experiences not only the electromagnetic attraction from the positive nucleus, but also repulsion forces from other electrons in shells from 1 to \"n\". This causes the net force on electrons in outer shells to be significantly smaller in magnitude; therefore, these electrons are not as strongly bonded to the nucleus as electrons closer to the nucleus. This phenomenon is often referred to as the orbital penetration effect. The shielding theory also contributes to the explanation of why valence-shell electrons are more easily removed from the atom.\n",
"The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an electrostatic repulsion from \"all\" the other protons in the nucleus. The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as nuclei atomic number grows.\n",
"Section::::Requirements.\n\nA substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances, two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the quantum effect in which nuclei can tunnel through coulomb forces.\n",
"This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit.\n",
"It is not the case that every quark in the universe attracts every other quark in the above distance independent manner. Color confinement implies that the strong force acts without distance-diminishment only between pairs of quarks, and that in collections of bound quarks (hadrons), the net color-charge of the quarks essentially cancels out, resulting in a limit of the action of the forces. Collections of quarks (hadrons) therefore appear nearly without color-charge, and the strong force is therefore nearly absent between those hadrons except that the cancellation is not quite perfect. A residual force remains (described below) known as the residual strong force. This residual force \"does\" diminish rapidly with distance, and is thus very short-range (effectively a few femtometers). It manifests as a force between the \"colorless\" hadrons, and is sometimes known as the strong nuclear force or simply nuclear force.\n",
"This led Rutherford to propose a planetary model in which a cloud of electrons surrounded a small, compact nucleus of positive charge. Only such a concentration of charge could produce the electric field strong enough to cause the heavy deflection.\n\nSection::::History.:First steps toward a quantum physical model of the atom.\n",
"The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 \"million\" eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals.\n",
"The nucleus of an atom consists of neutrons and protons, which in turn are the manifestation of more elementary particles, called quarks, that are held in association by the nuclear strong force in certain stable combinations of hadrons, called baryons. The nuclear strong force extends far enough from each baryon so as to bind the neutrons and protons together against the repulsive electrical force between the positively charged protons. The nuclear strong force has a very short range, and essentially drops to zero just beyond the edge of the nucleus. The collective action of the positively charged nucleus is to hold the electrically negative charged electrons in their orbits about the nucleus. The collection of negatively charged electrons orbiting the nucleus display an affinity for certain configurations and numbers of electrons that make their orbits stable. Which chemical element an atom represents is determined by the number of protons in the nucleus; the neutral atom will have an equal number of electrons orbiting that nucleus. Individual chemical elements can create more stable electron configurations by combining to share their electrons. It is that sharing of electrons to create stable electronic orbits about the nucleus that appears to us as the chemistry of our macro world.\n"
] | [
"If protons in a nucleus can be bound together by nuclear force, then so can electrons. "
] | [
"Electrons are leptons, therefore they don't interact. "
] | [
"false presupposition"
] | [
"If protons in a nucleus can be bound together by nuclear force, then so can electrons. ",
"If protons in a nucleus can be bound together by nuclear force, then so can electrons. "
] | [
"normal",
"false presupposition"
] | [
"Electrons are leptons, therefore they don't interact. ",
"Electrons are leptons, therefore they don't interact. "
] |
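The passages in the record above contrast the long-range Coulomb repulsion between protons with the short-range nuclear attraction, and one of them compares the hydrogen electron's 13.6 eV binding energy with deuterium's 2.23 MeV nuclear binding energy. A back-of-the-envelope Python sketch of those scales; the 1 fm separation is an assumed, typical nucleon distance rather than a value from the excerpts:

```python
# Compare the Coulomb repulsion energy of two protons at nuclear distance
# with the atomic- and nuclear-scale binding energies quoted in the passages.
K = 8.9875e9          # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # proton charge, C
JOULES_PER_EV = 1.602e-19

def coulomb_energy_ev(separation_m: float) -> float:
    """Electrostatic potential energy of two protons, in electronvolts."""
    return K * E_CHARGE**2 / separation_m / JOULES_PER_EV

r_nuclear = 1e-15  # ~1 femtometre, an assumed typical nucleon separation
print(f"Coulomb energy of two protons at 1 fm: {coulomb_energy_ev(r_nuclear)/1e6:.2f} MeV")
print("Hydrogen electron binding energy:       13.6 eV   (from the passages)")
print("Deuterium nuclear binding energy:       2.23 MeV  (from the passages)")
# The MeV-scale Coulomb barrier is only overcome because the strong force acts
# between nucleons at this range; electrons, being leptons, never feel it.
```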
2018-04894 | Why do our bodies change sleeping positions when we're already asleep? | So we don't get pressure sores from lack of blood flow, so we don't lose limbs in our sleep, so we regulate temperature better. | [
"Sleep inversion\n\nSleep inversion or sleep-wake inversion is a reversal of sleeping tendencies. Individuals experiencing sleep-wake inversion exchange diurnal habits for nocturnal habits, meaning they are active at night and sleep during the day. Sleep-wake inversion, when involuntary, can be a sign of a serious disorder.\n\nSection::::Symptoms.\n",
"To differentiate circadian rhythm sleep disorder from other diagnoses, the sleep disruption must not occur exclusively during the cause of another sleep disorder or other disorder. The disturbance in sleep must not be due to the direct physiological effects of a substance, whether used for medication or abuse, or to a general medical condition.\n",
"The unspecified type of circadian rhythm sleep disorder is characterized by a pattern of sleep-wake disturbance and circadian mismatch that is not due to the causes of the other three types. Examples of other causes include irregular sleep-wake patterns and non-24-hour sleep-wake patterns. If an individual's sleep-wake pattern is based on a period of time of slightly more than 24 hours, their circadian rhythm can become progressively delayed.\n\nSleep inversion may be a symptom of elevated blood ammonia levels. \n\nSection::::Diagnosis.\n",
"Because circadian rhythm sleep disorder is usually related to environmental stressors, avoidance of these stressors (such as long-distance travel, shift work, and sleep-disrupting lifestyles) can prevent the disorder from beginning or continuing. People who are able to adhere strictly to a normal sleep-wake schedule can also offset circadian rhythm-related problems.\n\nSection::::Treatment.\n",
"HD cells continue to fire in an organized manner during sleep, as if animals were awake. However, instead of always pointing toward the same direction—the animals are asleep and thus immobile—the neuronal \"compass needle\" moves constantly. In particular, during rapid eye movement sleep, a brain state rich in dreaming activity in humans and whose electrical activity is virtually indistinguishable from the waking brain, this directional signal moves as if the animal is awake: that is, HD neurons are sequentially activated, and the individual neurons representing a common direction during wake are still active, or silent, at the same time.\n",
"The shift work type of circadian rhythm sleep disorder is distinguished by disruptions due to a conflict between a person's endogenous circadian cycle and the cycle required by shift work. Individuals who work the night shift often experience this problem, especially those people who switch to a normal sleep schedule on days off. Also, people who work rotating shifts experience this problem because of the changing sleep-wake schedules they experience. The disruptions caused by shift work result in inconsistent circadian schedules and an inability to adjust to the changes consistently.\n",
"Modern scientific studies have suggested a beneficial effect of the right lateral decubitus position on the heart. In particular, one study assessed the autonomic effect of three sleep positions (supine, left lateral decubitus, and right lateral decubitus) in healthy subjects using spectral heart rate variability analysis. The results indicated that cardiac vagal activity was greatest when subjects were in the right lateral decubitus position.\n\nSection::::Islam.\n",
"Healthy sleep must include the appropriate sequence and proportion of NREM and REM phases, which play different roles in the memory consolidation-optimization process. During a normal night of sleep, a person will alternate between periods of NREM and REM sleep. Each cycle is approximately 90 minutes long, containing a 20-30 minute bout of REM sleep. NREM sleep consists of sleep stages 1–4, and is where movement can be observed. A person can still move their body when they are in NREM sleep. If someone sleeping turns, tosses, or rolls over, this indicates that they are in NREM sleep. REM sleep is characterized by the lack of muscle activity. Physiological studies have shown that aside from the occasional twitch, a person actually becomes paralyzed during REM sleep. In motor skill learning, an interval of sleep may be critical for the expression of performance gains; without sleep these gains will be delayed.\n",
"In order to diagnose circadian rhythm sleep disorder, patients are often asked for records of their sleep and wake times in order to determine if a diagnosis is warranted. Interviews and direct observation in a sleep lab may also be utilized. A diagnosis requires a pattern of sleep disruption caused by a mismatch between a person's circadian sleep-wake pattern and the pattern required by that person's environment. The disruption can be persistent or recurrent and leads to impaired functioning, often in a social or occupational context.\n",
"Section::::University of Surrey.\n\nDijk created the Surrey Sleep Research Centre in 2003 and remains its Director, leading a team that investigates the regulation and function of sleep and biological rhythms at many different levels of organisation, from gene expression to cognition. In 2005 he became a Professor of Sleep and Physiology. He served as Associate Dean (research) for the Faculty of Health and Medical Sciences (2013-2015)\n\nDijk was also the Director of Sleep-Wake Research in the University of Surrey's Clinical Research Centre.\n",
"In 1993, a different model called the opponent process model was proposed. This model explained that these two processes opposed each other to produce sleep, as against Borbely's model. According to this model, the SCN, which is involved in the circadian rhythm, enhances wakefulness and opposes the homeostatic rhythm. In opposition is the homeostatic rhythm, regulated via a complex multisynaptic pathway in the hypothalamus that acts like a switch and shuts off the arousal system. Both effects together produce a see-saw like effect of sleep and wakefulness. More recently, it has been proposed that both models have some validity to them, while new theories hold that inhibition of NREM sleep by REM could also play a role. In any case, the two process mechanism adds flexibility to the simple circadian rhythm and could have evolved as an adaptive measure.\n",
"A shift in the otolithic membrane that stimulates the cilia is considered the state of the body until the cilia are once again stimulated. E.g. lying down stimulates cilia and standing up stimulates cilia, however, for the time spent lying the signal that you are lying remains active, even though the membrane resets.\n\nOtolithic organs have a thick, heavy gelatin membrane that, due to inertia (like endolymph), lags behind and continues ahead past the macula it overlays, bending and activating the contained cilia.\n",
"Section::::In humans.\n\nLordosis behavior became secondary in hominidae and is non-functional in humans. There is no human analogue to the lordosis reflex, although lordosis-like positions can be observed in women being mounted from behind. \n",
"Another major theory is that the neural functions that regulate sleep are out of balance in such a way that causes different sleep states to overlap. In this case, cholinergic sleep on neural populations are hyperactivated and the serotonergic sleep off neural populations are under-activated. As a result, the cells capable of sending the signals that would allow for complete arousal from the sleep state, the serotonergic neural populations, have difficulty in overcoming the signals sent by the cells that keep the brain in the sleep state. During normal REM sleep, the threshold for a stimulus to cause arousal is greatly elevated. Under normal conditions, medial and vestibular nuclei, cortical, thalamic, and cerebellar centers coordinate things such as head and eye movement, and orientation in space.\n",
"Individuals with the jet lag type of circadian rhythm sleep disorder demonstrate sleepiness during the desired wake portion of the day due to the change in time zone. They have difficulty sleeping during the desired sleep portion of the day. They also have difficulty altering their sleep-wake schedule to one appropriate to the new time zone.\n",
"The majority of sleep neurons are located in the ventrolateral preoptic area (vlPOA). These sleep neurons are silent until an individual shows a transition from waking to sleep. The sleep neurons in the preoptic area receive inhibitory inputs from some of the same regions they inhibit, including the tubermammillary nucleus, raphe nuclei, and locus coeruleus. Thus, they are inhibited by histamine, serotonin, and norepinepherine. This mutual inhibition may provide the basis for establishing periods of sleep and waking. A reciprocal inhibition also characterizes an electronic circuit known as the flip-flop. A flip-flop can assume one of two states, usually referred to as on or off. Thus, either the sleep neurons are active and inhibit the wakefulness neurons, or the wakefulness neurons are active and inhibit the sleep neurons, Because these regions are mutually inhibitory, it is impossible for neurons in both sets of regions to be active at the same time. This flip-flop, switching from one state to another quickly, can be unstable.\n",
"Sleeping positions\n\nThe sleeping position is the body configuration assumed by a person during or prior to sleeping. It has been shown to have health implications, particularly for babies.\n\nSection::::Sleeping preferences.\n\nA Canadian survey found that 39% of respondents preferring the \"log\" position (lying on one's side with the arms down the side) and 28% preferring to sleep on their side with their legs bent.\n",
"Humans, like most living organisms, have various biological rhythms. These biological clocks control processes that fluctuate daily (e.g. body temperature, alertness, hormone secretion), generating circadian rhythms. Among these physiological characteristics, our sleep-wake propensity can also be considered one of the daily rhythms regulated by the biological clock system. Our sleeping cycles are tightly regulated by a series of circadian processes working in tandem, which allow us to experience moments of consolidated sleep during the night and a long wakeful moment during the day. Conversely, disruptions to these processes and the communication pathways between them can lead to problems in sleeping patterns, which are collectively referred to as Circadian rhythm sleep disorders.\n",
"Section::::Normal.:Steady NREM (Non-REM) sleep.\n\nSection::::Normal.:Steady NREM (Non-REM) sleep.:Ventilation.\n\nBreathing is remarkably regular, both in amplitude and frequency in steady NREM sleep. Steady NREM sleep has the lowest indices of variability of all sleep stages. Minute ventilation decreases by 13% in steady stage II sleep and by 15% in steady slow wave sleep (Stage III and Stage IV sleep). Mean inspiratory flow is decreased but inspiratory duration and respiratory cycle duration are unchanged, resulting in an overall decreased tidal volume.\n",
"Diagnosis of any type of circadian rhythm sleep disorder must be distinguished from normal adjustments a person makes in reaction to a schedule change. The sleep disruptions must be persistent and recurring and lead to social or occupational problems. People who prefer unusually late or early sleep schedules or people adjusting to a new sleep schedule should not receive this diagnosis unless they meet the other criteria.\n\nSection::::Diagnosis.:Definition.\n",
"Brain arousal is stimulated by the circadian system during the day and sleep is usually stimulated at night. The rhythms are maintained in the suprachiasmatic nucleus (SCN), located in the anterior hypothalamus in the brain, and synchronized with the day/night cycle. Gene-transcription feedback loops in individual SCN cells form the molecular basis of biological timekeeping. Circadian phase shifts are dependent on the schedule of light exposure, the intensity, and previous exposure to light. Variations in exposure can advance or delay these rhythms. For example, the rhythms can be delayed due to light exposure at night.\n",
"Obstructive sleep apnea (OSA) is a form of sleep apnea that occurs more frequently and is most severe when individuals are sleeping in the supine position. Studies and evidence show that OSA related to sleeping in the supine position is related to the airway positioning, reduced lung volume, and the inability of airway muscles to dilate enough to compensate as the airway collapses. With individuals who have OSA, many health care providers encourage their patients to avoid the supine position while asleep and sleep laterally or sleep with the head of their bed up in a 30 or 45 degree angle.\n",
"Sleep regulation refers to the control of when an organism transitions between sleep and wakefulness. The key questions here are to identify which parts of the brain are involved in sleep onset and what their mechanisms of action are. In humans and most animals sleep and wakefulness seems to follow an electronic flip-flop model i.e. both states are stable, but the intermediate states are not. Of course, unlike in the flip-flop, in the case of sleep, there seems to be a timer ticking away from the minute of waking so that after a certain period one must sleep, and in such a case even waking becomes an unstable state. The reverse may also be true to a lesser extent.\n",
"Set point of ventilation is different in wakefulness and sleep. pCO2 is higher and ventilation is lower in sleep. Sleep onset in normal subjects is not immediate, but oscillates between arousal, stage I and II sleep before steady NREM sleep is obtained. So falling asleep results in decreased ventilation and a higher pCO2, above the wakefulness set point. On wakefulness, this constitutes an error signal which provokes hyperventilation until the wakefulness set point is reached. When the subject falls asleep, ventilation decreases and pCO2 rises, resulting in hypoventilation or even apnea. These oscillations continue until steady state sleep is obtained. The medulla oblongata controls our respiration.\n",
"During the 1920s an obscure disorder that caused encephalitis and attacked the part of the brain that regulates sleep influenced Europe and North America. Although the virus that caused this disorder was never identified, the psychiatrist and neurologist Constantin von Economo decided to study this disease and identified a key component in the sleep-wake regulation. He identified the pathways that regulated wakefulness and sleep onset by studying the parts of the brain that were affected by the disease and the consequences it had on the circadian rhythm. He stated that the pathways that regulated sleep onset are located between the brain stem and the basal forebrain. His discoveries were not appreciated until the last two decades of the 20th century when the pathways of sleep onset were found to reside in the exact place that Constantin von Economo stated.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-02594 | How come exercising in the morning can help wake you up, while exercising in the evening can help you fall asleep? | It's not necessarily so. A lot of people find evening exercise stimulates them and makes it hard for them to get to sleep. Insomnia advice regularly includes avoiding late-evening exercise. | [
"Section::::Recommendations.:Activities.\n\nExercise is an activity that can facilitate or inhibit sleep quality; people who exercise experience better quality of sleep than those who do not, but exercising too late in the day can be activating and delay falling asleep. Increasing exposure to bright and natural light during the daytime and avoiding bright light in the hours before bedtime may help promote a sleep-wake schedule aligned with nature's daily light-dark cycle.\n",
"Preliminary evidence from a 2012 review indicated that physical training for up to four months may increase sleep quality in adults over 40 years of age. A 2010 review suggested that exercise generally improved sleep for most people, and may help with insomnia, but there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. A 2018 systematic review and meta-analysis suggested that exercise can improve sleep quality in people with insomnia.\n\nSection::::Health effects.:Excessive exercise.\n",
"Chinese exercise, particularly in the retired community, seems to be socially grounded. In the mornings, dances are held in public parks; these gatherings may include Latin dancing, ballroom dancing, tango, or even the jitterbug. Dancing in public allows people to interact with those with whom they would not normally interact, allowing for both health benefits and social benefits.\n\nThese sociocultural variations in physical exercise show how people in different geographic locations and social climates have varying motivations and methods of exercising. Physical exercise can improve health and well-being, as well as enhance community ties and appreciation of natural beauty.\n",
"Epidemiological studies on the relations between cardiovascular health and siesta have led to conflicting conclusions, possibly because of poor control of confounding variables, such as physical activity. It is possible that people who take a siesta have different physical activity habits, for example, waking earlier and scheduling more activity during the morning. Such differences in physical activity may lead to different 24-hour profiles in cardiovascular function. Even if such effects of physical activity can be discounted in explaining the relationship between siesta and cardiovascular health, it is still not known whether the daytime nap itself, a supine posture, or the expectancy of a nap is the most important factor.\n",
"Morning Exercises\n\nMorning Exercises refers to a religious observance by Puritans in London which started at the beginning of the English Civil War.\n\nSection::::Origins.\n",
"Phillips maintains that aerobic exercise is more effective for fat loss when done first thing in the morning, because it raises the metabolism for the remainder of the day, and because the body draws more heavily on its fat stores after fasting overnight.\n\nSection::::Diet.\n",
"Section::::Comparison with Haṭha yoga.:Similarities.\n",
"The siesta habit has recently been associated with a 37% lower coronary mortality, possibly due to reduced cardiovascular stress mediated by daytime sleep. Short naps at mid-day and mild evening exercise were found to be effective for improved sleep, cognitive tasks, and mental health in elderly people.\n",
"Section::::Management.\n\nSleeping in a more upright position seems to lessen catathrenia (as well as sleep apnea). Performing regular aerobic exercise, where steady breathing is necessary (running, cycling etc.) may lessen catathrenia. Strength exercise, on the other hand, may worsen catathrenia because of the tendency to hold one's breath while exercising. Yoga and/or meditation focused on steady and regular breathing may lessen catathrenia.\n\nSome evidence indicate that continous positive airway pressure can be an effective treatment for catathrenia: in a study, the subject using CPAP significantly decreased the sounds typically produced because of the disorder, which almost disappeared.\n\nSection::::External links.\n",
"It is important to note, however, that not all humans share identical circadian rhythms. One study across Italy and Spain had students fill out a questionnaire, then ranked them on a \"morningness–eveningness\" scale. The results were a fairly standard bell curve. Levels of alertness over the course of the day had a significant correlation with scores on the questionnaire. All categories of participants—evening types, morning types, and intermediate types—had high levels of alertness from roughly 2 pm to 8 pm, but outside this window their alertness levels corresponded to their scores.\n\nSection::::See also.\n\nBULLET::::- Morning\n\nBULLET::::- Sunset\n\nBULLET::::- Twilight\n",
"Section::::Comparison with Haṭha yoga.:Derivation.\n",
"A 2010 review of published scientific research suggested that exercise generally improves sleep for most people, and helps sleep disorders such as insomnia. The optimum time to exercise \"may\" be 4 to 8 hours before bedtime, though exercise at any time of day is beneficial, with the exception of heavy exercise taken shortly before bedtime, which may disturb sleep. However, there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. Sleeping medications such as Ambien and Lunesta are an increasingly popular treatment for insomnia. Although these nonbenzodiazepine medications are generally believed to be better and safer than earlier generations of sedatives, they have still generated some controversy and discussion regarding side-effects. White noise appears to be a promising treatment for insomnia.\n",
"At the same time you can work on developing a higher level of awareness of the many natural rhythms in your everyday life and use exercises to help bring those rhythms into your music.\n",
"Section::::Comparison with Haṭha yoga.:Differences.\n",
"Caffeine is the most widely used alerting drug in the world and has been shown to improve alertness in simulated night work. Caffeine and naps before a night shift reduces sleepiness during the shift. Modafinil and armodafinil are non-amphetamine alerting drugs originally developed for the treatment of narcolepsy that have been approved by the FDA (the US Food and Drug Administration) for excessive sleepiness associated with SWSD.\n\nSection::::Treatment.:Medications that promote daytime sleep.\n",
"Although muscle \"stimulation\" occurs in the gym (or home gym) when lifting weights, muscle \"growth\" occurs afterward during rest periods. Without adequate rest and sleep (6 to 8 hours), muscles do not have an opportunity to recover and grow. Additionally, many athletes find that a daytime nap further increases their body's ability to recover from training and build muscles. Some bodybuilders add a massage at the end of each workout to their routine as a method of recovering.\n\nSection::::Muscle growth.:Overtraining.\n",
"In a study done on the effect of lighting intensity on delta waves, a measure of sleepiness, high levels of lighting (1700 lux) showed lower levels of delta waves measured through an EEG than low levels of lighting (450 lux). This shows that lighting intensity is directly correlated with alertness in an office environment.\n",
"Wisconsin Bookwatch wrote that the PBS series is; \"An excellent introduction to an art useful for exercise, relieving stress and tension, posture improvement, and mental focus\". \n",
"It is easy to incorporate endurance, flexibility and strength activities into your daily routine for active living. Activities such as normal household chores can fit into more than one of the above categories, and it is simple enough to switch to using the stairs instead of taking the elevators at work.\n\nSection::::Recommendations.\n",
"Section::::In culture.:Research.\n",
"Sleep hygiene strategies include advice about timing of sleep and food intake in relationship to exercise and sleeping environment. Recommendations depend on knowledge of the individual situation; counselling is presented as a form of patient education.\n",
"Street workout is divided in two main branches, the first one being strength training and the second dynamics. Strength training includes the isometric holds like: planche, front lever, back lever, etc. Also, strength training includes single arm pull ups, hefestos, muscle-ups and many others. Dynamics includes movements like 360s and its variations, switchblades, and an incredible variety of tricks developed by the athletes which are connected with other moves in order to create routines or sets.\n\nSome of the main benefits of street workout activities are: \n\nBULLET::::- It is completely free;\n",
"The internal circadian clock is profoundly influenced by changes in light, since these are its main clues about what time it is. Exposure to even small amounts of light during the night can suppress melatonin secretion, and increase body temperature and wakefulness. Short pulses of light, at the right moment in the circadian cycle, can significantly 'reset' the internal clock. Blue light, in particular, exerts the strongest effect, leading to concerns that electronic media use before bed may interfere with sleep.\n",
"The siesta habit has recently been associated with a 37% reduction in coronary mortality, possibly due to reduced cardiovascular stress mediated by daytime sleep (Naska et al., 2007). Nevertheless, epidemiological studies on the relations between cardiovascular health and siesta have led to conflicting conclusions, possibly because of poor control of moderator variables, such as physical activity. It is possible that people who take a siesta have different physical activity habits, e.g. waking earlier and scheduling more activity during the morning. Such differences in physical activity may mediate different 24-hour profiles in cardiovascular function. Even if such effects of physical activity can be discounted for explaining the relationship between siesta and cardiovascular health, it is still unknown whether it is the daytime nap itself, a supine posture or the expectancy of a nap that is the most important factor. It was recently suggested that a short nap can reduce stress and blood pressure (BP), with the main changes in BP occurring between the time of lights off and the onset of stage 1 (Zaregarizi, M. 2007 & 2012).\n",
"Physical exercise can also include training that focuses on accuracy, agility, power, and speed.\n\nSometimes the terms 'dynamic' and 'static' are used. 'Dynamic' exercises such as steady running, tend to produce a lowering of the diastolic blood pressure during exercise, due to the improved blood flow. Conversely, static exercise (such as weight-lifting) can cause the systolic pressure to rise significantly, albeit transiently, during the performance of the exercise.\n\nSection::::Health effects.\n"
] | [
"Exercising at night helps you go to sleep.",
"Excercising in the morning helps wake a person up, whilst excercising at night helps one fall asleep. "
] | [
"Exercising at night can cause you to stay awake. ",
"There are many people who claim to find it difficult to sleep after working out at night, therefore this statement isn't completely true. "
] | [
"false presupposition"
] | [
"Exercising at night helps you go to sleep.",
"Excercising in the morning helps wake a person up, whilst excercising at night helps one fall asleep. "
] | [
"false presupposition",
"false presupposition"
] | [
"Exercising at night can cause you to stay awake. ",
"There are many people who claim to find it difficult to sleep after working out at night, therefore this statement isn't completely true. "
] |
2018-18727 | Why are Europeans considered “Westerners” just like Americans despite being right next to Asia? | The world is older than the United States. So, lots of years ago people living in Europe were the most western people in the known world. For a long time. Also. Here's something that is big. A lot of people in the United States are from Europe, based on European culture. So that's why the US gets to be Westerners. | [
"Western civilization is commonly said to include the United Kingdom, United States, Canada, Australia, New Zealand, European Union (and at least the EFTA countries, European microstates). \n\nThe definition is often widened, and can include these countries, or a combination of these countries:\n\nBULLET::::- European countries outside of the EU and EFTA – Due to sharing of the general European culture and Christian faith, these countries are included in the definition of the West.\n",
"Use of the term \"West\" as a specific cultural and geopolitical term developed over the course of the Age of Exploration as Europe spread its culture to other parts of the world. Roman Catholics were the first major religious group to immigrate to the New World, as settlers in the colonies of Portugal and Spain (and later, France) belonged to that faith. English and Dutch colonies, on the other hand, tended to be more religiously diverse. Settlers to these colonies included Anglicans, Dutch Calvinists, English Puritans and other nonconformists, English Catholics, Scottish Presbyterians, French Huguenots, German and Swedish Lutherans, as well as Quakers, Mennonites, Amish, and Moravians.\n",
"The concept of Western culture is generally linked to the classical definition of the \"Western world\". In this definition, Western culture is the set of literary, scientific, political, artistic and philosophical principles that set it apart from other civilizations. Much of this set of traditions and knowledge is collected in the Western canon.\n\nThe term has come to apply to countries whose history is strongly marked by European immigration or settlement, such as the Americas, and Oceania, and is not restricted to Europe.\n",
"There is debate among some as to whether Latin America as a whole is in a category of its own. Whether Russia should be categorized as \"East\" or \"\"West\"\" has been \"an ongoing discussion\" for centuries.\n\nSection::::Western/European culture.\n\nThe term \"Western culture\" is used very broadly to refer to a heritage of social norms, ethical values, traditional customs, religious beliefs, political systems, and specific artifacts and technologies.\n\nSpecifically, Western culture may imply:\n",
"In China \"Traditions Regarding Western Countries\" became a regular part of the \"Twenty-Four Histories\" from the 5th century CE, when commentary about The West concentrated upon on an area that did not extend farther than Syria. The extension of European imperialism in the 18th and 19th centuries established, represented, and defined the existence of an \"Eastern world\" and of a \"Western world\". Western stereotypes appear in works of Indian, Chinese and Japanese art of those times. At the same time, Western influence in politics, culture, economics and science came to be constructed through an imaginative geography of West and East.\n",
"It is difficult to determine which individuals fit into which category and the East–West contrast is sometimes criticized as relativistic and arbitrary. Globalism has spread Western ideas so widely that almost all modern cultures are, to some extent, influenced by aspects of Western culture. Stereotyped views of \"the West\" have been labeled Occidentalism, paralleling Orientalism—the term for the 19th-century stereotyped views of \"the East\".\n",
"The concept of European culture is generally linked to the classical definition of the Western world. In this definition, Western culture is the set of literary, scientific, political, artistic and philosophical principles which set it apart from other civilizations. Much of this set of traditions and knowledge is collected in the Western canon. The term has come to apply to countries whose history has been strongly marked by European immigration or settlement during the 18th and 19th centuries, such as the Americas, and Australasia, and is not restricted to Europe..\n",
"While the concept of a \"West\" did not exist until the emergence of the Roman Republic, the roots of the concept can be traced back to Ancient Greece. Since Homeric literature (the Trojan Wars), through the accounts of the Persian Wars of Greeks against Persians by Herodotus, and right up until the time of Alexander the Great, there was a paradigm of a contrast between Greeks and other civilizations. Greeks felt they were the most civilized and saw themselves (in the formulation of Aristotle) as something between the advanced civilisations of the Near East (who they viewed as soft and slavish) and the wild barbarians of most of Europe to the west.\n",
"Westerners are also known for their explorations of the globe and outer space. The first expedition to circumnavigate the Earth (1522) was by Westerners, as well as the first journey to the South Pole (1911), and the first Moon landing (1969). The landing of robots on Mars (2004 and 2012) and on an asteroid (2001), the \"Voyager 2\" explorations of the outer planets (Uranus in 1986 and Neptune in 1989), \"Voyager 1\"s passage into interstellar space (2013), and \"New Horizons\" flyby of Pluto (2015) were significant recent Western achievements.\n\nSection::::Media.\n",
"BULLET::::- With the development of ships in Eurasia, rivers became trade routes. Europe and empires in Greece and Rome benefited from the Mediterranean, compared to Chinese empires (who later built the Grand Canal for similar purposes).\n\nBULLET::::- Raids from the Eurasian Steppe brought diseases that caused epidemics in settled populations.\n\nBULLET::::- The Social Development Index shows the West leading until the 6th century, China leading until the 18th century, and the West leading again in the modern era.\n",
"The \"West\" was originally defined as the Western world. Ancient Romans distinguished between Oriental (Eastern, or Asian) cultures that inhabited present-day Egypt and Occidental cultures that lived in the West. A thousand years later, the East-West Schism separated the Catholic Church and Eastern Orthodox Church from each other. The definition of Western changed as the West was influenced by and spread to other nations. Islamic and Byzantine scholars added to the Western canon when their stores of Greek and Roman literature jump-started the Renaissance. Although Russia converted to Christianity in the 10th century, the West expanded to include it fully when Peter the Great deeply reformed the country's government, the church and modernised the society thanks to the ideas brought from the Netherlands. Today, most modern uses of the term refer to the societies in the West and their close genealogical, linguistic, and philosophical descendants, typically included are those countries whose ethnic identity and dominant culture are derived from European culture. However, though sharing in similar historical background, it would be incorrect to regard the Western world as a monolithic bloc, as many cultural, linguistic, religious, political, and economical differences exist between Western countries and populations.\n",
"By the mid-20th century, worldwide export of Western culture went through the new mass media: film, radio and television and recorded music, while the development and growth of international transport and telecommunication (such as transatlantic cable and the radiotelephone) played a decisive role in modern globalization. In modern usage, \"Western world\" sometimes refers to Europe and to areas whose populations largely originate from Europe, through the Age of Discovery.\n\nSection::::Introduction.\n",
"Western culture, throughout most of its history, has been nearly equivalent to Western Christian culture, and many of the population of the Western hemisphere could broadly be described as cultural Christians. The notion of \"Europe\" and the \"Western World\" has been intimately connected with the concept of \"Christianity and Christendom\" many even attribute Christianity for being the link that created a unified European identity.\n",
"Section::::History.:Medieval West.\n\nIn a narrow sense, the Medieval West referred specifically to the Catholic \"Latin\" West, also called \"Frankish\" during Charlemagne's reign, in contrast to the Orthodox East, where Greek remained the language of the Byzantine Empire. In its broadest sense, the Medieval West refers to the whole of Christendom, including both the Catholic West and the Orthodox East.\n",
"Section::::Definition.:Personal.\n\nA different view on the Western world is not defining it by its territory, but by its people group, as these tend to differ in an increasingly globalised world. This view highlights the non-Western population in countries with a Western majority, or vice versa. The Boers for instance can be regarded as Western inhabitants of South Africa.\n\nSection::::Process of Westernization.\n\nSection::::Process of Westernization.:Colonization (1400s–1970s).\n\nSection::::Process of Westernization.:Colonization (1400s–1970s).:Europeanization.\n\nFrom 1400s onward, Europeanization and colonialism spread gradually over much of the world and controlled different regions during this five centuries long period, colonizing or subjecting the majority of the globe.\n",
"Western culture\n\nWestern culture, sometimes equated with Western civilization, Occidental culture, the Western world, Western society, and European civilization, is the heritage of social norms, ethical values, traditional customs, belief systems, political systems, artifacts and technologies that originated in or are associated with Europe. The term also applies beyond Europe to countries and cultures whose histories are strongly connected to Europe by immigration, colonization, or influence. For example, Western culture includes countries in the Americas and Australasia, whose language and demographic ethnicity majorities are European. Western culture has its roots in Greco-Roman culture from classical antiquity (see Western canon). \n",
"According to the historian, Western civilisation's rise to global dominance is the single most important historical phenomenon of the past five centuries. All around the world, more and more people study at universities, work for companies, vote for governments, take medicines, wear clothes, and play sports, all of which have strong 'western' influences. Yet six hundred years ago the kingdoms of Western Europe seemed like miserable backwaters, ravaged by incessant war and pestilence. It was Ming China or Ottoman Turkey that had the look of world civilisations. How did the West overtake its Eastern rivals? And has the zenith of Western power now passed?\n",
"Section::::Modern definitions.\n\nThe exact scope of the \"Western world\" is somewhat subjective in nature, depending on whether cultural, economic, spiritual or political criteria are employed. It is a generally accepted western view to recognize the existence of at least three \"major worlds\" (or \"cultures\", or \"civilizations\"), broadly in contrast with the Western: the \"Eastern world\", the \"Arab\" and the \"African\" worlds, with no clearly specified boundaries. Additionally, \"Latin American\" and \"Orthodox\" worlds are sometimes separately considered \"akin\" to the West.\n",
"The Greeks contrasted themselves with both their Eastern neighbours (such as the Trojans in \"Iliad\") as well as their Western neighbours (who they considered barbarians). Concepts of what is \"the West\" arose out of legacies of the Western Roman Empire and the Eastern Roman Empire. Later, ideas of the West were formed by the concepts of Latin Christendom and the Holy Roman Empire. What is thought of as Western thought today originates primarily from Greco-Roman and Germanic influences, and includes the ideals of the Middle Ages, the Renaissance, and the Enlightenment, as well as Christian culture.\n\nSection::::History.:Classical West.\n",
"These are Spengler's terms for Classical, Arabian and Western Cultures respectively.\n\n\"Apollonian\" \n\nCulture and Civilization is focused around Ancient Greece and Rome. Spengler saw its world view as being characterized by appreciation for the beauty of the human body, and a preference for the local and the present moment.\n\n\"Magian\" \n",
"The West as a geographical area is unclear and undefined. More often a country's ideology is what will be used to categorize it as a Western society. There is some disagreement about what nations should or should not be included in the category and at what times. Many parts of the Eastern Roman (Byzantine) Empire are considered Western today but were considered Eastern in the past. However, in the past it was also the Eastern Roman Empire that had many features now seen as \"Western,\" preserving Roman law, which was first codified by Justinian in the east, as well as the traditions of scholarship around Plato, Aristotle, and Euclid that were later introduced to Italy during the Renaissance by Greek scholars fleeing the fall of Constantinople. Thus, the culture identified with East and West itself interchanges with time and place (from the ancient world to the modern). Geographically, the \"West\" of today would include Europe (especially the states that collectively form the European Union) together with extra-European territories belonging to the English-speaking world, the Hispanidad, the Lusosphere; and the Francophonie in the wider context. Since the context is highly biased and context-dependent, there is no agreed definition what the \"West\" is.\n",
"From a very different perspective, it has also been argued that the idea of the West is, in part, a non-Western invention, deployed in the non-West to shape and define non-Western pathways through or against modernity.\n\nSection::::Other views.:Views on torn countries.\n",
"In Chinese Buddhism, the West represents movement toward the Buddha or enlightenment (see Journey to the West). The ancient Aztecs believed that the West was the realm of the great goddess of water, mist, and maize. In Ancient Egypt, the West was considered to be the portal to the netherworld, and is the cardinal direction regarded in connection with death, though not always with a negative connotation. Ancient Egyptians also believed that the Goddess Amunet was a personification of the West. The Celts believed that beyond the western sea off the edges of all maps lay the Otherworld, or Afterlife.\n",
"Western Europe\n\nWestern Europe is the region comprising the western part of Europe. Though the term \"Western Europe\" is commonly used, there is no commonly agreed-upon definition of the countries that it encompasses. \n",
"West, while Islamic nations and much of the former Soviet Union are, regardless of location, grouped in the East. Other than Asia and some parts of Africa, Europe has successfully absorbed almost all of the societies of Oceania, and the Americas into the Western world, Turkey, the Philippines,\n\nIsrael, and Japan, which are geographically located in the Eastern world, are considered at least partially westernized due to the cultural influence of Europe.\n\nSection::::Identity politics.\n\nSection::::Identity politics.:Asian concepts.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-14795 | How does fire just "appear"? | Fire is not a thing, but a process - and a self-reinforcing one. If conditions are right (fuel+oxygen+temperature) at some tiny place, a chemical reaction occurs where the fuel combines with the oxygen and releases heat. This means that the tiny place *next* to it heats up, and now suddenly also has the right conditions to oxidize. Furthermore, the heat also facilitates circulation - hot gas rises up, cold air with new extra oxygen comes in. If the self-reinforcement is not strong enough, then the spark or whatever disappears, but if the conditions are good for it, then the fire (i.e. the zone where burning/oxidizing is happening) expands. | [
"The glow of a flame is complex. Black-body radiation is emitted from soot, gas, and fuel particles, though the soot particles are too small to behave like perfect blackbodies. There is also photon emission by de-excited atoms and molecules in the gases. Much of the radiation is emitted in the visible and infrared bands. The color depends on temperature for the black-body radiation, and on chemical makeup for the emission spectra. The dominant color in a flame changes with temperature. The photo of the forest fire in Canada is an excellent example of this variation. Near the ground, where most burning is occurring, the fire is white, the hottest color possible for organic material in general, or yellow. Above the yellow region, the color changes to orange, which is cooler, then red, which is cooler still. Above the red region, combustion no longer occurs, and the uncombusted carbon particles are visible as black smoke.\n",
"Section::::Spatial and temporal scales.\n",
"Section::::Origin.\n",
"Fire is hot because the conversion of the weak double bond in molecular oxygen, O, to the stronger bonds in the combustion products carbon dioxide and water releases energy (418 kJ per 32 g of O); the bond energies of the fuel play only a minor role here. At a certain point in the combustion reaction, called the ignition point, flames are produced. The \"flame\" is the visible portion of the fire. Flames consist primarily of carbon dioxide, water vapor, oxygen and nitrogen. If hot enough, the gases may become ionized to produce plasma. Depending on the substances alight, and any impurities outside, the color of the flame and the fire's intensity will be different.\n",
"Section::::Mapping.\n",
"Section::::Fuels.:Solid fuels.\n\nThe act of combustion consists of three relatively distinct but overlapping phases:\n\nBULLET::::- Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation.\n\nBULLET::::- Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours.\n",
"Section::::Methods of deployment.:Hand-held projectors.\n",
"This world, which is the same for all, no one of gods or men has made. But it always was and will be: an ever-living fire, with measures of it kindling, and measures going out. \n",
"Section::::Fictional character biography.:Pre-Crisis.:Global Guardians.\n",
"Section::::Examples.\n",
"Section::::Fire progression.:December 2017.\n",
"Harry Julius Emeléus was the first to record their emission spectra, and in 1929 he coined the term \"cold flame\".\n\nSection::::Parameters.\n",
"Section::::Post-fire science.\n",
"BULLET::::- On Nov. 7, 2015 a bright plume of light was seen expanding then \"exploding\" over south California between 7-8pm. The Orange County Sheriff confirmed this was in fact a Trident II ballistic missile test firing from the USS Kentucky.\n\nBULLET::::- On Sept. 2, 2015 a bright plume from a Atlas V rocket launching the MUOS satellite from Cape Canaveral Air Force Station was seen by observers in Florida\n",
"BULLET::::- First day of Mount Carmel fire, 2010 December 2, Second day of Mount Carmel fire, 2010 December 3, Third day of Mount Carmel fire, 2010 December 4—Videos captured by security cameras located at the Laboratory of Climatology at the University of Haifa campus at Mount Carmel and directed south-west, overlooking the forest fire.\n",
"In fires (particularly house fires), the cooler flames are often red and produce the most smoke. Here the red color compared to typical yellow color of the flames suggests that the temperature is lower. This is because there is a lack of oxygen in the room and therefore there is incomplete combustion and the flame temperature is low, often just . This means that a lot of carbon monoxide is formed (which is a flammable gas) which is when there is greatest risk of backdraft. When this occurs, combustible gases at or above the flash point of spontaneous combustion are exposed to oxygen, carbon monoxide and superheated hydrocarbons combust, and temporary temperatures of up to occur.\n",
"Section::::Ignition sources.\n",
"Section::::Composition.\n",
"An example of flashover is ignition of a piece of furniture in a domestic room. The fire involving the initial piece of furniture can produce a layer of hot smoke which spreads across the ceiling in the room. The hot buoyant smoke layer grows in depth, as it is bounded by the walls of the room. The radiated heat from this layer heats the surfaces of the directly exposed combustible materials in the room, causing them to give off flammable gases via pyrolysis. When the temperatures of the evolved gases become high enough, these gases will ignite throughout their extent.\n",
"Section::::Recording.\n",
"Section::::1988 fire.\n",
"Section::::In other media.\n\nSection::::In other media.:Television.\n\nSection::::In other media.:Television.:Live-action.\n\nBULLET::::- Fire appeared in the \"Justice League of America\" television pilot movie, portrayed by Michelle Hurd.\n",
"Reaction is initiated by an activating energy, in most cases, it is heat. Several examples include friction, as in case of matches, heating an electrical wire, a flame (propagation of fire), or a spark (from a lighter or from any starting electrical device). There are also many other ways to bring sufficient activation energy including electricity, radiation, and pressure, all of which will lead to a temperature rise. In most cases, heat production enables self-sustainability of the reaction, and enables a chain reaction to grow. The temperature at which a liquid produces sufficient vapor to get a flammable mix with self-sustainable combustion is called its flash-point.\n",
"A special type of contact metamorphism, associated with fossil fuel fires, is known as pyrometamorphism.\n\nSection::::Types.:Hydrothermal.\n",
"Fire (comics)\n\nFire (Beatriz da Costa) is a fictional comic book superheroine from the DC Comics universe.\n\nSection::::Publication history.\n\nA version of her first appeared in \"Super Friends\" #25, (October 1979), and was created by E. Nelson Bridwell and Ramona Fradon.\n\nSection::::Fictional character biography.\n\nSection::::Fictional character biography.:Pre-Crisis.\n\nSection::::Fictional character biography.:Pre-Crisis.:Super Friends.\n"
] | [
"Fire is a thing that just appears."
] | [
"Fire is a process that is self-reinforcing, a small chemical reaction occurs just before the flame rises. "
] | [
"false presupposition"
] | [
"Fire is a thing that just appears.",
"Fire is a thing that just appears."
] | [
"false presupposition",
"normal"
] | [
"Fire is a process that is self-reinforcing, a small chemical reaction occurs just before the flame rises. ",
"Fire is a process that is self-reinforcing."
] |
2018-02640 | How fish can produce light while being at the very bottom of the ocean | It's called bioluminescence. Basically, it's the result of a chemical reaction which converts chemical energy into light energy. Some organisms use it for defence or as a deterrent, others use it to hunt. Kind of hard to go into less depth (haha depth, bottom of the ocean, do you geddit, Reddit?) at least whilst I'm sleepy, but yeah. | [
"Section::::In marine animals.:Mechanism.:Photophores.\n\nCounter-illumination relies on organs that produce light, photophores. These are roughly spherical structures that appear as luminous spots on many marine animals, including fish and cephalopods. The organ can be simple, or as complex as the human eye, equipped with lenses, shutters, colour filters and reflectors.\n",
"Bioluminescence occurs widely among animals, especially in the open sea, including fish, jellyfish, comb jellies, crustaceans, and cephalopod molluscs; in some fungi and bacteria; and in various terrestrial invertebrates including insects. About 76% of the main taxa of deep-sea animals produce light. Most marine light-emission is in the blue and green light spectrum. However, some loose-jawed fish emit red and infrared light, and the genus \"Tomopteris\" emits yellow light.\n",
"UV vision may also be related to foraging and other communication behaviors.\n\nMany species of fish can see the ultraviolet end of the spectrum, beyond the violet.\n\nUltraviolet vision is sometimes used during only part of the life cycle of a fish. For example, juvenile brown trout live in shallow water where they use ultraviolet vision to enhance their ability to detect zooplankton. As they get older, they move to deeper waters where there is little ultraviolet light.\n",
"Section::::In marine animals.:Mechanism.:Matching light intensity and wavelength.\n\nAt night, nocturnal organisms match both the wavelength and the light intensity of their bioluminescence to that of the down-welling moonlight and direct it downward as they swim, to help them remain unnoticed by any observers below.\n",
"The black dragonfish (also called the northern stoplight loosejaw) \"Malacosteus niger\" is believed to be one of the only fish to produce a red glow. Its eyes, however, are insensitive to this wavelength; it has an additional retinal pigment which fluoresces blue-green when illuminated. This alerts the fish to the presence of its prey. The additional pigment is thought to be assimilated from chlorophyll derivatives found in the copepods which form part of its diet.\n\nSection::::Biotechnology.\n\nSection::::Biotechnology.:Biology and medicine.\n",
"Like other hydrozoans, certain siphonophores can emit light. A siphonophore of the genus \"Erenna\" has been discovered at a depth of around off the coast of Monterey, California. The individuals from these colonies are strung together like a feather boa. They prey on small animals using stinging cells. Among the stinging cells are stalks with red glowing ends. The tips twitch back and forth, creating a twinkling effect. Twinkling red lights are thought to attract the small fish eaten by these siphonophores.\n",
"Counterillumination (or counter-lighting) involves the production of light by the fish for the purpose of camouflaging its silhouette from observers lurking below. Sternoptychidae produce this light with organs called photophores, of which they have between 3 and 7 – usually 6 – on the branchiostegal membrane along the lower edge of the chest and belly. The intensity of the light produced is controlled by the fish, an appropriate brightness chosen according to how much light reaches the eyes from above. The patterns of light created by the photophores are also unique to each species, probably playing a role in courtship.\n",
"The light organ of embryonic and juvenile squids has a striking anatomical similarity to an eye and expresses several genes similar to those involved in eye development in mammalian embryos (e.g. \"eya\", \"dac\") which indicate that squid eyes and squid light organs may be formed using the same developmental \"toolkit\".\n\nAs the down-welling light increases or decreases, the squid is able to adjust luminescence accordingly, even over multiple cycles of light intensity.\n\nSection::::See also.\n\nBULLET::::- Reflectin\n\nSection::::Further reading.\n",
"Section::::Fluorescence in nature.:Aquatic biofluorescence.:Photic zone.\n\nSection::::Fluorescence in nature.:Aquatic biofluorescence.:Photic zone.:Fish.\n\nBony fishes living in shallow water generally have good color vision due to their living in a colorful environment. Thus, in shallow-water fishes, red, orange, and green fluorescence most likely serves as a means of communication with conspecifics, especially given the great phenotypic variance of the phenomenon.\n",
"In the Hawaiian bobtail squid (\"Euprymna scolopes\") light is produced in a large and complex two-lobed light organ inside the squid's mantle cavity. At the top of the organ (dorsal side) is a reflector, directing the light downwards. Below this are containers (crypts) lined with epithelium containing light-producing symbiotic bacteria. Below those is a kind of iris, consisting of branches (diverticula) of its ink sac; and below that is a lens. Both the reflector and the lens are derived from mesoderm. Light escapes from the organ downwards, some of it travelling directly, some coming off the reflector. Some 95% of the light-producing bacteria are voided at dawn every morning; the population in the light organ then builds up slowly during the day to a maximum of some 10 bacteria by nightfall: this species hides in sand away from predators during the day, and does not attempt counter-illumination during daylight, which would in any case require much brighter light than its light organ output. The emitted light shines through the skin of the squid's underside. To reduce light production, the squid can change the shape of its iris; it can also adjust the strength of yellow filters on its underside, which presumably change the balance of wavelengths emitted. The light production is correlated with the intensity of down-welling light but about one third as bright; the squid is able to track repeated changes in brightness.\n",
"As common for deep-sea creatures, all members of Stomiiformes (except one) have photophores, whose structure is characteristic of the order. The light emitted can be more or less strong and its color can be light yellow, white, violet or red. The light coming from these fish is generally invisible to their prey. The lighting mechanism can be very simple – consisting of small gleaming points on the fish body – or very elaborate, involving lenses and refractors.\n",
"Because of its very small size, picoplankton is difficult to study by classic methods such as optical microscopy. More sophisticated methods are needed.\n\nBULLET::::- Epifluorescence microscopy allows researchers to detect certain groups of cells possessing fluorescent pigments such as \"Synechococcus\" which possess phycoerythrin.\n",
"Epidermal fluorescent cells in fish also respond to hormonal stimuli by the α–MSH and MCH hormones much the same as melanophores. This suggests that fluorescent cells may have color changes throughout the day that coincide with their circadian rhythm. Fish may also be sensitive to cortisol induced stress responses to environmental stimuli, such as interaction with a predator or engaging in a mating ritual.\n\nSection::::Fluorescence in nature.:Phylogenetics.\n\nSection::::Fluorescence in nature.:Phylogenetics.:Evolutionary origins.\n",
"Some species have a tapetum, a reflective layer which bounces light that passes through the retina back through it again. This enhances sensitivity in low light conditions, such as nocturnal and deep sea species, by giving photons a second chance to be captured by photoreceptors. However this comes at a cost of reduced resolution. Some species are able to effectively turn their tapetum off in bright conditions, with a dark pigment layer covering it as needed.\n\nThe retina uses a lot of oxygen compared to most other tissues, and is supplied with plentiful oxygenated blood to ensure optimal performance.\n",
"Another, well-studied example of biofluorescence in the ocean is the hydrozoan \"Aequorea victoria\". This jellyfish lives in the photic zone off the west coast of North America and was identified as a carrier of green fluorescent protein (GFP) by Osamu Shimomura. The gene for these green fluorescent proteins has been isolated and is scientifically significant because it is widely used in genetic studies to indicate the expression of other genes.\n\nSection::::Fluorescence in nature.:Aquatic biofluorescence.:Photic zone.:Mantis shrimp.\n",
"Section::::Uses in nature.:Counterillumination camouflage.\n\nIn many animals of the deep sea, including several squid species, bacterial bioluminescence is used for camouflage by counterillumination, in which the animal matches the overhead environmental light as seen from below. In these animals, photoreceptors control the illumination to match the brightness of the background. These light organs are usually separate from the tissue containing the bioluminescent bacteria. However, in one species, \"Euprymna scolopes\", the bacteria are an integral component of the animal's light organ.\n\nSection::::Uses in nature.:Attraction.\n",
"The predatory deep-sea dragonfish \"Malacosteus niger\", the closely related genus \"Aristostomias\" and the species \"Pachystomias microdon\" are capable of harnessing the blue light emitted from their own bioluminescence to generate red biofluorescence from suborbital photophores. This red fluorescence is invisible to other animals, which allows these dragonfish extra light at dark ocean depths without attracting or signaling predators.\n\nSection::::Fluorescence in nature.:Terrestrial biofluorescence.\n\nSection::::Fluorescence in nature.:Terrestrial biofluorescence.:Amphibians.\n",
"Many deep-sea fish are bioluminescent, with extremely large eyes adapted to the dark. Bioluminescent organisms are capable of producing light biologically through the agitation of molecules of luciferin, which then produce light. This process must be done in the presence of oxygen. These organisms are common in the mesopelagic region and below (200m and below). More than 50% of deep-sea fish, as well as some species of shrimp and squid, are capable of bioluminescence. About 80% of these organisms have photophores – light producing glandular cells that contain luminous bacteria bordered by dark colourings. Some of these photophores contain lenses, much like those in the eyes of humans, which can intensify or lessen the emanation of light. The ability to produce light only requires 1% of the organism's energy and has many purposes: It is used to search for food and attract prey, like the anglerfish; claim territory through patrol; communicate and find a mate, and distract or temporarily blind predators to escape. Also, in the mesopelagic where some light still penetrates, some organisms camouflage themselves from predators below them by illuminating their bellies to match the colour and intensity of light from above so that no shadow is cast. This tactic is known as counter-illumination.\n",
"\"P. uveae\" varies in its behaviour from location to location. For example, the population of \"P.uveae\" in the lagoon at Kakaban Island appears to be photophobic and was only recorded being active at night, while the population from Tinguiban Islet in the Philippines were described as being \"sun-lovers\". The adult \"P.uveae\" fed in depths of 1-2m in the lagoon and were not found in the adjacent open reef. The differences in colouration and behaviour may indicate that \"P. uveae\" is made up of more than one cryptic species. Where it occurs \"P.uveae\" is quite numerous and has been said to appear \"in masses\".\n",
"In the eyeflash squid (\"Abralia veranyi\") a species which daily migrates between the surface and deep waters, a study showed that the light produced is bluer in cold waters and greener in warmer waters, temperature serving as a guide to the required emission spectrum. The animal has more than 550 photophores on its underside, consisting of rows of four to six large photophores running across the body, and many smaller photophores scattered over the surface. In cold water at 11 Celsius, the squid's photophores produced a simple (unimodal) spectrum with its peak at 490 nanometres (blue-green). In warmer water at 24 Celsius, the squid added a weaker emission (forming a shoulder on the side of the main peak) at around 440 nanometres (blue), from the same group of photophores. Other groups remained unilluminated: other species, and perhaps \"A. veranyi\" from its other groups of photophores, can produce a third spectral component when needed. Another squid, \"Abralia trigonura\", is able to produce three spectral components: at 440 and at 536 nanometres (green), appearing at 25 Celsius, apparently from the same photophores; and at 470–480 nanometres (blue-green), easily the strongest component at 6 Celsius, apparently from a different group of photophores. Many species can in addition vary the light they emit by passing it through a choice of colour filters.\n",
"Most squid live in deep water, and in these, the lens of the eye is translucent to ultraviolet light. However, \"O. banksii\" lives near the surface where ultraviolet light penetrates the water, and the lens is yellow, strongly absorbing blue light.\n\nSection::::Distribution and habitat.\n",
"Section::::Description.\n",
"Mesopelagic fish are adapted for an active life under low light conditions. Most of them are visual predators with large eyes. Some of the deeper water fish have tubular eyes with big lenses and only rod cells that look upwards. These give binocular vision and great sensitivity to small light signals. This adaptation gives improved terminal vision at the expense of lateral vision, and allows the predator to pick out squid, cuttlefish, and smaller fish that are silhouetted against the gloom above them.\n",
"Many cephalopods, including at least 70 genera of squid, are bioluminescent. Some squid and small crustaceans use bioluminescent chemical mixtures or bacterial slurries in the same way as many squid use ink. A cloud of luminescent material is expelled, distracting or repelling a potential predator, while the animal escapes to safety. The deep sea squid \"Octopoteuthis deletron\" may autotomise portions of its arms which are luminous and continue to twitch and flash, thus distracting a predator while the animal flees.\n",
"Section::::Bioluminescence.\n\nSeveral species of deep-sea fish have luminous organs used to attract prey. Females of the genus \"Linophryne\" bear barbels containing luminous organs in addition to an escal light organ attached to the head. In \"L. arborifera\", the top light organ has been likened to a pearl onion and contains luminous bacteria. The barbels, which look like seaweed fronds, do not contain bacteria but complex paracrystalline photogenic granules. The esca is ectodermal in origin whereas the barbel organs may be derived from the mesoderm.\n\nSection::::External links.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-01671 | How come in a can of soda or beer, the bubbles don't all either rise to the surface or sink to the bottom? How do they remain dispersed throughout the drink? | It's because until the pressure is released when you open the can, the bubbles don't exist at all. The CO2 is dissolved in the water, but when you release it, the CO2 can finally escape and it becomes bubbles. | [
"Normally, this process is relatively slow, because the activation energy for this process is high. The activation energy for a process like bubble nucleation depends on where the bubble forms. It is highest for bubbles that form in the liquid itself (homogeneous nucleation), and lower if the bubble forms on some other surface (heterogeneous nucleation). When the pressure is released from a soda bottle, the bubbles tend to form on the sides of the bottle. But because they are smooth and clean, the activation energy is still relatively high, and the process is slow. The addition of other nucleation sites provides an alternative pathway for the reaction to occur with lower activation energy, much like a catalyst. For instance dropping grains of salt or sand into the solution lowers the activation energy, and increases the rate of carbon dioxide precipitation. \n",
"BULLET::::- Some champagne stirrers operate by providing many nucleation sites via high surface-area and sharp corners, speeding the release of bubbles and removing carbonation from the wine.\n\nBULLET::::- The Diet Coke and Mentos eruption offers another example. The surface of Mentos candy provides nucleation sites for the formation of carbon-dioxide bubbles from carbonated soda.\n\nBULLET::::- Both the bubble chamber and the cloud chamber rely on nucleation, of bubbles and droplets, respectively.\n\nSection::::Examples.:Examples of the nucleation of crystals.\n",
"This is the case with certain types of draught beer such as draught stouts. In the case of these draught beers, which before dispensing also contain a mixture of dissolved nitrogen and carbon dioxide, the agitation is caused by forcing the beer under pressure through small holes in a restrictor in the tap. The surging mixture gradually settles to produce a very creamy head.\n\nSection::::Development.\n",
"Although CO is most common for beverages, nitrogen gas is sometimes deliberately added to certain beers. The smaller bubble size creates a smoother beer head. Due to the poor solubility of nitrogen in beer, kegs or widgets are used for this.\n\nIn the laboratory, a common example of effervescence is seen if hydrochloric acid is added to a block of limestone. If a few pieces of marble or an antacid tablet are put in hydrochloric acid in a test tube fitted with a bung, effervescence of carbon dioxide can be witnessed.\n",
"The method in which the bubbles are introduced into the water stream and retention time are also important factors. The average retention time for a vertical unit is typically 4 to 5 minutes and 5 to 6 minutes for a horizontal unit.\n\nSection::::DGF pump.\n",
"The conversion of dissolved carbon dioxide to gaseous carbon dioxide forms rapidly expanding gas bubbles in the soda, which pushes the beverage contents out of the container. Gases, in general, are more soluble in liquids at elevated pressures. Carbonated sodas contain elevated levels of carbon dioxide under pressure. The solution becomes supersaturated with carbon dioxide when the bottle is opened, and the pressure is released. Under these conditions, carbon dioxide begins to precipitate from solution, forming gas bubbles. \n",
"Beers can be carbonated with CO or with other gases such as Nitrogen. These gases are not as soluble in water as carbon dioxide, so they form bubbles that do not grow through Ostwald ripening. This means that the beer has smaller bubbles and a more creamy and stable head. This less soluble gas gives the beer a different and flatter texture. In beer terms, the mouthfeel is smooth, not bubbly like beers with normal carbonation. Nitro beer could taste less acidic than normal beer.\n\nSection::::Storage and degradation.\n",
"Two different regimes may be distinguished in the nucleate boiling range. When the temperature difference is between approximately to above T, isolated bubbles form at nucleation sites and separate from the surface. This separation induces considerable fluid mixing near the surface, substantially increasing the convective heat transfer coefficient and the heat flux. In this regime, most of the heat transfer is through direct transfer from the surface to the liquid in motion at the surface and not through the vapor bubbles rising from the surface.\n",
"A major difference of low pressure dissolved air flotation and other flotation processes lies in the volumes of bubbles, amount of air and raising speeds. One macro bubble can be 1000 times bigger in volume compared to one micro bubble. And vice versa the number of micro bubbles can be 1000 fold in number compared to one macro bubble having same volume.\n",
"The keys to good separation are both gravity and the creation of millions of very small bubbles. Based on Stokes' law, the size of the oil droplet and density of the droplet will affect the rate of rise to the surface. The larger and lighter the droplet, the faster it will rise to the surface. By attaching a small gas bubble to an oil droplet, the density of the droplet decreases, which increases the rate at which it will rise to the surface. Therefore, the smaller the gas bubbles created the smaller the oil droplet floated to the surface. Efficient flotation systems need to create as many small bubbles as possible. \n",
"At standard atmospheric pressure and low temperatures, no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapor bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is \"sub-cooled nucleate boiling\", and is a very efficient heat transfer mechanism. At high bubble generation rates, the bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling, or DNB).\n",
"Glass surfaces can retain oil from the skin, aerosolized oil from nearby cooking, and traces of fat from food. When these oils come in contact with beer there is a significant reduction in the amount of head (foam) that is found on the beer, and the bubbles will tend to stick to the side of the glass rather than rising to the surface as normal.\n",
"When the can is opened, the pressure in the can quickly drops, causing the pressurised gas and beer inside the widget to jet out from the hole. This agitation on the surrounding beer causes a chain reaction of bubble formation throughout the beer. The result, when the can is then poured out, is a surging mixture in the glass of very small gas bubbles and liquid.\n",
"There are two types of nucleation processes. Gases preferentially condense onto surfaces of pre-existing aerosol particles, known as heterogeneous nucleation. This process causes the diameter at the mode of particle-size distribution to increase with constant number concentration. With sufficiently high supersaturation and no suitable surfaces, particles may condense in the absence of a pre-existing surface, known as homogeneous nucleation. This results in the addition of very small, rapidly growing particles to the particle-size distribution.\n\nSection::::Physics.:Dynamics.:Activation.\n",
"Bubble nucleation happens when the a volatile becomes saturated. Actually the bubbles are composed of molecules that tend to aggregate spontaneously in a process called homogeneous nucleation. The surface tension acts on the bubbles shrinking the surface and forces them back to the liquid. The nucleation process is greater when the space to fit is irregular and the volatile molecules can ease the effect of surface tension. The nucleation can occur thanks to the presence of solid crystals, which are stored in the magma chamber. They are perfect potential nucleation sites for bubbles. If there is no nucleation in the magma the bubbles formation might appear really late and magma becomes significantly supersaturated. The balance between supersaturation pressure and bubble's radii expressed by this equation: ∆P=2σ/r, where ∆P is 100 MPa and σ is the surface tension. If the nucleation starts later when the magma is very supersaturated, the distance between bubbles becomes smaller. Essentially if the magma rises rapidly to the surface, the system will be more out of equilibrium and supersaturated. When the magma rises there is competition between adding new molecules to the existing ones and creating new ones. The distance between molecules characterizes the efficiency of volatiles to aggregate to the new or existing site. Crystals inside magma can determine how bubbles grow and nucleate.\n",
"As in many chemical processes, there are competing considerations of recovery (i.e. the percentage of target surfactant that reports to the overhead foamate stream) and enrichment (i.e. the ratio of surfactant concentration in the foamate to the concentration in the feed). A crude method of moving upon the enrichment-recovery spectrum is to control the gas rate to the column. A higher gas rate will mean higher recovery but lower enrichment.\n\nFoam fractionation proceeds via two mechanisms:\n\nBULLET::::1. The target molecule adsorbs to a bubble surface, and\n",
"Section::::Cause.\n\nThe eruption is caused by a physical reaction, rather than any chemical reaction. The addition of the Mentos leads to the rapid nucleation of carbon dioxide gas bubbles precipitating out of solution: \n",
"Bubble (physics)\n\nA bubble is a globule of one substance in another, usually gas in a liquid. \n\nDue to the Marangoni effect, bubbles may remain intact when they reach the surface of the immersive substance.\n\nSection::::Common examples.\n\nBubbles are seen in many places in everyday life, for example:\n\nBULLET::::- As spontaneous nucleation of supersaturated carbon dioxide in soft drinks\n\nBULLET::::- As water vapor in boiling water\n\nBULLET::::- As air mixed into agitated water, such as below a waterfall\n\nBULLET::::- As sea foam\n\nBULLET::::- As a soap bubble\n\nBULLET::::- As given off in chemical reactions, e.g., baking soda + vinegar\n",
"The nucleation reaction can start with any heterogeneous surface, such as rock salt, but Mentos have been found to work better than most. Tonya Coffey, a physicist at Appalachian State University, found that the aspartame in diet drinks lowers the surface tension in the water and causes a bigger reaction, but that caffeine does not accelerate the process. It has also been shown that a wide variety of beverage additives such as sugars, citric acid, and natural flavors can also enhance fountain heights. In some cases, dissolved solids that increase the surface tension of water (such as sugars) also increase fountain heights. These results suggest that additives serve to enhance geyser heights not by decreasing surface tension, but rather by decreasing bubble coalescence. Decreased bubble coalescence leads to smaller bubble sizes and greater foaming ability in water. Thus, the geyser reaction will still work even using sugared drinks, but diet is commonly used both for the sake of a larger geyser as well as to avoid having to clean up a sugary soda mess.\n",
"One of the ways foam is created is through dispersion, where a large amount of gas is mixed with a liquid. A more specific method of dispersion involves injecting a gas through a hole in a solid into a liquid. If this process is completed very slowly, then one bubble can be emitted from the orifice at a time as shown in the picture below.\n\nOne of the theories for determining the separation time is shown below; however, while this theory produces the theoretical data that matches with experimental data, detachment due to capillarity is accepted as a better explanation.\n",
"BULLET::::2. The bubbles form a foam which travels up a column and is discharged to the foamate stream of foam fractionation.\n\nThe rate at which certain non-ionic molecules can adsorb to bubble surface can be estimated by solving the Ward-Tordai equation. The enrichment and recovery depend on the hydrodynamic condition of the rising foam, which is a complex system dependent upon bubble size distribution, stress state at the gas-liquid interface, rate of bubble coalescence, gas rate \"inter alia\". The hydrodynamic condition is described by the Hydrodynamic Theory of Rising Foam.\n\nSection::::Applications.\n",
"At lower scale than the bubble is the thickness of the film for metastable foams, which can be considered a network of interconnected films called lamellae. Ideally, the lamellae connect in triads and radiate 120° outward from the connection points, known as Plateau borders.\n\nAn even lower scale is the liquid–air interface at the surface of the film. Most of the time this interface is stabilized by a layer of amphiphilic structure, often made of surfactants, particles (Pickering emulsion), or more complex associations.\n\nSection::::Formation.\n",
"BULLET::::- Bubbles of carbon dioxide \"nucleate\" shortly after the pressure is released from a container of carbonated liquid.\n",
"The process of dissolving carbon dioxide in water is called carbonation. Commercial soda water in siphons is made by chilling filtered plain water to or below, optionally adding a sodium or potassium based alkaline compound such as sodium bicarbonate to reduce acidity, and then pressurizing the water with carbon dioxide. The gas dissolves in the water, and a top-off fill of carbon dioxide is added to pressurize the siphon to approximately , some higher than is present in fermenting champagne bottles.\n",
"Section::::Mixed phase models (dissolved and bubble phases).:Varying Permeability Model.:Bubble nucleation.\n\nGas bubbles with a radius greater than 1 micron should float to the surface of a standing liquid, whereas smaller ones should dissolve rapidly due to surface tension. The Tiny Bubble Group has been able to resolve this apparent paradox by developing and experimentally verifying a new model for stable gas nuclei.\n"
] | [
"Bubbles trapped in the drink somewhere."
] | [
"The gas is actually dissolved into the liquid itself which makes it everywhere the liquid is. "
] | [
"false presupposition"
] | [
"Bubbles trapped in the drink somewhere."
] | [
"false presupposition"
] | [
"The gas is actually dissolved into the liquid itself which makes it everywhere the liquid is. "
] |
2018-01116 | Why is long-distance bus travel in the United States so much more common than in Europe? | [Long-distance bus travel isn't really all that common in the US]( URL_0 ). Most people try to avoid taking a Greyhound whenever possible. The only real advantage of a bus is that it's cheap. They don't require airports and train stations. They don't require air traffic control. They don't require multi-million dollar vehicles. They don't require large crews. All it takes is a driver and a bus that costs less than $500k (you could get 100 buses for the cost of a single 737). They can drive on existing roads and freeways. For small towns that don't get much traffic, they don't even need a dedicated terminal - they can just stop at a store or other public place. While the travel is slow, the low cost makes them the only real option for people who don't have the money to travel any other way, and for people who want to travel between places that are too small to have their own airport or train station. Keep in mind that [the population density in the US is far lower than in Europe]( URL_2 ), so [we don't have trains going everywhere]( URL_1 ). Since the US train network is primarily dedicated to freight, passenger travel tends to be slow and expensive. | [
"Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world. Some companies such as Topdeck Travel were set up specifically to use buses to drive the hippie trail or travel to places such as North Africa.\n",
"In the mid-1950s more than 2,000 buses operated by Greyhound Lines, Trailways, and other companies connected 15,000 cities and towns. Passenger volume decreased as a result of expanding road and air travel, and urban decay that caused many neighborhoods with bus depots to become more dangerous. In 1960, American intercity buses carried 140 million riders; the rate decreased to 40 million by 1990, and continued to decrease until 2006.\n",
"In the early 2010s, many countries in Europe decided to liberalise the market for medium/long distance coach (intercity bus) transportation. This move has already proven to be helping both the economies and the Europeans.\n\nThe bus is the cheapest method of transportation and slower than the train in countries that have high-speed rail. However, many companies have made adjustments so that their coach fleets can be as comfortable as trains. Toilets and power have been added to the coaches and some are equipped with WiFi.\n",
"Section::::Intercity coach travel by country.:Switzerland.\n\nSwitzerland has an extremely dense network of interconnected rail, bus and ship lines, including some long-distance bus lines. Although Switzerland is a mountainous country, the rail network is denser than Germany's. Switzerland is an exception to the rule that long-distance bus lines are established especially in countries with inadequate railway network, or in areas with low population density. Some of the railway and main bus routes on Italian territory also serve to shorten the distance between Swiss towns. From Germany lines run from Frankfurt am Main, Heidelberg, Karlsruhe to Basel and Lucerne.\n",
"Section::::Responses from the public administrations.:Europe.\n\nIn 2012 Polis, a network of European cities and regions, published a position paper that calls upon European institutions and other European actors to take action, to ensure that the promotion of health benefits of active travel are maximised in all relevant European policies and programmes. Recommendation are based on references in European policy documents to improving health through active travel which should form the basis of shared objectives, policies, work programmes and investment to increase levels of walking and cycling.\n\nSpecific recommendations include:\n",
"Suburban models in the United States are often used in Park-and-Ride services, and are very common in the New York City area, where New Jersey Transit Bus Operations is a major operator serving widespread bedroom communities.\n\nSection::::Usage.\n\nThe number of miles traveled by vehicles in the United States fell by 3.6% in 2008, while the number of trips taken on mass transit increased by 4.0%. At least part of the drop in urban driving can be explained by the 4% increase in the use of public transportation \n",
"Between 2006 and 2014, American intercity buses focused on medium-haul trips between 200 and 300 miles; airplanes performed the bulk of longer trips and automobiles shorter ones. For most medium-haul trips curbside bus fares were less than the cost of automobile gasoline, and one tenth that of Amtrak. Buses are also four times more fuel-efficient than automobiles. Their Wi-Fi service is also popular; one study estimated that 92% of Megabus and BoltBus passengers planned to use an electronic device. New lower fares introduced by Greyhound on traditional medium-distance routes and rising gasoline prices have increased ridership across the network and made bus travel cheaper than all alternatives.\n",
"Passenger transportation is dominated by a network of over 3.9 million miles of highways which is pervasive and highly developed by global standards. Passenger transportation is dominated by passenger vehicles (including cars, trucks, vans, and motorcycles), which account for 86% of passenger-miles traveled. The remaining 14% was handled by planes, trains, and buses. Public transit use is highly concentrated in large older cities, with only six above 25% and only New York City above 50% of trips on transit. Airlines carry almost all non-commuter intercity traffic, except the Northeast Corridor where Amtrak carries more than all airlines combined.\n",
"In order to make coach transport even easier and even cheaper, platforms for comparison have been created. Web platforms like SoBus and GetByBus have integrated the booking process without redirection to the transport company's website.\n\nSection::::Air transport.\n",
"There are three common types of bus service in the United States: conventional bus systems, bus rapid transit (BRT), and intercity buses. Nearly every major city in the United States offers some form of bus service, with some being 24 hours a day. These buses run on flexible routes and make frequent stops, with a focus of provided accessible service to all tracts of a community. Bus rapid transit attempts to mimic the speed of a light rail system. Most BRT systems in the United States are in moderate sized cities or satellite cities, and serve as auxiliary routes for rail service. The primary different between BRT in the United States and regular bus service is BRT often runs more frequent as has fewer stops, in order to make service quicker. Furthermore, BRT service generally has their own dedicated right of way and signal prioritization, which allows BRT vehicles to move faster than regular automobile traffic. Both BRT and conventional buses are usually publicly financed. Well-known examples of cities with popular BRT services in the United States include Cleveland, Miami, and Richmond. Most inter-city bus service is private for-profit ventures, although they normally used publicly subsidized motorways and highways. Examples of intercity bus service in the United States is \"Megabus\" and \"Greyhound\", which are the two largest inter-city bus services in the United States.\n",
"BULLET::::- 386 Oestgeest - Den Haag Arriva\n\nBULLET::::- 387 Utrecht - Gorinchem Arriva\n\nBULLET::::- 388 Utrecht - Dordrecht Arriva\n\nBesides of regular public transport, a number of international bus companies serves Netherlands.\n\nSection::::Intercity coach travel by country.:Norway.\n",
"Public transportation in the United States refers to publicly financed mass transit services across the nation. This includes various forms of bus, rail, ferry, and sometimes, airline services. Most established public transit systems are located in central, urban areas where there is enough density and public demand to require public transportation. In more auto-centric suburban localities, public transit is normally, but not always, less frequent and less common. Most public transit services in the United States are either national, regional/commuter, or local, depending on the type of service. Furthermore, sometimes \"public transportation\" in the United States is an umbrella term used synonymous with \"alternative transportation\", meaning any form of mobility that excludes driving alone by automobile. This can sometimes include carpooling, vanpooling, on-demand mobility (i.e. Uber, Lyft, Bird, Lime), infrastructure that is fixated toward bicycles (i.e. bike lanes, sharrows, cycle tracks, and bike trails), and paratransit service.\n",
"According to an article by Jonathan Spira, who has authored multiple articles on the topic, the programs have their origins going back to American GIs who brought European (\"foreign\") cars back with them starting in the 1950s. The advent of the jet age in the 1960s, allowing Americans to more easily take European vacations, was a further impetus to such programs.\n\nSection::::General description.\n",
"In addition, some airlines run bus services from a city's bus terminal to an airport or, in other cases, connecting two airports whose cities' population sizes are deemed too small for them to have air service between each other. One example of the former is Singapore Airlines' bus service from downtown Newark, New Jersey to Newark Liberty International Airport. An example of the latter is United Airlines service from Beaumont Airport in Beaumont, Texas to Houston George Bush Intercontinental Airport in Houston, which used to be done on United Express SAAB 340 aircraft but which is now run on a bus.\n",
"In the mid-1950s more than 2,000 buses operated by Greyhound Lines, Trailways, and other companies connected 15,000 cities and towns. Passenger volume decreased as a result of expanding road and air travel, and urban decay that caused many neighborhoods with bus depots to become more dangerous. In 1960, American intercity buses carried 140 million riders; the rate decreased to 40 million by 1990, and continued to decrease until 2006.\n",
"Between 2006 and 2014, American intercity buses focused on medium-haul trips between 200 and 300 miles; airplanes performed the bulk of longer trips and automobiles shorter ones. For most medium-haul trips curbside bus fares were less than the cost of automobile gasoline, and one tenth that of Amtrak. Buses are also four times more fuel-efficient than automobiles. Their Wi-Fi service is also popular; one study estimated that 92% of Megabus and BoltBus passengers planned to use an electronic device. New lower fares introduced by Greyhound on traditional medium-distance routes and rising gasoline prices have increased ridership across the network and made bus travel cheaper than all alternatives.\n",
"BULLET::::- Renesse (mun. Schouwen-Duiveland), Netherlands – free bus services in the area (in summer only)\n\nSection::::New Zealand.\n\nBULLET::::- Hamilton CBD.\n\nBULLET::::- Invercargill, New Zealand - previously had various fare free services.\n\nSection::::Russia.\n\nBULLET::::- Moscow, Russia -\n",
"Section::::Student transport by country.:The Netherlands.\n\nIn the Netherlands, there isn't an organized form of student transport on a large scale.\n\nChildren who attend kindergarten are usually brought by their parents.\n\nAlmost all students at elementary school go to school by foot, as they live close by the school. Students who live further away, go by bike.\n",
"In many tourist or travel destinations, a bus is part of the tourist attraction, such as the North American tourist trolleys, London's AEC Routemaster heritage routes, or the customised buses of Malta, Asia, and the Americas. Another example of tourist stops is the homes of celebrities, such as tours based near Hollywood. There are several such services between 6000 and 7000 Hollywood Boulevard in Los Angeles.\n\nSection::::Uses.:Student transport.\n",
"The world's second largest automobile market, the United States has the highest rate of per-capita vehicle ownership in the world, with 865 vehicles per 1,000 Americans.\n\nBicycle usage is minimal with the American Community Survey reporting that bicycle commuting had a 0.61% mode share in 2012 (representing 856,000 American workers nationwide).\n\nSection::::Mode share.:Cargo.\n",
"Commuter bus systems are generally categorized as public transit, especially for large metropolitan transit networks. Usually these routes cover a long distance compared to most transit bus routes, but still short—usually 40 miles in one direction. An urban-suburban bus line generally connects a suburban area to the downtown core.\n",
"The history of tour buses in North America began in the early 20th century when trucks were converted to provide a means for sightseeing within large American cities. Gray Line, the largest sightseeing operators began operations in 1910. Sightseeing was likely a side business for many intercity bus operators because the same types of buses were used (this remains true even today). World War II saw the industry decline, but it slowly re-emerged as an alternative to driving.\n",
"Intercity bus services are of prime importance in lightly populated rural areas that often have little or no public transportation.\n\nIntercity bus services are one of four common transport methods between cities, not all of which are available in all places. The others are by airliner, train, and private automobile.\n\nSection::::History.\n\nSection::::History.:Stagecoaches.\n",
"Accessible coaches are operated on routes between England and Wales, the M9 and M90 in Scotland, and the M20.\n",
"A decline set in across Europe, particularly in Britain, when millions of servicemen returned from World War II having learned to drive. Trips away were now, for the increasing number who had one, by car. The decline in the United States came even sooner. McGurn says:\n"
] | [
"Long distance bus travel is more common in the US than in Europe."
] | [
"Long distance bus travel is not really common in the US."
] | [
"false presupposition"
] | [
"Long distance bus travel is more common in the US than in Europe.",
"Long distance bus travel is more common in the US than in Europe."
] | [
"normal",
"false presupposition"
] | [
"Long distance bus travel is not really common in the US.",
"Long distance bus travel is not really common in the US."
] |
2018-00446 | How can softer materials wear down harder ones? | Many more soft surfaces have rubbed against the hard surface over time. The hard surface still sees wear from each interaction; it is just much less than the wear on the soft surface. | [
"The rate of erosive wear is dependent upon a number of factors. The material characteristics of the particles, such as their shape, hardness, impact velocity and impingement angle are primary factors along with the properties of the surface being eroded. The impingement angle is one of the most important factors and is widely recognized in literature. For ductile materials, the maximum wear rate is found when the impingement angle is approximately 30°, whilst for non-ductile materials the maximum wear rate occurs when the impingement angle is normal to the surface.\n\nSection::::Wear types and mechanisms.:Corrosion and oxidation wear.\n",
"where formula_33 is the ratio between debond length and critical length, formula_34is the strength of fibers, formula_24 is the width of fiber, formula_15is the fraction of fibers and formula_37is the interface friction stress. From the equation, it can be found that higher volume fraction, higher fiber strength and lower interfacial stress can get a better toughening effect.\n\nWhen fiber is ductile, the work from plastic deformation mainly contributes to the improvement of toughens. The additional toughness contributed by plastic deformation can be expressed by:\n\nformula_38\n",
"Section::::Wear types and mechanisms.:Surface fatigue.\n\nSurface fatigue is a process by which the surface of a material is weakened by cyclic loading, which is one type of general material fatigue. Fatigue wear is produced when the wear particles are detached by cyclic crack growth of microcracks on the surface. These microcracks are either superficial cracks or subsurface cracks.\n\nSection::::Wear types and mechanisms.:Fretting wear.\n",
"Using these relationships, their time derivatives, and the above stress-strain relationships for the spring and dashpot elements, the system can be modeled as follows:\n\nor, in dot notation:\n\nThe retardation time, formula_36, is different for each material and is equal to\n\nSection::::Model characteristics.\n",
"BULLET::::- Law 1 – The mass involved in wear is proportional to the distance traveled in the rubbing between the surfaces.\n\nBULLET::::- Law 2 – The mass involved in wear is proportional to the applied load.\n\nBULLET::::- Law 3 – The mass involved in wear is inversely proportional to the hardness of the less hard material.\n\nAn important aspect of wear is emission of wear particles into the environment which increasingly threatens human health and ecology. The first researcher who investigated this topic was Ernest Rabinowicz. \n\nSection::::Physics.:Wear.:Abrasive Wear.\n",
"Creep resistance can be influenced by many factors such as diffusivity, precipitate and grain size.\n",
"BULLET::::- \"Plasticity\" or plastic deformation is the opposite of elastic deformation and is defined as unrecoverable strain. Plastic deformation is retained after the release of the applied stress. Most materials in the linear-elastic category are usually capable of plastic deformation. Brittle materials, like ceramics, do not experience any plastic deformation and will fracture under relatively low strain, while ductile materials such as metallics, lead, or polymers will plasticly deform much more before a fracture initiation.\n",
"Alloying with elements of higher shear modulus or of very different lattice parameters will increase the stiffness and introduce local stress fields respectively. In either case, the dislocation propagation will be hindered at these sites, impeding plasticity and increasing yield strength proportionally with solute concentration. \n\nSolid solution strengthening depends on:\n\nBULLET::::- Concentration of solute atoms\n\nBULLET::::- Shear modulus of solute atoms\n\nBULLET::::- Size of solute atoms\n\nBULLET::::- Valency of solute atoms (for ionic materials)\n",
"It has been verified that the harder a material is, the more it decreases. In the same way, the less two materials are mutually soluble, the more the wear tends to decrease. Finally, as regards the crystalline structure, it is possible to state that some structures are more suitable to resist the wear of others, such as a hexagonal structure with a compact distribution, which can only deform by slipping along the base planes.\n\nSection::::Physics.:Wear.:Wear rate.\n",
"BULLET::::- In work hardening (also referred to as strain hardening) the material is strained past its yield point, e.g. by cold working. Ductile metal becomes harder and stronger as it's physically deformed. The plastic straining generates new dislocations. As the dislocation density increases, further dislocation movement becomes more difficult since they hinder each other, which means the material hardness increases.\n",
"The most easily machined types of metals include aluminum, brass, and softer metals. As materials get harder, denser and stronger, such as steel, stainless steel, titanium, and exotic alloys, they become much harder to machine and take much longer, thus being less manufacturable. Most types of plastic are easy to machine, although additions of fiberglass or carbon fiber can reduce the machinability. Plastics that are particularly soft and gummy may have machinability problems of their own.\n\nSection::::For CNC machining.:Material form.\n",
"At a time \"t\", a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails. When the stress is maintained for a shorter time period, the material undergoes an initial strain until a time \"t\" at which the stress is relieved, at which time the strain immediately decreases (discontinuity) then continues decreasing gradually to a residual strain.\n",
"Soft materials often exhibit higher susceptibility to fretting than hard materials of a similar type. The hardness ratio of the two sliding materials also has an effect on fretting wear. However, softer materials such as polymers can show the opposite effect when they capture hard debris which becomes embedded in their bearing surfaces. They then act as a very effective abrasive agent, wearing down the harder metal with which they are in contact.\n\nSection::::See also.\n\nBULLET::::- Tribology\n\nBULLET::::- Motion-triggered contact insufficiency\n\nBULLET::::- Wear\n\nBULLET::::- Contact mechanics\n\nSection::::External links.\n\nBULLET::::- Fretting and Its Insidious Effects, by EPI Inc.\n",
"Once the response has been observed, the stiffness can be calculated from it. Most elastography techniques find the stiffness of tissue based on one of two main principles:\n\nBULLET::::- For a given applied force (stress), stiffer tissue deforms (strains) less than does softer tissue.\n\nBULLET::::- Mechanical waves (specifically shear waves) travel faster through stiffer tissue than through softer tissue.\n",
"There are two aspects to surface integrity: \"topography characteristics\" and \"surface layer characteristics\". The topography is made up of surface roughness, waviness, errors of form, and flaws. The surface layer characteristics that can change through processing are: plastic deformation, residual stresses, cracks, hardness, overaging, phase changes, recrystallization, intergranular attack, and hydrogen embrittlement. When a traditional manufacturing process is used, such as machining, the surface layer sustains local plastic deformation.\n",
"Soft robots, particularly those designed to imitate life, often must experience cyclic loading in order to move or do the tasks for which they were designed. For example, in the case of the lamprey- or cuttlefish-like robot described above, motion would require electrolyzing water and igniting gas, causing a rapid expansion to propel the robot forward. This repetitive and explosive expansion and contraction would create an environment of intense cyclic loading on the chosen polymeric material. A robot underwater and/or on Europa would be nearly impossible to patch up or replace, so care would need to be taken to choose a material and design that minimizes initiation and propagation of fatigue-cracks. In particular, one should choose a material with a fatigue limit, or a stress-amplitude frequency above which the polymer’s fatigue response is no longer dependent on the frequency. \n",
"All metals have a property called hardness, which is the property of the metal that resists bending. Soft metals are pliable and easy to bend while hard metals are stiff and hard to bend. The hardness of metals can be changed by annealing with heat treatment, or by work hardening a wire by bending it.\n",
"BULLET::::- \"Tensile stress\" is the stress state caused by an applied load that tends to elongate the material along the axis of the applied load, in other words the stress caused by \"pulling\" the material. The strength of structures of equal cross sectional area loaded in tension is independent of shape of the cross section. Materials loaded in tension are susceptible to stress concentrations such as material defects or abrupt changes in geometry. However, materials exhibiting ductile behavior (most metals for example) can tolerate some defects while brittle materials (such as ceramics) can fail well below their ultimate material strength.\n",
"Where formula_8, or the stress experienced by the material and equals the change in length divided by the original length multiplied by the materials elasticity or Yong's modulus \"E\".\n\nSection::::Plasicity.\n\nSection::::Plasicity.:The lower a materials \"Plasticity\", the better the \"RW\".\n",
"The active deformation mechanism in a material depends on the homologous temperature, confining pressure, strain rate, stress, grain size, presence or absence of a pore fluid and its composition, presence or absence of impurities in the material, mineralogy, and presence or absence of a lattice-preferred orientation. Note these variables are not fully independent e.g. for a pure material of a fixed grain size, at a given pressure, temperature and stress, the strain-rate is given by the flow-law associated with the particular mechanism(s). More than one mechanism may be active under a given set of conditions and some mechanisms cannot operate independently but must act in conjunction with another in order that significant permanent strain can develop. In a single deformation episode, the dominant mechanism may change with time e.g. recrystallization to a fine grain size at an early stage may allow diffusive mass transfer processes to become dominant.\n",
"Secondly, because soft robots are made of highly compliant materials, one must consider temperature effects. The yield stress of a material tends to decrease with temperature, and in polymeric materials this effect is even more extreme. At room temperature and higher temperatures, the long chains in many polymers can stretch and slide past each other, preventing the local concentration of stress in one area and making the material ductile. But most polymers undergo a ductile-to-brittle transition temperature below which there is not enough thermal energy for the long chains to respond in that ductile manner, and fracture is much more likely. The tendency of polymeric materials to turn brittle at cooler temperatures is in fact thought to be responsible for the Space Shuttle Challenger disaster, and must be taken very seriously, especially for soft robots that will be implemented in medicine. A ductile-to-brittle transition temperature need not be what one might consider \"cold,\" and is in fact characteristic of the material itself, depending on its crystallinity, toughness, side-group size (in the case of polymers), and other factors. \n",
"The method described in this section is meant as an overview of the direct stiffness method. Additional sources should be consulted for more details on the process as well as the assumptions about material properties inherent in the process.\n\nSection::::Applications.\n",
"For some materials, e.g. elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%, thus other more complex definitions of strain are required, such as \"stretch\", \"logarithmic strain\", \"Green strain\", and \"Almansi strain\". Elastomers and shape memory metals such as Nitinol exhibit large elastic deformation ranges, as does rubber. However, elasticity is nonlinear in these materials. \n\nNormal metals, ceramics and most crystals show linear elasticity and a smaller elastic range.\n\nLinear elastic deformation is governed by Hooke's law, which states:\n",
"Wear of metals occurs by plastic displacement of surface and near-surface material and by detachment of particles that form wear debris. The particle size may vary from millimeters to nanometers. This process may occur by contact with other metals, nonmetallic solids, flowing liquids, solid particles or liquid droplets entrained in flowing gasses.\n",
"BULLET::::- They exhibit plasticity—the ability to permanently change shape in response to the force, but remain in one piece. The yield strength is the point at which elastic deformation gives way to plastic deformation. Deformation in the plastic range is non-linear, and is described by the stress-strain curve. This response produces the observed properties of scratch and indentation hardness, as described and measured in materials science. Some materials exhibit both elasticity and viscosity when undergoing plastic deformation; this is called viscoelasticity.\n\nBULLET::::- They fracture—split into two or more pieces.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-17213 | Why is the incubation time of an HIV infection until the development of AIDS so long? | Your body does a pretty good job at fending off the virus for a long time, but eventually it gets overwhelmed and your CD4 count drops as your viral load skyrockets. This is assuming no treatment. A person on treatment could be positive and have a very low viral load and normal CD4 count for their entire life. [This diagram]( URL_0 ) shows what the normal course of the disease looks like. | [
"During latency, an infection is subclinical. With respect to viral infections, in incubation the virus is replicating. This is in contrast to viral latency, a form of dormancy in which the virus does not replicate. An example of latency is HIV infection. HIV may at first have no symptoms and show no signs of AIDS, despite HIV replicating in the lymphatic system and rapidly accumulating a large viral load. These persons may be infectious.\n\nSection::::Intrinsic and extrinsic incubation period.\n",
"BULLET::::- HIV: The window period for HIV may be up to three months, depending on the test method and other factors. RNA based HIV tests has the lowest window period. Modern and accurate testing abilities can cut this period to 25 days, 16 days, or even as low as 12 days, again, depending on the type of test and the quality of its administration and interpretation.\n",
"For example, one study used serial passage in baboons to create a strain of HIV-2 that is particularly virulent to baboons. Typical strains of HIV-2 only infect baboons slowly. This makes it challenging for scientists to use HIV-2 in animal models of HIV-1, since the animals in the model will only show symptoms slowly. However, the more virulent strain of HIV-2 could be practical for use in animal models.\n",
"HIV set point\n\nThe HIV set point is the viral load of a person infected with HIV, which stabilizes after a period of acute HIV infection. The set point is reached after the immune system has developed specific Cytotoxic T cells and begins to attempt to fight the virus. The higher the viral load of the set point, the faster the virus will progress to AIDS; the lower the viral load of the set point, the longer the patient will remain in clinical latency. The only effective way to lower the set point is through highly active antiretroviral therapy.\n",
"There is another, smaller percentage of individuals who have been recently identified. These are called Highly Exposed Persistently Seronegative (HEPS). This is a small group of individuals and has been observed only in a group of uninfected HIV-negative prostitutes in Kenya and in The Gambia. When these individuals' PBMCs are stimulated with HIV-1 peptides, they have lymphoproliferative activity and have HIV-1 specific CD8+ CTL activity suggesting that transient infection may have occurred. This does not occur in unexposed individuals. What is interesting, is that the CTL epitope specificity differs between HEPS and HIV positive individuals, and in HEPS, the maintenance of responses appears to be dependent upon persistent exposure to HIV.\n",
"There are three main stages of HIV infection: acute infection, clinical latency, and AIDS.\n\nSection::::Signs and symptoms.:Acute infection.\n",
"Acute HIV infection, primary HIV infection or acute seroconversion syndrome is the second stage of HIV infection. It occurs after the incubation stage, before the latency stage and the potential AIDS succeeding the latency stage.\n",
"Section::::Host genetic susceptibility.\n",
"One of the best-studied viruses that does this is HIV. HIV uses reverse transcriptase to create a DNA copy of its RNA genome. HIV latency allows the virus to largely avoid the immune system. Like other viruses that go latent, it does not typically cause symptoms while latent. Unfortunately, HIV in proviral latency is nearly impossible to target with antiretroviral drugs.\n\nSection::::Mechanisms.:Maintaining latency.\n",
"Three instances of delayed HIV seroconversion occurring in health-care workers have been reported; in these instances, the health-care workers tested negative for HIV antibodies greater than 6 months postexposure but were seropositive within 12 months after the exposure. DNA sequencing confirmed the source of infection in one instance. Two of the delayed seroconversions were associated with simultaneous exposure to hepatitis C virus (HCV). In one case, co-infection was associated with a rapidly fatal HCV disease course; however, it is not known whether HCV directly influences the risk for or course of HIV infection or is a marker for other exposure-related factors.\n",
"HIV disease progression rates\n\nFollowing infection with HIV-1, the rate of clinical disease progression varies between individuals. Factors such as host susceptibility, genetics and immune function, health care and co-infections as well as viral genetic variability may affect the rate of progression to the point of needing to take medication in order not to develop AIDS.\n\nSection::::Rapid progressors.\n",
"Section::::Virus characteristics.\n\nHIV binds to immune cell surface receptors, including CD 4 and CXCR4 or CD4 and CCR5. The binding causes conformation changes and results in the membrane fusion between HIV and cell membrane. Active infection occurs in most cells, while latent infection occurs in much fewer cells 1, 2 and at very early stages of HIV infection. 9, 35 In active infection, HIV pro virus is active and HIV virus particles are actively replicated; and the infected cells continuously release viral progeny; while in latent infection, HIV pro virus is transcriptionally silenced and no viral progeny is produced.\n",
"The initial period following the contraction of HIV is called acute HIV, primary HIV or acute retroviral syndrome. Many individuals develop an influenza-like illness or a mononucleosis-like illness 2–4 weeks after exposure while others have no significant symptoms. Symptoms occur in 40–90% of cases and most commonly include fever, large tender lymph nodes, throat inflammation, a rash, headache, tiredness, and/or sores of the mouth and genitals. The rash, which occurs in 20–50% of cases, presents itself on the trunk and is maculopapular, classically. Some people also develop opportunistic infections at this stage. Gastrointestinal symptoms, such as vomiting or diarrhea may occur. Neurological symptoms of peripheral neuropathy or Guillain–Barré syndrome also occurs. The duration of the symptoms varies, but is usually one or two weeks.\n",
"Diagnosis of HIV/AIDS\n\nHIV tests are used to detect the presence of the human immunodeficiency virus (HIV), the virus that causes acquired immunodeficiency syndrome (AIDS), in serum, saliva, or urine. Such tests may detect antibodies, antigens, or RNA.\n\nSection::::AIDS diagnosis.\n\nAIDS is diagnosed separately from HIV.\n\nSection::::Terminology.\n\nThe window period is the time from infection until a test can detect any change. The average window period with HIV-1 antibody tests is 25 days for subtype B. Antigen testing cuts the window period to approximately 16 days and nucleic acid testing (NAT) further reduces this period to 12 days.\n",
"BULLET::::- New, aggressive strain of HIV discovered in Cuba Researchers at the University of Leuven in Belgium say the HIV strain CRF19 can progress to AIDS within two to three years of exposure to virus. Typically, HIV takes approximately 10 years to develop into AIDS. The researchers found that patients with the CRF19 variant had more virus in their blood than patients who had more common strains. Patients with CRF19 may start getting sick before they even know they've been infected, which ultimately means there's a significantly shorter time span to stop the disease's progression. The researchers suspect that fragments of other subsets of the virus fasten to each other through an enzyme which makes the virus more powerful and more easily replicated in the body, thus the faster progression.\n",
"A strong immune defense reduces the number of viral particles in the blood stream, marking the start of secondary or chronic HIV infection. The secondary stage of HIV infection can vary between two weeks and 20 years. During the secondary phase of infection, HIV is active within lymph nodes, which typically become persistently swollen, in response to large amounts of virus that become trapped in the follicular dendritic cells (FDC) network. The surrounding tissues that are rich in CD4 T cells may also become infected, and viral particles accumulate both in infected cells and as free virus. Individuals who are in this phase are still infectious. During this time, CD4 CD45RO T cells carry most of the proviral load.\n",
"Coinfections or immunizations may enhance viral replication by inducing a response and activation of the immune system. This activation facilitates the three key stages of the viral life cycle: entry to the cell; reverse transcription and proviral transcription. Chemokine receptors are vital for the entry of HIV into cells. The expression of these receptors is inducible by immune activation caused through infection or immunization, thus augmenting the number of cells that are able to be infected by HIV-1. Both reverse transcription of the HIV-1 genome and the rate of transcription of proviral DNA rely upon the activation state of the cell and are less likely to be successful in quiescent cells. In activated cells there is an increase in the cytoplasmic concentrations of mediators required for reverse transcription of the HIV genome. Activated cells also release IFN-alpha which acts on an autocrine and paracrine loop that up-regulates the levels of physiologically active NF-kappa B which activates host cell genes as well as the HIV-1 LTR. The impact of co-infections by micro-organisms such as \"Mycobacterium tuberculosis\" can be important in disease progression, particularly for those who have a high prevalence of chronic and recurrent acute infections and poor access to medical care. Often, survival depends upon the initial AIDS-defining illness. Co-infection with DNA viruses such as HTLV-1, herpes simplex virus-2, varicella zoster virus and cytomegalovirus may enhance proviral DNA transcription and thus viral load as they may encode proteins that are able to trans-activate the expression of the HIV-1 pro-viral DNA. Frequent exposure to helminth infections, which are endemic in Africa, activates individual immune systems, thereby shifting the cytokine balance away from an initial Th1 cell response against viruses and bacteria which would occur in the uninfected person to a less protective T helper 0/2-type response. HIV-1 also promotes a Th1 to Th0 shift and replicates preferentially in Th2 and Th0 cells. This makes the host more susceptible to and less able to cope with infection with HIV-1, viruses and some types of bacteria. Ironically, exposure to dengue virus seems to slow HIV progression rates temporarily.\n",
"HIV latency, and the consequent viral reservoir in CD4 T cells, dendritic cells, as well as macrophages, is the main barrier to eradication of the virus.\n",
"An individual may only develop signs of an infection after a period of subclinical infection, a duration that is called the incubation period. This is the case, for example, for subclinical sexually transmitted diseases such as AIDS and genital warts. Individuals with such subclinical infections, and those that never develop overt illness, creates a reserve of individuals that can transmit an infectious agent to infect other individuals. Because such cases of infections do not come to clinical attention, health statistics can often fail to measure the true prevalence of an infection in a population, and this prevents the accurate modeling of its infectious transmission.\n",
"BULLET::::- The pathophysiology of HIV/AIDS involves, upon acquisition of the virus, that the virus replicates inside and kills T helper cells, which are required for almost all adaptive immune responses. There is an initial period of influenza-like illness, and then a latent, asymptomatic phase. When the CD4 lymphocyte count falls below 200 cells/ml of blood, the HIV host has progressed to AIDS, a condition characterized by deficiency in cell-mediated immunity and the resulting increased susceptibility to opportunistic infections and certain forms of cancer.\n",
"The viral load of an infected person is an important risk factor in both sexual and mother-to-child transmission. During the first 2.5 months of an HIV infection a person's infectiousness is twelve times higher due to the high viral load associated with acute HIV. If the person is in the late stages of infection, rates of transmission are approximately eightfold greater. \n",
"The stages of HIV infection are acute infection (also known as primary infection), latency and AIDS. Acute infection lasts for several weeks and may include symptoms such as fever, swollen lymph nodes, inflammation of the throat, rash, muscle pain, malaise, and mouth and esophageal sores. The latency stage involves few or no symptoms and can last anywhere from two weeks to twenty years or more, depending on the individual. AIDS, the final stage of HIV infection, is defined by low CD4+ T cell counts (fewer than 200 per microliter), various opportunistic infections, cancers and other conditions.\n\nSection::::Acute infection.\n",
"HIV infections have a prolonged and variable course. The median period of time between infection with HIV and the onset of clinically apparent disease is approximately 10 years in industrialized countries, according to prospective studies of homosexual men in which dates of seroconversion are known. Similar estimates of asymptomatic periods have been made for HIV-infected blood-transfusion recipients, injection-drug users and adult hemophiliacs.\n",
"While latent or latency period may be synonymous, a distinction is sometimes made between incubation period, the period between infection and onset of the disease, and latent period, the time from infection to infectiousness. Which is shorter depends on the disease. A person may carry a disease, such as \"Streptococcus\" in the throat, without exhibiting any symptoms. Depending on the disease, the person may or may not be contagious during the incubation period.\n",
"The rapid HIV test enabled a reliable outcome to be generated within 15 to 20 minutes, which in practice lowered the threshold for such testing. Because in this way more people opted for testing, more HIV-positive people could be reached. A clear advantage is that the diagnosis is made at the earliest possible stage of the infection, facilitating adequate treatment. In addition, through successful early treatment, the infectious period for HIV-positive people can be reduced. \n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-01077 | How do artists make money off Spotify if it's free to use? | Spotify also has a paid subscription service. It takes that money and divides it up between its own running costs and payments to artists every time one of their songs is played. | [
"In December 2013, the company launched a new website, \"Spotify for Artists\", that explained its business model and revenue data. Spotify gets its content from major record labels as well as independent artists, and pays copyright holders royalties for streamed music. The company pays 70% of its total revenue to rights holders. Spotify for Artists states that the company does not have a fixed per-play rate, instead considers factors such as the user's home country and the individual artist's royalty rate. Rights holders received an average per-play payout between $.006 and $.0084.\n",
"Icelandic singer Björk initially chose not to release her album \"Vulnicura\" on Spotify, saying: \"This streaming thing just does not feel right. I don’t know why, but it just seems insane. ... To work on something for two or three years and then just, 'Oh, here it is for free.' It's not about the money; it’s about respect, you know? Respect for the craft and the amount of work you put into it.\"\n",
"In November 2018, Spotify announced it is opening up Spotify Connect to all of the users using its Free service, however these changes still required products supporting Spotify Connect to support the latest SDK.\n\nSection::::Business model.:Monetization.\n\nIn 2007, just after launch, the company made a loss of 31.8 million Swedish kronor ($4.4 million).\n\nIn October 2010, \"Wired\" reported that Spotify was making more money for labels in Sweden than any other retailer \"online or off\".\n",
"In October 2017, Microsoft announced that it would be ending its Groove Music streaming service by December, with all music from users transferring to Spotify as part of a new partnership.\n\nIn November 2017, it was announced that Pat McGrath Labs cosmetics would be sold through Spotify via Merchbar on singer Maggie Lindemann's artist page.\n",
"BBC \"Music Week\" editor Tim Ingham wrote: \"Unlike buying a CD or download, streaming is not a one-off payment. Hundreds of millions of streams of tracks are happening each and every day, which quickly multiplies the potential revenues on offer – and is a constant long-term source of income for artists.\"\n\nSection::::Business model.:Accounts and subscriptions.\n\nAs of November 2018, the two Spotify subscription types, all offering unlimited listening time, are:\n",
"In September 2018, Spotify announced \"Upload Beta\", allowing artists to upload directly to the platform instead of going through a distributor or record label. The feature was rolled out to a small number of US-based artists by invitation only. Uploading was free and artists received 100% of the revenue from songs they uploaded; artists were able to control when their release went public. On July 1, 2019 Spotify deprecated the program and announced plans to stop accepting direct uploads by the end of that month, and eventually remove all content uploaded in this manner.\n\nSection::::Business model.:Industry initiatives.\n",
"In March 2014, Spotify introduced a new, discounted Premium subscription tier for students. Students in the United States enrolled in a university can pay half-price for a Premium subscription. In April 2017, the Students offer was expanded to 33 more countries.\n\nSpotify introduced its Family subscription in October 2014, connecting up to five family members for a shared Premium subscription. Spotify Family was upgraded in May 2016, letting up to six people share a subscription and reducing the price.\n",
"Following the release of Frank Ocean's exclusive release of his album \"Blonde,\" Lucian Grainge, CEO of Universal Music Group banned exclusive distribution with streaming services by UMG artists.\n\nBut exclusive releases are still poised to benefit both the artists themselves, and the streaming service. It is possible that the practice will continue from superstar artists who record and release music from their own labels (like Frank Ocean did with \"Blonde\"). This would be yet another reason for artists to leave their labels; they would reap more financial benefits in cutting out the labels.\n\nSection::::Crowdfunding.\n",
"In October 2017, Spotify launched \"Rise\", a program aimed at promoting emerging artists.\n\nSection::::Business model.:Stations by Spotify.\n",
"Spotify offers an unlimited subscription package, close to the Open Music Model (OMM)—estimated economic equilibrium—for the recording industry. However, the incorporation of digital rights management (DRM) protection diverges from the OMM and competitors such as iTunes Store and Amazon Music that have dropped DRM.\n\nSpotify encourages people to pay for music, with subscriptions as its main revenue source. The subscription removes advertisements and limits, and increases song bitrates to 320 kbit/s.\n\nFor example, in Norway, the figure of 1.2 billion unauthorized song downloads in 2008 is compared to a figure of 210 million from 2012.\n",
"Unlike physical or download sales, which pay artists a fixed price per song or album sold, Spotify pays royalties based on the number of artists' streams as a proportion of total songs streamed. It distributes approximately 70% of total revenue to rights holders, who then pay artists based on their individual agreements. Spotify has faced criticism from artists and producers including Taylor Swift and Thom Yorke, who have argued that it does not fairly compensate musicians. In 2017, as part of its efforts to renegotiate license deals for an interest in going public, Spotify announced that artists would be able to make albums temporarily exclusive to paid subscriptions if they are part of Universal Music Group or the Merlin Network.\n",
"Section::::Platforms.\n",
"The variable (and some say unsustainable) nature of this compensation, has led to criticism. In a 2009 \"Guardian\" article, Helienne Lindvall wrote about why \"major labels love Spotify\", writing that the labels receive 18% of shares from the streaming company—something that artists themselves never actually get. She further wrote that \"On Spotify, it seems, artists are not equal. There are indie labels that, as opposed to the majors and Merlin members, receive no advance, receive no minimum per stream, and only get a 50% share of ad revenue on a pro-rata basis (which so far has amounted to next to nothing).\" In 2009, Swedish musician Magnus Uggla pulled his music from the service, stating that after six months he had earned \"what a mediocre busker could earn in a day\".\n",
"Section::::History.:Other developments.:User growth.\n",
"While working at Founders Fund, Parker had been looking to invest in a company that could further Napster's music sharing mission legally. In 2009 a friend showed him Spotify, a Swedish streaming music service, and Parker sent an email to Spotify's founder Daniel Ek. The pair traded emails, and in 2010 Parker invested US$15 million in Spotify. Parker, who currently serves on Spotify's board, negotiated with Warner and Universal on Spotify's behalf, and in July 2011, Spotify announced its U.S. launch. At Facebook's f8 conference that year, Parker announced a partnership between Facebook and Spotify, which allowed users to share their Spotify playlists on their Facebook profiles.\n",
"In April 2018, Spotify announced a discounted entertainment bundle with video-on-demand provider Hulu, which included discounted rates for university students.\n\nSection::::History.:Other developments.:Dispute with Apple.\n",
"Mighty Audio is one of two official partners of Spotify in the offline streaming space, the other being Samsung. The first product Mighty is 1.5 x 1.5 inches, 0.6 ounces, drop and water resistant, Bluetooth and WiFi enabled, has an 8GB storage capacity, and up to 5 hours of battery life.\n",
"Norwegian newspaper \"Dagbladet\" reported in 2009 that the record label Racing Junior earned only NOK 19 ($3.00 USD) after their artists had been streamed over 55,100 times. According to an infographic by David McCandless, an artist on Spotify would need over four million streams per month to earn the U.S. minimum monthly wage of $1,160. In October 2011, U.S. independent label Projekt Records stated: \"In the world I want to live in, I envision artists fairly compensated for their creations, because we (the audience) believe in the value of what artists create. The artist's passion, dedication, and expression is respected and rewarded. Spotify is NOT a service that does this. Projekt will not be part of this unprincipled concept.\"\n",
"Section::::History.:Other developments.:Company partnerships.\n\nIn January 2015, Sony announced PlayStation Music, a new music service with Spotify as its exclusive partner. PlayStation Music incorporates the Spotify service into Sony's PlayStation 3 and PlayStation 4 gaming consoles, and Sony Xperia mobile devices, in 41 markets around the world. The service enables users to listen to their favourite tracks while gaming. The new service launched on 30 March 2015.\n",
"BULLET::::- In 2005, Yahoo! Music was launched at $5 per month with digital rights management.\n\nBULLET::::- In 2011, Spotify introduced a $5 per month premium subscription in the United States with digital rights management.\n\nBULLET::::- In 2011, Microsoft Zune offered a subscription service for music downloads with digital rights management known as a Zune Pass, at $10 a month.\n\nBULLET::::- in 2012, Google Play Music launched unlimited music streaming for a subscription price of $9.99 per month. Users can upload their own MP3s to the service and download them, but cannot download songs they have not uploaded themselves.\n",
"In December 2013, CEO Daniel Ek announced that Android and iOS smartphone users with the free service tier could listen to music in Shuffle mode, a feature in which users can stream music by specific artists and playlists without being able to pick which songs to hear. Mobile listening previously was not allowed in Spotify Free accounts. Ek stated that \"We're giving people the best free music experience in the history of the smartphone.\"\n",
"In July 2018, Spotify introduced a new beta feature that gives artists, labels and teams an easy way to submit unreleased music directly to Spotify's editorial team for playlist consideration.\n",
"Apple Inc. responded to the increasing demand for subscription-based streaming services (evident by Spotify's success) in June 2015, with the release of Apple Music. Operating in over 100 countries, Apple music offers users their own take on 24/7 radio stations and music suggestions: and \"for you\" and \"new\" tab managed by talented music experts.\n\nPricing: $9.99 per month for single users, and $14.99 for families (up to 6).\n\nBusiness Model: subscription-based streaming.\n\nSection::::Digital Music Distributors.:YouTube.\n",
"BULLET::::- Audio – While songs can be streamed for free, generally in order to download most licensed music, consumers need to purchase songs from web stores, such as the popular iTunes. However, Spotify Premium is emerging as a new model for purchasing digital content on the web: consumers pay a monthly fee to unlimited streaming and downloading from Spotify's music library.\n",
"Several bands from the 1960s and 1970s delayed their work being made available on Spotify or any streaming services. Until the end of 2013, Led Zeppelin's music was not available, before the parties reached an agreement in December. In 2015, AC/DC and The Beatles allowed their music on streaming services. In 2018, many songs from the album \"Get Happy!!\" by Elvis Costello and the Attractions were removed from the site, including \"New Amsterdam\" and many others.\n"
] | [
"Spotify is completely free."
] | [
"Spotify has a paid subscription option."
] | [
"false presupposition"
] | [
"Spotify is completely free.",
"Spotify is completely free."
] | [
"false presupposition",
"normal"
] | [
"Spotify has a paid subscription option.",
"Spotify has a paid subscription option."
] |
2018-01952 | Why do our toes taper down in size? | The big toe is a leftover from our days as apes. Their feet were shaped more like our modern hands and the big toe functioned like a thumb, allowing them to grip onto branches to avoid falling. [Human vs Chimp foot]( URL_0 ) Although good in trees, this wasn’t useful when our ancestors switched to standing and walking on two feet. Thus our feet changed shape to reflect their new role, and our toes reduced to what they are today. | [
"Children until the age of 3 to 4 have a degree of genu varum. The child sits with the soles of the feet facing one another; the tibia and femur are curved outwards; and, if the limbs are extended, although the ankles are in contact, there is a distinct space between the knee-joints. During the first year of life, a gradual change takes place. The knee-joints approach one another; the femur slopes downward and inward towards the knee joints; the tibia become straight; and the sole of the foot faces almost directly downwards.\n",
"Human feet evolved enlarged heels to bear the weight that evolution also increased. The human foot evolved as a platform to support the entire weight of the body, rather than acting as a grasping structure, as it did in early hominids. Humans therefore have smaller toes than their bipedal ancestors. This includes a non-opposable hallux, which is relocated in line with the other toes. Moreover, humans have a foot arch rather than flat feet. When non-human hominids walk upright, weight is transmitted from the heel, along the outside of the foot, and then through the middle toes while a human foot transmits weight from the heel, along the outside of the foot, across the ball of the foot and finally through the big toe. This transference of weight contributes to energy conservation during locomotion.\n",
"In humans, the hallux is usually longer than the second toe, but in some individuals, it may not be the longest toe. There is an inherited trait in humans, where the dominant gene causes a longer second toe (\"Morton's toe\" or \"Greek foot\") while the homozygous recessive genotype presents with the more common trait: a longer hallux. People with the rare genetic disease fibrodysplasia ossificans progressiva characteristically have a short hallux which appears to turn inward, or medially, in relation to the foot.\n\nSection::::Structure.:Variation.\n",
"Supination is the opposite, and occurs when the foot impacts the ground and there is not enough of an “inward roll” in the foot’s motion. The weight of the body isn’t transferred at all to the big toe, forcing the outside of the foot and the smaller toes which can't handle the stress as well to take the majority of the overweight instead.\n",
"Commonly known as road founder, mechanical separation occurs when horses with long toes are worked extensively on hard ground. The long toes and hard ground together contribute to delayed breakover, hence mechanical separation of the laminae at the toe. Historically, this was seen in carriage horses bred for heavy bodies and long, slim legs with relatively small hooves; their hooves were trimmed for long toes (to make them lift their feet higher, enhancing their stylish \"action\"), and they were worked at speed on hard roads. Road founder is also seen in overweight animals, particularly when hooves are allowed to grow long; classic examples are ponies on pasture board in spring, and pregnant mares.\n",
"Shorter toes: Human toes are straight and extremely short in relation to body size compared to other animals. In running, the toes support 50 to 75% of body mass in humans. Impulse and mechanical work increase in humans as toe length increases, showing that it is energetically favorable to have shorter toes. The costs of shorter toes are decreased gripping capabilities and power output. However, the efficiency benefits seem to outweigh these costs, as the toes of \"A. afarensis\" remains were shorter than great apes, but 40% longer than modern humans, meaning that there is a trend toward shorter toes as the primate species moves away from tree-dwelling. This 40% increase in toe length would theoretically induce a flexor impulse 2.5 times that of modern humans, which would require twice as much mechanical work to stabilize.\n",
"The long bone category includes the femora, tibiae, and fibulae of the legs; the humeri, radii, and ulnae of the arms; metacarpals and metatarsals of the hands and feet, the phalanges of the fingers and toes, and the clavicles or collar bones. The long bones of the human leg comprise nearly half of adult height. The other primary skeletal component of height are the vertebrae and skull.\n",
"In humans the largest bone in the tarsus is the calcaneus, which is the weight-bearing bone within the heel of the foot.\n\nSection::::Human anatomy.\n\nSection::::Human anatomy.:Bones.\n\nThe talus bone or ankle bone is connected superiorly to the two bones of the lower leg, the tibia and fibula, to form the ankle joint or talocrural joint; inferiorly, at the subtalar joint, to the calcaneus or heel bone. Together, the talus and calcaneus form the hindfoot.\n",
"Humans usually have five toes on each foot. When more than five toes are present, this is known as polydactyly. Other variants may include syndactyly or arachnodactyly. Forefoot shape, including toe shape, exhibits significant variation among people; these differences can be measured and have been statistically correlated with ethnicity. Such deviations may affect comfort and fit for various shoe types. Research conducted for the U.S. Army indicated that larger feet may still have smaller arches, toe length, and toe-breadth.\n\nSection::::Function.\n",
"\"Muscles of the little toe\": Stretching laterally from the calcaneus to the proximal phalanx of the fifth digit, abductor digiti minimi form the lateral margin of the foot and is the largest of the muscles of the fifth digit. Arising from the base of the fifth metatarsal, flexor digiti minimi is inserted together with abductor on the first phalanx. Often absent, opponens digiti minimi originates near the cuboid bone and is inserted on the fifth metatarsal bone. These three muscles act to support the arch of the foot and to plantar flex the fifth digit.\n",
"At birth, only the metaphyses of the \"long bones\" are present. The long bones are those that grow primarily by elongation at an epiphysis at one end of the growing bone. The long bones include the femurs, tibias, and fibulas of the lower limb, the humeri, radii, and ulnas of the upper limb (arm + forearm), and the phalanges of the fingers and toes. The long bones of the leg comprise nearly half of adult height. The other primary skeletal component of height is the spine and skull.\n",
"Limb and foot structure of representative terrestrial vertebrates:\n\nSection::::Structure.:Variability in scaling and limb coordination.\n",
"Freiston and Galis look at the development of ribs, digits, and mammalian asymmetry. They argue that this construction is relevant for the study of disease, the consistency in evolution of body plans, and understanding of developmental constraints. Sexual dimorphism in prenatal digit ratio was found as early as 14 weeks and was maintained whether or not the fleshy finger part was included.\n\nSection::::Language and cognitive studies.\n",
"Section::::Human evolution.:Growth pattern of children.\n\nDesmond Collins who was an Extension Lecturer of Archaeology at London University said that the lengthened youth period of humans is part of neoteny.\n",
"Other theorists have argued that neoteny has not been the main cause of human evolution, because humans only retain some juvenile traits, while relinquishing others. For example, the high leg-to-body ratio (long legs) of adult humans as opposed to human infants shows that there is not a holistic trend in humans towards neoteny when compared to the other great apes. Andrew Arthur Abbie agrees, citing the gerontomorphic fleshy human nose and long human legs as contradicting the neoteny hominid evolution hypothesis, although he does believe humans are generally neotenous. Brian K. Hall also cites the long legs of humans as a peramorphic trait, which is in sharp contrast to neoteny.\n",
"Section::::Structure.:Columnar organization of limb structures.\n",
"The midfoot is the intermediate portion of the foot between the hindfoot and forefoot. The structures in this region are intermediate in size, and typically transmit loads from the hindfoot to the forefoot. The human transverse tarsal joint of the midfoot transmits forces from the subtalar joint in the hindfoot to the forefoot joints (metatarsophalangeal and interphalangeal) and associated bones (metatarsals and phalanges). The midfoot of the dog, horse and elephant contains similar intermediate structures having similar functions to those of the human midfoot.\n",
"If the big toe and the second toe are the same length (as measured from the MPT joint to the tip, including only the phalanges), then the second toe will protrude farther than the big toe, as shown in the photo. If the second toe is shorter than the big toe, the big toe may still protrude the furthest, or there may be little difference, as shown in the X-ray.\n\nSection::::Presentation.\n",
"The human foot consists of multiple bones and soft tissues which support the weight of the upright human. Specifically, the toes assist the human while walking, providing balance, weight-bearing, and thrust during gait.\n\nSection::::Clinical significance.\n\nA sprain or strain to the small interphalangeal joints of the toe is commonly called a stubbed toe. A sprain or strain where the toe joins to the foot is called turf toe.\n\nLong-term use of improperly sized shoes can cause misalignment of toes, as well as other orthopedic problems.\n",
"The metatarsal bones behind the toes vary in relative length. For most feet, a smooth curve can be traced through the joints at the bases of the toes. But in Morton's foot, the line has to bend more sharply to go through the base of the big toe, as shown in the diagram. This is because the first metatarsal, behind the big toe, is short compared to the second metatarsal, next to it. The longer second metatarsal puts the joint at the base of the second toe (the second metatarsal-phalangeal, or MTP, joint) further forward.\n",
"The opponens digiti minimi originates from the long plantar ligament and the plantar tendinous sheath of peroneus longus and is inserted on the fifth metatarsal. When present, it acts to plantar flex the fifth digit and supports the plantar arch. The flexor digiti minimi arises from the region of base of the fifth metatarsal and is inserted onto the base of the first phalanx of the fifth digit where it is usually merged with the abductor of the first digit. It acts to plantar flex the last digit. The largest and longest muscles of the little toe is the abductor digiti minimi. Stretching from the lateral process of the calcaneus, with a second attachment on the base of the fifth metatarsal, to the base of the fifth digit's first phalanx, the muscle forms the lateral edge of the sole. Except for supporting the arch, it plantar flexes the little toe and also acts as an abductor.\n",
"Human knee joints are enlarged for the same reason as the hip – to better support an increased amount of body weight. The degree of knee extension (the angle between the thigh and shank in a walking cycle) has decreased. The changing pattern of the knee joint angle of humans shows a small extension peak, called the “double knee action,” in the midstance phase. Double knee action decreases energy lost by vertical movement of the center of gravity. Humans walk with their knees kept straight and the thighs bent inward so that the knees are almost directly under the body, rather than out to the side, as is the case in ancestral hominids. This type of gait also aids balance.\n",
"BULLET::::- The hallux is primarily flexed by the flexor hallucis longus muscle, located in the deep posterior of the lower leg, via the flexor hallucis longus tendon. Additional flexion control is provided by the flexor hallucis brevis. It is extended by the abductor hallucis muscle and the adductor hallucis muscle.\n\nBULLET::::- The little toe has a separate set of control muscles and tendon attachments, the flexor and abductor digiti minimi. Numerous other contribute to fine motor control of the foot. The connective tendons between the minor toes account for the inability to actuate individual toes.\n\nSection::::Structure.:Blood supply.\n",
"Three muscles insert on the calcaneus: the gastrocnemius, soleus, and plantaris. These muscles are part of the posterior compartment of the leg and aid in walking, running and jumping. Their specific functions include plantarflexion of the foot, flexion of the knee, and steadying the leg on the ankle during standing. The calcaneus also serves as origin for several short muscles that run along the sole of the foot and control the toes.\n\nSection::::Clinical significance.\n",
"The second and third dorsal interossei muscles attaches to the third metatarsal bone. The second dorsal interossei from the medial side of the bone and the third dorsal interossei from the lateral side. The function of the muscle is to spread the toes.\n\nThe first Plantar interossei muscle originates from the medial side of the base and shaft of the third metatarsal. The function of the muscle is to move the third toe medially and move the toes together.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-03548 | How do processed foods increase the risk of cancer? | The uncontrolled mutation of cells is a result of something damaging their DNA. All sorts of things can damage a cell's DNA, including chemicals you ingest or inhale. The claim that processed foods increase the risk of cancer likely comes from the fact that they are more likely to contain preservatives or other chemicals as a result of how the food was made and prepared. If these chemicals do in fact damage DNA when a cell absorbs them, then they are likely to increase your risk of cancer. | [
"Some specific foods have been linked to specific cancers. Studies have shown that individuals that eat red or processed meat have a higher risk of developing breast cancer, prostate cancer, and pancreatic cancer. This may be partially explained by the presence of carcinogens in food cooked at high temperatures. Several risk factors for the development of colorectal cancer include high intake of fat, alcohol, red and processed meats, obesity, and lack of physical exercise. A high-salt diet is linked to gastric cancer. Aflatoxin B1, a frequent food contaminate, is associated with liver cancer. Betel nut chewing has been shown to cause oral cancers.\n",
"Foods and drinks that promote weight gain: Limit consumption of energy-dense foods; Avoid sugary drinks. As calorie consumption is one of the harder tasks when it comes to monitoring weight-gain, it is a very important component in reducing the risk of cancer. Foods that have been processed heavily tend to contain more sugar and fat. This method usually increases the “taste” of those foods. As a result of the processing, the calorie level in those foods tends to spike. Monitoring the size and how many portions you are consuming each day of the processed foods tends to help reduce the risk of cancer. Foods that are typically low in calorie density tend to contain higher amounts of healthy fiber and water content.\n",
"Food processing does have some benefits, such as making food last longer and making products more convenient. However, there are drawbacks to relying on a lot of heavily processed foods. Whole foods and those that are only minimally processed, like frozen vegetables without any sauce, tend to be more healthy. An unhealthy diet high in fat, added sugar and salt, such as one containing a lot of highly-processed foods, can increase the risk for cancer, type 2 diabetes and heart disease, according to the World Health Organization.\n\nSection::::Added sodium.\n",
"While many dietary recommendations have been proposed to reduce cancer risks, the evidence to support them is not definitive. The primary dietary factors that increase risk are obesity and alcohol consumption. Diets low in fruits and vegetables and high in red meat have been implicated but reviews and meta-analyses do not come to a consistent conclusion. A 2014 meta-analysis find no relationship between fruits and vegetables and cancer. Coffee is associated with a reduced risk of liver cancer. Studies have linked excess consumption of red or processed meat to an increased risk of breast cancer, colon cancer and pancreatic cancer, a phenomenon that could be due to the presence of carcinogens in meats cooked at high temperatures. In 2015 the IARC reported that eating processed meat (e.g., bacon, ham, hot dogs, sausages) and, to a lesser degree, red meat was linked to some cancers.\n",
"Note that the \"food safe\" symbol doesn't guarantee food safety under all conditions. The composition of materials contacting foodstuffs aren't the only factor controlling carcinogen migration into foodstuffs, there are other factors that can have a significant role in food safety. Examples include: the temperature of food products, the fat content of the food products and total time of contact with a surface. The safety of foam food containers is currently debated and is a good example of all three of these factors at play. Polystyrene may melt when in contact with hot or fatty foods and may pose a safety risk. In the United States, materials in contact with food may not contain more than 1% polystyrene by weight (0.5% for fatty foods).\n",
"The International Agency for Research on Cancer at the World Health Organization classifies processed meat as Group 1 (carcinogenic to humans), because the IARC has found sufficient evidence that consumption of processed meat by humans causes colorectal cancer.\n\nA 2016 report by the American Institute for Cancer Research and the World Cancer Research Fund found that processed meat consumption increased the risk of stomach cancer. A 2012 paper by Bryan et. al. identified \"Helicobacter pylori\" as a potential causative agent that warranted further study.\n\nSection::::Preservatives.\n",
"Section::::Dietary components.:Processed and red meat.\n\nOn October 26, 2015, the International Agency for Research on Cancer of the World Health Organization reported that eating processed meat (e.g., bacon, ham, hot dogs, sausages) or red meat was linked to some cancers.\n\nSection::::Dietary components.:Fiber, fruits and vegetables.\n\nThe evidence on the effect of dietary fiber on the risk of colon cancer is mixed with some types of evidence showing a benefit and others not. While eating fruit and vegetables has a benefit, it has less benefit on reducing cancer than once thought.\n",
"Another possible health concern related to potato chips is acrylamide, which is produced when potatoes are fried or baked at high temperatures. Studies suggest that rodents exposed to high levels of acrylamide develop cancer; however, it is currently unclear whether a similar risk exists in humans. Many potato chip manufacturers attempt to remove burned and thus potentially acrylamide-rich chips before the packaging process. Large scanners are used to eliminate chips worst affected by heat.\n\nSection::::Regional varieties.\n\nSection::::Regional varieties.:Canada.\n",
"processed foods often contain potentially harmful substances such as oxidized fats and trans fatty acids.\n",
"Some specific foods are linked to specific cancers. Studies have linked eating red or processed meat to an increased risk of breast cancer, colon cancer, prostate cancer, and pancreatic cancer, which may be partially explained by the presence of carcinogens in foods cooked at high temperatures. Aflatoxin B1, a frequent food contaminate, causes liver cancer, but drinking coffee is associated with a reduced risk. Betel nut chewing causes oral cancer. Pickled vegetables are directly linked to increased risks of several cancers. The differences in dietary practices may partly explain differences in cancer incidence in different countries. For example, stomach cancer is more common in Japan due to its high-salt diet and colon cancer is more common in the United States. Immigrant communities tend to develop the risk of their new country, often within one generation, suggesting a substantial link between diet and cancer.\n",
"Reports from the Food Standards Agency have found that the known animal carcinogen acrylamide is generated in fried or overheated carbohydrate foods (such as french fries and potato chips). Studies are underway at the FDA and European regulatory agencies to assess its potential risk to humans.\n\nSection::::In cigarettes.\n",
"Some specific foods are linked to specific cancers. A high-salt diet is linked to gastric cancer. Aflatoxin B1, a frequent food contaminant, causes liver cancer. Betel nut chewing can cause oral cancer. National differences in dietary practices may partly explain differences in cancer incidence. For example, gastric cancer is more common in Japan due to its high-salt diet while colon cancer is more common in the United States. Immigrant cancer profiles mirror those of their new country, often within one generation.\n\nSection::::Causes.:Infection.\n",
"Some methods of food preservation are known to create carcinogens. In 2015, the International Agency for Research on Cancer of the World Health Organization classified processed meat, i.e. meat that has undergone salting, curing, fermenting, and smoking, as \"carcinogenic to humans\".\n\nMaintaining or creating nutritional value, texture and flavor is an important aspect of food preservation.\n\nSection::::Traditional techniques.\n\nNew techniques of food preservation became available to the home chef from the dawn of agriculture until the Industrial Revolution.\n\nSection::::Traditional techniques.:Curing.\n",
"There are concerns about a relationship between the consumption of meat, in particular processed and red meat, and increased cancer risk. The International Agency for Research on Cancer (IARC), a specialized agency of the World Health Organization (WHO), classified processed meat (e.g., bacon, ham, hot dogs, sausages) as, \"\"carcinogenic to humans\" (Group 1), based on \"sufficient evidence\" in humans that the consumption of processed meat causes colorectal cancer.\" IARC also classified red meat as \"\"probably carcinogenic to humans\" (Group 2A), based on \"limited evidence\" that the consumption of red meat causes cancer in humans and \"strong\" mechanistic evidence supporting a carcinogenic effect.\"\n",
"BULLET::::- The 2005 Indonesia food scare, where carcinogenic formaldehyde was found to be added as a preservative to noodles, tofu, salted fish, and meatballs.\n\nBULLET::::- In 2008 Chinese milk scandal, melamine was discovered to have been added to milk and infant formula which caused 54,000 babies to be sent to the hospital. Six babies died because of kidney stones related to the contaminant.\n\nSection::::Hair in food.\n",
"On the other hand, temperatures below convert the starch in potatoes into sugar, which alters their taste and cooking qualities and leads to higher acrylamide levels in the cooked product, especially in deep-fried dishes. The discovery of acrylamides in starchy foods in 2002 has led to international health concerns. They are believed to be probable carcinogens and their occurrence in cooked foods is being studied for potentially influencing health problems.\n",
"Higher-energy radiation, including ultraviolet radiation (present in sunlight), x-rays, and gamma radiation, generally \"is\" carcinogenic, if received in sufficient doses. For most people, ultraviolet radiations from sunlight is the most common cause of skin cancer. In Australia, where people with pale skin are often exposed to strong sunlight, melanoma is the most common cancer diagnosed in people aged 15–44 years.\n\nSubstances or foods irradiated with electrons or electromagnetic radiation (such as microwave, X-ray or gamma) are not carcinogenic. In contrast, non-electromagnetic neutron radiation produced inside nuclear reactors can produce secondary radiation through nuclear transmutation.\n\nSection::::In prepared food.\n",
"Processed foods may actually take less energy to digest than whole foods, according to a study published in \"Food & Nutrition Research\" in 2010, meaning you retain more of the calories they contain. Processed foods also tend to be more allergenic than whole foods, according to a June 2004 \"Current Opinion in Allergy and Clinical Immunology\" article. Although the preservatives and other food additives used in many processed foods are generally recognized as safe, a few may cause problems for some individuals, including sulfites, artificial sweeteners, artificial colors and flavors, sodium nitrate, BHA and BHT, olestra, caffeine and monosodium glutamate.\n",
"Nitrosamines, present in processed and cooked foods, have been noted as being carcinogenic, being linked to colon cancer. Also, toxic compounds called PAHs, or polycyclic aromatic hydrocarbons, present in processed, smoked and cooked foods, are known to be carcinogenic.\n\nSection::::Meat in society.\n",
"While many dietary recommendations have been proposed to reduce the risk of cancer, the evidence to support them is not definitive. The primary dietary factors that increase risk are obesity and alcohol consumption; with a diet low in fruits and vegetables and high in red meat being implicated but not confirmed. A 2014 meta-analysis did not find a relationship between fruits and vegetables and cancer. Consumption of coffee is associated with a reduced risk of liver cancer. Studies have linked excessive consumption of red or processed meat to an increased risk of breast cancer, colon cancer, and pancreatic cancer, a phenomenon which could be due to the presence of carcinogens in meats cooked at high temperatures. Dietary recommendations for cancer prevention typically include an emphasis on vegetables, fruit, whole grains, and fish, and an avoidance of processed and red meat (beef, pork, lamb), animal fats, and refined carbohydrates.\n",
"The International Agency for Research on Cancer (IARC) classified processed meat (e.g., bacon, ham, hot dogs, sausages) as, \"\"carcinogenic to humans\" (Group 1), based on \"sufficient evidence\" in humans that the consumption of processed meat causes colorectal cancer.\" IARC also classified red meat as \"\"probably carcinogenic to humans\" (Group 2A), based on \"limited evidence\" that the consumption of red meat causes cancer in humans and \"strong\" mechanistic evidence supporting a carcinogenic effect.\" Subsequent studies have shown that taxing processed meat products could save lives, particularly in the West where meat intensive diets are the norm. If the amount of taxation was linked to the level of harm they caused, some processed meats, such as bacon and sausages, would nearly double in price.\n",
"Section::::Pathways.:Microbial.\n\nMicrobial rancidity refers to a water-dependent process in which microorganisms, such as bacteria or molds, use their enzymes such as lipases to break down fat. By destroying or inhibiting microorganisms, pasteurization (sterilization) and addition of antioxidant ingredients, such as vitamin E, can reduce this process.\n\nSection::::Food safety.\n\nUsing fish oil as an example of a food or dietary supplement susceptible to rancidification over various periods of storage, two reviews found effects only on flavor and odor, with no evidence as of 2015 that rancidity causes harm if a spoiled product is consumed.\n\nSection::::Prevention.\n",
"Long-term rat studies showed that PhIP causes cancer of the colon and mammary gland in rats. Female rats given doses of 0, 12.4, 25, 50, 100 or 200 ppm of PhIP showed a dose-dependent incidence of adenocarcinomas. The offspring of female rats exposed to PhIP while pregnant had a higher prevalence of adenocarcinomas than those whose mothers had not been exposed. This was true even for offspring who were not exposed to PhIP. PhIP was transferred from mothers to offspring in their milk.\n\nSection::::Cancer.:Epidemiological studies.\n",
"Most processed meat contains at least some red meat. To enhance flavor or improve preservation meat is treated by salting, curing, fermentation, smoking, or other processes to create processed meat. Nitrates and nitrites found in processed meat (e.g. bacon, ham, salami, pepperoni, hot dogs, and some sausages) can be converted by the human body into nitrosamines that can be carcinogenic, causing mutation in the colorectal cell line, thereby causing tumorigenesis and eventually leading to cancer. In its Press Release 240 (16 Oct. 2015) the International Agency for Research on Cancer, based on a review of 800 studies over 20 years, concluded that processed meat is definitely carcinogenic (Group 1) and found that for each additional 50g of processed meat consumed per day, the risk of colorectal cancer increased by 18% (up to a maximum of approximately 140g); it also found that there appeared to be an increase in gastric cancer but this was not as clear.\n",
"Foods that have undergone processing, including some commercial baked goods, desserts, margarine, frozen pizza, microwave popcorn and coffee creamers, sometimes contain trans fats. This is the most unhealthy type of fat, and may increase your risk for high cholesterol, heart disease and stroke. The 2010 Dietary Guidelines for Americans recommends keeping your trans fat intake as low as possible.\n\nSection::::Other potential disadvantages.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-04752 | What is the significance that all galaxies complete one revolution every billion years? How does this help us better understand the mechanics of what makes them tick? | If this measurement of 1 billion years turns out to be true and is the same for all galaxies, then it is significant because if we know the speed of the rotation, we can measure a lot of other things about the galaxies - their masses especially. The equations for how this works are beyond the scope of the ELI5 boards, but they are connected. Also, if one billion years per rotation is the speed, it leads us to more questions. !!And this is how science works - solving problems and finding new questions!! - Why is it 1B years? - Do they all spin in the same direction? - Are bigger galaxies necessarily older because of what this new data states? - Does this change when two disc galaxies merge? And does this shine any light on non-disc galaxies? - And the black holes at the centers of many galaxies spin - does this mean the spin gets faster, slower, or doesn't change when stars are swallowed up into the black hole, since the mass of the stars is just added to the mass of the black hole? Also, by knowing the speed and mass we can begin to measure the amount of energy making the galaxy spin, which could lead to identifying more information about the black hole. And once we know about the mass and energy, we can study the effects of the galaxy on its neighboring galaxies or giant dust clouds. And so on. It's science! And it's always so exciting! | [
"Section::::Kinematics.\n\nSection::::Kinematics.:Measurement difficulties and techniques.\n",
"Section::::Kinematics.:Offset Tully–Fisher relation.\n",
"BULLET::::- 1977 — R. Brent Tully and Richard Fisher publish the Tully–Fisher relation between the luminosity of an isolated spiral galaxy and the velocity of the flat part of its rotation curve,\n\nBULLET::::- 1978 — Steve Gregory and Laird Thompson describe the Coma supercluster,\n\nBULLET::::- 1978 — Donald Gudehus finds evidence that clusters of galaxies are moving at several hundred kilometers per second relative to the cosmic microwave background radiation,\n",
"Section::::Morphology and structure.:Sérsic decomposition.\n",
"In June 2019, citizen scientists through Galaxy Zoo reported that the usual Hubble classification, particularly concerning spiral galaxies, may not be supported, and may need updating.\n\nSection::::Rotation of galaxies.\n",
"The Hubble sequence is often represented in the form of a two-pronged fork, with the ellipticals on the left (with the degree of ellipticity increasing from left to right) and the barred and unbarred spirals forming the two parallel prongs of the fork. Lenticular galaxies are placed between the ellipticals and the spirals, at the point where the two prongs meet the “handle”.\n\nTo this day, the Hubble sequence is the most commonly used system for classifying galaxies, both in professional astronomical research and in amateur astronomy.\n",
"BULLET::::2. Galaxy masses from different tracers: how much mass there is in stars, gas, and dark matter, and how it is distributed\n\nBULLET::::3. Galaxy assembly as traced from the kinematic structure: what the motions of stars and gas tell us about the structure of the galaxies\n\nBULLET::::4. Galaxy assembly as traced through the stellar population content: how, when, and where did stars form throughout the history of the galaxies\n",
"BULLET::::- 1943 — Carl Keenan Seyfert identifies six spiral galaxies with unusually broad emission lines, named Seyfert galaxies,\n\nBULLET::::- 1949 — J. G. Bolton, G. J. Stanley, and O. B. Slee identify NGC 4486 (M87) and NGC 5128 as extragalactic radio sources,\n\nSection::::Mid-20th century.\n\nBULLET::::- 1953 — Gérard de Vaucouleurs discovers that the galaxies within approximately 200 million light-years of the Virgo Cluster are confined to a giant supercluster disk,\n\nBULLET::::- 1954 — Walter Baade and Rudolph Minkowski identify the extragalactic optical counterpart of the radio source Cygnus A,\n",
"BULLET::::- Evolved galaxies at z 1.5 from the Gemini deep deep survey: the formation epoch of massive stellar systems; Patrick J. McCarthy, Damien Le Borgne, David Crampton, Hsiao-Wen Chen, Roberto G. Abraham, Karl Glazebrook, Sandra Savaglio, Raymond G. Carlberg, Ronald O. Marzke, Kathy Roth; The Astrophysical Journal; 13 Sept 2004: \n",
"Galaxies have magnetic fields of their own. They are strong enough to be dynamically important: they drive mass inflow into the centers of galaxies, they modify the formation of spiral arms and they can affect the rotation of gas in the outer regions of galaxies. Magnetic fields provide the transport of angular momentum required for the collapse of gas clouds and hence the formation of new stars.\n",
"A 2012 paper that suggests a new classification system, first proposed by the Canadian astronomer Sidney van den Bergh, for lenticular and dwarf spheroidal galaxies (S0a-S0b-S0c-dSph) that parallels the Hubble sequence for spirals and irregulars (Sa-Sb-Sc-Im) reinforces this idea showing how the spiral–irregular sequence is very similar to this new one for lenticulars and dwarf ellipticals.\n\nSection::::Formation theories.:Mergers.\n",
"BULLET::::- 1985 — Robert Antonucci and J. Miller discover that the Seyfert II galaxy NGC 1068 has broad lines which can only be seen in polarized reflected light,\n\nBULLET::::- 1986 — Amos Yahil, David Walker, and Michael Rowan-Robinson find that the direction of the IRAS galaxy density dipole agrees with the direction of the cosmic microwave background temperature dipole,\n",
"BULLET::::- 150 Em – 16,000 light years – Diameter of the Small Magellanic Cloud, a dwarf galaxy orbiting the Milky Way\n\nBULLET::::- 200 Em – 21,500 light years – Distance to OGLE-2005-BLG-390Lb, the most distant and the most Earth-like planet known\n\nBULLET::::- 240 Em – 25,000 light years – Distance to the Canis Major Dwarf Galaxy\n\nBULLET::::- 260 Em – 28,000 light years – Distance to the center of the Galaxy\n\nBULLET::::- 830 Em – 88,000 light years – Distance to the Sagittarius Dwarf Elliptical Galaxy\n\nSection::::1 zettametre.\n",
"The main goal has become to understand the nature and the history of these ubiquitous dark haloes by investigating the properties of the galaxies they contain (i.e. their luminosities, kinematics, sizes, and morphologies). The measurement of the kinematics (their positions, velocities and accelerations) of the observable stars and gas has become a tool to investigate the nature of dark matter, as to its content and distribution relative to that of the various baryonic components of those galaxies.\n\nSection::::Further investigations.\n",
"The oldest spiral galaxy on file is BX442. At eleven billion years old, it is more than two billion years older than any previous discovery. Researchers think the galaxy's shape is caused by the gravitational influence of a companion dwarf galaxy. Computer models based on that assumption indicate that BX442's spiral structure will last about 100 million years.\n\nSection::::Structure.:Related.\n\nIn June 2019, citizen scientists through Galaxy Zoo reported that the usual Hubble classification, particularly concerning spiral galaxies, may not be supported, and may need updating.\n\nSection::::Origin of the spiral structure.\n",
"Spiral galaxies have abundant sources for potential star-formation material, but how long galaxies are able to continuously draw on these resources remains in question. A future generation of observational tools and computational abilities will shed light on some of the technical details of the Milky Way's past and future as well as how HVCs play a role in its evolution.\n\nSection::::Examples of HVCs.\n\nSection::::Examples of HVCs.:Northern Hemisphere.\n",
"Section::::Uses of radio galaxies.:Standard rulers.\n\nSome work has been done attempting to use radio galaxies as standard rulers to determine cosmological parameters. This method is fraught with difficulty because a radio galaxy's size depends on both its age and its environment. When a model of the radio source is used, though, methods based on radio galaxies can give good agreement with other cosmological observations.\n\nSection::::Uses of radio galaxies.:Effects on environment.\n",
"Observations of the rotation curve of spirals, however, do not bear this out. Rather, the curves do not decrease in the expected inverse square root relationship but are \"flat\", i.e. outside of the central bulge the speed is nearly a constant (the solid line in Fig. 1). It is also observed that galaxies with a uniform distribution of luminous matter have a rotation curve that rises from the center to the edge, and most low-surface-brightness galaxies (LSB galaxies) have the same anomalous rotation curve.\n",
"Like the spiral galaxies with high surface brightness companions, most of these spiral galaxies are clearly interacting systems. Tidal tails and bridges are visible in many of the images.\n\nSection::::List of galaxies in the catalog.:Elliptical and elliptical-like galaxies.\n\nSection::::List of galaxies in the catalog.:Elliptical and elliptical-like galaxies.:Elliptical galaxies connected to spiral galaxies.\n\nThese objects are very similar to the spiral galaxies with elliptical companions. All of the galaxies have features such as tidal tails and tidal bridges that have formed through gravitational interaction.\n\nSection::::List of galaxies in the catalog.:Elliptical and elliptical-like galaxies.:Elliptical galaxies repelling spiral arms.\n",
"BULLET::::- 1989 — Margaret Geller and John Huchra discover the \"Great Wall\", a sheet of galaxies more than 500 million light years long and 200 million wide, but only 15 million light years thick,\n\nBULLET::::- 1990 — Michael Rowan-Robinson and Tom Broadhurst discover that the IRAS galaxy IRAS F10214+4724 is the brightest known object in the Universe,\n",
"BULLET::::- 6.0 billion years (7.8 Gya): Many galaxies like NGC 4565 become relatively stable – ellipticals result from collisions of spirals with some like IC 1101 being extremely massive.\n",
"Section::::History.:Motivations to study satellite galaxies.\n\nSpectroscopic, photometric and kinematic observations of satellite galaxies have yielded a wealth of information that has been used to study, among other things, the formation and evolution of galaxies, the environmental effects that enhance and diminish the rate of star formation within galaxies and the distribution of dark matter within the dark matter halo. As a result, satellite galaxies serve as a testing ground for prediction made by cosmological models.\n\nSection::::Classification of satellite galaxies.\n",
"Lenticular and spiral galaxies, taken together, are often referred to as disk galaxies. The bulge-to-disk flux ratio in lenticular galaxies can take on a range of values, just as it does for each of the spiral galaxy morphological types (Sa, Sb, etc.).\n\nExamples of lenticular galaxies: M85, M86, NGC 1316, NGC 2787, NGC 5866, Centaurus A.\n\nSection::::Classes of galaxies.:Spirals.\n",
"BULLET::::- 14 Em – 1,500 light years – Approximate thickness of the plane of the Milky Way galaxy at the Sun's location\n\nBULLET::::- 14.2 Em – 1,520 light years – Diameter of the NGC 604\n\nBULLET::::- 30.8568 Em – 3,261.6 light years – 1 kiloparsec\n\nBULLET::::- 31 Em – 3,200 light years – Distance to Deneb according to \"Hipparcos\"\n\nBULLET::::- 46 Em – 4,900 light years – Distance to OGLE-TR-56, the first extrasolar planet discovered using the transit method\n\nBULLET::::- 47 Em – 5,000 light years – Distance to the Boomerang nebula, coldest place known (1 K)\n",
"BULLET::::- 1987 — David Burstein, Roger Davies, Alan Dressler, Sandra Faber, Donald Lynden-Bell, R. J. Terlevich, and Gary Wegner claim that a large group of galaxies within about 200 million light years of the Milky Way are moving together towards the \"Great Attractor\" in the direction of Hydra and Centaurus,\n\nBULLET::::- 1987 — R. Brent Tully discovers the Pisces–Cetus Supercluster Complex, a structure one billion light years long and 150 million light years wide,\n"
] | [] | [] | [
"normal"
] | [
"All galaxies complete one revolution every billion years."
] | [
"false presupposition",
"normal"
] | [
"It is not clear if all galaxies complete one revolution every billion years."
] |
2018-03790 | Why do games on the phone limit the number of times you can play in a day? | Those sorts of "games" are built on a pay-to-win system. They limit your amount of playtime, but then offer a "pay to keep playing" system. Take for example the My Little Pony mobile game. My niece absolutely loves it, and I spent some time on my mom's iPad helping her play. To earn the more desirable pony characters our options were invest hundreds of hours of time mining Gems that the game hands out very sparingly, or pay up front between $30 and $100 for some of the top-tier ponies. These games are not made by people who love games, who are trying to create a fun player experience. They're designed by a company to maximize profit. | [
"Mobile games are another example of products that use FOMO to retain large numbers of engagement. Mobile games are well known for timed exclusives of one sort or another. “If there's a chance a player might miss a one-time event, it generates FOMO.” As an example of a mobile game that utilizes FOMO tactics, Crab Wars has a timed currency booster that lasts for three hours, so any time where it’s not active is time spent “losing” money. Compounded on top of this mechanic is a bonus for stacking three of these boosts together at any given time. There are also daily rewards for logging in, which get more rewarding for consecutive days logged in.\n",
"The popularity of mobile games has increased in the 2000s, as over US$3 billion worth of games were sold in 2007 internationally, and projected annual growth of over 40%. Ownership of a smartphone alone increases the likelihood that a consumer will play mobile games. Over 90% of smartphone users play a mobile game at least once a week.\n",
"BULLET::::- Mobile browser download - a game file is downloaded directly from a mobile website.\n\nUntil the launch of Apple App Store, in the US, the majority of mobile games were sold by the US wireless carriers, such as AT&T Mobility, Verizon Wireless, Sprint Corporation and T-Mobile US. In Europe, games were distributed equally between carriers, such as Orange and Vodafone, and off-deck, third party stores such as Jamba!, Kalador and Gameloft.\n",
"Section::::Common limits of mobile games.\n\nMobile games tend to be small in scope (in relation to mainstream PC and console games) and many prioritise innovative design and ease of play over visual spectacle. Storage and memory limitations (sometimes dictated at the platform level) place constraints on file size that presently rule out the direct migration of many modern PC and console games to mobile. One major problem for developers and publishers of mobile games is describing a game in such detail that it gives the customer enough information to make a purchasing decision.\n\nSection::::Location-based mobile games.\n",
"Section::::1990s.:Mobile phone gaming.\n\nMobile phones began becoming video gaming platforms when Nokia installed \"Snake\" onto its line of mobile phones in 1997 (Nokia 6110). As the game gained popularity, every major phone brand offered \"time killer games\" that could be played in very short moments such as waiting for a bus. Mobile phone games early on were limited by the modest size of the phone screens that were all monochrome, the very limited amount of memory and processing power on phones, and the drain on the battery.\n\nSection::::2000s.\n",
"Today, mobile games are usually downloaded from an app store as well as from mobile operator's portals, but in some cases are also preloaded in the handheld devices by the OEM or by the mobile operator when purchased, via infrared connection, Bluetooth, or memory card, or side loaded onto the handset with a cable.\n",
"Section::::History.\n\nTowards the end of the 20th century, mobile phone ownership became ubiquitous in the industrialised world - due to the establishment of industry standards, and the rapid fall in cost of handset ownership, and use driven by economies of scale. As a result of this explosion, technological advancement by handset manufacturers became rapid. With these technological advances, mobile phone games also became increasingly sophisticated, taking advantage of exponential improvements in display, processing, storage, interfaces, network bandwidth and operating system functionality.\n",
"Games may limit the number of times per second that updates are sent to a particular client, and/or are sent about particular objects in the game's world. Because of limitations in the amount of bandwidth available, and the CPU time that's taken by network communication, some games prioritize certain critical communication while limiting the frequency and priority of less important information. As with the tickrate, this effectively increases the synchronization latency. Game engines may also reduce the precision of some values sent over the network to help with bandwidth use; this lack of precision may in some instances be noticeable.\n",
"Section::::The N-Gage application.:My Games.\n\nThis screen shows all the games that are currently installed on the phone—be it a Trial version or the full game (purchased or rented). The ones that are trial versions have a pink stripe that says \"TRIAL\" to the far right of the game icon, overlapping what looks somewhat like a battery meter that, once you pay for the game, illustrates your progress with that game. At the bottom of the list of installed games is a quick link (Get More Games) that takes you to the showroom.\n",
"In Europe, downloadable mobile games were introduced by the \"Les Games\" portal from Orange France, run by In-fusio, in 2000. Whereas before mobile games were usually commissioned directly by handset manufacturers, now also mobile operators started to act as distributors of games. As the operators were not keen on handling potentially hundreds of relationships with one- or two-person developers, mobile aggregators and publishers started to act as a middleman between operators and developers that further reduced the revenue share seen by developers.\n",
"Casual games became popular on smartphones immediately upon their debut, with touch-screen phones like the iPhone of 2007 featuring large color displays, all-day availability to the phone owner, and intuitive tapping-and-dragging user interfaces.\n",
"Section::::In-game mobile marketing.\n\nThere are essentially three major trends in mobile gaming right now: interactive real-time 3D games, massive\n\nmulti-player games and social networking games. This means a trend towards more complex and more sophisticated, richer game play. On the other side, there are the so-called casual games, i.e. games that are very simple and very easy to play. Most mobile games today are such casual games and this will probably stay so for quite a while to come.\n",
"Although this edition of the Nokia Game necessitated more skill than logic, it was well-received for not forcing players to play within very strict time limits, but also criticised for its frequent bugs and players who would often disconnect while losing. This was partly rectified by awarding more points to players with a higher number of consecutive plays.\n",
"According to a February 2010 comScore MobiLens study of the U.S. mobile gaming market, smartphone subscribers are much more likely to play mobile casino games than subscribers of generic phones. The study revealed that 7.6% of smartphone subscribers and 1.2% of generic mobile subscribers played mobile casino games within a three-month time frame.\n",
"Games can be downloaded directly to the phone over the air (by GPRS or WiFi), or the user may choose to download it to a computer and then install it on to the phone using a USB-cable and Nokia PC Suite.\n\nSection::::Reception.\n",
"A report by the Council on Science and Public Health to the AMA cited a 2005 Entertainment Software Association survey of computer game players and noted that players of MMORPGs were more likely to play for more than two hours per day than other gamers. In its report, the Council used this two-hour-per-day limit to define \"gaming overuse\", citing the American Academy of Pediatrics guideline of no more than one to two hours per day of \"screen time.\" However, the ESA document cited in the Council report does not contain the two-hour-per-day data.\n",
"Mobile game\n\nA mobile game is a game played on a feature phone, smartphone/tablet, smartwatch, PDA, portable media player or graphing calculator.\n\nThe earliest known game on a mobile phone was a Tetris variant on the Hagenuk MT-2000 device from 1994.\n",
"Mobile games have been developed to run on a wide variety of platforms and technologies. These include the (today largely defunct) Palm OS, Symbian, Adobe Flash Lite, NTT DoCoMo's DoJa, Sun's Java, Qualcomm's BREW, WIPI, BlackBerry, Nook and early incarnations of Windows Mobile. Today, the most widely supported platforms are Apple's iOS and Google's Android. The mobile version of Microsoft's Windows 10 (formerly Windows Phone) is also actively supported, although in terms of market share remains marginal compared to iOS and Android.\n",
"Eccky and users can also exchange text messages via mobile phone. Exchange of text messages requires that the user have a mobile phone, and that the user purchase a mobile phone for Eccky within the game environment. This feature involves extra costs to the user, specifically the cost of sending text messages, and can be turned on and off at the user's discretion.\n",
"In 1997, Nokia launched the very successful \"Snake\". Snake (and its variants), that was preinstalled in most mobile devices manufactured by Nokia, has since become one of the most played games and is found on more than 350 million devices worldwide. A variant of the \"Snake\" game for the Nokia 6110, using the infrared port, was also the first two-player game for mobile phones.\n",
"Video games often have digital currencies. The more time players spend in a game, the more \"wealth\" they acquire. Players use this virtual wealth to \"purchase\" new aspects of the game.\n\nRewards are set on different time schedules in video games. Players may be rewarded for finishing tasks within a certain time frame, or might be given bonuses for playing during a pre-determined period.\n",
"The common repeating events are the \"Pokémon Go\" Fest held annually, the \"Pokémon Go\" Safari Zone which has been held in multiple countries, and the monthly \"Community Day\" events. Player counts for the larger events range from thousands up to two million players in one event. Due to large concentrations of players all using their mobile phones in such events, the large gatherings have resulted in network disruptions, which have resulted in a lawsuit against Niantic.\n\nSection::::Background.\n\nSection::::Background.:\"Pokémon Go\".\n",
"In order to deter inappropriate user behavior, players must register using a valid email address. Games with mature content are flagged as such by users—either the player who added the mature content, or any other user who views the game—and users can opt not to be shown any games with content flagged as mature.\n\nSection::::History.\n",
"BULLET::::- A phone call or other session may be interrupted after a handover, if the new base station is overloaded. Unpredictable handovers make it impossible to give an absolute QoS guarantee during a session initiation phase.\n\nBULLET::::- The pricing structure is often based on per-minute or per-megabyte fee rather than flat rate, and may be different for different content services.\n",
"German psychotherapist and online addiction expert Bert te Wildt recommends using apps such as Offtime and Menthal to help prevent mobile phone overuse. In fact, there are many apps available on Android and iOS stores which help track mobile usage. For example, in iOS 12 Apple added a function called \"Screen Time\" that allows users to see how much time they have spent on the phone. These apps usually work by doing one of two things: increasing awareness by sending user usage summaries, or notifying the user when he/ she has exceeded some user-defined time-limit for each app or app category.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-18571 | Why do health insurance companies have special enrollment periods? | To fight off "adverse selection". In other words, so that you can't buy insurance each time you get sick and then drop it each time you feel better. An alternative to that would be to exclude pre-existing condition coverage, but that is hard to enforce because it is much harder to say you didn't know your house was on fire on a given date than to say you didn't notice you were bleeding from the butt on a given date. Also, there are rules against it. | [
"Under the Patient Protection and Affordable Care Act, annual enrollment, or open enrollment, is the period that people in the United States who need health insurance can sign up for an individual insurance plan. Unless someone experiences a \"qualifying event\" outside of the annual enrollment period, annual enrollment is the only time to sign up for individual health insurance under the Affordable Care Act. Annual enrollment used to last for three months; the 2016 cycle lasted from November 1, 2015 until January 31, 2016. The 2018 annual enrollment cycle was reduced to 45 days (in most states) from November 1, 2017 to December 15, 2017. \n",
"In the United States, annual enrollment (also known as open enrollment or open season) is a period of time, usually but not always occurring once per year, when employees of companies and organizations may make changes to their elected fringe benefit options, such as health insurance. The term also applies to the annual period during which individuals may buy individual health insurance plans through the online, state-based health insurance exchanges established by the Patient Protection and Affordable Care Act. Annual enrollment is also prominent in Medicare, where almost 50 million enrollees can choose to stay in original Medicare, or join or change plans within the Medicare Advantage and Medicare Part D Prescription Drug programs for the coming calendar year. Individuals usually can make changes to, or sign up for, their health insurance or fringe benefits only once per year during the annual enrollment period or when they have experienced a specific qualifying event.\n",
"According to the HealthCare.gov, special Enrollment Period refers to a time outside of the open enrollment period, in which you and your family have a right to sign up for health coverage. In other words, you qualify for a special enrollment period 60 days following certain life events, including but limited to: change in family status such as marriage or birth of a child, loss of other health coverage. \n\nHowever, SHOP Marketplace is not offered to self-employed individuals, with no employees. They can however get health coverage through Health Insurance Marketplace for individuals. \n",
"During this time period, an employer will typically communicate to all eligible employees what options they have for their benefit program. Often the vendors or insurance providers will be present to explain the details of their products. This can be done either with group presentations, \"benefit fairs\" or meetings one on one with each employee. As travel expenses continue to rise many vendors and insurance providers have turned to using independent \"contract enrollers\" to do the communication on their behalf.\n",
"People can enroll in Medicare Advantage and other Part C health plans either by enrolling when they first become Medicare-eligible and first join both Parts A and B or by switching from traditional Medicare during an annual or special enrollment period as outlined in \"Medicare and You, 2019\" (there are over a dozen such enrollment periods). In 2019, a special Medicare Advantage Open Enrollment period extended from January to March during which the over 20,000,000 people on Part C Medicare Advantage could switch or drop plans. This open enrollment period is not to be confused with the annual election period that typically runs in the Fall of a given year and that was intended initially primarily for Part D of Medicare. The \"new\" January–March Open Enrollment period is actually a restoration of the way Part C worked before PPACA. Medicare Advantage Open Enrollment applies only to people on Medicare Advantage.\n",
"Open season is a prominent feature of the Federal Employees Health Benefits Program during which some 3 million Federal civilian employees and retirees may choose among several dozen health insurance plans for the coming year. Open season is scheduled in the fall each year, and plan enrollment decisions take effect in the following calendar year.\n\nSection::::Annual enrollment under the Patient Protection and Affordable Care Act.\n",
"The mandate and the limits on open enrollment were designed to avoid the insurance death spiral in which healthy people delay insuring themselves until they get sick. In such a situation, insurers would have to raise their premiums to cover the relatively sicker and thus more expensive policies, which could create a vicious cycle in which more and more people drop their coverage.\n",
"SHOP enrollment is available any time of the year - there is no \"Open Enrollment\" limitation. Employers who wish to contribute to the premium cost of their employees may qualify to receive a SHOP tax credit. The tax credit is worth up to 50% of employer's contribution toward its employees' premium costs. It will be up to 35% for tax-exempt employers. Unfortunately, employees can't join the plan after the initial enrollment period unless they are new hires and qualify Special Enrollment Period.\n",
"Most insurance companies don't offer pre-existing coverage, but do offer acute onset coverage. There are a lot of exceptions that may apply when it comes to acute onset. For most insurance companies, the insured must seek medical attention within 12 or 24 hour window after initial symptoms manifestation in order to be considered acute onset.\n",
"Group health insurance plans sponsored by employers with 15 or more employees were prohibited by the Pregnancy Discrimination Act of 1978 from excluding maternity coverage for a pre-existing condition of pregnancy; this prohibition was extended to all group health insurance plans by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).\n\nSection::::Practice and effect.\n",
"The private health system in Australia operates on a \"community rating\" basis, whereby premiums do not vary solely because of a person's previous medical history, current state of health, or (generally speaking) their age (but see Lifetime Health Cover below). Balancing this are waiting periods, in particular for pre-existing conditions (usually referred to within the industry as PEA, which stands for \"pre-existing ailment\"). Funds are entitled to impose a waiting period of up to 12 months on benefits for any medical condition the signs and symptoms of which existed during the six months ending on the day the person first took out insurance. They are also entitled to impose a 12-month waiting period for benefits for treatment relating to an obstetric condition, and a 2-month waiting period for all other benefits when a person first takes out private insurance. Funds have the discretion to reduce or remove such waiting periods in individual cases. They are also free not to impose them to begin with, but this would place such a fund at risk of \"adverse selection\", attracting a disproportionate number of members from other funds, or from the pool of intending members who might otherwise have joined other funds. It would also attract people with existing medical conditions, who might not otherwise have taken out insurance at all because of the denial of benefits for 12 months due to the PEA Rule. The benefits paid out for these conditions would create pressure on premiums for all the fund's members, causing some to drop their membership, which would lead to further rises in premiums, and a vicious cycle of higher premiums-leaving members would ensue.\n",
"BULLET::::- 0 months: Hawaii, Maryland, Michigan\n\nBULLET::::- 3 months: Kansas, New Hampshire\n\nBULLET::::- 6 months: 45 other states + DC\n\nBULLET::::- Large group (self-insured) health insurance plans\n\nBULLET::::- Maximum pre-existing condition exclusion period\n\nBULLET::::- 12 months: 50 states + DC\n\nBULLET::::- Maximum look-back period for pre-existing conditions\n\nBULLET::::- 6 months: 50 states + DC\n\nPre-existing condition exclusions were prohibited for HIPAA-eligible individuals (those with 18 months continuous coverage unbroken for no more than 63 days and coming from a group health insurance plan).\n\nIndividual (non-group) health insurance plans could exclude maternity coverage for a pre-existing condition of pregnancy.\n",
"Regulation of pre-existing condition exclusions in individual (non-group) and small group (2 to 50 employees) health insurance plans in the United States was left to individual U.S. states as a result of the McCarran–Ferguson Act of 1945 which delegated insurance regulation to the states and the Employee Retirement Income Security Act of 1974 (ERISA) which exempted self-insured large group health insurance plans from state regulation. After most states had by the early 1990s implemented some limits on pre-existing condition exclusions by small group (2 to 50 employees) health insurance plans, the Health Insurance Portability and Accountability Act (Kassebaum-Kennedy Act) of 1996 (HIPAA) extended some minimal limits on pre-existing condition exclusions for \"all\" group health insurance plans—including the self-insured large group health insurance plans that cover half of those with employer-provided health insurance but are exempt from state insurance regulation.\n",
"BULLET::::- 2 years: Florida, Illinois, West Virginia\n\nBULLET::::- 3 years: Montana, Rhode Island,\n\nBULLET::::- 5 years: Alabama, Arkansas, Delaware, Iowa, Pennsylvania, Texas\n\nBULLET::::- unlimited: Alaska, Arizona, District of Columbia, Georgia, Hawaii, Kansas, Missouri, Nebraska, Oklahoma, South Carolina, Tennessee, Wisconsin\n\nBULLET::::- Small group (2 to 50 employees) health insurance plans\n\nBULLET::::- Maximum pre-existing condition exclusion period\n\nBULLET::::- 0 months: Hawaii, Maryland, Michigan\n\nBULLET::::- 3 months: Kansas\n\nBULLET::::- 6 months: California, Colorado, Massachusetts, New Jersey, New Mexico, Oregon, Rhode Island\n\nBULLET::::- 9 months: Indiana, New Hampshire, Washington\n\nBULLET::::- 12 months: 36 other states + DC\n\nBULLET::::- Maximum look-back period for pre-existing conditions\n",
"Choices among health plans are available to employees during an \"open enrollment\" period, or \"open season,\" after which the employee will be covered fully in any plan he or she chooses without limitations regarding pre-existing conditions. After the annual enrollment, changes can be made only upon a \"qualifying life event\" such as marriage, divorce, adoption or birth of a child, or change in employment status of a spouse, until the next annual open season, during which employees can enroll, disenroll, or change from one plan to another. The exact dates of the open season change from year to year, but are usually from the Monday of the second full week in November through Monday, the second full week of December. Enrollment begins at or near the beginning of the calendar year, and lasts until a different plan choice is made in a subsequent open season or through a qualifying life event. In practice, there is a great deal of inertia in enrollment, and only about 5 percent of employees change plans in most open seasons.\n",
"But it wasn’t until 1990 that AACRAO established the term, “Strategic Enrollment Management”, and started the first annual SEM conference, specifically focused on pressing issues and effective practices in Strategic Enrollment Management. Beginning in 2009, AACRAO developed the first SEM Award of Excellence to recognize outstanding achievement and visionary leadership in Strategic Enrollment Management.\n\nDr. Bob Bontrager, Sr. Director of AACRAO Consulting and SEM Initiatives edited some of the first books on SEM: \n\nBULLET::::- \"SEM and Institutional Success: Integrating Enrollment, Finance and Student Access\" (2008)\n\nBULLET::::- \"Applying SEM at the Community College\" (2009)\n",
"To relieve insurers and brokers of that tedious and time-consuming chore, many states now maintain \"export lists\" of risks that the state insurance commissioner has already identified as having no coverage available whatsoever from any admitted insurer in the state. In turn, brokers presented by clients with those risks can immediately \"export\" them to the out-of-state surplus market and apply directly to surplus line insurers without having to first document multiple attempts to present the risk to admitted insurers. However, many states have refused to establish export lists, including Florida, Illinois, and Texas.\n",
"The health insurance advocacy group America's Health Insurance Plans was willing to accept these constraints on pricing, capping, and enrollment because of the individual mandate: The individual mandate requires that all individuals purchase health insurance. This requirement of the ACA allows insurers to spread the financial risk of newly insured people with pre-existing conditions among a larger pool of individuals.\n",
"The amendment of decreasing screening days to 73 days is in effect today. Mainly two factors contributed to the ratification of amendment. \n",
"This problem of ballooning health insurance premiums has persisted for years, and was important in the debate over the Patient Protection and Affordable Care Act, commonly called the Affordable Care Act (ACA). One set of ACA provisions eliminated the insurance industry’s ability to charge different premiums to people with differing health status and the ability to exclude individuals with pre-existing conditions. Implemented alone, these provisions would greatly increase the risk in the health insurance market, and premiums would increase to cover associated increases in expenditure. Therefore, the ACA includes a number of provisions aimed at incentivizing young invincibles to enter into the health insurance market.\n",
"each full 12-month period that you could have had Part B, but didn't sign up for it. Usually, you don't pay a late enrollment penalty if you meet certain conditions that allow you to sign up for Part B during a special enrollment period.\n\nSection::::Comparison with private insurance.\n",
"Pre-existing condition\n\nIn the context of healthcare in the United States, a pre-existing condition is a medical condition that started before a person's health benefits went into effect. Before 2014 some insurance policies would not cover expenses due to pre-existing conditions. These exclusions by the insurance industry were meant to cope with adverse selection by potential customers. Such exclusions have been prohibited since January 1, 2014, by the Patient Protection and Affordable Care Act.\n\nAccording to the Kaiser Family Foundation, more than a quarter of adults below the age of 65 (approximately 52 million people) had pre-existing conditions in 2016.\n",
"In the individual market, sometimes thought of as the \"residual market\" of insurance, insurers have generally used a process called underwriting to ensure that each individual paid for his or her actuarial value or to deny coverage altogether. The House Committee on Energy and Commerce found that, between 2007 and 2009, the four largest for-profit insurance companies refused insurance to 651,000 people for previous medical conditions, a number that increased significantly each year, with a 49% increase in that time period. The same memorandum said that 212,800 claims had been refused payment due to pre-existing conditions and that insurance firms had business plans to limit money paid based on these pre-existing conditions. These persons who might not have received insurance under previous industry practices are guaranteed insurance coverage under the ACA. Hence, the insurance exchanges will shift a greater amount of financial risk to the insurers, but will help to share the cost of that risk among a larger pool of insured individuals. The ACA's prohibition on denying coverage for pre-existing conditions began on January 1, 2014. Previously, several state and federal programs, including most recently the ACA, provided funds for state-run high-risk pools for those with previously existing conditions. Several states have continued their high-risk pools even after the first marketplace enrollment period.\n",
"BULLET::::- 1996 The Health Insurance Portability and Accountability Act (HIPAA) not only protects health insurance coverage for workers and their families when they change or lose their jobs, it also made health insurance companies cover pre-existing conditions. If such condition had been diagnosed before purchasing insurance, insurance companies are required to cover it after patient has one year of continuous coverage. If such condition was already covered on their current policy, new insurance policies due to changing jobs, etc... have to cover the condition immediately.\n",
"Some cancer insurance plans have provisions that prevent the policyholder from receiving benefits during a period after initial enrollment; this length is frequently thirty days. Some plans stipulate that if a policyholder is diagnosed with cancer in the first thirty days of coverage, their benefits are significantly reduced and coverage will subsequently be terminated.\n\nSection::::Concerns regarding coverage.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-23780 | How and why are mobile games able to advertise with content from an entirely different game? | Most digital ads like that are not served by the game/app/webpage/etc. themselves. They just have a deal with a third-party ad company that loads ads for them; this is how most digital advertising works for stuff like pop-ups, banner ads, and much more. That advertising company will have a near limitless supply of companies wanting to show their ad to the right person. Those other games and stuff are just buying ads, and the ad company is serving it to you (a user of a game) because they think it's relevant to you... considering you're playing a mobile game with ads, advertising another game to you seems like a reasonable bet that you may be interested in that game too. | [
"Section::::Static in-game advertising.\n\nSimilar to product placement in the film industry, static IGAs cannot be changed after they are programmed directly into the game (unless it's completely online). However, unlike product placement in traditional media, IGA allows gamers to interact with the virtual product. For example, has required the use of in-game Sony Ericsson phones to catch terrorists. Unlike static IGAs, dynamic IGAs are not limited to a developer and publisher determined pre-programmed size or location and allow the advertiser to customize the advertisement display.\n",
"This article \"presents a framework to examine mobile entertainment from multiple points of views. This allows future research to be conducted with the clarity of distinguishing mobile entertainment services of different domains.\"\n\nSection::::Review and Redefine.\n",
"One form of in-game mobile advertising is what allows players to actually play. As a new and effective form of advertising, it allows consumers to try out the content before they actually install it. This type of marketing can also really attract the attention of users like casual players. These advertising blur the lines between game and advertising, and provide players with a richer experience that allows them to spend their precious time interacting with advertising.\n",
"This kind of advertisement is not only interesting, but also brings some benefits to marketers. As this kind of in-gaming mobile marketing can create more effective conversion rates because they are interactive and have faster conversion speeds than general advertising. Moreover, games can also offer a stronger lifetime value. They measure the quality of the consumer in advance to provide some more in-depth experience,So this type of advertising can be more effective in improving user stickiness than advertising channels such as stories and video.\n\nSection::::QR codes.\n",
"Brands are now delivering promotional messages within mobile games or sponsoring entire games to drive consumer engagement. This is known as mobile advergaming or ad-funded mobile game.\n\nIn in-game mobile marketing, advertisers pay to have their name or products featured in the mobile games. For instance, racing games can feature real cars made by Ford or Chevy. Advertisers have been both creative and aggressive in their attempts to integrate ads organically in the mobile games.\n",
"Mobile Advertising was one of the main themes at the \"3GSM World Congress\" held in Barcelona, Spain in February 2007. The Mobile Entertainment Summit held there discussed whether Mobile Advertising Can Fund New Content Businesses and Resurrect Advertising.\n",
"Due to the variety of ways in which product placement can be accomplished in any media, and because the category is nascent, this category is not standardized at all, but some examples include branded in-game goods or even in-game quests. For example, in a game where you run a restaurant, you might be asked to collect ingredients to make a Starbucks Frappuccino, and receive in-game rewards for doing so.\n",
"Section::::Common limits of mobile games.\n\nMobile games tend to be small in scope (in relation to mainstream PC and console games) and many prioritise innovative design and ease of play over visual spectacle. Storage and memory limitations (sometimes dictated at the platform level) place constraints on file size that presently rule out the direct migration of many modern PC and console games to mobile. One major problem for developers and publishers of mobile games is describing a game in such detail that it gives the customer enough information to make a purchasing decision.\n\nSection::::Location-based mobile games.\n",
"Although investment in mobile marketing strategies like advergaming is slightly more expensive than what is intended for a mobile app, a good strategy can make the brand derive a substantial revenue. Games that use advergaming make the users remember better the brand involved. This memorization increases virality of the content so that the users tend to recommend them to their friends and acquaintances, and share them via social networks.\n",
"The Mobile Entertainment Forum (MEF) carried out a survey of its members in 2006 where 81% of respondents expected successful advertising models to cut prices or fund the consumption of entertainment content on mobile phones.\n",
"A DMS publishes to mobile devices, offering unique content formatted for those devices, such as the iPhone, iPad and Android phones. Mobile publication often takes the form of a mobile-optimized website theme, with larger navigation and a cleaner user interface. A mobile publication can also include 'apps' for devices that support them, 'push' notifications and SMS texting marketing.\n\nGaming is also a new form of Digital marketing, where creators custom makes games fit for a certain brand. It is used with larger navigation and an interface. It is the key factor to where mobile publication is included within the services.\n",
"1. Content embedded mode For the most part at present, the downloading APP from APP store is free, for APP development enterprise, need a way to flow to liquidate, implantable advertising and APP combines content marketing and game characters to seamlessly integrating user experience, so as to improve advertising hits.\n\nWith these free downloading apps, developers use in-app purchases or subscription to profit. \n",
"2. Advertising model advertisement implantation mode is a common marketing mode in most APP applications. Through Banner ads, consumer announcements, or in-screen advertising, users will jump to the specified page and display the advertising content when users click. This model is more intuitive, and can attract users' attention quickly.\n",
"In Europe, downloadable mobile games were introduced by the \"Les Games\" portal from Orange France, run by In-fusio, in 2000. Whereas before mobile games were usually commissioned directly by handset manufacturers, now also mobile operators started to act as distributors of games. As the operators were not keen on handling potentially hundreds of relationships with one- or two-person developers, mobile aggregators and publishers started to act as a middleman between operators and developers that further reduced the revenue share seen by developers.\n",
"The principal advantage of product placement in in-games advertising is visibility and notoriety. For advertisers an ad may be displayed multiple times and a game may provide an opportunity to ally a product's brand image with the image of the game. Such examples include the use Sobe drink in Tom Clancy’s Splinter Cell: Double Agent.\n",
"A growing venue for advertainment is video games, sometimes called \"advergaming\", where product placement and partnerships may take a more dynamic role, according to researchers. The variables of gaming within ongoing competition may make players more perceptive or active in the face of advertainment. Advergaming examples include billboards advertising for (and product placement of) Bawls energy drink in Fallout: Brotherhood of Steel, and billboards for Adidas sportswear in FIFA International Soccer. Gamers' attitudes about in-game promotions vary greatly from tolerant to highly resistant. \n\nSection::::Production.\n",
"BULLET::::1. Simultaneously produced and consumed. This mean that the end-user is part of the production of content, and that the producer is also present during the transaction.\n\nBULLET::::2. Heterogeneous. This means that there is a (almost) unique instance of the service made for each end-user.\n\nBULLET::::3. Intangible. Although the service is intangible, they are often coupled to physical products.\n\nBULLET::::4. Perishable. The value of the service disappears after consumption by the end-user.\n",
"Collaborative content has become more prominent on video platforms and social media in recent years. Content producers/influencers are usually contacted by companies for their creative input and voice in the makings of a product or provided with a discount code to gain a percentage of the profits after consumers incorporate the code as a part their purchase. Collaborative content may also include a brief or a contract and can vary from client to client- however, there is a degree of flexibility as the finished product is supposedly a representation of the content producer. Notable companies involved in this trade include pixi, colourpop and MAC cosmetics.\n",
"Section::::Applications.:Video games.\n\nSince the popularization of digital distribution platforms, it has become particularly common for games to feature paid content. Said content is often unlockable via online microtransactions. The act of payment may provide access to a previously restricted feature or content in the game, or permission to download additional content.\n\nSection::::Applications.:Content on mobile services.\n\nIn comparison to content on the traditional internet, content for mobile services has never been free.\n\nSection::::Payment models.\n",
"Gamers may feel that IGA is invasive and in some cases have dubbed IGA-supported software as spyware. Some gamers choose to remove advertisements from the game experience, either by paying more for an advertisement-free copy or disabling the advertisements through exploits.\n\nIn-game advertising can also lead to negative reviews for a video game, as occurred in 2013 with Maxis' promotion of a heavily branded Nissan Leaf charging station as downloadable content in SimCity. Maxis claimed \"Plopping down the station will add happiness to nearby buildings. It will not take power, water, or workers away from your city.\"\n\nSection::::Effectiveness.\n",
"Section::::Types.:Video games.\n\nSome video games are tie-in licences for films, television shows or books.\n\nVideo game movie tie-ins are expensive for a game developer to license, and the game designers have to work within constraints imposed by the film studio, under pressure to finish the game in time for the film's release. The aim for the publishers is to increase hype and revenue as the two industries effectively market one another's releases.\n",
"Examples of TTL advertising in games include \"link-chases,\" ARGs, and viral marketing.\n",
"Mobile application development, also known as mobile apps, has become a significant mobile content market since the release of the first iPhone from Apple in 2007. Prior to the release of Apple's phone product, the market for mobile applications (outside of games) had been quite limited. The bundling of the iPhone with an app store, as well as the iPhone's unique design and user interface, helped bring a large surge in mobile application use. It also enabled additional competition from other players. For example, Google's Android platform for mobile content has further increased the amount of app content available to mobile phone subscribers.\n",
"Section::::Multipurpose games.\n\nSince mobile devices have become present in the majority of households at least in the developed countries, there are more and more games created with educational or lifestyle- and health-improvement purposes. For example, mobile games can be used in speech-language pathology, children's rehabilitation in hospitals (Finnish startup Rehaboo!), acquiring new useful or healthy habits (Habitica app), memorising things and learning languages (Memrise).\n\nThere are also apps with similar purposes which are not games per se, in this case they are called gamified apps. Sometimes it is difficult to draw a line between multipurpose games and gamified apps.\n",
"IGA can be integrated into the game either through a display in the background, such as an in-game billboard or a commercial during the pause created when a game loads, or highly integrated within the game so that the advertised product is necessary to complete part of the game or is featured prominently within cutscenes. Due to the custom programming required, dynamic advertising is usually presented in the background; static advertisements can appear as either. One of the advantages of IGA over traditional advertisements is that consumers are less likely to multitask with other media while playing a game, however, some attention is still divided between the gameplay, controls, and the advertisement.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-03474 | Why can't we find the centre of the universe? | There is no center. All distant objects we can see are receding from us. More distant objects are receding more quickly. The same is true wherever you are. If you pick a point, it appears every (distant) object is moving away from that point. This is because space is expanding everywhere. Similarly, if you take a ruler marked 1-2-3-4-5-6-7, pretend you are at 4, and start stretching it, it looks like everything is moving away from you: ---2----3---4---5----6---- So you might think, "Oh, 4 is the center of the universe." But what does the guy at 7 see? This: -4-5-6-7-8-9-10- becoming: ---5----6---7---8----9---- There is no central location. It's not as though objects are moving "outwards" from an explosion; distances themselves are growing with time. | [
"With the growing recognition in the late 20th century of the presence of dark matter in the universe, ordinary baryonic matter has come to be seen as something of a cosmic afterthought. As John D. Barrow put it, “This would be the final Copernican twist in our status in the material universe. Not only are we not at the center of the universe: we are not even made of the predominant form of matter”.\n",
"The Copernican principle, named after Nicolaus Copernicus, states that the Earth is not in a central, specially favored position. Hermann Bondi named the principle after Copernicus in the mid-20th century, although the principle itself dates back to the 16th-17th century paradigm shift away from the geocentric Ptolemaic system.\n",
"Professor Brian Cox explores our origins, place and destiny in the universe. He describes the initial conditions of the human psyche as one that places itself at the center of the universe, surrounded by family, environment, and events. Brian tells the story of how our innate human curiosity has led us from feeling that we are at the center of everything, to our modern understanding of our true place in space and time – that we are living 13.8 billion years from the beginning of the universe, on a mere speck of rock in a possibly infinite expanse of space.\n",
"Center of the universe\n\nThe center of the universe may refer to:\n\nSection::::Astronomy.\n\nBULLET::::- Geocentric model, the astronomical model which places Earth at the orbital center of all celestial bodies\n\nBULLET::::- Heliocentrism, the astronomical model in which the Sun is at the orbital center of the Solar System\n\nBULLET::::- History of the center of the Universe, a discussion of the historical view that the Universe has a center\n\nSection::::Mythology and religion.\n\nBULLET::::- Axis mundi, the mythological concept of a world center\n\nBULLET::::- Modern geocentrism, the belief that Earth is the center of the universe as described by classical geocentric models\n",
"List of places referred to as the Center of the Universe\n\nSeveral cities have been given the nickname \"Center (or Centre) of the Universe\". In addition, several fictional works have described a depicted location as being at the center of the universe.\n\nModern models of the Universe suggest it does not have a center, unlike previous systems which placed Earth (geocentrism) or the Sun (heliocentrism) at the center of the Universe.\n\nSection::::Nicknames of places.\n\nSection::::Nicknames of places.:Asia.\n\nBULLET::::- Wudaokou, Beijing (a nickname)\n\nSection::::Nicknames of places.:Europe.\n",
"The 19th century astronomer Johann Heinrich von Mädler proposed the Central Sun Hypothesis, according to which the stars of the universe revolved around a point in the Pleiades.\n\nSection::::The nonexistence of a center of the Universe.\n",
"\"Center\" is well-defined in a Flat Earth model. A flat Earth would have a definite geographic center. There would also be a unique point at the exact center of a spherical firmament (or a firmament that was a half-sphere).\n\nSection::::Earth as the center of the Universe.\n",
"BULLET::::- Space Flight Operations Facility, the operations control center of the Deep Space Network\n\nBULLET::::- The former interpretive centre of the Dominion Astrophysical Observatory in Saanich, British Columbia, Canada was once called the Centre of the Universe.\n\nSection::::Fiction.\n\nDepictions of a \"center of the universe\" in fiction include:\n\nBULLET::::- Azathoth, \"The Blind Idiot God\", in H.P. Lovecraft's Cthulhu Mythos\n\nBULLET::::- Eternia, the planet that is home to the Masters of the Universe\n\nBULLET::::- Oa, a planet at the center of the DC Comics Universe\n\nBULLET::::- Terminus, in the Doctor Who serial \"Terminus\"\n",
"BULLET::::- Perpignan (France) : Salvador Dali considered its train station as the center of the Universe\n\nBULLET::::- Wolverhampton, UK : Sir Terry Wogan referred to Wolverhampton UK as the centre of the Universe because there, the bathwater goes straight down the plughole\n\nBULLET::::- Hammersmith, UK : local historian Keith Whitehouse claimed it as the centre of the universe due to its history of radical politics and invention\n\nBULLET::::- Kirmington, UK : Home of Guy Martin referred to as ‘Center of T’ Universe’\n\nSection::::Nicknames of places.:North America.\n",
"BULLET::::- Ashland, Virginia (the actual, cosmological center of the Universe, as declared by former Mayor Dick Gillis)\n\nBULLET::::- John B. Lindale House in Magnolia, Delaware displays a sign proclaiming \"This is Magnolia, the Center of the Universe around which the Earth revolves\"\n\nBULLET::::- Alumni Hall (University of Notre Dame), South Bend, Indiana\n\nBULLET::::- The center of the Great Dome on the campus of the Massachusetts Institute of Technology\n",
"History of the center of the Universe\n\nThe center of the Universe is a concept that lacks a coherent definition in modern astronomy; according to standard cosmological theories on the shape of the universe, it has no center.\n",
"Historically, different people have suggested various locations as the center of the Universe. Many mythological cosmologies included an \"axis mundi\", the central axis of a flat Earth that connects the Earth, heavens, and other realms together. In the 4th century BCE Greece, philosophers developed the geocentric model, based on astronomical observation; this model proposed that the center of the Universe lies at the center of a spherical, stationary Earth, around which the Sun, Moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the Sun was believed to be the center of the Universe, with the planets (including Earth) and stars orbiting it.\n",
"BULLET::::- San Dimas, California, in \"Bill & Ted's Excellent Adventure\"\n\nBULLET::::- Nibbler's home planet Eternium, in \"Futurama\"\n\nBULLET::::- Anyplace other than \"The Restaurant at the End of the Universe\" in \"The Hitchhiker's Guide to the Galaxy\" series\n\nBULLET::::- In the game \"Super Mario Galaxy\" for the Wii, Mario travels to the final area, named the Center of the Universe\n\nBULLET::::- In the game \"\" for the PS3, The Great Clock was said to be constructed at the exact center of the universe (give or take fifty feet)\n",
"Newton made clear his heliocentric view of the Solar System – developed in a somewhat modern way, because already in the mid-1680s he recognised the \"deviation of the Sun\" from the centre of gravity of the solar system. For Newton, it was not precisely the centre of the Sun or any other body that could be considered at rest, but rather \"the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World\", and this centre of gravity \"either is at rest or moves uniformly forward in a right line\" (Newton adopted the \"at rest\" alternative in view of common consent that the centre, wherever it was, was at rest).\n",
"Section::::Milky Way's galactic center as center of the Universe.\n\nBefore the 1920s, it was generally believed that there were no galaxies other than our own (see for example The Great Debate). Thus, to astronomers of previous centuries, there was no distinction between a hypothetical center of the galaxy and a hypothetical center of the universe.\n",
"In the early-20th century, the discovery of other galaxies and the development of the Big Bang theory led to the development of cosmological models of a homogeneous, isotropic Universe, which lacks a central point and is expanding at all points.\n\nSection::::Outside astronomy.\n\nIn religion or mythology, the \"axis mundi\" (also cosmic axis, world axis, world pillar, columna cerului, center of the world) is a point described as the center of the world, the connection between it and Heaven, or both.\n",
"BULLET::::- In Albuquerque, New Mexico, a large sculpted-hallway structure with short corridors aligned to north-south, east-west, and up-down, at the main campus of the University of New Mexico is known as \"The Center of the Universe\"\n\nBULLET::::- A concrete circle at the apex of a rebuilt span of the old Boston Avenue viaduct, between 1st and Archer Streets, in Tulsa, Oklahoma is known as \"The Center of the Universe\". The spot produces an acoustical anomaly and it is for which the Center of the Universe Festival and Ms. Center of the Universe Pageant are named\n",
"Since there is believed to be no \"center\" or \"edge\" of the Universe, there is no particular reference point with which to plot the overall location of the Earth in the universe. Because the observable universe is defined as that region of the Universe visible to terrestrial observers, Earth is, because of the constancy of the speed of light, the center of Earth's observable universe. Reference can be made to the Earth's position with respect to specific structures, which exist at various scales. It is still undetermined whether the Universe is infinite. There have been numerous hypotheses that the known universe may be only one such example within a higher multiverse; however, no direct evidence of any sort of multiverse has been observed, and some have argued that the hypothesis is not falsifiable. \n",
"The earliest scientific models of the Universe were developed by ancient Greek and Indian philosophers and were geocentric, placing Earth at the center of the Universe. Over the centuries, more precise astronomical observations led Nicolaus Copernicus to develop the heliocentric model with the Sun at the center of the Solar System. In developing the law of universal gravitation, Isaac Newton built upon Copernicus' work as well as observations by Tycho Brahe and Johannes Kepler's laws of planetary motion.\n",
"Johannes Kepler published his first two laws about planetary motion in 1609, having found them by analyzing the astronomical observations of Tycho Brahe. Kepler's third law was published in 1619. The first law was \"The orbit of every planet is an ellipse with the Sun at one of the two foci.\"\n",
"BULLET::::- Fremont, Seattle A suburb of Seattle is the official Center of the Universe - Sign at the Center of the Universe \n\nBULLET::::- New York City (a nickname)\n\nBULLET::::- Manhattan is often referred to as the Center of the Universe.\n\nBULLET::::- Times Square (a nickname)\n\nBULLET::::- Palm Court, New College of Florida in Sarasota, Florida was enshrined the Center of the Universe in 1965.\n\nBULLET::::- Toronto, a term used derisively by residents of the rest of Canada in reference to the city; see also nicknames for Toronto\n",
"You, King Gelon, are aware the Universe is the name given by most astronomers to the sphere the center of which is the center of the Earth, while its radius is equal to the straight line between the center of the Sun and the center of the Earth. This is the common account as you have heard from astronomers. But Aristarchus has brought out a book consisting of certain hypotheses, wherein it appears, as a consequence of the assumptions made, that the Universe is many times greater than the Universe just mentioned. His hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface\n",
"BULLET::::- Space and time in the Mesoamerican religion\n\nSection::::Media.\n\nBULLET::::- \"Center of the Universe\" (TV series), an American sitcom\n\nBULLET::::- \"Center of the Universe\", a song by Built to Spill from their album \"Keep It Like a Secret\"\n\nBULLET::::- \"Center of the Universe\", an album by Admiral Twin\n\nBULLET::::- \"Centre of the Universe\", a song from the album \"Epica\", by Kamelot\n\nBULLET::::- \"Centre of the Universe\", a debut single of Arthur Koldomasov\n\nBULLET::::- \"Center of the Universe\" (song), a song by Swedish house producer Axwell\n",
"Section::::The nonexistence of a center of the Universe.:Expanding Universe.\n\nHubble also demonstrated that the redshift of other galaxies is approximately proportional to their distance from the Earth (Hubble's law). This raised the appearance of our galaxy being in the center of an expanding Universe, however, Hubble rejected the findings philosophically:\n",
"BULLET::::- A site near Kamloops, British Columbia, Canada has been referred to as a spiritual \"Centre of the Universe\"\n\nBULLET::::- Teotihuacan, in modern-day Mexico was considered the center of the universe by many Mesoamerican tribes, including the Aztecs, and was a model city for the later indigenous civilizations. It was called the \"birthplace of the gods\" and heavily influenced the region despite being abandoned for centuries\n\nBULLET::::- Bon Aqua, Tennessee Hickman county referred to as the center of the universe by country singer Johnny Cash\n\nSection::::Nicknames of places.:Astronomy.\n"
] | [
"There is an absolute center of the universe to find. ",
"The center of the universe exists. "
] | [
"The center of the universe is always relative to your point of reference. Essentially you are always at the center of the universe.",
"There isn't actually a center of the universe, although it may appear to be due to the universe constantly expanding, however there is no central location."
] | [
"false presupposition"
] | [
"There is an absolute center of the universe to find. ",
"The center of the universe exists. "
] | [
"false presupposition",
"false presupposition"
] | [
"The center of the universe is always relative to your point of reference. Essentially you are always at the center of the universe.",
"There isn't actually a center of the universe, although it may appear to be due to the universe constantly expanding, however there is no central location."
] |
2018-17717 | When exposed to a heat source such as a fire or hot oven, why do some things get harder while some things get softer or melt? | Well, the things that get harder are usually soft because of the water inside of them; when that water is boiled off, you get just the hard stuff, whereas other stuff just melts. | [
"BULLET::::- Break Deformations – deformations that lead to the breaking of bumps and the creation of new contact areas.\n\nThe energy that is dissipated during the phenomenon is transformed into heat, thus increasing the temperature of the surfaces in contact. The increase in temperature also depends on the relative speed and the roughness of the material, it can be so high as to even lead to the fusion of the materials involved.\n",
"May be cut smoothly with a knife. Relatively few minerals are sectile. Sectility is a form of tenacity and can be used to distinguish minerals of similar appearance. Gold, for example, is sectile but pyrite (\"fool's gold\") is not.\n\nElasticity:br\n\nIf bent, will spring back to its original position when the stress is released.\n\nPlasticity:br\n\nIf bent, will not spring back to its original position when the stress is released. It stays bent. In contrast, flexibility is the ability of a material to deform elastically and return to its original shape when the applied stress is removed.\n",
"When two macroscopically smooth surfaces come into contact, initially they only touch at a few of these asperity points. These cover only a very small portion of the surface area. Friction and wear originate at these points, and thus understanding their behavior becomes important when studying materials in contact. When the surfaces are subjected to a compressive load, the asperities deform through elastic and plastic modes, increasing the contact area between the two surfaces until the contact area is sufficient to support the load.\n",
"BULLET::::- Additional heat provides additional energy to allow more vigorous convection, allows resorption of existing mineral phases back into the melt, and can cause a higher-temperature form of a mineral or other higher-temperature minerals to begin precipitating\n",
"Toughness in meat is derived from several proteins, such as actin, myosin and collagen, that combined form the structure of the muscle tissue. Heating these proteins causes them to denature, or break down into other substances, which in turn changes the structure and texture of meat, usually reducing its toughness and making it more tender. This typically takes place between over an extended period of time.\n\nSection::::Theory.:Flavour.\n",
"Hot hardness\n\nIn materials engineering and metallurgy, hot hardness or red hardness (when a metal glows a dull red from the heat) corresponds to hardness of a material at high temperatures. As the temperature of material increases, hardness decreases and at some point a drastic change in hardness occurs. The hardness at this point is termed the \"hot\" or \"red\" hardness of that material. Such changes can be seen in materials such as heat treated alloys.\n",
"Hardness of a material to deformation is dependent on its microdurability or small-scale shear modulus in any direction, not to any rigidity or stiffness properties such as its bulk modulus or Young's modulus. Stiffness is often confused for hardness. Some materials are stiffer than diamond (e.g. osmium) but are not harder, and are prone to spalling and flaking in squamose or acicular habits.\n\nSection::::Physics.:Mechanisms and theory.\n",
"Unlike with differential hardening, in differential tempering there is no distinct boundary between the harder and softer metals, but the change from hard to soft is very gradual, forming a continuum, or \"grade\" (gradient), of hardness. However, higher heating temperatures cause the colors to spread less, creating a much steeper grade, while lower temperatures can make the change more gradual, using a smaller portion of the entire continuum. The tempering colors only represent a fraction of the entire grade, because the metal turns grey above , making it difficult to judge the temperature, but the hardness will continue to decrease as the temperature rises.\n",
"The action of a hardened ball against a softer, flat plate illustrates the process of burnishing. If the ball is pressed directly into the plate, stresses develop in both objects around the area where they contact. As this normal force increases, both the ball and the plate's surfaces deform.\n",
"When food is cooked, some of its proteins become denatured. This is why boiled eggs become hard and cooked meat becomes firm.\n",
"BULLET::::- In work hardening (also referred to as strain hardening) the material is strained past its yield point, e.g. by cold working. Ductile metal becomes harder and stronger as it's physically deformed. The plastic straining generates new dislocations. As the dislocation density increases, further dislocation movement becomes more difficult since they hinder each other, which means the material hardness increases.\n",
"Surface chemistry of cooking\n\nIn cooking several factors, including materials, techniques, and temperature, can influence the surface chemistry of the chemical reactions and interactions that create food. All of these factors depend on the chemical properties of the surfaces of the materials used. The material properties of cookware, such as hydrophobicity, surface roughness, and conductivity can impact the taste of a dish dramatically. The technique of food preparation alters food in fundamentally different ways, which produce unique textures and flavors. The temperature of food preparation must be considered when choosing the correct ingredients.\n\nSection::::Materials in cooking.\n",
"The active deformation mechanism in a material depends on the homologous temperature, confining pressure, strain rate, stress, grain size, presence or absence of a pore fluid and its composition, presence or absence of impurities in the material, mineralogy, and presence or absence of a lattice-preferred orientation. Note these variables are not fully independent e.g. for a pure material of a fixed grain size, at a given pressure, temperature and stress, the strain-rate is given by the flow-law associated with the particular mechanism(s). More than one mechanism may be active under a given set of conditions and some mechanisms cannot operate independently but must act in conjunction with another in order that significant permanent strain can develop. In a single deformation episode, the dominant mechanism may change with time e.g. recrystallization to a fine grain size at an early stage may allow diffusive mass transfer processes to become dominant.\n",
"Section::::Toughening in Ceramics.:Transformation Toughening.\n",
"There are five hardening processes: Hall-Petch strengthening, work hardening, solid solution strengthening, precipitation hardening, and martensitic transformation.\n\nSection::::Physics.\n\nIn solid mechanics, solids generally have three responses to force, depending on the amount of force and the type of material:\n\nBULLET::::- They exhibit elasticity—the ability to temporarily change shape, but return to the original shape when the pressure is removed. \"Hardness\" in the elastic range—a small temporary change in shape for a given force—is known as stiffness in the case of a given object, or a high elastic modulus in the case of a material.\n",
"Creep resistance can be influenced by many factors such as diffusivity, precipitate and grain size.\n",
"Formally, fragility reflects the degree to which the temperature dependence of the viscosity (or relaxation time) deviates from Arrhenius behavior. This classification was originally proposed by Austen Angell. The most common definition of fragility is the \"kinetic fragility index\" \"m\", which characterizes the slope of the viscosity (or relaxation time) of a material with temperature as it approaches the glass transition temperature from above:\n\nformula_1\n",
"Parts that are subject to high pressures and sharp impacts are still commonly case-hardened. Examples include firing pins and rifle bolt faces, or engine camshafts. In these cases, the surfaces requiring the hardness may be hardened selectively, leaving the bulk of the part in its original tough state.\n",
"Factors to consider in surface grinding are the material of the grinding wheel and the material of the piece being worked on.\n\nTypical workpiece materials include cast iron and mild steel. These two materials don't tend to clog the grinding wheel while being processed. Other materials are aluminum, stainless steel, brass and some plastics. When grinding at high temperatures, the material tends to become weakened and is more inclined to corrode. This can also result in a loss of magnetism in materials where this is applicable.\n",
"Burnishing (metal)\n\nBurnishing is the plastic deformation of a surface due to sliding contact with another object. It smooths the surface and makes it shinier. Burnishing may occur on any sliding surface if the contact stress locally exceeds the yield strength of the material. The phenomenon can occur both unintentionally as a failure mode, and intentionally as part of a manufacturing process. It is a squeezing operation under cold working.\n\nSection::::Mechanics.\n",
"Section::::Physical properties.:Hardness.\n\nThe hardness of a mineral defines how much it can resist scratching. This physical property is controlled by the chemical composition and crystalline structure of a mineral. A mineral's hardness is not necessarily constant for all sides, which is a function of its structure; crystallographic weakness renders some directions softer than others. An example of this property exists in kyanite, which has a Mohs hardness of 5½ parallel to [001] but 7 parallel to [100].\n",
"Creep (deformation)\n\nIn materials science, creep (sometimes called cold flow) is the tendency of a solid material to move slowly or deform permanently under the influence of persistent mechanical stresses. It can occur as a result of long-term exposure to high levels of stress that are still below the yield strength of the material. Creep is more severe in materials that are subjected to heat for long periods and generally increases as they near their melting point.\n",
"Thermal stress\n\nThermal stress is stress created by any change in temperature to a material. These stresses can lead to fracture or plastic deformation depending on the other variables of heating, which include material types and constraints. Temperature gradients, thermal expansion or contraction and thermal shocks are things that can lead to thermal stress. This type of stress is highly dependent on the thermal expansion coefficient which varies from material to material. In general the larger the temperature change, the higher the level of stress that can occur.\n\nSection::::Temperature gradients.\n",
"Hardening (metallurgy)\n\nHardening is a metallurgical metalworking process used to increase the hardness of a metal. The hardness of a metal is directly proportional to the uniaxial yield stress at the location of the imposed strain. A harder metal will have a higher resistance to plastic deformation than a less hard metal.\n\nSection::::Processes.\n\nThe five hardening processes are:\n",
"Section::::Importance of cooking temperature on interfaces.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-02119 | Why is silicon so vital for electronic devices | Basically, semiconductors (meaning sometimes they conduct electricity and other times they don't - silicon can be manipulated in a process called doping) are what are needed for electronics. Silicon is the second most abundant element in the Earth's crust (after oxygen), making it super affordable. Edit: spelling. | [
"Elemental silicon also has a large impact on the modern world economy. Most free silicon is used in the steel refining, aluminium-casting, and fine chemical industries (often to make fumed silica). Even more visibly, the relatively small portion of very highly purified elemental silicon used in semiconductor electronics (< 10%) is essential to integrated circuits – most computers, cell phones, and modern technology depend on it.\n",
"Elemental silicon also has a large impact on the modern world economy. Although most free silicon is used in the steel refining, aluminum-casting, and fine chemical industries (often to make fumed silica), the relatively small portion of very highly purified silicon that is used in semiconductor electronics (< 10%) is perhaps even more critical. Because of wide use of silicon in integrated circuits, the basis of most computers, a great deal of modern technology depends on it.\n\nSection::::Elements.:Phosphorus.\n",
"By far, silicon (Si) is the most widely used material in semiconductor devices. Its combination of low raw material cost, relatively simple processing, and a useful temperature range makes it currently the best compromise among the various competing materials. Silicon used in semiconductor device manufacturing is currently fabricated into boules that are large enough in diameter to allow the production of 300 mm (12 in.) wafers.\n",
"In common [[integrated circuit]]s, a wafer of monocrystalline silicon serves as a mechanical support for the circuits, which are created by doping and insulated from each other by thin layers of [[silicon dioxide|silicon oxide]], an insulator that is easily produced on Si surfaces by processes of [[thermal oxidation]] or [[LOCOS|local oxidation (LOCOS)]], which involve exposing the element to oxygen under the proper conditions that can be predicted by the [[Deal–Grove model]]. Silicon has become the most popular material for both high power semiconductors and integrated circuits because it can withstand the highest temperatures and greatest electrical activity without suffering [[avalanche breakdown]] (an [[electron avalanche]] is created when heat produces free electrons and holes, which in turn pass more current, which produces more heat). In addition, the insulating oxide of silicon is not soluble in water, which gives it an advantage over [[germanium]] (an element with similar properties which can also be used in semiconductor devices) in certain fabrication techniques.\n",
"Electronic components are sometimes encased in silicone to increase stability against mechanical and electrical shock, radiation and vibration, a process called \"potting\".\n\nSilicones are used where durability and high performance are demanded of components under hard conditions, such as in space (satellite technology). They are selected over polyurethane or epoxy encapsulation when a wide operating temperature range is required (−65 to 315 °C). Silicones also have the advantage of little exothermic heat rise during cure, low toxicity, good electrical properties and high purity.\n",
"There is some evidence that silicon is important to human health for their nail, hair, bone, and skin tissues, for example, in studies that demonstrate that premenopausal women with higher dietary silicon intake have higher [[bone density]], and that silicon supplementation can increase bone volume and density in patients with [[osteoporosis]]. Silicon is needed for synthesis of [[elastin]] and [[collagen]], of which the [[aorta]] contains the greatest quantity in the human body, and has been considered an [[mineral (nutrient)|essential element]]; nevertheless, it is difficult to prove its essentiality, because silicon is very common, and hence, deficiency symptoms are difficult to reproduce.\n",
"[[Monocrystalline silicon]] of such purity is usually produced by the [[Czochralski process]], is used to produce [[Wafer (electronics)|silicon wafers]] used in the [[semiconductor industry]], in electronics, and in some high-cost and high-efficiency [[photovoltaic]] applications. Pure silicon is an [[intrinsic semiconductor]], which means that unlike metals, it conducts [[electron hole]]s and electrons released from atoms by heat; silicon's [[electrical conductivity]] increases with higher temperatures. Pure silicon has too low a conductivity (i.e., too high a [[resistivity]]) to be used as a circuit element in electronics. In practice, pure silicon is [[doping (semiconductors)|doped]] with small concentrations of certain other elements, which greatly increase its conductivity and adjust its electrical response by controlling the number and charge ([[electron hole|positive]] or [[electron|negative]]) of activated carriers. Such control is necessary for [[transistor]]s, [[solar cell]]s, [[semiconductor detector]]s, and other [[semiconductor device]]s used in the computer industry and other technical applications. In [[silicon photonics]], silicon may be used as a continuous wave [[Raman laser]] medium to produce coherent light.\n",
"Section::::Safety and environmental considerations.\n",
"Section::::Applications.\n\nSection::::Applications.:Compounds.\n",
"Silicon is currently under consideration for elevation to the status of a \"plant beneficial substance by the Association of American Plant Food Control Officials (AAPFCO).\"\n\nSection::::Safety.\n",
"Section::::Applications.:Thin-film-transistor liquid-crystal display.\n\nAmorphous silicon has become the material of choice for the active layer in thin-film transistors (TFTs), which are most widely used in large-area electronics applications, mainly for liquid-crystal displays (LCDs).\n",
"BULLET::::- EPBT improvements\n",
"Section::::Polycrystalline silicon components.\n\nAt the component level, polysilicon has long been used as the conducting gate material in MOSFET and CMOS processing technologies. For these technologies it is deposited using low-pressure chemical-vapour deposition (LPCVD) reactors at high temperatures and is usually heavily doped n-type or p-type.\n",
"Section::::Biological role.\n\n[[File:20110123 185042 Diatom.jpg|upright|thumb|A diatom, enclosed in a silica cell wall]]\n\nAlthough silicon is readily available in the form of [[silicate]]s, very few organisms use it directly. [[Diatom]]s, [[radiolaria]], and [[siliceous sponge]]s use [[biogenic silica]] as a structural material for their skeletons. In more advanced plants, the silica [[phytolith]]s (opal phytoliths) are rigid microscopic bodies occurring in the cell; some plants, for example [[rice]], need silicon for their growth. Silicon has been shown to improve plant cell wall strength and structural integrity in some plants.\n\nSection::::Biological role.:Human nutrition.\n",
"Germanium (Ge) was a widely used early semiconductor material but its thermal sensitivity makes it less useful than silicon. Today, germanium is often alloyed with silicon for use in very-high-speed SiGe devices; IBM is a major producer of such devices.\n\nGallium arsenide (GaAs) is also widely used in high-speed devices but so far, it has been difficult to form large-diameter boules of this material, limiting the wafer diameter to sizes significantly smaller than silicon wafers thus making mass production of GaAs devices significantly more expensive than silicon.\n\nOther less common materials are also in use or under investigation.\n",
"The first MOSFET (metal–oxide–semiconductor field-effect transistor) using SiSn as a channel material was shown in 2013.\n\nThis study proved that SiSn can be used as semiconductor for MOSFET fabrication, and that there may be certain applications where the use of SiSn instead of silicon may be more advantageous. In particular, the off current of SiSn transistors is much lower than that of silicon transistors. Thus, logic circuits based on SiSn MOSFETs consume lower static power compared to silicon-based circuits. This is advantageous in battery operated devices (LSTP devices), where the standby power has to be reduced for longer battery life.\n",
"Silicon is the material used to create most integrated circuits used in consumer electronics in the modern industry. The economies of scale, ready availability of inexpensive high-quality materials, and ability to incorporate electronic functionality make silicon attractive for a wide variety of MEMS applications. Silicon also has significant advantages engendered through its material properties. In single crystal form, silicon is an almost perfect Hookean material, meaning that when it is flexed there is virtually no hysteresis and hence almost no energy dissipation. As well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking. Semiconductor nanostructures based on silicon are gaining increasing importance in the field of microelectronics and MEMS in particular. Silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems.\n",
"In general, the main issue with applications of silicon nitride has not been technical performance, but cost. As the cost has come down, the number of production applications is accelerating.\n\nSection::::Applications.:Automobile industry.\n",
"Silicones are used in many products. Ullmann's Encyclopedia of Industrial Chemistry lists the following major categories of application: Electrical (e.g., insulation), electronics (e.g., coatings), household (e.g., sealants and cooking utensils), automobile (e.g., gaskets), aeroplane (e.g., seals), office machines (e.g., keyboard pads), medicine and dentistry (e.g., tooth impression molds), textiles and paper (e.g., coatings). For these applications, an estimated 400,000 tonnes of silicones were produced in 1991. Specific examples, both large and small are presented below.\n\nSection::::Uses.:Automotive.\n",
"Section::::Materials for MEMS manufacturing.:Polymers.\n\nEven though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. Polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. MEMS devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges.\n\nSection::::Materials for MEMS manufacturing.:Metals.\n",
"Section::::Major enabling factors.:Alternative materials research.\n\nThe vast majority of current transistors on ICs are composed principally of doped silicon and its alloys. As silicon is fabricated into single nanometer transistors, short-channel effects adversely change desired material properties of silicon as a functional transistor. Below are several non-silicon substitutes in the fabrication of small nanometer transistors.\n",
"It consists of silicon in which the crystal lattice of the entire solid is continuous, unbroken to its edges, and free of any grain boundaries. Mono-Si can be prepared as an intrinsic semiconductor that consists only of exceedingly pure silicon, or it can be doped by the addition of other elements such as boron or phosphorus to make p-type or n-type silicon. Due to its semiconducting properties, single-crystal silicon is perhaps the most important technological material of the last few decades—the \"silicon era\", because its availability at an affordable cost has been essential for the development of the electronic devices on which the present-day electronics and IT revolution is based.\n",
"The use of silicones in electronics is not without problems, however. Silicones are relatively expensive and can be attacked by solvents. Silicone easily migrates as either a liquid or vapor onto other components.\n\nSilicone contamination of electrical switch contacts can lead to failures by causing an increase in contact resistance, often late in the life of the contact, well after any testing is completed. Use of silicone-based spray products in electronic devices during maintenance or repairs can cause later failures.\n\nSection::::Uses.:Firestops.\n",
"Further, a number of design improvements, such as, the use of new emitters, bifacial configuration, interdigitated back contact (IBC) configuration bifacial-tandem configuration are actively being pursued.\n\nSection::::Mono-silicon.\n",
"As for the traditional foundries, on July 2006 TSMC claimed no customer wanted SOI, but Chartered Semiconductor devoted a whole fab to SOI.\n\nSection::::Use in high-performance radio frequency (RF) applications.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-02624 | How do microphones on the side of the face pick up sound so well, when they barely reach the cheek? | You know how you can hear someone talking even when they're facing away from you? Sound can bend around barriers, so the microphone can still pick up speech. The design of mics like you mention is meant to be reasonably discreet while still picking up clear sound. Being out of the direct path of air also eliminates plosives, which are caused by air rapidly entering the mic (think of the popping when people say 'p' and 'b' sounds directly into a mic.) Now, with helicopter pilots, sound quality isn't as big of an issue, and since the aircraft is so incredibly loud, the mic needs to be right up against the pilot's mouth in order to be able to hear him/her. | [
"BULLET::::- The facial nerve (VII) and vestibulocochlear nerve (VIII) both enter the internal auditory canal in the temporal bone. The facial nerve then reaches the side of the face by using the stylomastoid foramen, also in the temporal bone. Its fibers then spread out to reach and control all of the muscles of facial expression. The vestibulocochlear nerve reaches the organs that control balance and hearing in the temporal bone and therefore does not reach the external surface of the skull.\n",
"Section::::Use and implementation.\n\nA PSK31 operator typically uses a single-sideband (SSB) transceiver connected to the sound card of a computer running PSK31 software. When the operator enters a message for transmission, the software produces an audio tone that sounds, to the human ear, like a continuous whistle with a slight warble. This sound is then fed through either a microphone jack (using an intermediate resistive attenuator to reduce the sound card's output power to microphone levels) or an auxiliary connection into the transceiver, from which it is transmitted.\n",
"The microphone is subject to increased bass when used at close proximity in pressure gradient mode. Distant sources are not affected by this. However, increasing the distance between the source and microphone levels the frequency response, and will decrease the noise of ambient sound by the same level. As a result of this, many lip-ribbon microphones use a fixed mouth guard to ensure a distance of between source and microphone. Any plosive effects are then controlled by means of pop shields and meshes.\n",
"Suppose that a musician is playing an instrument and that the sound is received by two microphones, each of them located at two different places. Let the attenuation of sound due to distance at each microphone be formula_285 and formula_286, which are assumed to be known constants. Similarly, let the noise at each microphone be formula_268 and formula_271, each with zero mean and variances formula_289 and formula_290 respectively. Let formula_1 denote the sound produced by the musician, which is a random variable with zero mean and variance formula_281 How should the recorded music from these two microphones be combined, after being synced with each other?\n",
"It is possible to recreate the three-dimensional soundfield, however the soundfield microphone particularly shows its versatility in a stereo or mono application. For example, a forward-facing cardioid is produced byformula_1. By combining the signals in various proportions, it is possible to derive any number of first-order microphones, pointing in any direction, before and after recording. For instance, provided the W, X, Y and Z signals are recorded separately, it is possible to pinpoint the microphone to a certain response from the audience even after recording. Examples of software that perform these calculations are\n",
"Section::::Products.\n\nThey had two lines of processors, Advanced Voice and Smart Sound, both branded as \"earSmart.\" Advanced Voice processors come in a variety of models, the eS110 (supports 1 microphone), eS305 and eS310 (supports 2 microphones) and the company’s latest generation eS325 (supports 3 microphones), which was announced at Mobile World Congress 2013.\n",
"Section::::Components.\n\nThe vocal tract can be viewed through an aerodynamic-biomechanic model that includes three main components:\n\nBULLET::::1. air cavities\n\nBULLET::::2. pistons\n\nBULLET::::3. air valves\n",
"Shure's first headset microphone for stage use was created in 1991. One of the earliest uses of a Shure headset mic onstage was for the television special \".\" Among the headset microphones Shure has manufactured over the years are the WH20, WH30, WCM16 (introduced in 1993), Beta53 and Beta54. The newest of Shure's headset microphones, the MX153, part of the Microflex series, was introduced in 2012.\n",
"BULLET::::- The sub-cardioid microphone has no null points. It is produced with about 7:3 ratio with 3–10 dB level between the front and back pickup.\n\nSection::::Polar patterns.:Bi-directional.\n",
"When humans speak, sounds are transmitted in all directions; however, listeners perceive the direction from which the sounds are coming. Similarly, signers broadcast to potentially anyone within the line of sight, while those watching see who is signing. This is characteristic of most forms of human and animal communication.\n\nTransitoriness\n",
"When activated, the sound from the amplifier is reproduced by the speaker in the talk box and directed through the tube into the performer's mouth. The shape of the mouth filters the sound, with the modified sound being picked up by the microphone. The shape of the mouth changes the harmonic content of the sound in the same way it affects the harmonic content generated by the vocal folds when speaking.\n",
"In a more robust and expensive implementation, the returned light is split and fed to an interferometer, which detects movement of the surface by changes in the optical path length of the reflected beam. The former implementation is a tabletop experiment; the latter requires an extremely stable laser and precise optics.\n",
"Section::::Appearance.:Ears.\n\nDobermanns often have their ears cropped, a procedure that is thought to be done for functionality for both the traditional guard duty and effective sound localization. According to the Doberman Pinscher Club of America, ears are \"normally cropped and carried erect\". Like tail docking, ear cropping is illegal in many countries and has never been legal in some Commonwealth countries.\n\nSection::::Intelligence.\n",
"Because bone conduction headphones transmit sound to the inner ear through the bones of the skull, users can consume audio content while maintaining situational awareness.\n\nSection::::Safety.:Use in the 21st century.\n\nThe Google Glass device employs bone conduction technology for the relay of information to the user through a transducer that sits beside the user's ear. The use of bone conduction means that any vocal content that is received by the Glass user is nearly inaudible to outsiders.\n",
"A main application of dereverberation is in hands-free phones and desktop conferencing terminals because, in these cases, the microphones are not close to the source of sound – the talker’s mouth – but at arm’s length or further distance. As well as telecommunications, dereverberation is importantly applied in automatic speech recognition because speech recognizers are usually error-prone in reverberant scenarios.\n",
"The first problem means that we heard a sound directly in front of a subject when it is located at the back in fact or vice versa. This problem can be lessened by accurate inclusion of the subject's head movement and pinna response. And if these two are missed during the HRTF calculation, the reverse problem will occur. And there exists another method to solve this problem, the early echo response. Some people proposed a refined algorithm which exaggerates the differences for the sounds from different directions and strengthen the pinnae effects to reduce the front-to-back reversal rates.\n",
"Section::::Surgical anatomy.\n\nSection::::Surgical anatomy.:Vascular supply.\n\nThe external carotid artery (ECA), with contributions from the internal carotid artery (ICA) system, is the predominant arterial blood supply to the skin and muscles of the cheek. The greatest contribution is from the facial artery which traverses the face obliquely and terminates in the angular artery. The dorsal nasal artery runs along the nose and is the terminal branch of the ophthalmic artery, which is a terminal branch of the ICA. Many smaller branches and communications also exist.\n",
"Nady System's products have also been used in producing broadcast television content, including the Golden Globe Awards, Grammy Awards, and The Lawrence Welk Show. Nady Systems' work in television has been recognized by the National Academy of Television Arts and Sciences, which awarded Nady Systems an Emmy Award in 1996 for “Outstanding Technical Achievement in Pioneering Wireless Microphone Technology.”\n",
"SoundBite relies on two primary components to deliver sound:\n\nBULLET::::- The behind-the-ear (BTE) microphone unit is worn on the patient’s deaf ear. Using the natural acoustic benefits of the outer ear, sound is collected and channeled into the ear canal. A tiny microphone is placed within the canal of the impaired ear and is connected by a small tube to the BTE. The BTE uses a digital signal processor to process the sound and a wireless chip to transmit the sound signals to the in-the-mouth (ITM) device.\n",
"JZ Microphones produces ten microphone models, in whose creation twenty-four patents owned by company are used. Most of the microphones are made with ‘’Golden drop’’ technology – a slightly different gilding process of capsule; in result the sound is much natural and cleaner. Also the original design of microphones differ JZ from other microphones, one of the most popular model series Black Hole unique design with hole in body makes attaching easier and also reduces unnecessary sounds.\n\nSound engineers and producers that use JZ Microphones JZ:\n\nBULLET::::- Andy Gill\n\nBULLET::::- Sylvia Massy\n\nBULLET::::- Dave Jerden\n\nBULLET::::- Kurt Hugo Schneider\n",
"They exit the bus and Clem enters \"The Wall of Science\", an exhibit featuring recreations of historical events. The presentation includes two men, \"The Honorable Chester Cadaver\" and \"Senator Clive Brown\", demonstrating a \"model government\" (which runs on electricity). When one of them asks Clem his name, he responds \"Uh, Clem\", and the central computer records this and subsequently addresses him as \"Ah clem\". Barney honks his nose after stating his name, which is recorded as \"Barney (honk sound)\". Thus the album satirizes early, inaccurate speech recognition technology.\n",
"Within the first days of the FIFA World Cup 2010 Prosoniq came out with a free \"VuvuX\" AudioUnit plug in to remove the Vuvuzela noise from the audio commentary without affecting speech and background atmosphere. According to their web site the plug in is not using a notch filter and is based on their sonicWORX de-mixing technology which utilizes statistical signal properties.\n",
"Section::::Background.\n\nA microphone blocker is a cheap, simple accessory that provides protection against eavesdropping, cellphone surveillance, and other types of audio hacking. There are a variety of spyware programs that can turn on a mobile device's microphone remotely, and the vast majority of devices do not have internal hardware protection to prevent eavesdropping. Most anti-spying software does not guarantee that the microphone will be fully blocked or disabled, while spyware and malware are constantly changing and improving.\n",
"The quality of the imaging arriving at the listener's ear depends on numerous factors, of which the most important is the original \"miking\", that is, the choice and arrangement of the recording microphones (where \"choice\" refers here not to the brands chosen, but to the size and shape of the microphone diaphragms, and \"arrangement\" refers to microphone placement and orientation relative to other microphones). This is partly because miking simply affects imaging more than any other factor, and because, if the miking spoils the imaging, nothing later in the chain can recover it.\n",
"BULLET::::- Tonight’s Headlines(2006) – in this work we find the newest version of the “eloquent silence” series running through Solomons’ oeuvre. The work starts out like an ordinary night edition of the news on Israeli Channel 2, but when the time comes for the anchors to announce the headlines, they stare at the camera, without uttering a word. They keep the gestures and body language characteristic of the media just as much as the spoken language itself.\n"
] | [
"Microphones on the side of the cheek should not pick up sound well.",
"If microphones can barely reach the cheek, they should not be able to pick up sound so well."
] | [
"They can pick up sound because the sound travels to the microphone.",
"Sound cam bend around barriers, therefore the microphone can still pick up speech on the sidelines."
] | [
"false presupposition"
] | [
"Microphones on the side of the cheek should not pick up sound well.",
"If microphones can barely reach the cheek, they should not be able to pick up sound so well."
] | [
"false presupposition",
"false presupposition"
] | [
"They can pick up sound because the sound travels to the microphone.",
"Sound cam bend around barriers, therefore the microphone can still pick up speech on the sidelines."
] |
2018-04777 | How does an architectural drawing get translated into a physical building? Who decides how many bolts, the type of material, etc? | I am a plumbing designer. The basic answer is that all the pieces go through many hands before a building is built and it depends on what kind of project. A structural engineer will determine what type of skeleton holds a building up, concrete footers, if steel and concrete, what columns and beams hold up the floors, etc. Framers work with them to determine trusses in wood frame buildings. On big enough projects (basically anything commercial and not just single houses), there is a design process for each trade. Engineers most of the time come up with plans for the trades but depending on the quality of engineers (usually cheaper bids taken and therefore corners are cut), the level of detail and similarity to what actually gets built will differ. I work for a company that installs plumbing, HVAC, and fire protection systems. In the office we have project managers, estimators and designers (me). Between us we take plans that architects and engineers produce and turn them into workable and cost effective projects for our trades. Construction companies work on maybe a 5% profit margin so this is important because the plans we receive are generally not set up to be cost effective (knowledge of code and experience in the field is important to know what does and doesn't work and how to save money and be more effective with time). Jobs tend to change while in progress as well so we have to be able to adapt to make a functional building. Estimators bid a job based on rough plans. They determine what types of materials go in a bid based on job specs, cost and codes. Designers take those plans after a bid is won and produce shop drawings for the field guys and do material takeoffs as well sometimes to determine what is needed for the project. Project managers take this info and convey it to the field foremen who make it happen on site. Hope this wasn't too wordy. Any other questions related to this are welcome | [
"Section::::Types.:Survey drawings.\n\nMeasured drawings of existing land, structures and buildings. Architects need an accurate set of survey drawings as a basis for their working drawings, to establish exact dimensions for the construction work. Surveys are usually measured and drawn up by specialist land surveyors.\n\nSection::::Types.:Record drawings.\n",
"BULLET::::- Assembly drawings show how the different parts are put together. For example, a wall detail will show the layers that make up the construction, how they are fixed to structural elements, how to finish the edges of openings, and how prefabricated components are to be fitted.\n",
"These drawings show the provisions required to accommodate the services that significantly affect the design of the building structure, fabric, and external works. This includes drawings (and schedules) of work the building trade carries out, or that must be cost-estimated at the design stage, e.g., plant bases\n\nSection::::Sets of drawings.:United Kingdom.:Builder's work Drawing.:Installation stage.\n\nThese drawings show requirements for building works necessary to facilitate installing the engineering services (other than where it is appropriate to mark out on site). Information on these drawing includes details of all:\n",
"A comprehensive set of drawings used in a building construction project: these will include not only architect's drawings but structural and services engineer's drawings etc. Working drawings logically subdivide into location, assembly and component drawings.\n\nBULLET::::- Location drawings, also called general arrangement drawings, include floor plans, sections and elevations: they show where the construction elements are located.\n",
"Architectural drawings are made according to a set of conventions, which include particular views (floor plan, section etc.), sheet sizes, units of measurement and scales, annotation and cross referencing. \n",
"Design engineers also use orthographic or pictorial views called \"working cases\" to record their ideas. These preliminary sketches are used as the basis for both the component and assembly drawings. Production drawings are 'drawn' (graphic) information prepared by the design team for use by the construction or production team, the main purpose of which is to define the size, shape, location and production of the building or component'.\n",
"There are two basic elements to a building design, the aesthetic and the practical. The aesthetic element includes the layout and visual appearance, the anticipated feel of the materials, and cultural references that will influence the way people perceive the building. Practical concerns include space allocated for different activities, how people enter and move around the building, daylight and artificial lighting, acoustics, traffic noise, legal matters and building codes, and many other issues. While both aspects are partly a matter of customary practice, every site is different. Many architects actively seek innovation, thereby increasing the number of problems to be resolved.\n",
"In architecture, the finished work is expensive and time consuming, so it is important to resolve the design as fully as possible before construction work begins. Complex modern buildings involve a large team of different specialist disciplines, and communication at the early design stages is essential to keep the design moving towards a coordinated outcome. Architects (and other designers) start investigating a new design with sketches and diagrams, to develop a rough design that provides an adequate response to the particular design problems.\n",
"Section::::Information required to be included in Shop Drawings.:Notes of changes or alterations from the construction documents.\n\nNotes concerning changes or differences from the original documents should be made on the shop drawing for the architect’s and engineer’s approval. Ultimately, they are responsible for changes in these drawings and should have the opportunity to analyze any modifications. A dialogue should occur between the fabricator and the architect and engineer about any areas needing clarification. Successful installations are the result of collaboration between the designer, fabricator, and contractor.\n\nSection::::Information required to be included in Shop Drawings.:Information needed to fabricate the product.\n",
"Developments in the 20th century included the parallel motion drawing board, as well as more complex improvements on the basic T-square. The development of reliable technical drawing pens allowed for faster draughting and stencilled lettering. Letraset dry transfer lettering and half-tone sheets were popular from the 1970s until computers made those processes obsolete.\n\nSection::::Drafting.:CGI and computer-aided design.\n",
"A drawing which based on the detailed drawing, installation drawing or co-ordination drawing (interface drawing) with the primary purpose of defining that information needed by the tradesmen on site to install the works or concurrently work among various engineering assembly. The main features of typical installation drawings are:\n\nBULLET::::- Plan layouts to a scale of at least 1:50, accompanied by cross-sections to a scale of at least 1:20 for all congested areas\n\nBULLET::::- A spatially coordinated drawing, i.e., show no physical location clashes between the system components\n",
"BULLET::::- Component drawings enable self-contained elements e.g. windows and doorsets, to be fabricated in a workshop, and delivered to site complete and ready for installation. Larger components may include roof trusses, cladding panels, cupboards and kitchens. Complete rooms, especially hotel bedrooms and bathrooms, may be made as prefabricated pods complete with internal decorations and fittings.\n",
"BULLET::::- Major components, so their whereabouts in specifications and other drawings can be easily determined\n\nSection::::Sets of drawings.:United Kingdom.:Detailed design drawing.\n\nA drawing the intended locations of plant items and service routes in such detail as to indicate the design intent. The main features of detailed design drawings should be as follows:\n\nBULLET::::- Plan layouts to a scale of at least 1:100.\n\nBULLET::::- Plant areas to a scale of at least 1:50 and accompanied by cross-sections.\n",
"Architectural drawing\n\nAn architectural drawing or architect's drawing is a technical drawing of a building (or building project) that falls within the definition of architecture. Architectural drawings are used by architects and others for a number of purposes: to develop a design idea into a coherent proposal, to communicate ideas and concepts, to convince clients of the merits of a design, to enable a building contractor to construct it, as a record of the completed work, or to make a record of a building that already exists.\n",
"The process and the knowledge it produces is recursive: Since subcontractors are engaged early and often in an architect-led design build project, to assess efficiencies, opportunity costs, payback rates and quality options. Their input informs overall design decisions from the outset. Cost-benefit is also a constant consideration that informs design decisions from the outset. Building performance is measured early too, so that trade offs between budget, schedule, functionality and usability can inform specification and continuous refinement of the design.\n",
"Traditionally, working drawings would typically combine plans, sections, elevations and some details to provide a complete explanation of a building on one sheet. That was possible because little detail was included, the building techniques involved being common knowledge amongst building professionals. Modern working drawings are much more detailed and it is standard practice to isolate each view on a separate sheet. Notes included on drawings are brief, referring to standardised specification documents for more information. Understanding the layout and construction of a modern building involves studying an often-sizeable set of drawings and documents.\n\nSection::::Drafting.\n",
"Mechanical system drawings must abide by all of the following regulations: the National Building Code of Canada, the National Fire Code, and Model National Energy Code of Canada for Buildings. For residential projects, The National Housing Code of Canada and the Model National Energy Code of Canada for Houses must also be followed. These drawings must also adhere to local and provincial codes and bylaws.\n\nSection::::See also.\n\nBULLET::::- Architectural drawing\n\nBULLET::::- Electrical drawing\n\nBULLET::::- Engineering drawing\n\nBULLET::::- Plumbing drawing\n\nBULLET::::- Structural drawing\n\nBULLET::::- Working drawing\n\nSection::::External links.\n\nBULLET::::- Examples of mechanical drawings\n",
"Section::::Sets of drawings.\n\nSection::::Sets of drawings.:United States.\n\nSection::::Sets of drawings.:United States.:Arrangement drawing.\n\nArrangement drawings include information about the self-contained units that make up the system: table of parts, fabrication and detail drawing, overall dimension, weight/mass, lifting points, and information needed to construct, test, lift, transport, and install the equipment. These drawings should show at least three different orthographic views and clear details of all the components and how they are assembled.\n\nSection::::Sets of drawings.:United States.:Assembly drawing.\n",
"The geometry of most \"architectural\" structures (such as buildings or bridges) is twodimensional and it is essential to study this aspect, whether for aesthetic, commodity-related or economic reasons. Several criteria are therefore taken into account in its definition.\n\nSection::::Objective.\n\nThe study is limited to the quest of the geometry giving the structure of minimum volume.\n\nThe cost of a structure depends on the nature and the quantity of the materials used as well as the tools and human resources required for its production.\n",
"Historically, drawings were made in ink on paper or a similar material, and any copies required had to be laboriously made by hand. The twentieth century saw a shift to drawing on tracing paper, so that mechanical copies could be run off efficiently. The development of the computer had a major impact on the methods used to design and create technical drawings, making manual drawing almost obsolete, and opening up new possibilities of form using organic shapes and complex geometry. Today the vast majority of drawings are created using CAD software.\n\nSection::::Size and scale.\n",
"Architectural drawings are made according to a set of conventions, which include particular views (floor plan, section etc.), sheet sizes, units of measurement and scales, annotation and cross referencing. Conventionally, drawings were made in ink on paper or a similar material, and any copies required had to be laboriously made by hand. The twentieth century saw a shift to drawing on tracing paper, so that mechanical copies could be run off efficiently.\n\nSection::::Architectural plan aspects.:Architectural design values.\n",
"There are many types of in-depth specialized technical evaluations and audits. These validations generally require time, a major effort by the customer group, and a high level of funding. Normally, the most valuable methods and tools are comprehensive scans which are performance based and include metrics that can easily be measured without lab-type instruments.\n\nEvaluations and reviews, are integral part of asset and portfolio management, design, construction, commissioning.\n",
"Corrections are made by the architect and engineer, and the shop drawing is corrected by the supplier, then the appropriate number of copies is distributed. This method can be time consuming, as the shop drawing is not approved until the corrections are made on it.\n\nSection::::Reviews.:Submittal of a copy that can be reproduced.\n\nThe architect and engineer make comments on the reproducible, then copies are distributed. This method facilitates the timely approval and distribution of the shop drawing. Review comments usually are obvious on the reproducible copy. When sepia copies are used, \n",
"Electrical drawing\n\nAn electrical drawing is a type of technical drawing that shows information about power, lighting, and communication for an engineering or architectural project. Any electrical working drawing consists of \"lines, symbols, dimensions, and notations to accurately convey an engineering's design to the workers, who install the electrical system on the job\". \n\nA complete set of working drawings for the average electrical system in large projects usually consists of:\n\nBULLET::::- A plot plan showing the building's location and outside electrical wiring\n\nBULLET::::- Floor plans showing the location of electrical systems on every floor\n",
"If using a local authority, approval can be obtained in 1 of 3 ways:-\n\nSection::::1. Full Plans.\n\nBy the \"full plans\" method where drawings are deposited with the Local Authority and are subsequently checked for compliance with the Building Regulations.\n\nThe various stages of the work are also inspected and checked for compliance with the relevant technical requirements of the Building Regulations; by a Building Control Surveyor employed by the Local Authority.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-11780 | Why is it that when one begins to actually think about a reflex like blinking, breathing, or swallowing, it seems to become less reflexive and require conscious effort to even continue? | Because a few of our body systems have *dual* connections to their muscles, with controls by *both* the voluntary and the autonomic nervous systems. | [
"Primitive reflexes are primarily tested with suspected brain injury or some dementias such as Parkinson's disease for the purpose of assessing frontal lobe functioning. If they are not being suppressed properly they are called frontal release signs. Atypical primitive reflexes are also being researched as potential early warning signs of autistic spectrum disorders.\n",
"In a study done by and his colleagues students had to watch a movie. One group did so with a pen between their teeth while the other group had to hold the pen with their lips. The first group interpreted the movie funnier than the second, because the muscles responsible for smiling were used and then made the brain release hormones related to being happy. These studys show that facial expressions are not only the result of emotions but can also be their cause. \n\nSection::::Reception.\n",
"Many actions in response to sensory inputs are rapid, transient, stereotyped, and unconscious. They could be thought of as cortical reflexes and are characterized by rapid and somewhat stereotyped responses that can take the form of rather complex automated behavior as seen, e.g., in complex partial epileptic seizures. These automated responses, sometimes called \"zombie behaviors\", could be contrasted by a slower, all-purpose conscious mode that deals more slowly with broader, less stereotyped aspects of the sensory inputs (or a reflection of these, as in imagery) and takes time to decide on appropriate thoughts and responses. Without such a consciousness mode, a vast number of different zombie modes would be required to react to unusual events.\n",
"Social cognition researchers are also interested in the regulation of activated schemas. It is believed that the situational activation of schemas is automatic, meaning that it is outside individual conscious control. In many situations however, the schematic information that has been activated may be in conflict with the social norms of the situation in which case an individual is motivated to inhibit the influence of the schematic information on their thinking and social behavior. Whether a person will successfully regulate the application of the activated schemas is dependent on individual differences in self-regulatory ability and the presence of situational impairments to executive control. High self-regulatory ability and the lack of situational impairments on executive functioning increase the likelihood that individuals will successfully inhibit the influence of automatically activated schemas on their thinking and social behavior. When people stop suppressing the influence of the unwanted thoughts, a rebound effect can occur where the thought becomes hyper-accessible.\n",
"Preconscious automaticity requires only the triggering proximal stimulus event, and occur prior to or in the absence of any conscious awareness of that event. Because they occur without our conscious awareness they are unnoticeable, uncontrollable, and nearly effortless. Many previous studies suggest that the mere perception of the physical behaviors of others, as well as abstract categories (race, gender, role-related) that occurs passively in person perception results in increased tendencies to behave in the same way oneself. So basically a stimulus may that be person, object, or an action will unconsciously affect one's response and or behavior without one's knowledge. In a study they subliminally exposed one of the participants with an African American face or a Caucasian face before the participants engaged in a verbal game. The study concluded that when participants were subliminally exposed to the African American faces they were significantly more aggressive in the verbal game than those exposed to the Caucasian face. In a study related to this the participants were required to play a video game that depicted a real-life situation that involved deciding to shoot a man with a gun. Participants were shown pictures of both Caucasian and African American men with or without a gun or another object in hand. The participants had to respond \"Shoot\" or \"Not Shoot\" within milliseconds. The results were that participants significantly decided to shoot faster when African Americans had a gun versus Caucasians.\n",
"The studies of Paul Ekman, a psychologist who created the Facial Action Coding System (FACS), indicates that a lot of \"thin slicing\" can be done within seconds by unconsciously analyzing a person's fleeting look called a microexpression. Ekman claims that the face is a rich source of what is going on inside our mind and although many facial expressions can be made voluntarily, our faces are also dictated by an involuntary system that automatically expresses our emotions. On example of how movements of the face result in emotions is shown in an experiment from Paul Ekman, Wallace V. Friesen and Robert Levenson. They asked their test subjects to remember negative or burdening experiences. Another group was asked only to make faces that resembled negative feelings like anger, sadness and the like. Both groups were connected to sensors which measured their physiological reactions (puls and body temperature). Interestingly the latter group showed the same physical reactions as the first group.\n",
"Automatic behavior, from the Greek \"automatos\" or self-acting, is the spontaneous production of often purposeless verbal or motor behavior without conscious self-control or self-censorship. This condition can be observed in a variety of contexts, including schizophrenia, psychogenic fugue, epilepsy (in complex partial seizures and Jacksonian seizures), narcolepsy or in response to a traumatic event. The individual does not recall the behavior. According to the book 'The Mind Machine' by Colin Blakemore, hypoglycemia usually leads quickly to unconsciousness, but as blood glucose level falls, there is 'a window of experience between sanity and \"coma\" in which self-control is lost', and the body 'behaves on its own'.\n",
"As said earlier from Norman and Shallice the other component used in voluntary action is supervisory attention. Schemas cause the activation of behaviors; the greater the excitation of the activity the more easily it is to achieve the subgoals and complete the schema. Either top-down fashion activates schemas, where intentions are governed by some type of cognitive system, or by bottom-up fashion where features or an object in the environment trigger a schema to begin. The bottom-up feature is what is seen in ideational apraxia because an object appears to capture the attention of the patient. However, the schema that corresponds to the object cannot be fulfilled. For some reason there is a disconnect in the brain that does not allow the individual to produce the sequence of actions that they know should be happening with the object that is in their visual pathway. It is this area that is still an area of ambiguity to physicians and researchers alike. They are not sure where in the brain the action schema pathway is severed.\n",
"BULLET::::- If people are asked to explain their impressions and experience, they are less likely to remember what they felt. The act of describing an experience with words overrides part of the ability in the brain to remember the feelings as Jonathan W. Schooler showed.\n\nSection::::Research and examples.\n",
"Through this study, Pennypacker confirmed the observation of external inhibition on the human level. External inhibition was especially observed when the tone (external stimulus) was introduced during the acquisition phase, which was the interval right after the paired CS-UCS trials. The conditioned response, blinking reflex, was observed to be in decline (inhibited) compared to the rate during conditioning.\n",
"To the young writers, she wrote, \"You must keep trying because it is as essential as drawing breath – like exhaling! All the thoughts breathed out and shaping themselves visibly after being inside the cells of the brain, and then released. If you hold your breath and do not breathe out, you will suffocate.\"\n",
"By contrast with the conceptualization driving Coué's auto-suggestive self-administration procedure — namely, that constant repetition creates a situation in which \"\"a particular idea saturates the microcognitive environment of 'the mind'…\"\", which, then, in its turn, \"is converted into a corresponding ideomotor, ideosensory, or ideoaffective action, by the \"ideodynamic principle of action\"\", \"which then, in its turn, generates the response\" — the primary target of the entirely different self-administration procedure developed by Johannes Heinrich Schultz, known as \"Autogenic Training\", was to affect the autonomic nervous system, rather than (as Coué's did) to affect 'the mind'.\n\nSection::::The Coué method.\n",
"Older children and adults with atypical neurology (e.g., people with cerebral palsy) may retain these reflexes and primitive reflexes may reappear in adults. Reappearance may be attributed to certain neurological conditions including dementia (especially in a rare set of diseases called frontotemporal degenerations), traumatic lesions, and strokes. An individual with cerebral palsy and typical intelligence can learn to suppress these reflexes, but the reflex might resurface under certain conditions (i.e., during extreme startle reaction). Reflexes may also be limited to those areas affected by the atypical neurology, (i.e., individuals with cerebral palsy that only affects their legs retaining the Babinski reflex but having normal speech); for those individuals with hemiplegia, the reflex may be seen in the foot on the affected side only.\n",
"Section::::Neurology.:Executive control model.\n\nThe executive control model argues that thought insertion may be mediated by altered activity in motor planning regions, specifically the supplementary motor area. In one experiment, reduced connectivity between the supplementary motor area and motor implementation regions during suggested involuntary compared to voluntary movements was observed.\n\nSection::::Treatment.\n",
"After the above experiments, the authors concluded that subjects sometimes could not distinguish between \"producing an action without stopping and stopping an action before voluntarily resuming\", or in other words, they could not distinguish between actions that are immediate and impulsive as opposed to delayed by deliberation. To be clear, one assumption of the authors is that all the early (600 ms) actions are unconscious, and all the later actions are conscious. These conclusions and assumptions have yet to be debated within the scientific literature or even replicated (it is a very early study).\n",
"De Neys conducted a study that manipulated working memory capacity while answering syllogistic problems. This was done by burdening executive processes with secondary tasks. Results showed that when System 1 triggered the correct response, the distractor task had no effect on the production of a correct answer which supports the fact that System 1 is automatic and works independently of working memory, but when belief-bias was present (System 1 belief-based response was different from the logically correct System 2 response) the participants performance was impeded by the decreased availability of working memory. This falls in accordance with the knowledge about System 1 and System 2 of the dual-process accounts of reasoning because System 1 was shown to work independent of working memory, and System 2 was impeded due to a lack of working memory space so System 1 took over which resulted in a belief-bias.\n",
"A more direct test of the relationship between the Bereitschaftspotential and the \"awareness of the intention to move\" was conducted by Banks and Isham (2009). In their study, participants performed a variant of the Libet's paradigm in which a delayed tone followed the button press. Subsequently, research participants reported the time of their intention to act (e.g., Libet's \"W\"). If W were time-locked to the Bereitschaftspotential, W would remain uninfluenced by any post-action information. However, findings from this study show that W in fact shifts systematically with the time of the tone presentation, implicating that W is, at least in part, retrospectively reconstructed rather than pre-determined by the Bereitschaftspotential.\n",
"Libet found that the \"unconscious\" brain activity leading up to the \"conscious\" decision by the subject to flick their wrist began approximately half a second \"before\" the subject consciously felt that they had decided to move. Libet's findings suggest that decisions made by a subject are first being made on a subconscious level and only afterward being translated into a \"conscious decision\", and that the subject's belief that it occurred at the behest of their will was only due to their retrospective perspective on the event.\n",
"Section::::The birth of \"Conscious Autosuggestion\".:Conceptual difference from Autogenic Training.\n",
"Automaticity can be disrupted by explicit attention when the devotion of conscious attention to the pattern alters the content or timing of that pattern itself. This phenomenon is especially pronounced in situations that feature high upside and/or downside risk and impose the associated psychological stress on one's conscious mind; one's performance in these situations may either \"a\") be unimpaired or even enhanced (\"flow\") or \"b\") deteriorate (\"choke\").\n",
"In \"Think!: Why Crucial Decisions Can't Be Made in the Blink of an Eye\" (Simon and Schuster, 2006), Michael LeGault argues that \"Blinklike\" judgments are not a substitute for critical thinking. He criticizes Gladwell for propagating unscientific notions:\n",
"In decide trials the participants, it seems, were not able to reliably identify whether they had really had time to decide – at least, not based on internal signals. The authors explain that this result is difficult to reconcile with the idea of a conscious veto, but is simple to understand if the veto is considered an unconscious process. Thus it seems that the intention to move might not only arise from the subconscious, but it may only be inhibited if the subconscious says so. This conclusion could suggest that the phenomenon of \"consciousness\" is more of narration than direct arbitration (i.e. unconscious processing causes all thoughts, and these thoughts are again processed subconsciously).\n",
"BULLET::::2. \"Cognition is time-pressured. We are 'mind on the hoof' (Clark, 1997), and cognition must be understood in terms of how it functions under the pressure of real-time interaction with the environment.\" When you're under pressure to make a decision, the choice that is made emerges from the confluence of pressures that you're under. In the absence of pressure, a decision may be made differently.\n",
"Some studies have shown that when test subjects are under what Wegner refers to as a \"cognitive load\" (for instance, using multiple external distractions to try to suppress a target thought), the effectiveness of thought suppression appears to be reduced. However, in other studies in which focused distraction is used, long term effectiveness may improve. That is, successful suppression may involve less distractors. For example, in 1987 Wegner, Schneider, Carter & White found that a single, pre-determined distracter (e.g., a red Volkswagen) was sufficient to eliminate the paradoxical effect \"post-testing\". Evidence from Bowers and Woody in 1996 is supportive of the finding that hypnotized individuals produce no paradoxical effects. This rests on the assumption that deliberate \"distracter activity\" is bypassed in such an activity.\n",
"Section::::Neurological basis.\n\nResearch studies regarding the neurological functions of involuntary memory have been few in number. Thus far, only two neuroimaging studies have been conducted comparing involuntary memories to voluntary memories using Positron Emission Tomography (PET).\n"
] | [
"Muscles used in reflexes are only controlled by one nervous system. "
] | [
"Some muscles are controlled by both the voluntary and autonomic nervous systems. "
] | [
"false presupposition"
] | [
"Muscles used in reflexes are only controlled by one nervous system. ",
"Muscles used in reflexes are only controlled by one nervous system. "
] | [
"normal",
"false presupposition"
] | [
"Some muscles are controlled by both the voluntary and autonomic nervous systems. ",
"Some muscles are controlled by both the voluntary and autonomic nervous systems. "
] |
2018-02001 | Why do brittle things, like lead on the tip of my mechanical pencil, seem stronger when shorter? | It comes down to leverage: a longer piece means you can apply force further away from the breaking point, which multiplies the bending load at that point. | [
"For a less stiff sublayer, an additional strain in the sublayer, formula_14, lessens the strain in the pictorial layer such that formula_15. If the ratio of the strains between the two layers is approximately the same as the ratio of their elastic moduli, the crack spacing for a support with finite stiffness can be approximated as:\n\nformula_16\n",
"At a time \"t\", a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails. When the stress is maintained for a shorter time period, the material undergoes an initial strain until a time \"t\" at which the stress is relieved, at which time the strain immediately decreases (discontinuity) then continues decreasing gradually to a residual strain.\n",
"The characterized \"creep strain rate\" typically refers to the constant rate in this second stage. Stress dependence of this rate depends on the creep mechanism. In tertiary creep, the strain rate exponentially increases with stress because of necking phenomena or internal cracks or voids decrease the effective area of the specimen. Strength is quickly lost in this stage while the material's shape is permanently changed. The acceleration of creep deformation in the tertiary stage eventually leads to material fracture.\n\nSection::::Mechanisms of deformation.\n",
"Although the term \"Mullins effect\" is commonly applied to stress softening in filled rubbers, the phenomenon is common to all rubbers, including \"gums\" (rubber lacking filler). As first shown by Mullins and coworkers, the retraction stresses of an elastomer are independent of carbon black when the stress at the maximum strain is constant. Mullins softening is a viscoelastic effect, although in filled rubber there can be additional contributions to the mechanical hysteresis from filler particles debonding from each other or from the polymer chains.\n",
"When subjected to a step constant stress, viscoelastic materials experience a time-dependent increase in strain. This phenomenon is known as viscoelastic creep.\n",
"Time is often neglected in the stress-strain curve relations, but at higher strain rates, higher stresses will occur according to the relationship \n\nformula_15\n\nwhere m is the strain rate sensitivity. The higher m is, the greater resistance to necking this material will have, just like the case of work-hardening coefficient.\n\nAnother dominant factor is the temperature. Temperature controls the activation of dislocations and diffusions. As the temperature increases, brittle materials can be transformed into ductile materials.\n\nSection::::See also.\n\nBULLET::::- Elastomers\n\nBULLET::::- Strength of materials\n\nBULLET::::- Tensometer\n\nBULLET::::- Universal testing machine\n\nBULLET::::- Stress–strain index\n\nBULLET::::- Stress–strain analysis\n\nSection::::External links.\n",
"Section::::Tools.\n\nTurning tools are generally made from three different types of steel; carbon steel, high speed steel (HSS), and more recently powdered metal. Comparing the three types, high speed steel tools maintain their edge longer, requiring less frequent sharpening than carbon steel, but not as long as powdered metal tools. The harder the type of high speed steel used, the longer the edge will maintain sharpness. Powdered steel is even harder than HSS, but takes more effort to obtain an edge as sharp as HSS, just as HSS is harder to get as sharp as carbon steel.\n",
"where formula_39 is a constant between 1.5-6, formula_40is the flow stress of fibers, formula_41is the fracture strain of fibers, formula_15is the fraction of fibers, and formula_43is the debond length. From the equation, it can be found that higher flow stress and longer debond length can improve the toughening. However, longer debond length usually lead to a decrease of flow stress because of loss of constraint for plastic deformation. \n\nSection::::Toughening in Polymers.\n",
"In the initial stage, or primary creep, or transient creep, the strain rate is relatively high, but decreases with increasing time and strain due to a process analogous to work hardening at lower temperatures. For instance, the dislocation density increases and, in many materials, a dislocation subgrain structure is formed and the cell size decreases with strain. The strain rate diminishes to a minimum and becomes near constant as the secondary stage begins. This is due to the balance between work hardening and annealing (thermal softening). The secondary stage referred to as \"steady-state creep\", is the most understood. The microstructure is invariant during this stage, which means that recovery effects are concurrent with deformation. No material strength is lost during these first two stages of creep.\n",
"For many ductile metals, tensile loading applied to a sample will cause it to behave in an elastic manner. Each increment of load is accompanied by a proportional increment in extension. When the load is removed, the piece returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region; now when the load is removed, some degree of extension will remain.\n",
"Creep resistance can be influenced by many factors such as diffusivity, precipitate and grain size.\n",
"Consider the difference between a carrot and chewed bubble gum. The carrot will stretch very little before breaking. The chewed bubble gum, on the other hand, will plastically deform enormously before finally breaking.\n\nSection::::Design terms.\n",
"Increasing the rubber concentration in a nanocomposite decreases the modulus and tensile strength. In one study, looking at PA6-EPDM blend, increasing the concentration of rubber up to 30 percent showed a negative linear relationship with the brittle-tough transition temperature, after which the toughness decreased. This suggests that the toughening effect of adding rubber particles is limited to a critical concentration. This is examined further in a study on PMMA from 1998; using SAXS to analyze crazing density, it was found that crazing density increases and yield stress decreases until the critical point when the relationship flips.\n",
"where formula_33 is the ratio between debond length and critical length, formula_34is the strength of fibers, formula_24 is the width of fiber, formula_15is the fraction of fibers and formula_37is the interface friction stress. From the equation, it can be found that higher volume fraction, higher fiber strength and lower interfacial stress can get a better toughening effect.\n\nWhen fiber is ductile, the work from plastic deformation mainly contributes to the improvement of toughens. The additional toughness contributed by plastic deformation can be expressed by:\n\nformula_38\n",
"where formula_11 is a constant, formula_12 is the average grain diameter and formula_13 is the original yield stress.\n",
"phenomenon, called aging, causes that formula_8 depends not only on the time lag formula_7 but on both formula_10 and formula_4 separately. At variable stress formula_12, each stress increment formula_13 applied at time formula_4 produces strain history formula_15. The linearity implies the principle of superposition (introduced by Boltzmann and for the case of aging, by Volterra). This leads to the (uniaxial) stress–strain relation of linear aging viscoelasticity:\n\nHere formula_16 denotes shrinkage strain formula_17 augmented by thermal expansion, if any. The integral is the Stieltjes\n",
"The number of vacancies does not directly affect the PLC start point. It was found if a material is pre-strained to a value ½ of that required to initiate jerky flow and then rested at the test temperature or annealed to remove vacancies (but low enough that the dislocation structure is not affected), the total critical strain is only slightly decreased as well as the types of serrations that do occur.\n\nSection::::Serrations descriptors.\n",
"BULLET::::1. A primary creep stage, also known as transient creep, is the starting stage during which hardening of the material leads to a decrease in the rate of flow which is initially very high. formula_4.\n\nBULLET::::2. The secondary creep stage, also known as the steady state, is where the strain rate is constant. formula_5.\n\nBULLET::::3. A tertiary creep phase in which there is an increase in the strain rate up to the fracture strain. formula_6.\n\nSection::::Phenomenology.:Relaxation test.\n",
"Different techniques are used to quantify material characteristics at smaller scales. Measuring mechanical properties for materials, for instance, of thin films, can not be done using conventional uniaxial tensile testing. As a result, techniques testing material \"hardness\" by indenting a material with a very small impression have been developed to determine to estimate these properties.\n",
"Young's modulus represents the factor of proportionality in Hooke's law, which relates the stress and the strain. However, Hooke's law is only valid under the assumption of an \"elastic\" and \"linear\" response. Any real material will eventually fail and break when stretched over a very large distance or with a very large force; however all solid materials exhibit nearly Hookean behavior for small enough strains or stresses. If the range over which Hooke's law is valid is large enough compared to the typical stress that one expects to apply to the material, the material is said to be linear. Otherwise (if the typical stress one would apply is outside the linear range) the material is said to be non-linear.\n",
"where \"C\" is a constant, \"D\" is the solute diffusivity, formula_12 is the solute concentration, and formula_13 is the misfit parameter, formula_14 is the applied stress. So it could be seen from the equation above, \"m\" is 3 for solute drag creep. Solute drag creep shows a special phenomenon, which is called the Portevin-Le Chatelier effect. When the applied stress becomes sufficiently large, the dislocations will break away from the solute atoms since dislocation velocity increases with the stress. After breakaway, the stress decreases and the dislocation velocity also decreases, which allows the solute atoms to approach and reach the previously departed dislocations again, leading to a stress increase. The process repeats itself when the next local stress maximum is obtained. So repetitive local stress maxima and minima could be detected during solute drag creep.\n",
"Intrinsic toughening mechanisms are not as well defined as extrinsic mechanisms, because they operate on a smaller length-scale than extrinsic mechanisms (usually ~1 μm). Plasticity is usually associated with “soft” materials such as polymers and cartilage, but bone also experiences plastic deformation. One example of an extrinsic mechanism is fibrils (length scale ~10’s nm) sliding against one another, stretching, deforming, and/or breaking. This movement of fibrils causes plastic deformation resulting in crack tip blunting. \n\nSection::::Bone fracture.:Bone characterization.:Extrinsic mechanisms.\n",
"BULLET::::- Kick's law, which related the energy to the sizes of the feed particles and the product particles;\n\nBULLET::::- Bond's law, which assumes that the total work useful in breakage is inversely proportional to the square root of the diameter of the product particles, [implying] theoretically that the work input varies as the length of the new cracks made in breakage.\n\nBULLET::::- Holmes's law, which modifies Bond's law by substituting the square root with an exponent that depends on the material.\n\nSection::::Forces.\n\nThere are three forces which typically are used to effect the comminution of particles: impact, shear, and compression.\n",
"compression, or shear strain is applied. The resulting stress vs. time data can be fitted with a number of equations, called\n\nmodels. Only the notation changes depending of the type of strain applied: tensile-compressive relaxation is denoted formula_21, shear\n\nis denoted formula_22, bulk is denoted formula_23. The Prony series for the shear relaxation is\n\nwhere formula_25 is the long term modulus once the material is totally relaxed, formula_26 are the relaxation times (not to be confused with formula_26 in the diagram); the higher\n",
"The rate of deformation is a function of the material's properties, exposure time, exposure temperature and the applied structural load. Depending on the magnitude of the applied stress and its duration, the deformation may become so large that a component can no longer perform its function — for example creep of a turbine blade will cause the blade to contact the casing, resulting in the failure of the blade. Creep is usually of concern to engineers and metallurgists when evaluating components that operate under high stresses or high temperatures. Creep is a deformation mechanism that may or may not constitute a failure mode. For example, moderate creep in concrete is sometimes welcomed because it relieves tensile stresses that might otherwise lead to cracking.\n"
] | [
"Brittle things like mechanical pencil lead are stronger when small. "
] | [
"Brittle things like mechanical pencil lead are not stronger but longer pieces mean force can be applied further from a breaking point thus multiplying the force. "
] | [
"false presupposition"
] | [
"Brittle things like mechanical pencil lead are stronger when small. ",
"Brittle things like mechanical pencil lead are stronger when small. "
] | [
"normal",
"false presupposition"
] | [
"Brittle things like mechanical pencil lead are not stronger but longer pieces mean force can be applied further from a breaking point thus multiplying the force. ",
"Brittle things like mechanical pencil lead are not stronger but longer pieces mean force can be applied further from a breaking point thus multiplying the force. "
] |
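The leverage answer in the record above can be made concrete with a small worked example. This is a minimal sketch that treats the exposed lead as an ideal cantilever beam of circular cross-section; the symbols F (applied force), L (exposed length), d (lead diameter) and sigma_f (fracture strength) are illustrative and are not taken from any of the passages.

```latex
% Bending of the exposed lead modeled as a cantilever (illustrative sketch).
% The bending moment at the point where the lead meets the sleeve grows with L:
%   M = F \cdot L
% For a circular cross-section, the peak bending stress is
\sigma_{\max} \;=\; \frac{M\,c}{I} \;=\; \frac{F L \,(d/2)}{\pi d^4/64} \;=\; \frac{32\,F L}{\pi d^3}
% Setting \sigma_{\max} equal to the fracture strength \sigma_f gives the breaking force
F_{\text{break}} \;=\; \frac{\pi d^3 \sigma_f}{32\,L}
% so halving the exposed length roughly doubles the force the tip can take,
% even though the material's strength \sigma_f has not changed.
```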
2018-18657 | How did that 2XL robot toy (Tiger Electronics) work? | Cassette tapes have **4 parallel tracks** normally used to provide stereo sound; the toy actually splits the tracks, giving it multiple lines of dialogue it can play with the tape at the same position. So it plays a question, you press one of four buttons, and it plays the dialogue recorded on the track # you pressed. So it could have: > What kind of pet doesn't have a tail? 1. Dog 2. Cat 3. Rock 4. Fish and when you press the button with your answer, it reads the tape from that track. If you pressed 3, it would say "you're right", any other option would say you're wrong, and some answers would be full of sass. The next question is another choice of 4, with the right answer probably on a different track. | [
"There have also been robots such as the teaching computer, Leachim (1974). Leachim was an early example of speech synthesis using the using the Diphone synthesis method. 2-XL (1976) was a robot shaped game / teaching toy based on branching between audible tracks on an 8-track tape player, both invented by Michael J. Freeman. Later, the 8-track was upgraded to tape cassettes and then to digital.\n\nSection::::Modern robots.:Modular robot.\n",
"Another prototype of the domestic robot was called “Topo”, which was designed by Androbot Inc. and released in 1983. Its programming language allowed it to do geometric movements and perform tasks. However, it did not have a sensor so it could not receive the order and responded to the order correctly and thus it could not be considered as a real robot. To solve this problem, the second and third generation contained an infrared transmitter and could be controlled by a remote pad. For the last generation, Topo4 was featured by a text-to-speech processor. Although Topo4 was made, but it never went into production.\n",
"Section::::Features and abilities.\n\nRobot consisted, from top down, of\n\nBULLET::::1. A glass bubble sensor unit with moving antennae;\n\nBULLET::::2. A fluted, translucent ring collar (actually an arrangement of shaped ribs, through which performer Bob May could see);\n",
"Section::::Labo Kits.:Robot Kit.\n",
"A precedent for this type of humanoid robot is in the Audio-Animatronics exhibit \"Great Moments with Mr. Lincoln\" presented at the State of Illinois Pavilion at the 1964 New York World's Fair created by WED Enterprises and appearing again soon thereafter at Disneyland. The device used pneumatics and hydraulics for movement and silicone based skin. The Lincoln figure could rise from his chair and gesture while speaking.\n\nSection::::See also.\n\nBULLET::::- Android\n\nBULLET::::- Gynoid\n\nBULLET::::- ASIMO\n\nBULLET::::- EveR-1\n\nBULLET::::- Hubo\n\nBULLET::::- Humanoid robot\n\nBULLET::::- RealDoll\n\nBULLET::::- Telenoid R1\n\nBULLET::::- TOPIO\n\nBULLET::::- Uncanny valley\n\nBULLET::::- Virtual Woman\n\nSection::::References.\n\nBULLET::::- (Google translation)\n",
"Freddy and Freddy II were robots built at the University of Edinburgh School of Informatics by Pat Ambler, Robin Popplestone, Austin Tate, and Donald Mitchie, and were capable of assembling wooden blocks in a period of several hours. German based company KUKA built the world's first industrial robot with six electromechanically driven axes, known as FAMULUS.\n",
"The company's primary hardware product was the Multiface series of interface devices that allowed dumping and retrieval of the computer's RAM contents to external storage devices such as disk drives, as well as utilities for viewing and disassembling that data. The first in the series was the Multiface One for the ZX Spectrum. It was followed by the Multiface Two for the Amstrad CPC, the Multiface 128 for the Spectrum 128, the Multiface 3 for the Spectrum +3 and the Multiface ST for the Atari ST. Other peripherals developed and sold by Romantic Robot were the Multiprint printer interface and the Videoface video capture peripheral, both for the Sinclair ZX Spectrum.\n",
"Zeno debuted in 2007 at Wired Nextfest. The robot could see, hear, and talk. Zeno featured more than 28 specialized motors, an agile body, and expressive face. Named for creator David Hanson's son Zeno and designed as a nod to Astro Boy, In 2012, an updated version of Zeno was released, which included Dynamixel RX-28 and RX-64 servos, plus a sensor suite comprising a gyro, accelerometer, compass, torque sensors, touch sensors, and temperature sensors, as well as more cartoon-like features.\n\nSection::::Humanoid robots.:Joey Chaos.\n",
"Leachim, was a robot teacher programmed with the class curricular, as well as certain biographical information on the 40 students whom it was programmed to teach. Leachim could synthesize human speech using Diphone synthesis. It was invented by Michael J. Freeman in 1974 and was tested in a fourth grade classroom in the Bronx, New York.\n",
"The WonderBorg itself requires assembly, and can be customised somewhat, with possible configurations involving differently sized gears favouring torque or speed, and wheels to replace the robot's usual six-legged design. Decal stickers were also included, to allow superficial decoration. Once assembled, the WonderBorg is powered by three AAA batteries and reacts to its environment using seven sensors:\n\nBULLET::::- infrared receiver\n\nBULLET::::- antennae: independent left and right tactile sensors\n\nBULLET::::- eyes: independent left and right infrared LEDs\n\nBULLET::::- light sensor\n\nBULLET::::- floor sensor: detects the presence or absence of ground ahead\n\nBULLET::::- internal clock sensor\n\nBULLET::::- steps sensor\n",
"An \"Action Set\" was available with a Missile Defense Pad, launching station, and included a \"Solar Cycle\" hollow wheel, into which the zeroid robot could be set inside.\n\nThe Zeroid alien robot was a box shaped robot with a clear dome head and powered drive. It had changeable internal nylon gears that would alter the robots movements with pre-programmed patterns. One pattern ended with the robots \"exploding\" with the arms and parts springing off.\n",
"The robots were sold commercially starting in early 1983, and were intended to be inexpensive, lacking a complicated manipulating device. Units are beige molded plastic with two drive wheels as feet and stand about 36 inches tall. Arms on Topo 1 and 2 fold out, but Topo 3 lacks arms altogether. Operation is based on one of two programming languages, either Apple BASIC, a modified version of the Logo language, or a version of Forth.\n",
"Physically, the robot was particularly tall, and had an antenna for a radio link, sonar range finders, a television camera, on-board processors, and collision detection sensors (\"bump detectors\"). The robot's tall stature and tendency to shake resulted in its name:\n\nSection::::Research results.\n",
"The LAGR vehicle, which was about the size of a supermarket shopping cart, was designed to be simple to control. (A companion DARPA program, Learning Locomotion, addressed complex motor control.) It was battery powered and had two independently driven wheelchair motors in the front, and two caster wheels in the rear. When the front wheels were rotated in the same direction the robot was driven either forward or reverse. When these wheels were driven in opposite directions, the robot turned.\n",
"As with other Robosapien models, the RS Media was designed with the possibility for modifications. In reference to the philosophy behind the 'hackability' of the robots, Tilden once said, \"Years ago some bright AI lads asked if I could build a competent humanoid cradle into which they could put their smart programs. Took me a while, but here it is lads, inexpensive and ready to go right out of the box. Make it think, and let me know how it goes.\"\n",
"In January 1987, Access Software announced The Robotic Workshop, a kit designed for home computers that used a range of Capsela parts. The kit includes more than 50 Capsela parts, including two motors, gears, wheels, and sensors. The kit also includes an electronic control unit that plugs into the user port of a Commodore 64, an instruction manual with 50 tutorial projects, and special programming software on a floppy disk. It was later released for Apple, Atari, and IBM computers.\n\nSection::::Use in schools.\n",
"In 1984, WABOT-2 was revealed, and made a number of improvements. It was capable of playing the organ. Wabot-2 had 10 fingers and two feet, and was able to read a score of music. It was also able to accompany a person. In 1986, Honda began its humanoid research and development program, to create humanoid robots capable of interacting successfully with humans.\n",
"RoboSapien\n\nRoboSapien is a toy-like biomorphic robot designed by Mark Tilden and produced by WowWee toys. The Robosapien X was made to entertain and will react to sounds and touch. The Robosapien is preprogrammed with moves, and also can be controlled by an infrared remote control included with the toy, or by either a personal computer equipped with an infrared PDA.\n\nThe toy's remote control unit has a total of 21 different buttons. With the help of two shift buttons, a total of 67 different robot-executable commands are accessible.\n\nSection::::Overview.\n",
"The Neato robot is able to return to its home base and charge itself when running out of energy, and has sensors that prevent it from falling off stairs. In case the robot is used in a floorplan larger than it can cover with one battery charge, the robot is able to continue cleaning from where it left off the previous session, after recharging its batteries.\n\nSection::::Models.\n\nThe XV-11 was the first Neato product, released to market in February 2010. This product pioneered the core design and functionality found in all subsequent Neato models.\n",
"The robot needs to be recharged at the end of every work day. Until recently, this required a Waterloo co-worker to plug it in before leaving for the night. In May 2008, Ian constructed a charging bay out of lumber that Ivan \"drives\" into - copper bars attached to the robot connect to copper springs on the charging unit which are directly connected to the battery charger and enabled through a relay circuit so that the charging bars are not live unless the robot is in the bay. The charging bay permits Ivan to recharge the robot at his convenience, and without assistance.\n",
"Section::::Robotic process control.\n\nBy 2006, Fischertechnik sets were available for robotic process control using “Robo-pro” software (the successor to Lucky-logic), on-board process controllers with flash memory, infrared and radio-frequency remote control, and pneumatic-activation. Robotic models could follow preprogrammed routes or lines on the floor, sense obstructions and change course, detect and move objects, and simulate everyday devices such as vending machines, passenger elevator systems, and traffic-control lights. In early 2010, Fischertechnik introduced the ROBO TX Explorer kit, which includes a color sensor.\n\nSection::::Sets.\n",
"With enthusiasm for K*bots gathering pace and returning students becoming ever more inventive, the 2005 World Championships saw the emergence of some very advanced models which started, for the first time, to push the limits of what the K'Nex motors were capable of. The Division 1 World Championship Final, widely regarded as the best K*bot match of all time, involved so much pressure on the materials that damage was caused to both the K'Nex pieces and the motors powering the K*bots.\n",
"There have been many variations of the toy, such as a \"Transformers\" version, in which the two robots are Optimus Prime and Megatron.\n",
"Section::::Modern history.:2001-present.\n\nIn April 2001, the Canadarm2 was launched into orbit and attached to the International Space Station. The Canadarm2 is a larger, more capable version of the arm used by the Space Shuttle, and is hailed as \"smarter\". Also in April, the Unmanned Aerial Vehicle Global Hawk made the first autonomous non-stop flight over the Pacific Ocean from Edwards Air Force Base in California to RAAF Base Edinburgh in Southern Australia. The flight was made in 22 hours.\n\nThe popular Roomba, a robotic vacuum cleaner, was first released in 2002 by the company iRobot.\n",
"Unimation produced PUMAs for years until being purchased by Westinghouse (ca. 1980), and later by Swiss company Stäubli (1988). Nokia Robotics manufactured about 1500 PUMA robots during the 1980s, the Puma-650 being their most popular model with customers. Some own Nokia Robotics products were also designed, like Nokia NS-16 Industrial Robot or NRS-15 \n\n. Nokia sold their Robotics division in 1990.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
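As a rough illustration of the branching mechanism described in the record above, here is a minimal sketch in Python. It only models the idea of a four-track tape in which all tracks advance together and a button press selects which track is audible; the class and method names are invented for illustration and do not describe the toy's actual electronics or firmware.

```python
# Minimal sketch of four parallel tape tracks sharing one tape position.
# A button press selects which track's audio is routed to the speaker;
# the tape itself advances the same way regardless of which button is pressed.

class FourTrackTape:
    def __init__(self, tracks):
        # tracks: list of 4 lists, one audio segment per tape position per track
        assert len(tracks) == 4, "a compact cassette carries four parallel tracks"
        self.tracks = tracks
        self.position = 0  # all four tracks share a single tape position

    def play_segment(self, button):
        """Play whatever is recorded on the chosen track at the current position."""
        segment = self.tracks[button - 1][self.position]
        self.position += 1  # advancing the tape advances every track at once
        return segment


# Hypothetical content: track 3 happens to carry the "correct" response here.
tracks = [
    ["Question: what kind of pet doesn't have a tail?", "Nope, a dog has a tail!"],
    ["Question: what kind of pet doesn't have a tail?", "Sorry, cats have tails."],
    ["Question: what kind of pet doesn't have a tail?", "You're right, a rock it is!"],
    ["Question: what kind of pet doesn't have a tail?", "Fish? Look again..."],
]

tape = FourTrackTape(tracks)
print(tape.play_segment(1))  # the question plays whichever track is selected
print(tape.play_segment(3))  # pressing button 3 plays track 3's response
```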
2018-05584 | Why don’t people always get sick when exposed to sick people? | Because we've got immune systems, and each person's immune system has varying levels of resistance and performance. The pathogen that overwhelmed the sick person's body and made them sick isn't necessarily going to get the upper hand on my body. | [
"Schaller and Park (2011) used the term \"the Behavioral Immune System\" to account for observable activities that humans utilize in the face of pathogen threat. Whereas non-human social animals appear to largely rely upon distinctly organized social structures to combat the threat of diseases, it is evident such systems would be strained to apply in most modern human societies. Schaller and Park (2011) describe \"perceptual cues\" that humans use that will trigger aversive behavior toward other individuals. For example, people who appear to be ill may stimulate avoidance behavior in those around them, particularly if the others around them have temporarily suppressed immune systems, Wilson et al., 2003, speculated that gregarious species may invest less overall in their immune functions because so many of the body's resources must go to support somatic growth and competition among mates. One potential explanation, therefore, for human hypersensitivity to the perception of disease threat is that we are left relatively vulnerable by our under-provisioned immune systems. Schaller and Park (2011) also make a connection between the experience of disgust and things that do pose actual threat of pathogen risk, however this \"disgust\" experience has a tendency to be over-applied rather than under-applied in the favor of the individual's health and is therefore triggered by things that resemble disgust-inducing objects or actions. Such research has enormous implications for the explanation of aspects ranging from cultural diet variance to conformity and xenophobia. Schaller and Murray (2008) ran a comparison against research by Grunier et al. (2004) that displayed the variability of the prevalence of pathogens on a geographic scale. Schaller and Murray (2008) found overlap in cultural differences that included food preparation, mate selection and family structure, and sociosexual practices. In regions where infectious disease threat was lower, people trended toward more liberal sexual practices, more extroverted personalities, and less \"self-conscious\" behavior. In the 2011 article \"the Behavioral Immune System\", the authors discuss how avoidance behaviors can also be triggered when an individual witnesses another individual violating social norms, hence a general trend toward ethnocentrism and wariness of foreigners. Especially during times of known pathogenic outbreak, this can be so extreme as to manifest itself in the form of xenophobia. Wu and Chang's 2011 study elucidated trends toward conformity that they believe may have evolved as a protective dynamic against the introduction of contagions.\n",
"Evolutionary medicine has found that under horizontal transmission, the host population might never develop tolerance to the pathogen.\n\nSection::::Transmission.\n",
"During most of human prehistory groups of hunter-gatherers were probably very small. Such groups probably made contact with other such bands only rarely. Such isolation would have caused epidemic diseases to be restricted to any given local population, because propagation and expansion of epidemics depend on frequent contact with other individuals who have not yet developed an adequate immune response. To persist in such a population, a pathogen either had to be a chronic infection, staying present and potentially infectious in the infected host for long periods, or it had to have other additional species as reservoir where it can maintain itself until further susceptible hosts are contacted and infected. In fact, for many 'human' diseases, the human is actually better viewed as an accidental or incidental victim and a dead-end host. Examples include rabies, anthrax, tularemia and West Nile virus. Thus, much of human exposure to infectious disease has been zoonotic.\n",
"One of the ways to prevent or slow down the transmission of infectious diseases is to recognize the different characteristics of various diseases. Some critical disease characteristics that should be evaluated include virulence, distance traveled by victims, and level of contagiousness. The human strains of Ebola virus, for example, incapacitate their victims extremely quickly and kill them soon after. As a result, the victims of this disease do not have the opportunity to travel very far from the initial infection zone. Also, this virus must spread through skin lesions or permeable membranes such as the eye. Thus, the initial stage of Ebola is not very contagious since its victims experience only internal hemorrhaging. As a result of the above features, the spread of Ebola is very rapid and usually stays within a relatively confined geographical area. In contrast, the Human Immunodeficiency Virus (HIV) kills its victims very slowly by attacking their immune system. As a result, many of its victims transmit the virus to other individuals before even realizing that they are carrying the disease. Also, the relatively low virulence allows its victims to travel long distances, increasing the likelihood of an epidemic.\n",
"Section::::Human social groups and disease implications.\n\nInvasion by a pathogen in any community, human or otherwise, requires a two-step process. First, there must be importation of the pathogen by means of migration. This may occur through a traveling node, or a vector, or may occur when an entire community relocates. Second, the number of infections must rise due to the social contacts within that population. For humans, this process can appear extremely chaotic. \"Local public sites with extremely high population density such as train stations, or large social, political, or religious mass gatherings are regarded as high-risk…\".\n",
"Short-sighted evolution suggests that the traits that increase reproduction rate and transmission to a new host will rise to high frequency within the pathogen population. These traits include the ability to reproduce sooner, reproduce faster, reproduce in higher numbers, live longer, survive against antibodies, or survive in parts of the body the pathogen does not normally infiltrate. These traits typically arise due to mutations, which occur more frequently in pathogen populations than in host populations, due to the pathogens' rapid generation time and immense numbers. After only a few generations, the mutations that enhance rapid reproduction or dispersal will increase in frequency. The same mutations that enhance the reproduction and dispersal of the pathogen also enhance its virulence in the host, causing much harm (disease and death). If the pathogen's virulence kills the host and interferes with its own transmission to a new host, virulence will be selected against. But as long as transmission continues despite the virulence, virulent pathogens will have the advantage. So, for example, virulence often increases within families, where transmission from one host to the next is likely, no matter how sick the host. Similarly, in crowded conditions such as refugee camps, virulence tends to increase over time since new hosts cannot escape the likelihood of infection.\n",
"Humans are surprisingly predictable when it comes to the range of possible social group structures. We occupy less than half of the social network structures that characterize non-human primates, however among primates we can be found to comfortably reside in the largest gamut for any one species. A few other notable characteristics make humans unique when considering community structure. We are the only primates that keep consanguineal relationships, even after departure from a natal group, and retain ties with kin living in different social groups. Although some non-human primates form temporary groups of \"roaming bachelors\", these social groups do not come anywhere near the fairly common social structures of permanent religious celibates. Some other examples of aberrant community and social structures in humans include men or women absenting for long periods in war or trade, \"raiding\" for wives that results in total displacement, eunuchs, and isolated stigmatized groups (criminals, lepers, etc.). A further oddity to consider regarding human social behavior is the \"creation\" of kin out of unrelated individuals, including the nomination of \"godparents\", step-families, and exceptionally close friends that assume a role of a relative. All of these examples have complicated effects on the measurement and understanding of pathogenic propagation within the human realm.\n",
"As with all parasites, natural selection favors the development of low-virulence virus strains. When a pathogen first invades a new host species, the hosts have little or no immunity and often suffer high mortality. Those that survive do so because they have different genetics that offer them some protection from the new pathogen. These survivors then reproduce and pass on those genes, resulting in lower mortality rates in future generations. There is no advantage to a pathogen to kill the host before dispersal to new hosts, thus a decrease in virulence over time is usually observed.\n\nSection::::See also.\n\nBULLET::::- pathogen\n",
"The hygiene hypothesis has difficulty explaining why allergic diseases also occur in less affluent regions. Additionally, exposure to some microbial species actually increases future susceptibility to disease instead, as in the case of infection with rhinovirus (the main source of the common cold) which increases the risk of asthma.\n\nSection::::Treatment.\n",
"Susceptibles have been exposed to neither the wild strain of the disease nor a vaccination against it, and thus have not developed immunity. Those individuals who have antibodies against an antigen associated with a particular infectious disease will not be susceptible, even if they did not produce the antibody themselves (for example, infants younger than six months who still have maternal antibodies passed through the placenta and from the colostrum, and adults who have had a recent injection of antibodies). However, these individuals soon return to the susceptible state as the antibodies are broken down.\n",
"The \"old friends hypothesis\" proposed in 2003 may offer a better explanation for the link between microbial exposure and inflammatory diseases. This hypothesis argues that the vital exposures are not common childhood and other recently evolved infections, which are no older than 10,000 years, but rather microbes already present in hunter-gatherer times when the human immune system was evolving. Conventional childhood infections are mostly \"crowd infections\" that kill or immunise and thus cannot persist in isolated hunter-gatherer groups. Crowd infections started to appear after the neolithic agricultural revolution, when human populations increased in size and proximity. The microbes that co-evolved with mammalian immune systems are much more ancient. According to this hypothesis, humans became so dependent on them that their immune systems can neither develop nor function properly without them.\n",
"The main sources of infection in the home are people (who are carriers or are infected), foods (particularly raw foods) and water, and domestic animals (in the U.S. more than 50% of homes have one or more pets). Sites that accumulate stagnant water—such as sinks, toilets, waste pipes, cleaning tools, face cloths, etc. readily support microbial growth and can become secondary reservoirs of infection, though species are mostly those that threaten \"at risk\" groups. Pathogens (potentially infectious bacteria, viruses etc.—colloquially called \"germs\") are constantly shed from these sources via mucous membranes, feces, vomit, skin scales, etc. Thus, when circumstances combine, people are exposed, either directly or via food or water, and can develop an infection.\n",
"Spillover is a common event; in fact, more than two thirds of human viruses are zoonotic . Most spillover events result in self-limited cases with no further human to human transmission, as occurs, for example, with rabies, antrax, histoplasmosis or hidatidosis. Other zoonotic pathogens are able to be transmitted by humans to produce secondary cases and even to establish limited chains of transmission. Some examples are the Ebola and Marburg filoviruses, the MERS and SARS coronaviruses or some avian flu viruses. Finally some few spillover events can result in the final adaptation of the microbe to the humans, who became a new stable reservoir, as occurred with the HIV virus resulting in the AIDS pandemic. In fact, most of the pathogens which are presently exclusive of humans were probably transmitted by animals sometime in the past . If the history of mutual adaptation is long enough, permanent host-microbe associations can be established resulting in co-evolution, and even on permanent integration of the microbe genome in the human genome, as it is the case of endogenous viruses. The closer the two species are in phylogenetic terms, the easier it is for microbes to overcome the biological barrier to produce successful spillovers. For this reason, other mammals are the main source of zoonotic agents for humans.\n",
"Sociality, although a very successful way of life, is thought to increase the per-individual risk of acquiring disease, simply because close contact with conspecifics is a key transmission route for infectious diseases. As social organisms are often densely aggregated and exhibit high levels of interaction, pathogens can more easily spread from infectious to susceptible individuals. The intimate interactions often found in social insects, such as the sharing of food through regurgitation, are further possible routes of pathogen transmission. As the members of social groups are typically closely related, they are more likely to be susceptible to the same pathogens. This effect is compounded when overlapping generations are present (such as in social insect colonies and primate groups), which facilitates the horizontal transmission of pathogens from the older generation to the next. In the case of species that live in nests/burrows, stable, homeostatic temperatures and humidity may create ideal conditions for pathogen growth.\n",
"Other epidemiologists have expanded on the idea of a tradeoff between costs and benefits of virulence. One factor is the time or distance between potential hosts. Airplane travel, crowded factory farms and urbanization have all been suggested as possible sources of virulence. Another factor is the presence of multiple infections in a single host leading to increased competition among pathogens. In this scenario, the host can survive only as long as it resists the most virulent strains. The advantage of a low virulence strategy becomes moot. Multiple infections can also result in gene swapping among pathogens, increasing the likelihood of lethal combinations.\n",
"Infectious diseases risks from contaminated clothing etc. can increase significantly under certain conditions, e.g., in healthcare situations in hospitals, care homes and the domestic setting where someone has diarrhoea, vomiting, or a skin or wound infection. It increases in circumstances where someone has reduced immunity to infection.\n",
"Section::::Behavioral immune system.\n",
"Section::::Community structures of social animals and implications for contagious infections.:Challenges.\n",
"Aversion to consuming or coming into contact with contaminated material also exists in presocial species, e.g., the gregarious-phase migratory grasshoppers \"Melanoplus sanguinipes\" avoids consuming conspecific corpses infected by entomoparasitic fungi. Female burying beetles (\"Nicrophorus vespilloides\") choose fresh carcasses over microbe-covered degraded ones to breed on - though this may have also evolved to allow a reduction in post-hatching competition between juveniles and microbes over the carcass.\n",
"Some individuals may have a natural resistance to a particular infectious disease. However, except in some special cases such as malaria, these individuals make up such a small proportion of the total population that they can be ignored for the purposes of modelling an epidemic.\n\nSection::::Mathematical model of susceptibility.\n",
"These mechanisms include sensory processes through which cues connoting the presence of parasitic infections are perceived (e.g., the smell of a foul odor, the sight of pox or pustules), as well as stimulus–response systems through which these sensory cues trigger a cascade of aversive affective, cognitive, and behavioral reactions (e.g., arousal of disgust, automatic activation of cognitions that connote the threat of disease, behavioral avoidance).\n",
"In 2003 Graham Rook proposed the \"old friends hypothesis\" which has been described as a more rational explanation for the link between microbial exposure and inflammatory disorders. The hypothesis states that the vital microbial exposures are not colds, influenza, measles and other common childhood infections which have evolved relatively recently over the last 10,000 years, but rather the microbes already present during mammalian and human evolution, that could persist in small hunter-gatherer groups as microbiota, tolerated latent infections, or carrier states. He proposed that coevolution with these species has resulted in their gaining a role in immune system development.\n",
"Transmission may occur from drinking contaminated water or when people share personal objects. Water quality typically worsens during the rainy season and outbreaks are more common at this time. In areas with four seasons, infections are more common in the winter. Worldwide, bottle-feeding of babies with improperly sanitized bottles is a significant cause. Transmission rates are also related to poor hygiene, (especially among children), in crowded households, and in those with poor nutritional status. Adults who have developed immunities may still carry certain organisms without exhibiting symptoms. Thus, adults can become natural reservoirs of certain diseases. While some agents (such as \"Shigella\") only occur in primates, others (such as \"Giardia\") may occur in a wide variety of animals.\n",
"BULLET::::- Modern transport. Ships and other cargo carriers often harbor unintended \"passengers\", that can spread diseases to faraway destinations. While with international jet-airplane travel, people infected with a disease can carry it to distant lands, or home to their families, before their first symptoms appear.\n\nSection::::History.\n",
"Secondly, collectivist cultures are untrusting of those outside of their in-group, which may serve as a protective behaviour against interactions with those in groups that may harbour novel diseases. In similar vein to the explanation presented with one's protective nature of their in-group members, one's immune system is well adapted to local parasites and will be unable to effectively protect against unfamiliar pathogens. Therefore, avoidance of those outside of one's inner circle will aid in the prevention of being exposed to novel and dangerous pathogens that the immune system is unable to defend against.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-01467 | the difference between DDR3, DDR4 and GDDR5? | The difference between DDR3 and DDR4 is mostly due to specifications of speed and voltage. DDR4, per the spec, uses lower voltages (saving power) and can run at higher speeds (more MHz). Some manufacturers released overclockable DDR3 that could run as fast as the low-end DDR4, but high-end DDR4 is faster than DDR3. GDDR in general has a slower clock speed than DDR but moves more data per cycle. GDDR is used in graphics processors where large pieces of image data are moved in one slower step*, vs DDR, which would need several faster steps to move the same amount of data. *Oversimplification. Moving data consists of several steps. But when a piece of data is too big to be moved at once, DDR has to move the first piece, and then move the second, and the added complexity of taking pieces apart, putting them back together, and managing all the work results in overall slower performance. | [
"BULLET::::- 2005: standards body JEDEC began working on a successor to DDR3 around 2005, about 2 years before the launch of DDR3 in 2007. The high-level architecture of DDR4 was planned for completion in 2008.\n",
"The first \"Dance Dance Revolution\" as well as its followup \"DDR 2ndMix\" uses Bemani System 573 Analog as its hardware. DDR 3rdMix replaces this with a Bemani System 573 Digital board, which would be used up to \"DDR Extreme\". Both of these are based on the PlayStation.\n\nBeginning with \"Dancing Stage Fusion\" in 2005, the hardware is replaced by Bemani Python, a PlayStation 2-based hardware. \"DDR SuperNova\", released in 2006, utilised a Bemani Python 2 board, originally found in \"GuitarFreaks V\" and Drummania V\". Bemani Python 2 would also be used in the followup \"DDR SuperNova 2\".\n",
"Section::::Modules.\n\nSection::::Modules.:JEDEC standard DDR4 module.\n\nBULLET::::- CAS latency (CL): Clock cycles between sending a column address to the memory and the beginning of the data in response\n\nBULLET::::- tRCD: Clock cycles between row activate and reads/writes\n\nBULLET::::- tRP: Clock cycles between row precharge and activate\n",
"Section::::Sequels.:\"Dance Dance Revolution 5thMix\".\n\nDance Dance Revolution 5thMIX, or DDR 5th Mix, is the 5th game in the Dance Dance Revolution series of music video games. It was released to the arcades by Konami on March 27, 2001. Although only officially released in Japan, units exist worldwide. DDR 5th Mix contains a total of 122 songs, nine of which are hidden and unlockable. Of those songs, 40 of them (including all nine unlockable songs) are brand new to Dance Dance Revolution.\n\nSection::::Sequels.:Current \"Dance Dance Revolution\" releases.\n",
"BULLET::::- A Bus Stop cover of \"Long Train Runnin' by The Doobie Brothers premiered in \"DDRMAX2\" and returned in \"Extreme\", while the \"SuperNova\" series and \"DDR X\" replaced it with a new cover by the artist X-Treme with different lyrics. On May 30, 2019, \"DDR A20\" introduced a remix of \"Long Train Runnin'\" by Haruki Yamada (ATTIC INC.) with Bodhi Kenyon, which incorporates lyrics from both the Bus Stop and X-Treme covers.\n",
"Along with the cabinet change, \"DDR X\" also changes its hardware to the PC-based Bemani PC Type 4. This more powerful hardware allows for high definition graphics and enhanced features. With \"DDR A\", Bemani PC Type 4 is replaced by Type 5, that is still used to this day.\n\nSection::::Releases.\n",
"BULLET::::- \"DDR (2013)\": 6 licenses\n\nBULLET::::- \"DDR X3\": all 10 licenses\n\nBULLET::::- \"DDR 2ndMix\": all 3 Dancemania licenses:\n\nBULLET::::- \"Bad Girls\" by Juliet Roberts\n\nBULLET::::- \"Boom Boom Dollar (Red Monster Mix)\" by King Kong & D. Jungle Girls\n\nBULLET::::- \"Stomp to My Beat\" by JS16\n\nBULLET::::- \"DDR (1998)\": \"Kung Fu Fighting\" by Bus Stop featuring Carl Douglas\n\nSection::::Unofficial releases.\n\n\"Dance Dance Revolution Megamix\" and \"Dance Dance Revolution Extreme Plus\" are commercial bootlegs of \"Dance Dance Revolution Extreme\".\n",
"BULLET::::- \"Dance Dance Revolution 2ndMix\" was updated after its initial release with a few new songs and the ability to connect to and play alongside Konami's DJ simulator games, Beatmania IIDX. While the official name of that version of DDR when alone was \"Dance Dance Revolution 2ndMix Link Version\", when connected to the two Beatmania IIDX cabinets it was compatible with it was referred to by two other unique names.\n",
"BULLET::::- A Barbie Young cover of \"Cartoon Heroes\" by Aqua was exclusively featured in \"DDR Extreme\". On April 25, 2019, \"DDR A20\" introduced a remix of \"Cartoon Heroes\" by nc featuring Jasmine And Dario Toda.\n",
"Dance Dance Revolution Universe 3\n\nDance Dance Revolution Universe 3, sometimes abbreviated as DDR Universe 3, is a video game for Xbox 360. It was announced by Konami on May 15, 2008, and released on October 21, 2008. The game has new songs, a story mode, the ability to create custom songs and custom character creation.\n",
"Not just as with standard SDRAM (non LP-DDR4 uses a prefetch of 8, not of 16), each generation of LPDDR has doubled the internal fetch size and external transfer speed.\n\nSection::::Generations.\n\nSection::::Generations.:LP-DDR(1).\n\nThe original low-power DDR (sometimes retroactively called LPDDR1) is a slightly modified form of DDR SDRAM, with several changes to reduce overall power consumption.\n",
"Section::::Events.:DDR SELECTION.\n\nOn September 26th, 2018, the DDR SELECTION category was added to DanceDanceRevolution A, in order to commemorate the 20th anniversary of the DanceDanceRevolution series, having 5 different interfaces.\n",
"Dance Dance Revolution X2 is a music video game released by Konami for the North American PlayStation 2. It is the direct sequel to the North American PlayStation 2 release of \"Dance Dance Revolution X\". Released on October 27, 2009 alongside \"Dance Dance Revolution Hottest Party 3\", DDR X2 was one of the first \"Dance Dance Revolution\" games released to use songs from the 2009-10 soundtrack. It contains a unique soundtrack, a new master mode, additional modes of play and minor changes and refinements but is otherwise unchanged from its global predecessor \"Dance Dance Revolution X\". It was the final DDR game released for the PlayStation 2.\n",
"On January 14, 2019, Konami revealed a new \"20th Anniversary Model\" cabinet redesign, featuring gold-colored plating, a larger screen, and updated dance pad LED lighting.\n\nBULLET::::- On legacy cabinets, card readers are optional. PlayStation memory cards are supported in Asia from \"2ndMix Link Edition\" to \"Extreme\". PlayStation 2 card support for \"SuperNova\" worldwide was announced, but cancelled. \"SuperNova\" and newer support e-Amusement instead. \"DDR X\" and \"its sequel\" also support USB drives.\n\nBULLET::::- Unofficially, this cabinet can be upgraded to support newer mixes, such as \"DDR Extreme\" and \"SuperNova 2\".\n",
"DDR (DDR1) was superseded by DDR2 SDRAM, which had modifications for higher clock frequency and again doubled throughput, but operates on the same principle as DDR. Competing with DDR2 was Rambus XDR DRAM. DDR2 dominated due to cost and support factors. DDR2 was in turn superseded by DDR3 SDRAM, which offered higher performance for increased bus speeds and new features. DDR3 has been superseded by DDR4 SDRAM, which was first produced in 2011 and whose standards were still in flux (2012) with significant architectural changes.\n",
"BULLET::::- DDR4 memory support updated for 2666 MHz (for i5, i7 and i9 parts) and 2400 MHz (for i3 parts); DDR3 memory is no longer supported on LGA1151 parts, unless using with H310C chipset\n\nBULLET::::- 300 series chipset on the second revision of socket LGA 1151\n\nSection::::Kaby Lake Refresh vs. Coffee Lake.\n",
"Home versions are commonly bundled with soft plastic dance pads that are similar in appearance and function to the Nintendo Power Pad. Some third-party manufacturers produce hard metal pads at a higher price.\n\nA version of DDR was also produced for the PC in North America. It uses the interface of \"Dance Dance Revolution 4thMix\", and contains around 40 songs from the first six mainstream arcade releases. It has not been as well received as the console versions.\n",
"DDR games have been released on various video game consoles, including the PlayStation, Dreamcast, Nintendo 64, PlayStation 2, PlayStation 3, GameCube, Wii, Xbox and Xbox 360, and even PCs. Home versions often contain new songs, songs from the arcade version, and additional features that take advantage of the capabilities of the console (e.g.; Xbox 360 versions such as the Dance Dance Revolution Universe series include support for online multiplayer and downloadable songs over Xbox Live, and high definition graphics). DDR has even reached Nintendo's Game Boy Color, with five versions of \"Dance Dance Revolution GB\" released in Japan; these included a series of three mainstream DDR games, a Disney Mix, and an Oha Star. The games come with a small thumb pad that fits over the Game Boy Color's controls to simulate the dance pad.\n",
"Section::::Gameplay.\n\nThe gameplay of \"Dance Dance Revolution X\" continues the gameplay introduced at the beginning of the series. DDR X contains returning Konami Originals and classic licensed tracks as well as new songs by Konami and label artists. Despite the new overhauled cabinet design available, the dance stage layout remains almost completely unaltered to the original, except in North America, where aesthetic changes were made to cut costs in production.\n",
"Section::::Background.:DDR2.\n",
"DDR4 SDRAM is the successor to DDR3 SDRAM. It was revealed at the Intel Developer Forum in San Francisco in 2008, and was due to be released to market during 2011. The timing varied considerably during its development - it was originally expected to be released in 2012, and later (during 2010) expected to be released in 2015, before samples were announced in early 2011 and manufacturers began to announce that commercial production and release to market was anticipated in 2012. DDR4 is expected to reach mass market adoption around 2015, which is comparable with the approximately five years taken for DDR3 to achieve mass market transition over DDR2.\n",
"Konami announced the development of \"Dance Dance Revolution X\" on May 15, 2008 alongside \"Dance Dance Revolution Universe 3\" and \"Dance Dance Revolution Hottest Party 2\". DDR X is intended to be released as part of the 10th anniversary of \"Dance Dance Revolution\". Konami promised that at least 70 songs would be featured in this release and that DDR X would bring with it enhanced graphics and new modes of play. Also promised was LAN multi-player support for up to 8 players, an upgraded Workout Mode that will allow players to build their own regimen, new dancing characters and the return of existing features such as EyeToy support.\n",
"BULLET::::- A redesigned Cover Flow-like song list. This design is based on DDR Universe 3.\n\nBULLET::::- New \"Happy\" and \"Pro\" play modes. Happy mode is designed for beginners, with a simplified interface and reduced song list. Pro is basically the \"normal\" mode.\n",
"The music of \"Dance Dance Revolution X2\" consists largely of the licensed songs released across multiple DDR games in North America. In addition a number of Konami Originals that were released in \"Dance Dance Revolution\" games globally and a number of returning songs from previous DDR games make up the eclectic collection of music in DDR X2. There seem to be strong nationalism themes in a lot of the track selection.\n",
"Internal banks are increased to 16 (4 bank select bits), with up to 8 ranks per DIMM.\n\nProtocol changes include:\n\nBULLET::::- Parity on the command/address bus\n\nBULLET::::- Data bus inversion (like GDDR4)\n\nBULLET::::- CRC on the data bus\n\nBULLET::::- Independent programming of individual DRAMs on a DIMM, to allow better control of on-die termination.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-02717 | Why when a country has a king, his wife becomes the queen, but when you have a queen, her husband doesn’t become king but remains a prince? | King, by rank, is higher than Queen. If a Queen marries, her husband doesn't become a King because he doesn't hold power; she does. Awarding the title of King means he now outranks the Queen. This rank is the reason why there's no Queen Jadwiga of Poland; it's *King* Jadwiga of Poland. She took the title King because it's the highest rank in the realm and it cements her position; everyone else agreed. | [
"In monarchies where polygamy has been practiced in the past (such as Morocco and Thailand), or is practiced today (such as the Zulu nation and the various Yoruba polities), the number of wives of the king varies. In Morocco, King Mohammed VI has broken with tradition and given his wife, Lalla Salma, the title of princess. Prior to the reign of King Mohammed VI, the Moroccan monarchy had no such title. In Thailand, the king and queen must both be of royal descent. The king's other consorts are accorded royal titles that confer status.\n",
"In Greek mythology, while the royal function was a male privilege, power devolution often came through women, and the future king inherited power through marrying the queen heiress. This is illustrated in the Homeric myths where all the noblest men in Greece vie for the hand of Helen (and the throne of Sparta), as well as the Oedipian cycle where Oedipus weds the recently widowed queen at the same time he assumes the Theban kingship.\n",
"In Ancient Africa, Ancient Persia, Asian and Pacific cultures, and in some European countries, female monarchs have been given the title \"king\" or its equivalent, such as \"pharaoh\", when gender is irrelevant to the office, or else have used the masculine form of the word in languages that have grammatical gender as a way to classify nouns. The Byzantine Empress Irene sometimes called herself \"basileus\" (βασιλεύς), 'emperor', rather than \"basilissa\" (βασίλισσα), 'empress' and Jadwiga of Poland was crowned as \"Rex Poloniae\", \"King of Poland\".\n",
"Under \"uterine primogeniture\", succession to the throne or other property is passed to the male most closely related to the previous titleholder through female kinship. A male may also inherit a right of succession through a female ancestor or spouse, to the exclusion of any female relative who might be older or of nearer proximity of blood (see above for Spain's mid-twentieth century dynastic succession law). In such cases, inheritance depends on uterine kinship, so a king would typically be succeeded by his sister's son. This particular system of inheritance applied to the thrones of the Picts of Northern Britain and the Etruscans of Italy. Some kingdoms and ethnic groups in Africa follow the same practice. This usage may stem in part from the certainty of the relationship to the previous king and kings: sons and daughters of a sister are his relations (mater semper certa est), even if they do not have the same father.\n",
"Similarly, inheritance patterns for men in matrilineal societies often reflect the importance of the mother's brother. For example, in the Ashanti Kingdom of Central Ghana, a king traditionally passes his title and status on to his sister's son. A king's own biological son does not inherit the kingship because he is not a member of the ruling matrilineal family group. Women usually inherit status and property directly from their mothers in matrilineal societies.\n",
"BULLET::::- Swaziland/Eswatini has a form of quasi-elective monarchy. In the country, no king can appoint his successor. Instead, the royal family decides which of his wives shall be \" Great Wife\" and \"Indlovukazi\" (She-Elephant / Queen Mother). The son of this \"Great Wife\" will automatically become the next king. The eldest son is never appointed successor as he has other ceremonial roles.\n",
"Xenophon, on the other hand, made exactly the same distinction between types of rulers in the beginning of his \"Education of Cyrus\" where he says that, concerning the knowledge of how to rule human beings, Cyrus the Great, his exemplary prince, was very different \"from all other kings, both those who have inherited their thrones from their fathers and those who have gained their crowns by their own efforts\".\n\nMachiavelli divides the subject of new states into two types, \"mixed\" cases and purely new states.\n\nSection::::Summary.:\"Mixed\" princedoms (Chapters 3–5).\n",
"A royal fiancée is called \"liphovela\", or \"bride\". They graduate from being fiancées to full wives as soon as they fall pregnant, when the king customarily marries them. But the traditional marriage, known as “Ludvendve” (marriage to the king) only follows later.\n\nIn traditional Swazi culture, the king is expected to marry a woman from every clan in order to cement relationships with each part of Eswatini. This means that the king must have many wives.\n\nBULLET::::- \"Inkhosikati\" (Queen) LaMatsebula—Ritual wife. Has a degree in Psychology.\n\nBULLET::::- Son: \"HRH Prince Sicalo\"\n\nBULLET::::- Son: \"Prince Maveletiveni\"\n",
"In a number of African monarchies, the title of the principal non-spousal female titleholder in the kingdom is often translated as \"Princess Royal\". This usually happens in kingdoms that don't make use of the higher title of queen mother. Princess Elizabeth, Batebe of Toro in Uganda, for example, often has her title translated in this manner. This happens even though it has historically meant something closer to \"queen sister\".\n",
"Before primogeniture was enshrined in European law and tradition, kings would often secure the succession by having their successor (usually their eldest son) crowned during their own lifetime, so for a time there would be two kings in coregency – a senior king and a junior king. Examples were Henry the Young King of England and the early Direct Capetians in France. Sometimes, however, primogeniture can operate through the female line.\n",
"Sometimes a specific title is commonly used by various dynasties in a region, e.g. Mian in various of the Punjabi princely Hill States (lower Himalayan region in British India).\n",
"Section::::Prince of the blood.\n\nThe husband of a queen regnant is usually titled \"prince consort\" or simply \"prince\", whereas the wives of male monarchs take the female equivalent (e.g., empress, queen) of their husband's title. In Brazil, Portugal and Spain, however, the husband of a female monarch was accorded the masculine equivalent of her title (e.g., emperor, king), at least after he fathered her heir. In previous epochs, husbands of queens regnant were often deemed entitled to the crown matrimonial, sharing their consorts' regnal title and rank \"jure uxoris\",\n",
"\"that one has to have been born to a mother who married to a king, that even if one is the biological son of the king one is disqualified from taking over the throne if ones mother was married to a prince before he became king. In this instance, all the king’s sons who were born outside the principle of \"Inkosi Izala Inkosi\" do not qualify to be considered for the throne. This principle also disqualifies all of their descendant.\"\n",
"There are even cases of Korean kings marrying princesses from abroad. For example, the Korean text Samguk Yusa about the Gaya kingdom (it was absorbed by the kingdom of Silla later), indicate that in 48 AD, King Kim Suro of Gaya (the progenitor of the Gimhae Kim clan) took a princess (Princess Heo) from the \"Ayuta nation\" (which is the Korean name for the city of Ayodhya in North India) as his bride and queen. Princess Heo belonged to the Mishra royal family of Ayodhya. According to the Samguk Yusa, the princess had a dream about a heavenly fair handsome king from a far away land who was awaiting heaven's anointed ride. After Princess Heo had the dream, she asked her parents, the king and queen of Ayodhya, for permission to set out and seek the foreign prince, which the king and queen urged with the belief that God orchestrated the whole fate. That king was no other than King Kim Suro of the Korean Gaya kingdom.\n",
"Sometimes, however, primogeniture can operate through the female line. In some systems a female may rule as monarch only when the male line dating back to a common ancestor is exhausted. In 1980, Sweden, by rewriting its 1810 Act of Succession, became the first European monarchy to declare equal (full cognatic) primogeniture, meaning that the eldest child of the monarch, whether female or male, ascends to the throne. Other European monarchies (such as the Netherlands in 1983, Norway in 1990 and Belgium in 1991) have since followed suit. Similar reforms were proposed in 2011 for the United Kingdom and the other Commonwealth realms, which came into effect in 2015 after having been approved by all of the affected nations. Sometimes religion is affected; under the Act of Settlement 1701 all Roman Catholics and all persons who have married Roman Catholics are ineligible to be the British monarch and are skipped in the order of succession.\n",
"In Brunei, the wife of the Sultan is known as a \"Raja Isteri\" with prefix \"Pengiran Anak\", equivalent to queen consort in English, as were the consorts of tsars when Bulgaria was still a monarchy.\n\nSection::::Titles.\n\nThe title of king consort for the husband of a reigning queen is rare, but not unheard of. Examples are Henry Stuart, Lord Darnley, in Scotland; Antoine of Bourbon-Vendôme in Navarre; and Ferdinand of Saxe-Coburg-Gotha in Portugal.\n",
"A queen regnant possesses and exercises sovereign powers, whereas a queen consort shares her husband's rank and titles, but does not share the sovereignty of her husband. The husband of a queen regnant traditionally does not share his wife's rank, title or sovereignty. However, the concept of a king consort is not unheard of in both contemporary and classical periods.\n\nA queen dowager is the widow of a king. A queen mother is a queen dowager who is also the mother of a reigning sovereign.\n\nSection::::History.\n",
"In some cases the old king of the conquered kingdom depended on his lords. 16th century France, or in other words France as it was at the time of writing of \"The Prince\", is given by Machiavelli as an example of such a kingdom. These are easy to enter but difficult to hold.\n",
"Although in the Khmer language there are many words meaning \"king\", the word officially used in Khmer (as found in the 1993 Cambodian Constitution) is \"preahmâhaksat\" (Khmer regular script: ព្រះមហាក្សត្រ), which literally means: \"preah\"- (\"excellent\", cognate of the Pali word vara) -\"mâha\"- (from Sanskrit, meaning \"great\", cognate with \"maha-\" in maharaja) -\"ksat\" (\"warrior, ruler\", cognate of the Sanskrit word kṣatrá).\n",
"A king can also be a queen's husband, and a queen can be a king's wife. If both of the couple reign, neither person is generally considered to be a consort. \n",
"Some monarchies even have a practice in which the monarch can formally abdicate in favor of his heir, and yet retain a kingly title with executive power, e.g. \"Maha Upayuvaraja\" (Sanskrit for \"Great Joint King\" in Cambodia), though sometimes also conferred on powerful regents who exercised executive powers.\n\nSection::::Prince as a substantive title.:Non-dynastic princes.\n",
"In parts of Europe, royalty continued to regularly marry into the families of their greatest vassals as late as the 16th century. More recently, they have tended to marry internationally. In other parts of the world royal intermarriage was less prevalent and the number of instances varied over time, depending on the culture and foreign policy of the era.\n",
"The dual title signifies a sovereign's dual role, but may also be created to improve a ruler's prestige. Both cases, however, show that the merging of rule was not simply a case of annexation where one state is swallowed by another, but rather of unification and almost equal status, though in the case of the British monarchy the suggestion that an emperor is higher in rank than a king was avoided by creating the title \"king-emperor\" (\"queen-empress\") instead of \"emperor-king\" (\"empress-queen\").\n\nSection::::In the British Empire.\n",
"It was practiced in the succession to the once-separate thrones of England and Scotland (until their union under James VI and I) and then the United Kingdom until 2015, when the Succession to the Crown Act 2013 changed it to absolute primogeniture. The rule change also applies to all Commonwealth realms that have the British monarch as their head of state.\n\nMale-preference primogeniture is currently practiced in succession to the thrones of Monaco and Spain (before 1700 and since 1830).\n",
"The kingdom of Portugal had the uncommon custom of not only recognizing the title of 'king', but also numbering a consort of a Queen regnant, but only if and when he fathers an heir apparent to her, and nevertheless they are also not included in official lists of Portuguese Chiefs of State. And so, the husband of Maria I is known as 'king consort Pedro III' until his death in 1786 before his wife; and also the husband of Maria II was titled 'King consort Fernando II' when their elder son, the future king Pedro V, was born in 1837, but this same title was retired from him when his queen wife died in 1853.\n"
] | [
"When a Queen marries her husband becomes a King. "
] | [
"When a Queen marries her husband does not become a King. "
] | [
"false presupposition"
] | [
"When a Queen marries her husband becomes a King. ",
"When a Queen marries her husband becomes a King. "
] | [
"normal",
"false presupposition"
] | [
"When a Queen marries her husband does not become a King. ",
"When a Queen marries her husband does not become a King. "
] |
2018-01007 | If we could do a manned landing on the moon in 1969, why has it been so hard to go back? Why have so few countries managed to do it, considering how far tech has progressed over the past 50 years? | We have the technology to build new pyramids, too, but we still haven't replicated the Giza ones. It's an enormously costly endeavor, tech or no. | [
"The United States continued other space exploration, including major participation with the ISS with its own modules. It also planned a set of unmanned Mars probes, military satellites, and more. The Constellation space program, began by President George W. Bush in 2004, aimed to launch a next-generation multifunction Orion spacecraft by 2018. A subsequent return to the Moon by 2020 was to be followed by manned flights to Mars, but the program was canceled in 2010 in favor of encouraging commercial US manned launch capabilities.\n",
"Beginning with the Soviet launch of the first satellite, Sputnik 1, in 1957, the United States competed with the Soviet Union for supremacy in outer space exploration. After the Soviets placed the first man in space, Yuri Gagarin, in 1961, President John F. Kennedy pushed for ways in which NASA could catch up, famously urging action for a manned mission to the Moon: \"I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the earth.\" The first manned flights produced by this effort came from Project Gemini (1965–1966) and then by the Apollo program, which despite the tragic loss of the Apollo 1 crew, achieved Kennedy's goal by landing the first astronauts on the Moon with the Apollo 11 mission in 1969.\n",
"In 1967, both nations faced serious challenges that brought their programs to temporary halts. Both had been rushing at full-speed toward the first piloted flights of Apollo and Soyuz, without paying due diligence to growing design and manufacturing problems. The results proved fatal to both pioneering crews.\n",
"With Cold War tensions running high, the Soviet Union and United States took their rivalry to the stars in 1957 with the Soviet launch of Sputnik. A \"space race\" between the two powers followed. Although the USSR reached several important milestones, such as the first craft on the Moon (Luna 2) and the first human in space (Yuri Gagarin), the U.S. pulled ahead eventually with its Mercury, Gemini, and Apollo programs, which culminated in Apollo 11's manned landing on the moon on 20 July 1969. Five more manned landings followed (Apollo 13 was forced to abort its mission). Nevertheless, despite its successes, the U.S. space program could not match many major achievements of the Soviet space program, such as unmanned rover-based space exploration and image and video transfer from the surface of another planet, until the early 21st century.\n",
"Now it is time to take longer strides - time for a great new American enterprise - time for this nation to take a clearly leading role in space achievement, which in many ways may hold the key to our future on Earth.brbr...I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth. No single space project in this period will be more impressive to mankind, or more important in the long-range exploration of space; and none will be so difficult or expensive to accomplish. \n",
"The idea of a joint Moon mission was abandoned after Kennedy's death, but the Apollo Project became a memorial to him. His goal was fulfilled in July 1969, with the successful Apollo 11 Moon landing. This accomplishment remains an enduring legacy of Kennedy's speech, but his deadline demanded a necessarily narrow focus, and there was no indication of what should be done next once it was achieved. Apollo did not usher in an era of lunar exploration, and no further missions were sent to the Moon after Apollo 17 in 1972. Subsequent planned Apollo missions were canceled. The Space Shuttle and International Space Station projects never captured the public imagination the way the Apollo Project did, and NASA would struggle to realize its visions with inadequate resources. Ambitious visions of space exploration were proclaimed by Presidents George H. W. Bush in 1989, George W. Bush in 2004, and Donald J. Trump in 2017, but the future of the American space program remains uncertain.\n",
"Section::::Early U.S. uncrewed lunar missions (1958–1965).\n\nIn contrast to Soviet lunar exploration triumphs in 1959, success eluded initial U.S. efforts to reach the Moon with the Pioneer and Ranger programs. Fifteen consecutive U.S. uncrewed lunar missions over a six-year period from 1958 to 1964 all failed their primary photographic missions; however, Rangers 4 and 6 successfully repeated the Soviet lunar impacts as part of their secondary missions.\n",
"The US conducted the first manned spaceflight to leave earth orbit and orbit the Moon on December 21, 1968 with the Apollo 8 space mission. Later on they succeeded in achieving President Kennedy's goal on July 20, 1969, with the landing of Apollo 11. Neil Armstrong and Buzz Aldrin became the first men to set foot on the Moon. Six such successful landings were achieved through 1972, with one failure on Apollo 13.\n",
"In 1960, President John F. Kennedy challenged US scientists to land Americans on the moon and bring them back safely to earth, before the decade was out. NASA rose to the occasion and achieved this staggering task with the landing of Apollo 11 on the moon in 1969.\n",
"After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation as with the International Space Station (ISS).\n\nWith the substantial completion of the ISS following STS-133 in March 2011, plans for space exploration by the U.S. remain in flux. Constellation, a Bush Administration program for a return to the Moon by 2020 was judged inadequately funded and unrealistic by an expert review panel reporting in 2009. \n",
"Before the Moon race the US had pre-projects for scientific and military moonbases: the Lunex Project and Project Horizon. Besides crewed landings, the abandoned Soviet crewed lunar programs included the building of a multipurpose moonbase \"Zvezda\", the first detailed project, complete with developed mockups of expedition vehicles and surface modules.\n\nSection::::Recent exploration.\n",
"In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon's surface; just four months later, \"Surveyor 1\" marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.\n",
"Within four months of each other in early 1966 the Soviet Union and the United States had accomplished successful Moon landings with uncrewed spacecraft. To the general public both countries had demonstrated roughly equal technical capabilities by returning photographic images from the surface of the Moon. These pictures provided a key affirmative answer to the crucial question of whether or not lunar soil would support upcoming crewed landers with their much greater weight.\n",
"... I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish. \n",
"Following the end of the Space Race, spaceflight has been characterised by greater international co-operation, cheaper access to low Earth orbit and an expansion of commercial ventures. Interplanetary probes have visited all of the planets in the Solar System, and humans have remained in orbit for long periods aboard space stations such as \"Mir\" and the ISS. Most recently, China has emerged as the third nation with the capability to launch independent manned missions, whilst operators in the commercial sector have developed re-usable booster systems and craft launched from airborne platforms.\n\nSection::::Background.\n",
"The USSR made no manned flights during this period but continued to develop its Soyuz craft and secretly accepted Kennedy's implicit lunar challenge, designing Soyuz variants for lunar orbit and landing. They also attempted to develop the N1, a large, manned Moon-capable launch vehicle similar to the US Saturn V.\n",
"Now it is time to take longer strides - time for a great new American enterprise - time for this nation to take a clearly leading role in space achievement, which in many ways may hold the key to our future on Earth.brbr...I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth. No single space project in this period will be more impressive to mankind, or more important in the long-range exploration of space; and none will be so difficult or expensive to accomplish. \n",
"India, Japan, China, the United States, and the European Space Agency each sent lunar orbiters, and especially ISRO's \"Chandrayaan-1\" has contributed to confirming the discovery of lunar water ice in permanently shadowed craters at the poles and bound into the lunar regolith. The post-Apollo era has also seen two rover missions: the final Soviet Lunokhod mission in 1973, and China's ongoing Chang'e 3 mission, which deployed its Yutu rover on 14 December 2013. The Moon remains, under the Outer Space Treaty, free to all nations to explore for peaceful purposes.\n\nSection::::Observation and exploration.:By spacecraft.:21st century.\n",
"To date, the United States is the only country to have successfully conducted crewed missions to the Moon, with the last departing the lunar surface in December 1972. All crewed and uncrewed soft landings had taken place on the near side of the Moon, until 3 January 2019 when the Chinese Chang'e 4 spacecraft made the first landing on the far side of the Moon.\n\nSection::::Uncrewed landings.\n",
"Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the sixth landing and the most recent human visit there, and the next, Exploration Mission 2, is due to orbit the Moon in 2023. Robotic missions are still pursued vigorously.\n\nSection::::Targets of exploration.:Mars.\n",
"BULLET::::- The Space Race between the United States and the Soviet Union gave a peaceful outlet to the political and military tensions of the Cold War, leading to the first human spaceflight with the Soviet Union's \"Vostok 1\" mission in 1961, and man's first landing on another world—the Moon—with America's \"Apollo 11\" mission in 1969. Later, the first space station was launched by the Soviet space program. The United States developed the first reusable spacecraft system with the Space Shuttle program, first launched in 1981. As the century ended, a permanent manned presence in space was being founded with the ongoing construction of the International Space Station.\n",
"Section::::Transport.\n\nSection::::Transport.:Earth to Moon.\n\nConventional rockets have been used for most lunar explorations to date. The ESA's SMART-1 mission from 2003 to 2006 used conventional chemical rockets to reach orbit and Hall effect thrusters to arrive at the Moon in 13 months. NASA would have used chemical rockets on its Ares V booster and Lunar Surface Access Module, that were being developed for a planned return to the Moon around 2019, but this was cancelled. The construction workers, location finders, and other astronauts vital to building, would have been taken four at a time in NASA's Orion spacecraft.\n",
"The Moon was first reached in September 1959 by the Soviet Union's Luna 2, an unmanned spacecraft, followed by the first successful soft landing by Luna 9 in 1966. The United States' NASA Apollo program achieved the only manned lunar missions to date, beginning with the first manned orbital mission by Apollo 8 in 1968, and six manned landings between 1969 and 1972, with the first being Apollo 11 in July 1969. These missions returned lunar rocks which have been used to develop a geological understanding of the Moon's origin, internal structure, and the Moon's later history. Since the 1972 Apollo 17 mission the Moon has been visited only by unmanned spacecraft.\n",
"In the 2000s, the People's Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future crewed space missions. China, Russia, Japan, and India have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 20th and 21st century.\n\nFrom the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize).\n\nSection::::History of exploration.\n\nSection::::History of exploration.:Telescope.\n",
"The United States launched its first lunar impactor attempt, Ranger 3, on January 26, 1962, which also failed to reach the Moon. This was followed by the first US success, Ranger 4, on April 23, 1962. This was followed by another 27 missions to the Moon from 1962 to 1973, including five successful Surveyor soft landers, five Lunar Orbiter surveillance probes, and nine Apollo missions which landed the first humans on the Moon.\n\nThe first human-crewed mission to perform TLI was Apollo 8 on December 21, 1968, making them the first humans to leave the Earth's influence.\n"
] | [
"If the humans have managed to land on the moon 50 years ago with far less technology than we have today, it should no be difficult to return in modern times therefore humans should have returned. "
] | [
"It is very expensive to return to the moon which makes it difficult for humans to go back."
] | [
"false presupposition"
] | [
"If the humans have managed to land on the moon 50 years ago with far less technology than we have today, it should no be difficult to return in modern times therefore humans should have returned. ",
"If the humans have managed to land on the moon 50 years ago with far less technology than we have today, it should no be difficult to return in modern times therefore humans should have returned. "
] | [
"normal",
"false presupposition"
] | [
"It is very expensive to return to the moon which makes it difficult for humans to go back.",
"It is very expensive to return to the moon which makes it difficult for humans to go back."
] |
2018-02331 | How do certain antibiotics and medicines target specific areas of the body? | Doan's doesn't, and they've had some brushes with advertising law because of their claims. It's a branded non-steroidal anti-inflammatory (magnesium salicylate) that works just like ibuprofen, aspirin, diclofenac or many others. It generally reduces inflammation by blocking an enzyme that makes chemicals that start/continue an inflammatory process. This works anywhere in the body, including the back. Most drugs that say they target a physical area are stretching the truth. There are a few mechanisms that may make it true, however. If the drug is not able to be absorbed by the gut, then it will stay in the gut, "targeting" the gut lining. If the drug is secreted into the urine by the kidneys, then it might be able to target the urinary system. Lots of drugs won't cross the barrier between the blood and the brain, which has evolved to stop harmful toxins reaching the brain - these will therefore "target" the rest of the body. The other way these claims could be true is if the drug targets a particular type of cell. Most drugs work by binding onto proteins or enzymes in and around cells. If the protein they bind to is only produced by a specific type of cell, then the drug targets that cell by only working on it - it doesn't get concentrated in the area around it though. | [
"In other cases, \"topical\" is defined as applied to a localized area of the body or to the surface of a body part regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution. Such medications are generally hydrophobic chemicals, such as steroid hormones. Specific types include transdermal patches which have become a popular means of administering some drugs for birth control, hormone replacement therapy, and prevention of motion sickness. One example of an antibiotic that may be applied topically is chloramphenicol.\n",
"The definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof is local.\n\nIn other cases, \"topical\" is defined as applied to a localized area of the body or to the surface of a body part regardless of the location of the effect. By this definition, topical administration also includes transdermal application, where the substance is administered onto the skin but is absorbed into the body to attain systemic distribution.\n",
"Section::::Local versus systemic effect.\n\nThe definition of the topical route of administration sometimes states that both the application location and the pharmacodynamic effect thereof is local. \n",
"Active targeting can also be achieved by utilizing magnetoliposomes, which usually serves as a contrast agent in magnetic resonance imaging. Thus, by grafting these liposomes with a desired drug to deliver to a region of the body, magnetic positioning could aid with this process.\n",
"The vehicle of an ointment is known as the \"ointment base\". The choice of a base depends upon the clinical indication for the ointment. The different types of ointment bases are:\n\nBULLET::::- Hydrocarbon bases, e.g. hard paraffin, soft paraffin, microcrystalline wax and ceresine\n\nBULLET::::- Absorption bases, e.g. wool fat, beeswax\n\nBULLET::::- Water-soluble bases, e.g. macrogols 200, 300, 400\n\nBULLET::::- Emulsifying bases, e.g. emulsifying wax, cetrimide\n\nBULLET::::- Vegetable oils, e.g. olive oil, coconut oil, sesame oil, almond oil and peanut oil.\n\nThe medicaments are dispersed in the base and are divided after penetrating the living cells of the skin.\n",
"BULLET::::- --- keratoderma, palmoplantar\n\nBULLET::::- --- keratoderma, palmoplantar, diffuse\n\nBULLET::::- --- papillon-lefevre disease\n\nBULLET::::- --- keratosis follicularis\n\nBULLET::::- --- pemphigus, benign familial\n\nBULLET::::- --- porokeratosis\n\nBULLET::::- --- porphyria, erythropoietic\n\nBULLET::::- --- porphyrias, hepatic\n\nBULLET::::- --- coproporphyria, hereditary\n\nBULLET::::- --- porphyria, acute intermittent\n\nBULLET::::- --- porphyria cutanea tarda\n\nBULLET::::- --- porphyria, hepatoerythropoietic\n\nBULLET::::- --- porphyria, variegate\n\nBULLET::::- --- protoporphyria, erythropoietic\n\nBULLET::::- --- pseudoxanthoma elasticum\n\nBULLET::::- --- rothmund-thomson syndrome\n\nBULLET::::- --- sjogren-larsson syndrome\n\nBULLET::::- --- xeroderma pigmentosum\n\nSection::::--- skin and connective tissue diseases.:--- skin diseases.:--- skin diseases, infectious.\n\nBULLET::::- --- dermatomycoses\n\nBULLET::::- --- blastomycosis\n\nBULLET::::- --- candidiasis, chronic mucocutaneous\n",
"Section::::Beneficial microorganisms.:Horizontal transmission.\n\nSome beneficial symbionts are acquired horizontally, from the environment or unrelated individuals. This requires that host and symbiont have some method of recognizing each other or each other’s products or services. Often, horizontally acquired symbionts are relevant to secondary rather than primary metabolism, for example for use in defense against pathogens, but some primary nutritional symbionts are also horizontally (environmentally) acquired. Additional examples of horizontally transmitted beneficial symbionts include bioluminescent bacteria associated with bobtail squid and nitrogen-fixing bacteria in plants.\n\nSection::::Beneficial microorganisms.:Mixed-mode transmission.\n",
"Section::::Side effects and concerns.\n\nBiologics are known to sometimes cause harsh side effects. Currently, Biologics are only delivered systemically. They can't be delivered orally because the harsh environment of the gastrointestinal tract would breakdown the drug before it could reach the diseased tissue. Because systemic administration results in blockading the same pathway in both healthy and diseased tissue, pharmacology is exaggerated leading to many side effects such as lymphoma, infections, congestive heart failure, demyelinating disease, a lupus-like syndrome, injection site reactions, and additional systemic side effects.\n",
"Indications for a central line over the more common peripheral IV line commonly includes poor peripheral venous access for a PIV. Another common indication is when patients would require infusions over a prolonged period of time, such as antibiotic therapy over a few weeks for osteomyelitis. Another indication is when the substances to be administered could irritate the blood vessel lining such as total parenteral nutrition, whose high glucose content can damage blood vessels, and some chemotherapy regimens. There is less damage to the blood vessels because central veins have a larger diameter than peripheral veins, have faster blood flow, and would get diluted as it is quickly distributed to the rest of the body. Vasopressors (such as norepinephrine, vasopressin, epinephrine, phenylephrine, among others) are typically infused through central lines to minimize the risk of extravasation.\n",
"This model may not be applicable in situations where some of the enzymes responsible for metabolizing the drug become saturated, or where an active elimination mechanism is present that is independent of the drug's plasma concentration. In the real world each tissue will have its own distribution characteristics and none of them will be strictly linear. If we label the drug's volume of distribution within the organism Vd and its volume of distribution in a tissue Vd the former will be described by an equation that takes into account all the tissues that act in different ways, that is:\n",
"Section::::Beneficial microorganisms.\n\nThe mode of transmission is also an important aspect of the biology of beneficial microbial symbionts, such as coral-associated dinoflagellates or human microbiota. Organisms can form symbioses with microbes transmitted from their parents, from the environment or unrelated individuals, or both.\n\nSection::::Beneficial microorganisms.:Vertical transmission.\n",
"Section::::Function.:Autonomic regulation.\n\nThe fenestrated sinusoidal capillaries of the area postrema and a specialized region of NTS make this particular region of the medulla critical in the autonomic control of various physiological systems, including the cardiovascular system and the systems controlling feeding and metabolism. Angiotensin II causes a dose-dependent increase in arterial blood pressure without producing considerable changes in the heart rate, an effect mediated by the area postrema.\n\nSection::::Clinical significance.\n\nSection::::Clinical significance.:Damage.\n",
"If defined strictly as having local effect, the topical route of administration can also include enteral administration of medications that are poorly absorbable by the gastrointestinal tract. One poorly absorbable antibiotic is vancomycin, which is recommended by mouth as a treatment for severe \"Clostridium difficile\" colitis.\n\nSection::::Choice of base formulation.\n",
"The paraaortic and retroaortic nodes receive: \n\nBULLET::::- (a) the efferents of the common iliac lymph nodes\n\nBULLET::::- (b) the lymphatics from the testis in the male, and from the ovary, uterine tube, and uterus in the female\n\nBULLET::::- (c) the lymphatics from the kidney and suprarenal gland\n\nBULLET::::- (d) the lymphatics draining the lateral abdominal muscles and accompanying the lumbar veins\n",
"Any study of pharmacological interactions between particular medicines should also discuss the likely interactions of some medicinal plants. The effects caused by medicinal plants should be considered in the same way as those of medicines as their interaction with the organism gives rise to a pharmacological response. Other drugs can modify this response and also the plants can give rise to changes in the effects of other active ingredients. \n\nThere is little data available regarding interactions involving medicinal plants for the following reasons:\n",
"BULLET::::- --- cardiotonic agents\n\nBULLET::::- --- fibrinolytic agents\n\nBULLET::::- --- natriuretic agents\n\nBULLET::::- --- nitric oxide donors\n\nBULLET::::- --- potassium channel blockers\n\nBULLET::::- --- sclerosing solutions\n\nBULLET::::- --- sodium channel blockers\n\nBULLET::::- --- vasoconstrictor agents\n\nBULLET::::- --- calcium channel agonists\n\nBULLET::::- --- nasal decongestants\n\nBULLET::::- --- vasodilator agents\n\nBULLET::::- --- endothelium-dependent relaxing factors\n\nBULLET::::- --- central nervous system agents\n\nBULLET::::- --- adjuvants, anesthesia\n\nBULLET::::- --- alcohol deterrents\n\nBULLET::::- --- analgesics\n\nBULLET::::- --- analgesics, non-narcotic\n\nBULLET::::- --- analgesics, opioid\n\nBULLET::::- --- anticonvulsants\n\nBULLET::::- --- anti-dyskinesia agents\n\nBULLET::::- --- antiparkinson agents\n\nBULLET::::- --- antiemetics\n\nBULLET::::- --- anti-obesity agents\n\nBULLET::::- --- appetite depressants\n",
"However, these models do not always truly reflect the real situation within an organism. For example, not all body tissues have the same blood supply, so the distribution of the drug will be slower in these tissues than in others with a better blood supply. In addition, there are some tissues (such as the brain tissue) that present a real barrier to the distribution of drugs, that can be breached with greater or lesser ease depending on the drug's characteristics. If these relative conditions for the different tissue types are considered along with the rate of elimination, the organism can be considered to be acting like two compartments: one that we can call the \"central compartment\" that has a more rapid distribution, comprising organs and systems with a well-developed blood supply; and a \"peripheral compartment\" made up of organs with a lower blood flow. Other tissues, such as the brain, can occupy a variable position depending on a drug's ability to cross the barrier that separates the organ from the blood supply.\n",
"The location of the target effect of active substances are usually rather a matter of pharmacodynamics (concerning e.g. the physiological effects of drugs). An exception is topical administration, which generally means that both the application location and the effect thereof is local.\n\nTopical administration is sometimes defined as both a local application location and local pharmacodynamic effect, and sometimes merely as a local application location regardless of location of the effects.\n\nSection::::Classification.:By application location.\n\nSection::::Classification.:By application location.:Enteral/gastrointestinal.\n",
"In gene therapy, gene delivery vectors, such as viruses, can be imaged according either to their particle biodistribution or their transduction pattern. The former means labeling the viruses with a contrast agent, being visible in some imaging modality, such as MRI or SPECT/PET and latter means visualising the marker gene of gene delivery vector to be visible by the means of immunohistochemical methods, optical imaging or even by PCR. Non-invasive imaging has gained popularity as the imaging equipment has become available for research use from clinics.\n",
"Chemical neuromodulation is always invasive, because a drug is delivered in a highly specific location of the body. The non-invasive variant is traditional pharmacotherapy, e.g. swallowing a tablet.\n\nBULLET::::- Intrathecal drug delivery systems (ITDS, which may deliver micro-doses of painkiller (for instance, ziconotide) or anti-spasm medicine (such as baclofen) directly to the site of action)\n\nSection::::History.\n",
"BULLET::::- --- hypoglycemic agents\n\nBULLET::::- --- immunologic factors\n\nBULLET::::- --- agglutinins\n\nBULLET::::- --- hemagglutinins\n\nBULLET::::- --- biological response modifiers\n\nBULLET::::- --- adjuvants, immunologic\n\nBULLET::::- --- interferon inducers\n\nBULLET::::- --- immunosuppressive agents\n\nBULLET::::- --- complement inactivating agents\n\nBULLET::::- --- myeloablative agonists\n\nBULLET::::- --- muscle relaxants, central\n\nBULLET::::- --- narcotic antagonists\n\nBULLET::::- --- natriuretic agents\n\nBULLET::::- --- antidiuretic agents\n\nBULLET::::- --- diuretics\n\nBULLET::::- --- diuretics, osmotic\n\nBULLET::::- --- neurotransmitter agents\n\nBULLET::::- --- adrenergic agents\n\nBULLET::::- --- adrenergic agonists\n\nBULLET::::- --- adrenergic alpha-agonists\n\nBULLET::::- --- adrenergic beta-agonists\n\nBULLET::::- --- adrenergic antagonists\n\nBULLET::::- --- adrenergic alpha-antagonists\n\nBULLET::::- --- adrenergic beta-antagonists\n",
"Section::::Regulation.\n\nBiologics or biological products for human use are regulated by the Center for Biologics Evaluation and Research (CBER), overseen by the Office of Medical Products and Tobacco, within the U.S. Food and Drug Administration which includes the Public Health Service Act and the Federal Food, Drug and Cosmetic Act. \"CBER protects and advances the public health by ensuring that biological products are safe and effective and available to those who need them. CBER also provides the public with information to promote the safe and appropriate use of biological products.\"\n\nSection::::Specialty market participants.\n",
"BULLET::::- --- antilipemic agents\n\nBULLET::::- --- anticholesteremic agents\n\nBULLET::::- --- hydroxymethylglutaryl-coa reductase inhibitors\n\nBULLET::::- --- lipotropic agents\n\nBULLET::::- --- antineoplastic agents\n\nBULLET::::- --- angiogenesis inhibitors\n\nBULLET::::- --- antibiotics, antineoplastic\n\nBULLET::::- --- anticarcinogenic agents\n\nBULLET::::- --- antimetabolites, antineoplastic\n\nBULLET::::- --- antimitotic agents\n\nBULLET::::- --- antineoplastic agents, alkylating\n\nBULLET::::- --- antineoplastic agents, hormonal\n\nBULLET::::- --- antineoplastic agents, phytogenic\n\nBULLET::::- --- myeloablative agonists\n\nBULLET::::- --- antirheumatic agents\n\nBULLET::::- --- anti-inflammatory agents, non-steroidal\n\nBULLET::::- --- gout suppressants\n\nBULLET::::- --- uricosuric agents\n\nBULLET::::- --- cardiovascular agents\n\nBULLET::::- --- anti-arrhythmia agents\n\nBULLET::::- --- antihypertensive agents\n\nBULLET::::- --- calcium channel blockers\n\nBULLET::::- --- cardioplegic solutions\n",
"BULLET::::- nuclear hormone receptors\n\nBULLET::::- structural proteins such as tubulin\n\nBULLET::::- membrane transport proteins\n\nBULLET::::- nucleic acids\n\nSection::::Drug target identification.\n\nIdentifying the biological origin of a disease, and the potential targets for intervention, is the first step in the discovery of a medicine using the reverse pharmacology approach. Potential drug targets are not necessarily disease causing but must by definition be disease modifying. An alternative means of identifying new drug targets is forward pharmacology based on phenotypic screening to identify \"orphan\" ligands whose targets are subsequently identified through target deconvolution.\n\nSection::::Databases.\n\nDatabases containing biological targets information:\n",
"Section::::Targeting vectors.\n"
] | [
"Anti-biotics target certain parts of the body.",
"Certain antibiotics and medicines can target specific areas in the body. "
] | [
"Anti-biotics do not target a part of the body they just work in the entire body. ",
"Some of the creators of antibiotics stretch the truth by stating their drugs can target specific areas in the body, when in fact they actually can't."
] | [
"false presupposition"
] | [
"Anti-biotics target certain parts of the body.",
"Certain antibiotics and medicines can target specific areas in the body. "
] | [
"false presupposition",
"false presupposition"
] | [
"Anti-biotics do not target a part of the body they just work in the entire body. ",
"Some of the creators of antibiotics stretch the truth by stating their drugs can target specific areas in the body, when in fact they actually can't."
] |
2018-12205 | How did apex predators that existed millions of years ago go extinct? (i.e Terror birds, Megalodon, Levyatan) | It's hard to pinpoint the exact causes for most prehistoric animals and plants, beyond trying to extrapolate data from the scarce info we have. We can look at extinction events nowadays for clues though. Overhunting and loss of habitat from environmental changes are the two biggest causes. | [
"BULLET::::- There is no historical evidence of boom and bust cycles causing even local extinctions in regions where large mammal predators have been driven extinct by hunting. The recent hunting out of remaining predators throughout most of the United States has not caused massive vegetational change or dramatic boom and bust cycles in ungulates.\n\nBULLET::::- It is not spatially explicit and does not track predator and prey species separately, whereas the multispecies overkill model does both.\n",
"When populations of apex predators decrease, populations of mesopredators often increase. This is the mesopredator release effect. \"Mesopredator outbreaks often lead to declining prey populations, sometimes destabilizing communities and driving local extinctions\". When apex predators are removed from the ecosystem, this gives the mesopredators less competition and conflict. They are able to catch more prey and have lower mortality rates. Often, mesopredators can take over the role of apex predators. This happens when new species are introduced into an ecosystem or when species leave or are killed off. When this happens, and the new apex predator or former mesopredator, becomes the new species on top of the food chain, it is important to remember that they are not ecologically identical to the former apex predator, and is likely a smaller species, which will have different effects on the structure and stability of the ecosystem. The mesopredators that become the new apex predators are the species that benefit from this mesopredator release. Apex predators reduce mesopredator populations, and change mesopredator behaviors and habitat choices, by preying on and intimidating mesopredators. This can occur in any ecosystem with any type of relationship between predator and prey. However, in the case of the relationship between apex predator and mesopredator, it could mean that the apex predator causes the mesopredator to leave the ecosystem, again, creating room for a new species to become mesopredator. \n",
"BULLET::::- \"c.\" 1360 - \"Nesophontes\" survived in Cuba until around this time.\n\nSection::::2nd millennium CE.:15th century.\n\nBULLET::::- \"c.\" 1400 - New Zealand's Haast's eagle, a giant bird of prey, becomes extinct. The eagle's main prey were various species of moa, which also went extinct.\n\nBULLET::::- \"c.\" 1420 - The South Island giant moa survived in New Zealand's South Island until around this time.\n\nBULLET::::- \"c.\" 1440 - The lemur \"Palaeopropithecus ingens\" survived in Madagascar until about this time.\n\nBULLET::::- The moas of New Zealand became extinct, probably due to hunting.\n\nSection::::2nd millennium CE.:16th century.\n",
"BULLET::::- After the arrival of \"H. sapiens\" in the New World, existing predators must share the prey populations with this new predator. Because of this competition, populations of original, or first-order, predators cannot find enough food; they are in direct competition with humans.\n\nBULLET::::- Second-order predation begins as humans begin to kill predators.\n\nBULLET::::- Prey populations are no longer well controlled by predation. Killing of nonhuman predators by \"H. sapiens\" reduces their numbers to a point where these predators no longer regulate the size of the prey populations.\n",
"Megafauna that disappeared in Africa or Asia during the Early and Middle Pleistocene include:\n\nBULLET::::- Various giraffids (e.g. \"Giraffa jumae\"; \"Giraffa\" extirpated in Asia during the Middle Pleistocene)\n\nBULLET::::- \"Paracamelus\"\n\nBULLET::::- \"Camelus moreli\"\n\nBULLET::::- \"Soergelia\"\n\nBULLET::::- \"Damalops\"\n\nBULLET::::- \"Parmularius\"\n\nBULLET::::- Various \"Gazella\" sp. (e.g. \"Gazella psolea\")\n\nBULLET::::- \"Makapania\"\n\nBULLET::::- Dubois’ antelope (\"Dubosia santeng\")\n\nBULLET::::- \"Bos acutifrons\"\n\nBULLET::::- A few species of warthog such as \"Metridiochoerus\"\n\nBULLET::::- \"Kolpochoerus\"\n\nBULLET::::- Chalicotheres (e.g. \"Ancylotherium, Nestoritherium\")\n\nBULLET::::- Giant Eurasian beaver (\"Trogontherium\")\n\nBULLET::::- \"Hypolagus\"\n\nBULLET::::- \"Hippopotamus gorgops\" (a giant hippopotamus)\n\nBULLET::::- \"Serengetilagus\"\n\nBULLET::::- Various members of Equidae\n\nBULLET::::- \"Equus stenonis\"\n\nBULLET::::- \"Eurygnathohippus\"\n\nBULLET::::- \"Hipparion\"\n",
"Section::::Extinction.:Changing ecosystem.\n",
"Section::::Role in ecosystems.\n\nSection::::Role in ecosystems.:Trophic level.\n\nOne way of classifying predators is by trophic level. Carnivores that feed on herbivores are secondary consumers; their predators are tertiary consumers, and so forth. At the top of this food chain are apex predators such as lions. Many predators however eat from multiple levels of the food chain; a carnivore may eat both secondary and tertiary consumers.\n\nPredators must also contend with intraguild predation, where other predators kill and eat them. For example, coyotes compete with and sometimes kill gray foxes and bobcats.\n\nSection::::Role in ecosystems.:Biodiversity maintained by apex predation.\n",
"There are some inconsistencies between the current available data and the prehistoric overkill hypothesis. For instance, there are ambiguities around the timing of sudden extinctions of Australian megafauna. Biologists note that comparable extinctions have not occurred in Africa and South or Southeast Asia, where the fauna evolved with hominids. Post-glacial megafaunal extinctions in Africa have been spaced over a longer interval.\n",
"BULLET::::- Powerful goshawk and the Gracile goshawk (\"Accipiter efficax et Accipiter quartus\")\n\nBULLET::::- \"Sylviornis\" (giant, flightless New Caledonian galliform- largest in existence)\n\nBULLET::::- Noble megapode (\"Megavitornis altirostris\")\n\nBULLET::::- New Caledonian gallinule (\"Porphyrio kukwiedei\")\n\nBULLET::::- Giant megapodes\n\nBULLET::::- Giant malleefowl (\"Leipoa gallinacea\")\n\nBULLET::::- Pile-builder megapode (\"Megapodius molistructor\")\n\nBULLET::::- Consumed scrubfowl (\"Megapodius alimentum\")\n\nBULLET::::- Viti Levu scrubfowl (\"Megapodius amissus\")\n\nBULLET::::- New Caledonian ground dove (\"Gallicolumba longitarsus\")\n\nBULLET::::- New Caledonian snipe et Viti Levu snipe (\"Coenocorypha miratropica\" et \"Coenocorypha neocaledonica\")\n\nBULLET::::- Niue night heron (\"Nycticorax kalavikai\")\n\nBULLET::::- Marquesas cuckoo-dove (\"Macropygia heana\")\n\nBULLET::::- New Caledonian barn owl (\"Tyto letocarti\")\n\nBULLET::::- Various \"Galliraillus\" sp.\n",
"Section::::Ecological roles.:Conservation.\n\nBecause apex predators have powerful effects on other predators, on herbivores, and on plants, they can be important in nature conservation. Humans have hunted many apex predators close to extinction, but in some parts of the world these predators are now returning. They are increasingly threatened by climate change. For example, the polar bear requires extensive areas of sea ice to hunt its prey, typically seals, but climate change is shrinking the sea ice of the Arctic, forcing polar bears to fast on land for increasingly long periods.\n",
"Predatory megafaunal flightless birds were often able to compete with mammals in the early Cenozoic. Later in the Cenozoic, however, they were displaced by advanced carnivorans and died out. In North America, the bathornithids \"Paracrax\" and \"Bathornis\" were apex predators but became extinct by the Early Miocene. In South America, the related phorusrhacids shared the dominant predatory niches with metatherian sparassodonts during most of the Cenozoic but declined and ultimately went extinct after eutherian predators arrived from North America (as part of the Great American Interchange) during the Pliocene. In contrast, large herbivorous flightless ratites have survived to the present.\n",
"BULLET::::- It is unable to explain why large herbivore populations were not regulated by surviving carnivores such as grizzly bears, wolves, pumas, and jaguars whose populations would have increased rapidly in response to the loss of competitors.\n\nBULLET::::- It does not explain why almost all extinct carnivores were large herbivore specialists such as sabre toothed cats and short faced bears, but most hypocarnivores and generalized carnivores survived.\n",
"Carnivorous theropod dinosaurs including \"Allosaurus\" and \"Tyrannosaurus\" have been described as apex predators, based on their size, morphology, and dietary needs. \n\nA Permian shark, \"Triodus sessilis\", was discovered containing two amphibians (\"Archegosaurus decheni\" and \"Cheliderpeton latirostre\"), one of which had consumed a fish, \"Acanthodes bronni\", showing that the shark had lived at a trophic level of at least 4.\n\nAmong more recent fossils, the sabre-tooth cats, like \"Smilodon\", are considered to have been apex predators in the Cenozoic.\n\nSection::::Interactions with humans.\n\nSection::::Interactions with humans.:In hunting.\n",
"BULLET::::- \"c.\" 1180 - The Maui Nui moa-nalo survived until around this time. The moa-nalo were large ducks and the Hawaiian Islands' major herbivores.\n\nBULLET::::- \"c.\" 1190 - The Hunter Island penguin survived until around this time.\n\nSection::::2nd millennium CE.:14th century.\n\nBULLET::::- \"c.\" 1320 - The lemur \"Megaladapis edwardsi\" survived in Madagascar until about this time.\n\nBULLET::::- \"c.\" 1322 - The upland moa survived in New Zealand's South Island until around this time.\n\nBULLET::::- \"c.\" 1326 - Mantell's moa survived in New Zealand's North Island until around this time.\n",
"Section::::Role in ecosystems.:Population dynamics.\n\nIn the absence of predators, the population of a species can grow exponentially until it approaches the carrying capacity of the environment. Predators limit the growth of prey both by consuming them and by changing their behavior. Increases or decreases in the prey population can also lead to increases or decreases in the number of predators, for example, through an increase in the number of young they bear.\n",
"Life on earth has suffered occasional mass extinctions at least since . Despite their distrous effects, mass extinctions have sometimes accelerated the evolution of life on earth. When dominance of an ecological niche passes from one group of organisms to another, this is rarely because the new dominant group outcompetes the old, but usually because an extinction event allows new group to outlive the old and move into its niche.\n",
"Section::::Arguments against both climate change and overkill.\n\nIt may be observed that neither the overkill nor the climate change hypotheses can fully explain events: browsers, mixed feeders and non-ruminant grazer species suffered most, while relatively more ruminant grazers survived. However, a broader variation of the overkill hypothesis may predict this, because changes in vegetation wrought by either Second Order Predation (see below) or anthropogenic fire preferentially selects against browse species.\n\nSection::::Hyperdisease hypothesis.\n\nSection::::Hyperdisease hypothesis.:Theory.\n",
"The term \"mesopredator release\" was first used by Soulé and colleagues in 1988 to describe a process whereby mid-sized carnivorous mammals became far more abundant after being \"released\" from the control of a larger carnivore. This, in turn, resulted in decreased populations of still smaller prey species, such as birds. This may lead to dramatic prey population decline, or even extinction, especially on islands. This process arises when mammalian top predators are considered to be the most influential factor on trophic structure and biodiversity in terrestrial ecosystems. Top predators may feed on herbivores and kill predators in lower trophic levels as well. Thus, reduction in the abundance of top predators may cause the medium-sized predator population to increase, therefore having a negative effect on the underlying prey community. The mesopredator release hypothesis offers an explanation for the abnormally high numbers of mesopredators and the decline in prey abundance and diversity. The hypothesis supports the argument for conservation of top predators because they protect smaller prey species that are in danger of extinction. This argument has been a subject of interest within conservation biology for years, but few studies have adequately documented the phenomenon.\n",
"BULLET::::- Large animals store more fat in their bodies than do medium-sized animals and this should have allowed them to compensate for extreme seasonal fluctuations in food availability.\n",
"Section::::Paleobiology.\n\nSection::::Paleobiology.:Predatory behavior.\n",
"Quaternary extinction event\n\nThe Quaternary period (from 2.588 ± 0.005 million years ago to the present) saw the extinctions of numerous predominantly megafaunal species, which resulted in a collapse in faunal density and diversity and the extinction of key ecological strata across the globe. The most prominent event in the Late Pleistocene is differentiated from previous Quaternary pulse extinctions by the widespread absence of ecological succession to replace these extinct species, and the regime shift of previously established faunal relationships and habitats as a consequence.\n",
"BULLET::::- \"c.\" 885 - \"Daubentonia robusta\" survived in Madagascar until about this time.\n\nSection::::1st millennium \"CE\".:10th century.\n\nBULLET::::- \"c.\" 900 - The nene-nui survived on Maui until around this time.\n\nBULLET::::- \"c.\" 915 - \"Plesiorycteropus\" survived in Madagascar until about this time.\n\nBULLET::::- \"c.\" 950 - Sinoto's lorikeet and the conquered lorikeet survived until about this time.\n\nBULLET::::- \"c.\" 996 - The New Zealand owlet-nightjar survived until about this time.\n\nSection::::2nd millennium CE.\n\nSection::::2nd millennium CE.:12th century.\n",
"Mesonychids probably originated in China, where the most primitive mesonychid, \"Yangtanglestes\", is known from the early Paleocene. They were also most diverse in Asia, where they occur in all major Paleocene faunas. Since other predators, such as creodonts and Carnivora, were either rare or absent in these animal communities, mesonychids most likely dominated the large predator niche in the Paleocene of Eastern Asia.\n",
"The catastrophic mass extinction at the end of the Permian, around 252 million years ago, killed off about 70 percent of terrestrial vertebrate species and the majority of land plants.\n\nAs a result, ecosystems and food chains collapsed, and the establishment of new stable ecosystems took about 30 million years. With the disappearance of the gorgonopsians, which were dominant predators in the late Permian, the cynodonts' principal competitors for dominance of the carnivorous niches were a previously obscure sauropsid group, the archosaurs, which includes the ancestors of crocodilians and dinosaurs.\n",
"The eight or more species of elephant birds, giant flightless ratites in the genera \"Aepyornis\", \"Vorombe\", and \"Mullerornis\", are extinct from over-hunting, as well as 17 species of lemur, known as giant, subfossil lemurs. Some of these lemurs typically weighed over , and fossils have provided evidence of human butchery on many species.\n\nSection::::Influences.:Competition by humans.:Islands.:New Zealand.\n"
] | [
"Causes for apex predator extinction are known. "
] | [
"Exact causes for apex predator extinctions are unknown but clues can be found. "
] | [
"false presupposition"
] | [
"Causes for apex predator extinction are known. ",
"Causes for apex predator extinction are known. "
] | [
"false presupposition",
"normal"
] | [
"Exact causes for apex predator extinctions are unknown but clues can be found. ",
"Exact causes for apex predator extinctions are unknown but clues can be found. "
] |
2018-14075 | How do cameras focus? How do they know what they're pointing at is in focus? | It depends on the camera, but there are two main methods. Cheap or small cameras, like your cell phone, use contrast. Basically they take a picture (from the preview) and pick a strip of pixels. Then they subtract each pixel in the strip from its neighbor to get the difference, square each difference, and add those numbers up. That's one method for getting the contrast of the strip; there are others. Then it just focuses back and forth until it finds the focus setting that gets the highest contrast number (basically the least blurry, since that would mean it's in focus). Higher-end cameras use a mirror behind the lens to split a bit of the light away to a second lens for the autofocus sensors. The lenses are focused on different parts of the mirror, and thus different parts of the lens, but also such that when in focus it is the same spot on the image. Since an in-focus image is when all the light through the lens from a point hits the same point on the film, the camera can use the different spots to determine not only whether they are hitting the same spot, but how far off they are and in what direction. The benefit of this is that these cameras don't need to search for an in-focus setting; they can see an out-of-focus spot and determine how the lens needs to be adjusted to be in focus, which means focusing can be much faster. The downside is you can only focus on spots that already have a sensor set up. | [
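A minimal sketch in Python of the contrast trick described in that answer; the function names and the simple sweep over focus positions are illustrative assumptions, not any particular camera's firmware.

    # Contrast metric from the answer: sum of squared differences between
    # neighbouring pixels along a strip; sharper images score higher.
    def strip_contrast(pixels):
        return sum((pixels[i + 1] - pixels[i]) ** 2
                   for i in range(len(pixels) - 1))

    # Naive contrast-detection autofocus: try each focus position and keep
    # the one whose preview strip has the highest contrast score.
    # capture_strip(position) is assumed to return a list of pixel
    # brightness values read from the preview at that focus setting.
    def autofocus(capture_strip, focus_positions):
        best_position, best_score = None, float("-inf")
        for position in focus_positions:
            score = strip_contrast(capture_strip(position))
            if score > best_score:
                best_score, best_position = score, position
        return best_position

Real cameras typically refine this with a hill-climbing search rather than an exhaustive sweep, which is why contrast-detection focus visibly hunts back and forth before locking on.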
"Almost all modern lenses for SLRs and DSLRs provide automatic focus. The autofocus sensor(s) and electronics are actually in the camera body, and this circuitry provides electrical power and signals to a motor inside the lens that adjusts the focus. (Some older autofocus systems are based on a motor in the camera body and using a mechanical connection to the focus mechanism in the lens.)\n",
"All of the above functions are independent of lens focus and stabilizing methods.\n\nSection::::Automatic modes.\n",
"Some cameras (Minolta 7, Canon EOS-1V, 1D, 30D/40D, Sony DSLR-A700, DSLR-A850, DSLR-A900) also have a few \"high-precision\" focus points with an additional set of prisms and sensors; they are only active with \"fast lenses\" with certain geometrical apertures (typically f-number 2.8 and faster). Extended precision comes from the wider effective measurement base of the \"range finder\".\n\nSection::::Passive.:Contrast detection.\n",
"Some cameras have post focusing. Post focusing means take the pictures first and then focusing later at the personal computer. The camera uses many tiny lenses on the sensor to capture light from every camera angle of a scene and is called plenoptics technology. A current plenoptic camera design has 40,000 lenses working together to grab the optimal picture.\n\nSection::::Physics.:Lens.:Autofocus.\n\nOn some cameras, the selection of a point in the imaging frame upon which the auto-focus system will attempt to focus. Many Single-lens reflex cameras (SLR) feature multiple auto-focus points in the viewfinder.\n\nSection::::Physics.:Exposure control.\n\nSection::::Physics.:Exposure control.:Aperture.\n",
"Section::::Focus.\n\nFocus is the tendency for light rays to reach the same place on the image sensor or film, independent of where they pass through the lens. For clear pictures, the focus is adjusted for distance, because at a different object distance the rays reach different parts of the lens with different angles. In modern photography, focusing is often accomplished automatically.\n",
"Section::::D.:Depth of Field.\n\nThe in-focus range of a lens or optical system around an item of interest. It is measured from the distance behind an object of interest, to the distance in front of the object of interest, when the viewing lens is specifically focused on the object of interest. Depth of field depends on subject-to-camera distance, focal length of the lens, and f-stop.\n\nSection::::D.:Depth of Focus.\n\nThe range of sensor-to-lens distance for which the image formed by the lens is clearly focused.\n\nSection::::D.:Digital Imager.\n",
"Due to the optical properties of photographic lenses, only objects within a limited range of distances from the camera will be reproduced clearly. The process of adjusting this range is known as changing the camera's focus. There are various ways of focusing a camera accurately. The simplest cameras have fixed focus and use a small aperture and wide-angle lens to ensure that everything within a certain range of distance from the lens, usually around 3 metres (10 ft) to infinity, is in reasonable focus. Fixed focus cameras are usually inexpensive types, such as single-use cameras. The camera can also have a limited focusing range or scale-focus that is indicated on the camera body. The user will guess or calculate the distance to the subject and adjust the focus accordingly. On some cameras this is indicated by symbols (head-and-shoulders; two people standing upright; one tree; mountains).\n",
"Follower Pots are installed on lens that allows feedback to the controller information relevant to zoom and focus positioning allowing the controller to quickly adjust to a preselected scene and arrive in focus at the proper focal length automatically.\n\nSection::::L.:Lens Speed.\n\nThe ability of a lens to transmit light, represented as the ratio of the focal length to the diameter of the lens. The largest lens opening (smallest f-number) at which the lens can be set. A fast lens transmits more light and has a larger opening than a slow lens.\n\nSection::::L.:Letterbox.\n",
"Section::::Active.\n\nActive AF systems measure distance to the subject independently of the optical system, and subsequently adjust the optical system for correct focus.\n",
"Section::::Trap focus.:AI Servo.\n",
"BULLET::::- Field of view. The field of view (FOV) is the part which can be seen by the machine vision system at one moment. The field of view depends from the lens of the system and from the working distance between object and camera.\n\nBULLET::::- Focus. An image, or image point or region, is said to be in focus if light from object points is converged about as well as possible in the image; conversely, it is out of focus if light is not well converged. The border between these conditions is sometimes defined via a circle of confusion criterion.\n",
"Section::::Focus motors.\n\nModern autofocus is done through one of two mechanisms; either a motor in the camera body and gears in the lens (\"screw drive\") or through electronic transmission of the drive instruction through contacts in the mount plate to a motor in the lens. Lens-based motors can be of a number of different types, but are often ultrasonic motors or stepper motors.\n",
"Passive systems may not find focus when the contrast is low, notably on large single-colored surfaces (walls, blue sky, etc.) or in low-light conditions. Passive systems are dependent on a certain degree of illumination to the subject (whether natural or otherwise), while active systems may focus correctly even in total darkness when necessary. Some cameras and external flash units have a special low-level illumination mode (usually orange/red light) which can be activated during auto-focus operation to allow the camera to focus.\n\nSection::::Trap focus.\n",
"BULLET::::- Lens. A lens is a device that causes light to either converge and concentrate or to diverge, usually formed from a piece of shaped glass. Lenses may be combined to form more complex optical systems as a Normal lens or a Telephoto lens.\n\nBULLET::::- Lens Controller. A lens controller is a device used to control a motorized (ZFI) lens. Lens controllers may be internal to a camera, a set of switches used manually, or a sophisticated device that allows control of a lens with a computer.\n",
"Focus (optics) – \n\nFocus puller – \n\nFocusing screen – \n\nFoley artist – \n\nFollow focus – \n\nFollow shot – \n\nFollowspot light – \n\nForced perspective – \n\nForeshadowing – \n\nFormalist film theory – \n\nFound footage – \n\nFourth wall – \n\nFrame – \n\nFrame composition – \n\nFrame rate – \n\nFrazier lens – \n\nFreeze frame shot – \n\nFrench hours – \n\nFrench Impressionist Cinema – \n\nFresnel lantern – \n\nFresnel lens – \n\nF-stop – \n\nFull frame – \n\nFull shot\n\nSection::::G.\n\nGaffer – \n",
"There are basically three types of F mount Nikon lens:\n\nBULLET::::1. MF = Manual focus lenses\n\nBULLET::::2. AF & AF-D = Auto focus by camera body driven focus motor, the D version provides distance information\n\nBULLET::::3. AF-I & AF-S = Auto focus by integrated/ultrasonic motor in lens; see also List of Nikon F-mount lenses with integrated autofocus motors\n",
"Focusing screens, in their simplest form, consist of a matte glass or plastic surface on which the image can be focused. Other devices, such as split-image prisms or microprisms, can help determine focus.\n\nManual focus lenses can also be used on modern digital cameras with an adapter. Zeiss, Leica and Cosina Voigtländer are among current manufacturers who continue to make manual lenses in lens mounts native to modern cameras. \n",
"Section::::Types of digital cameras.:Digital rangefinders.\n\nA rangefinder is a device to measure subject distance, with the intent to adjust the focus of a camera's objective lens accordingly (open-loop controller). The rangefinder and lens focusing mechanism may or may not be coupled. In common parlance, the term \"rangefinder camera\" is interpreted very narrowly to denote manual-focus cameras with a visually-read out optical rangefinder based on parallax. Most digital cameras achieve focus through analysis of the image captured by the objective lens and distance estimation, if it is provided at all, is only a byproduct of the focusing process (closed-loop controller).\n",
"Section::::Outline.\n",
"This is the principle of the camera, and of the human eye. The focusing adjustment of a camera adjusts \"S\", as using an image distance different from that required by this formula produces a defocused (fuzzy) image for an object at a distance of \"S\" from the camera. Put another way, modifying \"S\" causes objects at a different \"S\" to come into perfect focus.\n",
"Section::::Factors affecting MTF in typical camera systems.:Oversampling and downconversion to maintain the optical transfer function.\n",
"In small-format cameras, the smaller circle of confusion limit yields a proportionately smaller depth of focus. In motion-picture cameras, different lens mount and camera gate combinations have exact flange focal depth measurements to which lenses are calibrated.\n",
"Section::::Techniques.:Optical image stabilization.:Sensor-shift.\n",
"Autofocus systems rely on one or more sensors to determine correct focus. Some AF systems rely on a single sensor, while others use an array of sensors. Most modern SLR cameras use through-the-lens optical sensors, with a separate sensor array providing light metering, although the latter can be programmed to prioritize its metering to the same area as one or more of the AF sensors.\n",
"With a reflex finder, you can focus the image on the ground glass and frame your picture at the same time. It is common to find a device on the center of the ground glass to help precise focusing, for example a split-image or a microprism device. Today's reflex cameras usually incorporate autofocusing.\n\nReflex finders are found in:\n\nBULLET::::- Single-lens reflex (SLR) cameras, with one lens for both viewing and taking the picture\n\nBULLET::::- Twin-lens reflex (TLR) cameras, with one lens for viewing and one lens for taking the picture\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-03030 | Why is much of Europe’s climate mild when Europe is so much further North of the equator than much of North America is? | The Gulf Stream in the Atlantic carries warm water to Europe, while cold currents flow south along North America's east coast. URL_0 | [
"The North Atlantic Gulf Stream, a tropical oceanic current that passes north of the Caribbean and up the East Coast of the United States to North Carolina, then heads east-northeast to the Azores, is thought to greatly modify the climate of Northwest Europe. As a result of the Gulf Stream, west-coast areas located in high latitudes like Ireland, the UK, and Norway have much milder winters (for their latitude) than would otherwise be the case. The lowland attributes of western Europe also help drive marine air masses into continental areas, enabling cities such as Dresden, Prague, and Vienna to have maritime climates in spite of being located well inland from the ocean.\n",
"Section::::Temperature.\n\nMost of Europe sees seasonal temperatures consistent with temperate climates in other parts of the world, though summers north of the Mediterranean Sea are cooler than most temperate climates experience in summer. Among the cities with a population over 100,000 people in Europe, the coldest winters are mostly found in Russia, with daily highs in winter averaging 0 C (32 F), while the mildest winters in the continent are in coastal southern Spain and the southernmost coast of Crete.\n",
"Section::::Cities and regions known for microclimates.:Europe.\n\nBULLET::::- Known for its wines, the Ticino region in Switzerland benefits from a microclimate in which palm trees and banana trees grow.\n\nBULLET::::- Montreux in Switzerland.\n\nBULLET::::- Weggis in Switzerland.\n\nBULLET::::- Ahr Valley in Germany\n\nBULLET::::- Gran Canaria is called \"Miniature Continent\" for its rich variety of microclimates.\n\nBULLET::::- Tenerife is known for its wide variety of microclimates.\n\nBULLET::::- Biddulph Grange is very rich with microclimates as a result of the large dips and variety of very large trees alongside a large amount of water.\n",
"BULLET::::- Thracian Lowlands and Upper Struma Valley, Bulgaria\n\nBULLET::::- Azores (bordering maritime)\n\nBULLET::::- Madeira\n\nBULLET::::- Algerian wine regions\n\nBULLET::::- Egyptian wine regions (irrigated by the Nile system)\n\nBULLET::::- Moroccan wine regions\n\nBULLET::::- Tunisian wine regions\n\nSection::::Continental climates.\n",
"Much native vegetation in Mediterranean climate area valleys have been cleared for agriculture. In places such as the Sacramento Valley and Oxnard Plain in California, draining marshes and estuaries combined with supplemental irrigation has led to a century of intensive agriculture. Much of the Overberg in the southern Cape of South Africa, once covered with renosterveld, has likewise been largely converted to agriculture, mainly wheat. In hillside and mountainous areas, away from urban sprawl, ecosystems and habitats of native vegetation are more sustained.\n",
"BULLET::::- Proximity to oceans moderates the climate. For example, the Scandinavian Peninsula has more moderate climate than similarly northern latitudes of northern Canada.\n",
"Section::::Climate.\n\nSouthern Europe's most emblematic climate is that of the Mediterranean climate, which has become a typically known characteristic of the area, which is due to the large subtropical semi-permanent centre of high atmospheric pressure found, not in the Mediterranean itself, but in the Atlantic Ocean, the Azores High. The Mediterranean climate covers Portugal, Southern and Eastern Spain, Southern France, Monaco, Italy, Greece, coastal Croatia, Albania, as well as the Mediterranean islands. Those areas of Mediterranean climate present similar vegetations and landscapes throughout, including dry hills, small plains, pine forests and olive trees.\n",
"Europe covers about , or 2% of the Earth's surface (6.8% of land area). Politically, Europe is divided into about fifty sovereign states of which the Russian Federation is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a total population of about 741 million (about 11% of the world population) . The European climate is largely affected by warm Atlantic currents that temper winters and summers on much of the continent, even at latitudes along which the climate in Asia and North America is severe. Further from the sea, seasonal differences are more noticeable than close to the coast.\n",
"BULLET::::- Slovenia: Koper\n\nBULLET::::- Spain: A Coruña, Balearic Islands, Barcelona, Bilbao, Madrid, Málaga, Murcia, Santander, Seville, Valencia, Zaragoza\n\nBULLET::::- United Kingdom: Coastal Cornwall and the Isles of Scilly (borderline) , Bailiwick of Guernsey (borderline) \n\nBULLET::::- Vatican City\n\nSection::::Oceania.\n",
"Central Europe is a good example of a transition from an oceanic climate to a continental climate, which can be noticed immediately when looking at the hardiness zones, which tend to decrease mainly eastwards instead of northwards. Also, the plateaux and low mountain ranges in this region have a significant impact on how cold it might get during winter. Generally speaking, the hardiness zones are high considering the latitude of the region, although not as high as in the Shetland Islands where zone 9 extends to over 60°N. In Central Europe, the relevant zones decrease from zone 8 on the Belgian, Dutch, and German North Sea coast, with the exception of some of the Frisian Islands (notably Vlieland and Terschelling), the island of Helgoland, and some of the islands in the Rhine-Scheldt estuary, which are in zone 9, to zone 5 around Suwałki, Podlachia on the far eastern border between Poland and Lithuania. Some isolated, high elevation areas of the Alps and Carpathians may even go down to zone 3 or 4. An extreme example of a cold sink is Funtensee, Bavaria which is at least in zone 3 and maybe even in zone 1 or 2. Another notable example is Waksmund, a small village in the Polish Carpathians, which regularly reaches during winter on calm nights when cold and heavy airmasses from the surrounding Gorce and Tatra Mountains descend down the slopes to this low-lying valley, creating extremes which can be up to colder than nearby Nowy Targ or Białka Tatrzańska, which are both higher up in elevation. Waksmund is in zone 3b while nearby Kraków, only to the north and lower is in zone 6a. These examples prove that local topography can have a pronounced effect on temperature and thus on what is possible to grow in a specific region.\n",
"Cooler climates can be found in certain parts of Southern European countries, for example within the mountain ranges of Spain and Italy. Additionally, the north coast of Spain experiences a wetter Atlantic climate.\n\nSection::::Flora.\n\nSouthern Europe's flora is that of the Mediterranean Region, one of the phytochoria recognized by Armen Takhtajan. The Mediterranean and Submediterranean climate regions in Europe are found in much of Southern Europe, mainly in Southern Portugal, most of Spain, the southern coast of France, Italy, the Croatian coast, much of Bosnia, Montenegro, Kosovo, Serbia, Albania, Bulgaria, North Macedonia, Greece, and the Mediterranean islands.\n\nSection::::History.\n\nSection::::History.:Early history.\n",
"BULLET::::- Leeds, located in Yorkshire, England is known to have a number of microclimates because of the number of valleys surrounding the city centre.\n\nBULLET::::- The coastal areas in the Andalusia region of Spain typically average around at in summer, but Tarifa only averages . Further north along the coast Cádiz has a summer average of with very warm nights, whereas nearby Jerez de la Frontera has summer highs of with inland areas further north such as Seville being even hotter.\n\nSection::::Cities and regions known for microclimates.:Asia and Oceania.\n",
"Usually, if the inland areas have a humid continental climate, the coastal areas stay much milder during winter months, in contrast to the hotter summers. This is the case further north on the American west coast, such as in British Columbia, Canada, where Vancouver has an oceanic wet winter with rare frosts, but inland areas that average several degrees warmer in summer have cold and snowy winters.\n\nSection::::Background.:Soil types.\n",
"Cfb climates are predominant in central parts of Western Europe, including northern Spain, Northwestern Portugal (mountains), Belgium, Britain, France, Ireland and the Netherlands. They are the main climate type in New Zealand and the Australian states of Tasmania, Victoria and southeastern New South Wales (starting from the Illawarra region). In North America, they are found mainly in Washington, Oregon, Vancouver Island and neighbouring parts of British Columbia, as well as many coastal areas of southwest Alaska. There are pockets of Cfb in most South American countries, including many parts of Southern Chile, parts of the provinces of Chubut, Santa Cruz and Buenos Aires in Argentina. In Western Asia small pockets are found close to sea level on the Black Sea coast of northern Turkey and Georgia. While Cfb zones are rare in Africa, one dominates the coastline of the Eastern Cape in South Africa.\n",
"BULLET::::- the Roman Warm Period\n\nBULLET::::- the Medieval Warm Period\n\nBULLET::::- the retreat of glaciers since 1850\n\nBULLET::::- the \"Modern Warming\" during the 20th century\n\nCertain effects have occurred during these cycles. For example, during the Medieval Warm Period, the American Midwest was in drought, including the Sand Hills of Nebraska which were active sand dunes. The black death plague of \"Yersinia pestis\" also occurred during Medieval temperature fluctuations, and may be related to changing climates.\n",
"As pointed out by Rudolf Geiger in his book not only climate influences the living plant but the opposite effect of the interaction of plants on their environment can also take place, and is known as \"plant climate\".\n\nSection::::Sources and influences on microclimate.:Dams.\n\nArtificial reservoirs as well as natural ones create microclimates and often influence the macroscopic climate as well.\n\nSection::::Cities and regions known for microclimates.\n\nSection::::Cities and regions known for microclimates.:Americas.\n",
"The Mediterranean climate is most readily associated with the areas around the Mediterranean basin, where viticulture and winemaking first flourished on a large scale due to the influence of the Phoenicians, Greeks, and Romans of the ancient world.\n\nSection::::Mediterranean climates.:Wine regions with Mediterranean climates.\n\nBULLET::::- Tuscany and most other Central-Southern Italian wine regions\n\nBULLET::::- Liguria\n\nBULLET::::- Marsala, Sicily\n\nBULLET::::- Sardinia\n\nBULLET::::- Most Greek wine regions\n\nBULLET::::- Cyprus wine regions\n\nBULLET::::- Israeli wine regions\n\nBULLET::::- Jordanian wine regions\n\nBULLET::::- Lebanese wine regions\n\nBULLET::::- Palestinian wine regions\n\nBULLET::::- Most Albanian wine regions\n\nBULLET::::- Most Montenegrin wine regions\n\nBULLET::::- Corsica\n",
"Section::::Human aspects.\n\nSection::::Human aspects.:Demography, fauna and flora.\n\nThe vast majority of the world's human population resides in temperate zones, especially in the northern hemisphere, due to its greater mass of land. The biggest described number in temperate region in the world is found in southern Africa, where some 24,000 taxa (species and infraspecific taxa) have been described, but the native fauna and flora of this region does not have much cultural importance for the majority of the human population of the world that lives in Temperate Zones and that live in the Northern Hemisphere, only environmental importance.\n\nSection::::Human aspects.:Agriculture.\n",
"BULLET::::- For the pollen zone and Blytt-Sernander period, associated with the climate optimum, see Atlantic (period).\n\nSection::::Global effects.\n",
"It is a rain shadow wind that results from the subsequent adiabatic warming of air that has dropped most of its moisture on windward slopes (\"see\" orographic lift). As a consequence of the different adiabatic lapse rates of moist and dry air, the air on the leeward slopes becomes warmer than equivalent elevations on the windward slopes. Föhn winds can raise temperatures by as much as 14 °C (25 °F) in just a matter of minutes. Central Europe enjoys a warmer climate due to the Föhn, as moist winds off the Mediterranean Sea blow over the Alps.\n\nSection::::See also.\n\nBULLET::::- Forest restoration\n",
"Evidence of a warm climate in Europe, for example, comes from archaeological studies of settlement and farming in the Early Bronze Age at altitudes now beyond cultivation, such as Dartmoor, Exmoor, the Lake district and the Pennines in Great Britain. The climate appears to have deteriorated towards the Late Bronze Age however. Settlements and field boundaries have been found at high altitude in these areas, which are now wild and uninhabitable. Grimspound on Dartmoor is well preserved and shows the standing remains of an extensive settlement in a now inhospitable environment.\n",
"\"Csb\" climates are found in northwestern Iberia, namely Galicia and the north of Portugal, coastal California, western Washington and Oregon, southern portions of Vancouver Island in British Columbia, central Chile, parts of southern Australia and sections of southwestern South Africa.\n\nSection::::Cold-summer Mediterranean climate.\n",
"Section::::Climatology.\n\nContinental climates exist where cold air masses infiltrate during the winter and warm air masses form in summer under conditions of high sun and long days. Places with continental climates are as a rule are either far from any moderating effect of oceans (examples: Omaha, Nebraska and Kazan, Russia) or are so situated that prevailing winds tend to head offshore (example: Boston, USA). Such regions get quite warm in the summer, achieving temperatures characteristic of tropical climates but are colder than any other climates of similar latitude in the winter.\n\nSection::::Neighbouring climates.\n",
"Therefore, the average temperature throughout the year of Naples is , while it is only in New York City which is almost on the same latitude. Berlin, Germany; Calgary, Canada; and Irkutsk, in the Asian part of Russia, lie on around the same latitude; January temperatures in Berlin average around higher than those in Calgary, and they are almost higher than average temperatures in Irkutsk. Similarly, northern parts of Scotland have a temperate marine climate. The yearly average temperature in city of Inverness is . However, Churchill, Manitoba, Canada, is on roughly the same latitude and has an average temperature of , giving it a nearly subarctic climate.\n",
"Oceanic climates in Europe occur mostly in Northwest Europe, from Ireland and Great Britain eastward to central Europe. Most of France (away from the Mediterranean), Belgium, the Netherlands, Denmark, Germany, Norway, the north coast of Spain (Basque Country, north of Navarre, Galicia, Asturias and Cantabria), the western Azores off the coast of Portugal, the south of Kosovo and southern portions of Sweden, also have oceanic climates. Examples of oceanic climates are found in Glasgow, London, Bergen, Amsterdam, Dublin, Berlin, Bilbao, Donostia-San Sebastian, Biarritz, Bayonne, Zürich, Copenhagen, Skagen and Paris. With decreasing distance to the Mediterranean Sea, the oceanic climate of Northwest Europe gradually changes to the subtropical dry-summer or Mediterranean climate of southern Europe. The line between Oceanic and Continental climate in Europe runs in a generally north to south direction. For example, western Germany is more impacted by milder Atlantic air masses than is eastern Germany. Thus, winters across Europe become colder to the east, and (in some locations) summers become hotter. The line between oceanic Europe and Mediterranean Europe normally runs west to east and is related to changes in precipitation patterns and differences to seasonal temperatures.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-17740 | How to know all the instruments used in a song? | Typically you just know. If you have experience in music you can usually tell what the instruments are by ear. | [
"BULLET::::- American Music Awards\n\nBULLET::::- Good Morning America\n\nBULLET::::- The Today Show\n\nBULLET::::- Lopez Tonight\n\nBULLET::::- Jimmy Kimmel Live!\n\nSection::::Keyboard Instruments.\n\nBULLET::::- Moog Synthesizers\n\nBULLET::::- Access Virus\n\nBULLET::::- Novation\n\nBULLET::::- Analogue Systems\n\nBULLET::::- Critter & Guitari\n\nBULLET::::- Roland JX-305\n\nBULLET::::- Roland JX-3P\n\nBULLET::::- Roland JX-8P\n\nBULLET::::- Roland JV-1080\n\nBULLET::::- Roland TR-606\n\nBULLET::::- Roland TR-707\n\nBULLET::::- Roland TR-909\n\nBULLET::::- Roland XP-80\n\nBULLET::::- Clavia Nord Lead\n\nBULLET::::- Clavia Nord Electro 2\n\nBULLET::::- Clavia Nord Modular\n\nBULLET::::- Korg M1\n\nBULLET::::- Korg MS2000\n\nBULLET::::- Kurzweil K2000\n\nBULLET::::- Analog synthesizers\n\nBULLET::::- Yamaha Piano\n\nBULLET::::- Hammond Organ\n\nBULLET::::- Ondes Martenot\n\nSection::::Selected discography.\n",
"BULLET::::- List of songs recorded by Led Zeppelin\n\nBULLET::::- List of songs recorded by Jennifer Lopez\n\nBULLET::::- List of songs recorded by Lorde\n\nBULLET::::- List of songs recorded by Lostprophets\n\nBULLET::::- List of songs recorded by Mike Love\n\nBULLET::::- List of songs recorded by Mallu Magalhães\n\nBULLET::::- List of songs recorded by Magnapop\n\nBULLET::::- List of songs recorded by Marina and the Diamonds\n\nBULLET::::- List of songs recorded by Maroon 5\n\nBULLET::::- List of songs recorded by Bruno Mars\n\nBULLET::::- List of songs recorded by Mireille Mathieu\n\nBULLET::::- List of songs recorded by Mayday Parade\n",
"BULLET::::- Charles Moniz – additional engineer\n\nBULLET::::- Wayne Gordon – recording\n\nBULLET::::- Bob Mallory – assistant recording\n\nBULLET::::- Tyler Hartman – assistant recording\n\nBULLET::::- Brent Kolatalo – drum engineering\n\nBULLET::::- Ken Lewis – drum engineering\n\nSection::::Release history.\n",
"BULLET::::- List of songs recorded by The Rasmus\n\nBULLET::::- List of songs recorded by Raven-Symoné\n\nBULLET::::- List of songs recorded by Red Hot Chili Peppers\n\nBULLET::::- List of songs recorded by Blind Joe Reynolds\n\nBULLET::::- List of songs recorded by Damien Rice\n\nBULLET::::- List of songs recorded by Rihanna\n\nBULLET::::- List of songs recorded by Rise Against\n\nBULLET::::- List of songs recorded by Kelly Rowland\n\nBULLET::::- List of songs recorded by Roxette\n\nBULLET::::- List of songs recorded by Rush\n\nBULLET::::- List of songs recorded by Saint Etienne\n\nBULLET::::- List of songs recorded by Kumar Sanu\n",
"BULLET::::- List of songs recorded by 311\n\nBULLET::::- List of songs recorded by 4Minute\n\nBULLET::::- List of songs recorded by 911\n\nBULLET::::- List of songs recorded by A-Teens\n\nBULLET::::- List of songs recorded by Aaliyah\n\nBULLET::::- List of songs recorded by Adele\n\nBULLET::::- List of songs recorded by Aerosmith\n\nBULLET::::- List of songs recorded by Ariana Grande\n\nBULLET::::- List of songs recorded by Die Ärzte\n\nBULLET::::- List of songs recorded by After School\n\nBULLET::::- List of songs recorded by Christina Aguilera\n\nBULLET::::- List of songs recorded by AKB48\n\nBULLET::::- List of songs recorded by Fiona Apple\n",
"BULLET::::- will.i.am - vocals on all tracks except 11; backing vocals on track 11; Moog synthesizers on tracks 1, 4, 5, 8, 9, 10, 11 and 13; drum programming on tracks 1, 4, 6, 7, 8, 9 and 10; clavinet on tracks 12 and 13; drums and piano on track 2; Wurlitzer electric piano on track 4; synthesizer on track 10; executive production; production; engineering on tracks 2, 3, 4, 8, 9, 12 and 14; mixing on tracks 10 and 14\n",
"BULLET::::- 2011 \"Lost in the City of Angels\" L.A. Guns – Engineer\n\nBULLET::::- 2011 \"From Gainsbourg to Lulu\" Lulu Gainsbourg – Bass, Drums, Engineer, Percussion, Producer\n\nBULLET::::- 2012 \"Music from Another Dimension!\" Aerosmith – Vocals (Background)\n\nBULLET::::- 2012 \"More Music From the Rum Diary\" – Producer\n\nBULLET::::- 2012 \"Hollywood Forever\" L.A. Guns – Engineer\n\nBULLET::::- 2012 \"Born Villain\" Marilyn Manson – Bass, Engineer, Guitar, Keyboards, Producer\n\nBULLET::::- 2013 \"West of Memphis: Voices for Justice [Original Motion Picture Soundtrack]\" – Engineer, Featured Artist, Piano, Producer, Spoken Word Producer, Strings\n\nBULLET::::- 2013 \"The Lone Ranger: Wanted\" – Producer, Arranger, Mixing\n",
"BULLET::::- List of songs recorded by Ada Jones\n\nBULLET::::- List of songs recorded by Joy Division\n\nBULLET::::- List of songs recorded by Junoon\n\nBULLET::::- List of songs recorded by JYJ\n\nBULLET::::- List of songs recorded by Kara\n\nBULLET::::- List of songs recorded by Kasabian\n\nBULLET::::- List of songs recorded by Keane\n\nBULLET::::- List of songs recorded by Kent\n\nBULLET::::- List of songs recorded by Alicia Keys\n\nBULLET::::- List of songs recorded by Morgana King\n\nBULLET::::- List of songs recorded by Kings of Leon\n\nBULLET::::- List of songs recorded by Kavita Krishnamurthy\n",
"BULLET::::- List of songs recorded by Nikka Costa\n\nBULLET::::- List of songs recorded by Miley Cyrus\n\nBULLET::::- List of songs recorded by Dalida\n\nBULLET::::- List of songs recorded by The Darkness\n\nBULLET::::- List of songs recorded by A Day to Remember\n\nBULLET::::- List of songs recorded by De/Vision\n\nBULLET::::- List of songs recorded by Lana Del Rey\n\nBULLET::::- List of songs recorded by Destiny's Child\n\nBULLET::::- List of songs recorded by Dido\n\nBULLET::::- List of songs recorded by Celine Dion\n\nBULLET::::- List of songs recorded by DJ Quik\n\nBULLET::::- List of songs recorded by Dragonette\n",
"BULLET::::- Keyboard instruments and related keyboard gear\n\nBULLET::::- Grand piano (e.g., Steinway)\n\nBULLET::::- Hammond organ and rotating Leslie speaker\n\nBULLET::::- Fender Rhodes electric piano\n\nBULLET::::- Wurlitzer electric piano\n\nBULLET::::- MIDI keyboard or MIDI-equipped stage piano\n\nBULLET::::- Vintage synthesizers (e.g., Moog synthesizers)\n\nBULLET::::- Keyboard amplifier\n\nBULLET::::- Acoustic drum kit: this may only include the wood-shelled drums and the stands. Studios typically own major brands such as Premier, Ludwig and Gretsch. Some studios have a selection of classic snares. Drummers typically prefer to use their own snare drum and cymbals\n",
"Section::::Composition and recording.\n",
"Studio production (Sound recording, Mixing, Mastering) and tour with Laurent de Wilde & Gaël Horellou on \"Organics\". \n\nTour in France, Brazil, Canada, Spain, United Kingdom.\n\nBULLET::::- 1998:\n\nWorking composition for the video game industry in Paris, Museum of Natural History, and EMME Interactive.\n\nRecording and mixing the album of Anne Ducros / Produced in collaboration with Didier Lockwood.\n\nSection::::Trainer and Speaker in MAO.\n\nBULLET::::- 2013 to 2014:\n",
"BULLET::::- List of songs recorded by Nickelback\n\nBULLET::::- List of songs recorded by Nightwish\n\nBULLET::::- List of songs recorded by Nirvana\n\nBULLET::::- List of songs recorded by No Angels\n\nBULLET::::- List of songs recorded by Noisettes\n\nBULLET::::- List of songs recorded by Brandy Norwood\n\nBULLET::::- List of songs recorded by Oasis\n\nBULLET::::- List of songs recorded by Phil Ochs\n\nBULLET::::- List of songs recorded by The Offspring\n\nBULLET::::- List of songs recorded by Oh Land\n\nBULLET::::- List of songs recorded by One Direction\n\nBULLET::::- List of songs recorded by Patti Page\n\nBULLET::::- List of songs recorded by Paramore\n",
"BULLET::::- Ed Stasium – production, mixing on tracks 2–4, 6–8, 10–12, 14–15, guitar, backing vocals on 2–4, 6–9, 11, 14–15, bass on 2–4, 6–9, piano on 3, 7, 9, percussion on 4, 6, 8–9, 11, 14, accordion on 4, mellotron on 14\n\nBULLET::::- Mickey Leigh – guitar on tracks 1, 4–6, 8, 10, 11, bass on 1, 5, 10–11, percussion on 1,5, 10, 14–15, backing vocals on 1, 3–6, 10–11, 14, keyboards on 1, 10, 14, organ on 11, mixing on 1, 5, 11, production on 4–5, 11\n\nBULLET::::- Daniel Rey – production on tracks 2–4, 6–8, 10–12, 14–15\n",
"BULLET::::- Anders Herrlin – bass guitar, programming and engineering\n\nBULLET::::- Jonas Isacsson – electric guitars\n\nBULLET::::- Clarence Öfwerman – keyboards and production\n\nBULLET::::- Staffan Öfwerman – percussion and background vocals\n\nBULLET::::- Mats \"M.P.\" Persson – engineering\n\nBULLET::::- Alar Suurna – engineering\n\nSection::::Covers.\n\nRussian metal cover project Even Blurry Videos released their version of the song on YouTube in July 2019.\n\nSection::::Cascada cover (2005).\n",
"\"Something I've always done is just make music for me first. What would I like to hear? What kind of drums do I like? What kind of genres do I like? Second of all, see what's already being done, and then go against the grain. Whatever is popular I try to go anti, somehow. Like if things were more atmospheric than I would go opposite, or if things were dry, I would go more atmospheric. It's kinda hard to really say what you do. Just keep changing, make music for yourself, something you enjoy, not what you think other people enjoy.\"\n",
"BULLET::::- Hector Cervantes – Electric guitar\n\nBULLET::::- Juan DeVevo – Electric guitar, acoustic guitar\n\nBULLET::::- Melodee DeVevo – Violin\n\nBULLET::::- Hector Cervantes – Piano, keyboard\n\nBULLET::::- Mark Hall – Vocals\n\nBULLET::::- Chris Huffman – Bass guitar\n\nBULLET::::- Andy Williams – Drums\n\nBULLET::::- Additional musicians\n\nBULLET::::- David Angell - Violin\n\nBULLET::::- Monisa Angell - Violin\n\nBULLET::::- David Davidson - Contractor, concertmaster\n\nBULLET::::- Jack Jezioro - Bass\n\nBULLET::::- Anthony Lamarchina - Cello\n\nBULLET::::- Sarighani Reist - Cello\n\nBULLET::::- Pamela Sixfin - Violin\n\nBULLET::::- Mary Vanosdale - Violin\n\nBULLET::::- Kristin Wilkinson - Viola\n\nBULLET::::- Technical\n\nBULLET::::- Richard Dodd - Mastering\n\nBULLET::::- Terry Hemmings - Executive producer\n",
"BULLET::::- List of songs recorded by Ashley Tisdale\n\nBULLET::::- List of songs recorded by Tokio Hotel\n\nBULLET::::- List of songs recorded by Tokyo Jihen\n\nBULLET::::- List of songs recorded by Travis\n\nBULLET::::- List of songs recorded by TVXQ\n\nBULLET::::- List of songs recorded by U2\n\nBULLET::::- List of songs recorded by Usher\n\nBULLET::::- List of songs recorded by Carrie Underwood\n\nBULLET::::- List of songs recorded by Steve Vai\n\nBULLET::::- List of songs recorded by The Velvet Underground\n\nBULLET::::- List of songs recorded by Julieta Venegas\n\nBULLET::::- List of songs recorded by Rufus Wainwright\n\nBULLET::::- List of songs recorded by Westlife\n",
"BULLET::::- John Turnbull - Electric & acoustic guitar, ukulele, backing vocals\n\nBULLET::::- Vince Lovepump - Violin, mandolin, Portuguese guitar, backing vocals\n\nBULLET::::- Jim Russell - Drums, percussion, backing vocals\n\nBULLET::::- Niall Power - Drums, percussion, backing vocals\n\nGuest Musicians:\n\nBULLET::::- Roger Taylor - Backing vocals, percussion (on \"Here's To You\")\n\nBULLET::::- Henry Dagg - Musical saw\n\nBULLET::::- Gary Roberts - Electric guitar\n\nBULLET::::- Tash Roper - Clarinet and backing vocals\n\nBULLET::::- Joshua J. Macrae: Percussion\n\nBULLET::::- Darrell Willis - Hand claps and verbals\n\nProduction Credits: \n\nBULLET::::- Pete Briquette - Producer\n\nBULLET::::- Joshua J. Macrae - Engineer\n",
"Recordings started with producer Roberto Laghi at Gothenburg based Top Floor Studios (together with the studio's engineer Jakob Herrmann) with drums on July 16, 2016, which were done by July 24. Recording of guitars and bass guitar started on August 3, with bass done by September 4. The recording of brass instruments started on September 18, strings on September 19, a five-person choir on September 24, and grand piano on September 27. Recording of lead vocals started on September 29, and ended on October 9, concluding the main recordings.\n",
"Chief analyst of the music to be played is bass guitarist and producer Bart van Poppel. After thorough archeology of an album's arrangements — sheet music is not available — and consultation of Andy Babiuk Beatles Gear \"bible\" of Beatles instruments, the hunt begins for the necessary gear. For instance for a 1965 Lowrey Heritage Deluxe organ, or one of only thirty known existing mellotrons from a particular series, used in the intro of Strawberry Fields Forever. Even if an instrument is used on only one track, they will get one, and in at least one case it took a full year to obtain a piece.\n",
"BULLET::::- Matt Costa – vocals on all tracks, electric guitar on tracks 2, 6, 7 and 10, acoustic guitar on tracks 4, 5, 9, 11 and 12, classical guitar on track 8, lap steel on track 4, twelve-string guitar on tracks 5, 8 and 11, electric twelve-string guitar on track 1, bass on tracks 4, 8 and 10, piano on tracks 1, 3, 4, 5, 6 and 10, organ on tracks 1 and 11, harmonica on track 1, trumpet on tracks 3, 9 and 11, harpsichord on track 8, drums on tracks 8 and 10, autoharp on tracks 1 and 8, lute on track 8, percussion on tracks 4, 7, 8, 9, 10, 11 and 12, production on all tracks, engineering on all tracks\n",
"BULLET::::- Nick Haussling – A&R Coordinator\n\nBULLET::::- Brandon Kilgour – mixing\n\nBULLET::::- Steve Martin – programming\n\nBULLET::::- Kyle Moorman – engineering\n\nBULLET::::- Erik Ron – engineering\n\nBULLET::::- Carlotta Moye – photography\n\nBULLET::::- Drew Pearson – recording\n\nBULLET::::- Scott Roewe – Pro Tools, logic tech\n\nBULLET::::- Dave Russell – mixing\n\nBULLET::::- Billy Steinberg – production\n\nBULLET::::- Stuart Stuart – vocal production\n\nBULLET::::- Stephen Walker – design\n\nBULLET::::- Fabien Waltmann – programming\n\nBULLET::::- Greg Wells – production, mixing, programming, piano, beats\n\nSection::::Release history.\n",
"BULLET::::- Additional Engineering on Tracks 4, 5, 6, 8, 9, 10, 12 & 14 – Joel Moss, Freddy Pinero, Chris Steinmetz and Jeff \"Woody\" Woodruff.\n\nBULLET::::- Assistant Engineers on Tracks 4, 5, 6, 8, 9, 10, 12 & 14 – Wade Childers, Brian Dixon, Andrew Felluss, Nick Howard, Timorhy Olmstead and Ryan Smith.\n\nBULLET::::- Pro Tools Operation on Tracks 4, 5, 6, 8, 9, 10, 12 & 14 – Steve Deutsch, Andrew Felluss and Pete Karam.\n",
"Guitar – Art Byington\n\nBass – Richie Lee\n\nDrums, Percussion, Effects – Mike Kelley\n\nL.A. Studio: Chéz Kelley\n\nL.A. Remote: Your Place Or Mine\n\nAssistant Engineer: Tom Nellen\n\nIn New York City\n\nKeyboards and Accordion – Tom Judson\n\nGuitars – Randolph A. Hudson III, Dave Rick, Dom Fleming, Ann Magnuson\n\nPercussion – David Licht\n\nTrombone – Christoper Washburne\n\nTrumpet – John Walsh \n\nTheremin – Walter Sear, Don Fleming\n\nNYC Studio: Sear Sound\n\nEngineer: Bil Emmons\n\nMastering: Greg Calbi at Masterdisk, NY, NY\n\nVibeology: Jim Dunbar\n\nwith special guest star Jim Thirlwell as \"That Satan Guy\"\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-07639 | Why do bees die when they lose their stinger? | It's connected to their innards and pulls out their innards when they sting something and then fly away. | [
"The worker bee's stinger is a complex organ that allows a bee to defend itself and the hive from most mammals. Attacking bees aim for the face by sensing regions with high levels of carbon dioxide (like mosquitos). Bee stings against mammals and birds typically leave the stinger embedded in the victim due to the structure of flesh and the stinger's barbs. In this case, the venom bulb stays with the stinger and continues to pump. Upon losing its stinger, the bee will subsequently die since the portion where the stinger bulb was removed rips out part of its insides.\n",
"Hornets, some ants, centipedes, and scorpions also sting.\n\nA few insects leave their stinger in the wound, but this is overstated. For example, of the 20,000 species of bees worldwide, only the half-dozen species of honeybees (\"Apis\") are reported to have a barbed stinger that cannot be withdrawn; of wasps, nearly all are reported to have smooth stingers with the exception of two species, \"Polybia rejecta\" and \"Synoeca surinama\". A sting, and especially multiple stings, may give rise to severe systemic symptoms which may lead to death; this most frequently occurs with a few social bees and wasps.\n\nSection::::Arthropods.\n",
"Drone bees, the males, are larger and do not have stingers. The female bees (worker bees and queens) are the only ones that can sting, and their stinger is a modified ovipositor. The queen bee has a barbed but smoother stinger and can, if need be, sting skin-bearing creatures multiple times, but the queen does not leave the hive under normal conditions. Her sting is not for defense of the hive; she only uses it for dispatching rival queens, ideally before they can emerge from their cells. Queen breeders who handle multiple queens and have the queen odor on their hands are sometimes stung by a queen.\n",
"Section::::Stingless bees of Brazil.\n",
"Section::::Behavior.:Soldier caste.\n\nWhile the existence of a soldier caste is well known in ants and termites, the phenomenon was unknown among bees until 2012, when some stingless bees were found to have a similar caste of defensive specialists that help guard the nest entrance against intruders; to date, at least 10 species have been documented to possess such \"soldiers\", including \"Tetragonisca angustula\", \"T. fiebrigi\", and \"Frieseomelitta longipes\", with the guards not only larger, but also sometimes a different color from ordinary workers.\n\nSection::::Stingless bees of Australia.\n",
"The barbs on the stinger will not catch on most animals besides mammals and birds, which means that such animals can be stung many times by the same bee.\n\nSection::::Stingless bees.\n\nThere are many bees in this group, native to all continents except for Europe and Antarctica, that have workers which do not have stingers. These bees are not defenseless, however, as they can bite with their mandibles, occasionally releasing caustic secretions at the same time, similar to the defenses of some ants.\n\nSection::::Symbolism.\n",
"Despite being in general fairly peaceful, with exception of a few species such as the \"tubuna\" (\"Scaptotrigona bipunctata\"), most Brazilian meliponines will react if their hives are molested, nipping with their jaws, entangling themselves in the hair, trying to enter in the ears or the nose, and releasing propolis or even acid over their aggressors.\n",
"Section::::Mayan stingless bees of Central America.:Future.\n\nThe outlook for meliponines in Mesoamerica is uncertain. The number of active \"Melipona\" beekeepers is rapidly declining in favor of the more economical, nonindigenous Africanized \"Apis mellifera\". The high honey yield, 100 kg (220 lbs) or more annually, along with the ease of hive care and ability to create new hives from existing stock, commonly outweighs the negative consequences of \"killer bee\" hive maintenance.\n",
"Section::::Stingless bees of Australia.:Pollination.\n",
"Bees with barbed stingers can often sting other insects without harming themselves. Queen honeybees and bees of many other species, including bumblebees and many solitary bees, have smoother stingers with smaller barbs, and can sting mammals repeatedly.\n",
"Section::::Mayan stingless bees of Central America.:Tulum.\n",
"Section::::Behavior.:Role differentiation.\n\nIn a simplified sense, the sex of each bee depends on the number of chromosomes it receives. Female bees have two sets of chromosomes (diploid)—one set from the queen and another from one of the male bees or drones. Drones have only one set of chromosomes (haploid), and are the result of unfertilized eggs, though inbreeding can result in diploid drones.\n",
"\"T. corvina\", as a species of stingless bees, are important crop pollinators. Species native to Costa Rica are known to visit chayote flowers, which become much more fruitful when visited by \"T. corvina\". They are also known to pollinate Panama hat plants (\"Carludovica palmata\"). In general, stingless bees are effective pollinators because they are less harmful to humans than honeybees, and they are resistant to the common diseases and parasites of honeybees.\n",
"Section::::Invertebrates.:Bees and wasps.\n\nSometimes when honey bees (genus \"Apis\") sting a victim, the barbed stinger remains embedded. As the bee tears itself loose, the stinger takes with it the entire distal segment of the bee's abdomen, along with a nerve ganglion, various muscles, a venom sac, and the end of the bee's digestive tract.\n",
"The bee has been considered an agricultural pest for some crops, such as passion fruit, because it damages leaves and flowers while collecting nest materials, and tunnels through the unopened flowers to collect the nectar (thus frustrating their normal pollinators). On the other hand, they are significant pollinators on their own, e.g. for onions.\n\nSection::::Taxonomy and Phylogeny.\n",
"Although it is widely believed that a worker honey bee can sting only once, this is a partial misconception: although the stinger is in fact barbed so that it lodges in the victim's skin, tearing loose from the bee's abdomen and leading to its death in minutes, this only happens if the skin of the victim is sufficiently thick, such as a mammal's. Honey bees are the only hymenoptera with a strongly barbed sting, though yellow jackets and some other wasps have small barbs.\n",
"Queens and workers have a modified ovipositor, a stinger, with which they defend the hive. Unlike bees of any other genus and the queens of their own species, the stinger of worker western honey bees is barbed. Contrary to popular belief, a bee does not always die soon after stinging; this misconception is based on the fact that a bee will usually die after stinging a human or other mammal. The stinger and its venom sac, with musculature and a ganglion allowing them to continue delivering venom after they are detached, are designed to pull free of the body when they lodge. This apparatus (including barbs on the stinger) is thought to have evolved in response to predation by vertebrates, since the barbs do not function (and the stinger apparatus does not detach) unless the stinger is embedded in elastic material. The barbs do not always \"catch\", so a bee may occasionally pull its stinger free and fly off unharmed (or sting again).\n",
"The painful stings of bees are mostly associated with the poison gland and the Dufour's gland which are abdominal exocrine glands containing various chemicals. In \"Lasioglossum leucozonium\", the Dufour's Gland mostly contains octadecanolide as well as some eicosanolide. There is also evidence of n-triscosane, n-heptacosane, and 22-docosanolide. However, the secretions of these glands could also be used for nest construction.\n\nSection::::See also.\n\nBULLET::::- Superorganism\n\nBULLET::::- Australian native bees\n\nSection::::External links.\n\nBULLET::::- All Living Things Images, identification guides, and maps of bees\n\nBULLET::::- Bee Genera of the World\n\nBULLET::::- North American species of bees at Bugguide\n",
"Being tropical, stingless bees are active all year round, although they are less active in cooler weather, with some species presenting diapause. Unlike other eusocial bees, they do not sting, but will defend by biting if their nest is disturbed. In addition, a few (in the genus \"Oxytrigona\") have mandibular secretions, including formic acid, that cause painful blisters. Despite their lack of a sting, stingless bees, being eusocial, may have very large colonies made formidable by the number of defenders.\n\nSection::::Behavior.:Hives.\n",
"Under ordinary circumstances the death (or removal) of a queen increases reproduction in workers, and a significant proportion of workers will have active ovaries in the absence of a queen. The workers of the hive produce a last batch of drones before the hive eventually collapses. Although during this period worker policing is usually absent, in certain groups of bees it continues.\n",
"In all stinging Hymenoptera the sting is a modified ovipositor. Unlike most other stings, honey bee workers' stings are strongly barbed and lodge in the flesh of mammals upon use, tearing free from the honey bee's body, killing the bee within minutes. The sting has its own ganglion, and it continues to saw into the target's flesh and release venom for several minutes. This trait is of obvious disadvantage to the individual but protects the hive from attacks by large animals; aside from the effects of the venom, the remnant also marks the stung animal with honey bee alarm pheromone. The barbs of a honey bee's attack are only suicidal if the skin is elastic, as is characteristic of vertebrates such as birds and mammals; honey bees can sting other insects repeatedly without dying. \n",
"Under ordinary circumstances the death (or removal) of a queen increases reproduction in workers, and a significant proportion of workers will have active ovaries in the absence of a queen. The workers of the hive produce a last batch of drones before the hive eventually collapses. Although during this period worker policing is usually absent, in certain groups of bees it continues.\n",
"When a honey bee stings a person, it cannot pull the barbed stinger back out. It leaves behind not only the stinger, but also part of its abdomen and digestive tract, plus muscles and nerves. This massive abdominal rupture kills the honey bee. Honey bees are the only bees to die after stinging.\n\nSection::::Treatment.\n",
"Many stingless bee colonies, those of \"S. quadripunctata\" included, are repopulated by a single queen who mates. This should, in theory, create a conflicting rift between queens and the worker bees due to variations in genetic relatedness. Queens produce haploid males that are genetically identical to them. In contrast, workers only share fifty percent (50%) of their genes with males, leading to an evolutionary conflict of interest. However, worker bees were not observed to increase their aggressive behaviors towards newly reproduced males.\n",
"While no detailed studies have been done on colony composition of \"T. iridipennis\" nests, in keeping with other stingless bee species, it is expected that the total number of males produced per colony is lower than that of workers. In some tropical species, male stingless bees stay for an extended time in nests. They then leave the nests and form aggregations.\n\nSection::::Camouflage.\n\nSection::::Camouflage.:Nest.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-10280 | How does algae grow in extremely low light to no light places? | Some stuff called "algae" isn't actually algae, but instead is heterotrophic bacteria or fungi, so that's one possibility. | [
"When \"Climacostomum virens\" is grown in the dark, the algal endosymbionts normally found in it are reduced in number and the cytoplasm appears colorless. Peck et al. report that these are not contained within a membrane, but are in direct contact with the host's cytoplasm; however, other observers have noted the presence of perialgal vacuoles around the algae.\n\nSection::::Classification.\n",
"Light is what algae primarily need for growth as it is the most limiting factor. Many companies are investing for developing systems and technologies for providing artificial light. One of them is OriginOil that has developed a Helix BioReactorTM that features a rotating vertical shaft with low-energy lights arranged in a helix pattern. Water temperature also influences the metabolic and reproductive rates of algae. Although most algae grow at low rate when the water temperature gets lower, the biomass of algal communities can get large due to the absence of grazing organisms. The modest increases in water current velocity may also affect rates of algae growth since the rate of nutrient uptake and boundary layer diffusion increases with current velocity.\n",
"In a typical algal-cultivation system, such as an open pond, light only penetrates the top of the water, though this depends on the algae density. As the algae grow and multiply, the culture becomes so dense that it blocks light from reaching deeper into the water. Direct sunlight is too strong for most algae, which can use only about the amount of light they receive from direct sunlight; however, exposing an algae culture to direct sunlight (rather than shading it) is often the best course for strong growth, as the algae underneath the surface is able to utilize more of the less intense light created from the shade of the algae above.\n",
"Nowadays 3 basic types of algae photobioreactors have to be differentiated, but the determining factor is the unifying parameter – the available intensity of sunlight energy.\n\nSection::::Frequently used photo reactor types.:Plate photobioreactor.\n",
"Origin Oils Inc. has been researching a revolutionary method called the Helix Bioreactor, altering the common closed-loop growth system. This system utilizes low energy lights in a helical pattern, enabling each algal cell to obtain the required amount of light. Sunlight can only penetrate a few inches through algal cells, making light a limiting reagent in open-pond algae farms. Each lighting element in the bioreactor is specially altered to emit specific wavelengths of light, as a full spectrum of light is not beneficial to algae growth. In fact, ultraviolet irradiation is actually detrimental as it inhibits photosynthesis, photoreduction, and the 520 nm light-dark absorbance change of algae.\n",
"Section::::Algae.\n\nAlgae can make their own nutrients through photosynthesis. Photosynthesis converts light energy to chemical energy that can be stored as nutrients. For algae to grow, they must be exposed to light because photosynthesis requires light, so algae are typically distributed evenly wherever sunlight and moderate moisture is available. Algae do not have to be directly exposed to the Sun, but can live below the soil surface given uniform temperature and moisture conditions. Algae are also capable of performing nitrogen fixation.\n\nSection::::Algae.:Types.\n",
"In one study, the photobiont was shown to occupy 7% of the volume of the thallus. The density of pigmentation of the upper cortex also varies and seems to control the amount of light reaching the algae.\n\nSection::::Reproduction and dispersal.\n",
"Xanthophyceae have been divided into the following four orders in some classification systems:\n\nBULLET::::- Order Botrydiales\n\nBULLET::::- \"Asterosiphon\"\n\nBULLET::::- \"Botrydium\"\n\nBULLET::::- Order Mischococcales\n\nBULLET::::- \"Botrydiopsis\"\n\nBULLET::::- \"Botryochloris\"\n\nBULLET::::- \"Bumilleriopsis\"\n\nBULLET::::- \"Chlorellidium\"\n\nBULLET::::- \"Mischococcus\"\n\nBULLET::::- \"Monodus\"\n\nBULLET::::- \"Ophiocytium\"\n\nBULLET::::- \"Pleurochloris\"\n\nBULLET::::- \"Pseudobumilleriopsis\"\n\nBULLET::::- \"Sphaerosorus\"\n\nBULLET::::- Order Tribonematales Pascher\n\nBULLET::::- \"Bumilleria\"\n\nBULLET::::- \"Heterococcus\"\n\nBULLET::::- \"Heterothrix\"\n\nBULLET::::- \"Tribonema\"\n\nBULLET::::- \"Xanthonema\"\n\nBULLET::::- Order Vaucheriales Nägeli ex Bohlin\n\nBULLET::::- Vaucheria\n\nSection::::Classifications.:Lüther (1899).\n\nClassification according to Lüther (1899):\n\nBULLET::::- Class Heterokontae\n\nBULLET::::- Order Chloromonadales\n\nBULLET::::- Order Confervales\n\nSection::::Classifications.:Pascher (1912).\n\nClassification according to Pascher (1912):\n\nBULLET::::- Heterokontae\n\nBULLET::::- Heterochloridales\n\nBULLET::::- Heterocapsales\n\nBULLET::::- Heterococcales\n\nBULLET::::- Heterotrichales\n\nBULLET::::- Heterosiphonales\n",
"Section::::Controls of ROS production in algae.:Light levels.\n",
"Section::::Frequently used photo reactor types.:Bubble column photobioreactor.\n\nA bubble column photo reactor consists of vertical arranged cylindrical column, made out of transparent material. The introduction of gas takes place at the bottom of the column and causes a turbulent stream to enable an optimum gas exchange. At present these types of reactors are built with a maximum diameter of 20 cm to 30 cm in order to ensure the required supply of sunlight energy.\n",
"Light availability is another factor that can affect the nutrient stoichiometry of sea grasses. Nutrient limitation can only occur when photosynthetic energy causes grasses to grow faster than the influx of new nutrients. For example, low light environments tend to have a lower C:N ratio. Alternately, high-N environments can have an indirect negative effect to sea grass growth by promoting growth of algae that reduce the total amount of available light.\n",
"BULLET::::- Phytoplankton loss rate is independent of depth and of growth rate.\n\nBULLET::::- Nutrients in the mixed layer are high enough that production is not nutrient limited\n\nBULLET::::- Daily photosynthetic production of the phytoplankton community at any depth is proportional to the mean daily light energy at that depth.\n\nIn other words, light is assumed to be the only factor that limits the growth of phytoplankton during pre-bloom months and the light a phytoplankton community is subject to is determined by the incident irradiance and the coefficient of light extinction.\n\nSection::::Critical Depth Hypothesis.:Mechanism.\n",
"The phenomenon was first discovered in unicellular green algae, and may also occur in plants. However, in these organisms it occurs by a different mechanism, which is not as well understood. The plant/algal mechanism is considered functionally analogous to the cyanobacterial mechanism but involves completely different components. The foremost difference is the presence of fundamentally different types of light-harvesting antenna complexes: plants and green algae use an intrinsically-bound membrane complex of chlorophyll a/b binding proteins for their antenna, instead of the soluble phycobilisome complexes used by cyanobacteria (and certain algae).\n",
"Other living organisms, such as cyanobacteria, purple bacteria, and heliobacteria, can exploit solar light in slightly extended spectral regions, such as the near-infrared. These bacteria live in environments such as the bottom of stagnant ponds, sediment and ocean depths. Because of their pigments, they form colorful mats of green, red and purple.\n",
"In aquatic environments, microbes constitute the base of the food web. Single celled photosynthetic organisms such as diatoms and cyanobacteria are generally the most important primary producers in the open ocean. Many of these cells, especially cyanobacteria, are too small to be captured and consumed by small crustaceans and planktonic larvae. Instead, these cells are consumed by phagotrophic protists which are readily consumed by larger organisms. Viruses can infect and break open bacterial cells and (to a lesser extent), planktonic algae (a.k.a. phytoplankton). Therefore, viruses in the microbial food web act to reduce the population of bacteria and, by lysing bacterial cells, release particulate and dissolved organic carbon (DOC). DOC may also be released into the environment by algal cells. One of the reasons phytoplankton release DOC termed \"unbalanced growth\" is when essential nutrients (e.g. nitrogen and phosphorus) are limiting. Therefore, carbon produced during photosynthesis is not used for the synthesis of proteins (and subsequent cell growth), but is limited due of a lack of the nutrients necessary for macromolecules. Excess photosynthate, or DOC is then released, or exuded.\n",
"Light intensity has been found to affect gas vesicles production and maintenance differently between different bacteria and archaea. For \"Anabaena flos-aquae\", higher light intensities leads to vesicle collapse from an increase in turgor pressure and greater accumulation of photosynthetic products. In cyanobacteria, vesicle production decreases at high light intensity due to exposure of the bacterial surface to UV radiation, which can damage the bacterial genome.\n\nSection::::Regulation.:Carbohydrates.\n",
"The light-harvesting complexes of \"Micromonas\" are distinguishable from other green algae in terms of pigment composition and stability under unfavorable conditions. It has been showed that these proteins use three different pigments for light harvesting, and they are resistant to high temperature and the presence of detergent.\n\nSection::::Cellular mechanisms.:Peptidoglycan biosynthesis.\n",
"BULLET::::- The Photoautotophic Compartment (Compartment 4):\n\nThe fourth compartment is split into two parts: the algae compartment colonised by the cyanobacteria: Arthrospira platensis and the Higher Plant (HP) compartment. These compartments are essential for the regeneration of oxygen and the production of food.\n\nSection::::Operating principle of an Artificial Ecosystem.\n",
"The species of phytoplankton present in the DCM varies with depth due to varying accessory pigmentation. Some phytoplankton species have accessory pigments, compounds that have adapted them to gather light energy from certain wavelengths of light, even in areas of low light penetration. To optimize light energy collection, phytoplankton will move to specific depths to access different wavelengths of visible light. \n",
"To adapt to low light conditions, some phytoplankton populations have been found to have increased amounts of chlorophyll counts per cell, which contributes to the formation of the DCM. Rather than an increase of overall cell numbers, seasonal light limitation or low irradiance levels can raise the individual cellular chlorophyll content. As depth increases within the mixing zone, phytoplankton must rely on having higher pigment counts (chlorophyll) to capture photic energy. Due to the higher concentration of chlorophyll in the phtoplankton present, the DCM does not predict the depth of the biomass maximum in the same region. \n",
"The colonies are located on the upper part of the foreshore, which is the least long-term covered layer of water during the tidal cycle. Colonies of \"S. roscoffensis\" are therefore in this place theoretically exposed to the longest light exposure to maximize the photosynthetic activity of micro-algae partners. Light is an essential biotic factor since the photosynthetic activity of algae \"in hospite\" is the only contribution to nutrient intake for the animals.\n",
"Phycologists typically focus on either freshwater or ocean algae, and further within those areas, either diatoms or soft algae.\n\nSection::::History of phycology.\n",
"Light attenuation factors have been shown to be quite predictive of the DCM depth, since the phytoplankton present in the region require sufficient sunlight for growth, resulting in a DCM that is generally found in the euphotic zone. However, if the phytoplankton population has adapted to lower light environments, the DCM can also be located in the aphotic zone. The high chlorophyll concentration at the DCM is due to the high number of phytoplankton that have adapted to functioning in low light conditions. \n",
"The number of thylakoids and the total thylakoid area of a chloroplast is influenced by light exposure. Shaded chloroplasts contain larger and more grana with more thylakoid membrane area than chloroplasts exposed to bright light, which have smaller and fewer grana and less thylakoid area. Thylakoid extent can change within minutes of light exposure or removal.\n\nSection::::Structure.:Thylakoid system.:Pigments and chloroplast colors.\n",
"The cell cycle is composed of the typical eukaryotic stages in M1, S Phase, G1 and G2.(11) (12) It goes through a light/dark dependent manner. (11) (12) During the night, the cells are haploid and have only one copy of the DNA.(11) During the day, the population is in S phase with two copies of DNA. (11) They go through closed mitosis. (11) G2 to M phase is regulated by nutrient factors, while the S phase is controlled through light/dark timing. \"Alexandrium\" species must be of proper size before then can enter S and G2 phase. (11) \"Alexandrium fundyense\" increases greatly in size during the G2/M phase and after mitosis, it decreases in size. (12) Other species can produce DNA through the entirety of the cell cycle. (12) Thus, researchers have said that going through G1 is light dependent and going through S phase is size dependent. (11) (12)\n"
] | [
"All algae creates energy through photosynthesis.",
"All algae is algae. "
] | [
"Some bacteria and fungi that don't require light are called algae.",
"Algae refers to algae, bacteria, or fungi. "
] | [
"false presupposition"
] | [
"All algae creates energy through photosynthesis.",
"All algae is algae. "
] | [
"false presupposition",
"false presupposition"
] | [
"Some bacteria and fungi that don't require light are called algae.",
"Algae refers to algae, bacteria, or fungi. "
] |
2018-03922 | How come hot peppers burn coming out? | From what I’ve heard, your anus is a mucus membrane similar to your lips, and when the capsaicin in the pepper (what makes hot food hot) touches those types of surfaces, it causes that irritation. I guess it’s just some leftover capsaicin that survives digestion, not all of it gets broken down because the body probably doesn’t see it as fit for using as nutrition. | [
"The substances that give chili peppers their pungency (spicy heat) when ingested or applied topically are capsaicin (8-methyl-\"N\"-vanillyl-6-nonenamide) and several related chemicals, collectively called \"capsaicinoids\". The quantity of capsaicin varies by variety, and on growing conditions. Water stressed peppers usually produce stronger pods. When a habanero plant is stressed, by absorbing low water for example, the concentration of capsaicin increases in some parts of the fruit. \n",
"Pungency is not considered a taste in the technical sense because it is carried to the brain by a different set of nerves. While taste nerves are activated when consuming foods like chili peppers, the sensation commonly interpreted as \"hot\" results from the stimulation of somatosensory fibers in the mouth. Many parts of the body with exposed membranes that lack taste receptors (such as the nasal cavity, genitals, or a wound) produce a similar sensation of heat when exposed to pungent agents.\n",
"This particular sensation, called chemesthesis, is not a taste in the technical sense, because the sensation does not arise from taste buds, and a different set of nerve fibers carry it to the brain. Foods like chili peppers activate nerve fibers directly; the sensation interpreted as \"hot\" results from the stimulation of somatosensory (pain/temperature) fibers on the tongue. Many parts of the body with exposed membranes but no taste sensors (such as the nasal cavity, under the fingernails, surface of the eye or a wound) produce a similar sensation of heat when exposed to hotness agents. Asian countries within the sphere of, mainly, Chinese, Indian, and Japanese cultural influence, often wrote of pungency as a fifth or sixth taste.\n",
"BULLET::::- Chinese style sauces such as black bean and chili.\n\nSection::::Heat.\n\nThe heat, or burning sensation, experienced when consuming hot sauce is caused by capsaicin and related capsaicinoids. The burning sensation is not \"real\" in the sense of damage being wrought on tissues. The mechanism of action is instead a chemical interaction with the neurological system.\n",
"Group C nerve fiber\n\nGroup C nerve fibers are one of three classes of nerve fiber in the central nervous system (CNS) and peripheral nervous system (PNS). The C group fibers are unmyelinated and have a small diameter and low conduction velocity, whereas Groups A and B are myelinated. Group C fibers include postganglionic fibers in the \n\nautonomic nervous system (ANS), and nerve fibers at the dorsal roots (IV fiber). These fibers carry sensory information.\n\nDamage or injury to nerve fibers causes neuropathic pain. Capsaicin activates C fibre vanilloid receptors, giving chili peppers a hot sensation.\n\nSection::::Structure and anatomy.\n",
"Although black pepper causes a similar burning sensation, it is caused by a different substance—piperine.\n\nSection::::Cuisine.\n\n\"Capsicum\" fruits and peppers can be eaten raw or cooked. Those used in cooking are generally varieties of the \"C. annuum\" and \"C. frutescens\" species, though a few others are used, as well. They are suitable for stuffing with fillings such as cheese, meat, or rice.\n",
"Capsinoid chemicals provide the distinctive tastes in \"C. annuum\" variants. In particular, capsaicin creates a burning sensation (\"hotness\"), which in extreme cases can last for several hours after ingestion. A measurement called the Scoville scale has been created to describe the hotness of peppers and other foods.\n\nSection::::Uses.:Traditional medicine.\n\nHot peppers are used in traditional medicine as well as food in Africa. English botanist John Lindley described \"C. annuum\" in his 1838 \"Flora Medica\" thus:\n\nIn Ayurveda, \"C. annuum\" is classified as follows:\n\nBULLET::::- \"Guna\" (properties) – \"ruksha\" (dry), \"laghu\" (light) and \"tikshna\" (sharp)\n\nBULLET::::- \"Rasa\" (taste) – \"katu\" (pungent)\n",
"Capsaicinoids are the chemicals responsible for the \"hot\" taste of chili peppers. They are fat soluble and therefore water will be of no assistance when countering the burn. The most effective way to relieve the burning sensation is with dairy products, such as milk and yogurt. A protein called casein occurs in dairy products which binds to the capsaicin, effectively making it less available to \"burn\" the mouth, and the milk fat helps keep it in suspension. Rice is also useful for mitigating the impact, especially when it is included with a mouthful of the hot food. These foods are typically included in the cuisine of cultures that specialise in the use of chilis. Mechanical stimulation of the mouth by chewing food will also partially mask the pain sensation.\n",
"The pungent sensation provided by chili peppers, black pepper and other spices like ginger and horseradish plays an important role in a diverse range of cuisines across the world, such as Korean, Persian, Turkish, Tunisian, Ethiopian, Hungarian, Indian, Burmese, Indonesian, Laotian, Singaporean, Malaysian, Bangladeshi, Mexican, Peruvian, Caribbean, Pakistani, Somali, Southwest Chinese (including Sichuan cuisine), Sri Lankan, Vietnamese, and Thai cuisines.\n\nSection::::Mechanism.\n",
"Capsaicin is produced by the plant as a defense against mammalian predators and microbes, in particular a fusarium fungus carried by hemipteran insects that attack certain species of chili peppers, according to one study. Peppers increased the quantity of capsaicin in proportion to the damage caused by fungal predation on the plant's seeds.\n\nSection::::Intensity.:Common peppers.\n\nA wide range of intensity is found in commonly used peppers:\n\nSection::::Intensity.:Notable hot chili peppers.\n\nSome of the world's hottest chili peppers are:\n\nSection::::Uses.\n\nSection::::Uses.:Culinary uses.\n",
"Many fresh chilies such as poblano have a tough outer skin that does not break down on cooking. Chilies are sometimes used whole or in large slices, by roasting, or other means of blistering or charring the skin, so as not to entirely cook the flesh beneath. When cooled, the skins will usually slip off easily.\n",
"Most of the capsaicin in a pungent (hot) pepper is concentrated in blisters on the epidermis of the interior ribs (septa) that divide the chambers, or locules, of the fruit to which the seeds are attached. A study on capsaicin production in fruits of \"C. chinense\" showed that capsaicinoids are produced only in the epidermal cells of the interlocular septa of pungent fruits, that blister formation only occurs as a result of capsaicinoid accumulation, and that pungency and blister formation are controlled by a single locus, \"Pun1\", for which there exist at least two recessive alleles that result in non-pungency of \"C. chinense\" fruits.\n",
"600 nm when placed in violet light. If fresh chili peppers come in contact with the skin, eyes, lips or other membranes, irritation can occur; some people who are particularly sensitive wear latex or vinyl gloves while handling peppers. If irritation does occur, washing the oils off with hot soapy water and applying vegetable oil to the skin may help. When preparing jalapeños, it is recommended that hands not come in contact with the eyes as this leads to burning and redness.\n\nSection::::Eating characteristics.:Serving methods.\n",
"The trigeminal nerve (cranial nerve V) provides information concerning the general texture of food as well as the taste-related sensations of peppery or hot (from spices).\n\nSection::::Further sensations and transmission.:Pungency (also spiciness or hotness).\n",
"Early research showed capsaicin to evoke a long-onset current in comparison to other chemical agonists, suggesting the involvement of a significant rate-limiting factor. Subsequent to this, the TRPV1 ion channel has been shown to be a member of the superfamily of TRP ion channels, and as such is now referred to as . There are a number of different TRP ion channels that have been shown to be sensitive to different ranges of temperature and probably are responsible for our range of temperature sensation. Thus, capsaicin does not actually cause a chemical burn, or indeed any direct tissue damage at all, when chili peppers are the source of exposure. The inflammation resulting from exposure to capsaicin is believed to be the result of the body's reaction to nerve excitement. For example, the mode of action of capsaicin in inducing bronchoconstriction is thought to involve stimulation of C fibers culminating in the release of neuropeptides. In essence, the body inflames tissues as if it has undergone a burn or abrasion and the resulting inflammation can cause tissue damage in cases of extreme exposure, as is the case for many substances that cause the body to trigger an inflammatory response.\n",
"To produce ajvar, bell peppers are roasted whole on a plate on an open fire, a plate of wood in a stove, or in an oven. The baked peppers must briefly cool to allow the flesh to separate from the skin. Next, the skin is carefully peeled off and the seeds are removed. The peppers are then ground in a mill or chopped into tiny pieces (this variant is often referred to as pindjur). Finally, the resulting mush is stewed for several hours in large pots. Sunflower oil is added at this stage to condense and reduce the water, and to enhance later preservation. Salt (and sometimes vinegar) is added at the end and the hot mush is poured directly into sterilized glass jars, which are sealed immediately.\n",
"The amount of capsaicin in the fruit is highly variable and dependent on genetics and environment, giving almost all types of \"Capsicum\" varied amounts of perceived heat. The most recognizable \"Capsicum\" without capsaicin is the bell pepper, a cultivar of \"Capsicum annuum\", which has a zero rating on the Scoville scale. The lack of capsaicin in bell peppers is due to a recessive gene that eliminates capsaicin and, consequently, the \"hot\" taste usually associated with the rest of the \"Capsicum\" family. There are also other peppers without capsaicin, mostly within the \"Capsicum annuum\" species, such as the cultivars Giant Marconi, Yummy Sweets, Jimmy Nardello, and Italian Frying peppers (also known as the Cubanelle).\n",
"The amount of capsaicin in hot peppers varies significantly among varieties, and is measured in Scoville heat units (SHU). The world's current hottest known pepper as rated in SHU is the 'Carolina Reaper,' which had been measured at over 2,200,000 SHU.\n\nSection::::Species and varieties.:Species list.\n\nSources:\n\nBULLET::::- \"Capsicum annuum\"\n\nBULLET::::- \"Capsicum baccatum\"\n\nBULLET::::- \"Capsicum campylopodium\"\n\nBULLET::::- \"Capsicum cardenasii\"\n\nBULLET::::- \"Capsicum ceratocalyx\"\n\nBULLET::::- \"Capsicum chacoense\"\n\nBULLET::::- \"Capsicum chinense\"\n\nBULLET::::- \"Capsicum coccineum\"\n\nBULLET::::- \"Capsicum cornutum\"\n\nBULLET::::- \"Capsicum dimorphum\"\n\nBULLET::::- \"Capsicum dusenii\"\n\nBULLET::::- \"Capsicum eximium\"\n\nBULLET::::- \"Capsicum flexuosum\"\n\nBULLET::::- \"Capsicum friburgense\" Bianch. & Barboza\n\nBULLET::::- \"Capsicum frutescens\"\n\nBULLET::::- \"Capsicum galapagoense\"\n\nBULLET::::- \"Capsicum geminifolium\"\n",
"Compared to other chillies, the jalapeño heat level varies from mild to hot depending on cultivation and preparation and can have from a few thousand to over 10,000 Scoville heat units. The number of scars on the pepper, which appear as small brown lines, called 'corking', has a positive correlation with heat level, as growing conditions which increase heat level also cause the pepper to form scars. For US consumer markets, 'corking' is considered unattractive; however, in other markets, it is a favored trait, particularly in pickled or oil preserved jalapeños.\n",
"Even the hottest peppers are not literally toxic in reasonable quantities; research has suggested that it would take around 3 pounds (1.36 kg) of the highest-Scoville peppers, like ghost pepper, to kill a 150-pound (68 kg) adult. However, there are documented fatalities from cardiac arrest caused by the pain and panic induced by pepper spray, the main ingredient of which is oleoresin capsicum, a concentrated capsaicin wax extracted from chili peppers. Allergic reactions to the substance itself, especially asthma attacks, are more common. These were also reported in the Ohio school's ghost pepper incident.\n",
"In Australia, New Zealand, South Africa, and India, heatless varieties are called \"capsicums\", while hot ones are called \"chilli\"/\"chillies\" (double L). Pepperoncini are also known as \"sweet capsicum\". The term \"bell peppers\" is almost never used, although \"C. annuum\" and other varieties which have a bell shape and are fairly hot, are often called \"bell chillies\".\n\nIn Ireland and the United Kingdom, the heatless varieties are commonly known simply as \"peppers\" (or more specifically \"green peppers\", \"red peppers\", etc.), while the hot ones are \"chilli\"/\"chillies\" (double L) or \"chilli peppers\".\n",
"Section::::Hot peppers.\n\nNew Mexico accounts for roughly 65% of all U.S. hot pepper production.\n\nSection::::Hot peppers.:Bacterial spot.\n\nBacterial spot is spread from plant to plant through water, wind, and plant contact. Once infected, the leaves of the plant are targeted by the disease. The disease causes severe spotting of the pepper and kills the leaves. This is a twofold problem because the defoliation results in the pepper being discolored by sunscald. Research for bacterial spot treatment has shown that copper sprays have been able to increase marketable yields by 50% in treated fields.\n\nSection::::Hot peppers.:Powdery mildew.\n",
"Section::::Activity of chemical constituents.\n\nThe main biologically active chemical component isolated from the leaves of \"P. colorata\" is polygodial. The chewed horopito leaf has a characteristically sharp, hot peppery taste. This is primarily due to polygodial which causes pungency on the tongue in concentrations as low as 0.1 µg.\n",
"Because of the burning sensation caused by capsaicin when it comes in contact with mucous membranes, it is commonly used in food products to provide added spice or \"heat\" (piquancy), usually in the form of spices such as chili powder and paprika. In high concentrations, capsaicin will also cause a burning effect on other sensitive areas, such as skin or eyes. The degree of heat found within a food is often measured on the Scoville scale. Because people enjoy the heat, there has long been a demand for capsaicin-spiced products like curry, chili con carne, and hot sauces such as Tabasco sauce and salsa.\n",
"The bell pepper is the only member of the genus \"Capsicum\" that does not produce capsaicin, a lipophilic chemical that can cause a strong burning sensation when it comes in contact with mucous membranes. They are thus scored in the lowest level of the Scoville scale. This absence of capsaicin is due to a recessive form of a gene that eliminates the compound and, consequently, the \"hot\" taste usually associated with the rest of the genus \"Capsicum\". This recessive gene is overwritten in the Mexibelle pepper, a hybrid variety of bell pepper that produces small amounts of capsaicin (and is thus mildly pungent). Sweet pepper cultivars produce non-pungent capsaicinoids.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-00340 | How come my nose hurts if I burp after drinking a carbonated beverage? | The air coming out of your stomach has a high concentration of CO2 (from the *carbon*ation), as this CO2 leaves your mouth it just kind of tastes different than normal air. However, when concentrated CO2 enters or leaves your nose, it burns. Why does it burn? Well in simple terms, CO2 + water = acid (carbonic acid to be exact). Acid tends to burn. | [
"Burping\n\nSection::::Causes.\n\nBULLET::::- Burping is usually caused by swallowing air when eating or drinking and subsequently expelling it, in which case the expelled gas is mainly a mixture of nitrogen and oxygen.\n\nBULLET::::- Burps can be caused by drinking beverages containing carbon dioxide, such as beer and soft drinks, in which case the expelled gas is mainly carbon dioxide.\n\nBULLET::::- Diabetes drugs such as metformin and exenatide can cause burping, especially at higher doses. This often resolves in a few weeks.\n",
"Because chemoresponsive nerve fibers are present in all types of skin, chemesthetic sensations can be aroused from anywhere on the body's surface as well as from mucosal surfaces in the nose, mouth, eyes, etc. Mucus membranes are generally more sensitive to chemesthetic stimuli because they lack the barrier function of cornified skin.\n\nMuch of the chemesthetic flavor sensations are mediated by the trigeminal nerves, which are relatively large and important nerves. Flavors that stimulate the trigeminal nerves are therefore important - for example, carbon dioxide is the trigeminal stimulant in carbonated beverages.\n",
"BULLET::::- The initial response is cellular buffering (plasma protein buffers) that occurs over minutes to hours. Cellular buffering elevates plasma bicarbonate (HCO) only slightly, approximately 1 mEq/L for each 10-mm Hg increase in \"Pa\"CO.\n\nBULLET::::- The second step is renal compensation that occurs over 3–5 days. With renal compensation, renal excretion of carbonic acid is increased and bicarbonate reabsorption is increased. For instance, PEPCK is upregulated in renal proximal tubule brush border cells, in order to secrete more NH and thus to produce more HCO.\n\nSection::::Physiological response.:Estimated changes.\n",
"The chemical reacts with moisture on the skin and in the eyes, causing a burning sensation and the immediate forceful and uncontrollable shutting of the eyes. Effects usually include tears streaming from the eyes, profuse coughing, exceptional nasal discharge that is full of mucus, burning in the eyes, eyelids, nose and throat areas, disorientation, dizziness and restricted breathing. It will also burn the skin where sweaty and/or sunburned. In highly concentrated doses, it can also induce severe coughing and vomiting. Almost all of the immediate effects wear off within an hour (such as exceptional nasal discharge and profuse coughing), although the feeling of burning and highly irritated skin may persist for hours. Affected clothing will need to be washed several times or thrown away.\n",
"Section::::Reception.:Chart performance.\n",
"With insufflation, the effects are more abrupt and intense but have a significantly shorter duration, while oral usage results in a milder, longer experience. When insufflated, the onset happens very rapidly, usually reaching the peak at about 20–40 minutes and plateauing for 2–3 hours. 2C-B is also considered one of the most painful drugs to insufflate, with users reporting intense nasal burning. The sudden intensity of the experience combined with the pain can often start the experience with a negative imprint and nausea is also increased with insufflation, compounding the issue.\n\nSection::::Pharmacology.\n",
"Gastric phytobezoars are a form of intestinal blockage and are seen in those with poor gastric motility. The preferred treatment of bezoars includes different therapies and/or fragmentation to avoid surgery. Phytobezoars are most common and consist of various undigested substances including lignin, cellulose, tannins, celery, pumpkin skin, grape skins, prunes, raisins, vegetables and fruits. Phytobezoars can form after eating persimmons and pineapples. These are more difficult to treat and are referred to as diospyrobezoars.\n\nSection::::Treatment.\n",
"Section::::Live performances.\n",
"Section::::Reception.\n\nSection::::Reception.:Critical reception.\n",
"The term \"Mace\" came into being because it was the brand-name invented by one of the first American manufacturers of CN aerosol sprays. Subsequently, in the United States, Mace became synonymous with tear-gas sprays in the same way that Kleenex has become strongly associated with facial tissues (a phenomenon known as a genericized trademark).\n\nLike CS gas, this compound irritates the mucous membranes (oral, nasal, conjunctival and tracheobronchial). Sometimes it can give rise to more generalized reactions such as syncope, temporary loss of balance and orientation. More rarely, cutaneous irritating outbreaks have been observed and allergic contact permanent dermatitis.\n",
"Section::::Background.\n",
"Inhaling butane gas can cause drowsiness, unconsciousness, asphyxia, and cardiac arrhythmia. Butane is the most commonly misused volatile solvent in the UK and caused 52% of solvent-related deaths in 2000. When butane is sprayed directly into the throat, the jet of fluid can cool rapidly to −20 °C by adiabatic expansion, causing prolonged laryngospasm. Sudden sniffing death syndrome is commonly known as SSDS.\n",
"Section::::Composition.\n",
"BULLET::::- In \"acute respiratory acidosis\", the \"Pa\"CO is elevated above the upper limit of the reference range (over 6.3 kPa or 45 mm Hg) with an accompanying acidemia (pH 7.36).\n\nBULLET::::- In \"chronic respiratory acidosis\", the \"Pa\"CO is elevated above the upper limit of the reference range, with a normal blood pH (7.35 to 7.45) or near-normal pH secondary to renal compensation and an elevated serum bicarbonate (HCO 30 mm Hg).\n\nSection::::Causes.\n\nSection::::Causes.:Acute.\n",
"When Phospho soda is used as preparation for colonoscopy, 1.5 fluid ounces (45ml), mixed with an equal amount of water or any clear liquid and followed by 8 oz of water, is taken, followed by a second dose 6 hours later (3 oz total). It will cause very loose, eventually watery stools, usually starting within an hour or so and lasting several hours.\n\nA 2007 study showed that in patients with decreased renal function, Phospho soda may worsen renal impairment compared to polyethylene glycol-based laxatives. In patients without kidney problems, no difference was observed.\n\nSection::::Litigation.\n",
"Section::::Effects.:Duration.\n\nWhen orally consumed, 2C-B has a much longer delay before the onset of effects than when it is insufflated. Oral ingestion generally takes roughly 45–75 minutes for the effects to be felt, plateau lasts 2–4 hours, and coming down lasts 1–2 hours. Rectal administration onset varies from 5–20 minutes. Insufflated onset takes 1–10 minutes for effects to be felt. The duration can last from 4 to 12 hours depending on route of administration, dose, and other factors.\n",
"In addition to Coca-Cola, meat tenderizer has been used to dissolve bezoars of the stomach. When treatment with Coca-Cola is combined with endoscopic methods, the success of treatment approaches 90%. The mechanism by which Coca-Cola dissolves the bezoar is based upon its low pH, CO bubbles, and sodium bicarbonate content. \n",
"BULLET::::- Note that the Berlin definition requires a minimum positive end expiratory pressure (PEEP) of 5 cm for consideration of the Pa/Fi ratio. This degree of PEEP may be delivered noninvasively with CPAP to diagnose mild ARDS.\n\nNote that the 2012 \"Berlin criteria\" are a modification of the prior 1994 consensus conference definitions (see \"history\").\n\nSection::::Diagnosis.:Medical imaging.\n",
"Diagnosis of alcohol sensitivity due to allergic reactivity to the allergens in alcoholic beverages can be confirmed by standard skin prick tests, skin patch tests, blood tests, challenge tests, and challenge/elimination tests as conducted for determining the allergen causing other classical allergic reactions (see allergy and Skin allergy tests.)\n\nSection::::Treatment.\n",
"Section::::Toxicity.\n\nSection::::Toxicity.:Nervous system.\n\nPrenatal exposure of B\"a\"P to rats is known to affect learning and memory in rodent models. Pregnant rats eating B\"a\"P were shown to negatively affect the brain function in the late life of their offspring; at a time when synapses are first formed and adjusted in strength by activity B\"a\"P diminished NMDA receptor-dependent nerve cell activity measured as mRNA expression of the NMDA NR2B receptor subunit.\n\nSection::::Toxicity.:Immune system.\n",
"The pragmatic challenge is to distinguish from aspiration pneumounia with an infectious component because the former does not require antibiotics while the later does. While some issues, such as a recent history of exposure to substantive toxins, can foretell the diagnosis, for a patient with dyphagia the diagnosis may be less obvious, as the dyshagic patient may have caustic gastric contents damaging the lungs which may or may not have progressed to bacterial infection. \n\nThe following tests help determine how severely the lungs are affected:\n",
"Mineral oil should not be given internally to young children, pets, or anyone with a cough, hiatus hernia, or nocturnal reflux, because it can cause complications such as lipoid pneumonia. Due to its low density, it is easily aspirated into the lungs, where it cannot be removed by the body. In children, if aspirated, the oil can work to prevent normal breathing, resulting in death of brain cells and permanent paralysis and/or brain damage.\n\nSection::::Signs and symptoms.\n\nAcute:\n\nBULLET::::- Cough\n\nBULLET::::- Difficulty Breathing\n\nBULLET::::- Abnormal lung sounds (wet, gurgling sounding breaths)\n\nBULLET::::- Chest pain, tightness or burning\n\nChronic:\n",
"A link has been shown between long-term regular cola intake and osteoporosis in older women (but not men). This was thought to be due to the presence of phosphoric acid, and the risk for women was found to be greater for sugared and caffeinated colas than diet and decaffeinated variants, with a higher intake of cola correlating with lower bone density.\n\nAt moderate concentrations phosphoric acid solutions are irritating to the skin. Contact with concentrated solutions can cause severe skin burns and permanent eye damage.\n\nSection::::See also.\n\nBULLET::::- Phosphate fertilizers, such as ammonium phosphate fertilizers\n\nSection::::External links.\n",
"BULLET::::- For a CO pressure typical for bottled carbonated drinks (formula_4 ~ 2.5 atm), we get a relatively acidic medium (pH = 3.7) with a high concentration of dissolved CO. These features contribute to the sour and sparkling taste of these drinks.\n\nBULLET::::- Between 2.5 and 10 atm, the pH crosses the p\"K\" value (3.60), giving [HCO] [HCO] at high pressures.\n\nBULLET::::- A plot of the equilibrium concentrations of these different forms of dissolved inorganic carbon (and which species is dominant) as a function of the pH of the solution is known as a Bjerrum plot.\n\nBULLET::::- Remark\n",
"Carbonated soda has been proposed for the treatment of gastric phytobezoars. In about 50% of cases studied, carbonated soda alone was found to be effective in gastric phytobezoar dissolution. Unfortunately, this treatment can result in the potential of developing small bowel obstruction in a minority of cases, necessitating surgical intervention. It is one of many other stomach disorders that can have similar symptoms.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-04857 | how were huge rope-bridge/ziplines over great distances and depths made, before the use of present technology? | The two easiest ways are... 1) have a guy on each side of the chasm. The guy on one side has a bow and arrow or something to fire the rope to the other guy, who pulls the rope to the other end. 2) have a guy at the top of the chasm and another guy on the bottom. Drop the rope, with a bunch of slack, to the bottom guy, who climbs the other side of the chasm. Repeat as necessary. | [
"Leonardo da Vinci drew sketches of a concept for a ropemaking machine, but it was never built. Nevertheless, remarkable feats of construction were accomplished without advanced technology: In 1586, Domenico Fontana erected the 327 ton obelisk on Rome's Saint Peter's Square with a concerted effort of 900 men, 75 horses, and countless pulleys and meters of rope. By the late 18th century several working machines had been built and patented.\n",
"Aerial passenger ropeways were known in Asia well before the 17th century for crossing chasms in mountainous regions. Men would traverse a woven fiber line hand over hand. Evolutionary refinement added a harness or basket to also transport cargo.\n",
"In the 1930s, as caving became increasingly popular in France, several clubs in the Alps made vertical cave exploration an outdoor sport. During World War II, a team composed of Pierre Chevalier, Fernand Petzl, Charles Petit-Didier and others explored the Dent de Crolles cave system near Grenoble, France, which became the deepest explored cave in the world (-658m) at that time. The lack of available equipment during the war forced Pierre Chevalier and the rest of the team to develop their own equipment, leading to technical innovation. The first use of single-rope technique with prusik and mechanical rope ascenders (Henri Brenot's \"monkeys\", first used by Chevalier and Brenot in a cave in 1934) can be directly traced back to the exploration of the Dent de Crolles cave system. American caver Bill Cuddington, known as \"Vertical Bill\" developed single-rope techniques in the U.S. in the late 1950s. In 1958, two Swiss alpinists, Juesi and Marti teamed together, creating the first commercially available rope ascender known as the Jumar. In 1968 Bruno Dressler asked Petzl, who worked as a metals machinist, to build a rope-ascending tool, today known as the Petzl Croll, that he had developed by adapting the Jumar for pit caving. Petzl started a small caving equipment manufacturing company Petzl, which manufactures equipment for caving, climbing, mountaineering and at-height safety in civil engineering. The rappel rack was developed in the late 1960's by cavers in the Huntsville, Alabama caving club to facilitate long descents. The evolution of mechanical ascension systems helped extend the practice and safety of pit exploration.\n",
"Section::::History.\n\nRopeways or aerial cables have been used as a method of transport in some mountainous countries for more than 2,000 years, possibly starting in China, India and Japan as early as 250 BC, remaining in use in some remote areas in China such as Nujiang (Salween) valley in Yunnan as late as 2015 before being replaced by bridges. Not all of these structures were assisted by gravity, so not all fitted the definition of the zip-line.\n\nVarious technological advances in Europe in the Middle Ages improved the ropeways, some of which were still assisted by gravity.\n",
"The first recorded mechanical ropeway was by Venetian Fausto Veranzio who designed a bi-cable passenger ropeway in 1616. The industry generally considers Dutchman Adam Wybe to have built the first operational system in 1644. The technology, which was further developed by the people living in the Alpine regions of Europe, progressed and expanded with the advent of wire rope and electric drive.\n",
"The world's first cable car on multiple supports was built by Adam Wybe in Gdańsk, Poland in 1644. It was moved by the horses and used to move soil over the river to build defences.\n\nIn Eritrea the Italians built the Asmara-Massawa Cableway in 1936, which was 75 km long. The Manizales - Mariquita Cableway (1922) in Colombia was 73 km long.\n\nConveyors can be powered by a wide variety of forms of energy, electric, engines, or gravity (particularly in mountainous mining concerns, or where running water is available). Gravity-driven conveyors may qualify as zip-lines.\n\nSection::::See also.\n\nBULLET::::- Aerial lift\n",
"When the Inca people began building a grass suspension bridge, they would first gather natural materials of grass and other vegetation. They would then braid these elements together into rope. This contribution was made by the Inca women. Vast amounts of thin-looking rope were produced. The villagers would then deliver their quota of rope to the builders. The rope was then divided into sections. Each section consisted of an amount of thin rope being laid out together in preparation to create a thicker rope cord. Once the sections are laid out, the strands of rope made earlier are twisted together tightly and evenly, producing the larger and thicker rope cord. These larger ropes are then braided together to create cables, some as thick as a human torso. Depending on the dimensions of the cable, each could weigh up to 200 pounds. These cables were then delivered to the bridge site.\n",
"Very long cables, such as those used for long-distance undersea communications, have more complex structures, but nonetheless start with similar elements. Because the distances involved are far greater, a more continuous flow process replaces the standard ropewalk, shortening the length of the walk as the runner becomes static, and the feed end becomes far more complex as it has to spin in one direction whilst laying the rope in the other. Although further waterproofing and armoured coatings are normal, the core of the rope is similar to the description.\n\nSection::::The technology of making a rope.:Tightrope ropes.\n",
"This type of bridge is known as a rope bridge due to its historical construction from rope. Inca rope bridges still are formed from native materials, chiefly rope, in some areas of South America. These rope bridges must be renewed periodically owing to the limited lifetime of the materials, and rope components are made by families as contributions to a community endeavor.\n\nSimple suspension bridges, for use by pedestrians and livestock, are still constructed, based on the ancient Inca rope bridge but using wire rope and sometimes steel or aluminium grid decking, rather than wood.\n",
"From the late 17th century, the ropewalk on the Swedish island of Lindholmen was a key component of the Karlskrona naval base producing rope up to 300 metres in length for the cordage of warships. Although production ceased in 1960, the elaborately designed facility is now open to the public with exhibitions and demonstrations of ropemaking. A similarly scaled facility in Rochefort, Charente-Maritime, France, called the Corderie Royale, is also maintained as a museum within the Centre International de la Mer.\n\nIn the 18th Century, Malta and Port Mahon, on the island of Menorca, both had open-air ropewalks.\n",
"The greatest bridges of this kind were in the Apurímac Canyon along the main road north from Cusco; a famous example spans a 148-foot gap that is supposed to be the inspiration behind Thornton Wilder's 1928 Pulitzer Prize winning novel \"The Bridge of San Luis Rey\" (1927).\n",
"Wilhelm Albert's first ropes consisted of three strands consisting of four wires each. In 1840, Scotsman Robert Stirling Newall improved the process further. In America wire rope was manufactured by John A. Roebling, starting in 1841 and forming the basis for his success in suspension bridge building. Roebling introduced a number of innovations in the design, materials and manufacture of wire rope. Ever with an ear to technology developments in mining and railroading, Josiah White and Erskine Hazard, principal owners of the Lehigh Coal & Navigation Company (LC&N Co.) — as they had with the first blast furnaces in the Lehigh Valley — built a Wire Rope factory in Mauch Chunk, Pennsylvania in 1848, which provided lift cables for the Ashley Planes project, then the back track planes of the Summit Hill & Mauch Chunk Railroad, improving its attractiveness as a premier tourism destination, and vastly improving the throughput of the coal capacity since return of cars dropped from nearly four hours to less than 20 minutes. The decades were witness to a burgeoning increase in deep shaft mining in both Europe and North America as surface mineral deposits were exhausted and miners had to chase layers along inclined layers. The era was early in railroad development and steam engines lacked sufficient tractive effort to climb steep slopes, so incline plane railways were common. This pushed development of cable hoists rapidly in the United States as surface deposits in the Anthracite Coal Region north and south dove deeper every year, and even the rich deposits in the Panther Creek Valley required LC&N Co. to drive their first shafts into lower slopes beginning Lansford and its Schuylkill County twin-town Coaldale.\n",
"In South America, Inca rope bridges predate the arrival of the Spanish in the Andes in the 16th century. The oldest known suspension bridge, reported from ruins, dates from the 7th century in Central America (see Maya Bridge at Yaxchilan).\n",
"The first recorded mechanical ropeway was by Venetian Fausto Veranzio who designed a bicable passenger ropeway in 1616. The industry generally considers Dutchman Adam Wybe to have built the first operational system in 1644. The technology, which was further developed by the people living in the Alpine regions of Europe, progressed rapidly and expanded due to the advent of wire rope and electric drive. World War I motivated extensive use of military tramways for warfare between Italy and Austria.\n\nSection::::History.:First chairlifts.\n",
"Since such cables or ropes cannot be handled and, therefore, have no practical field of application, it cannot be assumed that any ropemaker in antiquity has ever produced such a cable. That alone is sufficient to discard the occasional opinion that the ropes had been produced and delivered in manageable lengths and had been spliced together on the spot.\n",
"From the 16th to the 19th century, the craft of rope making remained pretty much unchanged. There were many steps involved in rope making, and each of these steps was done by hand with the aid of simple tools. In the first half of the 19th century the Dutch shipbuilding industry was booming. Many businesses, including the roperies, benefited from this favourable environment.\n",
"Locomotion, propulsion and steel building were the big topics of this phase. The early predecessors of MAN were responsible for numerous technological innovations. The success of the early MAN entrepreneurs and engineers like Heinrich Gottfried Gerber, was based on a great openness towards new technologies. They constructed the Wuppertal monorail (\"Wuppertaler Schwebebahn\") and the first spectacular steel bridges like the Großhesseloher Brücke in Munich in 1857 and the Müngsten railway bridge between 1893 and 1897.\n",
"In Boston in the Massachusetts Colony, some early rope making businesses were called 'ropewalks'.\n\nJalan Pintal Tali which is in one of the older, central parts of George Town, Penang, Malaysia, literally means \"rope-twisting street\".\n",
" From 1834 until 1854, when the Pennsylvania Railroad Company finished a competing line, the Allegheny Portage Railroad made continuous boat traffic possible over the Allegheny Mountains between the Juniata and Western Division Canals. It followed a route that included 11 levels, 10 inclined planes fitted with stationary engines that could raise and lower boats and cargo, a , viaduct over the Little Conemaugh River, and many bridges. Infrastructure included 153 drains and culverts. The railroad climbed from the eastern canal basin at Hollidaysburg and from the western basin at Johnstown. At its summit, the railroad reached an elevation of above sea level.\n",
"Rope stretcher\n\nIn ancient Egypt, a rope stretcher (or harpedonaptai) was a surveyor who measured real property demarcations and foundations using knotted cords, stretched so the rope did not sag. The practice is depicted in tomb paintings of the Theban Necropolis. Rope stretchers used 3-4-5 triangles and the plummet, which are still in use by modern surveyors.\n",
"The ancient Egyptians were probably the first civilization to develop special tools to make rope. Egyptian rope dates back to 4000 to 3500 BC and was generally made of water reed fibres. Other rope in antiquity was made from the fibres of date palms, flax, grass, papyrus, leather, or animal hair. The use of such ropes pulled by thousands of workers allowed the Egyptians to move the heavy stones required to build their monuments. Starting from approximately 2800 BC, rope made of hemp fibres was in use in China. Rope and the craft of rope making spread throughout Asia, India, and Europe over the next several thousand years.\n",
"The spans of ancient structures are short. It would have been easy for somebody to tie a long rope between two poles and in this way create a very long ancient span. However, the ancient people had no reason to do this, and if they did, it is not documented and therefore not in this timeline. Only with the discovery of electricity and radio communication did people have a reason for tying a wire between two poles, thus creating the simplest form of long spans.\n",
"Inca rope bridge\n\nInca rope bridges are simple suspension bridges over canyons and gorges and rivers (\"pongos\") constructed by the Inca Empire. The bridges were an integral part of the Inca road system and exemplify Inca innovation in engineering. Bridges of this type were useful since the Inca people did not use wheeled transport – traffic was limited to pedestrians and livestock – and they were frequently used by Chasqui runners delivering messages throughout the Inca Empire.\n\nSection::::Construction and maintenance.\n\nThe bridges were constructed using ichu grass woven into large bundles which were very strong.\n",
"The majority of inclines were used in industrial settings, predominantly in quarries and mines, or to ship bulk goods over a barrier ridgeline as the Allegheny Portage Railroad and the Ashley Planes feeder railway shipped coal from the Pennsylvania Canal/Susquehanna basin via Mountain Top to the Lehigh Canal in the Delaware River Basin. The Welsh slate industry made extensive use of gravity balance and water balance inclines to connect quarry galleries and underground chambers with the mills where slate was processed. Examples of substantial inclines were found in the quarries feeding the Ffestiniog Railway, the Talyllyn Railway and the Corris Railway amongst others.\n",
"Many ropewalks were in the open air, while others were covered only by roofs. Ropewalks historically were harsh sweatshops, and frequently caught fire, as hemp dust ignites easily and burns fiercely. Rope was essential in sailing ships and the standard length for a British Naval Rope was . A sailing ship such as required of rope.\n\nSection::::The technology of making a rope.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-03583 | How does someone get the flu if no one else has the flu? | In a global world, someone always has a strain of the flu. That means it can mutate and return in a few years to re-infect people it had hit before. There are multiple strains in circulation, so you can get a new strain every year if you're unlucky/sickly. Completely new strains jump from animals to humans occasionally, so "new" strains usually start on farms - remember bird flu and swine flu? | [
"Any living organism can contract a virus by giving parasites the opportunity to grow. Parasites feed on the nutrients of another organism which allows the virus to thrive. Once the human body detects a virus, it then creates fighter cells that attack the parasite/virus; literally, causing a war within the body. A virus can affect any part of the body causing a wide range of illnesses such as the flu, the common cold, and sexually transmitted diseases. The flu is an airborne virus that travels through tiny droplets and is formally known as Influenza. Parasites travel through the air and attack the human respiratory system. People that are initially infected with this virus pass infection on by normal day to day activity such as talking and sneezing. When a person comes in contact with the virus, unlike the common cold, the flu virus affects people almost immediately. Symptoms of this virus are very similar to the common cold but much worse. Body aches, sore throat, headache, cold sweats, muscle aches and fatigue are among the many symptoms accompanied by the virus. A viral infection in the upper respiratory tract results in the common cold. With symptoms like sore throat, sneezing, small fever, and a cough, the common cold is usually harmless and tends to clear up within a week or so. The common cold is also a virus that is spread through the air but can also be passed through direct contact. This infection takes a few days to develop symptoms; it is a gradual process unlike the flu.\n",
"Influenza spreads between humans when infected people cough or sneeze, then other people breathe in the virus or touch something with the virus on it and then touch their own face. \"Avoid touching your eyes, nose or mouth. Germs spread this way.\" Swine flu cannot be spread by pork products, since the virus is not transmitted through food. The swine flu in humans is most contagious during the first five days of the illness, although some people, most commonly children, can remain contagious for up to ten days. Diagnosis can be made by sending a specimen, collected during the first five days, for analysis.\n",
"Three of the four types of influenza viruses affect humans: Type A, Type B, and Type C. Type D has not been known to infect humans, but is believed to have the potential to do so. Usually, the virus is spread through the air from coughs or sneezes. This is believed to occur mostly over relatively short distances. It can also be spread by touching surfaces contaminated by the virus and then touching the mouth or eyes. A person may be infectious to others both before and during the time they are showing symptoms. The infection may be confirmed by testing the throat, sputum, or nose for the virus. A number of rapid tests are available; however, people may still have the infection even if the results are negative. A type of polymerase chain reaction that detects the virus's RNA is more accurate.\n",
"Typically, influenza is transmitted from infected mammals through the air by coughs or sneezes, creating aerosols containing the virus, and from infected birds through their droppings. Influenza can also be transmitted by saliva, nasal secretions, feces and blood. Healthy individuals can become infected if they breathe in a virus-laden aerosol directly, or if they touch their eyes, nose or mouth after touching any of the aforementioned bodily fluids (or surfaces contaminated with those fluids). Flu viruses can remain infectious for about one week at human body temperature, over 30 days at 0 °C (32 °F), and indefinitely at very low temperatures (such as lakes in northeast Siberia). Most influenza strains can be inactivated easily by disinfectants and detergents.\n",
"Section::::Signs and symptoms.\n\nApproximately 33% of people with influenza are asymptomatic.\n\nSymptoms of influenza can start quite suddenly one to two days after infection. Usually the first symptoms are chills and body aches, but fever is also common early in the infection, with body temperatures ranging from 38 to 39°C (approximately 100 to 103°F). Many people are so ill that they are confined to bed for several days, with aches and pains throughout their bodies, which are worse in their backs and legs.\n\nSection::::Signs and symptoms.:Symptoms of influenza.\n\nBULLET::::- Fever and chills\n\nBULLET::::- Cough\n\nBULLET::::- Nasal congestion\n\nBULLET::::- Runny nose\n",
"During the mid-20th century, identification of influenza subtypes became possible, allowing accurate diagnosis of transmission to humans. Since then, only 50 such transmissions have been confirmed. These strains of swine flu rarely pass from human to human. Symptoms of zoonotic swine flu in humans are similar to those of influenza and of influenza-like illness in general, namely chills, fever, sore throat, muscle pains, severe headache, coughing, weakness, and general discomfort. The recommended time of isolation is about five days.\n\nSection::::Notable incidents.\n\nSection::::Notable incidents.:Spanish flu.\n",
"It can be difficult to distinguish between the common cold and influenza in the early stages of these infections. Influenza symptoms are a mixture of symptoms of common cold and pneumonia, body ache, headache, and fatigue. Diarrhea is not usually a symptom of influenza in adults, although it has been seen in some human cases of the H5N1 \"bird flu\" and can be a symptom in children. The symptoms most reliably seen in influenza are shown in the adjacent table.\n",
"Human flu symptoms usually include fever, cough, sore throat, muscle aches, conjunctivitis and, in severe cases, severe breathing problems and pneumonia that may be fatal. The severity of the infection will depend in large part on the state of the infected person's immune system and if the victim has been exposed to the strain before, and is therefore partially immune. Recent follow up studies on the impact of statins on influenza virus replication show that pre-treatment of cells with atorvastatin suppresses virus growth in culture. \n",
"Section::::Diseases.\n\nMost strains of \"H. influenzae\" are opportunistic pathogens; that is, they usually live in their host without causing disease, but cause problems only when other factors (such as a viral infection, reduced immune function or chronically inflamed tissues, e.g. from allergies) create an opportunity. They infect the host by sticking to the host cell using trimeric autotransporter adhesins.\n",
"Samples are respiratory samples, usually collected by a physician, nurse, or assistant, and sent to a hospital laboratory for preliminary testing. There are several methods of collecting a respiratory sample, depending on requirements of the laboratory that will test the sample. A sample may be obtained from around the nose simply by wiping with a dry cotton swab.\n\nSection::::Causes.:Other causes.\n\nInfectious diseases causing ILI include malaria, acute HIV/AIDS infection, herpes, hepatitis C, Lyme disease, rabies, myocarditis, Q fever, dengue fever, poliomyelitis, pneumonia, measles, and many others.\n",
"Section::::Replication cycle.\n\nTypically, influenza is transmitted from infected mammals through the air by coughs or sneezes, creating aerosols containing the virus, and from infected birds through their droppings. Influenza can also be transmitted by saliva, nasal secretions, feces and blood. Infections occur through contact with these bodily fluids or with contaminated surfaces. Out of a host, flu viruses can remain infectious for about one week at human body temperature, over 30 days at , and indefinitely at very low temperatures (such as lakes in northeast Siberia). They can be inactivated easily by disinfectants and detergents.\n",
"Influenza, commonly known as the flu, is an infectious disease caused by an influenza virus. Symptoms can be mild to severe. The most common symptoms include: high fever, runny nose, sore throat, muscle pains, headache, coughing, sneezing, and feeling tired. These symptoms typically begin two days after exposure to the virus and most last less than a week. The cough, however, may last for more than two weeks. In children, there may be diarrhea and vomiting, but these are not common in adults. Diarrhea and vomiting occur more commonly in gastroenteritis, which is an unrelated disease and sometimes inaccurately referred to as \"stomach flu\" or the \"24-hour flu\". Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure.\n",
"Section::::Signs and symptoms.\n\nIn general, humans who catch a humanized influenza A virus (a human flu virus of type A) usually have symptoms that include fever, cough, sore throat, muscle aches, conjunctivitis, and, in severe cases, breathing problems and pneumonia that may be fatal. The severity of the infection depends in large part on the state of the infected persons' immune systems and whether they had been exposed to the strain before (in which case they would be partially immune). No one knows if these or other symptoms will be the symptoms of a humanized H5N1 flu.\n",
"Influenza in humans is subject to clinical surveillance by a global network of more than 110 National Influenza Centers. These centers receive samples obtained from patients diagnosed with ILI, and test the samples for the presence of an influenza virus. Not all patients diagnosed with ILI are tested, and not all test results are reported. Samples are selected for testing based on severity of ILI, and as part of routine sampling, and at participating surveillance clinics and laboratories. The United States has a general surveillance program, a border surveillance program, and a hospital surveillance program, all devoted to finding new outbreaks of influenza.\n",
"The viruses that cause HFMD are spread through close personal contact, through the air from coughing and the feces of an infected person. Contaminated objects can also spread the disease. Coxsackievirus A16 is the most common cause, and enterovirus 71 is the second-most common cause. Other strains of coxsackievirus and enterovirus can also be responsible. Some people may carry and pass on the virus despite having no symptoms of disease. Other animals are not involved. Diagnosis can often be made based on symptoms. Occasionally, a throat or stool sample may be tested for the virus.\n",
"Influenza can be spread in three main ways: by direct transmission (when an infected person sneezes mucus directly into the eyes, nose or mouth of another person); the airborne route (when someone inhales the aerosols produced by an infected person coughing, sneezing or spitting) and through hand-to-eye, hand-to-nose, or hand-to-mouth transmission, either from contaminated surfaces or from direct personal contact such as a handshake. The relative importance of these three modes of transmission is unclear, and they may all contribute to the spread of the virus. In the airborne route, the droplets that are small enough for people to inhale are 0.5 to 5µm in diameter and inhaling just one droplet might be enough to cause an infection. Although a single sneeze releases up to 40,000 droplets, most of these droplets are quite large and will quickly settle out of the air. How long influenza survives in airborne droplets seems to be influenced by the levels of humidity and UV radiation, with low humidity and a lack of sunlight in winter aiding its survival.\n",
"Section::::Cultural references.\n",
"According to the World Health Organization: \"Every winter, tens of millions of people get the flu. Most are only ill and out of work for a week, yet the elderly are at a higher risk of death from the illness. We know the worldwide death toll exceeds a few hundred thousand people a year, but even in developed countries the numbers are uncertain, because medical authorities don't usually verify who actually died of influenza and who died of a flu-like illness.\" Even healthy people can be affected, and serious problems from influenza can happen at any age. People over 65 years old, pregnant women, very young children and people of any age with chronic medical conditions are more likely to get complications from influenza, such as pneumonia, bronchitis, sinus, and ear infections.\n",
"Section::::Reception.\n\nSection::::Reception.:Ratings.\n",
"When an infected person sneezes or coughs more than half a million virus particles can be spread to those close by. In otherwise healthy adults, influenza virus shedding (the time during which a person might be infectious to another person) increases sharply one-half to one day after infection, peaks on day 2 and persists for an average total duration of 5 days—but can persist as long as 9 days. In those who develop symptoms from experimental infection (only 67% of healthy experimentally infected individuals), symptoms and viral shedding show a similar pattern, but with viral shedding preceding illness by one day. Children are much more infectious than adults and shed virus from just before they develop symptoms until two weeks after infection. In immunocompromised people, viral shedding can continue for longer than two weeks.\n",
"Influenza differs from the common cold as it is caused by a different group of viruses, and its symptoms tend to be more severe and to last longer. Infection usually lasts for about a week, and is characterized by sudden onset of high fever, aching muscles, headache and severe malaise, non-productive cough, sore throat and rhinitis. Symptoms usually peak after two or three days.\n",
"Section::::Mechanism.:Pathophysiology.\n\nThe mechanisms by which influenza infection causes symptoms in humans have been studied intensively. One of the mechanisms is believed to be the inhibition of adrenocorticotropic hormone (ACTH) resulting in lowered cortisol levels.\n\nKnowing which genes are carried by a particular strain can help predict how well it will infect humans and how severe this infection will be (that is, predict the strain's pathophysiology).\n",
"Influenza, commonly known as the flu, is an infectious disease of birds and mammals caused by an RNA virus of the family Orthomyxoviridae (the influenza viruses). In humans, common symptoms of influenza infection are fever, sore throat, muscle pains, severe headache, coughing, and weakness and fatigue. In more serious cases, influenza causes pneumonia, which can be fatal, particularly in young children and the elderly. While sometimes confused with the common cold, influenza is a much more severe disease and is caused by a different type of virus. Although nausea and vomiting can be produced, especially in children, these symptoms are more characteristic of the unrelated gastroenteritis, which is sometimes called \"stomach flu\" or \"24-hour flu.\"\n",
"Cases of swine flu have been reported in India, with over 31,156 positive test cases and 1,841 deaths up to March 2015.\n\nSection::::Signs and symptoms.\n",
"The specific combination of fever and cough has been found to be the best predictor; diagnostic accuracy increases with a body temperature above 38°C (100.4°F). Two decision analysis studies suggest that \"during local outbreaks\" of influenza, the prevalence will be over 70%. Even in the absence of a local outbreak, diagnosis may be justified in the elderly during the influenza season as long as the prevalence is over 15%.\n"
] | [
"There can be a time when nobody has the flu."
] | [
"Someone always has the flu even if they are not symptomatic. "
] | [
"false presupposition"
] | [
"There can be a time when nobody has the flu."
] | [
"false presupposition"
] | [
"Someone always has the flu even if they are not symptomatic. "
] |
2018-20462 | Why is it if you put helium in a balloon it floats but put it in a tank it gets heavier? | A balloon expands, a tank (hopefully) does not. Helium is lighter than atmospheric air at the same pressure, but only by a factor of 5 or so, so if you put 6 times atmospheric pressure in a tank, that'll make it heavier than the surrounding air. | [
"Since the \"Hindenburg\" disaster in 1937, helium has replaced hydrogen as a lifting gas in blimps and balloons due to its lightness and incombustibility, despite an 8.6% decrease in buoyancy.\n",
"The effects of buoyancy do not just affect balloons; both liquids and gases are fluids in the physical sciences, and when all macrosize objects larger than dust particles are immersed in fluids on Earth, they have some degree of buoyancy. In the case of either a swimmer floating in a pool or a balloon floating in air, buoyancy can fully counter the gravitational weight of the object being weighed, for a weighing device in the pool. However, as noted, an object supported by a fluid is fundamentally no different from an object supported by a sling or cable—the weight has merely been transferred to another location, not made to disappear.\n",
"When rubber or plastic balloons are filled with helium so that they float, they typically retain their buoyancy for only a day or so, sometimes longer. The enclosed helium atoms escape through small pores in the latex which are larger than the helium atoms. Balloons filled with air usually hold their size and shape much longer, sometimes for up to a week.\n",
"and the buoyant force for one m of hydrogen in air at sea level is:\n\nTherefore, the amount of mass that can be lifted by helium in air at sea level is:\n\nand the buoyant force for one m of helium in air at sea level is:\n\nThus hydrogen's additional buoyancy compared to helium is:\n",
"BULLET::::- The diffusion issue shared with Hydrogen (though, as Helium's molecular radius is smaller, it diffuses through more materials than Hydrogen).\n\nBULLET::::- Helium is expensive.\n",
"Because of its low molecular weight, helium enters and leaves tissues more rapidly than nitrogen as the pressure is increased or reduced (this is called on-gassing and off-gassing). Because of its lower solubility, helium does not load tissues as heavily as nitrogen, but at the same time the tissues can not support as high an amount of helium when super-saturated. In effect, helium is a faster gas to saturate and desaturate, which is a distinct advantage in saturation diving, but less so in bounce diving, where the increased rate of off-gassing is largely counterbalanced by the equivalently increased rate of on-gassing.\n",
"Section::::Buoyancy compensation.\n\nWith a rigid airship two main strategies are pursued to avoid the venting of lifting gas:\n\nBULLET::::- 1. The use of a fuel with the same density as air and therefore no increase in buoyancy caused by consumption.\n\nBULLET::::- 2. Adding water as ballast by extraction during the trip.\n\nSection::::Buoyancy compensation.:Fuel with a density close to air.\n\nOnly gasses have a density similar or equal to the air.\n\nSection::::Buoyancy compensation.:Fuel with a density close to air.:Hydrogen.\n",
"Barkeepers often do not talk about density, but call fluids 'lighter' and 'heavier' or refer to 'specific gravity', which means the same. If two identical volumes of fluids are compared, the denser one weighs more than the lighter one.\n\nSection::::Floating Liqueurs in practice.\n",
"The height to which a balloon rises tends to be stable. As a balloon rises it tends to increase in volume with reducing atmospheric pressure, but the balloon itself does not expand as much as the air on which it rides. The average density of the balloon decreases less than that of the surrounding air. The weight of the displaced air is reduced. A rising balloon stops rising when it and the displaced air are equal in weight. Similarly, a sinking balloon tends to stop sinking.\n\nSection::::Fluids and objects.:Compressible objects.:Divers.\n",
"Section::::Fluids and objects.\n\nThe atmosphere's density depends upon altitude. As an airship rises in the atmosphere, its buoyancy decreases as the density of the surrounding air decreases. In contrast, as a submarine expels water from its buoyancy tanks, it rises because its volume is constant (the volume of water it displaces if it is fully submerged) while its mass is decreased.\n\nSection::::Fluids and objects.:Compressible objects.\n",
"This calculation is at sea level at 0 °C. For higher altitudes, or higher temperatures, the amount of lift will decrease proportionally to the air density, but the ratio of the lifting capability of hydrogen to that of helium will remain the same. This calculation does not include the mass of the envelope need to hold the lifting gas.\n\nSection::::High-altitude ballooning.\n",
"A common helium-filled toy balloon is something familiar to many. When such a balloon is fully filled with helium, it has buoyancy—a force that opposes gravity. When a toy balloon becomes partially deflated, it often becomes neutrally buoyant and can float about the house a meter or two off the floor. In such a state, there are moments when the balloon is neither rising nor falling and—in the sense that a scale placed under it has no force applied to it—is, in a sense perfectly weightless (actually as noted below, weight has merely been redistributed along the Earth's surface so it cannot be measured). Though the rubber comprising the balloon has a mass of only a few grams, which might be almost unnoticeable, the rubber still retains all its mass when inflated.\n",
"BULLET::::- Because the hydrogen molecule is very small, it can easily diffuse through many materials such as latex, so that the balloon will deflate quickly. This is one reason that many hydrogen or helium filled balloons are constructed out of Mylar/BoPET.\n\nSection::::Gases theoretically suitable for lifting.:Helium.\n\nHelium is the second lightest gas. For that reason, it is an attractive gas for lifting as well. Small size of helium molecules increases its lifting value.\n\nA major advantage is that this gas is noncombustible. But the use of helium has some disadvantages, too:\n",
"Nitrogen gas (density 1.251 g/L at STP, average atomic mass 28.00 g/mol) is about 3% lighter than air, insufficient for common use as a lifting gas.\n\nSection::::Hydrogen versus helium.\n\nHydrogen and helium are the most commonly used lift gases. Although helium is twice as heavy as (diatomic) hydrogen, they are both significantly lighter than air, making this difference negligible.\n\nThe lifting power in air of hydrogen and helium can be calculated using the theory of buoyancy as follows:\n",
"In a practical dirigible design, the difference is significant, making a 50% difference in the fuel-carrying capacity of the dirigible and hence increasing its range significantly. However, hydrogen is extremely flammable and its use as a lifting gas in dirigibles has decreased since the Hindenburg disaster. Helium is safer as a lifting gas because it is inert and does not undergo combustion. \n\nSection::::Gases theoretically suitable for lifting.:Water vapor.\n",
"The mass of \"weightless\" (neutrally buoyant) balloons can be better appreciated with much larger hot air balloons. Although no effort is required to counter their weight when they are hovering over the ground (when they can often be within one hundred newtons of zero weight), the inertia associated with their appreciable mass of several hundred kilograms or more can knock fully grown men off their feet when the balloon's basket is moving horizontally over the ground.\n",
"Hydrogen (density 0.090 g/L at STP, average molecular mass 2.016 g/mol) and helium (density 0.179 g/L at STP, average molecular mass 4.003 g/mol) are the most commonly used lift gases. Although helium is twice as heavy as (diatomic) hydrogen, they are both much lighter than air that this difference only results in hydrogen having 8% more buoyancy than helium.\n",
"The \"USS Shenandoah (ZR-1)\" (1923–25) was the first airship with ballast water recovered from the condensation of exhaust gas. Prominent vertical slots in the airship's hull acted as exhaust condensers. A similar system was used on her sister ship, \"USS Akron (ZRS-4)\". The German-made \"USS Los Angeles (ZR-3)\" was also fitted with exhaust gas coolers to prevent jettisoning of the costly helium.\n\nSection::::Buoyancy compensation.:Lifting gas temperature.\n",
"A simple method for calculating the mass of a volume of gas is to calculate the mass at STP, at which densities for gases are available. The mass of each component gas is calculated for the volume of that component calculated using the gas fraction for that component.\n\nExample: Twin 12l cylinders filled with Trimix 20/30/50 to 232bar at 20°C (293K)\n\nCalculate volume at 1.013 bar, 0%deg;C (273K)\n\nOf this,\n\nThe mass of the helium is a small part of the total. and density of oxygen and nitrogen are fairly similar.\n",
"So, for example, a typical design for a minimum mass tank to hold helium (as a pressurant gas) on a rocket would use a spherical chamber for a minimum shape constant, carbon fiber for best possible formula_10, and very cold helium for best possible formula_11.\n\nSection::::Design.:Stress in thin-walled pressure vessels.\n\nStress in a shallow-walled pressure vessel in the shape of a sphere is\n",
"The hydrogen nucleus contains just one proton. Its isotope deuterium, or heavy hydrogen, contains a proton and a neutron. Helium contains two protons and two neutrons, and carbon, nitrogen and oxygen - six, seven and eight of each particle, respectively. However, a helium nucleus weighs less than the sum of the weights of the two heavy hydrogen nuclei which combine to make it. The same is true for carbon, nitrogen and oxygen. For example, the carbon nucleus is slightly lighter than three helium nuclei, which can combine to make a carbon nucleus. This difference is known as the mass defect.\n",
"and drops to zero as \"r\" increases. This behavior is well known to anyone who has blown up a balloon: a large force is required at the start, but after the balloon expands (to a radius larger than \"r\"), less force is needed for continued inflation.\n\nSection::::Why does the larger balloon expand?\n",
"Helium II also exhibits a creeping effect. When a surface extends past the level of helium II, the helium II moves along the surface, against the force of gravity. Helium II will escape from a vessel that is not sealed by creeping along the sides until it reaches a warmer region where it evaporates. It moves in a 30 nm-thick film regardless of surface material. This film is called a Rollin film and is named after the man who first characterized this trait, Bernard V. Rollin. As a result of this creeping behavior and helium II's ability to leak rapidly through tiny openings, it is very difficult to confine liquid helium. Unless the container is carefully constructed, the helium II will creep along the surfaces and through valves until it reaches somewhere warmer, where it will evaporate. Waves propagating across a Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the restoring force is the van der Waals force. These waves are known as \"third sound\".\n",
"Neon is monatomic, making it lighter than the molecules of diatomic nitrogen and oxygen which form the bulk of Earth's atmosphere; a balloon filled with neon will rise in air, albeit more slowly than a helium balloon.\n",
"Methane (density 0.716 g/L at STP, average molecular mass 16.04 g/mol), the main component of natural gas, is sometimes used as a lift gas when hydrogen and helium are not available. It has the advantage of not leaking through balloon walls as rapidly as the smaller molecules of hydrogen and helium. Many lighter-than-air balloons are made of aluminized plastic that limits such leakage; hydrogen and helium leak rapidly through latex balloons. However, methane is highly flammable and like hydrogen is not appropriate for use in passenger-carrying airships. It is also relatively dense and a potent greenhouse gas.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-04956 | why are sloths so slow? | Because they evolved to be. For the sloth, being slow is an advantage. Because they are slow, predators rarely spot them. Because they are slow, they burn very little energy. As a result of that, they can get by eating food that isn't very high in energy and therefore is eaten by few other animals. So slow leads to few predators and little competition for food. It works. | [
"BULLET::::- Sloths - The Sloths wear coats that make them slow. But when the coats are removed, the sloths can move fast. Sloth King and his army attacked hidden kingdom because they thought humans were cutting down trees and they thought they would have nowhere to live.\n",
"Maned sloths rarely descend from the trees because, when on a level surface, they are unable to stand and walk, only being able to drag themselves along with their front legs and claws. They travel to the ground only to defecate or to move between trees when they cannot do so through the branches. The sloth's main defenses are to stay still and to lash out with its formidable claws. It can swim well but does not move well on the ground.\n\nSection::::Life history.\n",
"The tamanduas are nocturnal, active at night and secreting away in hollow tree trunks and burrows abandoned by other animals during daylight hours. They can spend more than half of their time in the treetops, as much as 64%, where they forage for arboreal ants and termites. Tamanduas move rather awkwardly on the ground and are incapable of galloping like their relative, the giant anteater. Tamanduas walk on the sides on their clenched forefeet to avoid injuring their palms with their sharp claws.\n",
"Section::::Behaviour and ecology.:Dietary habits.\n",
"Section::::Characteristics.\n",
"Two-toed sloths have a diverse diet of insects, carrion, fruits, leaves and small lizards, ranging over up to 140 hectares. Three-toed sloths, on the other hand, have a limited diet of leaves from only a few trees, and no mammal digests as slowly.\n",
"The tamandua's small eyes afford limited vision. Instead of relying on their sense of sight, they primarily utilize their senses of smell and hearing to locate their insect prey. They use their sharp claws and powerful forearms to tear open the nest of a colony of termites and employ their elongated tongues, coated with sticky saliva, to extract the insects.\n\nSection::::Conservation.\n",
"Section::::Behaviour and ecology.:Reproduction.\n",
"Section::::Biology.:Reproduction.\n",
"Section::::Behavior.\n\nUnlike the two-toed sloth, three-toed sloths are agile swimmers. They are still slow in trees. The offspring cling to their mother's bellies for around nine months. They cannot walk on all four limbs, so they must use their front arms and claws to drag themselves across the rainforest floor.\n",
"The ancient Xenarthra included a much greater variety of species than today. Ancient sloths were not arboreal but dwelled on land, and were known to reach sizes that rival those of elephants, as was the case for Megatherium.\n",
"Adult males have a total head-body length of , with a tail about long and a weight of . Females are generally larger, measuring , and weighing . Like all other sloths, the maned sloth has very little muscle mass in comparison to other mammals its size. This reduced muscle mass allows it to hang from thin branches.\n\nSection::::Ecology and behavior.\n",
"Sloths move only when necessary and even then very slowly. They usually move at an average speed of per minute, but can move at a marginally higher speed of , if they are in immediate danger from a predator. While they sometimes sit on top of branches, they usually eat, sleep, and even give birth hanging from branches. They sometimes remain hanging from branches even after death. On the ground, the maximum speed of sloths is per minute. Sloths are surprisingly strong swimmers and can reach speeds of per minute. They use their long arms to paddle through the water and can cross rivers and swim between islands. Sloths can reduce their already slow metabolism even further and slow their heart rate to less than a third of normal, allowing them to hold their breath underwater for up to 40 minutes.\n",
"Brown-throated sloths inhabit the high canopy of the forest, where they eat young leaves from a wide range of different trees. They do not travel far, with home ranges of only around , depending on the local environment. Within a typical, range, a brown-throated sloth will visit around 40 trees, and may specialise on one particular species, even spending up to 20% of its time in a single specific tree. Thus, although the species are generalists, individual sloths may feed on a relatively narrow range of leaf types.\n",
"Colugos are unskilled climbers; they lack opposable thumbs and are not especially strong. They progress up trees in a series of slow hops, gripping onto the bark with their small, sharp claws. Colugos spend most of the day curled up in tree hollows or hanging inconspicuously under branches. At night, colugos spend most of their time up in the trees foraging, with gliding being used to either find another foraging tree or to find possible mates and protect territory.\n",
"Section::::Extinction.\n",
"Maned sloths are solitary diurnal animals, spending from 60–80% of their day asleep, with the rest more or less equally divided between feeding and travelling. Sloths sleep in crotches of trees or by dangling from branches by their legs and tucking their head in between their forelegs.\n",
"Section::::Hunting of ground sloths.:Advantages.\n\nCertain characteristics and behavioral traits of the ground sloths made them easy targets for human hunting and provided hunter-gatherers with strong incentives to hunt these large mammals.\n",
"Northern tamanduas are mainly nocturnal, but are also often active during the day, and spend only around 40% of their time in the trees. They are active for about eight hours each day, spending the rest of the time sheltering in hollow trees. They are solitary animals, occupying home ranges of between 25 and 70 ha (62 and 170 ac). Known predators include jaguars and harpy eagles.\n",
"Aside from the color changes and visually striking beard, the two tamarins essentially have the same body structure. They are very small, compared to most other primates. Using their claws, they cling to tree branches, maintaining a consistent verticality in the jungle environment. To navigate their lush environment, which typically is in rainforests, they leap and move quickly through trees, rarely touching the forest floor.\n\nSection::::Habitat.\n",
"Maned sloths are folivores, and feed exclusively on tree and liana leaves, especially \"Cecropia\". Although individual animals seem to prefer leaves from particular species of tree, the species as a whole is able to adapt to a wide range of tree types. Younger leaves are preferred to older, and tree leaves are preferred to liana leaves. Individual maned sloths have reported to travel over a home range of , with estimated population densities of .\n",
"They inhabit the dense undergrowth of tropical forests. With the exception of \"T. minor\", they are primarily terrestrial and forage on the forest floor, usually below . Since they are rarely seen crossing wide roads, populations likely are negatively affected by fragmentation of forests caused by logging operations.\n\nSection::::Ecology and behaviour.\n\nEarly naturalists described wild-caught captive \"Tupaia\" specimens as restless, nervous, and rapidly reacting to sounds and movements. Their auditory sensitivity is highly developed as the broad frequency range of their hearing reaches far into the ultrasonic.\n",
"Section::::Ecology and behavior.\n",
"Section::::Biology.\n\nSection::::Biology.:Morphology and anatomy.\n\nSloths can be long and, depending on species, weigh from . Two-toed sloths are slightly larger. Sloths have long limbs and rounded heads with tiny ears. Three-toed sloths also have stubby tails about long.\n\nSloths are unusual among mammals in not having seven cervical vertebrae. Two-toed sloths have five to seven, while three-toed sloths have eight or nine. The other mammal not having seven is the Manatee, with six.\n\nSection::::Biology.:Physiology.\n",
"Section::::Physical description.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-05075 | How does remote job execution work? | VNC is actually a very roundabout way of asking another computer to do work. Here's what happens: * You connect to the remote VNC environment. As part of your connection, you will tell the host what kind of display you can support, how much data you can receive, and other parameters of interest. You will also let the host know what kind of mouse and keyboard you have connected. * Your VNC clilent and the VNC host you're connecting to have been written to understand a special language, the Remote Frame Buffer protocol, or RFB. * The RFB describes how to communicate various kinds of data — changes to things on the screen, mouse clicks, keyboard strokes, and so on. * When the connection is established, the host sends you _frames_ — snapshots of the current state of the host's view of the screen. Your client understands how to take these frames and decompress them so that they're viewable by a human. * Similarly, if you move the mouse or type on the keyboard, your client also knows how to encode those movements or keystrokes to send them to the remote host. Again, both you and the host know how to speak RFB, so you know how to represent these ideas. * Your remote host will execute any keystrokes or mouse movements according to the rules laid out in RFB. * This continues until you terminate the connection. Of course, you don't need to run VNC at all to ask a computer to remotely do something on your behalf. For instance, when I click "save" on this post, I'm sending a command to Reddit's servers and asking them to add this comment! | [
"The terms Remote Batch, Remote Job System and Remote Job Processing are also used for RJE facilities.\n\nSection::::Examples.\n\nRemote Job Entry (RJE) is also the name of an OS/360 component that provided RJE services. An RJE workstation operator may have complete console control of the job flow between the workstation and mainframe, depending on local configuration and policy.\n\nConversational Remote Job Entry (CRJE) is a component of OS/360 and OS/VS1 that provides job submission, job retrieval and editing for a user at an interactive terminal.\n",
"Remote job entry\n\nRemote job entry is the procedure for sending requests for data processing tasks or 'jobs' to mainframe computers from remote workstations, and by extension the process of receiving the output from such tasks at a remote workstation.\n\nThe RJE workstation is called a remote because it usually is located some distance from the host computer. The workstation connects to the host through a modem or local area network (LAN). Today this is known as the client–server model, and RJE is an early form of a request–response architecture.\n",
"BULLET::::- Near real-time processing can be initiated for individual operations. Near real-time triggers can be generated on the fly in response to external events such as the arrival of incoming files.\n\nBULLET::::- Monitoring and manual intervention can be carried out from the command line or with a built-in web-based graphical user interface.\n\nSection::::Description.:Key features.\n\nBULLET::::- Job chains, which can be seen as an assembly line on which multiple job nodes are passed. Each job in a job chain makes up a step in the processing of the chain.\n",
"Remote desktop sharing is accomplished through a common client/server model. The client, or VNC viewer, is installed on a local computer and then connects via a network to a server component, which is installed on the remote computer. In a typical VNC session, all keystrokes and mouse clicks are registered as if the client were actually performing tasks on the end-user machine.\n\nThe target computer in a remote desktop scenario is still able to access all of its core functions. Many of these core functions, including the main clipboard, can be shared between the target computer and remote desktop client.\n",
"Microsoft Remote Web Workplace\n\nThe Remote Web Workplace is a feature of Microsoft's Windows Small Business Server, Windows Home Server 2011, and the midsize business-focused product, Windows Essential Business Server, which enables existing users to log into a front-end network-facing interface of the small business/home server.\n",
"BULLET::::5. according to Environment tag, a job can be moved from the input queue to the routing queue, waiting for another JEM cluster which will fetch and execute it\n\nBULLET::::6. if JCL validation is unsuccessful, job is moved into output queue\n\nWhen a job is moved into output queue, the \"submitter\" will receive a \"job ended\" notification (via topic).\n\nSection::::Overview.:File-systems.\n",
"BULLET::::- Synchronous Mirroring where I/O completion is only returned when the remote site acknowledges the completion. Applicable for shorter distances (200 km)\n\nBULLET::::- Asynchronous Mirroring where I/O completion is returned before the remote site has acknowledged the completion. Applicable for much greater distances (200 km)\n\nBULLET::::- Point-In-Time Snapshots to copy or clone data for diverse uses\n\nBULLET::::- When combined with thin provisioning, enables space-efficient snapshots\n\nSection::::Block virtualization.:Pooling.\n",
"BULLET::::1. when a job is submitted for execution by a \"submitter\", it's moved to preinput queue: while there, JCL is validated (JCL validation is done by a cluster's node)\n\nBULLET::::2. after successful JCL validation, job is moved to input queue, waiting for job execution\n\nBULLET::::3. according to Domain and Affinity tags, job is run on an appropriate node and moved to running queue\n\nBULLET::::4. after job ends, it is moved to output queue\n",
"Section::::Architecture.:Terminal Server.\n",
"BULLET::::6. Finally, the server stub calls the server procedure. The reply traces the same steps in the reverse direction.\n\nSection::::Standard contact mechanisms.\n\nTo let different clients access servers, a number of standardized RPC systems have been created. Most of these use an interface description language (IDL) to let various platforms call the RPC. The IDL files can then be used to generate code to interface between the client and servers.\n\nSection::::Analogues.\n\nNotable RPC implementations and analogues include:\n\nSection::::Analogues.:Language-specific.\n\nBULLET::::- Java's Java Remote Method Invocation (Java RMI) API provides similar functionality to standard Unix RPC methods.\n",
"Distributed job processing in JobServer is enabled using an agent model where remote nodes communicate with a central pair (primary/secondary) of master nodes. The master nodes are responsible for the job scheduling and distribute the job processing across a cluster of agent nodes.\n\nSection::::Mesos clustering.\n",
"This provides a mechanism by which non-Java applications in an external address space may make inbound calls to a target WOLA-enabled EJB in a remote WAS instance, either on another z/OS LPAR or a distributed WAS platform. The same supplied WOLA proxy application installed in a local WAS z/OS instance is required to handle the initial cross-memory WOLA call and forward that to the named target EJB on the remote WAS instance. The following picture illustrates the topology:\n",
"Some organisations use a hybrid client model partway between centralized computing and conventional desktop computing, in which some applications (such as web browsers) are run locally, while other applications (such as critical business systems) are run on the terminal server. One way to implement this is simply by running remote desktop software on a standard desktop computer.\n\nSection::::Hosted computing model.\n",
"BULLET::::- on the local machine, open a terminal window\n\nBULLET::::- use ssh with the X forwarding argument to connect to the remote machine\n\nBULLET::::- request local display/input service (e.g., export DISPLAY=\"[user's machine]\":0 if not using SSH with X forwarding enabled)\n\nThe remote X client application will then make a connection to the user's local X server, providing display and input to the user.\n\nAlternatively, the local machine may run a small program that connects to the remote machine and starts the client application.\n\nPractical examples of remote clients include:\n",
"Remote evaluation belongs to the family of mobile code, within the field of code mobility. An example for remote evaluation is grid computing: An executable task may be sent to a specific computer in the grid. After the execution has terminated, the result is sent back to the client. The client in turn may have to reassemble the different results of multiple concurrently calculated subtasks into one single result.\n\nSection::::See also.\n\nBULLET::::- Client-side scripting, the client executing code sent by the server, instead of the server executing code sent by the client\n\nBULLET::::- Code on demand\n\nBULLET::::- Code mobility\n",
"The SOS GmbH and the JobScheduler were recognized in 2012 with selection by the Gartner IT research and advisory company for their Magic Quadrant report on the worldwide workload automation market. The JobScheduler was described as \"... attractive for organizations with an open-source tool adoption policy.\"\n\nSection::::Description.\n\nSection::::Description.:Architecture.\n\nBULLET::::- The JobScheduler can be configured to run as a standalone application.\n\nBULLET::::- The JobScheduler implements a master / agent architecture to run jobs on the master and on agents that are deployed to remote computers.\n",
"As time sharing systems developed, interactive job control emerged. An end-user in a time sharing system could submit a job interactively from his remote terminal (remote job entry), communicate with the operators to warn them of special requirements, and query the system as to its progress. He could assign a priority to the job, and terminate (kill) it if desired. He could also, naturally, run a job in the foreground, where he would be able to communicate directly with the executing program. During interactive execution he could interrupt the job and let it continue in the background or kill it. This development of interactive computing in a multitasking environment led to the development of the modern shell.\n",
"A task in DrQueue is composed of multiple jobs all of which require a script which is distributed to the \"slave\" nodes of the cluster by the \"master\". The master acts as a central server, where all tasks are stored. The slave software is run on each node in the cluster and it reports its status back to the master periodically.\n",
"BULLET::::2. Then the office PC logs into a file server where the needed information is stored.\n\nBULLET::::3. The remote PC takes control of the office PC's monitor and keyboard, allowing the remote user to view and manipulate information, execute commands, and exchange files.\n\nMany computer manufacturers and large businesses' help desks use this service widely for technical troubleshooting of their customers' problems. Therefore you can find various professional first-party, third-party, open source, and freeware remote desktop applications. Which some of those are cross-platform across various versions of Windows, macOS, UNIX, and Linux. Remote desktop programs may include LogMeIn or TeamViewer. \n",
"While not as commonly used, analog outputs may be included to control devices that require varying quantities, such as graphic recording instruments (strip charts). Summed or processed data quantities may be generated in a master SCADA system and output for display locally or remotely, wherever needed.\n\nSection::::Architecture.:Software and logic control.\n",
"Terminal Server is managed by the \"Terminal Server Manager\" Microsoft Management Console snap-in. It can be used to configure the sign in requirements, as well as to enforce a single instance of remote session. It can also be configured by using Group Policy or Windows Management Instrumentation. It is, however, not available in client versions of Windows OS, where the server is pre-configured to allow only one session and enforce the rights of the user account on the remote session, without any customization.\n\nSection::::Architecture.:Remote Desktop Gateway.\n",
"Section::::Current activities.:System of systems control.\n\nA common thread in all of RACE’s activities is ‘system of systems’ control. JET, ITER, DEMO and ESS are examples of complex systems reliant on efficient, collaborative operation of multiple robotic devices.\n",
"Remote access can also be explained as remote control of a computer by using another device connected via the internet or another network. This is widely used by many computer manufacturers and large businesses' help desks for technical troubleshooting of their customers' problems.\n\nRemote desktop software captures the mouse and keyboard inputs from the local computer (client) and sends them to the remote computer (server).\n",
"Version 1.1 (released August 20, 2002) introduced the ability to schedule remote tasks.\n",
"Section::::Uses.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
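The 2018-05075 answer above describes remote execution as a request and response loop: the client encodes an action in a protocol both ends understand, the host carries the action out, and the result travels back over the same connection. The sketch below illustrates that shape in Python as a rough analogy only; it uses a made-up single-command text exchange rather than the real RFB wire format, and the host address, port, and function names are assumptions for illustration.

```python
# Rough sketch of request/response remote execution (illustrative only; this is a
# made-up line protocol, NOT the RFB protocol that real VNC clients and servers speak).
import socket
import subprocess
import threading

HOST, PORT = "127.0.0.1", 5001  # assumed address and port for the sketch


def serve_once(listener: socket.socket) -> None:
    """Host side: accept one connection, run the received command, send back its output."""
    conn, _ = listener.accept()
    with conn:
        command = conn.recv(4096).decode().strip()               # decode the client's request
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)  # execute it on the host
        conn.sendall(result.stdout.encode())                     # return the result


def run_remotely(command: str) -> str:
    """Client side: encode a command, send it to the host, and read the reply."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(command.encode())
        sock.shutdown(socket.SHUT_WR)                            # signal that the request is complete
        return sock.recv(65536).decode()


if __name__ == "__main__":
    listener = socket.create_server((HOST, PORT))                # bind before the client connects
    threading.Thread(target=serve_once, args=(listener,), daemon=True).start()
    print(run_remotely("echo hello from the host"))              # prints: hello from the host
    listener.close()
```

Real systems such as VNC or mainframe RJE differ in what gets encoded (framebuffer updates and keystrokes, or whole batch jobs, rather than shell commands), but the connect, encode, execute, and reply cycle is the same basic pattern.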
2018-16515 | Why does fruit at the bottom of a container (like strawberries) or fruit basket go bad first? | As fruit ages, it releases chemicals into the air that receptors on the surface of fruit use to trigger ripening/decomposition. This is how fruit are able to ripen at about the same time in a field/orchard even if they began growing at different times. When you pack a bunch of fruit into a container, these chemicals will build up at the bottom causing that fruit to ripen faster. Separate from all this, moisture also tends to build up at the bottom of a container, providing a habitat for algae, fungus and mold spores. As an aside, bananas give off way more of these ripening chemicals than most fruits, and so you can stick unripe fruit in a paper bag with a ripe banana to get them to ripen faster. The paper bag is important as it prevents the buildup of moisture. As a result you get the ripening without the mold. | [
"Fruits and vegetables are very susceptible to mechanical injury. This can occur at any stage of the marketing chain and can result from poor harvesting practices such as the use of dirty cutting knives; unsuitable containers used at harvest time or during the marketing process, e.g. containers that can be easily squashed or have splintered wood, sharp edges or poor nailing; overpacking or underpacking of containers; and careless handling of containers. Resultant damage can include splitting of fruits, internal bruising, superficial grazing, and crushing of soft produce. Poor handling can thus result in development of entry points for moulds and bacteria, increased water loss, and an increased respiration rate.\n",
"Different fruit have different ripening stages. In tomatoes the ripening stages are:\n\nBULLET::::- Green: When the surface of the tomato is completely green\n\nBULLET::::- Breaker: When less than 10% of the surface is red\n\nBULLET::::- Turning: When less than 30% of the surface is red (but no less than 10%)\n\nBULLET::::- Pink: When less than 60% of the surface is red (but no less than 30%)\n\nBULLET::::- Light Red: When less than 90% of the surface is red (but no less than 60%)\n\nBULLET::::- Red: When the surface is nearly completely red.\n\nSection::::List of ripening and non-ripening fruits.\n",
"Section::::Characteristics.\n\nTomatoes have a short shelf-life in which they remain firm and ripe. This lifetime may be shorter than the time needed for them to reach market when shipped from winter growing areas to markets in the north, and the softening process can also lead to more of the fruit being damaged during transit.\n",
"Fruit is picked early in the day in order to minimize water loss and to prevent high heat exposure, which would be damaging. The fruit is then carefully placed into either plastic crates or bamboo baskets and taken to packaging houses, where the fruit undergo a series of checks for standards. The packaging houses are well-ventilated and shaded to prevent further decay. The process of checking and sorting are performed by workers instead of machinery. Any fruit that are split, under ripe, or decaying are disposed of. The remaining healthy fruit are then prepped and shipped to markets.\n",
"Section::::Sixteen ton weight.\n",
"Section::::Environment.\n",
"On tomato, can see that there are sunken in dark spots. As the disease continues to develop can begin to see spots that are rotting. The pathogen can infect both green and ripe fruit; spots are not evident on green right away, but over time they develop. (Dillard, 1987). Symptoms are most common on the fruit, but they may also appear on the stem, leaves, and roots.\n",
"Section::::Horticulture.\n\n\"Botrytis cinerea\" affects many other plants. It is economically important on soft fruits such as strawberries and bulb crops. Unlike wine grapes, the affected strawberries are not edible and are discarded. To minimize infection in strawberry fields, good ventilation around the berries is important to prevent moisture being trapped among leaves and berries. A number of bacteria have been proven to act as natural antagonists to \"B. cinerea\" in controlled studies.\n\nIn greenhouse horticulture, \"Botrytis cinerea\" is well known as a cause of considerable damage in tomatoes.\n",
"BULLET::::- Cinta Transportada de Fruticias (Strawberry Conveyor Belt): Forty strawberries are transported on a tilted conveyor belt. A Participante must catch them one by one and place them in the box. To make things difficult, a chosen fellow Participante places distractions on the conveyor belt along with the strawberries, such as gunk and pieces of raw meat. The two Participantes switch roles so both would have a chance to try both sides. The one who places the most strawberries in the box is exempted from competing in the final stunt.\n",
"Section::::Evaluating ripeness.:Acid level.:pH level.\n",
"BULLET::::- The upright style is the most common; it exudes a feeling of stability and gravity. In this style, the primary stem is about as long as the diameter and depth of the container combined, with the secondary stem being around two-thirds and the ornamental stem about half the length of the primary branch.\n",
"Produce can be damaged when exposed to extremes of temperature. Levels of tolerance to low temperatures are importance when cool storage is envisaged. All produce will freeze at temperatures between 0 and -2 degrees Celsius. Although a few commodities are tolerant of slight freezing, bad temperature control in storage can lead to significant losses.\n\nSome fruits and vegetables are also susceptible to contaminants introduced after harvest by use of contaminated field boxes; dirty water used for washing produce before packing; decaying, rejected produce lying around packing houses; and unhealthy produce contaminating healthy produce in the same packages.\n",
"Section::::In sugarcane.:Why a ratoon crop ripens earlier than its corresponding plant crop.\n\nA ratoon crop ripens earlier, in general, by at least one to one and a half months or so due to: early development of shoots, maintenance of relatively lesser N content in index tissues and rapid run-out of N during grand growth phase and relatively higher inorganic non-sugars in its juice.\n\nSection::::In sugarcane.:Poor ratoon crops due to low temperature harvest.\n",
"Section::::Evaluating ripeness.:Acid level.:Balancing sugar, acidity and pH.\n",
"The harvesting and cleaning process has not changed substantially over time. The delicate strawberries are still harvested by hand. Grading and packing often occurs in the field, rather than in a processing facility. In large operations, strawberries are cleaned by means of water streams and shaking conveyor belts.\n\nSection::::Cultivation.:Pests.\n",
"The amount of fermentable sugars is often low and needs to be supplemented by a process called chaptalization in order to have sufficient alcohol levels in the finished wine. Sucrose is often added so that there is sufficient sugar to ferment to completion while keeping the level of acidity acceptable. If the specific gravity of the initial solution is too high, indicating an excess of sugar, water or acidulated water may be added to adjust the specific gravity down to the winemaker's target range.\n",
"BULLET::::- M.2: Produces a semidwarf to semistandard freestanding tree, depending on scion variety. Trees are strong, crop well, and do not have collar rot problems.\n\nBULLET::::- M.7: Produces a semidwarf tree of Class 6 that is freestanding in deep well drained soils but in rocky, steep, or shallow soils, it tends to lean. The rootstock may sucker profusely and is susceptible to collar rot(Phytophthora).\n",
"Section::::Evaluating ripeness.:Acid level.\n",
"Section::::Hosts and Symptoms.\n\nOne of the most well-known diseases caused by \"Ceratocystis paradoxa\" is Black rot or stem-end rot of pineapple, but it can also infect tropical fruit plants such as banana and coconuts as well as sugarcane. The pathogen infects the fruits through wounds or other openings after harvest has already happened and the fruit is fresh. This is because the time to processing takes too long.\n",
"Section::::Factors influencing when ripeness occurs.\n",
"Section::::Evaluating ripeness.:Must weight.\n",
"Section::::Management.\n",
"The pathogen \"Ceratocystis paradoxa\" is the teleomorph stage of the inoculation and is uncommon in the natural environment. This is because the primary disease observed is caused by the anamorph stage which is due to \"Thielaviopsis paradoxa\". Chlamydospores are the overwinter stage of the pathogen. Because pineapples are grown using pieces of fruit previously harvested pineapples, these chlamydospores can be present and can start the inoculation early on. If they are not present in the planting, then they must infect the wounds or natural openings on harvested pineapple.\n",
"In viticulture, growers want to avoid any part of the cordon from touching the ground because of the vine's natural inclination to send out suckers or basal shoots and take root in that area where the cordon is touching the ground. Ever since the phylloxera epidemic of the 19th century, many vines are grafted on phylloxera resistant rootstock. However, the \"top part\" of the grafted vine is still very susceptible to the phylloxera and should a part of that vine take root both the daughter and the original mother vine will risk being infected by the louse. Additionally this daughter vine will leech resources of water and nutrients from the mother vine which can diminish the quality of both vines' grape production.\n",
"Further, a cultivar that is resistant to one disease may be more susceptible to another that is equally important. A lettuce cultivar that is resistant to mosaic virus may be sensitive to corky root disease, whilst another that resists corky root may be vulnerable to downy mildew (\"Brim lactic\").br\n\nAnother drawback to resistance is that depending on the host pathogen system, resistance is sometimes not long lasting as new pathogen strains quickly develop, and further research and breeding is constantly needed.\n\nSection::::Resistance and immunity.:Availability of resistant varieties.\n"
] | [
"Fruit at the bottom of a container or fruit basket go bad first."
] | [
"Fruit at the bottom of a container or fruit basket ripen faster as chemicals will build up at the bottom."
] | [
"false presupposition"
] | [
"Fruit at the bottom of a container or fruit basket go bad first."
] | [
"false presupposition"
] | [
"Fruit at the bottom of a container or fruit basket ripen faster as chemicals will build up at the bottom."
] |
2018-00262 | Why does different cheese taste different and what factors go into the taste? | Species of animal has an effect on the taste of cheese. The following factors have a profound effect on the milk: breed of the species, feed of the animals, and season of the year. These are some of the main factors which affect the milk. For example: Fontina cheese is produced in Italy; there is a special breed of cows which are fed certain feed, and that is how they keep the authentic name brand by producing consistent Fontina cheese. So are Parmesan and Romano, very specific and very precise, and their names are protected by law. Raw milk cheese versus pasteurized milk cheese: this factor alone yields a different product even if you keep all the other variables the same. Cultures: This is the heart of cheese making; a minute variation in bacterial cultures may give you a totally different product. For example, Swiss cheese is made with a specific culture of bacteria. Activity of the culture: If the culture is slow or very fast in growth, it is a sure recipe for different products. Super cleanliness: Small contamination with coliforms can wreak havoc, and small contamination with bacteriophage may give you what is called a dead vat; it basically kills the culture and nothing happens. Enzymes: Some cheeses are made with additional enzymes to develop a typical characteristic flavor. Color: Some cheeses are made with added color out of nostalgia. Coagulating agents: There are mainly two types of coagulating agent, enzymatic and acid, and both produce different types of cheeses. For example, Ricotta is acid coagulated and Cheddar is enzyme coagulated (commercial name Rennet). Over the last 45 to 50 years coagulating enzymes have made extreme headway, and now there are several sources Rennet can come from; in olden times the one and only source was the 4th stomach of a suckling calf. Some cheeses such as Blue/Camembert are mold ripened: these cheeses are inoculated with mold spores and are called mold ripened cheeses. Temperature controls: A small variation in temperature control could make a different product. Washing the curd: Some cheeses require washing the curd with water to slow the active culture and produce the exact product you want. Size of the cut: The size of the cut of the curd has a profound effect on the final product. Cooking temperature: Cooking temperature is extremely important to firm up the curd, and each cheese variety has a specific temperature. Speed of stirring: Speed of stirring is of prime importance; too fast will shatter the curd, and too slow will mat the curd. Draining: Speed of draining the whey/liquid portion. Cheddaring, or turning the curd in slabs: Some cheeses, for example Cheddar cheese, have a proper sequence for turning the curd. Self draining or forced draining: Classic examples are Feta and Ricotta, which are self drained, whereas Cheddar is force drained. Forming, hoops, pressure per square inch in the press, and its duration: These are very precise controls and have a profound effect on the end results. Salting, either dry salting or brine (liquid salt solution): For example, Feta has to be brine salted, Cheddar is dry salted, and Mozzarella is brine salted but not sold in brine, whereas Feta is mostly sold in brine solution. A few percentage points up or down may give you a different product. Curing and ripening: Temperature control, humidity and air circulation are three extremely important factors, and the fourth is that some cheeses have to be flipped at certain times.
Waxing or vacuum packing: Each cheese has its own way of packing; for example, Romano and Parmesan are waxed, while Cheddar is cut from bigger blocks and vacuum packed. URL_0 | [
"Over a thousand types of cheese from various countries are produced. Their styles, textures and flavors depend on the origin of the milk (including the animal's diet), whether they have been pasteurized, the butterfat content, the bacteria and mold, the processing, and aging. Herbs, spices, or wood smoke may be used as flavoring agents. The yellow to red color of many cheeses, such as Red Leicester, is produced by adding annatto. Other ingredients may be added to some cheeses, such as black pepper, garlic, chives or cranberries.\n",
"Cheesemakers choose starter cultures to give a cheese its specific characteristics. Also, if the cheesemaker intends to make a mould-ripened cheese such as Stilton, Roquefort or Camembert, mould spores (fungal spores) may be added to the milk in the cheese vat or can be added later to the cheese curd.\n\nSection::::Process.:Coagulation.\n",
"Types of cheese\n\nTypes of cheese are grouped or classified according to criteria such as length of fermentation, texture, methods of production, fat content, animal milk, and country or region of origin. The method most commonly and traditionally used is based on moisture content, which is then further narrowed down by fat content and curing or ripening methods. The criteria may either be used singly or in combination, with no single method being universally used.\n",
"BULLET::::- Scimudin – Lombardy\n\nBULLET::::- Scimut\n\nBULLET::::- Scodellato\n\nBULLET::::- Secondo sale\n\nBULLET::::- Seras – lower Aosta Valley; cows’ milk cheese known since 1267 and often eaten with polenta\n\nBULLET::::- Seré (see Seras)\n\nBULLET::::- Seirass (see Seras)\n\nBULLET::::- Semicotto\n\nBULLET::::- Semitenero loiano\n\nBULLET::::- Semuda\n\nBULLET::::- Sigarot\n\nBULLET::::- Silandro – South Tyrol\n\nBULLET::::- Silter – Lombardy\n\nBULLET::::- Shtalp\n\nBULLET::::- Smorzasoel\n\nBULLET::::- Soera (Sola della Valcasotto) – Piedmont\n\nBULLET::::- Sola – Piedmont\n\nBULLET::::- Sora\n\nBULLET::::- Sot la Trape – Friuli Venezia Giulia\n\nBULLET::::- Sottocenere al tartufo\n\nBULLET::::- Spalèm – Lombardy\n\nBULLET::::- Spessa – Trentino\n\nBULLET::::- Spress – Piedmont\n",
"BULLET::::- Fodòm\n\nBULLET::::- Fondue – Aosta Valley, Piedmont\n\nBULLET::::- Fontal – Trentino\n\nBULLET::::- Fontina – DOP – Aosta Valley\n\nBULLET::::- Formadi – Friuli Venezia Giulia\n\nBULLET::::- Formaggella – Piedmont, Lombardy\n\nBULLET::::- Formaggello spazzacamino\n\nBULLET::::- Formaggetta\n\nBULLET::::- Formaggina\n\nBULLET::::- Formaggio'\n\nBULLET::::- Formaggiola caprina\n\nBULLET::::- Formaggiu ri capra\n\nBULLET::::- Formai\n\nBULLET::::- Formaio embriago – Veneto\n\nBULLET::::- Furmaggitt di Montevecchia – Lombardy\n\nBULLET::::- Furmaggiu du quagliu\n\nBULLET::::- Furmai\n\nBULLET::::- Formazza\n\nBULLET::::- Formella del Friuli – Friuli Venezia Giulia\n\nBULLET::::- Frachet – Piedmont\n\nBULLET::::- Fresa – Sardinia\n\nBULLET::::- Frico balacia – Friuli Venezia Giulia\n\nBULLET::::- Frue\n\nSection::::G.\n\nBULLET::::- Galbanino\n\nBULLET::::- Garda Tremosine\n",
"BULLET::::- Pecorino\n\nBULLET::::- Pecorino di Carmasciano\n\nBULLET::::- Pecorino Romano\n\nBULLET::::- Pecorino Sardo\n\nBULLET::::- Pecorino Siciliano\n\nBULLET::::- Pecorino Toscano\n\nBULLET::::- Pepato\n\nBULLET::::- Picón Bejes-Tresviso\n\nBULLET::::- Ricotta\n\nBULLET::::- Robiola\n\nBULLET::::- Roncal cheese\n\nBULLET::::- Roquefort\n\nBULLET::::- Saloio\n\nBULLET::::- Šar cheese\n\nBULLET::::- Serra da Estrela cheese\n\nBULLET::::- Serpa cheese\n\nBULLET::::- Sirene\n\nBULLET::::- St James\n\nBULLET::::- Sussex Slipcote\n\nBULLET::::- Telemea\n\nBULLET::::- Testouri\n\nBULLET::::- Torta del Casar\n\nBULLET::::- Tzfat cheese\n\nBULLET::::- Van herbed cheese\n\nBULLET::::- Vlašić cheese\n\nBULLET::::- Wensleydale cheese (though most Wensleydale cheese is made from cow's milk)\n\nBULLET::::- Wigmore\n\nBULLET::::- Xynomizithra\n\nBULLET::::- Xynotyro\n\nBULLET::::- Zamorano cheese\n\nSection::::See also.\n\nBULLET::::- List of cheeses\n",
"The main factor in categorizing these cheeses is age. Fresh cheeses without additional preservatives can spoil in a matter of days.\n\nFor these simplest cheeses, milk is curdled and drained, with little other processing. Examples include cottage cheese, cream cheese, curd cheese, farmer cheese, caș, chhena, fromage blanc, queso fresco, paneer, and fresh goat's milk chèvre. Such cheeses are often soft and spreadable, with a mild flavour.\n",
"Some cheeses are categorized by the source of the milk used to produce them or by the added fat content of the milk from which they are produced. While most of the world's commercially available cheese is made from cow's milk, many parts of the world also produce cheese from goats and sheep. Examples include Roquefort (produced in France) and Pecorino (produced in Italy) from ewe's milk. One farm in Sweden also produces cheese from moose's milk. Sometimes cheeses marketed under the same name are made from milk of different animal—feta cheeses, for example, are made from sheep's milk in Greece.\n",
"BULLET::::- Pannarello\n\nBULLET::::- Pannerone Lodigiano – Lodi, Lombardy\n\nBULLET::::- Parmigiano-Reggiano – DOP – Emilia-Romagna, Lombardy\n\nBULLET::::- Pastore\n\nBULLET::::- Pastorella del Cerreto di Sorano\n\nBULLET::::- Pastorino\n\nBULLET::::- Pecora\n\nBULLET::::- Pecoricco – Apulia\n\nBULLET::::- Pecorini – Calabria\n\nBULLET::::- Pecorino – sheep's-milk cheese\n\nBULLET::::- Pepato\n\nBULLET::::- Peretta – Sardinia\n\nBULLET::::- Perlanera\n\nBULLET::::- Pettirosso \"Tipo Norcia\"\n\nBULLET::::- Piacentinu or Piacentino\n\nBULLET::::- Piacentinu di Enna or Piacentino ennese – Sicily\n\nBULLET::::- Piattone\n\nBULLET::::- Piave – DOP – Veneto\n\nBULLET::::- Piddiato\n\nBULLET::::- Pierino\n\nBULLET::::- Pioda S.Maria\n\nBULLET::::- Piodino\n\nBULLET::::- Piramide\n\nBULLET::::- Piscedda\n\nBULLET::::- Pirittas\n\nBULLET::::- Pojna enfumegada (see Poina enfumegada) – Trentino\n",
"BULLET::::- Uglichsky a hard cheese made of cow's milk\n\nBULLET::::- Yaroslavsky a hard cow's milk cheese, usually produced in rounds; with a slightly sour taste\n\nBULLET::::- Zakusochny a soft blue cow's milk cheese\n\nSection::::Europe.:Serbia.\n\nBULLET::::- Sremski\n\nBULLET::::- Zlatarski PDO\n\nBULLET::::- Sjenički\n\nBULLET::::- Svrljiški Belmuz\n\nBULLET::::- Krivovirski Kačkavalj\n\nBULLET::::- Homoljski ovčiji (Homolje sheep cheese)\n\nBULLET::::- Homoljski kozji (Homolje goat cheese)\n\nBULLET::::- Homoljski kravlji (Homolje cow cheese)\n\nBULLET::::- Pirotski Kačkavalj\n\nBULLET::::- Lužnička Vurda\n\nBULLET::::- Užički Kajmak\n\nBULLET::::- Čačanski Kajmak\n\nBULLET::::- Čačanski Sir\n\nSection::::Europe.:Slovenia.\n\nBULLET::::- Bohinc Jože\n\nBULLET::::- Nanoški\n\nBULLET::::- Planinski\n\nSection::::Europe.:Switzerland.\n",
"BULLET::::- Bettelmatt – Piedmont\n\nBULLET::::- Bergkäse\n\nBULLET::::- Bernardo – Lombardy\n\nBULLET::::- Biancospino\n\nBULLET::::- Bocconcini\n\nBULLET::::- Bocconcini alla panna di bufala (see Bocconcini)\n\nBULLET::::- Bianco verde – Trentino; a cows’ milk cheese from Rovereto\n\nBULLET::::- Bitto – DOP – Lombardy\n\nBULLET::::- Bleu d'Aoste – Aosta Valley\n\nBULLET::::- Blu\n\nBULLET::::- Bonassai – Sardinia\n\nBULLET::::- Bonrus – Piedmont\n\nBULLET::::- Boscatella di Fiavè – Trentino; a recently developed soft cheese made in Fiavè\n\nBULLET::::- Boschetto al Tartufo – a cheese incorporating pieces of white truffle\n\nBULLET::::- Bormino\n\nBULLET::::- Boves – Piedmont\n\nBULLET::::- Bra – DOP – Province of Cuneo, Piedmont; made in three varieties:\n",
"BULLET::::- Réblèque – Aosta Valley; cow's milk\n\nBULLET::::- Reblo\n\nBULLET::::- Reblochon – Piedmont\n\nBULLET::::- Rebruchon (see Reblochon)\n\nBULLET::::- Regato\n\nBULLET::::- Renàz\n\nBULLET::::- Riavulillo\n\nBULLET::::- Ricotta\n\nBULLET::::- Rigatino di Castel San Pietro\n\nBULLET::::- Robiola\n\nBULLET::::- Romita piemontese – Piedmont\n\nBULLET::::- Rosa Camuna – Val Camonica, Lombardy; mild compact paste cheese made with partially skimmed cow's milk\n\nBULLET::::- Rosso di lago\n\nSection::::S.\n\nBULLET::::- Salignon – lower Aosta Valley; goats's and/or sheep’ milk cheese, usually smoked\n\nBULLET::::- Salagnun\n\nBULLET::::- Salato\n\nBULLET::::- Salgnun (Salignun) – Lombardy\n\nBULLET::::- Salondro or Solandro – Trentino\n\nBULLET::::- Salva – Lombardy\n",
"Section::::Moisture: soft to hard.:Semi-soft cheese.\n\nSemi-soft cheeses, and the sub-group \"Monastery\", cheeses have a high moisture content and tend to be mild-tasting. Well-known varieties include Havarti, Munster and Port Salut.\n\nSection::::Moisture: soft to hard.:Medium-hard cheese.\n",
"BULLET::::- Tendaio – semi-soft cows milk cheese made in Castiglione di Garfagnana, Tuscany, with ancient origins\n\nBULLET::::- Testùn – Piedmont\n\nBULLET::::- Tipo\n\nBULLET::::- Tirabuscion\n\nBULLET::::- Tirolese – South Tyrol\n\nBULLET::::- Toblach or Toblacher Stangenkäse – South Tyrol (see Dobbiaco)\n\nBULLET::::- Toma\n\nBULLET::::- Tombea – Lombardy\n\nBULLET::::- Tometta – Piedmont\n\nBULLET::::- Tometto (Tumet)\n\nBULLET::::- Tomini di Bollengo e del Talucco – Piedmont\n\nBULLET::::- Tomino – Piedmont\n\nBULLET::::- Torta (cheese)\n\nBULLET::::- Toscanello\n\nBULLET::::- Tosela – Trentino\n\nBULLET::::- Toumin dal mel – Piedmont\n\nBULLET::::- Tre Valli – Province of Pordenone, Friuli Venezia Giulia\n\nBULLET::::- Treccia\n\nBULLET::::- Trifulin – Langhe, Piedmont\n\nBULLET::::- Trizza\n",
"BULLET::::- Marzotica – Province of Lecce, Puglia\n\nBULLET::::- Mascarpin de la Calza\n\nBULLET::::- Mascarpa\n\nBULLET::::- Mascarpone\n\nBULLET::::- Mastela\n\nBULLET::::- Mattone or Zeigel\n\nBULLET::::- Mattonella al rosmarino\n\nBULLET::::- Matusc o Magro di latteria – Lombardy\n\nBULLET::::- Mezzapasta – Piedmont\n\nBULLET::::- Millefoglie all'aceto balsamico/Marzemino\n\nBULLET::::- Misto\n\nBULLET::::- Moesin di Fregona\n\nBULLET::::- Mollana della Val Borbera\n\nBULLET::::- Moncenisio (see Murianengo) – Piedmont\n\nBULLET::::- Montagna\n\nBULLET::::- Montanello (Caciotta dolce)\n\nBULLET::::- Montasio – DOP – Friuli‑Venezia Giulia, Veneto\n\nBULLET::::- Mont Blanc\n\nBULLET::::- Monte Baldo e Monte Baldo primo fiore\n\nBULLET::::- Monte delle Dolomiti – Trentino\n\nBULLET::::- Monte Veronese – DOP – Veneto\n",
"BULLET::::- Cazelle de Saint Affrique\n\nBULLET::::- Cherni Vit\n\nBULLET::::- Corleggy Cheese\n\nBULLET::::- Croglin\n\nBULLET::::- Crozier Blue\n\nBULLET::::- Dolaz cheese\n\nBULLET::::- Duddleswell cheese\n\nBULLET::::- Etorki\n\nBULLET::::- Feta\n\nBULLET::::- Fine Fettle Yorkshire\n\nBULLET::::- Ġbejna\n\nBULLET::::- Graviera\n\nBULLET::::- Halloumi\n\nBULLET::::- Idiazabal cheese\n\nBULLET::::- Jibneh Arabieh\n\nBULLET::::- Kadchgall\n\nBULLET::::- Kars gravyer cheese\n\nBULLET::::- Kashkaval\n\nBULLET::::- Kasseri\n\nBULLET::::- Kefalograviera\n\nBULLET::::- Kefalotyri\n\nBULLET::::- La Serena cheese\n\nBULLET::::- Lanark Blue\n\nBULLET::::- Lavaş cheese\n\nBULLET::::- Lighvan cheese\n\nBULLET::::- Manchego\n\nBULLET::::- Manouri\n\nBULLET::::- Mihaliç Peyniri\n\nBULLET::::- Mizithra\n\nBULLET::::- Nabulsi cheese\n\nBULLET::::- Oscypek\n\nBULLET::::- Ossau-Iraty\n\nBULLET::::- Oštiepok\n\nBULLET::::- P'tit Basque\n\nBULLET::::- Paddraccio\n\nBULLET::::- Pag cheese\n\nBULLET::::- Parlick Fell cheese\n",
"Section::::Quality control.\n\nCheesemakers must be skilled in the grading of cheese to assess quality, defects and suitability for release from the maturing store for sale. The grading process is one of sampling by sight, smell, taste and texture. Part of the cheesemaker's skill lies in the ability to predict when a cheese will be ready for sale or consumption, as the characteristics of cheese change constantly during maturation.\n",
"Some cheeses may be deliberately left to ferment from naturally airborne spores and bacteria; this approach generally leads to a less consistent product but one that is valuable in a niche market.\n\nSection::::Process.:Culturing.\n",
"Section::::Process.\n\nAfter the initial manufacturing process of the cheese is done, the cheese ripening process occurs. This process is especially important, since it defines the flavour and texture of the cheese, which differentiates the many varieties. Duration is dependent on the type of cheese and the desired quality, and typically ranges from \"three weeks to two or more years\".\n",
"Similar cheeses are produced elsewhere, principally in the United States, using different techniques and cultures that produce a cheese of a similar appearance, but with a different taste. The best-known of these is Wisconsin Cheese, a \"mezzano\" cheese with a sharper flavour (\"piccante\") than the Italian.\n\nSection::::History.\n",
"Section::::Brined.\n",
"Section::::Varieties and types.\n",
"Cheeses can be classified according to a variety of features including ripening characteristics, special processing techniques (such as cheddaring) or method of coagulation. Acid-setting is a method of coagulation that accounts for around 25% of production. These are generally fresh cheeses like cottage cheese, queso blanco, quark and cream cheese. The other 75%, which includes almost all ripened cheeses, are rennet cheeses. Some cheeses like ricotta and ziger are made by first heating the milk to between 90-92 degrees Celsius to create coprecipitation of casein and whey protein before addition of lactic or citric acid.\n\nSection::::Production.\n",
"The truckle has a parallelepiped shape with a plain squared side 11/13 cm or 17/19 cm long and a straight bowed side from 9 to 15 cm. The average weight spaces from . The crust is washed with water and salt, while as concerns cheese aged for a longer time it is oiled. In the past, linseed oil was used. In this way, some yellow mildew is built up, which penetrating into the crust increase the flavour.\n",
"BULLET::::- Musulupu\n\nSection::::N.\n\nBULLET::::- Nevegal\n\nBULLET::::- Nis\n\nBULLET::::- Nocciolino di ceva\n\nBULLET::::- Nostrale d'alpe – Piedmont\n\nBULLET::::- Nostrano (local produce)\n\nBULLET::::- Nusnetto bresciano – Province of Brescia, Lombardy\n\nSection::::O.\n\nBULLET::::- Ormea – Piedmont\n\nBULLET::::- Orrengigo di Pistoia – Tuscany\n\nBULLET::::- Ortler – South Tyrol\n\nBULLET::::- Ostrica di montagna – Piedmont; one of the Mortaràt specialities of the area of Biella\n\nBULLET::::- Ossolano d'alpe – cows' milk cheese made in Piedmont\n\nSection::::P.\n\nBULLET::::- Paddaccio\n\nBULLET::::- Paddraccio\n\nBULLET::::- Padduni\n\nBULLET::::- Paglierina – Piedmont\n\nBULLET::::- Paglietta – Piedmont\n\nBULLET::::- Pallone di Gravina – Apulia and Basilicata\n\nBULLET::::- Pampanella\n\nBULLET::::- Pancette - Basilicata\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-02061 | Why do healthy foods at the stores nutrition labels always seem devoid of nutrition? | In general it's because of companies and farmers lobbying the FDA about what essential nutrients appear on the label. Iodine is an essential nutrient; why isn't it on the label? Vitamin C is essential, but it's in everything; why is it on the label? When you showcase only calories, iron, vitamin C, and a couple of others, nutritionally simple foods look better. | [
"Food and beverage manufacturers can manage the perceptions of consumers by controlling information on food labels. The Food and Drug Administration (FDA) requires a label on most food sold in grocery stores. However, the FDA does not regulate dietary supplements. Many chain restaurants also try to make their food appear to be healthier but serve too large of a portion. Fast food restaurants use advertising to make their food appear healthier when they have not changed anything about it either. Consumers have to consider where their health and nutrition information is coming from. When one gets nutrition information from the media, one is getting it from the food industry and companies that could benefit from customers purchasing their products. On labels and packaging there are many different perception management techniques they use.\n",
"In 2014, the U.S. Food and Drug Administration proposed several simultaneous improvements to nutrition labeling for the first time in over 20 years. The proposed changes were based on trends of consumption of nutrients of public health importance. However, studies had shown that the majority of the U.S. population could not understand the information in the then current Nutrition Facts Label. Nutrition label numeracy is particularly low in older individuals, of black and Hispanic race/ethnicity, who are unemployed, born outside of the US, have lower English proficiency, lower education achievement, lower income, or live in the South.\n",
"In 1990, the U.S. Food and Drug Administration (FDA) required that nutrition labels be put on food products in the United States. The thought behind doing so was to provide consumers with the necessary information to make educated decisions about the foods that they purchased. Since that time, nutrition psychologists have done research on how influential these labels are on how consumers choose what foods to buy. These studies have shown mixed results concerning the effects of nutritional labeling. According to the research, the average consumer does tend to read the labels and take the information into consideration, in part because companies have begun producing foods with more health-conscious ingredients. However, many of these potential health benefits are overshadowed by the continuing increase in obesity and deaths related to obesity in the United States over the last few decades.\n",
"The words “diet, low fat, sugar-free, healthy and good for you” are labels a consumer may see often on packaging, and thus associate these labels with products that will aid in a healthy lifestyle. It seems advertisers are aware of the need to live healthier and longer, so they adapt their products in accordance. It is suggested that food advertising influences consumer preferences and shopping habits. Therefore, by highlighting certain contents or ingredients is misleading consumers into thinking they are buying healthy when in fact they are not.\n",
"In addition to using the “healthy” food label to draw customers to low nutrition foods, food marketers have used a variety of “low content,” like low fat, low calorie, etc claims to assuage consumer’s health concerns and to potentially mislead them. “Low content” claims are labels or other advertised claims that appear on packages and or in advertisements are used so that consumers perceive the products they buy as being healthier or more nutritious. Misleading food health assertions of this nature are both widespread in food marketing and also not reflective of the actual nutritional or health quality of the food or beverage in question. These claims are not consistent among all food and beverage groups, although some of them do accurately represent the nutritional and or health benefits of a certain food or beverage, often this does not guarantee that all claims across all beverages and foods are reflective of actual nutrition. Additionally, even if a certain product is in fact low fat or any one of the different types of “low content” claims, consumers often focus on the claim and neglect other health considerations like added sugars, calories, and other unhealthy ingredients.\n",
"The nutrition facts label currently appears on more than 6.5 billion food packages. President Bill Clinton issued an award of design excellence for the nutrition facts label in 1997 to Burkey Belser in Washington, DC.\n",
"In a \"Wall Street Journal\" article in August 2009, John Mackey acknowledged that his company had lost touch with its natural food roots and would attempt to reconnect with the idea that health was affected by the quality of food consumed. He said \"We sell a bunch of junk\". He stated that the company would focus more on health education in its stores. As of 2013, many stores have employed Healthy Eating Specialists which are team members who \"answer customers’ healthy eating questions and can assist...in choosing the most nutrient-dense ingredients, suggest satisfying healthy recipes,\" and help \"create a meal plan in keeping with your health goals.\"\n",
"In addition to the nutrition label, products may display certain nutrition information or health claims on packaging. These health claims are only allowed by the FDA for \"eight diet and health relationships based on proven scientific evidence\", including: calcium and osteoporosis, fiber-containing grain products, fruits and vegetables and cancer, fruits, vegetables, and grain products that contain fiber—particularly soluble fiber—and the risk of coronary heart disease, fat and cancer, saturated fat and cholesterol and coronary heart disease, sodium and hypertension, and folate and neural tube defects. The Institute of Medicine recommended these labels contain the most useful nutritional information for consumers: saturated fats, trans fats, sodium, calories, and serving size. In January 2011, food manufacturers and grocery stores announced plans to display some of this nutrition information on processed food.\n",
"BULLET::::- In December 2008, Weight Watchers eliminated the Core Plan and introduced the Momentum Plan, designed to help members understand how consuming certain filling foods helped them to eat less and prevent overeating.\n\nBULLET::::- In late 2010 Weight Watchers overhauled its POINTS system and replaced it with PointsPlus (ProPoints outside the U.S.); under the new system, fruits and non-starchy vegetables are zero points, and processed foods have higher points than they did before.\n",
"Nutrition facts labels are only one of many types of food label required by regulation or applied by manufacturers.\n\nSection::::Australia and New Zealand.\n\nAustralia and New Zealand use a nutritional information panel of the following format:\n\nOther items are included as appropriate, and the units may be varied as appropriate (e.g. substituting ml for g, or mmol for mg in the 'Sodium' row). In April 2013 the New Zealand government introduced rules around common claims made on food packaging, such as 'low in fat'.\n\nSection::::Canada.\n",
"Here are some deceptive practices:\n\nBULLET::::- Distribute sugar amounts among many ingredients\n\nBULLET::::- Include \"healthy\" ingredients to make it appear to be healthy\n\nBULLET::::- Use scientific names of ingredients to mask their nutritional value\n\nBULLET::::- Use advertising or catch phrases to sell their product\n\nBULLET::::- Not including contaminants (heavy metal, toxic substances)\n\nBULLET::::- Using phrases like \"zero grams of trans fat\" because there is less than one gram in the serving size. This means there can be more than a gram of trans fat in the product though.\n",
"Generally health care focuses mainly on the increasing incidence of obesity.\n\nIt used to be the accepted opinion that only patients with low body weight or low body mass index (BMI, BMI < 18.5 kg/m2) are malnourished. However, studies show that BMI is not always a good parameter to detect malnutrition. Analysis show that a high percentage of body fat reduces the sensitivity of BMI to detect nutritional depletion.\n\nSection::::Disease related Malnutrition in Hospitals.\n",
"Weight Watchers developed the POINTS Food System for use with their Flex Plan. Healthy weight control is the primary objective of the system. The system is designed to allow customers to eat any food while tracking the number of points for each food consumed. Members try to keep to their POINTS Target, a number of points for a given time frame. The daily POINTS Target is personalized based on members' height, weight and other factors, such as gender. A weekly allowance for points is also established to provide for special occasions, mistakes, etc.\n\nSection::::Systems in use today.:Naturally Nutrient Rich (NNR).\n",
"Nutrition facts label\n\nThe nutrition facts label (also known as the nutrition information panel, and other slight variations) is a label required on most packaged food in many countries, showing what nutrients (to limit and get enough of) are in the food. Labels are usually based on official nutritional rating systems. Most countries also release overall nutrition guides for general educational purposes. In some cases, the guides are based on different dietary targets for various nutrients than the labels on specific foods.\n",
"In Canada, a standardized \"Nutrition Facts\" label was introduced as part of regulations passed in 2003, and became mandatory for most prepackaged food products on December 12, 2005. (Smaller businesses were given until December 12, 2007 to make the information available.). In accordance with food packaging laws in the country, all information, including the nutrition label, must be written in both English and French, the country's two official languages.\n",
"Section::::United States.:Food.:Marketing and consumer perceptions.\n\nMany companies have started to use their packaging for food as a marketing tool. Words such as “healthy”, “low-fat”, and “natural” have contributed to what is called the health-halo effect, which is when consumers overestimate the healthfulness of an item based on claims on the packaging. Food companies may incorporate whole grain and higher fiber levels into their products in order to advertise these advantages. However, there is no regulated amount of grain needed in a certain product to be able to advertise this benefit, and the product may not be as nutritious as advertised.\n",
"HealthyDiningFinder.com is an online resource and search tool operated by Healthy Dining that provides guidance in choosing dietitian-approved Healthy Dining menu items and corresponding nutrition information for menu items served at more than 60,000 full-service and quick-serve restaurant locations in the US. Restaurant participation in the Healthy Dining Program has grown substantially since its inception in 1990 as a Southern California, publication-based program.\n",
"In the United States, nutrition information is required on packaged retail foods in the form of nutrition facts panels as a result of food labeling regulations. In recent years, many restaurants have begun posting nutrition information as a result of both customer demand and menu-labeling laws.\n\nSection::::Applications.:Menu-labeling.\n\nThe Patient Protection and Affordable Care Act, signed into law March 23, 2010, includes a provision that creates a national, uniform nutrition-disclosure standard for food service establishments.\n",
" The USDA does not require retailers to put a nutrition label on ground turkey products. It is suggested to provide a nutrition label anyway, for the benefit of customers. There are two sets of circumstances under which some labeling is mandatory. First, the packaging must inform of any skin contained in the product. Second, the Nutrition Labeling and Education Act of 1990 states that if the product is labeled “lean” or “extra lean,” it needs a nutrition label for evidence.\n\nSection::::Nutrition.\n",
"Many fast food restaurants added labels to their menus by listing the nutritional information below each item. The intent was to inform consumers of the caloric and nutritional content of the food being served there and result in directing consumers to the healthier options available. However, reports do not display any significant drop in sales at sandwich or burger locations which highlights no change in consumer behavior even after food was labeled.\n",
"The Ministry of Health and Family Welfare had, on September 19, 2008, notified the Prevention of Food Adulteration (5th Amendment) Rules, 2008, mandating packaged food manufacturers to declare on their product labels nutritional information and a mark from the F.P.O or Agmark (Companies that are responsible for checking food products) to enable consumers make informed choices while purchasing. Prior to this amendment, disclosure of nutritional information was largely voluntary though many large manufacturers tend to adopt the international practice.\n\nSection::::Mexico.\n",
"In the US, this was not a popular policy. Food Basics eventually stopped charging for shopping bags and started using the typical cheaply made plastic bags used by its competitors and its fellow A&P banner stores.\n",
"By law, nearly all products have a nutrition label in Canada. The nutrition label gives you information about the product including, its serving size, calories, and its percentage of the 13 core nutrients that Canada deems necessary. These nutrients include fat, saturated fat, trans fat, cholesterol, sodium, carbohydrate, fibre, sugars, protein, vitamin A, vitamin C, calcium, and iron. All of these nutrients, except for vitamins and minerals, are recorded based on a reasonable daily intake percentage. Vitamins and minerals are based on a recommended daily intake (RDI). These differ in that one is based on what one is expected to eat in a day, while the other is based on what the government recommends one consume in a day. However, all nutrients are recorded onto the same label with the same guidelines that is prescribed by the Food and Drug Regulations. These guidelines determine that the nutrition label must be clearly and predominantly displayed on the package to the manufacturer as well as clearly visible to the consumer at the time of purchase.\n",
"Other studies have shown that marketing for food products has demonstrated an effect on consumers’ perceptions of purchase intent and flavor. One study in particular performed by Food and Brand Lab researchers at Cornell University looked at how an organic label affects consumers’ perceptions. The study concluded that the label claiming the product was “organic” altered perceptions in various ways. Consumers perceived these foods to have fewer calories and stated they were willing to pay up to 23.4% more for the product. The taste was supposedly “lower in fat” for the organic products as opposed to the regular ones. Finally, the study concluded that people who do not regularly read nutrition labels and who do not regularly buy organic food products are the most susceptible to this example of the health-halo effect.\n",
"The United Kingdom's Advertising Standards Authority, the self-regulatory agency for the UK ad industry, uses nutrient profiling to define junk food. Foods are scored for \"A\" nutrients (energy, saturated fat, total sugar and sodium) and \"C\" nutrients (fruit, vegetables and nut content, fiber and protein). The difference between A and C scores determines whether a food or beverage is categorized as HFSS (high in fat, salt and sugar; a term synonymous with \"junk food\").\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-02022 | When does "after Labor Day" become "before (the next) Labor Day"? | This is a fashion cliche that suggests that white is a "summer" color. Supposedly, the acceptable time to wear white is between Memorial Day (late May) and Labor Day (early September). | [
"In 1855, the \"New York Times\" look forward to that year's Moving Day:\n",
"Section::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Mississippi.\n\nBULLET::::- All federal holidays except Columbus Day\n\nBULLET::::- January 15–21 (floating Monday) – this federal holiday is renamed \"Martin Luther King's and Robert E. Lee's Birthdays\"\n\nBULLET::::- April 24–30 (floating Monday) – Confederate Memorial Day\n\nBULLET::::- May 25–31 (floating Monday) – renamed National Memorial Day / Jefferson Davis Birthday\n\nBULLET::::- November 11 – renamed Armistice Day (Veterans Day)\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Missouri.\n\nBULLET::::- All federal holidays\n\nBULLET::::- February 12 – Lincoln's Birthday\n",
"BULLET::::- March 25–31 (floating Monday) – Seward's Day\n\nBULLET::::- October 18 – Alaska Day\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:American Samoa.\n\nBULLET::::- All federal holidays\n\nBULLET::::- April 17 – Flag Day\n\nBULLET::::- December 26 – Family Day\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Arizona.\n\nBULLET::::- All federal holidays\n\nBULLET::::- January 15–21 (floating Monday) – this federal holiday is renamed \"Dr. Martin Luther King Jr./Civil Rights Day\".\n\nBULLET::::- February 15–21 (floating Monday) – this federal holiday is renamed \"Lincoln/Washington Presidents' Day\".\n",
"BULLET::::- 1863 – American Civil War: The Emancipation Proclamation takes effect in Confederate territory.\n\nBULLET::::- 1877 – Queen Victoria of the United Kingdom is proclaimed Empress of India.\n\nBULLET::::- 1885 – Twenty-five nations adopt Sandford Fleming's proposal for standard time (and also, time zones)\n\nBULLET::::- 1890 – Eritrea is consolidated into a colony by the Italian government.\n\nBULLET::::- 1892 – Ellis Island begins processing immigrants into the United States.\n",
"BULLET::::- May 22 - National Maritime Day \n\nBULLET::::- last Mon. in May - Memorial Day \n\nBULLET::::- June 14 - Flag Day \n\nBULLET::::- June 14-July 4 - Honor America Days \n\nBULLET::::- 3rd Sun. in June - Father's Day \n\nBULLET::::- July 27 - National Korean War Veterans Armistice Day (expired 2003) \n\nBULLET::::- 4th Sun. in July - Parent's Day \n\nBULLET::::- August 19 - National Aviation Day \n\nBULLET::::- 1st Sat. aft. 1st Mon. in September (Labor Day) - Carl Garner Federal Lands Cleanup Day \n",
"Near the end of the 19th century, many people began leaving the city for the cooler suburbs in the heat of summertime, and as a result October 1 became a second Moving Day, as people returning to the city would take their belongings out of storage and move into their newly rented homes. The October date may be related to the English custom of paying land rents on Michaelmas, which falls on September 29. Eventually, the October date began to supplant the traditional May date, so that by 1922 the Van Owners Association reported only a \"moderate flurry\" of activity on the Spring day. The movers also attempted to get legislation passed to spread out the Fall rush to three dates: the firsts of September, October and November. Over time, the tradition of a specific Moving Day began to fade, with the remnant evident in commercial leases, which still generally run out on May 1 or October 1.\n",
"BULLET::::- February 12 – Lincoln's Birthday\n\nBULLET::::- November 2–8 (floating Tuesday) – Election Day\n\nBULLET::::- November 23–29 (floating Friday) – day after Thanksgiving\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Illinois.:Chicago, Illinois.\n\nBULLET::::- All Illinois state holidays except the Day after Thanksgiving\n\nBULLET::::- March 1–7 (floating Monday) – Pulaski Day\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Indiana.\n\nBULLET::::- All federal holidays except Washington's Birthday\n\nBULLET::::- March 20 – April 23 (floating Friday using Computus) – Good Friday\n",
"Section::::Unofficial end of summer.\n\nLabor Day is called the \"unofficial end of summer\" because it marks the end of the cultural summer season. Many take their two-week vacations during the two weeks ending Labor Day weekend. Many fall activities, such as school and sports begin about this time.\n",
"BULLET::::- March 20 – April 23 (floating Friday using Computus) – Good Friday\n\nBULLET::::- November 2–8 (floating Tuesday) – Election Day\n\nBULLET::::- November 23–29 (floating Friday) – day After Thanksgiving\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:District of Columbia.\n\nBULLET::::- All federal holidays\n\nBULLET::::- January 20 – Inauguration Day (every 4 years)\n\nBULLET::::- April 16 – Emancipation Day\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Florida.\n",
"BULLET::::- May 1–7 (floating Monday) – Primary Election Day\n\nBULLET::::- November 2–8 (floating Monday) – General Election Day\n\nBULLET::::- November 23–29 (floating Friday) – Lincoln's Birthday to occur on day after Thanksgiving\n\nBULLET::::- December 24 – Washington's Birthday to occur on Christmas Eve\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Iowa.\n\nBULLET::::- All federal holidays except Washington's Birthday and Columbus Day\n\nBULLET::::- November 23–29 (floating Friday) – Day after Thanksgiving\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Kansas.\n",
"Times of day from \":01\" to \":29\" minutes past the hour are commonly pronounced with the words \"after\" or \"past\", for example, 10:17 being \"seventeen after ten\" or \"seventeen past ten\". \":15\" minutes is very commonly called \"quarter after\" or \"quarter past\" and \":30\" minutes universally \"half past\", e.g., 4:30, \"half past four\". Times of day from \":31\" to \":59\" are, by contrast, given subtractively with the words \"to\", \"of\", \"until\", or \"till\": 12:55 would be pronounced as \"five to one\" or \"five of one\". \":45\" minutes is pronounced as \"quarter to\", \"quarter of\", \"quarter until\", or \"quarter till\".\n",
"Section::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Legal holidays observed nationwide.\n\nBULLET::::- January 1 – New Year's Day\n\nBULLET::::- May 25–31 (floating Monday) – Memorial Day\n\nBULLET::::- Known officially as \"National Memorial Day\" in Alabama,\n\nBULLET::::- and \"Memorial Day / Decoration Day\" in Idaho.\n\nBULLET::::- Observed with Jefferson Davis' Birthday, and known officially as \"National Memorial Day / Jefferson Davis' Birthday\", in Mississippi.\n\nBULLET::::- July 4 – Independence Day\n\nBULLET::::- September 1–7 (floating Monday) – Labor Day\n\nBULLET::::- November 11 – Veterans Day\n",
"Section::::December 30, 1900 (Sunday).\n\nBULLET::::- In the last Sunday of the century, the \"New York Herald\" published Mark Twain's \"A Greeting from the 19th Century to the 20th Century\", while the \"New York World\" published the article \"New York as It Will Be in 1999\".\n",
"Local legend has it that the tradition began because the first of May was the day the first Dutch settlers set out for Manhattan, but \"The Encyclopedia of New York City\" links it instead to the English celebration of May Day. While it may have originated as a custom, the tradition took force of law by an 1820 act of the New York State legislature, which mandated that if no other date was specified, all housing contracts were valid to the first of May – unless the day fell on a Sunday, in which case the deadline was May 2.\n",
"Section::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Arkansas.\n\nBULLET::::- All federal holidays except Columbus Day\n\nBULLET::::- February 15–21 (floating Monday) – this federal holiday is renamed \"George Washington's Birthday and Daisy Gatson Bates Day\".\n\nBULLET::::- December 24 – Christmas Eve\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:California.\n\nBULLET::::- All federal holidays except Columbus Day\n\nBULLET::::- March 31 (fixed) – César Chávez Day\n\nBULLET::::- November 23–29 (floating Friday) – day after Thanksgiving\n",
"BULLET::::- — National Defense Transportation Day (The President is requested to issue each year a proclamation designating the third Friday in May as National Defense Transportation Day.)\n\nBULLET::::- — National Freedom Day (February 1)\n\nBULLET::::- — National Grandparents' Day (The President is requested to issue each year a proclamation designating the first Sunday in September after Labor Day as National Grandparents Day.)\n\nBULLET::::- — National Korean War Veterans Armistice Day (July 27 of each year until 2003)\n\nBULLET::::- — National Maritime Day (May 22)\n\nBULLET::::- — National Pearl Harbor Remembrance Day (December 7)\n",
"BULLET::::- CBS owned-and-operated and affiliate stations have the option of airing \"Let's Make a Deal\" at either 10:00 a.m. or 3:00 p.m. Eastern, depending on the station's choice of feed.\n\nBULLET::::- (*) The fourth hour of \"Today\" was renamed \"Today with Hoda & Jenna\" on April 8, 2019, when Jenna Bush Hager succeeded Kathie Lee Gifford as co-host of the program, alongside Hoda Kotb. While \"Today with Kathie Lee & Hoda\"/\"Today with Hoda & Jenna\" is part of \"Today\", it is promoted as its own distinct program.\n",
"In 1887 Oregon became the first state of the United States to make Labor Day an official public holiday. By the time it became an official federal holiday in 1894, thirty U.S. states officially celebrated Labor Day. All U.S. states, the District of Columbia, and the United States territories have subsequently made Labor Day a statutory holiday.\n\nSection::::Labor Day vs. May Day.\n",
"BULLET::::- March 20 – April 23 (floating Friday using Computus) – Good Friday\n\nBULLET::::- March 26 – Prince Jonah Kuhio Kalanianaole Day\n\nBULLET::::- June 11 – Kamehameha Day\n\nBULLET::::- August 15–21 (floating Friday) – Statehood Day\n\nBULLET::::- November 2–8 (floating Tuesday) – Election Day\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Idaho.\n\nBULLET::::- All federal holidays\n\nBULLET::::- January 15–21 (floating Monday) – this federal holiday is renamed \"Martin Luther King, Jr.-Idaho Human Rights Day\"\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Illinois.\n\nBULLET::::- All federal holidays\n",
"BULLET::::- October 25–31 (floating Friday) – Nevada Day\n\nBULLET::::- November 23–29 (floating Friday) – Family Day\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:New Hampshire.\n\nBULLET::::- All federal holidays (offices remain open on Columbus Day)\n\nBULLET::::- January 15–21 (floating Monday) – this federal holiday is renamed Martin Luther King, Jr. Civil Rights Day\n\nBULLET::::- November 23–29 (floating Friday) – the day after Thanksgiving\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:New Jersey.\n\nBULLET::::- All federal holidays\n",
"BULLET::::- 1st Sun. aft. 1st Mon. in September (Labor Day) - National Grandparents Day \n\nBULLET::::- September 11 - Patriot Day \n\nBULLET::::- September 17 - Citizenship Day \n\nBULLET::::- last Sun. in September - Gold Star Mother's Day \n\nBULLET::::- 1st Mon. in October - Child Health Day \n\nBULLET::::- October 9 - Leif Erikson Day \n\nBULLET::::- 2nd Mon. in October - Columbus Day \n\nBULLET::::- October 15 - White Cane Safety Day \n\nBULLET::::- December 7 - National Pearl Harbor Remembrance Day \n\nBULLET::::- December 17 - Pan American Aviation Day \n",
"BULLET::::- All federal holidays except Washington's Birthday and Columbus Day\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Kentucky.\n\nBULLET::::- All federal holidays except Washington's Birthday and Columbus Day\n\nBULLET::::- March 20 – April 23 (floating Friday using Computus) – Good Friday\n\nBULLET::::- November 23–29 (floating Friday) – Day after Thanksgiving\n\nBULLET::::- December 24 – Christmas Eve\n\nBULLET::::- December 31 – New Year's Eve\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Louisiana.\n\nBULLET::::- All federal holidays except Columbus Day\n",
"BULLET::::- All Friday holidays are celebrated on Saturday and all Monday holidays are celebrated on Tuesday to account for the time zone difference with the states. Weekday holidays such as Thanksgiving are celebrated as they fall.\n\nBULLET::::- March 20 – April 23 (floating Friday using Computus) – Good Friday\n\nBULLET::::- March 22 – April 25 (floating Sunday using Computus) – Easter (listed to account for park closing, which normally opens Sundays)\n\nBULLET::::- April 13–15 – Songkran Festival\n\nBULLET::::- December 31 – New Year's Eve\n\nSection::::Government sector holidays: federal, state, and local government.:Legal holidays by states and political divisions of the United States.:Washington.\n",
"By 1856, some erosion of the strict adherence to the custom of Moving Day was noted, as some people moved a few days before or after the traditional day, creating, in effect, a \"moving week\". Once the economic depression of 1873 was over, more housing was constructed, dropping the price of housing down, and subsequently people had less need to move as often.\n",
"Over shorter timescales, there are a variety of practices for defining when each day begins. In ordinary usage, the civil day is reckoned by the midnight epoch, that is, the civil day begins at midnight. But in older astronomical usage, it was usual, until January 1, 1925, to reckon by a noon epoch, 12 hours after the start of the civil day of the same denomination, so that the day began when the mean sun crossed the meridian at noon. This is still reflected in the definition of J2000, which started at noon, Terrestrial Time.\n"
] | [
"The acceptable time to wear white is after labor and ends before next Labor Day. "
] | [
"The acceptable time to wear white is between Labor Day and Memorial Day. "
] | [
"false presupposition"
] | [
"The acceptable time to wear white is after labor and ends before next Labor Day. ",
"The acceptable time to wear white is after labor and ends before next Labor Day. "
] | [
"normal",
"false presupposition"
] | [
"The acceptable time to wear white is between Labor Day and Memorial Day. ",
"The acceptable time to wear white is between Labor Day and Memorial Day. "
] |
2018-08708 | Do objects still emit infrared radiation at absolute zero (-273°C)? | We don't know; such a temperature has never been achieved and is theoretically impossible to obtain. So there really isn't a definite answer to that question. | [
"This can be expressed in a cleaner way in terms of the surface gravity of the black hole; this is the parameter that determines the acceleration of a near-horizon observer. In natural units (), the temperature is\n\nwhere is the surface gravity of the horizon. So a black hole can only be in equilibrium with a gas of radiation at a finite temperature. Since radiation incident on the black hole is absorbed, the black hole must emit an equal amount to maintain detailed balance. The black hole acts as a perfect blackbody radiating at this temperature.\n",
"For ideal black bodies, the brightness temperature is also the directly measurable temperature. For objects in nature, often called Gray Bodies, the actual temperature is only a fraction of the brightness temperature. The fraction of brightness temperature to actual temperature is defined as the emissivity. The relationship between brightness temperature and temperature can be written as:\n\nformula_10\n",
"With temperatures in the range 11,000 to 15,000 K, all the WDs with the most extreme fields are far too cool to be detectable EUV/X-ray sources, e.g., Grw +70°8247, LB 11146, SBS 1349+5434, PG 1031+234 and GD 229.\n\nMost highly magnetic WDs appear to be isolated objects, although G 23–46 (7.4 MG) and LB 1116 (670 MG) are in unresolved binary systems.\n",
"Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K.\n\n1931:\n\nThe term \"microwave\" first appears in print: \"When trials with wavelengths as low as 18 cm were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon.\" \"Telegraph & Telephone Journal\" XVII. 179/1\"\n\n1938:\n\nNobel Prize winner (1920) Walther Nernst re-estimates the cosmic ray temperature as 0.75 K.\n\n1946:\n",
"Hawking radiation is required by the Unruh effect and the equivalence principle applied to black hole horizons. Close to the event horizon of a black hole, a local observer must accelerate to keep from falling in. An accelerating observer sees a thermal bath of particles that pop out of the local acceleration horizon, turn around, and free-fall back in. The condition of local thermal equilibrium implies that the consistent extension of this local thermal bath has a finite temperature at infinity, which implies that some of these particles emitted by the horizon are not reabsorbed and become outgoing Hawking radiation.\n",
"If changes in external temperatures or internal heat generation changes are too rapid for the equilibrium of temperatures in space to take place, then the system never reaches a state of unchanging temperature distribution in time, and the system remains in a transient state.\n",
"At present, it is expected that the primary impact of quantum effects is for event horizons to possess a temperature and so emit radiation. For black holes, this manifests as Hawking radiation, and the larger question of how the black hole possesses a temperature is part of the topic of black hole thermodynamics. For accelerating particles, this manifests as the Unruh effect, which causes space around the particle to appear to be filled with matter and radiation.\n",
"Through Planck's law the temperature spectrum of a black body is proportionally related to the frequency of light and one may substitute the temperature (\"T\") for the frequency in this equation.\n\nFor the case of a source moving directly towards or away from the observer, this reduces to\n\nHere \"v\" 0 indicates a receding source, and \"v\" 0 indicates an approaching source.\n",
"In 2019, Biancalana, Robson and Villari from Heriot-Watt University in Edinburgh (UK), showed that Hawking's radiation temperature is a purely topological quantity that can be calculated very simply by computing the Euler characteristics of the black hole spacetime.\n\nSection::::Black hole evaporation.\n\nWhen particles escape, the black hole loses a small amount of its energy and therefore some of its mass (mass and energy are related by Einstein's equation ).\n\nSection::::Black hole evaporation.:1976 Page numerical analysis.\n",
"Notice that a gray (flat spectrum) ball where formula_48 comes to the same temperature as a black body no matter how dark or light gray .\n\nSection::::Temperature relation between a planet and its star.:Effective temperature of Earth.\n\nSubstituting the measured values for the Sun and Earth yields:\n\nWith the average emissivity formula_53 set to unity, the effective temperature of the Earth is:\n\nor −18.8 °C.\n",
"However, according to the conjectured gauge-gravity duality (also known as the AdS/CFT correspondence), black holes in certain cases (and perhaps in general) are equivalent to solutions of quantum field theory at a non-zero temperature. This means that no information loss is expected in black holes (since the theory permits no such loss) and the radiation emitted by a black hole is probably the usual thermal radiation. If this is correct, then Hawking's original calculation should be corrected, though it is not known how (see below).\n",
"Thermal radiation, a common synonym for infra-red when it occurs at temperatures commonly encountered on Earth, is the process by which the surface of an object radiates its thermal energy in the form of electromagnetic waves. Infrared radiation that one can feel emanating from a household heater, infra-red heat lamp, or kitchen oven are examples of thermal radiation, as is the IR and visible light emitted by a glowing incandescent light bulb (not hot enough to emit the blue high frequencies and therefore appearing yellowish; fluorescent lamps are not thermal and can appear bluer). Thermal radiation is generated when the energy from the movement of charged particles within molecules is converted to the radiant energy of electromagnetic waves. The emitted wave frequency of the thermal radiation is a probability distribution depending only on temperature, and for a black body is given by Planck's law of radiation. Wien's displacement law gives the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the heat intensity (power emitted per area).\n",
"and a white dwarf star is eventually born, the ember of the expired MS star. Temperatures of a new-born white dwarf may be in the hundreds of thousand kelvin, but if the mass of the white dwarf is less than just a few solar masses, burning of He to C and O is not possible and the star will slowly cool down forever. The coolest white dwarfs observed have temperatures of roughly 4000 K, which must mean that the universe is not old enough so that lower temperature stars cannot be found. The emission spectra of \"cool\" white dwarfs does not at all look like a Planck blackbody spectrum. Instead, nearly the whole infrared is attenuated or missing altogether from the star's emission, owing to CIA in the hydrogen-helium atmospheres surrounding their cores.\n",
"In an optically-thin plasma the matter is not in thermodynamical equilibrium with the radiation, because collisions between particles and photons are very rare, and, as a matter of fact, the square root mean velocity of photons, electrons, protons and ions is not the same: we should define a temperature for each of these particle populations. The result is that the emission spectrum does not fit the spectral distribution of a blackbody radiation, but it depends only on those collisional processes which occur in a very rarefied plasma.\n",
"Mirror matter could have been diluted to unobservably low densities during the inflation epoch. Sheldon Glashow has shown that if at some high energy scale particles exist which interact strongly with both ordinary and mirror particles, radiative corrections will lead to a mixing between photons and mirror photons. This mixing has the effect of giving mirror electric charges a very small ordinary electric charge. Another effect of photon–mirror photon mixing is that it induces oscillations between positronium and mirror positronium. Positronium could then turn into mirror positronium and then decay into mirror photons.\n",
"When the star's or planet's net emissivity in the relevant wavelength band is less than unity (less than that of a black body), the actual temperature of the body will be higher than the effective temperature. The net emissivity may be low due to surface or atmospheric properties, including greenhouse effect.\n\nSection::::Star.\n",
"BULLET::::- Black holes, black hole information paradox, and black hole radiation: Do black holes produce thermal radiation, as expected on theoretical grounds? Does this radiation contain information about their inner structure, as suggested by gauge–gravity duality, or not, as implied by Hawking's original calculation? If not, and black holes can evaporate away, what happens to the information stored in them (since quantum mechanics does not provide for the destruction of information)? Or does the radiation stop at some point leaving black hole remnants? Is there another way to probe their internal structure somehow, if such a structure even exists?\n",
"It has been suggested that Rydberg atoms are common in interstellar space and could be observed from earth. Since the density within interstellar gas clouds is many orders of magnitude lower than the best laboratory vacuums attainable on Earth, Rydberg states could persist for long periods of time without being destroyed by collisions.\n\nSection::::Applications and further research.:Strongly interacting systems.\n",
"In nuclear reactor engineering, decay heat continues to be generated after the reactor has been shut down (see SCRAM), and nuclear chain reactions have been suspended. The decay of the short-lived radioisotopes created in fission continues at high power, for a time after shut down. The major source of heat production in a newly shut down reactor is due to the beta decay of new radioactive elements recently produced from fission fragments in the fission process.\n",
"formula_2 (the Intensity or Brightness) is the amount of energy emitted per unit surface per unit time per unit solid angle and in the frequency range between formula_3 and formula_4; formula_5 is the temperature of the black body; formula_6 is Planck's constant; formula_3 is frequency; formula_8 is the speed of light; and formula_9 is Boltzmann's constant. This equation can be rewritten to express the temperature, T, in terms of the measured radiance at a particular frequency. The temperature derived from the Planck function is referred to as the brightness temperature (which see, for derivation).\n",
"In an ideal system, the emitter would be surrounded by converters so no light is lost. However, realistically, geometries must accommodate the input energy (fuel injection or input light) used to heat the emitter. Additionally, costs prohibit the placement of converters everywhere. When the emitter reemits light, anything that does not travel to the converters is lost. Mirrors can be used to redirect some of this light back to the emitter; however, the mirrors may have their own losses.\n\nSection::::Black body radiation.\n",
"Section::::Number and classification.:Artificial near-Earth objects.\n\nDefunct space probes and final stages of rockets can end up in near-Earth orbits around the Sun, and be re-discovered by NEO surveys when they return to Earth's vicinity.\n",
"The brightness temperature is not a temperature as ordinarily understood. It characterizes radiation, and depending on the mechanism of radiation can differ considerably from the physical temperature of a radiating body (though it is theoretically possible to construct a device which will heat up by a source of radiation with some brightness temperature to the actual temperature equal to brightness temperature). Nonthermal sources can have very high brightness temperatures. In pulsars the brightness temperature can reach 10 K. For the radiation of a typical helium–neon laser with a power of 60 mW and a coherence length of 20 cm, focused in a spot with a diameter of 10 µm, the brightness temperature will be nearly .\n",
"BULLET::::- Temperature measurements – Pyrometers and infrared cameras are instruments used to measure the temperature of an object by using its thermal radiation; no actual contact with the object is needed. The calibration of these instruments involves the emissivity of the surface that's being measured.\n\nSection::::Mathematical definitions.\n\nSection::::Mathematical definitions.:Hemispherical emissivity.\n\nHemispherical emissivity of a surface, denoted \"ε\", is defined as\n\nwhere\n\nBULLET::::- \"M\" is the radiant exitance of that surface;\n\nBULLET::::- \"M\" is the radiant exitance of a black body at the same temperature as that surface.\n\nSection::::Mathematical definitions.:Spectral hemispherical emissivity.\n",
"This was followed by arguments from Stephen Hawking and others that an accelerated observer near a black hole (e.g. an observer carefully lowered towards the horizon at the end of a rope) ought to see the region inhabited by \"real\" radiation, whereas for a distant observer this radiation would be said to be \"virtual\". If the accelerated observer near the event horizon traps a nearby particle and throws it out to the distant observer for capture and study, then for the distant observer, the appearance of the particle can be explained by saying that the physical acceleration of the particle has turned it from a virtual particle into a \"real\" particle (see Hawking radiation).\n"
] | [
"Infrared radiation can be tested at absolute zero temperature. "
] | [
"Absolute zero temperature can not be achieved in order to know this answer. "
] | [
"false presupposition"
] | [
"Infrared radiation can be tested at absolute zero temperature. ",
"Infrared radiation can be tested at absolute zero temperature. "
] | [
"normal",
"false presupposition"
] | [
"Absolute zero temperature can not be achieved in order to know this answer. ",
"Absolute zero temperature can not be achieved in order to know this answer. "
] |
2018-21842 | What is the law regarding a Supreme Court Justice that cannot physically sit in session and/or execute their duties? | There is no law forcing a Justice to retire. When and if RBG recovers from her fall, if she chooses, she will resume her duties. The other Justices have the right to defer cases until they see fit, as happened in 1974, when one Justice basically refused to retire, but they cannot remove them. Even if she goes into a persistent vegetative state, she remains a Justice until they pull the plug. | [
"The Washington Post observed that while a Justice was required to recuse himself or herself when they had a conflict of interest, the decision as to whether recusal was necessary was left to the discretion of the Justice in question.\n\nSection::::Career.:July 2006 Congressional testimony.\n\nThe \"Boston Globe\" reported on July 11, 2006 that Hutson was scheduled to testify before the House and Senate Armed Services Committees.\n",
"The Governor of Alabama may fill vacancies when they occur for the remainder of unexpired terms. The current partisan line-up for the court is all Republican. There is no specific limitation on the number of terms to which a member may be elected. However, the state constitution under Amendment 328, adopted in 1973, prohibits any member from seeking election once they have attained the age of seventy years. This amendment would have prohibited then Chief Justice Roy Moore from seeking re-election in 2018. However, on April 26, 2017, Moore announced his intent to run for the United States Senate seat formerly held by United States Attorney General Jeff Sessions, and resigned from the court.\n",
"On September 11, 2018, the West Virginia Senate debated and voted on a number of preliminary issues for the four trials. Several motions had to do with Judge Davis, who had already retired. It was voted to try her anyway by votes of 15-19 on several resolutions. Then the Senate \"dissolved\" into a court of impeachment and set trial dates and other housekeeping matters. A resolution substituting censure for a full trials for Workman and Walker was ruled out of order.\n\nSection::::Trials.:Trial of Justice Walker.\n",
"The regular members were allowed to be reappointed without limit. The Secretary of Justice serves at the pleasure of the president, while the representative of Congress serves until they are recalled by their chamber, or until the term of Congress that named them expires. Finally, the Chief Justice serves until mandatory retirement at the age of 70. The regular members' terms start at July 9.\n",
"The court tries to avoid such rulings when possible: After the retirement of Justice O'Connor in 2006 three cases would have ended with a tie. All cases were reargued to allow the newly appointed Samuel Alito to cast a decisive vote.\n",
"In the Supreme Court of the United States, the Justices typically recuse themselves from participating in cases in which they have financial interests. For example, Justice Sandra Day O'Connor generally did not participate in cases involving telecommunications firms because she owned stock in these firms, and Justice Stephen Breyer has disqualified himself in some cases involving insurance companies because of his participation in a Lloyd's of London syndicate. Justices also have declined to participate in cases in which close relatives, such as their children, are lawyers for one of the parties. Even if the family member is connected to one of the parties but is not directly involved in the case, justices may recuse themselves – for instance Clarence Thomas recused himself in \"United States v. Virginia\" because his son was attending Virginia Military Institute, whose policies were the subject of the case. On occasion, recusal occurs under more unusual circumstances; for example, in two cases, Chief Justice William H. Rehnquist stepped down from the bench when cases were argued by Arizona attorney James Brosnahan, who had testified against Rehnquist at his confirmation hearing in 1986. Whatever the reason for recusal, the \"United States Reports\" will record that the named justice \"took no part in the consideration or decision of this case\".\n",
"Ministers may take leave of their posts for three reasons:\n\nBULLET::::- The end of their terms\n\nBULLET::::- Relinquishment, which is only allowed in serious cases, all of which must be affirmed by the President and accepted or discarded by the Senate.\n\nBULLET::::- Voluntary retirement: Proceeds when the interested party requests their retirement, as long as they meet the conditions of age and seniority.\n\nSection::::Supreme Court building.\n",
"Section::::Richard J. Daronco.\n",
"On August 7, 2018, the House Judiciary Committee recommended that all four remaining justices be impeached. Loughry for lack of oversight, improper removal of the desk to his home, improper use of a government computer, improper use of state owned cars for personal travel, overspending on his office decorations, the overpaying of \"senior status judges\" and lying to the Legislature; Chief Justice Margaret Workman and Justice Robin Davis for overpaying of \"senior status judges\", lack of oversight, and overspending; and Justice Beth Walker for lack of oversight and overspending. On August 13, 2018, the full House of Delegates impeached the four remaining members of the court. On August 14, 2018, Davis retired from the court, effective August 13, 2018. The justices, other than Justice Walker who has already been tried, awaited trial on the impeachment in the West Virginia Senate.\n",
"Justices assigned to sit temporarily on the Supreme Court have all the authority of a Supreme Court justice to hear arguments, render decisions and file opinions. However, no justice shall be assigned to sit on the Supreme Court in the determination of any cause or matter upon which the justice has previously sat or for which such justice is not otherwise disqualified nor without the justice's own consent.\n\nSection::::Impeachment of justices.\n\nThe state constitution provides two methods for removing judicial officers, based on who is bringing such action:\n",
"A majority of the General Assembly may pass articles of impeachment against a Justice, which the Senate will then try. Only a two-thirds majority will convict, and the Senate may punish a convicted Justice with only removal from office and prohibition on holding future office. After a Justice has been impeached by the General Assembly—but before the Senate renders a verdict on the charges—the Justice may not exercise any official function. By virtue of accepting a position in the Executive or Legislative branches of government or becoming a candidate for political office, a Justice is considered as resigned from the bench.\n",
"On June 26, 2018, the West Virginia House of Delegates assembled in special session to consider Loughry's or any other justice's impeachment. The matter was referred to the House Judiciary Committee. In the course of its investigation, additional issues were discovered relative to splitting the pay of \"senior status judges\", who are retired judges filling in certain circumstances and who can make no more than 25% of an active judges' salary, between IRS Form W-2 and Form 1099 in order to circumvent that rule; and of the court purchasing \"working lunches\" at taxpayer expense on a regular basis.\n",
"An ad hoc Senate committee heard evidence in September and October 2010, and a vote by the full Senate took place on December 8, 2010. Article 1 was passed unanimously, Articles 2 was passed by a vote of 69–27, Article 3 was passed by a vote of 88–8, and Article 4 was passed by a vote of 90 to 6. A further vote to disqualify the former judge from ever holding office again passed by a vote of 94 to 2.\n\nSection::::21st century.:Samuel Kent – Southern District of Texas.\n",
"Section::::Robert Smith Vance.\n",
"Section::::John Roll.\n",
"Section::::Federal judicial service.\n",
"Section::::Judicial career.\n",
"BULLET::::6. \"4 seats vacant\"\n\nBULLET::::128. Chief Judge and associate judges, United States Court of Federal Claims (by seniority)\n\nBULLET::::1. Margaret M. Sweeney (December 14, 2005) (Chief Judge)\n\nBULLET::::2. Thomas C. Wheeler (October 24, 2005)\n\nBULLET::::3. Patricia E. Campbell-Smith (September 19, 2013)\n\nBULLET::::4. Elaine D. Kaplan (November 6, 2013)\n\nBULLET::::5. Lydia Kay Griggsby (January 5, 2015)\n\nBULLET::::6. Richard Hertling (June 12, 2019)\n\nBULLET::::7. Ryan T. Holte (July 11, 2019)\n\nBULLET::::8. \"9 seats vacant\"\n\nBULLET::::129. One-star military officers (in order of seniority: retired officers rank with but after active-duty officers)\n\nBULLET::::130. Directors of offices of executive departments\n",
"Should a Justice or Judge become \"incapacitated\" to the point at which they can no longer continue in office, the Court as a whole may notify the governor. The governor then appoints a three-member commission and, depending on their decision, may force them to retire.\n\nSection::::Configuration.:Appointment, composition, and life on the bench.:Current membership.\n\nBy tradition, a partisan balance is maintained on the Supreme Court, with the sitting governor permitted to arrange his appointments so that his party has a one-seat advantage.\n",
"Chances of impeachment, which were always remote, faded away to nothing.\n\nSection::::21st century.:Mark Fuller – Middle District of Alabama.\n\nJudge Fuller (R) was arrested on August 9, 2014 after his wife called police and reported her husband was drunk and hitting her while they were at an Atlanta hotel. He later accepted a plea deal that will allow his record to be expunged if he completes a counseling program. The Eleventh Circuit Court of Appeals reassigned all of his cases to other judges for the time being.\n",
"Justice Perry\n\nJustice Perry may refer to:\n\nBULLET::::- Antonio Perry, an Associate Justice of the Supreme Court of Hawaii\n\nBULLET::::- James E. C. Perry, an Associate Justice of the Florida Supreme Court\n\nBULLET::::- John C. Perry, an appointed Chief Justice of the Supreme Court of Wyoming Territory who died before assuming office\n\nBULLET::::- Melissa Perry, an Associate Justice of the Federal Court of Australia\n\nBULLET::::- Sion L. Perry, an Associate Justice of the Alabama Supreme Court\n\nBULLET::::- Thomas Erskine Perry, a Chief Justice of the supreme court in Bombay during the British rule of India\n",
"John Roll was appointed by President George H. W. Bush to the United States District Court for the District of Arizona. Roll was fatally shot in the 2011 Tucson shooting, which occurred on January 8, 2011 outside a Safeway supermarket in Casas Adobes, Arizona, when a gunman opened fire at a \"Congress on Your Corner\" event held by Democratic U.S. House Representative Gabrielle Giffords; Roll later succumbed to his injuries, as did five other people. Fourteen others were wounded including Giffords. Roll attended Mass earlier that morning and had decided to attend the event about an hour before the shooting. \n",
"Supreme Court Chief Justice Lorie Gildea issued a statement on the death of former Justice Wahl:\n",
"At the end of 2011 at the insistence of then and now again Chief Justice of the GA Supreme Court, Carol Huntstein made the request to the Associate Judges to allow her to step down and allow then Presiding Judge Carley to Serve out the rest of his term as Chief Justice. After his retirement Justice Huntstein would and did assume the Chief Justice role. All of the Associate Justices voted unanimously in favor of Huntstein's gesture.\n",
"On occasion, a judge will leave office at the end of a term, in which case a general election determines their replacement. If the Supreme Court needs an additional judge on a temporary basis due to illness, an unfilled position, or a justice is disqualified from sitting on a case due to a conflict of interest, the court can appoint a senior judge to serve as a judge pro tempore. Senior judges are all former, qualified judges (a minimum of 12 years on the bench) that have retired from a state court. Only former Supreme Court justices, elected Oregon circuit court judges, or elected Oregon Court of Appeals judges can be assigned to temporary service on the Supreme Court.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-10874 | Why is it dangerous to bathe during a lightning storm if the water pipes are already buried underground? | The pipes are grounded, but you are a really good conductor and closer to the ground. | [
"Bonding is particularly important for bathrooms, swimming pools and fountains. In pools and fountains, any metallic object (other than conductors of the power circuit) over a certain size must be bonded to assure that all conductors are at the same potential. Since it is buried in the ground, a pool can be a better ground than the electric panel ground. With all the conducting elements bonded, it is less likely that electric current will find a path through a swimmer. In concrete pools even the reinforcing bars of the concrete must be connected to the bonding system to ensure no dangerous potential gradients are produced during a fault.\n",
"Potential differences between pool water and railings, or shower facilities and grounded drain pipes are not uncommon as a result of neutral to earth voltages (NEV), and can be a major nuisance, but are usually not life-threatening. However, contact voltage resulting from damaged insulation on a current carrying conductor can be very dangerous, and can lead to shock or electrocution. Such a condition can arise spontaneously from mechanical, thermal, or chemical stress on insulation materials, or from unintentional damage from digging activity, freeze-frost seizing, corrosion and collapse of conduit, or even workmanship issues.\n",
"Several other factors affect the locations where electrical faults occur. Electrical faults do not always occur where fire first reaches a conduit, but preferentially occur at bends in a conduit or at locations where wires are pressed together. The elevation of an electrical line has a strong effect on its exposure to heat, since temperatures in a fire are generally highest near ceiling level, except in the immediate vicinity of the point of origin. Protection from the fire is an important consideration: being located within a wall or being covered in fiberglass insulation will offer some protection to an electrical line, and will delay electrical faults.\n",
"BULLET::::- Less subject to damage from severe weather conditions (mainly lightning, hurricanes/cyclones/typhoons, tornados, other winds, and freezing)\n\nBULLET::::- Decreased risk of fire. Overhead power lines can draw high fault currents from vegetation-to-conductor, conductor-to-conductor, or conductor-to-ground contact, which result in large, hot arcs.\n",
"A robotics research paper in 2011 suggested that robots could examine the shapes of specific manhole covers and use them to calculate their geographic position, as a double-check on GPS data.\n\nSection::::Security and safety.\n\nIn urban areas, stray voltage issues have become a significant concern for utilities. In 2004, Jodie S. Lane was electrocuted after stepping on a metal manhole cover, while walking her dog in New York City. As result of this and other incidents, increased attention has been focused on these hazards, including technical conferences on stray voltage detection and prevention.\n",
"36 Provisions for supply and use of electricity in multi-storied building more than 15 meters in height\n\n37 Conditions applicable to installations of voltage exceeding 250 Volts\n\n38 Appeal to Electrical Inspector in regard to defects\n\n39 Precautions against failure of supply and notice of failures\n\nBULLET::::- Chapter - V Safety Provisions for Electrical Installations and apparatus of voltage not exceeding 650 volts\n\n40 Test for resistance of insulation\n\n41 Connection with earth\n\n42 Earth leakage protective device\n\nChapter - VI Safety Provisions for Electrical Installations and apparatus of voltage exceeding 650 volts\n\n43 Approval by Electrical Inspector\n",
"Transmission lines are exposed to damage by gunfire, especially in rural areas. Shotgun pellets can sever fibers or damage the sheath, allowing water into the cable. Adding a ballistic shield element to the cable makes it larger and heavier and may not be economically feasible. The utility may factor gunshot damage into reliability calculations for the system. \n\nGlass under tension and exposed to acid environments loses strength; this applies to both the optical fibers and the glass reinforcement of polymers. The cable jacket and gel coating of fibres provides protection from chemical attack. \n",
"BULLET::::- Roofing Materials : In certain parts of the world uncoated lead flashing is used as a roofing material. Researchers found on-site water storage of rainwater was more acidic, and contained elevated levels of heavy metals in a study conducted in Australia from 2005-2006.\n",
"For geomembranes with a conductive backing, spark testing can be performed (ASTM D7240). For the spark testing method, water is not sprayed onto the exposed geomembrane. A high DC voltage is introduced across the geomembrane, creating a spark where the geomembrane contains a breach.\n\nSection::::Methods.:Covered geomembrane methods.\n",
"When copper roofing, gutters, and rain leaders are electrically bonded to an earth termination facility, a pathway of low electrical impedance to ground is provided, however without dedicated conduction pathways to concentrate the discharge channel, a disperse energized surface may not be the most desirable.\n",
"Pinhole leaks with pitting initiating on the exterior surface of the pipe, can occur if copper piping is improperly grounded or bonded. The phenomenon is known technically as \"stray current corrosion\" or \"electrolytic pitting\". Pin-holing due to poor grounding or poor bonding occurs typically in homes where the original plumbing has been modified; homeowners may find that a new plastic water filtration device or plastic repair union has interrupted the water pipe's electrical continuity to ground, when they start seeing pinhole water leaks after a recent install. Damage occurs rapidly, usually becoming obvious about six months after the ground interruption. Correctly installed plumbing appliances will have a copper bonding jumper cable connecting the interrupted pipe sections. Pinhole leaks from stray current corrosion can result in high plumbing bills and require the replacement of the entire water line. The cause is fundamentally an electrical defect, not a plumbing defect; once the plumbing damage is repaired, an electrician should promptly be consulted to evaluate the grounding and bonding of the entire plumbing and electrical systems.\n",
"Although both bodies of water are in close proximity to Municipal Well #1, the impact of the contamination consequences and their dangers to the residents of Crestwood that received their drinking water from the public water supply has not been studied.\n",
"The tower was connected to Reservoir No.1 atop the Hudson Palisades to which water was pumped from the Hackensack River, approximately 14 miles away. While the reservoir at the site could provide adequate pressure for water users in Hoboken, located just above sea level, water pressure was inadequate for customers atop the Palisades.\n",
"Section::::Lightning protection systems.\n\nLightning protection systems are designed to mitigate the effects of lightning through connection to extensive grounding systems that provide a large surface area connection to earth. The large area is required to dissipate the high current of a lightning strike without damaging the system conductors by excess heat. Since lightning strikes are pulses of energy with very high frequency components, grounding systems for lightning protection tend to use short straight runs of conductors to reduce the self-inductance and skin effect.\n\nSection::::Bonding.\n",
"Lightning arresters can form part of large electrical transformers and can fragment during transformer ruptures. High-voltage transformer fire barriers are required to defeat ballistics from small arms as well as projectiles from transformer bushings and lightning arresters, per NFPA 850.\n\nSection::::Components.\n",
"Section::::Residential wiring.:Special locations.:Swimming pools.\n\nFor swimming pools, Section 603 of BS 7671 defines similar zones. In some of these zones, only industrial sockets according to IEC 60309 are permitted, in order to discourage the use of portable domestic appliances with inappropriate ingress protection rating.\n\nSection::::Residential wiring.:Special locations.:Portable outdoor equipment.\n",
"Copper piping, commonly used to carry natural gas and water, reacts with concrete over a long period, slowly degrading until the pipe fails. This can lead to what is commonly referred to as slab leaks. These occur when pipes begin to leak from within the slab. Signs of a slab leak range from unexplained dampened carpet spots, to drops in water pressure and wet discoloration on exterior foundation walls. Copper pipes must be \"lagged\" (that is, \"insulated\") or run through a conduit or plumbed into the building above the slab. Electrical conduits through the slab must be water-tight, as they extend below ground level and can potentially expose wiring to groundwater.\n",
"Daisy chaining of power strips (known in building and electric codes as multi-plug adapters or relocatable power taps), whether surge protected or not, is specifically against most codes. As an example, the International Code Council's \"International Fire Code 2009 Edition\" in 605.4.2 states, \"Relocatable power taps shall be directly connected to permanently installed receptacles.\"\n\nSection::::Overload protection.\n",
"If the wrong type was used on an installation, the level of protection given could be substantially less than that intended, in particular the voltage operated type can only protect against faults or shocks to metalwork connected to the circuit ground, connected to the VOELCB, it cannot detect current leaving a live wire and running to ground by another path, such as via a person standing on the earth.\n",
"Section::::Background.\n\nBefore 1996, in the United States it was common to ground the frames of 120/240-volt permanently connected appliances (such as a clothes dryer or oven) to neutral conductors. This has been prohibited in new installations since the 1996 National Electrical Code upon local adoption by legislation or regulation. Existing installations are permitted to continue in accordance with NEC 250.140 Exception.\n",
"In North America socket-outlets located in places where an easy path to ground exists—such as wet areas and rooms with uncovered concrete floors—must be protected by a GFCI. The US \"National Electrical Code\" has required devices in certain locations to be protected by GFCIs since the 1960s. Beginning with underwater swimming pool lights (1968) successive editions of the code have expanded the areas where GFCIs are required to include: construction sites (1974), bathrooms and outdoor areas (1975), garages (1978), areas near hot tubs or spas (1981), hotel bathrooms (1984), kitchen counter sockets (1987), crawl spaces and unfinished basements (1990), near wet bar sinks (1993), near laundry sinks (2005) and in laundry rooms (2014).\n",
"In February 2014, Bersin Properties was cited for a series of code violations after pipes from the property's sprinkler system burst inside the vacant mall, flooding out some floors and sending water spewing out into the parking lots. Inspectors found that the interior of the building was not heated, and the burst pipes had caused significant damage. \n",
"The parts of a lightning protection system are air terminals (lightning rods or strike termination devices), bonding conductors, ground terminals (ground or \"earthing\" rods, plates, or mesh), and all of the connectors and supports to complete the system. The air terminals are typically arranged at or along the upper points of a roof structure, and are electrically bonded together by bonding conductors (called \"down conductors\" or \"downleads\"), which are connected by the most direct route to one or more grounding or earthing terminals. Connections to the earth electrodes must not only have low resistance, but must have low self-inductance.\n",
"\"Concentrated leaks\" occur when cracks form in the soil. The cracks must be below reservoir level, and water pressure needs to be present to maintain the open pipe. It is possible for water flow to cause the sides of the pipe to swell, closing it and thus limiting erosion. Additionally, if the soil lacks sufficient cohesion to maintain a crack, the crack will collapse and concentrated leak erosion will not progress to a breach. Cracks that allow concentrated leaks can arise due to many factors, including:\n\nBULLET::::- Cross-valley arching resulting in vertical stresses on the sides of the dam\n",
"It is not unusual for ELCB protected installation to have a second unintentional connection to Earth somewhere, one that does not pass through the ELCB sense coil. This can occur via metal pipework in contact with the ground, metal structural framework, outdoor home appliances in contact with soil, and so on.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-24079 | For all you soccer (or football fans, either name is fine), how does the offside rule work, and why is it a rule? Is it only to stop cherry picking? | Iirc, you mustn't pass the ball to a teammate who is beyond the opponents' defenders. So if you have ABA and the left A passes to the right A, it's offside. However, if at the time the pass is made it's AAB and the right A runs past B while the ball is still in flight, then it's not offside. | [
"Being in an offside position is not an offence in itself; a player who was in an offside position at the moment the ball last touched, or was played, by a teammate, must then become involved in active play in the opinion of the referee, in order for an offence to occur. When the offside offence occurs, the referee stops play and awards an indirect free kick to the defending team from the place where the offending player became involved in active play.\n",
"Section::::Application.:Offside sanction.\n\nThe sanction for an offside offence is an indirect free kick for the opponent at the place where the offence occurred, even if it is in the player’s own half of the field of play.\n\nSection::::Application.:Officiating.\n",
"In enforcing this rule, the referee depends greatly on an assistant referee, who generally keeps in line with the second-to-last opponent, the ball, or the halfway line, whichever is closer to the goal line of their relevant end. An assistant referee signals for an offside offence by first raising their flag to a vertical position and then, if the referee stops play, by partly lowering their flag to an angle that signifies the location of the offence:\n\nBULLET::::- Flag pointed at a 45-degree angle downwards: offence has occurred in the third of the pitch nearest to the assistant referee;\n",
"A player in an offside position at the moment the ball is touched or played by a teammate is only penalised for committing an \"offside offence\" if, in the opinion of the referee, s/he becomes involved in active play by:\n\nBULLET::::- Interfering with play\n\nBULLET::::- Interfering with an opponent\n\nBULLET::::- Gaining an advantage by playing the ball or interfering with an opponent when it has\n",
"An offside offence may occur if a player receives the ball directly from either a direct free kick or an indirect free kick.\n\nSince offside is judged at the time the ball is touched or played by a teammate, not when the player receives the ball, it is possible for a player to receive the ball significantly past the second-to-last opponent, or even the last opponent, without committing an offence.\n",
"BULLET::::- Flag parallel to the ground: offence has occurred in the middle third of the pitch;\n\nBULLET::::- Flag pointed at a 45-degree angle upwards: offence has occurred in the third of the pitch furthest from the assistant referee.\n",
"Offside rules are generally designed to ensure that players play together as a team, and do not consistently position one or a few players near the opponent's goal to try to receive a \"Hail Mary pass\" for an easy goal without opposing players nearby. However, the application and enforcement of offside rules can be complicated, and can sometimes be confusing for new players as well as for spectators.\n\nSection::::History.\n",
"This foul is almost always committed by the defense (any offensive player that moves into the neutral zone after setting would be charged with a false start). However, it is possible for the offense to commit this foul. If an offensive player lines up in the neutral zone, an offside foul will be called against the offense. If a defensive player jumps the snap too early and causes the offensive player to jump early, than the defense will be flagged. If a defensive player jumps too early and touches the offensive player, than encroachment will be called.\n\nSection::::History.:Penalty.\n",
"Being in an offside position is not an offence in itself, but a player so positioned when the ball is played by a teammate can be judged guilty of an offside offence if they become \"involved in active play\", \"interfere with an opponent\", or \"gain an advantage\" by being in that position.\n\nSection::::Significance.\n",
"In addition to the above criteria, in the 2017–18 edition of the Laws of the Game, the IFAB made a further clarification that, \"In situations where a player moving from, or standing in, an offside position is in the way of an opponent and interferes with the movement of the opponent towards the ball this is an offside offence if it impacts on the ability of the opponent to play or challenge for the ball.\"\n",
"BULLET::::- No offside. Most leagues play without an offside rule. Some leagues enforce a \"three-line violation\", prohibiting players from playing the ball in the air from behind the front line of their own penalty area across all three lines into the opponent's penalty area. Violations often result in a free kick for the opposing team at the front line of the offending team's penalty area.\n",
"Offside (association football)\n\nOffside is one of the laws of association football, codified in Law 11 of the Laws of the Game. The law states that a player is in an offside position if any of their body parts, except the hands and arms, are in the opponents' half of the pitch, and closer to the opponents' goal line than both the ball and the second-last opponent (the last opponent is usually, but not necessarily, the goalkeeper).\n",
"BULLET::::- Any part of the player's head, body or feet is in the opponents' half of the field (excluding the half-way line).\n\nBULLET::::- Any part of the player's head, body or feet is closer to the opponents' goal line than both the ball and the second-last opponent.\n\nThe goalkeeper counts as an opponent in the second condition, but it is not necessary that the last opponent be the goalkeeper.\n\nSection::::Application.:Offside offence.\n",
"A player may not be penalised for an offside offence when receiving the ball directly from a throw-in. Skillful attackers can sometimes take advantage of this rule by getting behind the last defender(s) to receive the throw-in and having a clear path to goal.\n",
"Similar to the Direct Free Kick, the Indirect Free Kick restarts the play. The team given an Indirect Free Kick is unable to score from the spot. It first has to touch a player on the same team in order to resume play.\n\nWhen a free kick is being performed, the opposing team has to be at least 10 yards from where the ball is going to be struck.\n",
"There is no offside offence if a player receives the ball directly from a goal kick, a corner kick, a throw-in, or a dropped-ball. It is also not an offence if the ball was last deliberately played by an opponent (except for a deliberate save). In this context, according to the IFAB, \"A ‘save’ is when a player stops, or attempts to stop, a ball which is going into or very close to the goal with any part of the body except the hands/arms (unless the goalkeeper within the penalty area).\"\n",
"Under the Laws of the Game, what constitutes an obvious goalscoring opportunity is left to the discretion of the referee; however, several factors are given to help referees decide. These are: the distance between the offence and the goal, the likelihood of keeping or gaining control of the ball, the direction of the play, the location and number of defenders.\n\nThe offence is informally known as DOGSO, an acronym for \"Denial of an Obvious Goal-Scoring Opportunity\".\n",
"Offside was probably part of the \"Cambridge Rules\" from their inception in 1848. A ruleset dating from 1856 found in the library of Shrewsbury School is probably closely modelled on the Cambridge Rules and is thought to be the oldest set still in existence. Rule No. 9 required more than three defensive players to be ahead of an attacker who plays the ball. The rule states:\n\nWhen the original Laws of the Game were first drafted in 1863 no forward passes of any sort were permitted, except for kicks from behind the goal line. The rule states: \n",
"BULLET::::- If a foul has occurred as well as misconduct, play is restarted according to the nature of the foul (either an indirect free kick, direct free kick or penalty kick to the opposing team)\n\nBULLET::::- If no foul under Law 12 has occurred, play is restarted with an indirect free kick to the opposing team\n\nSection::::Team officials.\n",
"The offside offence is neither a foul nor misconduct as it does not belong to Law 12. Like fouls, however, any play (such as the scoring of a goal) that occurs after an offence has taken place, but before the referee is able to stop the play, is nullified. The only time an offence related to offside is cautionable is if a defender deliberately leaves the field in order to deceive their opponents regarding a player's offside position, or if a forward, having left the field, returns and gains an advantage. In neither of these cases is the player being penalised for being offside; they are being cautioned for acts of unsporting behaviour.\n",
"Section::::Intentional fouls.\n\nIn certain situations, a team (specifically in the NFL) may intentionally commit a foul in order to receive a penalty that they see as advantageous:\n\nBULLET::::- A delay of game or intentional false start penalty may be sought intentionally in order to back up the line of scrimmage prior to a punt to allow for a larger punting field. (This is one of the few examples of an intentional foul that is generally tolerated as a strategy.)\n",
"Section::::Application.:Offside offence.\n\nIt is regulated in section 11.2 of the Rules, that the referee shall stop the game because of offside, if a player receives\n",
"How many are against us?\n\nKick out the ball so that we may begin the game\n\nCome, kick it here\n\nYou keep the goal\n\nSnatch the ball from that fellow if you can\n\nCome, throw yourself against him\n\nRun at him\n\nKick the ball back\n\nWell done. You aren't doing anything\n\nTo make a goal\n\nThis is the first goal, this the second, this the third\n\nDrive that man back\n\nThe opponents are, moreover, coming out on top, If you don't look out, he will make a goal\n\nUnless we play better, we'll be done for\n",
"BULLET::::- Offside: Law 11 of the laws of football, relating to the positioning of defending players in relation to attacking players when the ball is played to an attacking player by a teammate. In its most basic form, a player is offside if they are in their opponent's half of the field, and is closer to the goal line than both the second-last defender and the ball at the moment the ball is played to them by a teammate.\n",
"Opposing players must retain the required distance as stated above. Failure to do so promptly so may constitute misconduct and be punished by a caution (yellow card). If an opposing player enters the penalty area before the ball is in play, the goal kick may be retaken.\n"
] | [] | [] | [
"normal"
] | [
"The offside rule may exist to prevent cherry picking."
] | [
"false presupposition",
"normal"
] | [
"The rule exists to enforce a \"last line of defense\"."
] |
2018-02351 | Why do developers make the recoil in video games like CS:GO and Fortnite not go directly in the middle of the crosshair? Why do they spray around the crosshair instead? | Because real gun recoil goes all around, not in just one axis. It depends on many chaotic variables, like where the buttstock is on your shoulder the moment the recoil force is transferred into your body. | [
"One of Novint's earliest games was a free download called \"Haptics Life 2\", a \"Half-Life 2\" mod in which the mouse controls have been replaced with Falcon controls and 3D Force Feedback was incorporated. As a result, weapons recoils, the weight of carried objects, damage dealt to the character, and character and vehicle accelerations are all conveyed by the Falcon to the player. Each gun in the game has a different, tangible recoil.\n",
"Another issue, when choosing or developing a cartridge, is the issue of recoil. The recoil is not just the reaction from the projectile being launched, but also from the powder gas, which will exit the barrel with a velocity even higher than that of the bullet. For handgun cartridges, with heavy bullets and light powder charges (a 9×19mm, for example, might use of powder, and a bullet), the powder recoil is not a significant force; for a rifle cartridge (a .22-250 Remington, using of powder and a bullet), the powder can be the majority of the recoil force.\n",
"BULLET::::- Continuous Level of detail (LOD) is designed to improve performance by dynamically adjusting visual detail as TressFX-enabled objects move towards and away from the player’s point of view. This is done by rendering fewer hairs when far away from an object but making each hair thicker, thus reducing computational time but maintaining the same look and aesthetic.\n\nBULLET::::- New functionality to support rendering for grass and fur in addition to hair.\n",
"Recoil (video game)\n\nRecoil is a vehicular combat tank-based Microsoft Windows video game. It involves the player piloting an experimental tank known as the \"BFT\" (Battle Force Tank) through various missions. There is \n\na heavy influence on collecting various weapons for the BFT throughout the game. It was developed by Zipper Interactive, a subsidiary of its parent publisher, Electronic Arts, and uses the same game engine as \"MechWarrior 3\".\n\nSection::::Plot.\n",
"In addition to the overall mass of the gun, reciprocating parts of the gun will affect how the shooter perceives recoil. While these parts are not part of the ejecta, and do not alter the overall momentum of the system, they do involve moving masses during the operation of firing. For example, gas-operated shotguns are widely held to have a \"softer\" recoil than fixed breech or recoil-operated guns. (Although many semi-automatic recoil and gas-operated guns incorporate recoil buffer systems into the stock that effectively spreads out peak felt recoil forces.) In a gas-operated gun, the bolt is accelerated rearwards by propellant gases during firing, which results in a forward force on the body of the gun. This is countered by a rearward force as the bolt reaches the limit of travel and moves forwards, resulting in a zero sum, but to the shooter, the recoil has been spread out over a longer period of time, resulting in the \"softer\" feel.\n",
"A \"recoil compensator\" is designed to direct the gases upwards at roughly a right angle to the bore, in essence making it a small rocket that pushes the muzzle downwards, and counters the \"flip\", or rise of the muzzle caused by the high bore line of most firearms. These are often found on \"raceguns\" used for action shooting and in heavy, rifle caliber handguns used in metallic silhouette shooting. In the former case, the compensator serves to keep the sights down on target for a quick follow-up shot, while in the latter case they keep the heavy recoil directed backwards, preventing the pistol from trying to twist out of the shooter's grip.\n",
"Recoil is also a key issue in rifle stock design. Heavy recoiling rifles should have wide butts, with a good recoil pad to absorb the force of recoil, and a comb that is straight or slopes down towards the action, so that it does not push into the shooter's face under recoil.\n",
"Section::::Perception of recoil.\n",
"Section::::Critical reception.\n\n\"Next Generation\" reviewed the PC version of the game, rating it three stars out of five, and stated that \"\"Recoil\"'s selling points are of the fast and fiery variety, but due to its brevity, this blockbuster may ultimately be little more than a weekend diversion, which prevents us from giving it a higher score.\"\n",
"Recoil (1998 film)\n\nRecoil is a 1998 action/thriller film written by Richard Preston, Jr., produced by Richard Pepin and Joseph Merhi, directed by Art Camacho and starring Gary Daniels, Gregory A. McKinney, and Robin Curtis.\n\nSection::::Plot.\n",
"There are many factors that determine how a shooter will perceive the free recoil of his or her small arm. Some of the factors are, but not limited to: body mass; body frame; experience; shooting position; recoil suppression equipment; small arm fit and or environmental stressors.\n\nSection::::Calculating free recoil.\n\nThere are several different ways to calculate free recoil. However, the two most common are the momentum short and long forms.\n",
"BULLET::::- Khyber Pass copy: a firearm manufactured by cottage gunsmiths in the Khyber Pass region between Pakistan and Afghanistan.\n\nBULLET::::- kick: The recoil or backward momentum of a firearm when it is discharged. Newton's third law suggests that the recoil of the arm should be exactly equal and opposite to forward momentum of the projectile(s). However the muzzle blast will normally add its own momentum to that of the projectile, so too increasing total recoil of the arm – unless the muzzle blast's effect is (partially) neutralised, or even reversed, by means of a muzzle brake.\n\nSection::::L.\n",
"The next major step was the seminal \"GoldenEye 007\" (1997), which introduced the feature to consoles by incorporating the manual aiming of Sega's light gun shooter \"Virtua Cop\" (1994) in its first-person shooter gameplay. According to creator Martin Hollis: \"We ended up with innovative gameplay, in part because we had \"Virtua Cop\" features in a FPS: A gun that only holds 7 bullets and a reload button, lots of position dependant hit animations, innocents you shouldn’t kill, and an aiming mode. When you press R in \"GoldenEye\", the game basically switches to a \"Virtua Cop\" mode.\"\n",
"BULLET::::- Gameplay enhancing: The lens of the game's sniper rifle has been changed to show physically correct reflections of the environment behind the player. To achieve this in a traditional rasterized renderer another view would have been needed to be rendered, stored in a texture and projected back onto the lens. Enabling this effect in the ray tracer had only a 1% performance impact on the frame rate.\n\nBULLET::::- Glass: To demonstrate the properties of a ray traced glass shader a dome has been shown consisting of a glass surface that correctly reflects the surrounding environment including all dynamic objects.\n",
"For small arms, the way in which the shooter perceives the recoil, or \"kick\", can have a significant impact on the shooter's experience and performance. For example, a gun that is said to \"kick like a mule\" is going to be approached with trepidation, and the shooter may anticipate the recoil and flinch in anticipation as the shot is released. This leads to the shooter jerking the trigger, rather than pulling it smoothly, and the jerking motion is almost certain to disturb the alignment of the gun and may result in a miss. The shooter may also be physically injured by firing a weapon generating recoil in excess of what the body can safely absorb or restrain; perhaps getting hit in the eye by the rifle scope, hit in the forehead by a handgun as the elbow bends under the force, or soft tissue damage to the shoulder, wrist and hand; and these results vary for individuals. In addition, as pictured on the right, excessive recoil can create serious range safety concerns, if the shooter cannot adequately restrain the firearm in a down-range direction.\n",
"where formula_25 is the angle above the aim angle at which the bullet leaves the barrel, formula_26formula_27 is the time of travel of the bullet in the barrel (because of the acceleration formula_28 the time is longer than formula_29 : formula_30) and \"L\" is the distance the bullet travels from its rest position to the tip of the barrel. The angle at which the bullet leaves the barrel above the aim angle is then given by:\n",
"BULLET::::- The rifle grenade could be fired at an angle from the shoulder or braced under the arm at launcher positions 3 through 6 for direct-fire or close-range support. (The lower the number, the bigger the momentum and the harder the recoil.) This was not done with the M7 Auxiliary Grenade Cartridge fitted. This was due to the increased recoil it generated making it unsafe and easily capable of injuring the firer.\n\nBULLET::::- X means unsafe. The range would be within the grenade's minimum effective range, potentially harming the firer and any friendly personnel nearby.\n",
"BULLET::::- As of recently, the game \"Payday 2\" offers a VR option where you can essentially wield dual pistols in a gun kata form, able to point your pistols in opposite directions to engage targets as in the film \"Equilibrium\", or more so along the lines of \"John Wick\", where you can punch enemies backwards and unload a few shots into them.\n",
"BULLET::::- : The high side of a shooter in motion is the side that it is curling away from, ie., the side outside the curve of the shooter's path. To \"hit on the high side\" is to hit the stationary rock off-centre on the side the shooter came from.\n\nBULLET::::- : Any shot where the aim is to move another stone; the opposite of a draw\n\nBULLET::::- : A takeout rock that, after making contact with another rock, slides (rolls) into a designated area\n",
"Section::::Design.:Active Recoil Channel.\n\nThe Active Recoil Channel (ARC) is a fairly long, deep channel that runs along the bottom of the head right behind the face. The idea behind the design is that when the ball impacts with the face, the channel will \"actively\" flex and compress which will lead to lower spin off the face and higher ball speed. Clubs that don't have the channel tend to be more rigid and produce excess spin while losing ball speed.\n\nSection::::Design.:Head Structure Changes.\n",
"The downside of this method is that it's not as accurate as a traditional light gun. The additional IR image processing results in lag or \"cursor drift\", i.e. when quickly sweeping the light gun across the screen the crosshair will seem to drag slightly behind where the light gun is actually pointing.\n\nSection::::Design.:Image capture.\n",
"\"Deus Ex\" features a head-up display crosshair, whose size dynamically shows where shots will fall based on movement, aim, and the weapon in use; the reticle expands while the player is moving or shifting their aim, and slowly shrinks to its original size while no actions are taken. How quickly the reticle shrinks depends on the character's proficiency with the equipped weapon the number of accuracy modifications added to the weapon, and the level of the \"Targeting\" nano-augmentation.\n",
"Magnum pistol cartridges reverse this power/accuracy tradeoff by using lower-density, slower-burning powders that give high load density and a broad pressure curve. The downside is the increased recoil and muzzle blast from the high powder mass, and high muzzle pressure.\n",
"Hollywood and video game depictions of firearm shooting victims being thrown several feet backwards are inaccurate, although not for the often-cited reason of conservation of energy, which would also be in error because conservation of momentum would apply. Although energy (and momentum) must be conserved (in a closed system), this does not mean that the kinetic energy or momentum of the bullet must be fully deposited into the target in a manner that causes it to fly dramatically away. \n",
"Section::::Clearances and Tolerances.:Action.\n\nThe primary purposes of the firearm action are holding the cartridge in place in the chamber, and providing a way to ignite the propellant. In a single-shot action, little additional functionality is provided, while in a semi-automatic firearm the action also taps energy from the firing process for cycling to fire the next round. From an accuracy perspective, the primary goal of the action is to achieve a consistent placement of the cartridge in the chamber every shot.\n"
] | [
"Certain video game directors should make recoil fire directly into the middle of the cross hair and not around it. "
] | [
"Realistically recoil normally fires all around, and not just in one axis, making the game more realistic than others."
] | [
"false presupposition"
] | [
"Certain video game directors should make recoil fire directly into the middle of the cross hair and not around it. ",
"Certain video game directors should make recoil fire directly into the middle of the cross hair and not around it. "
] | [
"normal",
"false presupposition"
] | [
"Realistically recoil normally fires all around, and not just in one axis, making the game more realistic than others.",
"Realistically recoil normally fires all around, and not just in one axis, making the game more realistic than others."
] |
2018-02783 | What are vacuum tubes on amps? Do they actually do anything? | Vacuum tubes predate transistors, and act as amplifiers. Both work on the basic idea that a small current, your audio signal, can proportionately regulate a large current, from the wall outlet, for example. How they each achieve that is very different. Transistors, which everyone knows are how they make computers, were originally invented to be amplifiers. Vacuum tube amps and transistor amps are both analog devices. The analog vs. digital distinction comes up in your recording and playback equipment. A record is analog, in that the signal is only as precise as the carving in the vinyl and the player's manufacture. Magnetic tape is also analog. Digital audio is produced by taking a measurement of the current generated by a microphone regularly over time. That's sampling and your sample rate. This analog signal off the mic is passed through an Analog to Digital Converter, which turns an analog sine wave into a number; the more bits an ADC has, the more accurate the representation of the original signal. These numeric values are fed into a Digital to Analog Converter, which reverses the process, and then that is piped through your amplifier. Unlike your record or tape, where the sound can change because the media physically wears out, or expands and contracts with temperature, a digital sample will be reproduced effectively exactly every single time, forever. Audiophiles like to think vacuum amplifiers have a "warmer" sound, which might be true, but what if you don't always want that? I think there are too many variables to make a comparison meaningful. Every piece of equipment, how they're wired, and the shape, material, and contents of your room are going to change how your audio sounds. If you compare a $1k vacuum amplifier to a $1k opamp in a double blind test, I wouldn't expect you to be able to discern a difference. | [
"Although vacuum tubes have been largely replaced by solid-state devices in most amplifying, switching, and rectifying applications, there are certain exceptions. In addition to the special functions noted above, tubes have some niche applications.\n",
"Section::::History.\n\nThe first practical device that could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Vacuum tubes were used in almost all amplifiers until the 1960s–1970s when the transistor, invented in 1947, replaced them. Today, most amplifiers use transistors, but vacuum tubes continue to be used in some applications.\n",
"Section::::Solid state vs. vacuum tube amplifiers.\n\nModern RF power amplifiers use solid-state devices such as bipolar junction transistors and MOSFETs. Transistors and other modern solid-state devices have replaced vacuum tubes in most electronic devices, but tubes are still used in some high-power transmitters (see \"Valve RF amplifier\"). Although mechanically robust, transistors are electrically fragile – they are easilly damaged by excess voltage or current. Tubes are mechanically fragile, but electrically robust – they can handle remarkably high electrical overloads without appreciable damage.\n\nSection::::Applications.\n",
"BULLET::::- 7551 - Noval-base beam power pentode with 12-15 volt filament. 6.3 volt filament version was 7558. Used in telephony, RF amplification, and more rarely AF amplification.\n\nBULLET::::- 7554 – Ceramic/metal \"pencil\"-type disk-seal SHF power triode up to 5 GHz\n",
"Amperex tubes were original equipment parts in many models of Tektronix and Hewlett-Packard test equipment. Although Amperex stopped making vacuum tubes long ago, hoards of new old stock (especially the original \"Bugle Boy\" series) are traded for profit, and other manufacturers produced compatible tubes more recently.\n\nSection::::Hicksville, New York.\n",
"All THD amplifiers utilize vacuum tubes for their preamp and power amp sections. Until the release of the Flexi-50 amplifier, they were all class A (the Flexi-50 is class AB). This is why early THD UniValve prototypes were sometimes referred to as the “Pure Class A Head.”\n\nSection::::Amplifier basics.:Printed circuit boards.\n",
"BULLET::::- 4062A – Ceramic/metal \"pencil\"-type disk-seal SHF power triode up to 4 GHz, mu = 100, P = 10 W\n\nBULLET::::- 4065 – Directly heated electrometer triode, grid current ≤125 fA, 4-pin all-glass pigtail, for probe amplifiers\n\nBULLET::::- 4205 – Directly heated power triode, 4-pin bayonet base with offset pin\n\nBULLET::::- 4270A (3C/350E) – Directly heated power triode, 4-pin base\n\nBULLET::::- 4275 – Directly heated power triode, 4-pin base\n\nBULLET::::- 4300 – Directly heated power triode, 4-pin base\n",
"Section::::Common vacuum tube types.\n\nSection::::Common vacuum tube types.:Triode 12AX7/ECC83.\n",
"BULLET::::- EQ171 – Nonode, \"gnome tube\"\n\nSection::::A - 4 V heater.:E - 6.3 V heater.:ES.\n",
"Section::::History and development.:Use in electronic computers.:Colossus.\n",
"Section::::History and development.:Multifunction and multisection tubes.\n",
"Section::::History.:\"Trippin' With Dr. Faustus\" and Rockosmos (2016–present).\n",
"Section::::History and development.:Miniature tubes.\n",
"Section::::History.:Live performances.\n",
"Tubes designed for high gain audio applications may have twisted heater wires to cancel out stray electric fields, fields that could induce objectionable hum into the program material.\n\nHeaters may be energized with either alternating current (AC) or direct current (DC). DC is often used where low hum is required.\n\nSection::::History and development.:Use in electronic computers.\n",
"Valve amplifier\n\nA valve amplifier or tube amplifier is a type of electronic amplifier that uses vacuum tubes to increase the amplitude or power of a signal. Low to medium power valve amplifiers for frequencies below the microwaves were largely replaced by solid state amplifiers in the 1960s and 1970s.\n\nValve amplifiers can be used for applications such as guitar amplifiers, satellite transponders such as DirecTV and GPS, audiophile stereo amplifiers, military applications (such as radar) and very high power radio and UHF television transmitters.\n\nSection::::History.\n\nSection::::History.:Origins.\n",
"Valve audio amplifier\n\nA valve audio amplifier (UK) or vacuum tube audio amplifier (United States) is a valve amplifier used for sound reinforcement, sound recording and reproduction.\n\nUntil the invention of solid state devices such as the transistor, all electronic amplification was produced by valve (tube) amplifiers. While solid-state devices prevail in most audio amplifiers today, valve audio amplifiers are still used where their audible characteristics are considered pleasing, for example in music performance or music reproduction.\n\nSection::::Instrument and vocal amplification.\n",
"BULLET::::- 7868 – Beam power pentode, magnoval pinbase version of 7591. Found in many of the once popular \"Challenger\" series \"PA\" amps made by Bogen Communications, also found in some guitar amplifiers made by Ampeg.\n\nBULLET::::- 7895 – Improved 7586 \"Nuvistor\" with higher mu\n\nSection::::List of \"EIA\" professional tubes.:8000s.\n\nBULLET::::- 8011 – \"Micropup\"-type UHF power triode up to 600 MHz\n\nBULLET::::- 8056 – \"Nuvistor\" triode for low supply voltage\n\nBULLET::::- 8058 – \"Nuvistor\" triode with grid on envelope and an anode cap, for grounded-grid UHF circuits\n\nBULLET::::- 8069 – 8 kV/23...1000 µA Corona voltage reference, cathode cylinder and anode top cap\n",
"Note: The 4000 numbers identify special-quality valves though SQ valves CV numbered before that rule came in retain their original CV number.\n",
"BULLET::::- 4641 – Directly heated power triode, 4-pin base\n\nBULLET::::- 4671/E1C (955) – Indirectly heated \"Acorn\" triode\n\nBULLET::::- 4672/E1F (954) – Indirectly heated \"Acorn\" pentode\n\nBULLET::::- 4674 – Indirectly heated \"Acorn\" diode\n\nBULLET::::- 4675 – 4671/E1C with a 4 Volts heater\n\nBULLET::::- 4676 – 4672/E1F with a 4 Volts heater\n\nBULLET::::- 4678 (EM1) – Indirectly heated tuning indicator\n\nBULLET::::- 4683 – Directly heated power triode, 8-pin base\n\nBULLET::::- 4695/E2F (956) – Indirectly heated \"Acorn\" pentode\n\nSection::::List of \"EIA\" professional tubes.:5000s.\n\nBULLET::::- 5331, 5332, 5514 – Directly heated power triodes, 4-pin base with anode top cap\n",
"BULLET::::- XX1192 – 1. Gen. inverter, 1-stage image intensifier\n\nBULLET::::- XX1200 – 1. Gen. inverter, 1-stage image intensifier\n\nBULLET::::- XX1211 – 1. Gen. inverter, 3-stage image intensifier\n\nBULLET::::- XX1270 – 1. Gen. inverter, 2-stage image intensifier\n\nBULLET::::- XX1400 – 2. Gen. inverter, 1-stage image intensifier\n\nBULLET::::- XX1430 – 1. Gen. inverter, 1-stage image intensifier\n\nBULLET::::- XX1510 – 1. Gen. 3-stage image intensifier\n\nBULLET::::- XX1610 – 2. Gen. image intensifier\n\nBULLET::::- XX1800 – 2. Gen. proximity focused, 1-stage image intensifier\n\nSection::::List of \"Pro Electron\" professional tubes.:Y - Vacuum tubes.\n\nSection::::List of \"Pro Electron\" professional tubes.:Y - Vacuum tubes.:YA.\n",
"BULLET::::- YD1302 – 55 W, Air-cooled, UHF power triode\n\nBULLET::::- YD1332 – 250 W, Air-cooled, UHF power triode\n\nBULLET::::- YD1333 – 100 W, Air-cooled, UHF power triode\n\nBULLET::::- YD1334 – 110 W, Air-cooled, UHF power triode\n\nBULLET::::- YD1335 – 550 W, Air-cooled, UHF power triode\n\nBULLET::::- YD1336 – 220 W, Air-cooled, UHF power triode\n\nBULLET::::- YD1342 – 30 MHz, 530 kW, Water-cooled RF power triode\n\nBULLET::::- YD1352S (8867, DX334) – \"Neotron\", a field-effect tube, 5 MHz, 3 kW, water-cooled, magnetically beamed RF power pulse generator triode\n\nSection::::List of \"Pro Electron\" professional tubes.:Y - Vacuum tubes.:YG.\n",
"BULLET::::- 805 – Directly heated H.F. high-mu triode, giving 140 watts up to 30 MHz and 70 watts at 85 MHz.\n\nBULLET::::- 806 – Directly heated H.F. high-mu triode, giving 390 watts up to 30 MHz 195 watts at 100 MHz.\n",
"Section::::History and development.:Improvements in construction and performance.\n",
"Section::::Types.:Vacuum tube.\n\nVacuum tubes (called \"valves\" in British English) were by far the dominant active electronic components in most instrument amplifier applications until the 1970s, when solid-state semiconductors (transistors) started taking over. Transistor amplifiers are less expensive to build and maintain, reduce the weight and heat of an amplifier, and tend to be more reliable and more shock-resistant. Tubes are fragile and they must be replaced and maintained periodically. As well, serious problems with the tubes can render an amplifier inoperable until the issue is resolved. \n"
] | [
"Vacuum tubes don't do anything. "
] | [
"Vacuum tubes act as amplifiers. "
] | [
"false presupposition"
] | [
"Vacuum tubes don't do anything. ",
"Vacuum tubes don't do anything. "
] | [
"normal",
"false presupposition"
] | [
"Vacuum tubes act as amplifiers. ",
"Vacuum tubes act as amplifiers. "
] |
2018-01296 | Why does closing a door lessen the amount of sound that enters a room even though sound travels best through solids? | Because sound will be lost when it goes from one medium to another (for example: from air to water). When you close a door, the sound has to make this 'medium shift' twice: from the air to the door (solid), and from the door to the air. | [
"BULLET::::- Sound isolation: Noise isolation is isolating noise to prevent it from transferring out of one area, using barriers like deadening materials to trap sound and vibrational energy. Example: In home and office construction, many builders place sound-control barriers (such as fiberglass batting) in walls to deaden the transmission of noise through them.\n",
"Section::::Applications.\n\nAcoustic absorption is critical in areas such as:\n\nBULLET::::- Soundproofing\n\nBULLET::::- Sound recording and reproduction\n\nBULLET::::- Loudspeaker design\n\nBULLET::::- Acoustic transmission lines\n\nBULLET::::- Room acoustics\n\nBULLET::::- Architectural acoustics\n\nBULLET::::- Sonar\n\nBULLET::::- Noise Barrier Walls\n\nSection::::Applications.:Anechoic chamber.\n",
"The use of acoustic foam and other absorbent means is less effective against this transmitted vibration. The user is advised to break the connection between the room that contains the noise source and the outside world. This is called acoustic decoupling. Ideal decoupling involves eliminating vibration transfer in both solid materials and in the air, so air-flow into the room is often controlled. This has safety implications: inside decoupled space, proper ventilation must be assured, and gas heaters cannot be used.\n\nSection::::Noise cancellation.\n",
"A common application is with electric guitar to remove hum and hiss noise caused by distortion effects units. A noise gate does not remove noise from the signal itself. When the gate is open, both the signal and the noise will pass through. Even though the signal and the unwanted noise are both present in open gate status, the noise is not as noticeable. The noise becomes most noticeable during periods where the main signal is not present, such as a bar of rest in a guitar solo. Gates typically feature 'attack', 'release', and 'hold' settings and may feature a 'look-ahead' function.\n",
"The energy density of sound waves decreases as they spread out, so that increasing the distance between the receiver and source results in a progressively lesser intensity of sound at the receiver. In a normal three-dimensional setting, with a point source and point receptor, the intensity of sound waves will be attenuated according to the inverse square of the distance from the source.\n\nSection::::Damping.\n",
"BULLET::::- Diffraction is the change of a sound wave’s propagation to avoid obstacles. According to Huygens’ principle, when a sound wave is partially blocked by an obstacle, the remaining part that gets through acts as a source of secondary waves. For instance, if you are in a room and you shout with the door open, the people on either side of hallway will hear it. The sound waves that left the door become a source, then spread out in the hallway. The sounds from the surroundings might interfere with the acoustic space like the example given.\n\nSection::::Uses of acoustic space.\n",
"Open apertures, dispersion cylinders (large diameter and usually wall height), carefully sized and placed panels, and irregular room shapes are another way of either absorbing energy or breaking up resonant modes. For absorption, as with large foam wedges seen in anechoic chambers, the loss occurs ultimately through turbulence, as colliding air molecules convert some of their kinetic energy into heat. Damped panels, typically consisting of sheets of hardboard between glass fibre battens, have been used to absorb bass, by allowing movement of the surface panel and energy absorption by friction with the fibre battens.\n",
"Most vibration / sound transfer from a room to the outside occurs through mechanical means. The vibration passes directly through the brick, woodwork and other solid structural elements. When it meets with an element such as a wall, ceiling, floor or window, which acts as a sounding board, the vibration is amplified and heard in the second space. A mechanical transmission is much faster, more efficient and may be more readily amplified than an airborne transmission of the same initial strength.\n",
"BULLET::::- Noise absorption: In architectural acoustics, unwanted sounds can be absorbed rather than reflected inside the room of an observer. This is useful for noises with no point source and when a listener needs to hear sounds only from a point source and not echo reflections. Example: In a recording studio, sound proofing is accomplished with bass traps and anechoic chambers. Wallace Sabine, an American physicist, is credited with studying sound reverberations in 1900, and Carl Eyring revised his equations in 1930 for Bell Labs. Another example is the ubiquitous use of dropped ceilings and acoustical tiles in modern office buildings with high ceilings. Submarine hulls have special coatings that absorb sound.\n",
"BULLET::::- Acid strength – refers to the tendency of an acid, symbolised by the chemical formula HA, to dissociate into a proton, H, and an anion, A.\n\nBULLET::::- Acoustic board – is a special kind of board made of sound absorbing materials. Its job is to provide sound insulation. Between two outer walls sound absorbing material is inserted and the wall is porous. Thus, when sound passes through an acoustic board, the intensity of sound is decreased. The loss of sound energy is balanced by producing heat energy.\n",
"Noise gates often implement hysteresis, that is, they have two thresholds: one to open the gate and another, set a few dB below, to close the gate. This means that once a signal has dropped below the close threshold, it has to rise to the open threshold for the gate to open, so that a signal that crosses over the close threshold regularly does not open the gate and cause chattering. A longer hold time also helps to avoid chattering, as described above.\n\nSection::::Roles.\n",
"Section::::Interior space acoustics.\n\n This is the science of controlling a room's surfaces based on sound absorbing and reflecting properties. Excessive reverberation time, which can be calculated, can lead to poor speech intelligibility.\n",
"Negative pressure is generated and maintained by a ventilation system that removes more exhaust air from the room than air is allowed into the room. Air is allowed into the room through a gap under the door (typically about one half-inch high). Except for this gap, the room should be as airtight as possible, allowing no air in through cracks and gaps, such as those around windows, light fixtures and electrical outlets. Leakage from these sources can compromise or eliminate room negative pressure.\n\nSection::::Smoke test.\n",
"They must be isolated from outside influences (e.g., planes, trains, automobiles, snowmobiles, elevators, pumps, ...; indeed any source of sound which may interfere with measurements inside the chamber) and they must be physically large. The first, environmental isolation, requires in most cases specially constructed, nearly always massive, and likewise thick, walls, floors, and ceilings. Such chambers are often built as spring supported isolated rooms within a larger building. The National Research Council in Canada has a modern anechoic chamber, and has posted a video on the Web, noting these as well as other constructional details. Doors must be specially made, sealing for them must be acoustically complete (no leaks around the edges), ventilation (if any) carefully managed, and lighting chosen to be silent.\n",
"The second requirement follows in part from the first and from the necessity of preventing reverberation inside the room from, say, a sound source being tested. Preventing echoes is almost always done with absorptive foam wedges on walls, floors and ceilings, and if they are to be effective at low frequencies, these must be physically large; the lower the frequencies to be absorbed, the larger they must be.\n\nAn anechoic chamber must therefore be large to accommodate those absorbers and isolation schemes, but still allow for space for experimental apparatus and units under test.\n\nSection::::Electrical and mechanical analogy.\n",
"A variety of measures aim to reduce hazardous noise at its source. Programs such as Buy Quiet and the National Institute for Occupational Safety and Health (NIOSH) Prevention through design promote research and design of quiet equipment and renovation and replacement of older hazardous equipment with modern technologies. Physical materials, such as foam, absorb sound and walls to provide a sound barrier that modifies existing systems that decrease hazardous noise at the source.\n\nSection::::Approaches to noise control.:Path.\n",
"The \"Release\" control is used to define the length of time the gate takes to change from open to fully closed. It is the fade-out duration. A fast release abruptly cuts off the sound, whereas a slower release smoothly attenuates the signal from open to closed, resulting in a slow fade-out. If the release time is too short, a click can be heard when the gate re-opens. Release is the second-most common control to find on a gate, after Threshold.\n",
"Of the electromechanical types, two approaches are used:\n\nBULLET::::- Disengaging – the motor driven mechanism (the opener portion) is not joined to the closer, but only engages the closer when it is needed to open the door. When opening the door manually, the opener portion is still, so the opening is smooth and quiet.\n\nBULLET::::- Permanently engaged – the motor driven mechanism (the opener portion) is always joined to the closer (if present) and to the door. When opening the door manually, the user is also driving the opener portion, so the opening is rough and noisy.\n",
"Door closers also play a role in maintaining average cooling temperatures, since colder air does not vent out for longer periods if the door remains closed for longer periods on average. \n\nSection::::Usage.:Security.\n\nDoor closers also play a role in general security and can be found on building entrance doors because they close the door once somebody has passed through, often latching the lock and stopping unwanted persons from gaining access to a building in behind someone if they have not pulled the door closed behind them.\n\nSection::::Usage.:Noise Control.\n",
"That being said one of the first references concerning a device to close a door can be found in the writings of Hero of Alexandria who describes his \"automata\" which controlled the doors of temples, both opening and closing them automatically among other things. Weights and levers have also been used to close doors, giving them automation. Another device for smaller domestic doors and very simple, involved a loop of rope (skein) fixed to the door frame that was twisted, having a piece of wood placed in between the twists which then rests on the door (like a mini ballista arm), the opening of the door twists the skein further, when the door is released the rope wanting to untwist pushes the arm back against the door thereby closing it. \n",
"The \"Attack\" control is used to define the length of time the gate takes to change from closed to fully open. It is the fade-in duration.\n\nThe \"Hold\" control is used to define the length of time the gate will stay fully open after the signal falls below the threshold, and before the Release period is commenced. The hold control is often set to ensure the gate does not close during short pauses between words or sentences in a speech signal.\n",
"If a specular reflection from a hard flat surface is giving a problematic echo then an acoustic diffuser may be applied to the surface. It will scatter sound in all directions. This is effective to eliminate pockets of noise in a room.\n\nSection::::Room within a room.\n\nA room within a room (RWAR) is one method of isolating sound and preventing it from transmitting to the outside world where it may be undesirable.\n",
"Air curtains consume electrical energy during their operation, but can be used for net energy savings by reducing the heat transfer (via mass transfer when air mixes across the threshold) between two spaces. However, a closed and well-sealed physical door is much more effective in reducing energy loss. Both technologies are often utilized in tandem; when the solid door is opened the air curtain turns on, minimizing air exchange between inside and outside.\n",
"But there are very few door models with an R-value close to 10 (which is far less than the R-40 walls or the R-50 ceilings of super-insulated buildings – Passive Solar and Zero Energy Buildings). Typical doors are not thick enough to provide very high levels of energy efficiency.\n\nMany doors may have good R-values at their center, but their overall energy efficiency is reduced because of the presence of glass and reinforcing elements, or because of poor weatherstripping and the way the door is manufactured.\n",
"BULLET::::2. Impact transmission - a noise source in one room results from an impact of an object onto a separating surface, such as a floor and transmits the sound to an adjacent room. A typical example would be the sound of footsteps in a room being heard in a room below. Acoustic control measures usually include attempts to isolate the source of the impact, or cushioning it. For example carpets will perform significantly better than hard floors.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-24313 | Does my phone check every second to see if the current time matches the time of a previously set alarm so it can trigger that alarm? How are these scheduled events generally triggered? | Your phone is literally just a computer. Computers have a piece of software called a scheduler. Schedulers are a fundamental part of the operating system and do way more than simply check for your alarms. They coordinate all of the other software that wants a piece of the main processor's time. So, the scheduler has a list of all of the things that need to happen. If you are browsing a web page while also listening to music then the scheduler allots time on the CPU for each of those tasks. It will also check to see if there are things that need to happen in the future. If a piece of software is set to wait for 1 second before checking on something then it will tell the scheduler that it needs CPU time in 1 second. Similarly, your phone's alarm will be set into the future and the scheduler will notice when the current time matches the alarm time. Since the scheduler is always running and always making sure that things happen on time, an alarm is one of the easier things for it to do. | [
"Other \"rules\" address failure to exit premises, which results in arming all zones in Stay Mode and a one-time, automatic restart of exit delay. However, if there is an exit error, an immediate local alarm will sound.\n\nSection::::Audio and video verification.\n\nAlarms that utilize either audio, video, or combination of both audio and video verification technology give security companies, dispatchers, police officers, and property managers more reliable data to assess the threat level of a triggered alarm.\n",
"Alarms are triggered by a special type of toast notification for alarms. Due to hardware limitations, alarms cannot always appear on certain devices that are powered off. In order for an alarm to ring on a PC that is off, InstantGo must be included in the device. Prior to the Windows 10 Creators Update, Alarms & Clock was the only app that could make an alarm notification appear during quiet hours, but third-party alarms running on Windows 10 version 1704 or later also ring during quiet hours by default.\n\nSection::::World Clock.\n",
"An RWT is not required during a calendar week in which an RMT is scheduled. No testing has to be done during a calendar week in which all parts of the EAS (header burst, attention signal, audio message, and end of message burst) have been legitimately activated.\n",
"Depending upon the zone triggered, number and sequence of zones, time of day, and other factors, the alarm monitoring center may automatically initiate various actions. Central station operators might be instructed to call emergency services immediately, or to first call the protected premises or property manager to try to determine if the alarm is genuine. Operators could also start calling a list of phone numbers provided by the customer to contact someone to go check on the protected premises. Some zones may trigger a call to the local heating oil company to go check on the system, or a call to the owner with details of which room may be getting flooded. Some alarm systems are tied to video surveillance systems so that current video of the intrusion area can be instantly displayed on a remote monitor, not to mention recorded.\n",
"The world clock list detects the user's location and shows the local time on the user's location on a world map. Users can search for additional locations to show on the map. When other times are displayed, the World Clock feature calculates how far ahead or behind the other times are from the user's local time. It is also possible to compare what nonlocal times will be at a specified local time. When the map is minimized horizontally, the times are shown in a vertical list below the map instead of on it.\n",
"Increasing deployment of voice over IP technology (VoIP) is driving the adoption of broadband signaling for alarm reporting. Many sites requiring alarm installations no longer have conventional telephone lines (POTS), and alarm panels with conventional telephone dialer capability do not work reliably over some types of VoIP service.\n",
"RTCs often have an alternate source of power, so they can continue to keep time while the primary source of power is off or unavailable. This alternate source of power is normally a lithium battery in older systems, but some newer systems use a supercapacitor, because they are rechargeable and can be soldered. The alternate power source can also supply power to battery backed RAM.\n\nSection::::Timing.\n",
"A second video solution can be incorporated into a standard panel, which sends the central station an alarm. When a signal is received, a trained monitoring professional accesses the on-site digital video recorder (DVR) through an IP link to determine the cause of the activation. For this type of system, the camera input to the DVR reflects the alarm panel's zones and partitioning, which allows personnel to look for an alarm source in multiple areas.\n",
"Many alarm clocks have radio receivers that can be set to start playing at specified times, and are known as \"clock radios\". Some alarm clocks can set multiple alarms. A \"progressive alarm clock\", can have different alarms for different times (see Next-Generation Alarms) and even play music of your choice. Most modern televisions, mobile phones and digital watches have alarm clock functions to turn on or make sounds at a specific time.\n\nSection::::Types.\n\nSection::::Types.:Traditional.\n",
"Some alarm systems use real-time audio and video monitoring technology to verify the legitimacy of an alarm. In some municipalities around the United States, this type of alarm verification allows the property it is protecting to be placed on a \"verified response\" list, allowing for quicker and safer police responses.\n\nThe first video home security system was patented on December 2, 1969 to inventor Marie Brown. The system used television surveillance.\n\nSection::::Access control and bypass codes.\n",
"More recently, time clocks have started to adopt technology commonly seen in phones and tablets – called 'Smartclocks'. The \"state of the art\" smartclocks come with multi-touch screens, full color displays, real time monitoring for problems, wireless networking and over the air updates. Some of the smartclocks use front-facing cameras to capture employee clock-ins to deter \"buddy clocking\", a problem usually requiring expensive biometric clocks. With the increasing popularity of cloud-based software, some of the newer time clocks are built to work seamlessly with the cloud.\n\nSection::::Types.\n\nSection::::Types.:Basic time clock.\n",
"The list of services to be monitored at a Central Station has expanded over the past few years to include: Access Control; CCTV Monitoring; Alarm Verification; Environmental Monitoring; Intrusion Alarm Monitoring; Fire Alarm & Sprinkler Monitoring; Critical Condition Monitoring; Medical Response Monitoring; Elevator Telephone Monitoring; Hold-Up or Panic Alarm Monitoring; Duress Monitoring; Auto Dialer tests; Open & Close Signal Supervision & Reporting; Exception Reports; and PIN or Passcode Management. Increasingly, the Central Stations are making this information available directly to end users via the internet and a secure log-on to view and create custom reports on these events themselves.\n",
"A dual signalling communication device is attached to a control panel on a security installation and is the component that transmits the alarm to the ARC. It can do this in a number of different ways, via the GPRS radio path, via the GSM radio path or via the telephone line/or IP if that has been chosen. These multiple signalling paths are all present and live at the same time backing each other up to minimise exposure of the property to intruders. Should one fail there is always one form of back up and depending on the manufacturer chosen up to three paths working simultaneously at any one time. Prior to the availability of dual signalling systems, police and keyholders were often called out to the premises because of an alarm signal on the telephone path only to discover that it was a network fault and not a genuine alarm\n",
"Alarms may immediately notify someone or only notify when alarms build to some threshold of seriousness or urgency. At sites with several buildings, momentary power failures can cause hundreds or thousands of alarms from equipment that has shut down – these should be suppressed and recognized as symptoms of a larger failure. Some sites are programmed so that critical alarms are automatically re-sent at varying intervals. For example, a repeating critical alarm (of an uninterruptible power supply in 'bypass') might resound at 10 minutes, 30 minutes, and every 2 to 4 hours thereafter until the alarms are resolved.\n",
"The first alarm-verification call goes to the location the alarm originated. If contact with a person is not made, a second call is placed to a different number. The secondary number, best practices dictate, should be to a telephone that is answered even after hours, preferably a cellular phone of a decision maker authorized to request or bypass emergency response.\n\nECV, as it cannot confirm an actual intrusion event and will not prompt a priority law enforcement dispatch, is not considered true alarm verification by the security industry.\n\nSection::::Independent certification.\n",
"An example of how this system works is when a passive infrared or other sensor is triggered a designated number of video frames from before and after the event is sent to the central station.\n",
"The accuracy of the clock was measured on 5 May 2011 for a period of 3 hours and 22 minutes. The clock's accuracy varied from +10 to -40 seconds, but it is very hard to make any predictions on the day-to-day accuracy of the clock.\n",
"More typical systems incorporate a digital cellular communication unit that will contact the central station (or some other location) via the Public Switched Telephone Network (PSTN) and raise the alarm, either with a synthesized voice or increasingly via an encoded message string that the central station decodes. These may connect to the regular phone system on the system side of the demarcation point, but typically connect on the customer side ahead of all phones within the monitored premises so that the alarm system can seize the line by cutting-off any active calls and call the monitoring company if needed. A dual signalling system would raise the alarm wirelessly via a radio path (GPRS/GSM) or cellular path using the phone line or broadband line as a back-up overcoming any compromise to the phone line. Encoders can be programmed to indicate which specific sensor was triggered, and monitors can show the physical location (or \"zone\") of the sensor on a list or even a map of the protected premises, which can make the resulting response more effective. For example, a heat sensor alarm, coupled with a flame detector in the same area is a more reliable indication of an actual fire than just one or the other sensor indication by itself.\n",
"Many modern mobile phones feature built-in alarm clocks that do not need the phone to be switched on for the alarm to ring off. Some of these mobile phones feature the ability for the user to set the alarm's ringtone, and in some cases music can be downloaded to the phone and then chosen to play for waking.\n\nSection::::Next-generation alarms.\n",
"Depending on the company offering it, the system may provide for more than emergency use. Some solutions will periodically call to converse with the user.\n\nSection::::Monitoring.\n\nIn the event of an alarm, some systems will place a phone call to a community emergency service such as 911. Others will place a call to the configured number of a friend or family member. Some systems will send an SMS message to configured contacts. \n",
"Reaction time to an evacuation can vary widely, depending on the type of emergency, its perceived danger, and any false alarms of such an emergency beforehand. It can even be affected by the means of communication. In one study, mock alarms were broadcast to subway riders in London, England using different forms of address: a simple bell, a bell followed by instructions from subway staff, a public address announcement lasting 30 seconds and broadcast twice, a combination of staff instructions and public address, and finally by instructions, public address, informing people of the emergency, and announcing the type of emergency. In most cases, but not always, the more forceful the call to evacuate and the more staff were involved, the quicker riders got on escalators to evacuate.\n",
"Real-time clock alarm\n\nA real time clock alarm is a feature that can be used to allow a computer to 'wake up' after shut down to execute tasks every day or on a certain day. It can sometimes be found in the 'Power Management' section of a motherboard's BIOS setup. However, newer BIOS setups do not include an RTC alarm option, although it can still be set from within user applications. Wake On LAN, Wake on ring, and IPMI functions could also be used to start a computer after it is turned off.\n",
"The devices use an internal GPS chip to gather location information. When the SEND is triggered, this information is sent via commercial satellite to a commercial monitoring agency whose role is to pass the information to an appropriate responding agency. The responding agency contacted depends, in part, on the location. Examples of responding agencies would be military Search and Rescue, Coast Guard, local police, voluntary Search and Rescue.\n",
"Section::::Panel indicators.:Priority 2 alarm.\n\nAlso known as \"Security\". This LED can only activate if there is a secondary device hooked into the \"Priority 2 Alarm\" terminals. This secondary device could be a security system, building management system, or another fire alarm control panel. Depending on how the panel is programmed, the panel's alarms may or may not activate when a condition like this is present.\n\nSection::::Panel indicators.:Trouble.\n",
"BULLET::::- Security and fire detection system installers\n\nBULLET::::- Personal Emergency Response System (PERS) retailers\n\nBULLET::::- Devices connected to the Internet of Things.\n\nBULLET::::- Crash Detection Devices\n\nBULLET::::- Integrated Security Cameras\n\nBULLET::::- Lone Workers\n\nBULLET::::- Environmental Monitoring (flood, gas, temperature)\n\nSection::::Worldwide.\n\nSection::::Worldwide.:United States.\n\nIn the United States, central stations are regulated by third-party inspection companies such as Underwriters Laboratories (UL), FM Approvals, and The Monitoring Association (TMA). Central stations are evaluated on specific criteria such as system redundancy, building security, and regulation compliance. Once a central station has demonstrated adherence to the UL requirements, it can receive its certification.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-04681 | How does a wax cylinder phonograph produce sound? | It's super cool. What's neat is that it works the same way a vinyl record does (and to some degree the way older CDs used to). But it's more impressive because it's bigger and more obvious how crazy it is that these work at all. When a wax cylinder phonograph is recorded, what they do is use sound waves to make impressions on the soft wax. Sound is caused by rapid vibrations of air molecules. A rapid increase and decrease in air pressure sends a wave of pressure through the air. What can air pressure do? Ever open a Snapple and hear that "pop" of the lid? That metal button was held shut by air pressure and when it equalized, it made a noise and was forceful enough to snap that lid. Press it with your hands to see just how much force that is. Our eardrums do this too. They vibrate back and forth like a more sensitive Snapple lid and all the information we get from sound comes from that vibrating. Recording a phonograph reverses this. A large cone concentrates sound pressure waves down to a point. At the tip of this cone is a membrane (like the Snapple lid) that vibrates as the pressure waves increase and decrease the pressure. Attached to that vibrating membrane is a needle point. Picture a can of cranberry sauce. Pour out the cylinder of cranberry inside. A wax phonograph cylinder looks like that: a cylinder of wax. Warm up that wax to make it soft but still solid. Now press the phonograph needle to the wax and start the wax cylinder rotating like it is on a pottery wheel. The needle will leave a neat little groove in the cylinder, and the depth of that groove will depend on the sound pressure behind the needle membrane at the time of the recording. As the recording continues, all the variations in sound pressure are captured. Now stop the recording and play it back. Cool the wax to room temperature and it will harden. Hard like a candle. It's now hard enough that dragging the needle over it will vibrate the membrane. Amplify this little vibration and you'll get a speaker making sounds that were recorded. Microphones are speakers in reverse. And modern ones do exactly the same thing but use magnetic fields produced by magnets moving through coils instead of a needle making marks on wax. | [
"Alexander Graham Bell and his two associates took Edison's tinfoil phonograph and modified it considerably to make it reproduce sound from wax instead of tinfoil. They began their work at Bell's Volta Laboratory in Washington, D. C., in 1879, and continued until they were granted basic patents in 1886 for recording in wax.\n",
"Alexander Graham Bell and his two associates took Edison's tinfoil phonograph and modified it considerably to make it reproduce sound from wax instead of tinfoil. They began their work at Bell's Volta Laboratory in Washington, D.C., in 1879, and continued until they were granted basic patents in 1886 for recording in wax.\n",
"Bell and his two associates took Edison's tinfoil phonograph and modified it considerably to make it reproduce sound from wax instead of tinfoil. They began their work in Washington, D. C., in 1879, and continued until they were granted basic patents in 1886 for recording in wax.\n",
"In some ways similar to the laser turntable is the IRENE scanning machine for disc records, which images with microphotography, invented by a team of physicists at Lawrence Berkeley Laboratories.\n\nAn offshoot of IRENE, the Confocal Microscope Cylinder Project, can capture a high-resolution three-dimensional image of the surface, down to 200 µm. In order to convert to a digital sound file, this is then played by a version of the same 'virtual stylus' program developed by the research team in real-time, converted to digital and, if desired, processed through sound-restoration programs.\n\nSection::::Formats.\n\nSection::::Formats.:Types of records.\n",
"In August 2010, Ash International and PARC released the first commercially available glow in the dark phonograph cylinder, a work by Michael Esposito and Carl Michael von Hausswolff, entitled \"The Ghosts Of Effingham\". The cylinder was released in a limited edition of 150 copies, and was produced by Vulcan Records in Sheffield, England.\n\nIn April 2019, the popular podcast Hello Internet released 10 limited edition wax cylinder recordings.\n\nSection::::Preservation of cylinder recordings.\n",
"A recording had been engraved into the wax-filled groove of the modified Edison machine. When it was played, a voice from the distant past spoke, reciting a quotation from Shakespeare's Hamlet: \"There are more things in heaven and earth, Horatio, than are dreamed of in your philosophy ...\" and also, whimsically: \"I am a Graphophone and my mother was a Phonograph.\"\n",
"One the collections notable recordings was a brown Wax Cylinder (c.1895) from Wheeling, WV. This recording by the vocalist Edward M. Favor is one of the earliest recordings in the archives. Its volume is faint and was intended to be used with a tube and earphone type machine. Historians assume that not more than 50 pieces were made of this cylinder.\n",
"In 1901 The Gold Molded (originally spelled Moulded) process was perfected for commercial use by Thomas Edison and Jonas Aylsworth (Edison's Chemist) with input from Walter Miller, the Recording Manager of Edison Records. This discussion was gleaned from facts provided by Walter Miller, Jonas Aylsworth, Thomas Edison, Adolphe Melzer, and Charles Wurth.\n",
"The audience broke into applause ... John Philip Sousa [said]: '[Gentlemen], that is a band. This is the first time I have ever heard music with any soul to it produced by a mechanical talking machine' ... The new instrument is a feat of mathematics and physics. It is not the result of innumerable experiments, but was worked out on paper in advance of being built in the laboratory ... The new machine has a range of from 100 to 5,000 [cycles], or five and a half octaves ... The 'phonograph tone' is eliminated by the new recording and reproducing process.\n",
"The first practical sound recording and reproduction device was the mechanical phonograph cylinder, invented by Thomas Edison in 1877 and patented in 1878. The invention soon spread across the globe and over the next two decades the commercial recording, distribution, and sale of sound recordings became a growing new international industry, with the most popular titles selling millions of units by the early 1900s. The development of mass-production techniques enabled cylinder recordings to become a major new consumer item in industrial countries and the cylinder was the main consumer format from the late 1880s until around 1910.\n\nSection::::Phonograph.:Disc phonograph.\n",
"The Edison team had experimented with Vacuum Deposited Gold masters as early as 1888, and it has been reported that some brown wax records certainly were molded, although it seems nobody has found these, in recent years, or can identify them. The Edison Record, \"Fisher Maiden\", was an early record that was experimented with for the process. The 1888 experiments were not very successful due to the fact the grooves of the cylinders were square, and the sound waves were saw-tooth-shaped and deep. The records came out scratched and it was very time-consuming. Many failures and very few that come out.\n",
"For a sound to be recorded by the Phonograph, it has to go through three distinct steps. First, the sound enters a cone-shaped component of the device, called the microphone diaphragm. That sound causes the microphone diaphragm, which is connected to a small metal needle, to vibrate. The needle then vibrates in the same way, causing its sharp tip to etch a distinctive groove into a cylinder, which was made out of tinfoil.\n\nSection::::The phonograph.:Playback.\n",
"At a dinner party on 7 April 1889, at the home of Browning's friend the artist Rudolf Lehmann, an Edison cylinder phonograph recording was made on a white wax cylinder by Edison's British representative, George Gouraud. In the recording, which still exists, Browning recites part of \"How They Brought the Good News from Ghent to Aix\" (and can be heard apologising when he forgets the words).\n",
"Thomas Alva Edison conceived the principle of recording and reproducing sound between May and July 1877 as a byproduct of his efforts to \"play back\" recorded telegraph messages and to automate speech sounds for transmission by telephone. His first experiments were with waxed paper.\n",
"Section::::Early history.:Oldest surviving recordings.\n\nFrank Lambert's lead cylinder recording for an experimental talking clock is often identified as the oldest surviving playable sound recording,\n\nalthough the evidence advanced for its early date is controversial.\n\nWax phonograph cylinder recordings of Handel's choral music made on June 29, 1888, at The Crystal Palace in London were thought to be the oldest-known surviving musical recordings, until the recent playback by a group of American historians of a phonautograph recording of \"Au clair de la lune\" made on April 9, 1860.\n",
"Electric sound recording and reproduction are electrical or mechanical techniques and devices for the inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. Acoustic analog recording is achieved by a small microphone diaphragm that can record sound waves on a phonograph (in which a stylus senses grooves on a record) or magnetic tape. The first practical sound recording and reproduction device was the mechanical phonograph cylinder, invented by Thomas Edison in 1877 and patented in 1878. \n",
"Section::::Early history.:Early machines.\n\nEdison's early phonographs recorded onto a thin sheet of metal, normally tinfoil, which was temporarily wrapped around a helically grooved cylinder mounted on a correspondingly threaded rod supported by plain and threaded bearings. While the cylinder was rotated and slowly progressed along its axis, the airborne sound vibrated a diaphragm connected to a stylus that indented the foil into the cylinder's groove, thereby recording the vibrations as \"hill-and-dale\" variations of the depth of the indentation.\n",
"The other experimental Graphophones indicate an amazing range of experimentation. While the method of cutting a record on wax was the one later exploited commercially, everything else seems to have been tried at least once. The following was noted on Wednesday, March 20, 1881:\n\nThe result of these ideas for magnetic reproduction resulted in patent , granted on May 4, 1886; which dealt solely with \"the reproduction, through the action of magnetism, of sounds by means of records in solid substances. \"\n\nSection::::Laboratory projects.:Photophone.:Tape recorder.\n",
"Section::::Hard plastic cylinders.\n",
"Between 1880 and 1885, Alexander Graham Bell and his associates at the Volta Laboratory experimented with a variety of processes for improved sound recording. They eventually settled on a recording process based on cutting wax cylinders. On January 6, 1886, the associates formed the Volta Graphophone company and were awarded a patent on their wax cylinder process. Later in the year, Edison resumed research on the phonograph. On March 28, 1887, the Volta associates established the American Graphophone Company for the manufacturing and sale of graphophones, and Edison organized the Edison Phonograph Company in the following year to protect his new research in sound.\n",
"Besides being far easier to handle, the wax recording media also allowed for lengthier recordings and created superior playback quality. The Graphophone designs initially deployed foot treadles to rotate the recordings which were then replaced by more convenient wind-up clockwork drive mechanisms and which finally migrated to electric motors, instead of the manual crank that was used on Edison's phonograph. The numerous improvements allowed for a sound quality that was significantly better than Edison's machine.\n\nSection::::Laboratory projects.:Photophone.:Magnetic sound recordings.\n",
"Ediphone Wax Formula and Procedure for making Ediphone Cylinders\n\nNoted C.H. 11/21/1946\n\n1. 1,200 lbs of double pressed stearic acid (130 °F. Titer) and 4 lbs of nigrosine base B dye are placed in a 200-gallon cast iron cauldron. The cauldron is directly heated by an oil burner of the household type. (Our Present ones are Eisler, the manufacture of which has been discontinued.) Heat is applied until the stearic acid has been melted and the temperature has reached 360 °F.\n",
"A recording made on a sheet of tinfoil at an 1878 demonstration of Edison's phonograph in St. Louis, Missouri has been played back by optical scanning and digital analysis. A few other early tinfoil recordings are known to survive, including a slightly earlier one which is believed to preserve the voice of U.S. President Rutherford B. Hayes, but as of May 2014 they have not yet been scanned. These antique tinfoil recordings, which have typically been stored folded, are too fragile to be played back with a stylus without seriously damaging them. Edison's 1877 tinfoil recording of \"Mary Had a Little Lamb\", not preserved, has been called the first instance of recorded verse.\n",
"Section::::Turntable technology.:Turntable drive systems.\n",
"BULLET::::- \"Transmitting And Recording Sounds By Radiant Energy\", filed November 1885, issued May 1886 (with Alexander and Chichester Bell)\n\nBULLET::::- \"Recording and Reproducing Speech and Other Sounds\" (improvements include compliant cutting head, wax surface, and constant linear velocity disk), filed June 1885, issued May 1886 (with Chichester Bell)\n\nBULLET::::- \"Apparatus for Recording and Reproducing Sounds\" (wax coated cylinder, pause and reverse mechanism), filed December 1885, issued May 1886\n\nBULLET::::- \"Paper Cylinder for Graphophonic Records\" (helically wound), filed April 1887, issued November 1887\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-04844 | How do conservationists repopulate almost extinct species | They won't. Even when he was alive, there really wasn't hope with such a small population. While you can try to repopulate from a pretty small pool, it can't be that small. One example of successful conservation efforts was the American Bison. Before colonization, it's estimated that there were possibly 60 million bison. By 1889, there were about 1,100. Now there are 500,000, though most are farm raised, not wild. However, even with a sample of around 1,100, the Bison population still has a problem with a limited gene pool. Grizzlies in the Yellowstone region went from 136 in 1975 to about 700 today. In NJ, bald eagles have grown by a factor of 7 in 15 years. Any animal that has gone through a quasi-extinction event, or a population bottleneck, will have the problem of inbreeding. | [
"Section::::Conservation.:Private farming.\n",
"Section::::Successes and failures.\n",
"Section::::Prevention by human intervention, modern science and safeguards.\n\nSection::::Prevention by human intervention, modern science and safeguards.:\"In situ\" conservation.\n\nWith advances in modern bioscience, several techniques and safeguards have emerged to check the relentless advance of genetic erosion and the resulting acceleration of endangered species towards eventual extinction. However, many of these techniques and safeguards are too expensive yet to be practical, and so the best way to protect species is to protect their habitat and to let them live in it as naturally as possible.\n",
"Section::::Genetic management of captive populations.:Avoiding loss of genetic diversity.\n",
"Wildlife sanctuaries and national parks have been created to preserve entire ecosystems with all the web of species native to the area. Wildlife corridors are created to join fragmented habitats (see Habitat fragmentation) to enable endangered species to travel, meet, and breed with others of their kind. Scientific conservation and modern wildlife management techniques, with the expertise of scientifically trained staff, help manage these protected ecosystems and the wildlife found in them. Wild animals are also translocated and reintroduced to other locations physically when fragmented wildlife habitats are too far and isolated to be able to link together via a wildlife corridor, or when local extinctions have already occurred.\n",
"Section::::Genetic considerations.:Adaptation to Captivity.\n\nSome reintroduction programs use plants or animals from captive populations to form a reintroduced population. When reintroducing individuals from a captive population to the wild, there is a risk that they have adapted to captivity due to differential selection of genotypes in captivity versus the wild. \n",
"An extensive open-air planting used maintain genetic diversity of wild, agricultural, or forestry species. Typically species that are either difficult or impossible to conserve in seed banks are conserved in field gene banks. Field gene banks may also be used grow and select progeny of species stored by other \"ex situ\" techniques.\n\nSection::::Techniques for plants.:Cultivation collections.\n",
"Section::::Genetic management of captive populations.:Managing genetic disorders.\n",
"Section::::Types.:Reintroduction.\n\nReintroductions involve restoring a species to its native range. The species may no longer be found there due to any number of reasons, though most common is often the introduction of predators or habitat loss due to either climate change or other human factors. This is generally done to broaden the range of threatened populations and to reconnect fragmented populations.\n\nSection::::Controversy.\n",
"Plants are under horticulture care, but the environment is managed to near natural conditions. This occurs with either restored or semi-natural environments. This technique is primarily used for taxa that are rare or in areas where habitat has been severely degraded.\n\nSection::::Techniques for animals.\n",
"Plants under horticultural care in a constructed landscape, typically a botanic garden or arboreta. This technique is similar to a field gene bank in that plants are maintained in the ambient environment, but the collections are typically not as genetically diverse or extensive. These collections are susceptible to hybridization, artificial selection, genetic drift, and disease transmission. Species that cannot be conserved by other \"ex situ\" techniques are often included in cultivated collections.\n\nSection::::Techniques for plants.:Inter situ.\n",
"Section::::Genetic management of captive populations.:Minimizing mean kinship.\n",
"Section::::Examples.\n\nShowy Indian clover, \"Trifolium amoenum\", is an example of a species that was thought to be extinct, but was rediscovered in 1993 in the form of a single plant at a site in western Sonoma County. Seeds were harvested and the species grown in \"ex situ\" facilities.\n\nThe Wollemi pine is another example of a plant that is being preserved via \"ex situ\" conservation, as they are being grown in nurseries to be sold to the general public.\n\nSection::::Drawbacks.\n",
"Section::::Genetic management of captive populations.:Avoiding adaptations to captivity.\n",
"Humans have been reintroducing species for food and pest control for thousands of years. However, the practice of reintroducing for conservation is much younger, starting in the 20th century.\n\nSection::::Methods for sourcing individuals.\n\nThere are a variety of approaches to species reintroduction. The optimal strategy will depend on the biology of the organism. The first matter to address when beginning a species reintroduction is whether to source individuals \"in situ\", from wild populations, or \"ex situ\", from captivity in a zoo or botanic garden, for example.\n\nSection::::Methods for sourcing individuals.:\"In situ\" sourcing.\n",
"Section::::Conservation techniques.\n\nScientists and conservation professionals have developed a number of techniques to protect bird species. These techniques have had varying levels of success.\n\nSection::::Conservation techniques.:Captive breeding.\n",
"BULLET::::5. Artificial population recruitment may include captive propagation (forced immigration) or captive breeding.\n\nSection::::Case study.\n",
"The state of a declining species can sometimes be reversed by augmenting reproduction through behavior. By manipulating auditory, olfactory, and visual cues of animals, biologists can attract animals to breeding grounds or increase the number of breeding individuals. This method has been applied most successfully to bird populations. For example, acoustic playbacks have attracted seabirds to historic and new breeding grounds. Similarly, adding eggs to nests of some male fish species may promote increased spawning by females who prefer to spawn with males already possessing eggs.\n\nSection::::Applications.:Assessing biodiversity.\n",
"An alternative method of conserving a species is to conserve the habitat that the species is found in. In this process, there is no target species for conservation, but rather the habitat as a whole is protected and managed, often with a view to returning the habitat to a more natural state. In theory, this method of conservation can be beneficial because it allows for the entire ecosystem and the many species within to benefit from conservation, rather than just the single target species. The International Union for Conservation of Nature suggest there is evidence that habitat based approaches do not have enough focus on individual species to protect them sufficiently. However much research now is turning towards area-based strategies in preference to individual species approaches such as endangered species recovery plans.\n",
"Some methods in managing threatened species involve reintroducing species to enclosed reserves or island areas. Once these species are introduced, their populations can become overabundant as these areas serve to protect the targeted species against predators and competitors. This occurred for the \"Bettongia lesueur\", the burrowing bettong, which was reintroduced to the Arid Recovery reserve in Australia: their population has increased from 30 to approximately 1532 individuals. Due to the damage within this reserve their population is considered overabundant.\n\nSection::::Potential impacts.\n",
"Removal of exotic species will allow the species that they have negatively impacted to recover their ecological niches. Exotic species that have become pests can be identified taxonomically (e.g., with Digital Automated Identification SYstem (DAISY), using the barcode of life). Removal is practical only given large groups of individuals due to the economic cost.\n\nAs sustainable populations of the remaining native species in an area become assured, \"missing\" species that are candidates for reintroduction can be identified using databases such as the \"Encyclopedia of Life\" and the Global Biodiversity Information Facility.\n",
"Section::::Methods.:Restoring animal life.\n\nRestoration often focuses on reestablishing plant communities, probably because plants form the foundation for other organisms within the community. Restoration of faunal communities often follows the “Field of Dreams” hypothesis: “if you build it, they will come”. Many animal species have been found to naturally recolonize areas where habitat has been restored. For example, abundances of several bird species showed marked increases after riparian vegetation had been reestablished in a riparian corridor in Iowa.\n",
"Reintroduction is the deliberate release of species into the wild, from captivity or relocated from other areas where the species survives. This may be an option for certain species that are endangered or extinct in the wild. However, it may be difficult to reintroduce EW species into the wild, even if their natural habitats were restored, because survival techniques, which are often passed from parents to offspring during parenting, may be lost. While conservation efforts may preserve some of the genetics of a species, the species may never fully recover due to the loss of the natural memetics of the species.\n",
"When a species has been extirpated from a site where it previously existed, individuals that will comprise the reintroduced population must be sourced from wild or captive populations. When sourcing individuals for reintroduction, it is important to consider local adaptation, adaptation to captivity (for \"ex situ\" conservation), the possibility of inbreeding depression and outbreeding depression, and taxonomy, ecology, and genetic diversity of the source population. Reintroduced populations experience increased vulnerability to influences of drift, selection, and gene flow evolutionary processes due to their small sizes, climatic and ecological differences between source and native habitats, and presence of other mating-compatible populations.\n",
"There are five major areas of management action for conservation of vulnerable species:\n\nBULLET::::1. Control of other species may include: control of exotic fauna, exotic flora, other native species and parasites and disease.\n\nBULLET::::2. Control of direct human impacts may include control of grazing, human access, on and off-road vehicles, low impact recreation and illegal collecting and poaching.\n\nBULLET::::3. Pollution control may include control of chemical run-off, siltation, water quality and use of pesticides and herbicides.\n\nBULLET::::4. Active habitat management may include fire management and control, control of soil erosion and waterbodies, habitat restoration and mechanical vegetation control.\n"
] | [
"Conservationists can repopulate almost extinct species."
] | [
"They cannot always do this because genetic bottlenecks make it difficult for the species to survive."
] | [
"false presupposition"
] | [
"Conservationists can repopulate almost extinct species."
] | [
"false presupposition"
] | [
"They cannot always do this because genetic bottlenecks make it difficult for the species to survive."
] |
2018-03256 | why are some banks able to give different rates on things, ie. a higher yield percentage on savings accounts, as opposed to others? | Like any business, it's competition. There are certainly costs involved with running thousands of branches that online banks don't have to deal with, so they do have some efficiencies they can pass along. But many consumers also have concerns about online-only banks, and higher rates are a way to entice more people to consider using them. Even if the expenses were equal, the online banks might be willing to forego profits today to grow their customer base, effectively using the higher rates as a marketing expense for customer acquisition. | [
"To compensate for the low liquidity, FDs offer higher rates of interest than saving accounts. The longest permissible term for FDs is 10 years. Generally, the longer the term of deposit, higher is the rate of interest but a bank may offer lower rate of interest for a longer period if it expects interest rates, at which the Central Bank of a nation lends to banks (\"repo rates\"), will dip in the future.\n",
"See the (S)ensitivity section of the CAMELS rating system for a substantial list of links to documents and examiner manuals, issued by financial regulators, that cover many issues in the analysis of interest rate risk.\n\nIn addition to being subject to the CAMELS system, the largest banks are often subject to prescribed stress testing. The assessment of interest rate risk is typically informed by some type of stress testing. See: Stress test (financial), List of bank stress tests, List of systemically important banks.\n",
"Section::::Instruments and requirements.:Large exposures restrictions.\n\nBanks may be restricted from having imprudently large exposures to individual counterparties or groups of connected counterparties. Such limitation may be expressed as a proportion of the bank's assets or equity, and different limits may apply based on the security held and/or the credit rating of the counterparty. Restricting disproportionate exposure to high-risk investment prevents financial institutions from placing equity holders' (as well as the firm's) capital at an unnecessary risk.\n\nSection::::Instruments and requirements.:Activity and affiliation restrictions.\n",
"This approach became problematic during the 2007/8 financial crisis because actual interest rates paid began to differ from published rates such as Libor or bank base rates vary. With poor credit availability, the profit adjustment made in favour of depositing business units was effectively understated. This had been less an issue when banks' borrowing costs were close to base rates or quoted rates such as LIBOR.\n",
"The researcher found it intuitive that basic interest rate caps are most likely to bite at the lower end of the market, with interest rates charged by microfinance institutions generally higher than those by banks and this is driven by a higher cost of funds and higher relative overheads. Transaction costs make larger loans relatively more cost effective for the financial institution.\n",
"In many countries, banks or similar financial institutions are the primary originators of mortgages. For banks that are funded from customer deposits, the customer deposits typically have much shorter terms than residential mortgages. If a bank offered large volumes of mortgages at fixed rates but derived most of its funding from deposits (or other short-term sources of funds), it would have an asset–liability mismatch because of interest rate risk. It would then be running the risk that the interest income from its mortgage portfolio would be less than it needed to pay its depositors. In the United States, some argue that the savings and loan crisis was in part caused by the problem: the savings and loans companies had short-term deposits and long-term, fixed-rate mortgages and so were caught when Paul Volcker raised interest rates in the early 1980s. Therefore, banks and other financial institutions offer adjustable rate mortgages because it reduces risk and matches their sources of funding.\n",
"In the United Kingdom, some online banks offer rates higher as many savings accounts, along with free banking (no charges for transactions) as institutions that offer centralised services (telephone, internet or postal based) tend to pay higher levels of interest. The same holds true for banks within the EURO currency zone.\n\nSection::::Interest.:High-yield accounts.\n\nHigh-yield accounts pay a higher interest rate than typical NOW accounts and frequently function as loss-leaders to drive relationship banking.\n\nSection::::Lending.\n\nAccounts can lend money in two ways: overdraft and offset mortgage.\n\nSection::::Lending.:Overdraft.\n",
"In exchange for the customer depositing the money for an agreed term, institutions usually grant higher interest rates than they do on accounts that customers can withdraw from on demand—though this may not be the case in an inverted yield curve situation. Fixed rates are common, but some institutions offer CDs with various forms of variable rates. For example, in mid-2004, interest rates were expected to rise—and many banks and credit unions began to offer CDs with a \"bump-up\" feature. These allow for a single readjustment of the interest rate, at a time of the consumer's choosing, during the term of the CD. Sometimes, financial institutions introduce CDs indexed to the stock market, bond market, or other indices.\n",
"Another possibility used to estimate the risk-free rate is the inter-bank lending rate. This appears to be premised on the basis that these institutions benefit from an implicit guarantee, underpinned by the role of the monetary authorities as 'the lendor of last resort.' (In a system with an endogenous money supply the 'monetary authorities' may be private agents as well as the central bank - refer to Graziani 'The Theory of Monetary Production'.) Again, the same observation applies to banks as a proxy for the risk-free rate – if there is any perceived risk of default implicit in the interbank lending rate, it is not appropriate to use this rate as a proxy for the risk-free rate.\n",
"If deposit insurance is provided by another business or corporation, like other insurance agreements, there is a presumption that the insurance corporation would charge higher rates to or simply refuse to cover banks that engaged in extremely risky behavior, thus solving the problem of moral hazard whilst simultaneously reducing the risk of a bank run.\n",
"In June 1996 a Joint Agency Policy Statement was issued by the OCC, Treasury, Fed and FDIC defining interest rate risk as the exposure of a bank’s financial condition to adverse movements in interest rates resulting from the following: \n\nBULLET::::- repricing or maturity mismatch risk - differences in the maturity or timing of coupon adjustments of bank assets, liabilities and off-balance-sheet instruments\n\nBULLET::::- yield curve risk - changes in the slope of the yield curve\n",
"In many countries, it is not feasible for banks to lend at fixed rates for very long terms; in these cases, the only feasible type of mortgage for banks to offer may be adjustable rate mortgages (barring some form of government intervention). For example, the mortgage industry of the United Kingdom has traditionally been dominated by building societies. Since funds raised by UK building societies must be at least 50% deposits, lenders prefer variable-rate mortgages to fixed-rate mortgages to reduce potential interest rate risks between what they charging in mortgage interest and what they are paying in interest for deposits and other funding sources.\n",
"When two banks merge, a survey is done to ensure that the combined deposit market shares will be no larger than 25% in a particular state, or 10% nationally. If one or both of those percentages are higher than allowed, the banks can elect to still do the merger but they would need to divest (i.e. sell off branches and customer accounts) enough branches to get them within the guidelines.\n",
"Banks must satisfy the 'use test', which means that the ratings must be used internally in the risk management practices of the bank. A rating system solely devised for calculating regulatory capital is not acceptable. While banks are encouraged to improve their rating systems over time, they are required to demonstrate the use of risk parameters for risk management for at least three years prior to obtaining qualification.\n\nSection::::Minimum requirements.:Risk quantification.\n\nOverall requirements\n\nBULLET::::- Except for retail exposures, PD for a particular grade must be a long-run average of one year default rates for that grade\n",
"Section::::Regulations in the United States.\n\nRelated finance companies are not regulated as strictly as banks by the Federal Reserve, rather they are regulated by the Department of Financial Institutions or Department of Commerce on a State level depending on the State. Regulations may include maximum interest rate, late fee amounts, grace periods and so forth. Some of the companies that have started as RFCs have grown large enough that they became Industrial Banks which are FDIC Insured banks owned by non-financial institutions.\n",
"SIBOR-pegged ARMs are more popular than SOR-or board&rate-pegged mortgages.\n\nHowever, recently, ANZ introduced an ARM that is pegged to the average of SIBOR and SOR. So far, it is the only bank in Singapore to offer such a mortgage.\n\nSection::::Pricing.\n",
"Malayan Banking Bhd (Maybank) has set a group-wide base rate at 3.2%, effective Jan 2, 2015. All new retail loans and financing such as mortgages, unit trust loans, share margin financing, personal financing and overdraft facilities which are applied for by individual customers will be based on the base rate. Though certain banks may be setting a higher BR compared to others, they can sometimes offer lower ELR to customers in order to remain competitive. Loans that are already approved and extended prior to January 2, 2015 will still follow the old BLR until the end of the loan tenure.\n",
"Eurodollars can have a higher interest rate attached to them because of the fact that they are out of reach from the Federal Reserve. U.S. banks hold an account at the Fed and can, in theory, receive unlimited liquidity from the Fed if necessary. These required reserves and Fed backing make U.S. Dollar deposits in U.S. banks inherently less risky, and Eurodollar deposits slightly more risky, which requires a slightly higher interest rate.\n\nBy the end of 1970, 385 billion eurodollars were booked offshore. \n",
"The assessment of interest rate risk is a very large topic at banks, thrifts, saving and loans, credit unions, and other finance companies, and among their regulators. The widely deployed CAMELS rating system assesses a financial institution's: (C)apital adequacy, (A)ssets, (M)anagement Capability, (E)arnings, (L)iquidity, and (S)ensitivity to market risk. A large portion of the (S)ensitivity in CAMELS is \"interest rate risk\". Much of what is known about assessing interest rate risk has been developed by the interaction of financial institutions with their regulators since the 1990s. Interest rate risk is unquestionably the largest part of the (S)ensitivity analysis in the CAMELS system for most banking institutions. When a bank receives a bad CAMELS rating equity holders, bond holders and creditors are at risk of loss, senior managers can lose their jobs and the firms are put on the FDIC problem bank list. \n",
"In the USA, the largest banks are regulated by the Federal Reserve (FRB) and the Office of the Comptroller of Currency (OCC). These regulators set the selection criteria, establish hypothetical adverse scenarios and oversee the annual tests. 19 banks operating in the U.S. (at the top tier) have been subject to such testing since 2009. Banks showing difficulty under the stress tests are required to postpone share buybacks, curtail dividend plans and if necessary raise additional capital financing.\n\nSection::::Banks.:United States.:G-SIB Capital Requirements.\n",
"Some features of CDs are:\n\nBULLET::::- A larger principal should/may receive a higher interest rate.\n\nBULLET::::- A longer term usually earns a higher interest rate, except in the case of an inverted yield curve (e.g., preceding a recession).\n\nBULLET::::- Smaller institutions tend to offer higher interest rates than larger ones.\n\nBULLET::::- Personal CD accounts generally receive higher interest rates than business CD accounts.\n\nBULLET::::- Banks and credit unions that are not insured by the FDIC or NCUA generally offer higher interest rates.\n",
"Banks are also required to regularly stress test their rating systems considering economic downturn scenarios, market risk based events or liquidity conditions that may increase the level of capital held by the bank. These stress tests should not only consider the relevant internal data of the bank, but also macro-economic factors that might affect the accuracy of the rating system.\n\nSection::::Minimum requirements.:Corporate governance and oversight.\n",
"BULLET::::- 1996 FED Bank Holding Company Supervision Manual (section 2127) This had a minor update in 2010 discussing the 2010 interagency advisory on interest-rate risk management. \"The advisory does not constitute new guidance...The advisory targets IRR management at insured depository institutions. However, the principles and supervisory expectations articulated also apply to BHCs, which are reminded of long-standing supervisory guidance that they should manage and control aggregate risk exposures on a consolidated basis while recognizing legal distinctions and possible obstacles to cash movements among subsidiaries.\"\n\nBULLET::::- 1997 OCC Comptroller’s Handbook for Interest Rate Risk\n",
"Deposit brokers, somewhat like stockbrokers, are paid a commission by the customer to find the best certificate of deposit (CD) rates and place their customers' money in those CDs. Previously, banks and thrifts could only have five percent of their deposits be brokered deposits; the race to the bottom caused this limit to be lifted. A small one-branch thrift could then attract a large number of deposits simply by offering the highest rate. To make money off this expensive money, it had to lend at even higher rates, meaning that it had to make more, riskier investments. This system was made even more damaging when certain deposit brokers instituted a scam known as \"linked financing\". In \"linked financing\", a deposit broker would approach a thrift and say he would steer a large amount of deposits to that thrift if the thrift would lend certain people money. The people, however, were paid a fee to apply for the loans and told to give the loan proceeds to the deposit broker.\n",
"For example, assume a particular U.S. depository institution, in the normal course of business, issues a loan. This dispenses money and decreases the ratio of bank reserves to money loaned. If its reserve ratio drops below the legally required minimum, it must add to its reserves to remain compliant with Federal Reserve regulations. The bank can borrow the requisite funds from another bank that has a surplus in its account with the Fed. The interest rate that the borrowing bank pays to the lending bank to borrow the funds is negotiated between the two banks, and the weighted average of this rate across all such transactions is the federal funds \"effective\" rate.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-04785 | Why is yawning infectious between humans, but when an animal yawns, it doesn't make us yawn? | Actually, yawns *are* contagious across species. Humans yawning can make dogs yawn. Some humans will yawn after seeing another animal yawn. | [
"According to research published by the US National Institutes of Health, the triple reassortant H2N3 virus isolated from diseased pigs in the United States in 2006 is pathogenic for certain mammals without prior adaptation and transmits among swine and ferrets. Adaptation, in the H2 hemagglutinin derived from an avian virus, includes the ability to bind to the mammalian receptor, a significant prerequisite for infection of mammals, in particular humans, which poses a big concern for public health. Researchers investigated the pathogenic potential of swine H2N3 in Cynomolgus macaques, a surrogate model for human influenza infection. In contrast to human H2N2 virus, which served as a control and largely caused mild pneumonia similar to seasonal influenza A viruses, the swine H2N3 virus was more pathogenic causing severe pneumonia in nonhuman primates. Both viruses replicated in the entire respiratory tract, but only swine H2N3 could be isolated from lung tissue on day 6 post infection. All animals cleared the infection whereas swine H2N3 infected macaques still presented with pathologic changes indicative of chronic pneumonia at day 14 post infection. Swine H2N3 virus was also detected to significantly higher titers in nasal and oral swabs indicating the potential for animal-to-animal transmission. Blood plasma levels of Interleukin 6 (IL-6), Interleukin 8, monocyte chemotactic protein-1 and Interferon-gamma were significantly increased in swine H2N3 compared to human H2N2 infected animals supporting the previously published notion of increased IL-6 levels being a potential marker for severe influenza infections. Researchers concluded the swine H2N3 virus represents a threat to humans with the potential for causing a larger outbreak in a non-immune or partially immune population. Furthermore, surveillance efforts in farmed pig populations need to become an integral part of any epidemic and pandemic influenza preparedness.\n",
"Influenza A viruses are enveloped, negative sense, single-stranded RNA viruses. Genome analysis has shown that H3N8 was transferred from horses to dogs and then adapted to dogs through point mutations in the genes. The incubation period is two to five days, and viral shedding may occur for seven to ten days following the onset of symptoms. It does not induce a persistent carrier state.\n\nSection::::Symptoms.\n",
"A major factor contributing to the appearance of new zoonotic pathogens in human populations is increased contact between humans and wildlife. This can be caused either by encroachment of human activity into wilderness areas or by movement of wild animals into areas of human activity. An example of this is the outbreak of Nipah virus in peninsular Malaysia in 1999, when intensive pig farming began on the habitat of infected fruit bats. Unidentified infection of the pigs amplified the force of infection, eventually transmitting the virus to farmers and causing 105 human deaths.\n",
"A newer form was identified in Asia during the 2000s and has since caused outbreaks in the US as well. It is a mutation of H3N2 that adapted from its avian influenza origins. Vaccines have been developed for both strains. \n\nSection::::History.\n",
"Today it is established that at least some primate species are highly susceptible to \"B. pertussis\" and develop clinical whooping cough in high incidence when exposed to low inoculation doses. The bacteria may be present in wild animal populations, but this is not confirmed by laboratory diagnosis, although whooping cough is known among wild gorillas. Several zoos also have a long-standing custom of vaccinating their primates against whooping cough.\n\nSection::::Mechanism.\n",
"Evidence for the occurrence of contagious yawning linked to empathy is rare outside of primates. It has been studied in Canidae species, such as the domestic dog and wolf. Domestic dogs have shown the ability to yawn contagiously in response to human yawns. Domestic dogs have demonstrated they are skilled at reading human communication behaviors. This ability makes it difficult to ascertain whether yawn contagion among domestic dogs is deeply rooted in their evolutionary history or is a result of domestication. \n",
"As of April 2015, the question of whether vaccination against the earlier strain offered protection had not been resolved. The US Department of Agriculture granted conditional approval for a canine H3N2-protective vaccine in December 2015.\n\nIn March 2016, researchers reported that this strain had infected cats and suggested that it may be transmitted between them.\n\nSection::::No human risk.\n",
"Zoonoses are of interest because they are often previously unrecognized diseases or have increased virulence in populations lacking immunity. The West Nile virus appeared in the United States in 1999 in the New York City area, and moved through the country in the summer of 2002, causing much distress. Bubonic plague is a zoonotic disease, as are salmonellosis, Rocky Mountain spotted fever, and Lyme disease.\n",
"The avian flu virus H7N2 has been found in cats in New York City. Though transmission to people is possible, it is thought to be rare. In Europe, cats were identified as being hosts for West Nile virus.\n\nSection::::Bacterial.\n\nSection::::Bacterial.:\"Pasteurella multocida\".\n",
"Contact with farm animals can lead to disease in farmers or others that come into contact with infected farm animals. Glanders primarily affects those who work closely with horses and donkeys. Close contact with cattle can lead to cutaneous anthrax infection, whereas inhalation anthrax infection is more common for workers in slaughterhouses, tanneries and wool mills. Close contact with sheep who have recently given birth can lead to clamydiosis, or enzootic abortion, in pregnant women, as well as an increased risk of Q fever, toxoplasmosis, and listeriosis in pregnant or the otherwise immunocompromised. Echinococcosis is caused by a tapeworm which can be spread from infected sheep by food or water contaminated with feces or wool. Bird flu is common in chickens. While rare in humans, the main public health worry is that a strain of bird flu will recombine with a human flu virus and cause a pandemic like the 1918 Spanish flu. In 2017, free range chickens in the UK were temporarily ordered to remain inside due to the threat of bird flu. Cattle are an important reservoir of cryptosporidiosis and mainly affects the immunocompromised.\n",
"At least 31 different species of ticks from the genera \"Haemaphysalis\" and \"Hyalomma\" in southeastern Iran have been found to carry the virus.\n\nWild animals and small mammals, particularly European hare, Middle-African hedgehogs and multimammate rats are the \"amplifying hosts\" of the virus. Birds are generally resistant to CCHF, with the exception of ostriches. Domestic animals like sheep, goats and cattle can develop high titers of virus in their blood, but tend not to fall ill.\n",
"Dogs may become infected with EBOV but not develop symptoms. Dogs in some parts of Africa scavenge for food, and they sometimes eat EBOV-infected animals and also the corpses of humans. A 2005 survey of dogs during an EBOV outbreak found that although they remain asymptomatic, about 32 percent of dogs closest to an outbreak showed a seroprevalence for EBOV versus 9 percent of those farther away. The authors concluded that there were \"potential implications for preventing and controlling human outbreaks.\"\n\nSection::::Other animals.:Reston virus.\n",
"Similarly, in recent times avian influenza and West Nile virus have spilled over into human populations probably due to interactions between the carrier host and domestic animals. Highly mobile animals such as bats and birds may present a greater risk of zoonotic transmission than other animals due to the ease with which they can move into areas of human habitation.\n\nBecause they depend on the human host for part of their life-cycle, diseases such as African schistosomiasis, river blindness, and elephantiasis are \"not\" defined as zoonotic, even though they may depend on transmission by insects or other vectors.\n",
"At least some primate species are highly sensitive to \"B. pertussis\", and develop a clinical whooping cough in high incidence when exposed to low inoculation doses. Whether the bacteria spread naturally in wild animal populations has not been confirmed satisfactorily by laboratory diagnosis, but whooping cough has been found among wild gorillas. Several zoos have learned to vaccinate their primates against whooping cough.\n\nSection::::Diagnosis.\n",
"The most common way a cat can obtain H5N1 is by consuming an infected bird. This has been studied in the 2006 and 2007 cases in Germany and Austria where the strains between the cat and the infected birds were not different between the species. A cat is able to then transfer the virus via the respiratory tract and the digestive tract to other cats. However, studies suggest that a cat cannot transfer the virus to a dog, and vice versa, while sharing a food bowl. Though there is no concrete evidence, there is a potential link between the transfer of the virus between poultry, wild birds, and humans.\n",
"Zoonoses have different modes of transmission. In direct zoonosis the disease is directly transmitted from animals to humans through media such as air (influenza) or through bites and saliva (rabies). In contrast, transmission can also occur via an intermediate species (referred to as a vector), which carry the disease pathogen without getting infected. When humans infect animals, it is called reverse zoonosis or anthroponosis. The term is from Greek: ζῷον \"zoon\" \"animal\" and νόσος \"nosos\" \"sickness\".\n\nSection::::Causes.\n",
"About 80% of infected dogs with H3N8 show symptoms, usually mild (the other 20% have subclinical infections), and the fatality rate for Greyhounds in early outbreaks was 5 to 8%, although the overall fatality rate in the general pet and shelter population is probably less than 1%. Symptoms of the mild form include a cough that lasts for 10 to 30 days and possibly a greenish nasal discharge. Dogs with the more severe form may have a high fever and pneumonia. Pneumonia in these dogs is not caused by the influenza virus, but by secondary bacterial infections. The fatality rate of dogs that develop pneumonia secondary to canine influenza can reach 50% if not given proper treatment. Necropsies in dogs that die from the disease have revealed severe hemorrhagic pneumonia and evidence of vasculitis.\n",
"Evidence indicates that both domestic dogs and pigs can also be infected with EBOV. Dogs do not appear to develop symptoms when they carry the virus, and pigs appear to be able to transmit the virus to at least some primates. Although some dogs in an area in which a human outbreak occurred had antibodies to EBOV, it is unclear whether they played a role in spreading the disease to people.\n\nSection::::Cause.:Reservoir.\n",
"A study by the University of London has suggested that the \"contagiousness\" of yawns by a human will pass to dogs. The study observed that 21 of 29 dogs yawned when a stranger yawned in front of them, but did not yawn when the stranger only opened his mouth. \n",
"Paragonimiasis, or lung fluke uses cats as a reservoir and subsequently can transmit the infection to humans. Symptoms in cats have not been observed. There are over nine species of lung flukes that can be transmitted to humans from cats. The disease has been found in Asia, Africa, India, North, South and Central America. It is not uncommon and estimates of those infected are in the millions. Signs symptoms in humans are coughing up blood, migration of the flukes into other body organs including the central nervous system. There it can cause neurological symptoms such as headache, confusion, convulsions, vision problems, and bleeding in the brain. This infection in humans is sometimes mistaken for tuberculosis.\n",
"DNA microarrays are used in macaque research. For example, Michael Katze of University of Washington, Seattle, infected macaques with 1918 and modern influenzas. The DNA microarray showed the macaque genomic response to human influenza on a cellular level in each tissue. Both viruses stimulated innate immune system inflammation, but the 1918 flu stimulated stronger and more persistent inflammation, causing extensive tissue damage, and it did not stimulate the interferon-1 pathway. The DNA response showed a transition from innate to adaptive immune response over seven days.\n",
"Helt and Eigsti (2010) showed that dogs, like humans, develop a susceptibility to contagious yawning gradually, and that while dogs above seven months 'catch' yawns from humans, younger dogs are immune to contagion. The study also indicated that nearly half of the dogs responded to the human's yawn by becoming relaxed and sleepy, suggesting that the dogs copied not just the yawn, but also the physical state that yawns typically reflect.\n\nYawning has multiple possible functions, and may occur when the body perceives the benefits.\n\nSection::::Social function.:Relation to empathy.\n",
"Influenza infects many animal species, and transfer of viral strains between species can occur. Birds are thought to be the main animal reservoirs of influenza viruses. Sixteen forms of hemagglutinin and nine forms of neuraminidase have been identified. All known subtypes (HxNy) are found in birds, but many subtypes are endemic in humans, dogs, horses, and pigs; populations of camels, ferrets, cats, seals, mink, and whales also show evidence of prior infection or exposure to influenza. Variants of flu virus are sometimes named according to the species the strain is endemic in or adapted to. The main variants named using this convention are: bird flu, human flu, swine flu, horse flu and dog flu. (Cat flu generally refers to feline viral rhinotracheitis or feline calicivirus and not infection from an influenza virus.) In pigs, horses and dogs, influenza symptoms are similar to humans, with cough, fever and loss of appetite. The frequency of animal diseases are not as well-studied as human infection, but an outbreak of influenza in harbor seals caused approximately 500 seal deaths off the New England coast in 1979–1980. However, outbreaks in pigs are common and do not cause severe mortality. Vaccines have also been developed to protect poultry from avian influenza. These vaccines can be effective against multiple strains and are used either as part of a preventative strategy, or combined with culling in attempts to eradicate outbreaks.\n",
"Swine influenza virus is common throughout pig populations worldwide. Transmission of the virus from pigs to humans is not common and does not always lead to human influenza, often resulting only in the production of antibodies in the blood. If transmission does cause human influenza, it is called zoonotic swine flu or a variant virus. People with regular exposure to pigs are at increased risk of swine flu infection. The meat of an infected animal poses no risk of infection when properly cooked.\n",
"While it has been commonly known that the influenza virus increases one's chances of contracting pneumonia or meningitis caused by the streptococcus pneumonaie bacteria, new medical research in mice indicates that the flu is actually a necessary component for the transmission of the disease. Researcher Dimitri Diavatopoulo from the Radboud University Nijmegen Medical Centre in the Netherlands describes his observations in mice, stating that in these animals, the spread of the bacteria only occurs between animals already infected with the influenza virus, not between those without it. He says that these findings have only been inclusive in mice, however, he believes that the same could be true for humans.\n"
] | [
"Animals yawns are not \"infectious\" to humans.",
"When humans witness animals yawn, it doesn't ever cause a human to yawn. "
] | [
"Animals yawning can make humans yawn. ",
"There have been many cases where humans have yawned after witnessing animals yawn and vice versa."
] | [
"false presupposition"
] | [
"Animals yawns are not \"infectious\" to humans.",
"When humans witness animals yawn, it doesn't ever cause a human to yawn. "
] | [
"false presupposition",
"false presupposition"
] | [
"Animals yawning can make humans yawn. ",
"There have been many cases where humans have yawned after witnessing animals yawn and vice versa."
] |
2018-24287 | Why is it that, when depicted, our fingers are the first to twitch when regaining consciousness? | That's just a Hollywood cliche. It has no real world tie-in. | [
"When people lose consciousness, they fall down (unless prevented from doing so) and, when in this position, effective blood flow to the brain is immediately restored, allowing the person to regain consciousness. If the person does not fall into a fully flat, supine position, and the head remains elevated above the trunk, a state similar to a seizure may result from the blood's inability to return quickly to the brain, and the neurons in the body will fire off and generally cause muscles to twitch very slightly but mostly remain very tense.\n",
"\"Orlacs Hände\" was based on the book \"Les Mains d'Orlac\" by Maurice Renard. It was one of the first films to feature the motif, often recurring in later films, of hands with a will of their own, whether or not attached to a body, as well as popular fears around the subject of surgical transplants, in the days before such procedures were possible. It was shot at the studios of Listo-Film in Vienna by the Pan-Film production company.\n",
"The characters reappeared in Kiernan and Hemphill's sketch show \"Chewin' the Fat\", nearly every episode of which featured Jack, Victor, Tam and Winston, with minor differences from their counterparts in the series. By the time \"Still Game\" became a show in its own right Winston's physical appearance had changed significantly, but he was still played by Paul Riley. As the show evolved, supporting characters assumed greater prominence. Jack and Victor made their final appearance on \"Chewin' the Fat\" in the 2002 Hogmanay Special.\n",
"Nonsynaptic plasticity also plays a key role in seizure activity. Febrile seizures, seizures due to fever early in life, can lead to increased excitability of hippocampal neurons. These neurons become highly sensitized to convulsant agents. It has been shown that seizures early in life can predispose one to more seizures through nonsynaptic mechanisms.\n\nTrauma, including stroke that results in cortical injury, often results in epilepsy. Increased excitability and NMDA conductances result in epileptic activity, suggesting that nonsynaptic plasticity may be the mechanism through which epilepsy is induced after trauma.\n\nSection::::Applications to disease.:Autism.\n",
"Tyler, Matt and Caroline are on their way home when John activates the device. Tyler hears the noise and loses control of the car, which crashes into a gate. The paramedics get to the scene and examine the unconscious Tyler. His pulse is steady but when the paramedic tries to examine his eyes, they don't look human. Tyler wakes up and his eyes are back to normal. Caroline, who seemed perfectly fine before, collapses.\n",
"In addition to oxygen deprivation or deficiency, infections are a common pathological cause of ASC. A prime example of an infection includes meningitis. The medical website WEBMD states that meningitis is an infection that causes the coverings of the brain to swell. This particular infection occurs in children and young adults. This infection is primarily viral. Viral meningitis causes ASC and its symptoms include fevers and seizures (2010). The Impairment becomes visible the moment seizures begin to occur, this is when the patient enters the altered state of consciousness.\n\nSection::::Induction methods.:Pathologies/other.:Sleep deprivation.\n",
"Ictal asystole\n\nIctal asystole is a rare occurrence for patients that have temporal lobe epilepsy. It can often be identified by loss of muscle tone or the presence of bilateral asymmetric jerky limb movements during a seizure, although ECG monitoring is necessary to provide a firm result. Ictal asystole and Ictal bradycardia can cause an epileptic patient to die suddenly. \n",
"Section::::Main characters.:Isa Drennan.\n",
"Section::::Main characters.:Victor McDade.\n",
"\"The Moving Finger\" was first adapted for television by the BBC with Joan Hickson in the series \"Miss Marple\". It first aired on 21–22 February 1985.\n",
"Jacksonian seizures are initiated with abnormal electrical activity within the primary motor cortex. They are unique in that they travel through the primary motor cortex in succession, affecting the corresponding muscles, often beginning with the fingers. This is felt as a tingling sensation, or a feeling of waves through the fingers when touched together. It then affects the hand and moves on to more proximal areas on the same side of body. Symptoms often associated with a Jacksonian seizure are sudden head and eye movements, tingling, numbness, smacking of the lips, and sudden muscle contractions. Most of the time any one of these actions can be seen as normal movements, without being associated with the seizure occurring. They occur at no particular moment and last only briefly. They may result in secondary generalized seizure involving both hemispheres. They can also start at the feet, manifesting as tingling or pins and needles, and there are painful cramps in the foot muscles, due to the signals from the brain. Because it is a partial seizure, the postictal state is of normal consciousness .\n",
"Section::::Other characters.\n\nFiona\n",
"Penobscott is not actually seen until the season-ending episode \"Margaret's Marriage\", wherein Donald (played by Carroll) arrives to marry Margaret at the 4077th. Hawkeye and B.J. have a bachelor party for him, and after he passes out from drunkenness, the hosts, also inebriated, decide to play a joke on Penobscott by plastering him from his chest to his toes, intending to tell him that he had broken both his legs during the night. The cast is still on during the wedding ceremony, and he is unable to move without assistance. The wedding is cut short by incoming wounded, which leaves Donald in the mess hall, unable to move in his body cast. As Margaret leaves for her honeymoon, they make a halfhearted attempt to tell her that the cast could be removed, but she doesn't hear them over the sound of the helicopter they are departing in.\n",
"Section::::Hawkins Family.:Jean Hawkins.\n",
"Charge Nurse\n\nPlayed by Carolyn Konrad; appears in 3 episodes from \"Faimly\" to \"Wireless\". She is a nurse in the hospital.\n\nHarry Drennan \n",
"Kalachakram (2002 film)\n\nKalachakram is a 2002 Malayalam-language Indian feature film directed by Sonu Sisupal, starring Siddique and Ashwathy in lead roles. \n\nSection::::Plot.\n\nIn 1945, before Adolf Hitler's death, his cells were kept for the future. In 2005 AD, the clone is used to recreate Hitler and to achieve his undone mission after six decades.\n\nSection::::Cast.\n\nBULLET::::- Jayasurya as Prashanth's friend\n\nBULLET::::- Siddique as Rajeev George\n\nBULLET::::- Ashwathy as Gayathri/Golda\n\nBULLET::::- Bindu Ramakrishnan as Prashanth's mother\n\nBULLET::::- Devan as Nanaji\n\nBULLET::::- Jagadish as Ganapathi Swami\n\nBULLET::::- M. S. Thripunithura as Chief Editor/Veena's father\n\nBULLET::::- Neelam as Veena\n",
"Section::::Main characters.:Thomas \"Tam\" Mullen.\n",
"Hans Conried was enthusiastic about the role, saying in retrospect, \"I had never had any such part before, never have since and probably never will again. We rehearsed for eight weeks before I was engaged to shoot for eight weeks, an extravagance that I as a bit player had never known ... If it had been a success, with my prominent part in the title role, it would have changed my life.\"\n",
"Research by Dr. Eelco Wijdicks on the depiction of comas in movies was published in Neurology in May 2006. Dr. Wijdicks studied 30 films (made between 1970 and 2004) that portrayed actors in prolonged comas, and he concluded that only two films accurately depicted the state of a coma victim and the agony of waiting for a patient to awaken: \"Reversal of Fortune\" (1990) and \"The Dreamlife of Angels\" (1998). The remaining 28 were criticized for portraying miraculous awakenings with no lasting side effects, unrealistic depictions of treatments and equipment required, and comatose patients remaining muscular and tanned.\n",
"Stress exposure results in hormone release that mediates its effects in the brain. These hormones act on both excitatory and inhibitory neural synapses, resulting in hyper-excitability of neurons in the brain. The hippocampus is known to be a region that is highly sensitive to stress and prone to seizures. This is where mediators of stress interact with their target receptors to produce effects.\n\nSection::::Causes.:Other.\n",
"In an episode of \"The Untouchables\" syndicated TV series (1993-1994), members of an Irish gang are playing poker when they are gunned down by members of an Italian mob. A short time later, other members of the Irish gang come to investigate the scene of the demise of their unfortunate colleagues. At one point, the leader of the investigating team examines the poker hand held by one of the deceased and comments matter-of-factly, \"Aces and eights\".\n",
"The character has been the basis for several action figures.\n\nSection::::Recent Whereabouts.\n",
"One newspaper story says that a similar ambulance had waited for Viana, who was executed on December 10, 1920. Viana was taken to a room about two blocks from the jail with where he was successfully revived to test the system before Cardinelli's execution. Viana was then killed because he had snitched against the Cardinella gang.\n\nSection::::In popular culture.\n\nBULLET::::- The Cardinelli Gang was the subject of author W.R. Burnett's 1929 novel, \"Little Caesar\", which was adapted into the famous 1930 film \"Little Caesar\", starring Edward G. Robinson.\n",
"Section::::Main characters.:Navid Harrid.\n",
"The third and final SSE Hydro live show \"Still Game: The Final Farewell\" was officially announced on 1 November 2018, with 5 shows in September 2019 taking place over three days. A further 5 shows where announced on 2 November.\n\nSection::::Cast.\n\nSection::::Cast.:Main cast.\n\nBULLET::::- Ford Kiernan as Jack Jarvis\n\nBULLET::::- Greg Hemphill as Victor McDade\n\nBULLET::::- Paul Riley as Winston Ingram\n\nBULLET::::- Mark Cox as Thomas \"Tam\" Mullen\n\nBULLET::::- Jane McCarry as Isa Drennan\n\nBULLET::::- Sanjeev Kohli as Navid Harrid\n\nBULLET::::- Gavin Mitchell as Robert \"Boabby The Barman\" Taylor\n\nSection::::Cast.:Recurring cast.\n\nBULLET::::- James Martin as \"Auld\" Eric (Jones)\n"
] | [
"Depictions of a person regaining consciousness are accurate in the real world.",
"Our fingers are the first to twitch when regaining consciousness"
] | [
"Depictions of a person regaining consciousness are not necessarily accurate in the real world.",
"Despite Hollywood depictions of the phenomenon, one's fingers are not the first to twitch when regaining consciousness."
] | [
"false presupposition"
] | [
"Depictions of a person regaining consciousness are accurate in the real world.",
"Our fingers are the first to twitch when regaining consciousness"
] | [
"false presupposition",
"false presupposition"
] | [
"Depictions of a person regaining consciousness are not necessarily accurate in the real world.",
"Despite Hollywood depictions of the phenomenon, one's fingers are not the first to twitch when regaining consciousness."
] |
2018-01822 | Why does the earth being 1 degree warmer so significant? | > If the weather was warmer by a degree on average every day there would be no difference. Well, that isn't what is being talked about. Instead think about how much energy is required to heat up the entire atmosphere and surface of Earth by one degree. A huge amount! That much energy being at work in the atmosphere can mean more powerful weather effects in the short term or in a specific area. Also think about how it gets colder as you move toward the poles. At some point water starts to freeze to ice. Now if you raise the overall temperature by one degree this would mean that point where water freezes consistently would move closer to the poles by some amount. As the change is very gradual this might mean like 100 miles, in a strip all the way around the planet for both poles. How much area is that?! And it isn't like that line is always in the same place throughout the year so ice melting at different times than normal can translate to unpredictable changes in the weather. | [
"Section::::Solar radiation.\n\nAlmost all of the energy available to the Earth's surface and atmosphere comes from the sun in the form of solar radiation (light from the sun, including invisible ultraviolet and infrared light). Variations in the amount of solar radiation reaching different parts of the Earth are a principal driver of global and regional climate. Latitude is the most important factor determining the yearly average amount of solar radiation reaching the top of the atmosphere; the incident solar radiation decreases smoothly from the Equator to the poles. Therefore, temperature tends to decrease with increasing latitude.\n",
"In the extreme, the planet Venus is thought to have experienced a very large increase in greenhouse effect over its lifetime, so much so that its poles have warmed sufficiently to render its surface temperature effectively isothermal (no difference between poles and equator). On Earth, water vapor and trace gasses provide a lesser greenhouse effect, and the atmosphere and extensive oceans provide efficient poleward heat transport. Both palaeoclimate changes and recent global warming changes have exhibited strong polar amplification, as described below.\n",
"The amount of solar energy reaching Earth's surface decreases with increasing latitude. At higher latitudes, the sunlight reaches the surface at lower angles, and it must pass through thicker columns of the atmosphere. As a result, the mean annual air temperature at sea level decreases by about per degree of latitude from the equator. Earth's surface can be subdivided into specific latitudinal belts of approximately homogeneous climate. Ranging from the equator to the polar regions, these are the tropical (or equatorial), subtropical, temperate and polar climates.\n\nThis latitudinal rule has several anomalies:\n",
"Today, submarine mapping and measurements have been drastically reduced. One classic way to measuring ice thickness is to drill a hole in the ice and analyze the ice obtained. There are also many more complex methods and devices dedicated to measuring and keeping track of weather conditions in polar areas. These include ice mass balance buoys, upward looking sonar from under-ice buoys, and satellites. Global warming has increased interest in polar meteorology. This is because most of Earth's snow and ice are in polar regions, and these areas are expected to be the most affected by the snow/ice-surface albedo feedback effect. Therefore, if increased atmospheric carbon dioxide concentration causes global warming, then polar regions should warm faster than other locations on Earth.\n",
"A 2018 published study points at a threshold at which temperatures could rise to 4 or 5 degrees compared to the pre-industrial levels, through self-reinforcing feedbacks in the climate system, suggesting this threshold is below the 2 degree temperature target, agreed upon by the Paris climate deal. Study author Katherine Richardson stresses, \"We note that the Earth has never in its history had a quasi-stable state that is around 2°C warmer than the pre-industrial and suggest that there is substantial risk that the system, itself, will 'want' to continue warming because of all of these other processes – even if we stop emissions. This implies not only reducing emissions but much more.\"\n",
"Satellite datasets show that over the past four decades the troposphere has warmed and the stratosphere has cooled. Both of these trends are consistent with the influence of increasing atmospheric concentrations of greenhouse gases.\n\nSection::::Measurements.\n",
"Global surface temperatures have increased about 0.74 °C (plus or minus 0.18 °C) since the late-19th century, and the linear trend for the past 50 years of 0.13 °C (plus or minus 0.03 °C) per decade is nearly twice that for the past 100 years. The warming has not been globally uniform. Some areas have, in fact, cooled slightly over the last century. The recent warmth has been greatest over North America and Eurasia between 40 and 70°N. Lastly, seven of the eight warmest years on record have occurred since 2001 and the 10 warmest years have all occurred since 1995.\n",
"By looking at the difference time series, the year-to-year variability of the climate is removed, as well as regional climatic trends. In such a difference time series, a clear and persistent jump of, for example 1 °C, can easily be detected and can only be due to changes in the measurement conditions.\n",
"Section::::Importance to the Earth's atmosphere.\n",
"The instrumental temperature record from surface stations was supplemented by radiosonde balloons, extensive atmospheric monitoring by the mid-20th century, and, from the 1970s on, with global satellite data as well. Taking the record as a whole, most of the 20th century had been unprecedentedly warm, while the 19th and 17th centuries were quite cool.\n\nThe O/O ratio in calcite and ice core samples used to deduce ocean temperature in the distant past is an example of a temperature proxy method, as are other climate metrics noted in subsequent categories.\n\nSection::::Physical evidence and effects.:Glaciers.\n",
"CLIMAP has been a cornerstone of paleoclimate research and remains the most used sea surface temperature reconstruction of the global ocean during the last glacial maximum (Yin and Battisti 2001), but it has also been persistently controversial. CLIMAP resulted in estimates of global cooling of only 3.0 ± 0.6 °C relative to the modern day (Hoffert and Covey 1992). The climate change during an ice age that occurs far from the continental ice sheets themselves is believed to be primarily controlled by changes in greenhouse gases, hence the conditions during the last glacial maximum provide a natural experiment for measuring the impact of changes in greenhouse gases on climate. The cited estimates of 3.0 °C implies a climate sensitivity to carbon dioxide changes at the low end of the range proposed by the Intergovernmental Panel on Climate Change .\n",
"Section::::Thermal inertia.\n\nThe ocean’s thermal inertia delays some global warming for decades or centuries. It is accounted for in global climate models, and has been confirmed via measurements of Earth’s energy balance. Permafrost takes longer to respond to a warming planet because of thermal inertia, due to ice rich materials and permafrost thickness.\n\nThe observed transient climate sensitivity and the equilibrium climate sensitivity are proportional to the thermal inertia time scale. Thus, Earth’s equilibrium climate sensitivity adjusts over time until a new steady state equilibrium has been reached.\n\nSection::::Ice sheet inertia.\n",
"Global average temperature is one of the most-cited indicators of global climate change, and shows an increase of approximately 1.4 °F since the early 20th Century. The global surface temperature is based on air temperature data over land and sea-surface temperatures observed from ships, buoys and satellites. There is a clear long-term global warming trend, while each individual year does not always show a temperature increase relative to the previous year, and some years show greater changes than others. These year-to-year fluctuations in temperature are due to natural processes, such as the effects of El Niños, La Niñas, and the eruption of large volcanoes. Notably, the 20 warmest years have all occurred since 1981, and the 10 warmest have all occurred in the past 12 years.\n",
"Polar amplification\n\nPolar amplification is the phenomenon that any change in the net radiation balance (for example greenhouse intensification) tends to produce a larger change in temperature near the poles than the planetary average. On a planet with an atmosphere that can restrict emission of longwave radiation to space (a greenhouse effect), surface temperatures will be warmer than a simple planetary equilibrium temperature calculation would predict. Where the atmosphere or an extensive ocean is able to transport heat polewards, the poles will be warmer and equatorial regions cooler than their local net radiation balances would predict.\n",
"Though the global average temperature response to cumulative emissions is approximately linear, this response is not uniform throughout the globe. Calculations by Leduc et al., (2016) of the geographical pattern of temperature response (the regional TCRE, or RTCRE) show values of low temperature change over equatorial and tropical ocean regions and high values of temperature change exceeding 4 °C/Tt C in the Arctic. Likewise, they show a pronounced temperature response difference between the land and ocean, which is largely the result of ocean heat cycling.\n\nSection::::Temperature Response.:Regional response.:Regional precipitation response.\n",
"Most people say, \"A few degrees? So what? If I change my thermostat a few degrees, I'll live fine.\" ... [The] point is that one or two degrees is about the experience that we have had in the last 10,000 years, the era of human civilization. There haven't been—globally averaged, we're talking—fluctuations of more than a degree or so. So we're actually getting into uncharted territory from the point of view of the relatively benign climate of the last 10,000 years, if we warm up more than a degree or two. (Stephen H. Schneider)\n",
"Received radiation is unevenly distributed over the planet, because the Sun heats equatorial regions more than polar regions. \"The atmosphere and ocean work non-stop to even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds, and ocean circulation.\" Earth is very close to being in radiative equilibrium, the situation where the incoming solar energy is balanced by an equal flow of heat to space; under that condition, global temperatures will be \"relatively\" stable. Globally, over the course of the year, the Earth system—land surfaces, oceans, and atmosphere—absorbs and then radiates back to space an average of about 340 watts of solar power per square meter. Anything that increases or decreases the amount of incoming or outgoing energy will change global temperatures in response.\n",
"Section::::Observed temperature changes.\n\n Multiple independently produced datasets confirm that between 1880 and 2012, the global average (land and ocean) surface temperature increased by 0.85 [0.65 to 1.06] °C. Currently, surface temperature rise with about 0.2 °C degrees per decade. Since 1950, the number of cold days and nights have decreased, and the number of warm days and night have increased. These trends can be compared to historical temperature trends: patterns of warming and cooling like the Medieval Climate Anomaly and the Little Ice Age were not as synchronous as current warming, but did reach temperatures as high as late-20th century regionally.\n",
"The Hans Tausen Iskappe (ice cap) in Peary Land (northern Greenland) was drilled in 1977 with a new deep drill to 325 m. The ice core contained distinct melt layers all the way to bedrock indicating that Hans Tausen Iskappe contains no ice from the last glaciation; i.e., the world's northernmost ice cap melted away during the post-glacial climatic optimum and was rebuilt when the climate got colder some 4000 years ago.\n",
"In the Northern Hemisphere, when the Earth is at its furthest point from the sun (aphelion) the variation in temperature between winter and summer are less extreme. When the earth is closest to the sun (perihelion), about 5,750 years later, then the variations are at their most extreme. At present the Earth is at its furthest, so the northern hemisphere summers and winters are less extreme and the southern hemisphere climate is more extreme.\n",
"Cooling between 2006 and 2008, for instance, was likely driven by La Niña, the opposite of El Niño conditions. The area of cooler-than-average sea surface temperatures that defines La Niña conditions can push global temperatures downward, if the phenomenon is strong enough. Even accounting for the presence of internal climate variability, recent years rank among the warmest on record. For example, every year of the 2000s was warmer than the 1990 average.\n\nSection::::Regional temperature.\n",
"The U.S. National Weather Service Cooperative Observer Program has established minimum standards regarding the instrumentation, siting, and reporting of surface temperature stations. The observing systems available are able to detect year-to-year temperature variations such as those caused by El Niño or volcanic eruptions.\n\nThe urban heat island effect is very small, estimated to account for less than of warming per decade since 1900.\n",
"Changes to the surface of the planet, such as an absence of volcanoes or higher sea levels, which would reduce the amount of land surface exposed to weathering can change the rates at which different processes in this cycle take place. Over tens to hundreds of millions of years, carbon dioxide levels in the atmosphere may vary due to natural perturbations in the cycle but even more generally, it serves as a critical negative feedback loop between carbon dioxide levels and climate changes. For example, if CO builds up in the atmosphere, the greenhouse effect will serve to increase the surface temperature, which will in turn increase the rate of rainfall and silicate weathering, which will remove carbon from the atmosphere. In this way, over long timescales, the carbonate-silicate cycle has a stabilizing effect on the Earth's climate, which is why it has been called the Earth's thermostat.\n",
"Numerous cycles have been found to influence annual global mean temperatures. The tropical El Niño–La Niña cycle and the pacific decadal oscillation are the most well-known of these cycles. An examination of the average global temperature changes by decades reveals continuing climate change, and AR5 reports \"Each of the last three decades has been successively warmer at the Earth's surface than any preceding decade since 1850 (see Figure SPM.1). In the Northern Hemisphere, 1983–2012 was likely the warmest 30-year period of the last 1,400 years (medium confidence)\".\n",
"Seasonal temperature trends are positive over most of the globe but weak cooling is observed over the mid latitudes of the southern ocean but also over eastern Canada in spring because of strengthening of the North Atlantic oscillation. Warming is stronger over northern Europe, China and North America in winter, Europe and Asia interior in spring, Europe and north Africa in summer and northern North America, Greenland and eastern Asia in autumn.\n"
] | [
"Earth being 1 degree warmer is very significant. "
] | [
"Weather being warmer by one degree would not make much of a difference. "
] | [
"false presupposition"
] | [
"Earth being 1 degree warmer is very significant. ",
"Earth being 1 degree warmer is very significant. "
] | [
"normal",
"false presupposition"
] | [
"Weather being warmer by one degree would not make much of a difference. ",
"Weather being warmer by one degree would not make much of a difference. "
] |
2018-22799 | What happens when we stretch in the morning? Why does it feel so good and why is it almost compulsory sometimes? | Our muscles and fascia are mostly unmoving while we sleep, and there is constant pressure from whatever we are sleeping on. This causes them to kind of bind together and get sticky, stretching pulls the layers apart and keeps our mobility and flexibility up. You may notice that people that sit for long stretches and don't keep up with mobility will become stuck, in a poor posture or with joint pain from parts pulling on unmoving parts. | [
"In its most basic form, stretching is a natural and instinctive activity; it is performed by humans and many other animals. It can be accompanied by yawning. Stretching often occurs instinctively after waking from sleep, after long periods of inactivity, or after exiting confined spaces and areas.\n\nIncreasing flexibility through stretching is one of the basic tenets of physical fitness. It is common for athletes to stretch before (for warming up) and after exercise in an attempt to reduce risk of injury and increase performance.\n",
"The practice of holding yoga postures or \"asanas\" for extended periods of time is a significant part of traditional yoga practice, both in the hatha yoga tradition of India and in the Taoist yoga tradition of the greater China area. For example, B. K. S. Iyengar recommended holding Supta Virasana (reclining hero pose) for 10–15 minutes. Long-held stretches are recommended in other physical disciplines, such as gymnastics and ballet, to increase flexibility.\n",
"During the next two decades, Shelter Publications produced a series of fitness books, including Bob Anderson's \"Stretching\" (which has sold three million copies and is in 31 languages), \"Galloway's Book on Running\" by Olympian Jeff Galloway, and \"Getting Stronger\" by legendary bodybuilder Bill Pearl. More recently, Shelter produced StretchWare, software that reminds you to stretch at your computer.\n",
"Section::::Program overview.\n\nAntiGravity classes are led by certified instructors trained in the program(s), and typically last for 75 minutes. Before classes start, hammocks are adjusted to the individual heights of the students, and each lesson begins and ends with guided meditations inside the hammock, referred to as \"Womb Pose\". Students are then led by their instructor through a series of motions and poses adapted for the hammocks from modern yoga, Pilates, ballet, and other exercise techniques, typically focusing on flexibility, mobility, decompression and core development.\n",
"Section::::Commercial performance.:Pt. 2.\n",
"A study sponsored and done by the American Council on Exercise revealed that the extensions are around 70-90 percent effective compared to the triangle push up for the triceps. However, extensions put no pressure on the wrists so they are an alternative for people with wrist strain or injury.\n\nSection::::Execution.\n\nInstructions:\n\nBULLET::::1. Lie on a flat bench with feet on the ground and head hanging just off the top of the bench, so that the edge of the bench rests in the pit between neck and head.\n",
"Section::::Commercial performance.\n\nSection::::Commercial performance.:Pt. 1.\n",
"Stretching prior to strenuous physical activity has been thought to increase muscular performance by extending the soft tissue past its attainable length in order to increase range of motion. Many physically active individuals practice these techniques as a “warm-up” in order to achieve a certain level of muscular preparation for specific exercise movements. When stretching, muscles should feel somewhat uncomfortable but not physically agonizing.\n",
"Applicants may demonstrate their knowledge of human anatomy and physiology by completing a course in human anatomy, human physiology, or human biology provided by a regionally-accredited academic institution or a BCIA-approved training program or by successfully completing an Anatomy and Physiology exam covering the organization of the human body and its systems.\n",
"BULLET::::- John Shaw Billings Correspondence with Libraries of the American Philosophical Society and the College of Physicians of Philadelphia Microfilm Copies 1878-1916—National Library of Medicine finding aid\n\nBULLET::::- John Shaw Billings Papers at the University of South Carolina 1856-1966—National Library of Medicine finding aid\n\nBULLET::::- John Shaw Billings Papers at New York Public Library [microform] 1854-1913—National Library of Medicine Finding aid\n\nBULLET::::- John Shaw Billings: \"I Could Lie Down and Sleep for Sixteen Hours without Stopping,\" Indiana Historical Bureau\n\nBULLET::::- John Shaw Billings: The Many Lives He has Led, Indiana Historical Bureau\n",
"Stretching can be dangerous when performed incorrectly. There are many techniques for stretching in general, but depending on which muscle group is being stretched, some techniques may be ineffective or detrimental, even to the point of causing hypermobility, instability, or permanent damage to the tendons, ligaments, and muscle fiber. The physiological nature of stretching and theories about the effect of various techniques are therefore subject to heavy inquiry.\n",
"The television audience for \"Classical Stretch\" began as mainly women in their 40s and 50s, so Esmonde-White sought to expand her audience to younger viewers. Also \"Classical Stretch\" user-friendly movements turned off young males who like to tell themselves they are doing hard work in the gym. Until they try it, many young men think a stretch class would be too easy for them. \"Essentrics\" was developed with her daughter Sahra for younger audiences. They developed a teacher-training system which includes printed manuals and DVDs, and is available for distance education. \"Essentrics\" is presented for all ages and genders, and seeks to strengthen and stretch every one of the body's 620 muscles.\n",
"However, both of these types of stretching have been shown to have a positive impact on flexibility over time by increasing muscle and joint elasticity, thus increasing the depth and range of motion an athlete is able to reach. This is evident in the experiment \"Acute effects of duration on sprint performance of adolescent football players.\" In this experiment, football players were put through different stretching durations of static and dynamic stretching to test their effects. They were tested on maximum sprinting ability and overall change in flexibility. Both static and dynamic stretching had a positive impact on flexibility but, whereas dynamic stretching had no impact on sprint times, static stretching had a negative result, worsening the time the participants were able to sprint the distance in. While the duration of stretching for dynamic had no impact on the overall results, the longer the stretch was held for static, the worse the results got, showing that the longer the duration of stretching held, the weaker the muscle became.\n",
"Although muscle \"stimulation\" occurs in the gym (or home gym) when lifting weights, muscle \"growth\" occurs afterward during rest periods. Without adequate rest and sleep (6 to 8 hours), muscles do not have an opportunity to recover and grow. Additionally, many athletes find that a daytime nap further increases their body's ability to recover from training and build muscles. Some bodybuilders add a massage at the end of each workout to their routine as a method of recovering.\n\nSection::::Muscle growth.:Overtraining.\n",
"Section::::Stretching.:Static-Active.\n\nStatic-active stretching includes holding an extended position with just the strength of the muscles such as holding the leg in front, side or behind. Static-active flexibility requires a great deal of strength, making it the hardest to develop.\n\nSection::::Stretching.:Ballistic.\n\nBallistic stretching is separate from all other forms of stretching. It does not include stretching, but rather a bouncing motion. The actual performance of ballistic movements prevents lengthening of tissues. These movements should only be performed when the body is very warm; otherwise they can lead to injury.\n\nSection::::Limits of Flexibility.\n",
"Section::::Common exercises.\n\nIn addition to the various stretches, some of the more common calisthenic exercises include:\n\nBULLET::::- Muscle-ups\n\nBULLET::::- Squat jumps (box jumps)\n\nBULLET::::- Front lever\n\nBULLET::::- Push-ups\n\nBULLET::::- Pull-ups\n\nBULLET::::- Chin-ups\n\nBULLET::::- Squats\n\nBULLET::::- Back lever\n\nBULLET::::- Handstand\n\nBULLET::::- Dips\n\nBULLET::::- Hyperextensions\n\nBULLET::::- Leg raises\n\nBULLET::::- Planks\n\nBULLET::::- Shuttle runs\n\nBULLET::::- Burpees\n\nBULLET::::- L-sit\n\nSection::::Calisthenics parks.\n",
"Yoga classes used as therapy usually consist of asanas (postures used for stretching), pranayama (breathing exercises), and relaxation in savasana (lying down). The physical asanas of modern yoga are related to medieval haṭha yoga tradition, but they were not widely practiced in India before the early 20th century.\n",
"Stretching\n\nStretching is a form of physical exercise in which a specific muscle or tendon (or muscle group) is deliberately flexed or stretched in order to improve the muscle's felt elasticity and achieve comfortable muscle tone. The result is a feeling of increased muscle control, flexibility, and range of motion. Stretching is also used therapeutically to alleviate cramps.\n",
"BULLET::::5. Press back up to starting 10 o’clock position.\n\nTry to avoid moving your elbows too much; try to keep them the same width apart during the whole movement.\n\nSection::::Variations.\n\nSection::::Variations.:Vertical French extension.\n\nIn this variation, the exercise is performed while standing (or sitting on a device with a low backing—that allows the shoulders full range of movement). With respect to gravity, the weight is still lifted in the same manner. With respect to the body, .\n",
"However, other studies have found that removing portions of these series-elastic components (by way of tendon length reduction) had little effect on muscle performance.\n\nStudies on turkeys have, nevertheless, shown that during SSC, a performance enhancement associated with elastic energy storage still takes place but it is thought that the aponeurosis could be a major source of energy storage (Roleveld et al., 1994).\n",
"BULLET::::3. While H. A. DeVries, L. E. Holt and others wandered from this course, P. Williams (1937) utilized procedures for his flexion exercises back program. Peters and Peters (1975) further adapted Sherrington's principles into their program of ‘active stretching’, departing from the popular static stretching designed for specific sports, to address mobility of the entire body.\n",
"Uttanasana\n\nUttanasana (; ) or Standing Forward Bend, with variants such as Padahastasana where the toes are grasped, is a standing forward bending asana in modern yoga as exercise.\n\nSection::::Etymology and origins.\n\nThe name comes from the Sanskrit words उत्तान \"uttāna\", \"intense stretch\"; and आसन; \"āsana\", \"posture\" or \"seat\".\n",
"BULLET::::- Passive stretching (also called: static-passive stretching; assisted relaxed stretching) - 1. A static stretch (See: \"static stretching\") in which an external force (such as the floor or another person) holds the performer in the static position. 2. The practice of having a relaxed limb moved beyond its normal range of motion with the assistance of a partner. In \"active stretching\", in contrast, the limb is extended to its maximum range using only the muscles of that limb.\n\nBULLET::::- Pike - To be bend forward at the waist with the legs and trunk kept straight.\n",
"Flexibility is improved by stretching. Stretching should only be started when muscles are warm and the body temperature is raised. To be effective while stretching, force applied to the body must be held just beyond a feeling of pain and needs to be held for at least ten seconds. Increasing the range of motion creates good posture and develops proficient performance in everyday activities increasing the length of life and overall health of the individual.\n\nSection::::Stretching.:Dynamic.\n",
"Although static stretching is part of some warm-up routines, a study in 2013 indicated that it weakens muscles. For this reason, an active dynamic warm-up is recommended before exercise in place of static stretching.\n\nSection::::Physiology.\n\nStudies have shed light on the function, in stretching, of a large protein within the myofibrils of skeletal muscles named titin. A study performed by Magid and Law demonstrated that the origin of passive muscle tension (which occurs during stretching) is actually within the myofibrils, not extracellularly as had previously been supposed.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-03852 | Where is the internet stored? If all computers were destroyed would we be able to get back to our same internet? | If you destroyed all the pizza restaurants, who would you call to order a pizza? That's basically what you're asking. "The Internet" is just the connections between all the computers, it doesn't store anything or provide any services *other than* letting computers talk. Now, a lot of people say "the internet" when they mean "all the resources & information accessible via the internet" but that's not technically correct. | [
"Section::::Current proposal.\n",
"Computers are used as control systems for a wide variety of industrial and consumer devices. This includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and also general purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects hundreds of millions of other computers and their users.\n",
"The source code was released into the public domain on April 30, 1993. Some of the code still resides on Tim Berners-Lee's NeXT Computer in the CERN museum and has not been recovered due to the computer's status as a historical artifact. To coincide with the 20th anniversary of the research center giving the web to the world, a project began in 2013 at CERN to preserve this original hardware and software associated with the birth of the Web.\n\nSection::::History.\n",
"The Internet Archive allows the public to upload and download digital material to its data cluster, but the bulk of its data is collected automatically by its web crawlers, which work to preserve as much of the public web as possible. Its web archive, the Wayback Machine, contains hundreds of billions of web captures. The Archive also oversees one of the world's largest book digitization projects.\n\nSection::::Operations.\n",
"The Internet allows computer users to remotely access other computers and information stores easily from any access point. Access may be with computer security, i.e. authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure virtual private network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. It has been referred to among system administrators as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into remote locations and its employees' homes.\n",
"In March 2012 the museum opened Life Online, the world's first gallery dedicated to exploring the social, technological and cultural impact of the Internet. The permanent gallery was initially accompanied by a temporary exhibition, \"[open source]: Is the internet you know under threat?\" The exhibition was an exploration of the open source nature of the Internet, and the current threats to both net neutrality and the general continuation of the open source culture.\n",
"The foundation archived a snapshot of the Italian web domain, made in collaboration with the National Library of Italy, an archive of political websites of the 25 EU member states captured during the European constitutional debate, and archives (among others):\n\nBULLET::::- The National Archives (United Kingdom)\n\nBULLET::::- National Library of Ireland\n\nBULLET::::- CERN, Organisation européenne pour la recherche nucléaire (Switzerland)\n\nBULLET::::- Parliament of the United Kingdom\n\nBULLET::::- Public Record Office of Northern Ireland\n",
"The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW).\n",
"Information is moving—you know, nightly news is one way, of course, but it's also moving through the blogosphere and through the Internets.\n\nDuring a discussion on education at a Twitter-themed town hall meeting on July 6, 2011, at the White House, President Barack Obama used the term \"Internets\" and quickly corrected his statement.\n\nSection::::Reaction.\n\nOn the evening of October 9, 2004, the day following the Bush/Kerry debate, \"Saturday Night Live\" parodied Bush with Will Forte's impression of George W. Bush:\n",
"Section::::Other services and endeavors.:Table Top Scribe System.\n\nA combined hardware software system has been developed that performs a safe method of digitizing content.\n\nSection::::Other services and endeavors.:Credit Union.\n",
"Multiple web resources with a common theme, a common domain name, or both, make up a website. Websites are stored in computers that are running a program called a web server that responds to requests made over the Internet from web browsers running on a user's computer. Website content can be largely provided by a publisher, or interactively where users contribute content or the content depends upon the users or their actions. Websites may be provided for a myriad of informative, entertainment, commercial, governmental, or non-governmental reasons.\n\nSection::::History.\n",
"Section::::Media collections.:Mathematics – Hamid Naderi Yeganeh.\n\nThis collection contains mathematical images created by mathematical artist Hamid Naderi Yeganeh.\n\nSection::::Media collections.:Microfilm collection.\n\nThis collection contains approximately 160,000 items from a variety of libraries including the University of Chicago Libraries, the University of Illinois at Urbana-Champaign, the University of Alberta, Allen County Public Library, and the National Technical Information Service.\n\nSection::::Media collections.:Moving image collection.\n",
"The Internet (also known simply as \"the Net\" or less precisely as \"the Web\") is a more interactive medium of mass media, and can be briefly described as \"a network of networks\". Specifically, it is the worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It consists of millions of smaller domestic, academic, business, and governmental networks, which together carry various information and services, such as email, online chat, file transfer, and the interlinked web pages and other documents of the World Wide Web.\n",
"Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).\n",
"In addition to web archives, the Internet Archive maintains extensive collections of digital media that are attested by the uploader to be in the public domain in the United States or licensed under a license that allows redistribution, such as Creative Commons licenses. Media are organized into collections by media type (moving images, audio, text, etc.), and into sub-collections by various criteria. Each of the main collections includes a \"Community\" sub-collection (formerly named \"Open Source\") where general contributions by the public are stored.\n\nSection::::Media collections.:Audio collection.\n",
"Section::::Other services and endeavors.\n\nSection::::Other services and endeavors.:Physical media.\n\nVoicing a strong reaction to the idea of books simply being thrown away, and inspired by the Svalbard Global Seed Vault, Kahle now envisions collecting one copy of every book ever published. \"We're not going to get there, but that's our goal\", he said. Alongside the books, Kahle plans to store the Internet Archive's old servers, which were replaced in 2010.\n\nSection::::Other services and endeavors.:Software.\n",
"The vision of the Physical Internet involves encapsulating goods in smart, ecofriendly and modular containers ranging from the size of a maritime container to the size of a small box. It thus generalizes the maritime container that succeeded to support globalization and shaped ships and ports, and extends containerization to logistics services in general. The Physical Internet moves the border of the private space to be inside of the container instead of the warehouse or the truck. These modular containers will be continuously monitored and routed, exploiting their digital interconnection through the Internet of Things.\n",
"There are known rare cases where online access to content which \"for nothing\" has put people in danger was disabled by the website.\n\nOther threats include natural disasters, destruction (remote or physical), manipulation of the archive's contents (see also: cyberattack, backup), problematic copyright laws and surveillance of the site's users.\n\nKevin Vaughan suspects that in the long-term of multiple generations \"next to nothing\" will survive in a useful way besides \"if we have continuity in our technological civilization\" by which \"a lot of the bare data will remain findable and searchable\".\n",
"The Net is possibly the largest store of information on this planet. Everybody can be part of it; it is one of the few places where race, creed, colour, gender, sexual preference do not prejudice people against others. All this through the magic of modern technology. Communication is the key. People talking to people. The Net isn't computers. That's just the way we access it. The Net is people helping each other in a worldwide community.\n\nSection::::Filmmaking.\n",
"In terms of accessibility, the archived web sites are full text searchable within seven days of capture. Content collected through Archive-It is captured and stored as a WARC file. A primary and back-up copy is stored at the Internet Archive data centers. A copy of the WARC file can be given to subscribing partner institutions for geo-redundant preservation and storage purposes to their best practice standards. Periodically, the data captured through Archive-It is indexed into the Internet Archive's general archive.\n",
"In late 1999, the Archive expanded its collections beyond the Web archive, beginning with the Prelinger Archives. Now the Internet Archive includes texts, audio, moving images, and software. It hosts a number of other projects: the NASA Images Archive, the contract crawling service Archive-It, and the wiki-editable library catalog and book information site Open Library. Soon after that, the archive began working to provide specialized services relating to the information access needs of the print-disabled; publicly accessible books were made available in a protected Digital Accessible Information System (DAISY) format.\n\nAccording to its website:\n",
"In 2005 the only back-up server was located next to Hertfordshire Oil Storage Terminal in Buncefield, which was the scene of a major civil emergency when it burned to the ground in December 2005. According to the Home Office the location had been assessed as low-risk notwithstanding that the site was from a disaster hazard and the site and its surroundings burned to the ground.\n\nSection::::Users.\n",
"Section::::Media collections.:Images collection.:Cover Art Archive.\n\nThe Cover Art Archive is a joint project between the Internet Archive and MusicBrainz, whose goal is to make cover art images on the Internet. This collection contains more than 330,000 items.\n\nSection::::Media collections.:Images collection.:Metropolitan Museum of Art images.\n\nThe images of this collection are from the Metropolitan Museum of Art. This collection contains more than 140,000 items.\n\nSection::::Media collections.:Images collection.:NASA Images.\n",
"BULLET::::- Cloud computing: With cloud computing IT services can be delivered in which resources are retrieved from the Internet as opposed to direct connection to a server. Files can be kept on cloud-based storage systems rather than on local storage devices.\n",
"To combat link rot, web archivists are actively engaged in collecting the Web or particular portions of the Web and ensuring the collection is preserved in an archive, such as an archive site, for future researchers, historians, and the public. The goal of the Internet Archive is to maintain an archive of the entire Web, taking periodic snapshots of pages that can then be accessed for free via the Wayback Machine. In January 2013 the company announced that it had reached the milestone of 240 billion archived URLs. National libraries, national archives and other organizations are also involved in archiving culturally important Web content.\n"
] | [
"Internet is stored somewhere specifically.",
"If all computers are destroyed at once, the internet would be lost and unable to be retrieved."
] | [
"The internet is the connection of computers. As long as computers can connect we have internet. ",
"The connection to the internet is not provided by the computers, therefore destroying all computers at once wouldn't effect the internet. "
] | [
"false presupposition"
] | [
"Internet is stored somewhere specifically.",
"If all computers are destroyed at once, the internet would be lost and unable to be retrieved."
] | [
"false presupposition",
"false presupposition"
] | [
"The internet is the connection of computers. As long as computers can connect we have internet. ",
"The connection to the internet is not provided by the computers, therefore destroying all computers at once wouldn't effect the internet. "
] |
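One of the passages in the entry above notes that human-readable server names are translated to IP addresses, transparently to users, by the directory function of the Domain Name System. As an editorial illustration only (not part of the quoted sources), the short Python sketch below asks the operating system's resolver to perform that lookup; the hostname "example.com" is an arbitrary placeholder.

    # Minimal sketch: resolve a hostname to its IP addresses via the
    # system resolver, which in turn queries DNS. Illustrative only.
    import socket

    def resolve(hostname):
        # getaddrinfo consults the system resolver (DNS plus the local
        # hosts file) and returns one tuple per (family, socktype) pair;
        # the sockaddr's first element is the IP address string.
        infos = socket.getaddrinfo(hostname, None)
        return sorted({info[4][0] for info in infos})

    print(resolve("example.com"))  # prints the resolved IPv4/IPv6 addresses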
2018-11700 | The Super-Kamiokande neutrino detector and how it’s ultra pure water can dissolve metal | I'm not buying it. Myths about the dangers of ultrapure water abound, but are not backed by solid evidence. This is anecdotal evidence. All water can dissolve metal to some extent, depending on the metal. That's why lead pipes are bad for drinking water supplies. If you drink ultrapure water, it will have some minerals in it when it leaves your body. That does not mean it is dangerous in normal amounts. Everyone needs some minerals in their diet to replace those lost in urine and sweat. | [
"Bimetallic materials are materials that are made out of two different metals or alloys that are tightly bonded together. A good example of a bimetallic material would be a bimetallic strip which is used in some kinds of thermometers. In ISCR, bimetallic materials are small pieces of metals that are coated lightly with a catalyst such as palladium, silver, or platinum. The catalyst drives a faster reaction and the small size of the particles allows them to effectively move into and remain in the target zone.\n\nSection::::Reductants.:Proprietary materials.\n",
"BULLET::::- Cementation is the conversion of the metal ion to the metal by a redox reaction. A typical application involves addition of scrap iron to a solution of copper ions. Iron dissolves and copper metal is deposited.\n\nBULLET::::- Solvent Extraction\n\nBULLET::::- Ion Exchange\n\nBULLET::::- Gas reduction. Treating a solution of nickel and ammonia with hydrogen affords nickel metal as its powder.\n\nBULLET::::- Electrowinning is a particularly selective if expensive electrolysis process applied to the isolation of precious metals. Gold can be electroplated from its solutions.\n\nSection::::Solution concentration and purification.:Solvent extraction.\n",
"Section::::Solution concentration and purification.:Ion exchange.\n\nChelating agents, natural zeolite, activated carbon, resins, and liquid organics impregnated with chelating agents are all used to exchange cations or anions with the solution. Selectivity and recovery are a function of the reagents used and the contaminants present.\n\nSection::::Metal Recovery.\n",
"Section::::Use in environmental metal toxicology.\n\nExposure to ionic metals has been shown to result in deleterious effects for aquatic organisms and may induce oxidative stress, cause DNA damage, and decrease enzyme activity. In contrast, some metals under certain environmental conditions have potential moderating effects on other more toxic metals; one example being zinc (Zn), which has been shown to reduce copper (Cu) toxicity when both metals are present. Given that the presence of particular aqueous metals may have a wide array of effects on organisms, aquatic toxicologists have developed various methods for sampling them.\n",
"Metal recovery is the final step in a hydrometallurgical process. Metals suitable for sale as raw materials are often directly produced in the metal recovery step. Sometimes, however, further refining is required if ultra-high purity metals are to be produced. The primary types of metal recovery processes are electrolysis, gaseous reduction, and precipitation. For example, a major target of hydrometallurgy is copper, which is conveniently obtained by electrolysis. Cu ions reduce at mild potentials, leaving behind other contaminating metals such as Fe and Zn.\n\nSection::::Metal Recovery.:Electrolysis.\n",
"SLMDs are limited to the assessment of labile metals, and cannot be used to monitor for organic contaminants. Further, while the ability of SLMDs to sample copper, zinc, nickel, lead, and cadmium has been repeatedly demonstrated, there has been little laboratory research on their ability to reliably uptake other toxic metals. Still, while laboratory studies on the effectiveness of SLMDs have only investigated copper, zinc, nickel, lead, and cadmium, SLMDs have been used with success in field studies to assess a wider range of metals.\n\nSection::::See also.\n\nBULLET::::- Semipermeable membrane devices\n\nBULLET::::- Polar organic chemical integrative sampler\n",
"There are related non-chemical devices based on a variety of physical phenomena which have been marketed for over 50 years with similar claims of scale inhibition. Whilst some are effective, such as electrolytic devices, most do not work.\n\nBULLET::::- Electrolysis: \"Electrolytic scale inhibitors\" - two metals such as copper and zinc are used\n\nBULLET::::- Electrostatic: \"Electronic water conditioners\"\n\nBULLET::::- Electromagnetic: fluctuating electromagnetic fields are created\n\nBULLET::::- Catalytic\n\nBULLET::::- Mechanical\n\nBULLET::::- Other devices combine these different methods\n\nOther uses of magnetic devices:\n",
"After being deployed for a known time interval, SLMDs can be recovered from the field for analysis. Washing with 20% nitric acid allows for the extraction of accumulated metals, and by using analytical techniques like inductively coupled plasma mass spectroscopy (ICP-MS) or atomic absorption spectroscopy (flame AAS) to measure the concentration of metal in the extract, the amount of metal accumulated by the SLMD can be determined.\n\nSection::::Applications.\n\nSLMDs are known to accumulate cadmium, cobalt, copper, nickel, lead, and zinc, and have been deployed in freshwater monitoring studies by The Washington State Department of Ecology (Ecology) and the USGS.\n",
"Hydrometallurgy\n\nHydrometallurgy is a method for obtaining metals from their ores. It is a technique within the field of extractive metallurgy involving the use of aqueous chemistry for the recovery of metals from ores, concentrates, and recycled or residual materials. Metal chemical processing techniques that complement hydrometallurgy are pyrometallurgy, vapour metallurgy and molten salt electrometallurgy. Hydrometallurgy is typically divided into three general areas:\n\nBULLET::::- Leaching\n\nBULLET::::- Solution concentration and purification\n\nBULLET::::- Metal or metal compound recovery\n\nSection::::Leaching.\n",
"After deployment, the immobilized metal species can then be extracted from the outer membrane. The metal species can be identified and analyzed using widely recognized standard techniques (e.g., digestion, atomic absorption spectroscopy, inductively coupled plasma mass spectrometry, etc.). In this regard, any procedure or analytical technique applicable to measuring ionic or complexed metal species is suitable for determining metal concentrations sequestered by the SLMD.\n",
"Section::::Reactivity and Applications.\n\nMost simple metallated compounds are commercially available in both the solid and solution phases, with solution phase metallated compounds available in a wide range of solvents and concentrations. These compounds may also be created in the laboratory as an \"in situ\" synthetic intermediate or separately in solution.\n\nSection::::Reactivity and Applications.:Reactivity of Metallated Compounds.\n",
"Metals in the environment can speciate into different forms. Most metals dissolved in the aqueous environment are present as any of several ionic, complex-ion, and organically bound states. For most toxic metals, bioavailability is greatest for labile metals in their free ionic state. Recognizing the potential usefulness of a passive sampling device that could be used to measure trace amounts of bioavailable toxic metals, researchers at the United States Geological Survey (USGS) and University of Missouri began development on a counterpart to SPMDs that could be used to sample for labile metals.\n\nSection::::Structure and function.\n",
"Toxic metals can be present in the aqueous environment at trace or ultra-trace concentrations, yet still be toxicologically significant and thus cause harm to humans or the environment. Because these concentrations are so low, they would fall beyond the detection limits of most analytical instruments if the media had been sampled using traditional grab samples. Using SLMDs to passively collect metals over an extended period of time allows for trace metals to accumulate to detectable levels, which can give more accurate estimate of aquatic chemistry and contamination. SLMDs also have the advantage of being able to capture pulses of metal contamination that might otherwise go undetected when using grab samples.\n",
"Peepers are passive diffusion samplers used for metals in freshwater and marine sediment pore water, so they can be used to find areas that may have metal-contaminated sediments. Peepers are plastic vessels filled with clean water and covered in a dialysis membrane, which allows metals in sediment pore water to enter the water inside the peeper. They are usually placed deep enough into sediment to be in an anoxic environment, in which metals will be soluble enough to sample. If the peepers are deployed long enough so the sediment pore water and contained peeper water reach equilibrium, they can accurately provide metal concentrations in sampled sediment pore water.\n",
"BULLET::::- The metal (M) is transformed from solid form to the dissolved form. In uncontaminated sediment, M will mostly be a combination of Fe and Mn. In contaminated sediments, toxic heavy metals such as Cd, Pb, etc. will also be liberated by the reaction.\n\nOnce the quantity (in moles) of AVS has been determined in this way, it is divided by the dry mass of the sediment to obtain the AVS concentration. In addition to the gravimetric method described here, other methods, such as colorimetry, may be used.\n\nSection::::Methods.:Metals Determination.\n",
"BULLET::::- Gold can be separated from a cyanide solution with the Merrill-Crowe process using Counter Current Decantation (CCD). In some mines, Nickel and Cobalt are treated with CCD, after the original ore was treated with concentrated Sulfuric acid and steam in Titanium covered autoclaves, producing nickel cobalt slurry. The nickel and cobalt in the slurry are removed from it almost completely using a CCD system exchanging the cobalt and nickel with flash steam heated water.\n",
"BULLET::::- Oxidation Reduction reactions are forced to their natural end point within the reaction tank which speeds up the natural process of nature that occurs in wet chemistry, where concentration gradients and Solubility Products (KsP) are the chief determinants to enable reactions to reach stoichiometric completion.\n\nBULLET::::- Electrocoagulation Induced pH swings toward neutral.\n\nSection::::Water treatment.:Optimizing reactions.\n",
"BDTH can be used to chelate heavy metals like lead, cadmium, copper, manganese, zinc, iron, and mercury from ground water, coal tailings, gold ore, waste water of battery-recycling plants, and contaminated soil.\n",
"From a hydrometallurgical perspective, solvent extraction is exclusively used in separation and purification of uranium and plutonium, zirconium and hafnium, separation of cobalt and nickel, separation and purification of rare earth elements etc., its greatest advantage being its ability to selectively separate out even very similar metals. One obtains high-purity single metal streams on 'stripping' out the metal value from the 'loaded' organic wherein one can precipitate or deposit the metal value. Stripping is the opposite of extraction: Transfer of mass from organic to aqueous phase. \n",
"First developed by Petty, Brumbaugh, Huckins, May, and Wiedmeyer, the SLMD is used to monitor ionic metals in aquatic environments. Due to anthropogenic factors such as mining, metal refining, and industrial activity, global emissions of metals has significantly increased within the last 100 years, and will likely continue to increase during the foreseeable future.\n\nSection::::Components and functions.\n",
"In addition, Birbilis's group worked to define the size of microstructural features that trigger localised corrosion of engineering alloys. Such work involved the careful study of alloys structures on the nano- and atomic scale, whilst studying their metastable pitting (an automated web based tool to study metastable pitting from electrochemical data and published by Birbilis's group is available online.)\n",
"As a non-contact, non-destructive technique SKP has been used to investigate latent fingerprints on materials of interest for forensic studies. When fingerprints are left on a metallic surface they leave behind salts which can cause the localized corrosion of the material of interest. This leads to a change in Volta potential of the sample, which is detectable by SKP. SKP is particularly useful for these analyses because it can detect this change in Volta potential even after heating, or coating by, for example, oils.\n",
"BULLET::::- Coppersensor-1 (CS1) that comprises a thioether rich motif that binds to Cu(I) causing the excitation of a boron-dipyrromethene (BODIPY) dye in the visible region. The probe has good selectivity for Cu(I) over alkaline earth metals, Cu(II), and d-block metals.\n\nSection::::Examples.:Iron.\n\nIron is used a great deal in biological systems, a fact that is well known due to its role in Hemoglobin. For it, there are many small molecule sensors including:\n",
"BULLET::::- Metals: Metals can be in the source water or gases and can migrate into the steam from components in the steam generation and delivery path. Metallic systems corrode and impart metallic ions. Stainless steel, for example, can slough off molecules into the steam path. Limiting or eliminating metals from the water, gas and steam delivery paths reduce the risk of metallic contamination but do not affect the presence of metals in the source water and gas. Metal ions degrade electrical performance in semiconductors and metal ions in solar cells can be recombination centers that reduce the efficiency of the photovoltaic device.\n",
"BULLET::::- Reaction with an alkali metal at temperatures below its melting point, with a catalytic amount (5-10% by mole) of an electron carrier such as naphthalene or biphenyl. This method can be used with lithium as the reducing agent, even at room temperature, and is therefore less hazardous than the previous method; and often results in more reactive powders.\n\nBULLET::::- Reaction with previously prepared lithium naphthalide or lithium biphenylide instead of lithium. This process can be carried out at even lower temperatures, below ambient. Although slower, it was found to produce even smaller particles.\n"
] | [
"Only ultra pure water can dissolve metal. "
] | [
"All water has the ability to dissolve some metal. "
] | [
"false presupposition"
] | [
"Only ultra pure water can dissolve metal. ",
"Only ultra pure water can dissolve metal. "
] | [
"normal",
"false presupposition"
] | [
"All water has the ability to dissolve some metal. ",
"All water has the ability to dissolve some metal. "
] |
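The cementation passage quoted above describes adding scrap iron to a solution of copper ions: the iron dissolves and copper metal is deposited. As an editorial aside (going no further than that description), the exchange is the single-displacement redox reaction

    \mathrm{Fe(s)} + \mathrm{Cu^{2+}(aq)} \longrightarrow \mathrm{Fe^{2+}(aq)} + \mathrm{Cu(s)}

in which iron is oxidized (Fe → Fe²⁺ + 2e⁻) and the copper ion is reduced (Cu²⁺ + 2e⁻ → Cu), consistent with iron being the more easily oxidized of the two metals.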
2018-00096 | Why is conversion between celsius and kelvin simple subtraction but kelvin to fahrenheit or celsius to fahrenheit involves subtraction and division? | 1 unit Kelvin (not degree) is equal to 1 degree Celsius. The Kelvin system is basically derived from Celsius, it just uses a different reference point for its zero than Celsius does. | [
"This practice is permissible because the degree Celsius is a special name for the kelvin for use in expressing relative temperatures, and the magnitude of the degree Celsius is exactly equal to that of the kelvin. Notwithstanding that the official endorsement provided by Resolution 3 of the 13th CGPM states \"a temperature interval may also be expressed in degrees Celsius\", the practice of simultaneously using both °C and K is widespread throughout the scientific world. The use of SI prefixed forms of the degree Celsius (such as \"µ°C\" or \"microdegree Celsius\") to express a temperature interval has not been widely adopted.\n",
"Early in the 20th century, Halsey and Dale suggested that the resistance to the use of centigrade (now Celsius) system in the U.S. included the larger size of each degree Celsius and the lower zero point in the Fahrenheit system.\n",
"In science and in engineering, the Celsius scale and the Kelvin scale are often used in combination in close contexts, e.g. \"a measured value was 0.01023 °C with an uncertainty of 70 µK\". This practice is permissible because the magnitude of the degree Celsius is equal to that of the kelvin. Notwithstanding the official endorsement provided by decision #3 of Resolution 3 of the 13th CGPM, which stated \"a temperature interval may also be expressed in degrees Celsius\", the practice of simultaneously using both °C and K remains widespread throughout the scientific world as the use of SI-prefixed forms of the degree Celsius (such as \"µ°C\" or \"microdegrees Celsius\") to express a temperature interval has not been well-adopted.\n",
"Degrees Fahrenheit are used in the U.S. to measure temperatures in most non-scientific contexts. The Rankine scale of absolute temperature also saw some use in thermodynamics. Scientists worldwide use the kelvin and degree Celsius. Several U.S. technical standards are expressed in Fahrenheit temperatures and American medical practitioners often use degrees Fahrenheit for body temperature.\n\nThe relationship between the different temperature scales is linear but the scales have different zero points, so conversion is not simply multiplication by a factor. Pure water freezes at 32 °F = 0 °C and boils at 212 °F = 100 °C at 1 atm. The conversion formula is:\n",
"For some other quantities, it is easier or has been convention to estimate ratios between attribute \"differences\". Consider temperature, for example. In the familiar everyday instances, temperature is measured using instruments calibrated in either the Fahrenheit or Celsius scales. What are really being measured with such instruments are the magnitudes of temperature differences. For example, Anders Celsius defined the unit of the Celsius scale to be 1/100th of the difference in temperature between the freezing and boiling points of water at sea level. A midday temperature measurement of 20 degrees Celsius is simply the ratio of the Celsius unit to the midday temperature.\n",
"For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change. Thus, to convert from units of Fahrenheit to units of Celsius, one subtracts 32 °F (the offset from the point of reference), divides by 9 °F and multiplies by 5 °C (scales by the ratio of units), and adds 0 °C (the offset from the point of reference). Reversing this yields the formula for obtaining a quantity in units of Celsius from units of Fahrenheit; one could have started with the equivalence between 100 °C and 212 °F, though this would yield the same formula at the end.\n",
"Section::::Usage conventions.:Use in conjunction with degrees Celsius.\n\nIn science and engineering, degrees Celsius and kelvins are often used simultaneously in the same article, where absolute temperatures are given in degrees Celsius, but temperature intervals are given in kelvins. E.g. \"its measured value was with an uncertainty of 60 µK\".\n",
"BULLET::::- \"c\" °Celsius to \"f\" °Fahrenheit :\n\nThis is also an exact conversion making use of the identity −40 °F = −40 °C. Again, is the value in Fahrenheit and the value in Celsius:\n\nBULLET::::- \"f\" °Fahrenheit to \"c\" °Celsius : .\n\nBULLET::::- \"c\" °Celsius to \"f\" °Fahrenheit : .\n\nSection::::History.\n",
"At the 9th CGPM, the Celsius temperature scale was renamed the Celsius scale and the scale itself was fixed by defining the triple point of water as 0.01 °C, though the CGPM left the formal definition of absolute zero until the 10th CGPM when the name \"Kelvin\" was assigned to the absolute temperature scale and triple point of water was defined as being 273.16 °K.\n\nSection::::Working draft of SI: \"Practical system of units\".:Luminosity.\n",
"Maxwell and Boltzmann had produced theories describing the inter-relational of temperature, pressure and volume of a gas on a microscopic scale but otherwise, in 1900, there was no understanding of the microscopic nature of temperature.\n\nBy the end of the nineteenth century, the fundamental macroscopic laws of thermodynamics had been formulated and although techniques existed to measure temperature using empirical techniques, the scientific understanding of the nature of temperature was minimal.\n\nSection::::Convention of the metre.\n",
"In 1967/1968, Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature \"kelvin\", symbol K, replacing \"degree Kelvin\", symbol °K. Furthermore, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also held in Resolution 4 that \"The kelvin, unit of thermodynamic temperature, is equal to the fraction of the thermodynamic temperature of the triple point of water.\"\n",
"The Celsius scale (°C) is used for common temperature measurements in most of the world. It is an empirical scale that was developed by a historical progress, which led to its zero point being defined by the freezing point of water, and additional degrees defined so that was the boiling point of water, both at sea-level atmospheric pressure. Because of the 100-degree interval, it was called a centigrade scale. Since the standardization of the kelvin in the International System of Units, it has subsequently been redefined in terms of the equivalent fixing points on the Kelvin scale, and so that a temperature increment of one degree Celsius is the same as an increment of one kelvin, though they differ by an additive offset of approximately 273.15.\n",
"The degree centigrade as a unit of temperature resulted from the scale devised by Swedish astronomer Anders Celsius in 1742. His scale counter-intuitively designated 100 as the freezing point of water and 0 as the boiling point. Independently, in 1743, the French physicist Jean-Pierre Christin described a scale with 0 as the freezing point of water and 100 the boiling point. The scale became known as the centi-grade, or 100 gradations of temperature, scale.\n",
"BULLET::::- 1967/1968: Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature \"kelvin\", symbol K, replacing \"degree absolute\", symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 that \"The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water\".\n",
"Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio is not a constant value). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C.\n\nSection::::Mathematical examples.:Orientation and frame of reference.\n",
"In the present-day Fahrenheit scale, 0 °F no longer corresponds to the eutectic temperature of ammonium chloride brine as described above. Instead, that eutectic is at approximately 4 °F on the final Fahrenheit scale.\n\nThe Rankine temperature scale was based upon the Fahrenheit temperature scale, with its zero representing absolute zero instead.\n\nSection::::Usage.\n\nThe Fahrenheit scale was the primary temperature standard for climatic, industrial and medical purposes in English-speaking countries until the 1960s. In the late 1960s and 1970s, the Celsius scale replaced Fahrenheit in almost all of those countries—with the notable exception of the United States—typically during their general metrication process.\n",
"In the European Union, it is mandatory to use kelvins or degrees Celsius when quoting temperature for \"economic, public health, public safety and administrative\" purposes, though degrees Fahrenheit may be used alongside degrees Celsius as a supplementary unit. For example, the laundry symbols used in the United Kingdom follow the recommendations of ISO 3758:2005 showing the temperature of the washing machine water in degrees Celsius only. The equivalent label in North America uses one to six dots to denote temperature with an optional temperature in degrees Celsius.\n",
"From 1743, the Celsius scale is based on 0 °C for the freezing point of water and 100 °C for the boiling point of water at 1 atm pressure. Prior to 1743, the scale was also based on the boiling and melting points of water, but the values were reversed (i.e. the boiling point was at 0 degrees and the melting point was at 100 degrees). The 1743 scale reversal was proposed by Jean-Pierre Christin.\n",
"Canada has passed legislation favoring the International System of Units, while also maintaining legal definitions for traditional Canadian imperial units. Canadian weather reports are conveyed using degrees Celsius with occasional reference to Fahrenheit especially for cross-border broadcasts. Fahrenheit is still used on virtually all Canadian ovens, and in reference to swimming pool temperatures and thermostats. Thermometers, both digital and analog, sold in Canada usually employ both the Celsius and Fahrenheit scales.\n",
"In 1744, coincident with the death of Anders Celsius, the Swedish botanist Carl Linnaeus (1707–1778) reversed Celsius's scale. His custom-made \"linnaeus-thermometer\", for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time, whose workshop was located in the basement of the Stockholm observatory. As often happened in this age before modern communications, numerous physicists, scientists, and instrument makers are credited with having independently developed this same scale; among them were Pehr Elvius, the secretary of the Royal Swedish Academy of Sciences (which had an instrument workshop) and with whom Linnaeus had been corresponding; Daniel Ekström, the instrument maker; and Mårten Strömer (1707–1770) who had studied astronomy under Anders Celsius.\n",
"Celsius measurement follows an interval system but not a ratio system; and it follows a relative scale not an absolute scale. For example, 20 °C is not twice the heat energy of 10 °C; and 0 °C is not the lowest Celsius value. Thus, degrees Celsius is a useful interval measurement but does not possess the characteristics of ratio measures like weight or distance.\n\nSection::::Coexistence of Kelvin and Celsius scales.\n",
"The general rule of the International Bureau of Weights and Measures (BIPM) is that the numerical value always precedes the unit, and a space is always used to separate the unit from the number, (not \"\" or \"\"). The only exceptions to this rule are for the unit symbols for degree, minute, and second for plane angle (°, ′, and ″, respectively), for which no space is left between the numerical value and the unit symbol. Other languages, and various publishing houses, may follow different typographical rules.\n\nSection::::Name and symbol typesetting.:Unicode character.\n",
"Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or written as a degree. The kelvin is the primary unit of temperature measurement in the physical sciences, but is often used in conjunction with the degree Celsius, which has the same magnitude. \n\nSection::::History.\n",
"Official policy also varies from common practice for the degree Celsius (°C). NIST states: \"Prefix symbols may be used with the unit symbol °C and prefix names may be used with the unit name 'degree Celsius'. For example, 12 m°C (12 millidegrees Celsius) is acceptable.\" In practice, it is more common for prefixes to be used with the kelvin when it is desirable to denote extremely large or small absolute temperatures or temperature differences. Thus, temperatures of star interiors may be given in units of MK (megakelvins), and molecular cooling may be described in mK (millikelvins).\n",
"Televised and radio weather reports are given in degrees Fahrenheit instead of Celsius for dew point and air temperatures, miles per hour for wind speed, inches of mercury for atmospheric pressure (millibars are used only when reporting tropical phenomena such as hurricanes), and other customary units. In some northern border states, temperatures are described in both Fahrenheit and Celsius for the benefit of the cross-border Canadian audiences.\n"
] | [] | [] | [
"normal"
] | [
"Converting between celsius, farenheit, and kelvin should be the same."
] | [
"false presupposition",
"normal"
] | [
"celsius and kelvin are the same, just with different reference points. Farenheit is not based on celsius. "
] |
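The temperature-conversion entry above turns on the fact that the kelvin and the degree Celsius have the same size but different zero points, while the Fahrenheit degree has a different size as well, so its conversions need a scale factor on top of an offset. As a hedged editorial sketch (not part of the quoted passages; the function names are arbitrary), the formulas can be written as small Python helpers:

    # Celsius <-> kelvin: same unit size, shifted zero (pure offset).
    # Celsius <-> Fahrenheit: different unit size and zero (scale + offset).

    def celsius_to_kelvin(c):
        return c + 273.15

    def celsius_to_fahrenheit(c):
        return c * 9 / 5 + 32

    def fahrenheit_to_celsius(f):
        return (f - 32) * 5 / 9

    def kelvin_to_fahrenheit(k):
        return celsius_to_fahrenheit(k - 273.15)

    # Spot checks: 0 °C = 273.15 K = 32 °F, and −40 °C = −40 °F.
    assert celsius_to_kelvin(0) == 273.15
    assert celsius_to_fahrenheit(0) == 32
    assert fahrenheit_to_celsius(-40) == -40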
2018-20358 | Most lighthouse don't have more than a couple windows. Why? | Shed some light. Heh heh. Lighthouses are not inhabited normally (the lighthouse keeper would have a separate house nearby, back in the days when they were all manned). They are also built in places of extreme weather, more or less by definition. The old huge solid stone lighthouses were that thick because it was necessary to keep them sturdy enough - big windows would compromise the strength of the structure. Later on, when they learned to build with metal, etc., they would use as little material as possible to make the lighthouse cheap and quick to put up. So basically, there are no good reasons to have big windows except in the top lightroom, and a few good reasons to not have them. | [
"Reynaud's approach was different, and did not fit in with the typical British-style architecture of lighthouses in 19th century Brittany. The building was constructed in two blocks: the first block was solid and modeled after the British lighthouses Eddystone and Bell Rock, while the second block was lighter in style, upon which the illumination was to be placed at its final height.\n",
"Where dangerous shoals are located far off a flat sandy beach, the prototypical tall masonry coastal lighthouse is constructed to assist the navigator making a landfall after an ocean crossing. Often these are cylindrical to reduce the effect of wind on a tall structure, such as Cape May Light. Smaller versions of this design are often used as harbor lights to mark the entrance into a harbor, such as New London Harbor Light.\n",
"Fresnel lighthouse lenses are ranked by \"order\", a measure of refracting power, with a first order lens being the largest, most powerful and expensive; and a sixth order lens being the smallest. The order is based on the focal length of the lens. A first order lens has the longest focal length, with the sixth being the shortest. Coastal lighthouses generally use first, second, or third order lenses, while harbor lights and beacons use fourth, fifth, or sixth order lenses.\n",
"Section::::Interior.:Lantern.\n\nThe lantern is the large round glass structure, that houses the lens, located at the top of the lighthouse. This structure is made out of multiple materials, primarily glass, wood, and iron.\n\nAny conservation or restoration processes should keep in mind that the lantern, ventilation shafts, and lens should not be obstructed in anyway. Any replacement glass must be rated for wind standards that are likely to occur at the top of the lighthouse.\n\nSection::::Interior.:Lantern.:Lens.\n",
"Cape Cod style\n\nCape Cod style was a style of lighthouse architecture that originated on Cape Cod in Massachusetts during the early 1800s, and which became predominant to the West Coast, where numerous well-preserved examples still exist. In such lighthouses, the light tower was attached directly to the keeper's dwelling, and centered on the roof; entry was achieved through a stairway in the top floor of the dwelling. \n",
"The idea of creating a thinner, lighter lens by making it with separate sections mounted in a frame is often attributed to Georges-Louis Leclerc, Comte de Buffon. The marquis de Condorcet (1743–1794) proposed grinding such a lens from a single thin piece of glass.\n",
"Some lighthouses, such as those at Cape Race, Newfoundland, and Makapuu Point, Hawaii, used a more powerful hyperradiant Fresnel lens manufactured by the firm of Chance Brothers.\n",
"The ARLHS World List of Lights (WLOL) is published by the Amateur Radio Lighthouse Society.\n\nThe Lighthouse Directory indicates that four of the lighthouses listed below strictly fit its criteria for a lighthouse, but included a fifth, the Gibraltar Aerobeacon, as the publishers felt that it was merited.\n",
"On all previous stations, it has been necessary either to use materials purchased or plans prepared under the supervision of the Spanish Government. As this station was the first to be constructed throughout, construction was speedy and of a higher standard at a much lower cost. At the end of the 1905, the tower was constructed to the balcony, cistern built, foundation of dwelling finished, doors, windows, and louvres made.\n\nWork was completed the following year and the fourth-order light was lit for the first time.\n\nSection::::Current condition.\n",
"Already in 1807 the Danish authorities had established a lighthouse and pilot station on the \"Bülker Huk\" headland. The tower featured six Argand lamps with curved mirrors but only became operational in 1815 due to the Napoleonic Wars. In 1843 this tower was destroyed by lightning. and it was replaced by a tower with a rotating lens.\n",
"While lighthouse buildings differ depending on the location and purpose, they tend to have common components.\n\nA light station comprises the lighthouse tower and all outbuildings, such as the keeper's living quarters, fuel house, boathouse, and fog-signaling building. The Lighthouse itself consists of a tower structure supporting the lantern room where the light operates.\n",
"Where a tall cliff exists, a smaller structure may be placed on top such as at Horton Point Light. Sometimes, such a location can be too high, for example along the west coast of the United States, where frequent low clouds can obscure the light. In these cases, lighthouses are placed below clifftop to ensure that they can still be seen at the surface during periods of fog or low clouds, as at Point Reyes Lighthouse. Another victim of fog was the Old Point Loma lighthouse, which was replaced in 1891 with a lower lighthouse, New Point Loma lighthouse.\n",
"Some lighthouses from the early 1900s were of traditional 8-sided timber construction, such as at \"Point Riche\" near Port au Choix, Newfoundland, Henry Island in Cape Breton (NS), at La Martre, Quebec (site of a museum) on the Gulf of Saint Lawrence, Lonely Island in Lake Huron, or at Pachena Point on Vancouver Island, site of the terrible 1906 shipwreck of the SS Valencia. However, the vast majority of post-1910 lighthouses replicated the octagonal pattern using the new ferro-concrete construction technique. Examples are Peggy's Cove and Western Island (NS), Cap Gaspe and Cap au Saumon (PQ), and Machias Seal Island (NB). This style was carried to impressive height (102 feet) at Cape Sable Island (NS), Long Point in Lake Erie, and Great Duck Island in Lake Huron.\n",
"Immediately beneath the lantern room is usually a Watch Room or Service Room where fuel and other supplies were kept and where the keeper prepared the lanterns for the night and often stood watch. The clockworks (for rotating the lenses) were also located there. On a lighthouse tower, an open platform called the gallery is often located outside the watch room (called the Main Gallery) or Lantern Room (Lantern Gallery). This was mainly used for cleaning the outside of the windows of the Lantern Room.\n",
"In 1869 the Spanish government approved the first plan for Puerto Rico in order to serve the ships that sail through its waters. The lighthouses are located in prominent and isolated areas with good visibility towards the sea. The classification system of the lighthouses of Puerto Rico was based on the characteristics of the lens, and the structure. The lights of the first and second order have a wider light to warn ships of the proximity to land, followed by the minor lights, whose scope was limited to smaller harbors and bays and to connect the primary lights in the system.\n",
"The source of illumination had generally been wood pyres or burning coal but this was expensive, some lighthouses consuming 400 tons of coal a year. Candles or oil lamps backed by concave mirrors were used, often in large banks. The French conducted a series of tests between 1783 and 1788 with varying results. Smeaton's Eddystone lighthouse used 24 candles until 1810.\n",
"Casquets lighthouses\n\nCasquets Lighthouse is an active lighthouse located on the rocky Les Casquets, Alderney, Channel Islands.\n\nSection::::History.\n\nCasquets Lighthouse is the latest in a series of three lights on Les Casquets. The first lighthouses started operation on 30 October 1724, and were three towers lit by coal fires called St Peter, St Thomas and the Dungeon. Three stone towers were built to give the lights a distinctive appearance which would not be confused with lighthouses in nearby France.\n",
"In a lighthouse, the source of light is called the \"lamp\" (whether electric or fuelled by oil) and the concentration of the light is by the \"lens\" or \"optic\". Originally lit by open fires and later candles, the Argand hollow wick lamp and parabolic reflector were introduced in the late 18th century.\n",
"In 1888, the United States Lighthouse Board built the current tower and equipped it with a 'state of the art' fourth order Fresnel lens. This is the third lighthouse on the island. This \"handsome\" lighthouse shares its design and shape with only one other, Port Sanilac Light, on Lake Huron.\n",
"The modern era of lighthouses began at the turn of the 18th century, as lighthouse construction boomed in lockstep with burgeoning levels of transatlantic commerce. Advances in structural engineering and new and efficient lighting equipment allowed for the creation of larger and more powerful lighthouses, including ones exposed to the sea. The function of lighthouses shifted toward the provision of a visible warning against shipping hazards, such as rocks or reefs.\n",
"Having completed twenty-six years service with Trinity House, in 1878 William left to become the engineer-in-chief to the Commissioners of Irish Lights where he took over work in progress, introducing new technology to improved fog systems, the oil burners and built gas burning lamps, rebuilding a number of lighthouses; the largest project being the new Fastnet Rock lighthouse from 1896 which William designed, using the skills he had first learned with Les Hanois Lighthouse. He spent time on the rock supervising the first stones, but his heath deteriorated and in 1900 he resigned with the tower still incomplete.\n\nSection::::Personal life.\n",
"United States Army Corps of Engineers Lieutenant George Meade built numerous lighthouses along the Atlantic and Gulf coasts before gaining wider fame as the winning general at the Battle of Gettysburg. Colonel Orlando M. Poe, engineer to General William Tecumseh Sherman in the Siege of Atlanta, designed and built some of the most exotic lighthouses in the most difficult locations on the U.S. Great Lakes.\n\nFrench merchant navy officer Marius Michel Pasha built almost a hundred lighthouses along the coasts of the Ottoman Empire in a period of twenty years after the Crimean War (1853–1856).\n\nSection::::Lighthouse technology.\n\nSection::::Lighthouse technology.:Power.\n",
"As technology advanced, prefabricated skeletal iron or steel structures tended to be used for lighthouses constructed in the 20th century. These often have a narrow cylindrical core surrounded by an open lattice work bracing, such as Finns Point Range Light.\n",
"There are two types of lighthouses: ones that are located on land, and ones that are offshore. A \"land lighthouse\" is simply a lighthouse constructed to aid navigation over land, rather than water. Historically, they were constructed in areas of flatland where the featureless landscape and prevailing weather conditions (e.g. winter fog) might cause travellers to become easily disorientated and lost. In such a landscape a high tower with a bright lantern could be visible for many miles.\n",
"The lighthouse was constructed in 1968 to a design by local architect Maurice Durand; unlike his other lighthouses, it was not meant to replace a tower which had been destroyed during World War II, but was instead built new as a landfall light. It was constructed in response to criticisms from local sailors, who said that due to development along the shoreline they could no longer see the harbor lights at the entrance to the town harbor.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-01311 | How did they edit people out of really old photos like Stalin did? | URL_0 The top comment explains it really well: > In the old school Stalin era you are asking about, a typical method was to take the original negative and then print the photograph in large format. Then, an artist could use a scalpel to carefully cut out the specific offending individual (or inanimate object, such as a billboard or sign) that they wished to be removed from the original scene. This method was a lot easier than trying to do this same technique on the much smaller original negative. > Then, they could either insert another cutout of a similar scaled and lit person or object to fill the void from the original cutout, splicing that new subject/object into the picture. Alternatively, or even in addition to, they would use airbrush and painting techniques to cover up the person by painting in a new background. > Once the artist was comfortable with the appearance of the coverup in the now-censored full size airbrushed photograph, they would then take another camera and carefully frame and take a new picture of the old doctored photograph - basically, taking a picture of a picture. This was an important step because not only would this act of taking a picture of the doctored photograph help hide any small blemishes or imperfections from the censoring job (allowing them to blur the focus slightly during the reshoot for example, or use different grained film to help with the blending), but this act would also produce a brand new "clean" negative of the original photo that could be used to replace the old "evil" negative in the archives. Credit /u/Falcon109 | [
"Devyatkin produced a series of interviews for MGM with Heroes of the Soviet Union. Those who were interviewed include the liberator of Auschwitz, General Arkady Petrenko and the discoverer of Hitler's corpse, Elena Rzhevskaya.\n",
"scenes of the film were gathered from more than 100 different cameras\n\nover the course of the next thirteen years. Those behind the camera\n\ninclude the Tsar's royal photographer, the Tsar himself, Soviet\n\nphotographers, military staff photographers of Germany, Great Britain,\n\nJapan and the United States, and others.\n\nThe narration is provided by the American radical Max Eastman, who\n\nwas originally slated to write captions to explain the scenes and help\n\nraise money to finance the project. Eastman was chosen by Axelbank because of Eastman's\n\nclose political relations with many leaders of the Bolshevik party,\n",
"BULLET::::- The first chapter of the epic film \"Liberation\" was filmed 20 years after the subsequent three parts. The film's director, Yuri Ozerov, had refused to minimize the errors of the Soviet high command during the first year of the war, and instead waited for a time when he could film this portion accurately.\n\nBULLET::::- Sergei Eisenstein's \"Ivan the Terrible\" Part II was completed in 1945 but was not released until 1958; 5 years after Stalin's death.\n",
"BULLET::::- Rosika Schwimmer Papers, Manuscripts and Archives Division, The New York Public Library, New York, NY\n\nBULLET::::- Schwimmer Family Papers, Manuscripts and Archives Division, The New York Public Library, New York, NY\n\nBULLET::::- Digital images of Rosika Schwimmer from the Schwimmer-Lloyd Collection, Manuscripts and Archives Division, The New York Public Library, New York, NY\n\nBULLET::::- Rosika Schwimmer Papers, Hoover Institution Archives, Stanford, CA\n\nBULLET::::- Schwimmer-Lloyd Collection, Sophia Smith Collection, Smith College, Northampton, MA\n\nBULLET::::- Rosika Schwimmer Papers, Swarthmore College Peace Collection, Swarthmore, PA\n",
"Winston Smith's job, \"revising history\" (and the \"unperson\" motif) are based on the Stalinist habit of airbrushing images of \"fallen\" people from group photographs and removing references to them in books and newspapers. In one well-known example, the Soviet encyclopaedia had an article about Lavrentiy Beria. When he fell in 1953, and was subsequently executed, institutes that had the encyclopaedia were sent an article about the Bering Strait, with instructions to paste it over the article about Beria.\n",
"In August 1936, after months of careful preparations and rehearsals in Soviet secret police prisons, Zinoviev, Kamenev and 14 others, mostly Old Bolsheviks, were put on trial again in the Moscow Trials. Kamenev and all the others were found guilty and were executed by shooting on August 25, 1936.\n\nSection::::Censorship of historical photographs.:Postcard.\n",
"Joseph Stalin made use of photo retouching for propaganda purposes. On May 5, 1920 his predecessor Vladimir Lenin held a speech for Soviet troops that Leon Trotsky attended. Stalin had Trotsky retouched out of a photograph showing Trotsky in attendance. In a well known case of \"damnatio memoriae\" image manipulation, NKVD leader Nikolai Yezhov (the \"Vanishing Commissar\"), after his execution in 1940, was removed from an official press photo where he was pictured with Stalin. (For more information, see Censorship of images in the Soviet Union.)\n",
"Joseph Stalin often air-brushed his enemies out of photographs. Nikolai Yezhov was removed from the original image after falling out of favor with Stalin and being executed.\n\nSection::::Popular image manipulations throughout in history.:Mussolini 1942.\n\nBenito Mussolini had a portrait done of himself where he was on a horse, he had the horse handler removed from the original photograph in order to make himself look more heroic.\n\nSection::::Popular image manipulations throughout in history.:Oprah 1989.\n",
"Photos are often manipulated during wars and for political purposes. One well known example is Joseph Stalin's manipulation of a photograph from May 5, 1920 on which Stalin's predecessor Lenin held a speech for Soviet troops that Leon Trotsky attended. Stalin had later Trotsky retouched out of this photograph. (cf. King, 1997). A recent example is reported by Healy (2008) about North Korean leader Kim Jong Il.\n\nSection::::In specific domains.:Internet sources.\n",
"According to Capa, he took 106 pictures in the first two hours of the invasion. Capa returned with the unprocessed films to London, where a staff member at \"Life\" made a mistake in the darkroom; he set the dryer too high and melted the emulsion in the negatives in three complete rolls and over half of a fourth roll. Only eleven frames in total were recovered. Accounts differed in blaming a fifteen-year-old lab assistant named Dennis Banks, or Larry Burrows, who would later gain fame as a photographer but worked in the lab.\n",
"BULLET::::- Stop Action (photos that are in fact captures taken from film).\n\nSection::::Photographs.\n\nSome of the included photos are identified with larger events, such as H.S. Wong's 1937 photograph of a lone child crying at a demolished train station on \"Bloody Saturday\" as representative of the entire bombing of Shanghai. Other photographs are excerpts from larger historic collections, such as Roger Fenton's and Alexander Gardner's respective groundbreaking documentations of the Crimean War and American Civil War. Margin notes document the circumstantial background of many photographs, as well as instances where the images have been accused of being staged.\n\nSection::::Gallery.\n",
"On the other hand, cinematographer Shigeru Shirai stated in his memoirs that it was not the case that they filmed everything they saw and even some of what they shot was cut out. After Shirai arrived in Nanking on December 14 he saw long lines of Chinese who were being taken to the banks of the Yangtze River to be shot, but he was not allowed to start his movie camera. Shirai was stunned by what he saw and said that he suffered nightmares for many nights after.\n\nSection::::Reviews.\n",
"Pornographic images and videotapes were smuggled into the Soviet Union for illegal distribution. In addition to the anti-pornographic law, such smuggling was prohibited by legal provisions giving the Soviet state the exclusive right to conduct foreign economic trade.\n\nSection::::Censorship of historical photographs.\n\nSection::::Censorship of historical photographs.:The Water Commissar.\n\nThis image taken by the Moscow Canal was taken when Nikolai Yezhov was water commissar. After he fell from power, he was arrested, shot, and had his image removed by the censors.\n",
"Section::::Biography.:Work in film.:Projects after \"The Year 1905\".\n",
"\"In the camps, first at Tanforan and then at Topaz in Utah, I had the opportunity to study the human race from the cradle to the grave, and to see what happens to people when reduced to one status and one condition. Cameras and photographs were not permitted in the camps, so I recorded everything in sketches, drawings and paintings.\"\n\n\"Miné Okubo - preface to the 1983 edition of Citizen 13660\"\n",
"In February 1858, they arrived in Calcutta to document the aftermath of the Indian Rebellion of 1857. During this time they produced possibly the first-ever photographic images of corpses. It is believed that for at least one of the photographs taken at the palace of Sikandar Bagh in Lucknow, the skeletal remains of Indian rebels were disinterred or rearranged to heighten the photograph's dramatic impact.\n",
"A third film, which began production in 1946, was halted when the decision was made not to release the second film. After Eisenstein's death in 1948, all footage from the film was confiscated, and it was rumored to have been destroyed (though some stills and a few brief shots still exist today).\n\nCinematography was divided between Eduard Tisse, who shot the exteriors, and Andrei Moskvin, who filmed all interior scenes. The color sequences of Part Two were also filmed by Moskvin.\n",
"Section::::Censorship of historical photographs.:Lenin's speech.\n\nOn May 5, 1920, Lenin gave a famous speech to a crowd of Soviet troops in Sverdlov Square, Moscow. In the foreground was Leon Trotsky and Lev Kamenev. The photo was later altered and both were removed by censors.\n\nSection::::Censorship of historical photographs.:Lenin's speech.:Leon Trotsky.\n",
"The 16 pages of photographs include those of Ivan Klimenko, head of autopsy commission Faust Shkaravsky, the locations of Hitler's burning and burying site outside the \"Führerbunker\"s emergency exit, SMERSH agents exhuming Hitler and Braun's remains, a diagram of where the corpses of Hitler, Braun, Joseph and Magda Goebbels were burned, Hitler and Braun's corpses in boxes, Hitler's dental remains and a sketch drawn by his dentist Hugo Blaschke's assistant Käthe Heusermann on 11 May 1945 to identify them, Braun's dental bridge, the first and last page of Hitler's autopsy report, the Soviet autopsy commission with both Kreb's and Joseph Goebbels' corpses, the bodies of the Goebbels family, the bodies of Krebs and the Goebbels children at Plötzensee Prison, and Blondi's corpse.\n",
"Next, Klimov began making a film about Grigori Rasputin called \"Agony\". The road to screening took him nine years and many rewrites. Although finished in 1975, the final edit was not released in the USSR until 1985, due to suppressive measures partly because of its orgy scenes and partly because of its relatively nuanced portrait of Emperor Nicholas II. It had been shown in western Europe a few years before. In 1976, Klimov finished a film begun by his teacher Mikhail Romm before the latter's death called \"And Still I Believe...\".\n",
"Section::::Censorship of historical photographs.:October Revolution celebration.\n\nOn November 7, 1919, this image was snapped of the Soviet leadership celebrating the second anniversary of the October Revolution. After Trotsky and his allies fell from power, a number of figures were removed from the image, including Trotsky and two people over to Lenin's left, wearing glasses and giving a salute. Lev Kamenev two men over on Lenin's right was another of Stalin's opponents and below the boy in front of Trotsky, another bearded figure, Artemic Khalatov the one time Commissar of publishing, was also edited out.\n",
"There is also a famous lost film of Tolstoy made a decade before he died. In 1901, the American travel lecturer Burton Holmes visited Yasnaya Polyana with Albert J. Beveridge, the U.S. senator and historian. As the three men conversed, Holmes filmed Tolstoy with his 60-mm movie camera. Afterwards, Beveridge's advisers succeeded in having the film destroyed, fearing that documentary evidence of a meeting with the Russian author might hurt Beveridge's chances of running for the U.S. presidency.\n\nSection::::See also.\n\nBULLET::::- Anarchism and religion\n\nBULLET::::- Christian vegetarianism\n\nBULLET::::- Leo Tolstoy and Theosophy\n\nBULLET::::- List of peace activists\n\nBULLET::::- Tolstoyan movement\n",
"They used a heavy wood and plate camera. After the shooting, the negatives were developed and printed by illuminating off the glass from below. A few times a week, a batch of ice was delivered to the dark room, the required chemical baths to keep the proper temperature. Probably Anna and Augusta reused a lot of the expensive glass negatives. Fortunately, many cityscapes, landscapes, shots of factories, schools and orphanages have survived. Anna tried to continue the photo studio after the death of Augusta, but she sold the shop in the 1950s.\n",
"BULLET::::- In 1897, Redfern captured a scene at the Canal Wharf, Sheffield taken at Tinsley in Sheffield.\n\nBULLET::::- In 1897, a photograph was taken in the Banquet Room of the Masonic Hall, Surrey Street, at Sheffield City Centre in Sheffield.\n",
"In \"The Making of The White Countess\", a bonus feature on the DVD release of the film, production designer Andrew Sanders discusses the difficulties he had recreating 1930s Shanghai in a city where most pre-war remnants are surrounded by modern skyscrapers and neon lights. Many of the sets had to be constructed on soundstages. Also impeding him were restrictions on imports levied by the Chinese government, forcing him to make do with whatever materials he could find within the country. The film was the last for producer Ismail Merchant, who died shortly after principal photography was completed. \n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-21086 | With how much the Earth has changed, would prehistoric creatures as we know them even be able to exist in today's environments, even in the best of conditions? | Some of them may be able to, but others wouldn't. The gigantic insects that existed before the dinosaurs rose to prominence wouldn't be able to support their size due to the significantly lower oxygen content of today's air compared to their time. | [
"Section::::Description.\n\nAlmost all the species of \"Psilophyton\" have been found in rocks of Emsian age (around ). One exception is \"P. krauselii\", from the Czech Republic, which is younger, being from the upper part of the Middle Devonian (around ).\n",
"BULLET::::- Occurrence of the characean genus \"Tolypella\" is reported from the Lower Cretaceous of the Garraf Massif (Catalonia, Spain) by Martín-Closas \"et al.\" (2018), representing the oldest known record of the genus reported so far.\n\nBULLET::::- A study on the spore wall structure and development in \"Psilophyton dawsonii\" is published by Noetinger, Strayer & Tomescu (2018).\n\nBULLET::::- Lycopsid megaspores preserved with fossil starch, probably used to attract and reward animals for megaspore dispersal, are described from the Permian of north China by Liu \"et al.\" (2018).\n",
"Most fossil insect groups found after the Permian–Triassic boundary differ significantly from those before: Of Paleozoic insect groups, only the Glosselytrodea, Miomoptera, and Protorthoptera have been discovered in deposits from after the extinction. The caloneurodeans, monurans, paleodictyopteroids, protelytropterans, and protodonates became extinct by the end of the Permian. In well-documented Late Triassic deposits, fossils overwhelmingly consist of modern fossil insect groups.\n\nSection::::Extinction patterns.:Terrestrial plants.\n\nSection::::Extinction patterns.:Terrestrial plants.:Plant ecosystem response.\n",
"Any one of the above three definitions, but also with a relict distribution in refuges.\n\nSome paleontologists believe that living fossils with large distributions (such as \"Triops cancriformis\") are not real living fossils. In the case of \"Triops cancriformis\" (living from the Triassic until now), the Triassic specimens lost most of their appendages (mostly only carapaces remain), and they have not been thoroughly examined since 1938.\n\nSection::::History.:Other definitions.:Low diversity.\n\nAny of the first three definitions, but the clade also has a low taxonomic diversity (low diversity lineages).\n",
"In the UK, there are just two known populations: in a pool and adjacent area in the Caerlaverock Wetlands in Scotland, and a temporary pond in the New Forest. The species is legally protected under Schedule 5 of the Wildlife and Countryside Act 1981 (as amended).\n\nThis species is considered to be one of the oldest living species on the planet at around 200 million years old. Fossils of this species from the Upper Triassic (Norian) period appear virtually unchanged compared to modern day members of the species.\n\nSection::::Life cycle.\n",
"On the other hand, if the dates are comparatively recent, say less than 35 ka, then humans would be exculpated as the causative agent. If however the estimate falls somewhere close to 46ka then human arrival and the final demise of the Megafauna would appear to be closely associated.\n\nSection::::See also.\n\nBULLET::::- Australian Megafauna\n\nBULLET::::- Lancefield, Victoria\n\nBULLET::::- List of fossil sites \"(with link directory)\"\n\nSection::::References.\n",
"A species discovered at the Silveirinha site in Portugal, \"Archaeonycteris praecursor\", was described in 2009 and estimated to be the oldest of the known taxa. The fossil material uncovered in Dorset, England, and described as \"Archaeonycteris relicta\" is dated to a later period in the Eocene, this is the most recent known species. The only species to found beyond Europe is the Early Eocene fossil species \"Archaeonycteris storchi\", which occurs in India.\n",
"Due to their external skeleton, the fossil history of insects is not entirely dependent on lagerstätte type preservation as for many soft-bodied organisms. However, with their small size and light build, insects have not left a particularly robust fossil record. Other than insects preserved in amber, most finds are terrestrial or near terrestrial sources and only preserved under very special conditions such as at the edge of freshwater lakes. While some 1/3 of known non-insect species are extinct fossils, due to the paucity of their fossil record, only 1/100th of known insects are extinct fossils.\n",
"The geological record of terrestrial plants is sparse and based mostly on pollen and spore studies. Plants are relatively immune to mass extinction, with the impact of all the major mass extinctions \"insignificant\" at a family level. Even the reduction observed in species diversity (of 50%) may be mostly due to taphonomic processes. However, a massive rearrangement of ecosystems does occur, with plant abundances and distributions changing profoundly and all the forests virtually disappearing; the Palaeozoic flora scarcely survived this extinction.\n",
"BULLET::::- Redescription of the meganeurid species \"Meganeurites gracilipes\" is published by Nel \"et al.\" (2018), who interpret this species as unlikely to have lived in densely forested environments, and more likely to be an open-space, ecotone or riparian forest predator, hunting in a way similar to extant hawkers.\n\nBULLET::::- A study on the phylogenetic relationships of an Early Cretaceous plecopteran \"\"Rasnitsyrina\" culonga\" Sinitshenkova (2011) is published by Cui, Toussaint & Béthoux (2018).\n",
"BULLET::::- Kaua'i mole duck (\"Talpanas lippa\", a blind, flightless, terrestrial Hawaiian duck)\n\nBULLET::::- \"Apteribis\" (a giant, flightless ibis)\n\nBULLET::::- Lowland kagu (\"Rhynochetos orarius\")\n\nBULLET::::- Viti Levu giant pigeon (\"Natunaornis gigoura\")\n\nBULLET::::- American Flamingo (\"Phoenicopterus ruber,\" extirpated in Australia)\n\nBULLET::::- \"Xenorhynchopsis minor\" et \"Xenorhynchopsis tibialis\" (Australian flamingo)\n\nBULLET::::- \"Ocyplanus proeses\" (Australian flamingo)\n\nSome extinct megafauna, such as the bunyip-like \"Diprotodon\", may remain in folk memory or be the sources of cryptozoological legends.\n\nSection::::Pleistocene or Ice Age extinction event.:Europe and northern Asia.\n",
"There are many more species found for the period than in this article. Most families were more diverse than they are today, and they were yet more so in the last interglacial. A great extinction, especially of mammals, continued throughout the end of the Pleistocene, and it may be continuing today.\n\nSection::::Evidence.\n",
"BULLET::::- †\"[[Solen townsendensis]]\"\n\nBULLET::::- \"[[Solena]]\"\n\nBULLET::::- †\"[[Solena conradi]]\"\n\nBULLET::::- †\"[[Solena eugenensis]]\"\n\nBULLET::::- †\"[[Solena novacularis]]\"\n\nBULLET::::- [[File:Somatochlora arctica.JPG|thumb|upright|A living \"[[Somatochlora]]\" dragonfly]] †\"[[Somatochlora]]\"\n\nBULLET::::- †\"[[Somatochlora oregonica]]\" – type locality for species\n\nBULLET::::- \"[[Spermophilus]]\"\n\nBULLET::::- †\"[[Spermophilus gidleyi]]\"\n\nBULLET::::- †\"[[Spermophilus mckayensis]]\"\n\nBULLET::::- †\"[[Spermophilus shotwelli]]\"\n\nBULLET::::- †\"[[Spermophilus tephrus]]\"\n\nBULLET::::- †\"[[Spermophilus wilsoni]]\"\n\nBULLET::::- †\"[[Sphaerosperma]]\"\n\nBULLET::::- †\"[[Sphaerosperma riesii]]\"\n\nBULLET::::- †\"[[Sphenophalos]]\"\n\nBULLET::::- †\"[[Sphenosperma]]\"\n\nBULLET::::- †\"[[Sphenosperma baccatum]]\"\n\nBULLET::::- †\"[[Spirocrypta]]\"\n\nBULLET::::- †\"[[Spirocrypta pileum]]\"\n\nBULLET::::- \"[[Spirotropis]]\"\n\nBULLET::::- †\"[[Spirotropis calodius]]\"\n\nBULLET::::- †\"[[Spirotropis kincaidi]]\"\n\nBULLET::::- †\"[[Spirotropis washingtonensis]]\"\n\nBULLET::::- \"[[Spisula]]\"\n\nBULLET::::- †\"[[Spisula densata]]\"\n\nBULLET::::- †\"[[Spisula eugenensis]]\"\n\nBULLET::::- \"[[Spizaetus]]\"\n\nBULLET::::- †\"[[Spizaetus pliogryps]]\"\n\nBULLET::::- [[File:Steneofiber esseri.JPG|thumb|upright|Fossilized partial skulls of the rodent Oligocene-Miocene \"[[Steneofiber]]\"]] †\"[[Steneofiber]]\"\n",
"The birds are known from their remains, which are subfossil (not fossilized, or not completely fossilized). Some are also known from folk memory, as in the case of Haast's eagle in New Zealand. As the remains are not completely fossilized, they may yield organic material for molecular analyses to provide additional clues for resolving their taxonomic affiliations.\n",
"BULLET::::- A study on the seeds preserved in moa coprolites is published by Carpenter \"et al.\" (2018), who question the hypothesis that some of the largest-seeded plants of New Zealand were dispersed by moas.\n\nBULLET::::- A study on the plant–insect interactions in the European forest plant communities in the Upper Pliocene Lagerstätte of Willershausen (Lower Saxony, Germany), the Upper Pliocene locality of Berga (Thuringia, Germany) and the Pleistocene locality of Bernasso (France) is published by Adroit \"et al.\" (2018).\n",
"In the scenario for 100 million years in the future, the world is much hotter than at present. Octopuses and enormous tortoises have come on to the land, much of which is flooded by shallow seas surrounded by brackish swamps. Antarctica has drifted towards the tropics and is covered with dense rainforests, as it was before. Australia has collided with North America and Asia, forcing up an enormous, 12-kilometre-high mountain plateau much taller than the modern Himalayas. Greenland has been reduced to a small, temperate island. There are cold, deep ocean trenches. The Sahara has once again become the rich grassland it was millions of years ago.\n",
"Radiolaria have left a geological record since at least the Ordovician times, and their mineral fossil skeletons can be tracked across the K–Pg boundary. There is no evidence of mass extinction of these organisms, and there is support for high productivity of these species in southern high latitudes as a result of cooling temperatures in the early Paleocene. Approximately 46% of diatom species survived the transition from the Cretaceous to the Upper Paleocene, a significant turnover in species but not a catastrophic extinction.\n",
"BULLET::::- \"Cantisolanum daturoides\" from the Eocene London Clay Formation, previously suggested to be a member of the family Solanaceae, is reinterpreted as more likely to be a commelinid monocot by Särkinen \"et al.\" (2018).\n\nBULLET::::- A study on the lower threshold of extant palm temperature tolerance, as well as on the potential of using the presence of palm fossils to infer past climate, is published by Reichgelt, West & Greenwood (2018).\n",
"BULLET::::- A study on the morphology and phylogenetic relationships of Eocene fruits belonging to the species \"Mastixicarpum crassum\" and \"Eomastixia bilocularis\" is published by Manchester & Collinson (2019).\n\nBULLET::::- Seeds of \"Eurya stigmosa\" are reported from the Early Pleistocene lacustrine and fluvial sediments of Porto da Cruz, Madeira by Góis-Marques \"et al.\" (2019).\n\nBULLET::::- A study on the putative cycad \"\"Zamia\" australis\" from the Miocene Ñirihuau Formation (Argentina) is published by Passalia, Caviglia & Vera (2019), who reinterpret the fossil specimens as flowering plant leaves, and transfer this species to the genus \"Lithraea\".\n",
"BULLET::::- Powerful goshawk and the Gracile goshawk (\"Accipiter efficax et Accipiter quartus\")\n\nBULLET::::- \"Sylviornis\" (giant, flightless New Caledonian galliform- largest in existence)\n\nBULLET::::- Noble megapode (\"Megavitornis altirostris\")\n\nBULLET::::- New Caledonian gallinule (\"Porphyrio kukwiedei\")\n\nBULLET::::- Giant megapodes\n\nBULLET::::- Giant malleefowl (\"Leipoa gallinacea\")\n\nBULLET::::- Pile-builder megapode (\"Megapodius molistructor\")\n\nBULLET::::- Consumed scrubfowl (\"Megapodius alimentum\")\n\nBULLET::::- Viti Levu scrubfowl (\"Megapodius amissus\")\n\nBULLET::::- New Caledonian ground dove (\"Gallicolumba longitarsus\")\n\nBULLET::::- New Caledonian snipe et Viti Levu snipe (\"Coenocorypha miratropica\" et \"Coenocorypha neocaledonica\")\n\nBULLET::::- Niue night heron (\"Nycticorax kalavikai\")\n\nBULLET::::- Marquesas cuckoo-dove (\"Macropygia heana\")\n\nBULLET::::- New Caledonian barn owl (\"Tyto letocarti\")\n\nBULLET::::- Various \"Galliraillus\" sp.\n",
"BULLET::::- †\"Arcanoceras furnishi\" – type locality for species\n\nBULLET::::- †\"Archaeocidaris\"\n\nBULLET::::- \"Archaeolithophyllum\" – tentative report\n\nBULLET::::- †\"Archimastax\"\n\nBULLET::::- †\"Archimastax americanus\" – type locality for species\n\nBULLET::::- †\"Archimedes\"\n\nBULLET::::- †\"Archimylacris\"\n\nBULLET::::- †\"Archimylacris venusta\" – type locality for species\n\nBULLET::::- †\"Arkacrinus\"\n\nBULLET::::- †\"Arkacrinus dubius\"\n\nBULLET::::- †\"Arkanites\"\n\nBULLET::::- †\"Arkanites relictus\"\n\nBULLET::::- †\"Arkoceras\"\n\nBULLET::::- †\"Arkoceras exiguum\"\n\nBULLET::::- †\"Asketomorpha\" – type locality for genus\n\nBULLET::::- †\"Asketomorpha grandis\" – type locality for species\n\nBULLET::::- †\"Asphaltina\"\n\nBULLET::::- †\"Astartella\"\n\nBULLET::::- †\"Astartella concentrica\" – or unidentified comparable form\n\nBULLET::::- †\"Athyris\"\n\nBULLET::::- †\"Atrypa\"\n\nBULLET::::- †\"Atrypina\"\n\nBULLET::::- †\"Atrypina erugata\"\n\nBULLET::::- †\"Aviculopecten\"\n\nBULLET::::- †\"Aviculopecten inspeciosus\"\n\nBULLET::::- †\"Aviculopecten jennyi\"\n\nBULLET::::- †\"Aviculopecten morrowensis\"\n",
"Section::::Fossil groups of plants.\n\nSome plants have remained almost unchanged throughout earth's geological time scale. Horsetails had evolved by the Late Devonian, early ferns had evolved by the Mississippian, conifers by the Pennsylvanian. Some plants of prehistory are the same ones around today and are thus living fossils, such as \"Ginkgo biloba\" and \"Sciadopitys verticillata\". Other plants have changed radically, or became extinct.\n\nExamples of prehistoric plants are:\n\nBULLET::::- \"Araucaria mirabilis\"\n\nBULLET::::- \"Archaeopteris\"\n\nBULLET::::- \"Calamites\"\n\nBULLET::::- \"Dillhoffia\"\n\nBULLET::::- \"Glossopteris\"\n\nBULLET::::- \"Hymenaea protera\"\n\nBULLET::::- \"Nelumbo aureavallis\"\n\nBULLET::::- \"Pachypteris\"\n\nBULLET::::- \"Palaeoraphe\"\n\nBULLET::::- \"Peltandra primaeva\"\n\nBULLET::::- \"Protosalvinia\"\n\nBULLET::::- \"Trochodendron nastae\"\n\nSection::::Notable paleobotanists.\n",
"The oldest known arachnid is the trigonotarbid \"Palaeotarbus jerami\", from about in the Silurian period. \"Attercopus fimbriunguis\", from in the Devonian period, bears the earliest known silk-producing spigots, but its lack of spinnerets means it was not one of the true spiders, which first appear in the Late Carboniferous over . The Jurassic and Cretaceous periods provide a large number of fossil spiders, including representatives of many modern families. Fossils of aquatic scorpions with gills appear in the Silurian and Devonian periods, and the earliest fossil of an air-breathing scorpion with book lungs dates from the Early Carboniferous period.\n",
"BULLET::::- Presence of secretory tissues is reported in extinct flowers from the Cretaceous amber from Myanmar and Cenozoic Dominican amber (including specimens preserved while in the process of emitting compounds) by Poinar & Poinar (2019).\n\nBULLET::::- Presence of endothelium (a specialized seed tissue that develops from the inner epidermis of the inner integument) is reported in several different kinds of flowering plant seeds (including in the lineage leading to extant Chloranthaceae) from the Early Cretaceous of eastern North America and Portugal by Friis, Crane & Pedersen (2019).\n",
"BULLET::::- A study on the plant specimens (ferns, gymnosperms and angiosperms) from the Lower Cretaceous Araripe Basin (Brazil) preserving evidence of plant–insect interactions and potentially of paleoecological relationships between plants and insects is published by Edilson Bezerra dos Santos Filho \"et al.\" (2019).\n\nBULLET::::- Leaves of members of the family Nymphaeaceae preserving evidence of insect herbivory are reported from the Albian Utrillas Formation (Spain) by Estévez-Gallardo \"et al.\" (2019).\n\nBULLET::::- A study on the evolution of plant assemblages in the area of Primorye (Russia) throughout the Paleogene is published by Bondarenko, Blokhina & Utescher (2019).\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-21700 | Why is tipping a waiter/waitress expected for doing their job? | The American Dream aka Wage Theft. It started with customers tipping a bit (maybe 5%). Management/ownership noticed, so they started paying servers a bit less (in effect pocketing the tip). Since the tip was now expected, people who wanted to reward exceptional service had to tip a bit more. This cycle continued until, in some markets, most of a server's wage is now tips. | [
"When it comes to paying the bill in American restaurants, adding a tip is a common custom that is often expected by the waiter. According to a study by CreditCards.com, 4 out of 5 Americans always leave a tip when dining out, and the average tip is 16%-20% of the total bill.\n",
"Some bars in New York City's borough of Manhattan have instituted mandatory tipping. Mandatory tipping is considered an oxymoron, as tipping is by definition a voluntary act on the part of the customer. The BBC has reported that some find the practice bothersome; particularly those who are not aware that the tipping is used to subsidize the sub-standard pay at the workplace. One waiter in London, England has criticized the low wages to the popular press.\n",
"Different countries maintain different customs regarding tipping, but, in the United States, a tip paid in addition to the amount presented on the bill for food and drinks is customary. At most sit-down restaurants, servers and bartenders expect a tip after a patron has paid the check. The minimum legally required hourly wage paid to waiters and waitresses in many U.S. states is lower than the minimum wage employers are required to pay for most other forms of labor in order to account for the tips that form a significant portion of the server's income. If wages and tips do not equal the federal minimum wage of $7.25 per hour during any week, the employer is required to increase cash wages to compensate.\n",
"Section::::Career.\n",
"The issue of tipping is sometimes discussed in connection with the principal–agent theory. \"Examples of principals and agents include bosses and employees ... [and] diners and waiters.\" \"The \"principal–agent problem\", as it is known in economics, crops up any time agents aren't inclined to do what principals want them to do. To sway them [(agents)], principals have to make it worth the agents' while ... [in the restaurant context,] the better the diner's experience, the bigger the waiter's tip.\" \"In the ... language of the economist, the tip serves as a way to reduce what is known as the classic \"principal–agent\" problem.\" According to \"Videbeck, a researcher at the New Zealand Institute for the Study of Competition and Regulation[,] '[i]n theory, tipping can lead to an efficient match between workers' attitudes to service and the jobs they perform. It is a means to make people work hard. Friendly waiters will go that extra mile, earn their tip, and earn a relatively high income...[On the other hand,] if tipless wages are sufficiently low, then grumpy waiters might actually choose to leave the industry and take jobs that would better suit their personalities.'\"\n",
"Section::::Labour laws.\n\nSection::::Labour laws.:Canada.\n\nQuebec and Ontario allow employers to pay lower minimum wages to workers who would reasonably be expected to be receiving tips. In Ontario, the minimum wage is $14.00 per hour, with exceptions for students under 18 years old and employed for not more than 28 hours a week, who are paid $13.15 per hour; and both liquor and restaurant servers, who are paid $12.20 per hour. On April 13, 2010, the \"Toronto Star\" reported since 2009, it has become common for restaurant servers to give part of their tips to the business they work for.\n",
"The use of tipping is a strategy on the part of the owners or managers to align the interests of the service workers with those of the owners or managers; the service workers have an incentive to provide good customer service (thus benefiting the company's business), because this makes it more likely that they will get a good tip.\n",
"Bribery and corruption are sometimes disguised as tipping. In some developing countries, police officers, border guards, and other civil servants openly solicit tips, gifts and dubious fees using a variety of local euphemisms.\n\nSection::::Reasons for tipping.\n\nTipping researcher Michael Lynn identifies five motivations for tipping:\n\nBULLET::::- Showing off\n\nBULLET::::- To supplement the server's income and make them happy\n\nBULLET::::- For improved future service\n\nBULLET::::- Avoid disapproval from the server\n\nBULLET::::- A sense of duty\n\nIn countries such as Australia and Japan where no tipping is provided, the service is found to be as good as in America.\n",
"As a solution to the principal–agent problem, though, tipping is not perfect. In the hopes of getting a larger tip, a server, for example, may be inclined to give a customer an extra large glass of wine or a second scoop of ice cream. While these larger servings make the customer happy and increase the likelihood of the server getting a good tip, they cut into the profit margin of the restaurant. In addition, a server may dote on generous tippers while ignoring other customers, and in rare cases harangue bad tippers.\n\nSection::::Employment contract.:Non-financial compensation.\n",
"Waiting on tables is (along with nursing and teaching) part of the service sector, and among the most common occupations in the United States. The Bureau of Labor Statistics estimates that, as of May 2008, there were over 2.2 million persons employed as servers in the U.S.\n\nMany restaurants choose a specific uniform for their wait staff to wear. Waitstaff may receive tips as a minor or major part of their earnings, with customs varying widely from country to country.\n\nSection::::Terminology.\n",
"Tipping is not required but often expected, particularly in restaurants where roughly 5 to 10% is common. This belongs to the service one got and the restaurant level (low, medium, high prices). In standard restaurants it is OK to round up to the next Euro. By tipping roughly 5% one can't be wrong in bars or restaurants. Taxi bills might be just rounded up to the next Euro. Another common setting where tipping is customary is taxis.\n\nSection::::By region.:Europe.:Croatia.\n",
"There is no tradition of tipping somebody who is just providing a service (e.g. a hotel porter). Casinos in Australia—and some other places—generally prohibit tipping of gaming staff, as it is considered bribery. For example, in the state of Tasmania, the Gaming Control Act 1993 states in section 56 (4): \"It is a condition of every special employee's licence that the special employee must not solicit or accept any gratuity, consideration or other benefit from a patron in a gaming area\". There is concern that tipping might become more common in Australia.\n\nSection::::By region.:Oceania.:New Zealand.\n",
"In Russian language a gratuity is called \"chayeviye\", which literally means \"for the tea\". Tipping small amounts of money in Russia for people such as waiters, cab drivers and hotel bellboys was quite common before the Communist Revolution of 1917. During the Soviet era, and especially with the Stalinist reforms of the 1930s, tipping was discouraged and was considered an offensive capitalist tradition aimed at belittling and lowering the status of the working class. So from then until the early 1990s tipping was seen as rude and offensive. With the fall of the Soviet Union and the dismantling of the Iron Curtain in 1991, and the subsequent influx of foreign tourists and businessmen into the country, tipping started a slow but steady comeback. Since the early 2000s tipping has become somewhat of a norm again. However, still a lot of confusion persists around tipping: Russians do not have a widespread consensus on how much to tip, for what services, where and how. In larger urban areas, like Moscow and St Petersburg, tips of 10% are expected in high-end restaurants, coffee shops, bars and hotels, and are normally left in cash on the table, after the bill is paid by credit card; or as part of cash payment if a credit card is not used. Tipping at a buffet or any other budget restaurant, where there are no servers to take your order at the table (called \"stolovaya\") is not expected and not appropriate. Fast food chains, such as McDonald's, Chaynaya Lozhka, Teremok and so on, do not allow tipping either. Tipping bartenders in a pub is not common, but it is expected in an up-market bar. Metered taxi drivers also count on a tip of 5–10%, but non-metered drivers who pre-negotiate the fare do not expect one. It should also be noted that the older Russians, who grew up and lived most of their lives during the Soviet era, still consider tipping an offensive practice and detest it. In smaller rural towns, tipping is rarely expected and may even cause confusion.\n",
"In the Freakonomics blog entitled \"Should Tipping be Banned?\" Stephen Dubner and Steven Levitt discussed the issue of gratuities. The authors pointed out that research by Michael Lynn found that \"attractive waitresses get better tips than less attractive waitresses. Men’s appearance, not so important\". Lynn's research also found that \"blondes get better tips than brunettes. Slender women get better tips than heavier women. Large breasted women get better tips than smaller breasted women\". A woman server interviewed for the blog stated that she \"lost my job because my manager said that I didn’t fit the look of the company, or the restaurant. So I don’t know if it was because I’m a lot more curvier than the other girls or because my skin is darker. I don’t know\".\n",
"Tipping is not expected or required in Australia. The minimum wage in Australia is reviewed yearly, and as of 2017 it was set at A$17.70 per hour (A$22.125 for casual employees) and this is fairly standard across all types of venues. Tipping at cafés and restaurants (especially for a large party), and tipping of taxi drivers and home food deliverers is again, not required or expected. However many people tend to round up the amount owed while indicating that they are happy to let the worker \"keep the change\".\n",
"Section::::By region.:Europe.:Turkey.\n\nIn Turkey, tipping, or \"bahşiş\" (lit. gift, from the Persian word بخشش, often rendered in English as \"baksheesh\") is usually optional and not customary in many places. Though not necessary, a tip of 5–10% is appreciated in restaurants, and is usually paid by \"leaving the change\". Cab drivers usually do not expect to be tipped, though passengers may round up the fare. A tip of small change may be given to a hotel porter.\n\nSection::::By region.:Europe.:United Kingdom.\n",
"According to the National Restaurant Association, only a handful of restaurants in the United States have adopted a no-tipping model and some restaurants who have adopted this model returned to tipping due to loss of employees to competitors.\n\nSection::::By region.:North America and the Caribbean.:United States.:Service charges.\n",
"Although it has been cited that tipping taxi drivers is typical, it is not common in practice.\n\nSection::::By region.:Europe.:Italy.\n\nTips (\"la mancia\") are not customary in Italy, and are given only for a special service or as thanks for high quality service, but they are very uncommon. Almost all restaurants (with the notable exception of those in Rome) have a service charge (called \"coperto\" and/or \"servizio\"). As restaurants are required to inform you of any fees they charge, they usually list the \"coperto\"/\"servizio\" on the menu.\n\nSection::::By region.:Europe.:Netherlands.\n",
"Tipping (\"dricks\") is commonly not expected, but is practiced to reward high quality service or as a kind gesture. Tipping is most often done by leaving small change on the table or rounding up the bill. This is mostly done at restaurants (less often if payment is made at the desk) and in taxis (some taxis are very expensive as there is no fixed tariff, so they might not be tipped). Less often hairdressers are tipped. Tips are taxed in Sweden, but cash tips are not often declared to the tax authority. Cards are heavily used in Sweden as of the 2010s, and tips paid by cards in restaurants are regularly checked by the tax authority.\n",
"Section::::By region.:Europe.:Norway.\n\nThe service charge is included in the bill. It is uncommon for Norwegians to tip taxi drivers or cleaning staff at hotels. In restaurants and bars it is more common, but not expected. Tips are often given to reward high quality service or as a kind gesture. Tipping is most often done by leaving small change (5–15%) at the table or rounding up the bill.\n",
"Section::::By region.:Europe.:Germany.\n\nTipping (\"Trinkgeld\") is not seen as obligatory. In the case of waiting staff, and in the context of a debate about a minimum wage, some people disapprove of tipping and say that it should not substitute for employers paying a good basic wage. But most people in Germany consider tipping to be good manners as well as a way to express gratitude for good service.\n",
"Judith Martin in her 2005 manners book opines that fast food restaurants will never charge mandatory tipping for their customers, despite the presence of tip jars, and considers tipping for non-table services to be inappropriate.\n",
"\"Oslo Servitørforbund\" and \"Hotell- og Restaurantarbeiderforbundet\" (The Labor Union for Hotel and Restaurant Employees) has said many times that they discourage tipping, except for extraordinary service, because it makes salaries decrease over time, makes it harder to negotiate salaries and does not count towards pensions, unemployment insurance, loans and other benefits.\n\nSection::::By region.:Europe.:Romania.\n",
"Tipping is a practiced social custom in the United States. Tipping by definition is voluntary – at the discretion of the customer. In restaurants offering traditional table service, a gratuity of 15–20% of the amount of a customer’s check (before tax) is customary when good to excellent service is provided. In buffet-style restaurants where the server brings only beverages, 10% is customary. Higher tips may be given for excellent service, and lower tips for mediocre service. In the case of bad or rude service no tip may be given, and the restaurant manager may be notified of the problem. Tips are also generally given for services provided at golf courses, casinos, hotels, spas, salons, and for concierge services, food delivery, and taxis. This etiquette applies to bar service at weddings and any other event where one is a guest as well. The host should provide appropriate tips to workers at the end of an event; the amount may be negotiated in the contract.\n",
"Section::::By region.:Europe.:Iceland.\n\nIn Iceland tipping (\"þjórfé\", lit. \"serving money\") is not customary and never expected. Foreign tourists sometimes still tip without thinking because that is the custom in their home country. Tourist guides in Iceland also sometimes encourage their guests to tip them, but there is no requirement to do so.\n\nSection::::By region.:Europe.:Ireland.\n\nIt is uncommon for Irish people to tip taxi-drivers or cleaning staff at hotel. Tips are often given to reward high quality service or as a kind gesture. Tipping is most often done by leaving small change (5–10%) at the table or rounding up the bill.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-11658 | Why does water taste different when brewed after being frozen and melted? | It's to do with dissolved gases. Freeze-thaw cycles lead to partial degassing, which affects the taste. | [
"BULLET::::- \"Freezing of grapes\" and cold pressing are used to squeeze the liquid part of the berries. The flakes of frozen water remain in the press and only the sweet juice flows. This is the principle of ice wine. Cryoextraction is a recent technique invented to reproduce the phenomenon in the regions which are not cold enough: the grapes are artificially frozen before being pressed. This method overcomes the climate and harvesting work can continue without waiting for the frosts (risk of loss of the grapes by weather accident or attack by hungry sparrows), but shortening the maturation does not give the same flavour.\n",
"Swedes usually consume surströmming after the third Thursday of August, labeled as \"Surströmming day\", through early September. Because of the strong smell, surströmming is often eaten outdoors. The pressurized can is usually opened some distance away from the dining table and is often initially punctured while immersed in a bucket of water, or after tapping and angling it upwards at 45 degrees, to prevent the escaping gas from spraying any brine. \n",
"One generic process of icing beer involves lowering the temperature of a batch of beer until ice crystals form. Since alcohol has a much lower freezing point (-114 °C; -173.2 °F) than water and doesn't form crystals, when the ice is filtered off. This creates a concoction with a higher volume ratio of alcohol to water and therefore creating a beer with a higher alcohol content by volume. The process is known as \"fractional freezing\" or \"freeze distillation\".\n",
"Section::::Applications of freeze drying.:Freeze drying of food.:Coffee.\n\nCoffee contains flavor and aroma qualities that are created due to the Maillard reaction during roasting and can be preserved with freeze-drying. Compared to other drying methods like room temperature drying, hot-air drying, and solar drying, Robusta coffee beans that were freeze-dried contained higher amounts of essential amino acids like leucine, lysine, and phenylalanine. Also, few non-essential amino acids that significantly contributed to taste were preserved.\n\nSection::::Applications of freeze drying.:Freeze drying of food.:Fruits.\n",
"Pure water is supercooled in a chiller to −2°C and released through a nozzle into a storage tank. On release it undergoes a phase transition forming small ice particles within 2.5% ice fraction. In the storage tank it is separated by the difference in density between ice and water. The cold water is supercooled and released again increasing the ice fraction in the storage tank.\n\nHowever a small crystal in the supercooled water or a nucleation cell on the surface may act as a seed for ice crystals and block the generator.\n\nSection::::See also.\n\nBULLET::::- Cold chain\n\nBULLET::::- Fishing industry\n",
"BULLET::::- Maximum Nitrogen Concentration (NH4-N [micro l-1] of 13 on the surface and 120 at the bottom of the lake have been reported.\n\nBULLET::::- The lake water temperature varied from a minimum of in January to in June/July at the surface and correspondingly and , at the bottom of the lake.\n\nSection::::Water quality issues.:Flora.\n\nWithin the lake water, the flora recorded comprise the following.\n",
"Water also differs from most liquids in that it becomes less dense as it freezes. The maximum density of water in its liquid form (at 1 atm) is ; that occurs at . The density of ice is . Thus, water expands 9% in volume as it freezes, which accounts for the fact that ice floats on liquid water.\n\nThe details of the exact chemical nature of liquid water are not well understood; some theories suggest that the unusual behaviour of water is due to the existence of 2 liquid states.\n\nSection::::Chemical and physical properties.:Taste and odor.\n",
"Maple sugaring parties typically began to operate at the start of the spring thaw in regions of woodland with sufficiently large numbers of maples. Syrup makers first bored holes in the trunks, usually more than one hole per large tree; they then inserted wooden spouts into the holes and hung a wooden bucket from the protruding end of each spout to collect the sap. The buckets were commonly made by cutting cylindrical segments from a large tree trunk and then hollowing out each segment's core from one end of the cylinder, creating a seamless, watertight container. Sap filled the buckets, and was then either transferred to larger holding vessels (barrels, large pots, or hollowed-out wooden logs), often mounted on sledges or wagons pulled by draft animals, or carried in buckets or other convenient containers. The sap-collection buckets were returned to the spouts mounted on the trees, and the process was repeated for as long as the flow of sap remained \"sweet\". The specific weather conditions of the thaw period were, and still are, critical in determining the length of the sugaring season. As the weather continues to warm, a maple tree's normal early spring biological process eventually alters the taste of the sap, making it unpalatable, perhaps due to an increase in amino acids.\n",
"Many shaved ices are confused with \"Italian ice\". \"Italian ice\", also known as \"water ice\", has the flavoring incorporated into the ice before it is frozen (although some commercial brands are flavored after production). Shaved ice—especially highly commercial shaved ice (such as that found in food chains or from street vendors)—is often flavored after the ice has been frozen and shaved. Snow cones are an example of shaved ice that is flavored after production.\n\nSection::::History.\n",
"A different version of the song \"Beautiful Songs You Should Know\" appears on the 2008 No-Man album \"Schoolyard Ghosts\" (this album shares a title with another song on \"Warm Winter\", which is itself closely related to the No-Man track \"Mixtaped\").\n\nSection::::Track listing.\n\nAll songs by Tim Bowness/Giancarlo Erra unless otherwise noted.\n\nBULLET::::1. New Memories of Machines - 1:31\n\nBULLET::::2. Before We Fall - 5:12\n\nBULLET::::3. Beautiful Songs You Should Know - 4:59\n\nBULLET::::4. Warm Winter - 5:34\n\nBULLET::::5. Lucky You, Lucky Me - 4:17\n\nBULLET::::6. Change Me Once Again - 5:56\n\nBULLET::::7. Something in Our Lives - 4:11\n",
"The University of Bristol have produced a paper entitled \"Investigation and development of an innovative pigging technique for the water supply industry.\" in which they have detailed the research that they have carried out. It looks particularly at how the properties of the ice pig behave with different ice fractions and varied levels of particulate loading as well as looking into the effects of shear strength, viscosity and heat transfer characteristics.\n\nSection::::Process.\n",
"Every operational ice concentration algorithm is predicated on this\n\nprinciple or a slight variation.\n\nThe NASA team algorithm, for instance, works by taking the\n\ndifference of two channels and dividing by their sum.\n\nThis makes the retrieval slightly nonlinear, but with\n\nthe advantage that the influence of temperature is mitigated.\n\nThis is because brightness temperature varies roughly linearly\n\nwith physical temperature when all other things are equal—see emissivity—and because the sea ice emissivity at different microwave\n\nchannels is strongly correlated.\n\nAs the equation suggests, concentrations of multiple ice\n\ntypes can potentially be detected, with NASA team distinguishing between\n",
"Once antifreeze has been mixed with water and put into use, it periodically needs to be maintained. If engine coolant leaks, boils, or if the cooling system needs to be drained and refilled, the antifreeze's freeze protection will need to be considered. In other cases a vehicle may need to be operated in a colder environment, requiring more antifreeze and less water. Three methods are commonly employed to determine the freeze point of the solution:\n\nBULLET::::1. Specific gravity—(using a hydrometer test strip or some sort of floating indicator),\n",
"Sometimes the freeflow will not stop when the backpressure is increased. This may be caused by very cold water freezing the first or second stage valve open, or a malfunction of either the first or second stages. If the freeflow is caused by freezing it will generally not be corrected except by closing the cylinder valve and allowing the ice to thaw, which requires an alternative air supply to breathe from while the valve is closed. As long as the freeflow continues, the refrigerating effect of the air expanding through the valves will keep the ice frozen, and air will continue to escape until either the cylinder valve is closed, or the cylinder is empty.\n",
"BULLET::::- even if temperatures \"somewhat below\" the freezing point of ethyl alcohol are achieved, there will still be alcohol and water mixed as a liquid, and\n\nBULLET::::- at some still lower temperature, the remaining alcohol-and-water solution will freeze without an alcohol-poor solid being separable.\n\nThe best-known freeze-distilled beverages are applejack and ice beer. Ice wine is the result of a similar process, but in this case, the freezing happens \"before\" the fermentation, and thus it is sugar, not alcohol, that gets concentrated. For an in-depth discussion of the physics and chemistry, see eutectic point.\n\nSection::::Purification of solids.\n",
"Section::::Appearance of warmed-over flavor.\n",
"High-grade industrial water (produced by reverse osmosis or distillation) and potable water are sometimes used in industrial plants requiring high-purity cooling water. Production of these high purity waters creates waste byproduct brines containing the concentrated impurities from the source water.\n",
"A growing number of wineries near Lake Erie, especially in Pennsylvania, New York, and Ashtabula County, Ohio, also produce ice wine.\n\nThe US law for ice wines specifies that grapes must be naturally frozen. The TTB (Tax and Trade Bureau) regulations state that \"Wine made from grapes frozen after harvest may not be labeled with the term 'ice wine' or any variation thereof, and if the wine is labeled to suggest it was made from frozen grapes, the label must be qualified to show that the grapes were frozen postharvest.\"\n\nSection::::Production.\n",
"Freeze distillation is a misnomer, because it is not distillation but rather a process of enriching a solution by partially freezing it and removing frozen material that is poorer in the dissolved material than is the liquid portion left behind. Such enrichment parallels enrichment by true distillation, where the evaporated and re-condensed portion is richer than the liquid portion left behind.\n\nThe detailed situation is the subject of thermodynamics, a subdivision of physics of importance to chemistry. Without resorting to mathematics, the following can be said for a mixture of water and alcohol:\n",
"BULLET::::- Brew temperature 90-95 °C\n\nBULLET::::- Standard 200 ml water\n\nBULLET::::- 4 g of tea\n\nBULLET::::- Brew times: 60-40-60-70-80-(+10) seconds\n\nA cold vessel lowers the steep temperature; to avoid this, always rinse the vessel with +90 °C (+194 °F) water before brewing.\n",
"Many coffee retailers simply use hot-brewed coffee in their iced coffee drinks. Starbucks specifically uses the double-strength method in which the coffee is brewed hot with twice the amount of grounds. With this method, the melted ice does not dilute the strength and flavour of the coffee. Unlike the cold-brew process, this method does not eliminate the acidity inherent in hot-brewed coffee.\n\nCold coffee drinks such as Frappuccinos are premade, presweetened and typically shelf stable. They are usually made using heat-brewed coffee.\n\nSection::::Variations by country.:Vietnam.\n",
"Iced coffee can be made from cold brew coffee, where coffee grounds are soaked for several hours and then strained. Prior to the commercialization of cold brewers, consumers took the matter into their own hands, cold-brewing an iced coffee by soaking ground coffee and chicory with water. The next day, the grounds would be filtered out. The result was a very strong coffee concentrate that was mixed with milk and sweetened. This sweeter, creamier form of iced coffee is the type commonly found in New Orleans, Louisiana, at local coffee chains such as CC's Coffee House.\n",
"BULLET::::- Freezing in this scenario begins at a temperature significantly below 0 °C.\n\nBULLET::::- The first material to freeze is not the water, but a dilute solution of alcohol in water.\n\nBULLET::::- The liquid left behind is richer in alcohol, and as a consequence, further freezing would take place at progressively lower temperatures. The frozen material, while always poorer in alcohol than the (increasingly rich) liquid, becomes progressively richer in alcohol.\n\nBULLET::::- Further stages of removing frozen material and waiting for more freezing will come to naught once the liquid uniformly cools to the temperature of whatever is cooling it.\n",
"BULLET::::- The lake is Monomictic Mixing type and develops thermal stratification in March to November. Maximum depth of the Thermocline is . Hypolimnion temperature ranges from to .\n\nBULLET::::- pH value varied from a maximum of 8.8 on the surface to a minimum of 7.7 at depth in year over the 12 months period\n\nBULLET::::- DO [mg l-1] value varied from a maximum of 10.4 on the surface to a minimum of 2.2 at the bottom in year over the 12 months period\n",
"Purified water is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain critical consistency of taste, clarity, and color. This guarantees the consumer complete reliability of safety and drinking-satisfaction of their favorite refreshment, no matter where in the world it has been bottled. In the process prior to filling and sealing, individual bottles are always rinsed with deionised water to remove any particles that could cause a change in taste.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-02035 | How does a Bluetooth source device control volume? | Think of it as multiplication. If your volume on the source is at 50% and your BT headphones are at 100%, the resulting volume is 50%. If you turn down your headphones to 50% the resulting volume is 25%. | [
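A minimal sketch of the multiplicative model described in the answer above. The 0–100 scales, the function name, and the clamping are illustrative assumptions; in practice many devices instead use AVRCP absolute-volume control, where the source simply tells the sink what level to use.

```python
def effective_volume(source_pct: float, sink_pct: float) -> float:
    """Combine source and sink volume settings multiplicatively.

    Both inputs are percentages (0-100); the result is the perceived
    output level as a percentage of the sink's maximum.
    """
    # Clamp inputs to the valid range before combining them.
    source = max(0.0, min(100.0, source_pct))
    sink = max(0.0, min(100.0, sink_pct))
    return source * sink / 100.0


if __name__ == "__main__":
    print(effective_volume(50, 100))  # 50.0 -> source at 50%, headphones at 100%
    print(effective_volume(50, 50))   # 25.0 -> both turned down to 50%
```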
"BULLET::::1. Searches for PVs in all available block devices.\n\nBULLET::::2. Parses the metadata header in each PV found.\n\nBULLET::::3. Computes the layouts of all visible volume groups.\n\nBULLET::::4. Loops over each logical volume in the volume group to be brought online and:\n\nBULLET::::1. Checks if the logical volume to be brought online has all its PVs visible.\n\nBULLET::::2. Creates a new, empty device mapping.\n\nBULLET::::3. Maps it (with the \"linear\" target) onto the data areas of the PVs the logical volume belongs to.\n\nTo move an online logical volume between PVs on the same Volume Group, use the \"pvmove\" tool:\n",
"Section::::Unauthorized use.:Leaked keys.\n",
"The and keys control the volume for the currently selected device. You can also use or for the same purpose. Both the left and right signals are affected. To control them independently, use to increase and to decrease the volume on the left channel. increases the volume of the right channel and decreases it.\n\nSection::::Keyboard commands.:View mode controls.\n",
"BULLET::::- WBS M166 Summing Amplifier\n\nBULLET::::- WBS M166A Summing Amplifier\n\nBULLET::::- WBS M201 Relay\n\nBULLET::::- WBS M202 Logic Latch\n\nBULLET::::- WBS M207 Phantom Power Card\n\nBULLET::::- WBS M402 Equalizer\n\nBULLET::::- WBS M402A Equalizer\n\nBULLET::::- WBS M402B Equalizer\n\nBULLET::::- WBS M404A High Pass / Low Pass Filter\n\nBULLET::::- WBS M405 Portable Extended Range VU Meter\n\nBULLET::::- WBS M405F Rack Mounted Extended Range VU Meter\n\nBULLET::::- WBS M406 Compressors\n\nBULLET::::- WBS M411 Stereo/Mono Transcription Preamplifier\n\nBULLET::::- WBS M412 Oscillator\n\nBULLET::::- WBS M413 Talk-back and Assign\n\nBULLET::::- WBS M420 Reverb Return\n\nBULLET::::- WBS M433 Peak Program Meter\n\nBULLET::::- WBS M434 Peak Program Meter\n",
"Section::::Basic operating parameters.:Drive.\n",
"Section::::Half-space measurement.\n",
"In equipment which has a microprocessor, FPGA or other functional logic which can store settings and reload them to the \"potentiometer\" every time the equipment is powered up, a multiplying DAC can be used in place of a digipot, and this can offer higher setting resolution, less drift with temperature, and more operational flexibility.\n\nSection::::Membrane potentiometers.\n",
"BULLET::::- Smart volume control for mobile phones: The volume control in mobile phones depend on the background noise levels, noise classes, hearing profile of the user and other parameters. The measurement on noise level and loudness level involve imprecision and subjective measures. The authors have demonstrated the successful use of fuzzy logic system for volume control in mobile handsets.\n",
"BULLET::::- Reference level (typically +4 dBu, valid with tones only);\n\nBULLET::::- Standard output level (10 dB above reference, typical peak levels);\n\nBULLET::::- Clip level (6 dB above standard output level, \"headroom\" to allow for unusual conditions)\n\nSection::::Standard characteristics.\n\nThe behaviour of VU meters is defined in ANSI C16.5-1942, British Standard BS 6840, and IEC 60268-17.\n\nSection::::Standard characteristics.:Rise time.\n\nThe rise time, defined as the time it takes for the needle to reach 99% of the distance to 0 VU when the VU-meter is submitted to a signal that steps from 0 to a level that reads 0 VU, is 300 ms.\n",
"BULLET::::- WBS M475 Monitor\n\nBULLET::::- WBS M475A Stereo Monitor\n\nBULLET::::- WBS M477 Master Summing\n\nBULLET::::- WBS M479 Dual Auxiliary\n\nBULLET::::- WBS M480B Pre-amp\n\nBULLET::::- WBS M480C Input\n\nBULLET::::- WBS M481 Oscillator / Cue Amplifier\n\nBULLET::::- WBS M481B Cue OSC Re-entry\n\nBULLET::::- WBS M482B Monitor\n\nBULLET::::- WBS M484 Summing Amplifier\n\nBULLET::::- WBS M484A Summing Amplifier\n\nBULLET::::- WBS M485B 25V power Supply\n\nBULLET::::- WBS M487 Dual Master\n\nBULLET::::- WBS M487 Dual Master\n\nBULLET::::- WBS M487 Pre-amp\n\nBULLET::::- WBS M490A Microphone Input\n\nBULLET::::- WBS M490F Input\n\nBULLET::::- WBS M490F Stereo Input\n\nBULLET::::- WBS M490G Mic Input\n\nBULLET::::- WBS M490H Line Input\n",
"Some devices, when plugged into charging ports, draw even more power (10 watts at 2.1 amperes) than the Battery Charging Specification allows — The iPad is one such device; it negotiates the current pull with data pin voltages. Barnes & Noble Nook Color devices also require a special charger that runs at 1.9 amperes.\n\nSection::::Power.:PoweredUSB.\n\nPoweredUSB is a proprietary extension that adds four additional pins supplying up to 6 A at 5 V, 12 V, or 24 V. It is commonly used in point of sale systems to power peripherals such as barcode readers, credit card terminals, and printers.\n",
"BULLET::::- WBS M562B Equalizer\n\nBULLET::::- WBS M566 Compressor/Limiter/De-esser\n\nBULLET::::- WBS M567 Noise Gate and Meter\n\nBULLET::::- WBS M600 Universal Audio Amplifier\n\nBULLET::::- WBS M604 Dual Frequency Oscillator\n\nBULLET::::- WBS M606\n\nBULLET::::- WBS M605A Audio DA\n\nBULLET::::- WBS M608 Audio DA\n\nBULLET::::- WBS M609 Remote Sensitivity Audio DA\n\nBULLET::::- WBS M610 10 Watt Monitor Amplifier\n\nBULLET::::- WBS M612 Audio DA\n\nBULLET::::- WBS M612A Audio DA\n\nBULLET::::- WBS M624 Power Supply\n\nBULLET::::- WBS M625A Power Supply\n\nBULLET::::- WBS M625B Power Supply\n\nBULLET::::- WBS M632 Power Supply\n\nBULLET::::- WBS M648 Power Supply\n\nBULLET::::- WBS M666 Power Amplifier\n\nSection::::Products.:Intercom and other components.\n",
"Bluetooth mesh profiles use Bluetooth Low Energy to communicate with other Bluetooth Low Energy devices in the network. Each device can pass the information forward to other Bluetooth Low Energy devices creating a \"mesh\" effect. For example, switching off an entire building of lights from a single smartphone.\n\nBULLET::::- MESH (Mesh Profile) — for base mesh networking.\n\nBULLET::::- MMDL (Mesh models) — for application layer definitions. Term \"model\" is used in mesh specifications instead of \"profile\" to avoid ambiguities.\n\nSection::::Applications.:Health care profiles.\n",
"BULLET::::- \"P\" - Thermal power handling capacity of the driver, in watts. This value is difficult to characterize and is often overestimated, by manufacturers and others. As the voice coil heats, it changes dimension to some extent, and changes electrical resistance to a considerable extent. The latter changes the electrical relationships between the voice coil and passive crossover components, changing the slope and crossover points designed into the speaker system.\n\nBULLET::::- \"V\" - Peak displacement volume, calculated by \"V\" = \"S\"·\"X\"\n\nSection::::Other parameters.\n\nBULLET::::- \"Z\" - The impedance of the driver at \"F\", used when measuring \"Q\" and \"Q\".\n",
"BULLET::::- WBS M435 VU Meter\n\nBULLET::::- WBS M436 VU Meter\n\nBULLET::::- WBS M441 Mic / Line Pre-Amplifier\n\nBULLET::::- WBS M441D Mic / Line Pre-Amplifier\n\nBULLET::::- WBS M441M Mic / Line Pre-Amplifier\n\nBULLET::::- WBS M450A Program Amplifier\n\nBULLET::::- WBS M450B Program - Summing Amplifier\n\nBULLET::::- WBS M451 Slate\n\nBULLET::::- WBS M452 Oscillator\n\nBULLET::::- WBS M451 Talkback\n\nBULLET::::- WBS M452 Oscillator\n\nBULLET::::- WBS M453A Cue Amplifier\n\nBULLET::::- WBS M454 Slate Assign\n\nBULLET::::- WBS M455 Auxiliary Returns\n\nBULLET::::- WBS M457A Pre-Mixer Master\n\nBULLET::::- WBS M460 Input\n\nBULLET::::- WBS M460A Input\n\nBULLET::::- WBS M460B Input\n\nBULLET::::- WBS M460L Input\n\nBULLET::::- WBS M460LA Input\n",
"Applying a ramp voltage to the input of a similar VCR circuit (the load resistor has been changed to 3000 ohms) allows one to determine the exact value of the resistance of the JFET as the input voltage is varied.\n",
"The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently.\n\nSection::::Implementation.:Communication and connection.\n\nA master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad-hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).\n",
"Section::::Basic operating parameters.\n\nVVA designs generally include a number of parameters that may be configured to change the sound and operating characteristics of the amplifier design:\n\nSection::::Basic operating parameters.:Operating point.\n",
"In the circuit on the figure, a non-linearized VCR design, the voltage-controlled resistor, the LSK489C JFET, is used a programmable voltage divider. The VGS supply sets the level of the output resistance of the JFET. The drain-to-source resistance of the JFET (\"R\") and the drain resistor (\"R\") form the voltage-divider network. The output voltage can be determined from the equation\n",
"BULLET::::- In a fully active loudspeaker system each driver has its own dedicated power amplifier. The low-level audio signal is first sent through an active crossover to split the audio signal into the appropriate frequency ranges before being sent to the power amplifiers and then on to the drivers. This design is commonly seen in studio monitors and professional concert audio.\n",
"Section::::Digital potentiometer.\n\nA digital potentiometer (often called digipot) is an electronic component that mimics the functions of analog potentiometers. Through digital input signals, the resistance between two terminals can be adjusted, just as in an analog potentiometer. There are two main functional types: volatile, which lose their set position if power is removed, and are usually designed to initialise at the minimum position, and non-volatile, which retain their set position using a storage mechanism similar to flash memory or EEPROM.\n",
"A logarithmic taper potentiometer is constructed with a resistive element that either \"tapers\" in from one end to the other, or is made from a material whose resistivity varies from one end to the other. This results in a device where output voltage is a logarithmic function of the slider position.\n",
"Section::::Control schemes.:Modulation.\n\nInstead of starting and stopping the compressor, a slide valve as described above modulates capacity to the demand. While this yields a consistent discharge pressure over a wide range of demand, overall power consumption may be higher than with a load/unload scheme, resulting in approximately 70% of full-load power consumption when the compressor is at a zero-load condition.\n",
"Section::::Variations of the voltage clamp technique.:Two-electrode voltage clamp using microelectrodes.:Dual-cell voltage clamp.\n",
"Section::::Types.:Sealed (or closed) enclosures.:Isobaric loading.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal"
] | [] |
2018-11378 | How can washing up liquids/detergents come with the warning 'harmful to aquatic environments with long lasting effects' yet still have such widespread use? | Because things that go down the drain should be subjected to sewage treatment. Among its many processes are ones that remove the nitrogen and phosphorus. By the time they're done, it's okay. If not removed, these nutrients can cause harm by acting as fertilizers, causing terrible plant and algae overgrowth. | [
"However, in the U.S. at least, funding for such warning devices has been shrinking, with approved funding down 45% over the last five years. According to one marine science professor, \"We need it more than ever, and we’ve brought ourselves to the precipice of making great forecasts, but we can’t make it happen.\"\n\nSection::::Potential remedies.:Reducing chemical runoff.\n",
"Studies have revisited the question of whether existing household phosphate bans are effective in reducing phosphorus concentration in waterways, and subsequent algal blooms. A 2014 case study of Vermont phosphate policies around Lake Champlain showed that while the bans reduced the phosphate contribution by treated wastewater from households to five percent of the total contribution, algal blooms have still continued to worsen for other reasons.\n",
"Traditionally, marine biofouling has been prevented through use of biocides: substances that deter or eliminate organisms upon contact. Yet most biocides are also harmful to humans, non-fouling marine organisms, and the general aquatic environment. Emerging regulations by the International Maritime Organization (IMO) have all but ceased application of biocides, causing a rush to research environmentally friendly ultra-low fouling materials.\n\nSection::::Potential applications.:Nautical applications.:Preventing biofouling.:Heavy metal paints.\n",
"BULLET::::- As recently as the 1970s there was a serious issue with the water treatment infrastructure of some states, notably in Southern California with water sourced from the Bay Delta. Water was being disinfected for domestic use through chlorine treatment, which was effective for killing microbial contaminants and bacteria, but in some cases, it reacted with runoff chemicals and organic matter to form trihalomethanes. Research done in the subsequent years began to suggest the carcinogenic and harmful nature of this category of compounds, but the burden fell to the water treatment plant and providers to clean the water, not the federal government. It took many years to get hard regulation of these chemicals passed federally, in fact, the Maximum Contaminant Level for THMs was lowered again in 2001 as more research continues to become available.\n",
"A 2006 study found detectable concentrations of 28 pharmaceutical compounds in sewage treatment plant effluents, surface water, and sediment. The therapeutic classes included antibiotics, analgesics and anti-inflammatories, lipid regulators, beta-blockers, anti-convulsant, and steroid hormones. Although most chemical concentrations were detected at low levels (nano-grams/Liter (ng/L)), there are uncertainties that remain regarding the levels at which toxicity occurs and the risks of bioaccumulation of these pharmaceutical compounds.\n",
"Biofouling, especially of ships, has been a problem for as long as humanity has been sailing the oceans. The earliest written mention of fouling was by Plutarch who recorded this explanation of its impact on ship speed: \"when weeds, ooze, and filth stick upon its sides, the stroke of the ship is more obtuse and weak; and the water, coming upon this clammy matter, doth not so easily part from it; and this is the reason why they usually calk their ships.\"\n",
"Section::::Impact.\n\nGovernments and industry spend more than US$5.7 billion annually to prevent and control marine biofouling.\n\nBiofouling occurs everywhere but is most significant economically to the shipping industries, since fouling on a ship's hull significantly increases drag, reducing the overall hydrodynamic performance of the vessel, and increases the fuel consumption.\n",
"A number of states in the U.S. have tried eliminating phosphates in detergent and by cleaning water treatment plants, which succeeded in reducing the amount that entered Lake Erie by 66%. However, changes in farming practices during that period increased chemical runoff, thereby offsetting the improvements.\n",
"In Canada, there is no law requiring manufactures to state the health and environmental hazards associated with their cleaning products. Many people buy such products to support a clean and healthy home, often unaware of the products ability to harm both their own health and the surrounding environment. \"Canadians spend more than $275 million on household cleaning products in a year\" Chemicals from these cleaners enter our bodies through air passageways and absorption through the skin and when these cleaning products are washed down the drain they negatively affect aquatic ecosystems. There are also no regulations in place stating that the ingredients be listed on labels of cleaning products leading the users to be ultimately unaware of the chemicals they expose themselves and their surrounding environments to.\n",
"Chemical waste in our oceans is becoming a major issue for marine life. There have been many studies conducted to try and prove the effects of chemicals in our oceans. In Canada, many of the studies concentrated on the Atlantic provinces, where fishing and aquaculture are an important part of the economy. \n",
"The economic damage resulting from lost business has become a serious concern. According to one report in 2016, the four main economic impacts from harmful algal blooms come from damage to human health, fisheries, tourism and recreation, and the cost of monitoring and management of area where blooms appear. EPA estimates that algal blooms impact 65 percent of the country's major estuaries, with an annual cost of $2.2 billion. In the U.S. there are an estimated 166 coastal dead zones. Because data collection has been more difficult and limited from sources outside the U.S., most of the estimates as of 2016 have been primarily for the U.S.\n",
"Polychlorinated biphenyls (PCBs) are organic pollutants that are still present in our environment today, despite being banned in many countries, including the United States and Canada. Due to the persistent nature of PCBs in aquatic ecosystems, many aquatic species contain high levels of this chemical. For example, wild salmon (\"Salmo salar\") in the Baltic Sea have been shown to have significantly higher PCB levels than farmed salmon as the wild fish live in a heavily contaminated environment.\n",
"Because of the high solubility of most PPCPs, aquatic organisms are especially vulnerable to their effects. Researchers have found that a class of antidepressants may be found in frogs and can significantly slow their development. The increased presence of estrogen and other synthetic hormones in waste water due to birth control and hormonal therapies has been linked to increased feminization of exposed fish and other aquatic organisms. The chemicals within these PPCP products could either affect the feminization or masculinization of different fishes, therefore affecting their reproductive rates.\n",
"Section::::Principal sources.:Marinas and boating activities.\n\nChemicals used for boat maintenance, like paint, solvents, and oils find their way into water through runoff. Additionally, spilling fuels or leaking fuels directly into the water from boats contribute to nonpoint source pollution. Nutrient and bacteria levels are increased by poorly maintained sanitary waste receptacles on the boat and pump-out stations.\n\nSection::::Control.\n\nSection::::Control.:Regulation of Nonpoint Source Pollution.\n",
"Banning discussion in the United States started because of pollution of the Great Lakes. Seventeen US states have partial or full bans on the use of phosphates in dish detergent, and two US states (Maryland and New York) ban phosphates in commercial dishwashing. In 1983 there was a corruption scandal in which industry sought to influence government regulators regarding the ban.\n\nSome dishwashing detergents may contain phosphorus, an ingredient which at least two states in the United States have limited use in dishwashing detergent.\n\nSection::::Environmental impact.\n",
"Section::::Harmful effects.:Economic impact.:Fisheries industry.\n\nAs early as 1976 a short-term, relatively small, dead zone off the coasts of New York and New Jersey cost commercial and recreational fisheries over $500 million. In 1998 a red tide in Hong Kong killed over $10 million in high-value fish.\n\nIn 2009, the economic impact for the state of Washington's coastal counties dependent on its fishing industry was estimated to be $22 million. In 2016, the U.S. seafood industry expected future lost revenue could amount to $900 million annually.\n",
"Starting in the mid-1960s, ecologists and toxicologists began to express concern about the potential adverse effects of pharmaceuticals in the water supply, but it wasn’t until a decade later that the presence of pharmaceuticals in water was well documented. Studies in 1975 and 1977 found clofibric acids and salicylic acids at trace concentrations in treated water. Widespread concern about and research into the effect of PPCPs largely started in the early 1990s. Until this time, PPCPs were largely ignored because of their relative solubility and containment in waterways compared to more familiar pollutants like agrochemicals, industrial chemicals, and industrial waste and byproducts.\n",
"Since then, a great deal of attention has been directed to the ecological and physiological risk associated with pharmaceutical compounds and their metabolites in water and the environment. In the last decade, most research in this area has focused on steroid hormones and antibiotics. There is concern that steroid hormones may act as endocrine disruptors. Some research suggests that concentrations of ethinylestradiol, an estrogen used in oral contraceptive medications and one of the most commonly prescribed pharmaceuticals, can cause endocrine disruption in aquatic and amphibian wildlife in concentrations as low as 1 ng/L.\n",
"The only sensors now in use are located in the Gulf of Mexico. In 2008 similar sensors in the Gulf forewarned of an increased level of toxins which led to a shutdown of shellfish harvesting in Texas along with a recall of mussels, clams and oysters, possibly saving many lives. With an increase in the size and frequency of HABs, experts state the need for significantly more sensors located around the country. The same kinds of sensors can also be used to detect threats to drinking water from intentional contamination.\n",
"With the more completed legislative files and contexts coming towards the laundry industry. The environmentally unfriendly synthetic surfactants and phosphate salts are no longer allowed to use without any usage limit. Consequently, synthetic surfactants are then used with lower concentration in combination with enzymes. Currently, laundry industry manufacturers have recognized the importance of producing environmentally friendly detergents, and to fulfill the achievement, laundry enzymes have been added to reformulate the detergent and replace the previous chemical surfactants and phosphate. Laundry enzymes are biological active factors such as bacteria, yeast or even mushrooms that are biologically sourced, and hence there will be less chemical pollution from the enzymes and they decompose some toxicants\n",
"In response to this, the water industry has stated that there is no evidence of a risk to overall health as a result of exposure to these chemicals. However, the Food and Drug Administration (FDA) states in its review of water pollution that many contaminants survive wastewater treatment and biodegradation, and are detectable in the environment. Therefore, the tainted source is recycled through a community, exposing more people and releasing more chemicals along the way.\n\nSection::::Circulation of pollutants.:Water cycle.\n",
"NOAA has provided a few cost estimates for various blooms over the past few years: $10.3 million in 2011 due to the red tide at Texas oyster landings; $2.4 million lost income by tribal commerce from 2015 fishery closures in the pacific northwest; $40 million from Washington state's loss of tourism from the same fishery closure.\n\nAlong with damage to businesses, the toll from human sickness results in lost wages and damaged health. The costs of medical treatment, investigation by health agencies through water sampling and testing, and the posting of warning signs at effected locations is also costly.\n",
"Trisodium nitrilotriacetate is found in bathroom cleaners and possibly some laundry detergents although more actively used in industrial formulations. Small amounts add up in the environment and add to an overall toxic issue. In aquatic ecosystems these chemicals cause heavy metals in sediment to redisolve and many of these metals are toxic to fish and other wildlife.\n\nSection::::Plasticizers.\n",
"The impact of cyanobacterial blooms has been assessed in economic terms. In December 1991, the world’s largest algal bloom occurred in Australia, where 1000 km of the Darling-Barwon River was affected. One million people-days of drinking water were lost, and the direct costs incurred totalled more than A$1.3 million. Moreover, 2000 site-days of recreation were also lost, and the economic cost was estimated at A$10 million, after taking into account indirectly affected industries such as tourism, accommodation and transport.\n\nSection::::Related toxic blooms and their impact.:Current methods of analysis in water samples.\n",
"Several common detergent ingredients are surfactants, builders, bleach-active compounds and auxiliary agents. The surfactants can be classified into anionic, cationic and nonionic surfactants. The most widely used surfactant linear alkylbenzene sulfonate (LAS) is an anionic surfactant. In builders, sodium triphosphate, zeolite A, sodium nitrilotriacetate (NTA) are the most important substances. Bleach-active compounds are usually sodium perborate and sodium percarbonate. Enzymes and fluorescent whitening agents are added into detergents as auxiliary agents.\n\nSection::::Mechanism.\n\nSection::::Mechanism.:Environmental harm of surfactants.\n"
] | [
"If liquid detergents contain warnings that they could be harmful, they should not still have such widespread use."
] | [
"Procedures are done to said issues and by the time the procedures are complete, the issues aren't as big anymore."
] | [
"false presupposition"
] | [
"If liquid detergents contain warnings that they could be harmful, they should not still have such widespread use.",
"If liquid detergents contain warnings that they could be harmful, they should not still have such widespread use."
] | [
"normal",
"false presupposition"
] | [
"Procedures are done to said issues and by the time the procedures are complete, the issues aren't as big anymore.",
"Procedures are done to said issues and by the time the procedures are complete, the issues aren't as big anymore."
] |
2018-14327 | What causes the shiny rainbow hue I see on certain sliced meats (e.g. roast beef)? Is it an indicator of poor quality? | It is due to the direction of the cut going against the muscle fibers, resulting in diffraction of light into the rainbow you see. Quality isn't really part of the equation; it can happen with any quality of meat so long as the fibers are tightly packed and aligned ("restructured" or chopped meat bonded together won't do this). | [
"Section::::Methods of preparation.\n\nFresh meat can be cooked for immediate consumption, or be processed, that is, treated for longer-term preservation and later consumption, possibly after further preparation. Fresh meat cuts or processed cuts may produce iridescence, commonly thought to be due to spoilage but actually caused by structural coloration and diffraction of the light. A common additive to processed meats, both for preservation and because it prevents discoloring, is sodium nitrite, which, however, is also a source of health concerns, because it may form carcinogenic nitrosamines when heated.\n",
"Surface gratings, consisting on ordered surface features due exposure of ordered muscle cells on cuts of meat. The structural coloration on meat cuts appears only after the ordered pattern of muscle fibrils is exposed and light is diffracted by the proteins in the fibrils. The coloration or wavelength of the diffracted light depends on the angle of observation and can be enhanced by covering the meat with translucent foils. Roughening the surface or removing water content by drying causes the structure to collapse, thus, the structural coloration to disappear.\n\nSection::::Mechanisms.:Variable structures.\n",
"Myoglobin contains hemes, pigments responsible for the colour of red meat. The colour that meat takes is partly determined by the degree of oxidation of the myoglobin. In fresh meat the iron atom is in the ferrous (+2) oxidation state bound to an oxygen molecule (O). Meat cooked well done is brown because the iron atom is now in the ferric (+3) oxidation state, having lost an electron. If meat has been exposed to nitrites, it will remain pink because the iron atom is bound to NO, nitric oxide (true of, e.g., corned beef or cured hams). Grilled meats can also take on a pink \"smoke ring\" that comes from the iron binding to a molecule of carbon monoxide. Raw meat packed in a carbon monoxide atmosphere also shows this same pink \"smoke ring\" due to the same principles. Notably, the surface of this raw meat also displays the pink color, which is usually associated in consumers' minds with fresh meat. This artificially induced pink color can persist, reportedly up to one year. Hormel and Cargill are both reported to use this meat-packing process, and meat treated this way has been in the consumer market since 2003.\n",
"BULLET::::- Rare ()— ( core temperature) The outside is grey-brown, and the middle of the steak is fully red and slightly warm.\n\nBULLET::::- Medium rare ()— ( core temperature) The steak will have a reddish-pink center. This is the standard degree of cooking at most steakhouses, unless specified otherwise.\n\nBULLET::::- Medium () – ( core temperature) The middle of the steak is hot and fully pink surrounding the center. The outside is grey-brown.\n\nBULLET::::- Medium well done ()— ( core temperature) The meat is lightly pink surrounding the center.\n",
"Darkcutter\n\nA darkcutter or dark cutter is a carcass of beef that has been subjected to undue stress before slaughter, and is dark in color. Sometimes referred to as dark cutting beef, they have a dark color which makes the meat appear less fresh, making them undesirable to consumers. Darkcutters fetch a lower price than otherwise ordinary beef on the market. \n",
"Red meats such as beef, lamb, and venison, and certain game birds are often roasted to be \"pink\" or \"rare\", meaning that the center of the roast is still red. Roasting is a preferred method of cooking for most poultry, and certain cuts of beef, pork, or lamb. Although there is a growing fashion in some restaurants to serve \"rose pork\", temperature monitoring of the center of the roast is the only sure way to avoid foodborne disease.\n",
"These types of shakers are used for very clean cuts. Generally, a final material cut will not contain any oversize or any fines contamination.\n\nThese shakers are designed for the highest attainable quality at the cost of a reduced feed rate.\n\nSection::::Types of mechanical screening.:Trommel screens.\n",
"Section::::Occurrence.\n\nThe pinkish color in meat is typically due to the presence of a compound called myoglobin. Myoglobin typically darkens and turns brown when heated above a certain temperature. This is why the perimeter of a cooked steak is darker in color than the red inside; as the lower temperature of the middle of the steak was not sufficient to cause the myoglobin to lose its pigment. \n",
"Most recipes include nitrates or nitrites, which convert the natural myoglobin in steak to nitrosomyoglobin, giving it a pink color. Nitrates and nitrites reduce the risk of dangerous botulism during curing by inhibiting the growth of \"Clostridium botulinum\" bacteria spores, but have been shown to be linked to increased cancer risk. Beef cured without nitrates or nitrites has a gray color, and is sometimes called \"New England corned beef\".\n",
"In Mexico, thinly sliced meat, breaded and fried, known as \"milanesa\", is a popular ingredient in \"tortas\", the sandwiches sold in street stands and indoor restaurants in Mexico City.\n",
"Pale, Soft, Exudative meat, or PSE meat, describes a carcass quality condition known to occur in pork, beef, and poultry. It is characterized by an abnormal color, consistency, and water holding capacity, making the meat dry and unattractive to consumers. The condition is believed to be caused by abnormal muscle metabolism following slaughter, due to an altered rate of glycolysis and a low pH within the muscle fibers. A mutation point in the ryanodine receptor gene (\"RYR1\") in pork, associated to stress levels prior to slaughter are known to increase the incidence of PSE meat. Although the term \"soft\" may look positive, it refers to raw meat. When cooked, there is higher cook loss and the final product is hard, not juicy.\n",
"The color and flavor of the flesh depends on the diet and freshness of the trout. Farmed trout and some populations of wild trout, especially anadromous steelhead, have reddish or orange flesh as a result of high astaxanthin levels in their diets. Astaxanthin is a powerful antioxidant that may be from a natural source or a synthetic trout feed. Rainbow trout raised to have pinker flesh from a diet high in astaxanthin are sometimes sold in the U.S. with labeling calling them \"steelhead\". As wild steelhead are in decline in some parts of their range, farmed rainbow are viewed as a preferred alternative. In Chile and Norway, rainbow trout farmed in saltwater sea cages are sold labeled as steelhead.\n",
"BULLET::::- Well done () – ( and above core temperature) The meat is grey-brown in the center and slightly charred. In parts of England this is known as \"German style\".\n\nBULLET::::- Overcooked () – (much more than core temperature) The meat is blackened throughout and slightly crispy.\n",
"Cabidela\n\nCabidela () or arroz de cabidela (\"cabidela\" rice) is a Portuguese dish made with poultry or rabbit cooked in its own blood added to water and a bit of vinegar much like \"jugged\" or \"civet\" dishes. The blood is captured when the animal is slaughtered. The rice is cooked together with the meat or separately. The blood imparts a brown color to the dish.\n\nSection::::Variants.\n",
"BULLET::::- narrow spiral articulated bands around the middle of the whorl, the base mottled or barred, and the suture bordered by a row of dark blotches\n\nBULLET::::- ground-color pink or purplish, the entire surface variegated by rather narrow spiral bands, finely articulated with red or purple and white\n",
"The USDA NAMP / IMPS codes related to this subprimal cut are 181A and 184. 181A is obtained from 181 after removing the bottom sirloin and the butt tender (the part of the tenderloin which is in the sirloin). 184 is obtained from 182 after removing the bottom sirloin. The foodservice cuts from 184 are 184A through 184F, its portion cut is 1184 and, the \"subportion\" cuts from 1184 are 1184A through 1184F. 181A is not further divided into foodservice cuts. In Australia, this cut is called D-rump in the Handbook of Australian Meat and assigned code 2100.\n\nSection::::Etymology.\n",
"The USDA's grading system, which has been designed to reward marbling, has eight different grades; Prime, Choice, Select, Standard, Commercial, Utility, Cutter and Canner. Prime has the highest marbling content when compared to other grades, and is capable of fetching a premium at restaurants and supermarkets. Choice is the grade most commonly sold in retail outlets, and Select is sold as a cheaper, but still nutritious, option in many stores. Prime, Choice, Select and Standard are commonly used in the younger cattle (under 42 months of age), and Commercial, Utility, Canner and Cutter are used in older cattle carcasses which are not marketed as wholesale beef \"block\" meat, but as material used in ground products and cheaper steaks for family restaurants.\n",
"In the European Union, the use of carmine in foods is regulated under the European Commission's directives governing food additives in general and food dyes in particular and listed under the names \"Cochineal\", \"Carminic acid\", \"Carmines\" and \"Natural Red 4\" as additive E 120 in the list of EU-approved food additives. The directive governing food dyes approves the use of carmine for certain groups of foods only and specifies a maximum amount which is permitted or restricts it to the quantum satis.\n",
"Section::::United States grading system.\n",
"Some fish for sashimi are treated with carbon monoxide to keep the flesh red for a longer time in storage. This practice can make spoiled fish appear fresh.\n",
"With a hue code of 348, this color is within the range of carmine colors.\n\nThis color is supposed to be fluorescent, but there is no mechanism for displaying fluorescence on a computer screen. \n\nSection::::Variations of carmine.:Paradise pink.\n\nDisplayed at right is the color paradise pink.\n\nThe source of this color is the \"Pantone Textile Paper eXtended (TPX)\" color list, color #17-1755 TPX—Paradise Pink.\n\nSince it has a hue code of 347, the color \"paradise pink\" is within the range of carmine colors.\n\nSection::::Variations of carmine.:Rich carmine.\n",
"In 2012 sportfishermen in Cook Inlet reported increased instances of a condition known as \"mushy halibut syndrome\". The meat of affected fish has a \"jelly-like\" consistency. When cooked it does not flake in the normal manner of halibut but rather falls apart. The meat is still perfectly safe to eat but the appearance and consistency are considered unappetizing. The exact cause of the condition is unknown but may be related to a change in diet.\n\nHalibiut is a excellent source of protein and Omega–3 but there is a chance it may contain mercury which is harmful.\n",
"In Colombia, the dish is called \"milanesa\" or \"chuleta valluna\", and is made with a thin cut of pork, breaded and fried.\n\nIn Chile, breaded cutlet is known as \"escalopa\", and it is usually made of beef, pork or chicken. This dish is also known as \"milanesas\", and it is prepared by breading and frying thin pieces of meat. \"Escalopas\" can be found from fancy to simple restaurants.\n\nSection::::Milanesa.:Central America.\n",
"Slaughter weight (for meat production) is generally achieved beyond 12 months of age. \n\nMeat from the Mangalica can be easily found in Hungary, as Hungarian farmers produce about 60,000 animals each year.\n\nSection::::Varieties.\n",
"The United States Department of Agriculture requires that when fats and oils are added to red meat products such as roast beef and steaks, the product indicate this prominently, for example as part of the product name or as a product name qualifier. Additionally, products that appear to be of a higher quality as a result of fat injections must include a statement to indicate this, such as \"injected with beef fat\", \"artificially marbled—simulated fat covered\", or \"product may appear to be of a higher quality than the actual grade“.\n"
] | [
"A shiny appearance on sliced meats is an indication of poor quality."
] | [
"Quality if not a determining factor when it comes to the shiny appearance of sliced meats, it is due to the direction in which way the meat was cut."
] | [
"false presupposition"
] | [
"A shiny appearance on sliced meats is an indication of poor quality."
] | [
"false presupposition"
] | [
"Quality if not a determining factor when it comes to the shiny appearance of sliced meats, it is due to the direction in which way the meat was cut."
] |
2018-14435 | The air that's inside the bubbles of bubble-wrap made overseas: is it the exact air that was trapped during its manufacture... literally air from China, or wherever? Or, does it somehow get replaced with "other" air when in transit? | It’s whatever air is near the assembly line when the bubble wrap is being made. Bubble wrap is formed from three resins which are melted into a film; then the still half-melted film is sucked against a bunch of tiny holes that lead to a vacuum. The bubble is then formed, the air is trapped inside, and the half-melted film seals itself back up due to its liquid properties. Edit: I think I miscommunicated at the end. I believe it’s laid over another sheet and then sealed to that under-sheet due to its semi-liquid properties. | [
"Section::::Company operations.:Brands.:Bubble Wrap.\n\nInitially created as a failed wallpaper, Bubble was subsequently used as a greenhouse insulator. Finally, it took on its best-known use as a packaging material. In its earliest form, Bubble Wrap suffered from leaky bubbles, but by the mid 1960s a special coating was developed to prevent the bubbles from losing air. In 1969, Sealed Air reported $4 million in sales, mostly attributed to Bubble Wrap, as it was still a proprietary product at that time.\n\nSection::::Company operations.:Brands.:Cryovac.\n",
"Better known by the brand name of Bubble Wrap, air-bubble packing is a pliable transparent plastic material commonly used for the cushioning of fragile, breakable items in order to absorb or minimize shock and vibration. Regularly spaced, the protruding air-filled hemispheres are known as \"bubbles\" which are 1/4 inch (6 millimeters) in diameter, to as large as an inch (26 millimeters) or more. Air-bubble packing was co-invented by Alfred Fielding and Marc Chavannes in 1957.\n\n1957 Borazon\n",
"The bubbles can be as small as 6 millimeters (1/4 inch) in diameter, to as large as 26 millimeters (1 inch) or more, to provide added levels of shock absorption during transit. The most common bubble size is 1 centimeter. In addition to the degree of protection available from the size of the air bubbles in the plastic, the plastic material itself can offer some forms of protection for the object in question. For example, when shipping sensitive electronic parts and components, a type of bubble wrap is used that employs an anti-static plastic that dissipates static charge, thereby protecting the sensitive electronic chips from static which can damage them. One of the first widespread uses of bubble wrap was in 1960, shipping the new IBM 1401 computers to buyers. Most customers had never seen this packing material before.\n",
"Bubble Wrap (brand)\n\nBubble Wrap (originally Air Cap) is a trademarked brand of Sealed Air Corporation that includes numerous cushioning products made from bubble wrap. The brand is produced by the Product Care division of Sealed Air. Both the Bubble Wrap brand and product were introduced in 1960, with the launch of Sealed Air. Although the brand was originally used for the packaging of IBM computers, Sealed Air now does most of its Bubble Wrap business in the food packaging industry.\n\nSection::::History.\n",
"Environmental agencies in the USA often use the terms \"dscf\" or \"scfd\" to denote a \"standard\" cubic foot of dry gas. Likewise, they often use the terms \"dscm\" or \"scmd\" to denote a \"standard\" cubic meter of gas. Since there is no universally accepted set of \"standard\" temperature and pressure, such usage can be and is very confusing. It is strongly recommended that the reference temperature and pressure always be clearly specified when stating gas volumes or gas flow rates.\n\nSection::::Correcting concentrations for reference conditions.:Correcting to a dry basis.\n",
"Section::::Uses.\n\nThe \"Annual Bubble Wrap Competition For Young Inventors\" was hosted by Sealed Air from 2006 to 2008, in which children were encouraged to design products made out of bubble wrap that had uses outside of the packaging industry. Inventions included a \"Bubble Wrap Car Door Cover\", a \"Bubble Wrap Cushy Wheelchair\", and \"Transformable Bubble Wrap Kite\".\n\nPopping Bubble Wrap is sometimes used as stress-relief, and Sealed Air's corporate offices have \"stress relief boxes\" that are filled with Bubble Wrap for the employees to pop.\n",
"Triangular diagrams are not commonplace. The easiest way to understand them is to briefly go through three basic steps in their construction. \n\nBULLET::::1. Consider the first triangular diagram below, which shows all possible mixtures of methane, oxygen and nitrogen. Air is a mixture of about 21 volume percent oxygen, and 79 volume percent inerts (nitrogen). Any mixture of methane and air will therefore lie on the straight line between pure methane and pure air - this is shown as the blue air-line. The upper and lower flammability limits of methane in air are located on this line, as shown.\n",
"Bubble wrap was invented in 1957 by engineers Alfred Fielding and Marc Chavannes in Hawthorne, New Jersey. Fielding and Chavannes sealed two shower curtains together, creating a smattering of air bubbles, which they originally tried to sell as wallpaper. When the product turned out to be unsuccessful as wallpaper, the team marketed it as greenhouse insulation. Although Bubble Wrap was branded by Sealed Air Corporation (founded by Fielding and Chavannes) in 1960, it was not until a year later that its use in protective usage was discovered. As a packaging material, Bubble Wrap's first client was IBM, which used the product to protect the IBM 1401 computer during shipment. Fielding and Chavannes were inducted into the New Jersey Inventors Hall of Fame in 1993. Sealed Air celebrated Bubble Wrap's 50th birthday in January 2010.\n",
"Bubble wrap\n\nBubble wrap is a pliable transparent plastic material used for packing fragile items. Regularly spaced, protruding air-filled hemispheres (bubbles) provide cushioning for fragile items. \n\n\"Bubble wrap\" is a generic trademark owned by Sealed Air Corporation. In 1957 two inventors named Alfred Fielding and Marc Chavannes were attempting to create a three-dimensional plastic wallpaper. Although the idea was a failure, they found that what they did make could be used as packing material. Sealed Air Corp. was co-founded by Alfred Fielding in 1960.\n",
"Section::::Air classification.\n\nAn air elutriator is a simple device which can separate particles into two or more groups.\n",
"BULLET::::- Dense gas model — Dense gas models are models that simulate the dispersion of dense gas pollution plumes (i.e., pollution plumes that are heavier than air). The three most commonly used dense gas models are:\n\nBULLET::::- The DEGADIS model developed by Dr. Jerry Havens and Dr. Tom Spicer at the University of Arkansas under commission by the US Coast Guard and US EPA.\n\nBULLET::::- The SLAB model developed by the Lawrence Livermore National Laboratory funded by the US Department of Energy, the US Air Force and the American Petroleum Institute.\n",
"Refers to the process in which air is absorbed into the food item. It refers to the lightness of cakes and bread, as measured by the type of pores they contain, and the color and texture of some sauces which have incorporated air bubbles.\n\nIn wine tasting, a variety of methods are used to aerate wine and bring out the aromas including swirl wine in the glass, use of a decanter to increase exposure to air, or a specialized wine aerator.\n",
"The term is used generically for similar products, such as bubble pack, bubble paper, air bubble packing, bubble wrapping, or aeroplast. Properly Bubble Wrap and BubbleWrap still are Sealed Air Corporation registered trademarks.\n\nSection::::Design.\n",
"BULLET::::- Puff or intermittent source – short term sources (for example, many accidental emission releases are short term puffs)\n\nBULLET::::- Continuous source – long term source (for example, most flue gas stack emissions are continuous)\n\nSection::::Characterization of atmospheric turbulence.\n",
"BULLET::::- NAME – Numerical atmospheric-dispersion modelling environment (NAME) is a local to global scale model developed by the UK's Met Office. It is used for: forecasting of air quality, air pollution dispersion, and acid rain; tracking radioactive emissions and volcanic ash discharges; analysis of accidental air pollutant releases and assisting in emergency response; and long-term environmental impact analysis. It is an integrated model that includes boundary layer dispersion modelling.\n",
"Another convention utilizing these symbols is the indication of modification or transformation of one type to another. For instance, an Arctic air mass blowing out over the Gulf of Alaska may be shown as \"cA-mPk\". Yet another convention indicates the layering of air masses in certain situations. For instance, the overrunning of a polar air mass by an air mass from the Gulf of Mexico over the Central United States might be shown with the notation \"mT/cP\" (sometimes using a horizontal line as in fraction notation).\n\nSection::::Characteristics.\n",
"Ventile\n\nVentile, is a registered trademark used to brand a special high-quality woven cotton fabric first developed by scientists at the Shirley Institute in Manchester, England. Originally created to overcome a shortage of flax used for fire hoses and water buckets, its properties were also found to be ideal for pilots' immersion suits.\n",
"BULLET::::- DEGADIS – Dense gas dispersion (DEGADIS) is a model that simulates the dispersion at ground level of area source clouds of denser-than-air gases or aerosols released with zero momentum into the atmosphere over flat, level terrain.\n\nBULLET::::- HGSYSTEM – A collection of computer programs developed by Shell Research Ltd. and designed to predict the source-term and subsequent dispersion of accidental chemical releases with an emphasis on dense gas behavior.\n",
"Previously, the airline operated buses for travelers in San Francisco, Houston and Abu Dhabi. The San Francisco buses transported customers to/from Milpitas and Cupertino. The Houston bus service served Sugar Land and Southwest Houston Chinatown.\n\nSection::::Pacific Greenhouse Gases Measurement (PGGM) Project.\n",
"Section::::Application Characteristics.:Air leakage in UFAD plenums.\n",
"Out of Thin Air\n\nOut of Thin Air may refer to:\n\nBULLET::::- \"Out of Thin Air\", a 2017 documentary about the Guðmundur and Geirfinnur case\n\nBULLET::::- \"Out of Thin Air\", the first episode of 2014 documentary \"The Mystery of Matter\"\n\nBULLET::::- \"Out of Thin Air\", a song by Howard Jones from \"Cross That Line\"\n\nBULLET::::- \"Out of Thin Air\", a song from the 1996 film \"Aladdin and the King of Thieves\"\n\nBULLET::::- \"Out of Thin Air\", an episode of \"Black Scorpion\"\n\nBULLET::::- \"Out of Thin Air: The Brief Wonderful Life of Network News\", a 1991 memoir by Reuven Frank\n",
"Raymond Wang is a young inventor from Vancouver, Canada, who created a device that can improve air quality for passengers on airplanes. In May, 2015, at the age of 17 and a junior at St. Georges School in Vancouver, Wang won the world's largest high school science competition, the Intel International Science and Engineering Fair in Pittsburgh, the top prize valued at $75,000. His invention has the potential to keep dangerous microbes from spreading on airplanes. Wang has spoken about his invention on TED, and has filed for a patent for his invention, which he calls a \"global inlet director.\"\n",
"The expandable microspheres are off-white, can be 6 to 40 micrometers in average diameter and have a density of 900 to 1400 kg/m³.\n\nThe expandable microspheres are used as a blowing agent in products like e.g. puff ink, automotive underbody coatings or injection molding of thermoplastics. Here the product must be heated at some point in the process for the expandable microspheres to expand.\n\nSection::::Expanded microsphere.\n\nThe expanded microsphere is a material that has been heated to cause expansion. The product acts as a light weight filler in many products.\n",
"Section::::Characteristics.:Sizes.\n",
"\"Tersicoccus phoenicis\" are only known to exist at two locations on Earth, and were independently found in geographically separated clean room facilities nearly apart. One example was located during a 2007 microbial test swabbing of the \"Phoenix\" lander clean room floor in the Payload Hazardous Servicing Facility at Kennedy Space Center (Florida, United States), while the other was found in the Herschel Space Observatory's clean room at Guiana Space Centre (Kourou, French Guiana).\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-04827 | How do higher quality microphones filter out most of the background noise? | There are a few things that you should probably look at. Some microphones have different recording patterns, and you can choose an optimal pattern depending on the setting. Cardioid mode records what is directly in front of the receiver, versus, say, an omnidirectional mic that will pick up in a 360-degree pattern. You can also apply what is called a pop filter. Pop filters work by reducing airflow towards the mic, which effectively removes the popping and white noise often heard. | [
"BULLET::::- Cinema Audio Society Award for \"Outstanding Achievement in Sound Mixing for Television-Series\", \"Six Feet Under\" (2004) - NOMINATED\n\nBULLET::::- Daytime Emmy Award for \"Outstanding Achievement in Film Sound Mixing\", \"Pee-wee's Playhouse\" (1991) – WON\n\nBULLET::::- Genie Award for \"Best Original Song\", \"Ups & Downs\" (1984) - WON\n\nBULLET::::- Genie Award for \"Best Achievement in Music Score\",\"Happy Birthday to Me\" (1982) - NOMINATED\n\nBULLET::::- Genie Award for \"Best Achievement in Overall Sound\", \"Terror Train\" (1981) - NOMINATED\n\nSection::::External links.\n\nBULLET::::- Bo Harwood discusses microphones at Obama rallies (14 October 2008)\n",
"The quality of the imaging arriving at the listener's ear depends on numerous factors, of which the most important is the original \"miking\", that is, the choice and arrangement of the recording microphones (where \"choice\" refers here not to the brands chosen, but to the size and shape of the microphone diaphragms, and \"arrangement\" refers to microphone placement and orientation relative to other microphones). This is partly because miking simply affects imaging more than any other factor, and because, if the miking spoils the imaging, nothing later in the chain can recover it.\n",
"Some models have adjustable gain on the microphone itself to be able to accommodate different level sources, such as loud instruments or quiet voices. Adjustable gain helps to avoid clipping and maximize signal to noise ratio.\n\nSome models have adjustable squelch, which silences the output when the receiver does not get a strong or quality signal from the microphone, instead of reproducing noise. When squelch is adjusted, the threshold of the signal quality or level is adjusted.\n\nSection::::Products.\n",
"BULLET::::- In the X-Y techniques, the microphones would ideally be in exactly the same place, which is not possible – if they are slightly separated left to right, there may be some loss of high frequencies when played back in mono, so they are often separated vertically. This only causes problems with sound from above or below the height of the microphones.\n\nBULLET::::- The M/S technique is ideal for mono compatibility, since summing Left+Right just gives the Mid signal back.\n\nThe equipment for the techniques also varies from the bulky to the small and convenient.\n",
"The master recording process, using current 24-bit techniques, offers around 99 dB of \"true\" dynamic range (based on the \"ITU-R 468 noise weighting\" standard); identical to the dynamic range of a good studio microphone, though very few recordings will use just one microphone, and so the noise on most recordings is likely to be the sum of several microphones after mixing, and probably at least 6 dB worse than shown.\n\nSection::::See also.\n\nBULLET::::- Audio system measurements\n\nBULLET::::- Noise measurement\n\nBULLET::::- Weighting filter\n\nBULLET::::- Equal-loudness contour\n\nBULLET::::- Fletcher-Munson curves\n\nSection::::External links.\n\nBULLET::::- EBU Recommendation R68-2000\n",
"Section::::Commercial performance.\n",
"Microphones are often designed for highly specific applications and have a major effect on recording quality. A single studio quality microphone can cost $5,000 or more, while consumer quality recording microphones can be bought for less than $50 each. Microphones also need some type of microphone preamplifier to prepare the signal for use by other equipment. These preamplifiers can also have a major effect on the sound and come in different price ranges, physical configurations, and capability levels. Microphone preamplifiers may be external units or a built in feature of other audio equipment.\n\nSection::::With computers.:Software.\n",
"BULLET::::- ML-7 Preamplifier (stereo ML-6a, design credit : Thomas Colangelo)\n\nBULLET::::- ML-7A Preamplifier (update by Madrigal, design credit : Thomas Colangelo)\n\nBULLET::::- ML-8 Microphone Preamplifier (input Brüel & Kjær microphones)\n\nBULLET::::- ML-9 Power Amplifier (design credit : Thomas Colangelo)\n\nBULLET::::- ML-10 Preamplifier (design credit : Thomas Colangelo)\n\nBULLET::::- ML-11 Preamplifier (budget model forced by new management, design credit : Thomas Colangelo)\n\nBULLET::::- ML-12 Power Amplifier (budget model forced by new management, design credit : Thomas Colangelo)\n\nSection::::Equipment history.:1980s to mid-1990s: Cello.\n\nBULLET::::- Cello Audio Palette, world’s first no-compromise analog equalizer\n\nBULLET::::- Cello Audio Suite\n",
"BULLET::::- In 1995, JL Audio introduces the first group of Stealthbox® vehicle-specific subwoofers. Three brand new enclosed systems were introduced this year: PowerWedge™ (now a name of the past), ProWedge™, and MicroSub™.\n",
"The professional models transmit in VHF or UHF radio frequency and have 'true' diversity reception (two separate receiver modules, each with its own antenna), which eliminates dead spots (caused by phase cancellation) and the effects caused by the reflection of the radio waves on walls and surfaces in general. (See antenna diversity).\n\nAnother technique used to improve the sound quality (actually, to improve the dynamic range), is companding. Nady Systems, Inc. was the first to offer this technology in wireless microphones in 1976, which was based on the patent obtained by company founder John Nady.\n",
"For quality of sound and soundness of quality-control this ... is in a class with ... industry-standard-setting productions. I usually don't pay much attention to producers of records in my reviews but since the liner notes of this ... (and others in this \"SM5000 Series\") credit producer Anton Kwiatkowski's \"award-winning\" microphone techniques, and the results are so unfailingly excellent as to be thoroughly identifiable with his work, his name deserves mention.\n\nSection::::Freelance Work: 1981 - Present.\n",
"JZ Microphones produces ten microphone models, in whose creation twenty-four patents owned by company are used. Most of the microphones are made with ‘’Golden drop’’ technology – a slightly different gilding process of capsule; in result the sound is much natural and cleaner. Also the original design of microphones differ JZ from other microphones, one of the most popular model series Black Hole unique design with hole in body makes attaching easier and also reduces unnecessary sounds.\n\nSound engineers and producers that use JZ Microphones JZ:\n\nBULLET::::- Andy Gill\n\nBULLET::::- Sylvia Massy\n\nBULLET::::- Dave Jerden\n\nBULLET::::- Kurt Hugo Schneider\n",
"Wider dynamic range came with the introduction of the first compander wireless microphone, offered by Nady Systems in 1976. Todd Rundgren and the Rolling Stones were the first popular musicians to use these systems live in concert. Nady joined CBS, Sennheiser and Vega in 1996 to receive a joint Emmy Award for \"pioneering [the] development of the broadcast wireless microphone\".\n\nSection::::Advantages and disadvantages.\n\nThe advantages are:\n\nBULLET::::- Greater freedom of movement for the artist or speaker\n\nBULLET::::- Avoidance of cabling problems common with wired microphones, caused by constant moving and stressing the cables\n",
"A microphone is a transducer and as such is the source of much of the coloration of an audio mix. Most audio engineers would assert that a microphone preamplifier also affects the sound quality of an audio mix. A preamplifier might load the microphone with low impedance, forcing the microphone to work harder and so change its tone quality. A preamplifier might add coloration by adding a different characteristic than the audio mixer's built-in preamplifiers. Some microphones, for example condensers, must be used in conjunction with an impedance matching preamplifier to function properly.\n",
"Section::::Various methods of stereo recording.\n\nSection::::Various methods of stereo recording.:X-Y technique: intensity stereophony.\n\nHere there are two directional microphones at the same place, and typically placed at 90° or more to each other. A stereo effect is achieved through differences in sound pressure level between two microphones. Due to the lack of differences in time-of-arrival and phase ambiguities, the sonic characteristic of X-Y recordings is generally less \"spacey\" and has less depth compared to recordings employing an AB setup.\n",
"BULLET::::- Neve 8028 Mixing Console, \"one of five in the world\", \"a 24-input, 16-bus, 24-monitor 8028 with 1073 or 1084 EQs and \"no automation\"\"\n\nBULLET::::- Neve 8078 Mixing Console\n\nBULLET::::- Neve 2254 Compressor/Limiter\n\nBULLET::::- Neve 1084 Mic Preamplifier & Equaliser\n\nSection::::Notable products.:Neve 1073 Console Module.\n",
"Section::::Stereo microphone techniques.\n\nVarious standard techniques are used with microphones used in sound reinforcement at live performances, or for recording in a studio or on a motion picture set. By suitable arrangement of one or more microphones, desirable features of the sound to be collected can be kept, while rejecting unwanted sounds.\n\nSection::::Powering.\n",
"BULLET::::- In 1999, W3 subwoofers are introduced. The Stealthbox®, PowerWedge™ and Evolution® lineups are further expanded with new models and replacements for old models. In this same year, Jeff Scoon, Bruce Macmillan and David Krich establish a new engineering department for JL Audio electronics in Phoenix, AZ\n",
"BULLET::::- The sub-cardioid microphone has no null points. It is produced with about 7:3 ratio with 3–10 dB level between the front and back pickup.\n\nSection::::Polar patterns.:Bi-directional.\n",
"One more difference to standard formats is the sampling process. The audio stream is sampled and convolved with a triangle function, and interpolated later during playback. The techniques employed, including the sampling of signals with a finite rate of innovation, were developed by a number of researchers over the preceding decade, including Pier Luigi Dragotti and others.\n",
"The broadcast quality of professional audio equipment is on a par with that of consumer high-end audio and hi-fi equipment, but is more likely to be designed purely on sound engineering principles and owes little to the consumer-oriented audiophile sub-culture.\n\nSection::::Stores.\n",
"Section::::\"Romanticide\".\n",
"The use of different kinds of microphones and their placement around the studio was a crucial part of the recording process, and particular brands of microphone were used by engineers for their specific audio characteristics. The smooth-toned ribbon microphones developed by the RCA company in the 1930s were crucial to the \"crooning\" style perfected by Bing Crosby, and the famous Neumann U47 condenser microphone was one of the most widely used from the 1950s. This model is still widely regarded by audio professionals as one of the best microphones of its type ever made. Learning the correct placement of microphones was a major part of the training of young engineers, and many became extremely skilled in this craft. Well into the 1960s, in the classical field it was not uncommon for engineers to make high-quality orchestral recordings using only one or two microphones suspended above the orchestra. In the 1960s, engineers began experimenting with placing microphones much closer to instruments than had previously been the norm. The distinctive rasping tone of the horn sections on the Beatles recordings \"Good Morning Good Morning\" and \"Lady Madonna\" were achieved by having the saxophone players position their instruments so that microphones were virtually inside the mouth of the horn.\n",
"BULLET::::- Type of sound-source: Acoustic instruments produce a sound very different from electric instruments, which are again different from the human voice.\n\nBULLET::::- Situational circumstances: Sometimes a microphone should not be visible, or having a microphone nearby is not appropriate. In scenes for a movie the microphone may be held above the pictureframe, just out of sight. In this way there is always a certain distance between the actor and the microphone.\n\nBULLET::::- Processing: If the signal is destined to be heavily processed, or \"mixed down\", a different type of input may be required.\n",
"This chart is based on the assumption that what goes in should come out—true high-fidelity—and so an Alignment Level (AL) corresponding to 100 dB SPL has been assumed throughout. Any lower level would imply severe clipping at the first stage; the master recording. Top quality microphones do not present a problem; most will handle 130 dB SPL without severe distortion, and a few manage more than 140 dB SPL.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-03238 | How does Velcro retain its strength through continuous use? | There are a couple of factors that explain this. First, most loops don't find a hook each time the Velcro is sealed. Only a minority of the loops need to find a hook for the Velcro to hold. Second, not every loop tears when you rip apart the Velcro. There are many times more loops on the one side than there are hooks on the other, so this combination of factors gives Velcro an appreciable use-life. If you notice your Velcro is getting less sticky, you'll get more bang for your buck by trying to clean lint out of the hook side first. | [
"Swiss electrical engineer George de Mestral invented his first touch fastener when, in 1941, he went for a walk in the woods and wondered why burdock seeds clung to his coat and dog. He discovered it could be turned into something useful. He patented it in 1955, and subsequently refined and developed its practical manufacture until its commercial introduction in the late 1950s.\n",
"BULLET::::- 1984 - David Letterman wears a suit made of hook-and-loop and jumps from a trampoline into a wall covered in the product during an interview with Velcro Companies' USA director of industrial sales.\n\nBULLET::::- 1996 - In the John Frankenheimer film \"The Island of Dr. Moreau,\" Moreau's assistant jokingly claims that the doctor won his Nobel Prize for inventing Velcro.\n",
"BULLET::::- ASTM D5170-98 (2010) Standard Test Method for Peel Strength (\"T\" Method) of Hook and Loop Touch Fasteners\n\nBULLET::::- ASTM D2050-11 Standard Terminology Relating to Fasteners and Closures Used with Textiles\n\nSection::::Jumping.\n\nVelcro jumping is a game where people wearing hook-covered suits take a running jump and hurl themselves as high as possible at a loop-covered wall. The wall is inflated, and looks similar to other inflatable structures. It is not necessarily completely covered in the material—often there will be vertical strips of hooks. Sometimes, instead of a running jump, people use a small trampoline.\n",
"The fastener consisted of two components: a lineal fabric strip with tiny hooks that could \"mate\" with another fabric strip with smaller loops, attaching temporarily, until pulled apart. Initially made of cotton, which proved impractical, the fastener was eventually constructed with nylon and polyester.\n\nDe Mestral gave the name \"Velcro\", a portmanteau of the French words \"velour\" (\"velvet\") and \"crochet\" (\"hook\"), to his invention as well as his company, which continues to manufacture and market the fastening system.\n\nSection::::Company leadership.\n",
"Velcro\n\nVelcro BVBA is a privately held company that produces fasteners and other products. It is known for being the original patentor of the hook-and-loop fastener, to which it has (over its objections) lent the generic name \"velcro\".\n\nSection::::History.\n",
"The pouches are opened and closed with Spanish Tab fasteners, they can be closed in two different ways, quick release or secure. Small sections of Velcro, sewn on the inside of the lids of the pouches, and the top front section of the pouches, allow for easy and effortless fastening. Added silencer strips allow to cover them when not needed.\n",
"From 2011 to 2017, Scott Filion served as President of Velcro Americas. In 2014, he also served as interim CEO, replacing Alain Zijlstra.\n\nFraser Cameron served as CEO from 2014 until 2018.\n\nIn January 2018, Robert \"Bob\" Woodruff, previously associated with Alex and Ani and Nike, was appointed CFO. In January 2019, he was appointed CEO.\n\nIn January 2018, Paul Garutti was appointed President of Velcro Latin America. He replaced Dirk Foreman who was appointed President of North America in 2017.\n\nSection::::External links.\n\nBULLET::::- Velcro Official US company website\n\nBULLET::::- Original 1955 patent - from Google Patents\n",
"Television show host David Letterman immortalized this during the February 28, 1984 episode of Late Night with David Letterman on NBC. Letterman proved that with enough of the material a man could be hurled against a wall and stick, by performing this feat during the television broadcast.\n",
"Section::::Modern Materials.:Fiberglass.\n",
"Hook-and-loop fasteners, hook-and-pile fasteners or touch fasteners (often referred to by the genericized trademark velcro, despite the objections of the Velcro Brand), consist of two components: typically, two lineal fabric strips (or, alternatively, round \"dots\" or squares) which are attached (sewn or otherwise adhered) to the opposing surfaces to be fastened. The first component features tiny hooks, the second features smaller loops. When the two are pressed together the hooks catch in the loops and the two pieces fasten or bind temporarily. When separated, by pulling or peeling the two surfaces apart, the strips make a distinctive \"ripping\" sound.\n\nSection::::History.\n",
"De Mestral gave the name Velcro, a portmanteau of the French words velours (\"velvet\"), and crochet (\"hook\"), to his invention as well as his company, which continues to manufacture and market the fastening system.\n",
"The game moved to the U.S. after Sports Illustrated published a story on it in 1991. Adam Powers and Stephen Wastell of the Perfect Tommy's bar in New York city read of the game, and soon became the United States distributor of Human Bar Fly equipment. By 1992, wall-jumping was practiced in dozens of New Zealand bars and was said to be one of the favorite bar activities there at the time.\n\nSection::::In popular culture.\n\nBULLET::::- 1969–1972 - Velcro brand fasteners were used on the suits, sample collection bags, and lunar vehicles during all Apollo program missions to the Moon.\n",
"Various constructions and strengths are available. Some touch fasteners are strong enough that a two-inch square (5 × 5 cm) piece is enough to support a load. Fasteners made of Teflon loops, polyester hooks, and glass backing are used in aerospace applications, e.g. on space shuttles. The strength of the bond depends on how well the hooks are embedded in the loops, how much surface area is in contact with the hooks, and the nature of the force pulling it apart. If hook-and-loop is used to bond two rigid surfaces, such as auto body panels and frame, the bond is particularly strong because any force pulling the pieces apart is spread evenly across all hooks. Also, any force pushing the pieces together is disproportionately applied to engaging more hooks and loops. Vibration can cause rigid pieces to improve their bond. Full-body hook-and-loop suits have been made that can hold a person to a suitably covered wall.\n",
"Section::::Early materials.:Polyester resin.\n",
"Power pro\n\nPower Pro a type of braided fishing line made out of a material called Spectra fibers. It has an equivalent diameter of nearly 1/5 of monofilament. Thus the diameter of a piece of Power Pro testing at 50 pounds is equivalent to monofilaments' diameter testing at around 12 pounds. It lacks stretch that monofilament has, giving the fisherman a better \"feel\" and also helps set the hook faster. Environmentalists have criticized the use of spectra fiber, as it takes a long time to degrade thus harming the environment. Spectra is a form of gel-spun polyethylene.\n",
"BULLET::::- 1997 - The fastener has become part of a recurring joke in various media in which it is claimed that modern humans would be unable to invent it, and that it is in fact a form of advanced technology. For example, K claims in \"Men in Black\" that Velcro was originally alien technology,\n\nBULLET::::- 2002 - The \"\" episode \"\" portrays Velcro as being introduced to human society by Vulcans in 1957. One of the Vulcans in the episode is named \"Mestral\", after the fastener's actual inventor and founder of the brand.\n",
"A Velcro hair roller is made of a strip of hook and loop fasteners that is wrapped around cylinders. They are available in different sizes and can be used on dry or wet hair. The rollers are self-holding because they do not need pins or clips to be held in place and do not need heat to be applied to create the curls. Typically kept in the hair for about fifteen minutes. To clean Velcro hair rollers hair should be removed from them and then soaked in shampoo and water mixture followed by a vinegar solution to remove any excess oil or dirt.\n",
"In 2013 the Spirograph brand was re-launched worldwide by Kahootz Toys with products that returned to the use of the original gears and wheels. The modern products use removable putty in place of pins or are held down by hand to keep the stationary pieces in place on the paper. The Spirograph was a 2014 Toy of the Year finalist in two categories, over 45 years after the toy was named Toy of the Year in 1967.\n\nSection::::Operation.\n",
"A Montreal firm, Velek, Ltd., acquired the exclusive right to market the product in North and South America, as well as in Japan, with American Velcro, Inc. of New Hampshire, and Velcro Sales of New York, marketing the \"zipperless zipper\" in the United States.\n",
"However, hook and loop's integration into the textile industry took time, partly because of its appearance. Hook and loop in the early 1960s looked like it had been made from left-over bits of cheap fabric, an unappealing aspect for clothiers. The first notable use for Velcro® brand hook and loop came in the aerospace industry, where it helped astronauts maneuver in and out of bulky space suits. Eventually, skiers noted the similar advantages of a suit that was easier to get in and out of. Scuba and marine gear followed soon after.\n",
"Currently, a large quantity of commercial holds are made of polyurethane (often called PU or urethane in the USA) or a polyurethane mixture. PU is lighter, more flexible, and less prone to chipping and breakage than polyester or natural materials. Like polyester (PE), PU mixtures can vary, and different mixtures have different textures and strengths. It's very simple to make a quality polyester recipe, but much harder to make a top quality polyurethane. If the polyurethane is too soft it will split apart when the hold is tightened, or the bolt might get pulled through the hold, or the hold will flex on the wall or could polish (become slick) quickly. If the polyurethane is too hard it will be brittle (like polyester) and the edges could chip or it could crack when tightened (also like polyester). Some climbers believe polyurethane can become warm with intensive use, though a few moments of not being held and some brushing usually solves the problem. PU holds are generally a lot lighter than Polyester holds as PU tolerates a much thinner wall so it can be hollowed out and maintain strength whereas PE holds need to be solid or have very thick walls or they are much more prone to breaking. Polyurethane is the leading hold material in the USA. However, there is an Atlantic split with most of Europe preferring Polyester mixes. There are many reasons for this, mostly that PU is generally a newer material and Europe only recently has been exposed to quality PU mixes. Additionally, PU is generally more expensive than PE.\n",
"BULLET::::- The character of Jerry on the 1990s sitcom \"Parker Lewis Can't Lose\" wears a trenchcoat from which he can get any needed item, always with the sound of a velcro attachment ripping free.\n",
"\"It took us nearly two years to iron the kinks out of Super Ball before we produced it,\" said Richard Knerr, President of Wham-O in 1966. \"It always had that marvelous springiness…. But it had a tendency to fly apart. We've licked that with a very high-pressure technique for forming it. Now we're selling millions.\" \n",
"Section::::Tape-less devices.:Myskinclamp.\n\nIt is made of stainless steel pusher cone with spring which pushes the other steel cone which can slide on the pusher shaft and a plastic cone to hold the foreskin in place. It also has an option for a strap by which it is tied with the leg or can be used as a tugger.\n\nSection::::Tape-less devices.:CAT II Q.\n",
"When one or both of the pieces is flexible, e.g., a pocket flap, the pieces can be pulled apart with a peeling action that applies the force to relatively few hooks at a time. If a flexible piece is pulled in a direction parallel to the plane of the surface, then the force is spread evenly, as it is with rigid pieces.\n\nThree ways to maximize the strength of a bond between the two flexible pieces are:\n\nBULLET::::- Increase the area of the bond, e.g. using larger pieces.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |
2018-11826 | How are fruit juice concentrates made? If juice comes from the fruits themselves, how is it put into concentrated form? | Juice contains a lot of water, so you just boil some of that water off until what's left is a more concentrated juice. | [
"Although processing methods vary between juices, the general processing method of juices includes:\n\nBULLET::::- Washing and sorting food source\n\nBULLET::::- Juice extraction\n\nBULLET::::- Straining, filtration and clarification\n\nBULLET::::- Blending pasteurization\n\nBULLET::::- Filling, sealing and sterilization\n\nBULLET::::- Cooling, labeling and packing\n",
"The process of extracting juice from fruits and vegetables can take a number of forms. Simple crushing of most fruits will provide a significant amount of liquid, though a more intense pressure can be applied to get the maximum amount of juice from the fruit. Both crushing and pressing are processes used in the production of wine.\n\nSection::::Production.:Infusion.\n\nInfusion is the process of extracting flavours from plant material by allowing the material to remain suspended within water. This process is used in the production of teas, herbal teas and can be used to prepare coffee (when using a coffee press).\n",
"After the juice is filtered, it may be concentrated in evaporators, which reduce the size of juice by a factor of 5, making it easier to transport and increasing its expiration date. Juices are concentrated by heating under a vacuum to remove water, and then cooling to around 13 degrees Celsius. About two thirds of the water in a juice is removed. The juice is then later reconstituted, in which the concentrate is mixed with water and other factors to return any lost flavor from the concentrating process. Juices can also be sold in a concentrated state, in which the consumer adds water to the concentrated juice as preparation.\n",
"Section::::Processing and manufacture.:Manufacture of \"not from concentrate\".\n",
"Juice is prepared by mechanically squeezing or macerating (sometimes referred to as cold pressed) fruit or vegetable flesh without the application of heat or solvents. For example, orange juice is the liquid extract of the fruit of the orange tree, and tomato juice is the liquid that results from pressing the fruit of the tomato plant. Juice may be prepared in the home from fresh fruit and vegetables using a variety of hand or electric juicers. Many commercial juices are filtered to remove fiber or pulp, but high-pulp fresh orange juice is a popular beverage. Additives are put in some juices, such as sugar and artificial flavours (in some fruit juice-based beverages); savoury seasonings (e.g., in Clamato or Caesar tomato juice drinks). Common methods for preservation and processing of fruit juices include canning, pasteurization, concentrating, freezing, evaporation and spray drying.\n",
"Single strength orange juice (SSOJ) can either be \"not from concentrate\" (NFC) orange juice or juice that is reconstituted from a concentrate with the addition of water to reach a specific single strength brix level. The processing of SSOJ also begins with the selection of orange. The most common types of orange used to produce orange juice are the Pineapple orange, Valencia orange, and Washington Navel oranges from Florida and California. The manufacturing journey begins when oranges are delivered to processing plants by trucks holding about 35,000 to 40,000 pounds of fruit. The fruit is unloaded at the plant for inspection and grading to remove unsuitable fruit before the oranges enter the storage bins. An automatic sampler contraption removes oranges for determination of acid and soluble solids. The bins are organized based on ratio of soluble solids to acids in order to blend oranges appropriate to produce juice with uniform flavor. After the fruit leaves the bins, they are scrubbed with detergent on a rotary brush washer and subsequently rinsed with potable water. Throughout the processing stages, there are multiple points with facilities that inspect oranges and discard damaged fruit.\n",
"The process of concentrating orange juice was patented in 1948. It was originally developed to provide World War II troops with a reliable source of vitamin C. Today, the majority of retailed orange juice is made from reconstituted orange juice concentrate.\n\nMost sodas and soft drinks are produced as highly concentrated syrups and later diluted with carbonated water directly before consumption or bottling. Such concentrated syrups are sometimes retailed to the end-consumer because of their relatively low price and considerable weight savings. Condensed milk is also produced for transport weight savings and resistance to spoilage.\n",
"Juice vesicles hold a lot of juice than can be recovered through various extraction processes. The pulp is usually removed from the juice by filtering it out. The juiciness of the pulp depends on the species, variety, season, and the tree on which it grew. Close to 90% of the citrus fruit juice solids are recovered with extractors. Pectic enzymes can sometimes be added to lessen the thickness of these solids. The juice along with these solids can be combined to increase primary juice yields or sold as bases for fruit beverages. The juice solids become opaque from the pulp washing process, resulting in a less expensive source of fruit solids for food labeling in comparison to regular juice. The juice solids can also be pasteurized, dried, and sold, but appear dark brown in color if they have not been washed properly before drying. The solids can also be stored frozen or sold to beverage manufacturers. They provide fruit beverages that are sold with a higher appeal to a consumer and improved texture in the juice. These opaque juice solids are known as cloud.\n",
"High intensity pulsed electric fields are being used as an alternative to heat pasteurization in fruit juices. Heat treatments sometimes fail to make a quality, microbiological stable products. However, it was found that processing with high intensity pulsed electric fields (PEF) can be applied to fruit juices to provide a shelf-stable and safe product. In addition, it was found that pulsed electric fields provide a fresh-like and high nutrition value product. Pulsed electric field processing is a type of nonthermal method for food preservation.\n",
"BULLET::::- Orange juice is obtained by squeezing the fruit on a special tool (a \"juicer\" or \"squeezer\") and collecting the juice in a tray underneath. This can be made at home or, on a much larger scale, industrially. Brazil is the largest producer of orange juice in the world, followed by the United States, where it is one of the commodities traded on the New York Board of Trade.\n\nBULLET::::- Frozen orange juice concentrate is made from freshly squeezed and filtered orange juice.\n",
"In the United Kingdom, orange juice from concentrate is a product of concentrated fruit juice with the addition of water. Any lost flavour or pulp of the orange juice during the initial concentration process may be restored in the final product to be equivalent to an average type of orange juice of the same kind. Any restored flavour or pulp must come from the same species of orange. Sugar may be added to the orange juice for regulating the acidic taste or sweetening, but must not exceed 150g per litre of orange juice. Across the UK, the final orange juice from concentrate product must contain a minimum Brix level of 11.2, excluding the additional sweetening ingredients. Vitamins and minerals may be added to the orange juice in accordance with Regulation (EC) 1925/2006.\n",
"Recently, many brands of organic orange juices have become available on the market.\n\nSection::::Processing and manufacture.\n\nSection::::Processing and manufacture.:Manufacture of frozen concentrated orange juice.\n",
"The basic manufacture process of NCS involves juice extraction, physical elimination of impurities and clarification of the juice, evaporation of the water content of the juice, crystallization, eventually drying and packaging.\n\nThe cane juice is generally extracted from cleaned and eventually shredded cane stalks by mechanical processes, commonly with simple crushers consisting of three metal rollers. It is filtered to separate bagasse particles and/or allowed to settle so to eliminate solid impurities.\n",
"Making cold-pressed juice is a two-step process. The first stage is to shred the fruits and vegetables into a pulp. Typically the shredding process uses a steel rotating disc. Produce is loaded into a large hopper feeding tube and typically falls into a filter bag. The second process is the hydraulic press; this exposes the shredded produce to extreme pressures between two plates. The pressure causes the juice and water content from the produce to drip into a collection tray below, leaving behind the fibre content in the filter bag. The fibre left behind is generally composted, recycled in food products or discarded.\n",
"In the United States, orange juice is regulated and standardized by the Food and Drug Administration (FDA or USFDA) of the United States Department of Health and Human Services. According to the FDA, orange juice from concentrate is a mixture of water with frozen concentrated orange juice or concentrated orange juice for manufacturing. Additional ingredients into the mixture may include fresh/frozen/pasteurized orange juice from mature oranges, orange oil, and orange pulp. Furthermore, one or more of the following optional sweetening ingredients may be added: sugar, sugar syrup, invert sugar, invert sugar syrup, dextrose, corn syrup, dried corn syrup, glucose syrup, and dried glucose syrup. The orange juice must contain a minimum Brix level of 11.8, which indicates the percentage of orange juice soluble solids, excluding any added sweetening ingredients.\n",
"Juices are then pasteurized and filled into containers, often while still hot. If the juice is poured into a container while hot, it is cooled as quickly as possible. Packages that cannot stand heat require sterile conditions for filling. Chemicals such as hydrogen peroxide can be used to sterilize containers. Plants can make anywhere from 1 to 20 tonnes a day.\n\nSection::::Processing.\n",
"Juicing tools have been used throughout history. Manual devices include barrel-shaped presses, hand-operated grinders, and inverted cones upon which fruit is mashed and twisted. Modern juicers are powered by electric motors generating from 200 to 1000 or more watts. There are several types of electric juicers: masticating, centrifugal, and triturating juicers. These variations are defined by the means of extracting the juice.\n\nBULLET::::- Masticating (also referred to as cold pressed) – utilizes a single gear driven by a motor; slower operation; kneads and grinds items placed in a chute\n",
"In the United Kingdom, the name or names of the fruit followed by \"juice\" can only legally be used to describe a product which is 100% fruit juice, as required by the Fruit Juices and Fruit Nectars (England) Regulations and the Fruit Juices and Fruit Nectars (Scotland) Regulations 2003. However, a juice made by reconstituting concentrate can be called juice. A product described as fruit \"nectar\" must contain at least 25% to 50% juice, depending on the fruit. A juice or nectar including concentrate must state that it does. The term \"juice drink\" is not defined in the Regulations and can be used to describe any drink which includes juice, whatever the amount. Comparable rules apply in all EU member states in their respective languages.\n",
"Section::::Products.\n\nSafterei uses two techniques that are not used in the normal production process of juices: Coldpressed and High pressure processing.\n\nCold pressing (also referred to as masticating) utilises a single gear driven by a motor; slower operation; kneads and grinds items placed in a chute. \n\nBecause of this technique no heat is added to the production process, leaving the end product higher in nutrients and vitamins.\n",
"Cold-pressed juice\n\nCold-pressed juice refers to juice that uses a hydraulic press to extract juice from fruit and vegetables, as opposed to other methods such as centrifugal or single auger.\n",
"When loading the charge of berries into the strainer section sugar may be added (in alternating layers) the amount depending on the projected use, sugar also acts as a preservative. Other additives may be pectin to prepare jellies, ascorbic acid to improve shelf life and other ingredients that will dissolve and mix with the juice as it is extracted. To get a consistent batch of juice the whole charge is allowed to extract and then bottled when complete; draining of the juice as it extracts will result in different compositions from start to end of the batch. Long dwell times in juicers made from aluminium when preparing acid juices is not recommended.\n",
"Clarification is carried out to coagulate the particulates, which come to the surface during boiling and are skimmed off. A variety of materials are used, such as plant material, ash, etc. With the aim of neutralizing the juice, which facilitates the formation of sugar crystals, lime or sulfur dioxide are added. In some of the larger factories the juice is filtered and chemically clarified.\n",
"A juicing press, such as a fruit press or wine press, is a larger scale press that is used in agricultural production. These presses can be stationary or mobile. A mobile press has the advantage that it can be moved from one orchard to another. The process is primarily used for apples and involves a stack of apple mash, wrapped in fine mesh cloth, which is then pressed under approx 40 tonnes. These machines are popular in Europe and have now been introduced to North America.\n\nSection::::Types.:Steam juice extractor.\n",
"Juicer\n\nA juicer, also known as juice extractor, is a tool used to extract juice from fruits, herbs, leafy greens and other types of vegetables in a process called juicing. It crushes, grinds, and/or squeezes the juice out of the pulp.\n\nSome types of juicers can also function as a food processor. Most of the twin gear and horizontal masticating juicers have attachments for crushing herbs and spices, extruding pasta, noodles or bread sticks, making baby food and nut butter, grinding coffee, making nut milk, etc.\n\nSection::::Types.\n\nSection::::Types.:Reamers.\n",
"Section::::Manufacturing.\n"
] | [] | [] | [
"normal"
] | [] | [
"normal",
"normal"
] | [] |