Windows Hardware Engineering Conference
|
Windows Hardware Engineering Conference
|
On December 17, 2014, Microsoft announced that registration was open for the first of its re-launched WinHEC summits, taking place March 18–19, 2015 in Shenzhen, China. The company also announced that Terry Myerson, Executive Vice President of the Operating Systems Group, would keynote the event. The keynote would cover advancements in the Windows platform making it easier for companies to build devices powered by Windows, as well as Microsoft's growing investments in the Shenzhen and China ecosystem.
|
Windows Hardware Engineering Conference
|
Audience
|
WinHEC will stay true to its strong technical roots. The agenda will be packed with executive keynotes, deep technical training sessions, hands-on labs, and opportunities for Q&A on topics across the spectrum of Windows-based hardware. The conference is intended for executives, engineering managers, engineers, and technical product managers at OEMs, ODMs, IHVs, and IDHs who are working with, or want to work with, Windows technologies.
|
Windows Hardware Engineering Conference
|
Events
|
1992 – San Francisco, California. March 1–3, 1992
1993 – San Jose, California. March 1–3, 1993
1994 – San Francisco, California. February 23–25, 1994
1995 – San Francisco, California. March 20–22, 1995
1996 – San Jose, California. April 1–3, 1996
1997 – San Francisco, California. April 8–10, 1997
1998 – Orlando, Florida. March 25–27, 1998
1999 – Los Angeles, California. April 7–9, 1999
2000 – New Orleans, Louisiana. April 25–27, 2000
2001 – Anaheim, California. March 26–28, 2001.
|
Windows Hardware Engineering Conference
|
Events
|
Announcement of the availability of Windows XP Beta 2, which includes the first public beta of Internet Explorer 6.
2002 – Seattle, Washington. April 16–18, 2002.
2003 – New Orleans, Louisiana. May 6–8, 2003.
Bill Gates keynote; demonstrated "Athens" PC concept, discussed 64-bit computing, uptake of Windows XP.
|
Windows Hardware Engineering Conference
|
Events
|
Initial Windows Longhorn demonstrations and discussions, focusing on a new Desktop Composition Engine (which later became known as the Desktop Window Manager).
2004 – Seattle, Washington. May 4–7, 2004.
Discussion of the Longhorn release timeline and upcoming service packs for Windows XP and Windows Server 2003.
Updated Athens concept PC design, named "Troy", based on a Longhorn user interface.
2005 – Washington State Convention and Trade Center, Seattle, Washington. April 25–27, 2005.
Bill Gates gave a keynote speech on various topics including Windows "Longhorn" (later known as Windows Vista) and 64-bit computing.
|
Windows Hardware Engineering Conference
|
Events
|
2006 – Washington State Convention and Trade Center, Seattle, Washington. May 23–25, 2006. Attendance of more than 3,700.
Microsoft announced the release of beta 2 of Windows Vista, Windows Server "Longhorn", and Microsoft Office 2007.
The Free Software Foundation staged a protest outside the venue, wearing yellow hazmat suits and handing out pamphlets claiming that Microsoft products are "Defective by Design" because of the Digital Rights Management technologies included in them.
2007 – Los Angeles Convention Center, Los Angeles, California. May 15–17, 2007.
2008 – Los Angeles Convention Center, Los Angeles, California. November 4–6, 2008. Immediately following PDC 2008, held at the same venue, October 27–30.
Focused on the then-upcoming Windows 7.
2015 – Grand Hyatt Shenzhen Hotel, Shenzhen, China. March 18–19, 2015.
Microsoft released the source of the Windows Driver Frameworks.
Focused on Windows 10.
|
Lan blood group system
|
Lan blood group system
|
The Lan blood group system (short for Langereis) is a human blood group defined by the presence or absence of the Lan antigen on a person's red blood cells. More than 99.9% of people are positive for the Lan antigen. Individuals with the rare Lan-negative blood type, which is a recessive trait, can produce an anti-Lan antibody when exposed to Lan-positive blood. Anti-Lan antibodies may cause transfusion reactions on subsequent exposures to Lan-positive blood, and have also been implicated in mild cases of hemolytic disease of the newborn. However, the clinical significance of the antibody is variable. The antigen was first described in 1961, and Lan was officially designated a blood group in 2012.
|
Lan blood group system
|
Molecular biology
|
The Lan antigen is carried on the protein ABCB6, an ATP-binding cassette transporter encoded by the ABCB6 gene on chromosome 2q36. The Lan-negative blood type is inherited in an autosomal recessive manner, being expressed by individuals who are homozygous for nonfunctional alleles of ABCB6. Some variant alleles cause a weak positive phenotype, which may be mistaken for a Lan-negative phenotype in serologic testing. As of 2018, more than 40 null or weak alleles of ABCB6 have been described.
ABCB6 is involved in heme synthesis and porphyrin transport and is widely expressed throughout the body, particularly in the heart, skeletal muscle, eye, fetal liver, mitochondrial membrane, and Golgi bodies. The Lan antigen is more strongly expressed on cord blood cells than on adult red blood cells. Despite the protein's wide distribution, Lan-negative individuals do not appear to experience any adverse effects from the absence of ABCB6. It is thought that other porphyrin transporters, such as ABCG2 (which carries the Junior blood group antigen), may compensate. A 2018 study found that Lan-negative blood cells exhibited resistance to Plasmodium falciparum in vitro.
|
Lan blood group system
|
Epidemiology
|
The prevalence of the Lan antigen exceeds 99.9% in most populations. The frequency of the Lan-negative blood type is estimated at 1 in 50,000 in Japanese populations, 1 in 20,000 in Caucasians, and 1 in 1,500 in black people from South Africa.
|
Lan blood group system
|
Clinical significance
|
When Lan-negative individuals are exposed to Lan-positive blood through transfusion or pregnancy, they may develop an anti-Lan antibody. Anti-Lan is considered a clinically significant antibody, but its effects are variable. It has been associated with severe transfusion reactions and mild cases of hemolytic disease of the newborn, but in some cases individuals with the antibody have not experienced any adverse effects from exposure to Lan-positive blood. It is recommended that individuals with anti-Lan be transfused with Lan-negative blood, especially if the antibody titer is high. One case of autoimmune hemolytic anemia involving auto-anti-Lan has been described.
|
Lan blood group system
|
Laboratory testing
|
Serologic reagents and molecular assays for Lan antigen typing were not commercially available as of 2013. Anti-Lan antibodies are typically composed of immunoglobulin G and may bind complement. As an IgG antibody, anti-Lan can be detected using the indirect antiglobulin test. The Lan antigen is resistant to treatment with ficin, papain, trypsin, DTT, and EDTA/glycine-acid.
|
Lan blood group system
|
History
|
The Lan antigen was first described in 1961 by Van der Hart et al., when a Dutch patient suffered a severe hemolytic transfusion reaction. The patient was found to produce an antibody that reacted with all but 1 of 4,000 blood donors tested. The causative antigen was identified and designated "Langereis" after the patient's last name. Lan was officially designated a blood group by the International Society of Blood Transfusion in 2012, following the discovery of the molecular basis of the Lan-negative phenotype.
|
Radium compounds
|
Radium compounds
|
Radium compounds are compounds containing the element radium (Ra). Due to radium's radioactivity, not many compounds have been well characterized. Solid radium compounds are white as radium ions provide no specific coloring, but they gradually turn yellow and then dark over time due to self-radiolysis from radium's alpha decay. Insoluble radium compounds coprecipitate with all barium, most strontium, and most lead compounds.
|
Radium compounds
|
Oxides and hydroxides
|
Radium oxide (RaO) has not been well characterized beyond confirming its existence, despite oxides being common compounds for the other alkaline earth metals. Radium hydroxide (Ra(OH)2) is the most readily soluble of the alkaline earth hydroxides and is a stronger base than its barium congener, barium hydroxide. It is also more soluble than actinium hydroxide and thorium hydroxide: these three adjacent hydroxides cannot be separated by precipitating them with ammonia.
|
Radium compounds
|
Halides
|
Radium fluoride (RaF2) is a highly radioactive compound. It can be coprecipitated with lanthanide fluorides. Radium fluoride has the same crystal form as calcium fluoride (fluorite). It can be prepared by the reaction of radium metal with hydrogen fluoride gas:
Ra + 2 HF → RaF2 + H2
Radium chloride (RaCl2) is a colorless, luminous compound. It becomes yellow after some time owing to self-damage from the alpha radiation given off by radium as it decays. Small amounts of barium impurities give the compound a rose color. It is soluble in water, though less so than barium chloride, and its solubility decreases with increasing concentration of hydrochloric acid. Crystallization from aqueous solution gives the dihydrate RaCl2·2H2O, isomorphous with its barium analog.
Radium bromide (RaBr2) is also a colorless, luminous compound. In water, it is more soluble than radium chloride. Like radium chloride, crystallization from aqueous solution gives the dihydrate RaBr2·2H2O, isomorphous with its barium analog. The ionizing radiation emitted by radium bromide excites nitrogen molecules in the air, making it glow. The alpha particles emitted by radium quickly gain two electrons to become neutral helium, which builds up inside and weakens radium bromide crystals. This effect sometimes causes the crystals to crack or even explode.
|
Radium compounds
|
Other compounds
|
Radium nitrate (Ra(NO3)2) is a white compound that can be made by dissolving radium carbonate in nitric acid. As the concentration of nitric acid increases, the solubility of radium nitrate decreases, an important property for the chemical purification of radium.
Radium forms much the same insoluble salts as its lighter congener barium: it forms the insoluble sulfate (RaSO4, the most insoluble known sulfate), chromate (RaCrO4), carbonate (RaCO3), iodate (Ra(IO3)2), tetrafluoroberyllate (RaBeF4), and nitrate (Ra(NO3)2). With the exception of the carbonate, all of these are less soluble in water than the corresponding barium salts, but they are all isostructural to their barium counterparts. Additionally, radium phosphate, radium oxalate, and radium sulfite are probably also insoluble, as they coprecipitate with the corresponding insoluble barium salts. The great insolubility of radium sulfate (at 20 °C, only 2.1 mg will dissolve in 1 kg of water) means that it is one of the less biologically dangerous radium compounds. The large ionic radius of Ra2+ (148 pm) results in weak complexation and poor extraction of radium from aqueous solutions when not at high pH.
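For a sense of scale, the quoted sulfate solubility can be converted to molar terms; a back-of-the-envelope sketch (the molar masses are standard reference values, not from the text):

```python
# Convert the quoted RaSO4 solubility (2.1 mg per kg of water at 20 °C)
# into an approximate molar solubility. Molar masses are standard values.
M_RA = 226.03          # radium (Ra-226), g/mol
M_S = 32.06            # sulfur, g/mol
M_O = 16.00            # oxygen, g/mol
m_raso4 = M_RA + M_S + 4 * M_O   # ~322 g/mol for RaSO4

grams_per_kg_water = 2.1e-3      # 2.1 mg/kg, from the text
molal = grams_per_kg_water / m_raso4
print(f"molar solubility ≈ {molal:.1e} mol per kg of water")  # ≈ 6.5e-06
```

At micromolar solubility, very little radium enters solution, consistent with the remark that the sulfate is among the less biologically dangerous radium compounds.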
|
Subdural hygroma
|
Subdural hygroma
|
A subdural hygroma (SDG) is a collection of cerebrospinal fluid (CSF), without blood, located under the dural membrane of the brain. Most subdural hygromas are believed to be derived from chronic subdural hematomas. They are commonly seen in elderly people after minor trauma but can also be seen in children following infection or trauma. One of the common causes of subdural hygroma is a sudden decrease in pressure as a result of placing a ventricular shunt. This can lead to leakage of CSF into the subdural space especially in cases with moderate to severe brain atrophy. In these cases the symptoms such as mild fever, headache, drowsiness and confusion can be seen, which are relieved by draining this subdural fluid.
|
Subdural hygroma
|
Etiology and Pathophysiology
|
Subdural hygromas require two conditions in order to occur. First, there must be a separation in the layers of the meninges of the brain. Second, the resulting subdural space must remain uncompressed in order for CSF to accumulate there, resulting in the hygroma. In practice, the arachnoid mater is torn and cerebrospinal fluid (CSF) from the subarachnoid space accumulates in the subdural space. Hygromas also push the subarachnoid vessels away from the inner table of the skull. A subdural hygroma can appear within the first day, but the mean time to appearance on CT scan is 9 days. A subdural hygroma does not have internal membranes that can easily rupture like a subdural hematoma, but a hygroma can sometimes occur together with hemorrhage to become a hematohygroma.
Subdural hygromas most commonly occur when events such as head trauma, infections, or cranial surgeries happen in tandem with brain atrophy, severe dehydration, prolonged spinal drainage, or any other event that causes a decrease in intracranial pressure. This provides the basis for why subdural hygromas more commonly occur in infants and the elderly: infants have compressible brains, while elderly patients have a greater amount of space for fluid to accumulate due to brain atrophy from age.
|
Subdural hygroma
|
Signs and symptoms
|
Most subdural hygromas are small and clinically insignificant, and a majority of patients with SDG will not experience symptoms. Common but nonspecific symptoms that have been reported include headache and nausea; focal neurologic deficits and seizures have also been reported but are likewise nonspecific to SDG. Larger hygromas may cause secondary localized mass effect on the adjacent brain parenchyma, enough to cause a neurologic deficit or other symptoms. Acute subdural hygromas can be a potential neurosurgical emergency, requiring decompression. Acute hygromas are typically a result of head trauma (they are a relatively common posttraumatic lesion) but can also develop following neurosurgical procedures, and have been associated with a variety of conditions, including dehydration in the elderly, lymphoma, and connective tissue diseases.
|
Subdural hygroma
|
Diagnosis
|
On CT scan, a subdural hygroma has the same density as normal CSF, while on MRI it has the same intensity as CSF. If iodinated contrast is administered during the CT scan, the hygroma will appear high in density because of the contrast at 120 kVp; at 190 kVp, however, a contrast-filled hygroma will have intermediate density.
In the majority of cases, if there has not been any acute trauma or severe neurologic symptoms, a small subdural hygroma on the head CT scan will be an incidental finding. If there is an associated localized mass effect that may explain the clinical symptoms, or concern for a potential chronic SDH that could rebleed, then an MRI, with or without neurologic consultation, may be useful.
|
Subdural hygroma
|
Diagnosis
|
It is not uncommon for chronic subdural hematomas (SDHs) on CT reports of head scans to be misinterpreted as subdural hygromas, and vice versa. Magnetic resonance imaging (MRI) should be performed to differentiate a chronic SDH from a subdural hygroma when clinically warranted. Marked cerebral atrophy with secondarily widened subarachnoid CSF spaces, common in elderly patients, can also cause confusion on CT. To distinguish chronic subdural hygromas from simple brain atrophy and CSF space expansion, a gadolinium-enhanced MRI can be performed. Visualization of cortical veins traversing the collection favors a widened subarachnoid space, as seen in brain atrophy, whereas a subdural hygroma will displace the cortex and cortical veins.
|
Subdural hygroma
|
Treatment
|
Most asymptomatic subdural hygromas do not require any treatment. Some surgeons might opt to perform simple burr holes to alleviate intracranial pressure (ICP); occasionally a temporary drain is placed for 24–48 hours postoperatively. In recurrent cases, a craniotomy may be performed to attempt to locate the site of the CSF leak, and in certain cases a shunt can be placed for additional drainage. Great caution is used when choosing to look for the CSF leak, as such leaks are generally difficult to spot.
|
Doubleheader (baseball)
|
Doubleheader (baseball)
|
In the sport of baseball, a doubleheader is a set of two games played between the same two teams on the same day. Historically, doubleheaders have been played in immediate succession, in front of the same crowd. Contemporarily, the term is also used to refer to two games played between two teams in a single day in front of different crowds and not in immediate succession.
|
Doubleheader (baseball)
|
Doubleheader (baseball)
|
The record for the most doubleheaders played by a major-league team in one season is 44 by the Chicago White Sox in 1943. Between September 4 and September 15, 1928, the Boston Braves played nine consecutive doubleheaders – 18 games in 12 days.
|
Doubleheader (baseball)
|
History
|
For many decades, major-league doubleheaders were routinely scheduled numerous times each season. However, any major-league doubleheader now played is generally the result of a prior game between the same two teams being postponed due to inclement weather or other factors. Most often the game is rescheduled for a day on which the two teams play each other again. Often it is within the same series, but in some cases, may be weeks or months after the original date. On rare occasions, the last game between two teams in that particular city is rained out, and a doubleheader may be scheduled at the other team's home park to replace the missed game.
|
Doubleheader (baseball)
|
History
|
Currently, major-league teams playing two games in a day usually play a "day-night doubleheader", in which the stadium is emptied of spectators and a separate admission is required for the second game. However, such games are officially regarded as separate games on the same date, rather than as a doubleheader. True doubleheaders are less commonly played. Classic doubleheaders, also known as day doubleheaders, were more common in the past, and although they are rare in the major leagues, they still are played at the minor league and college levels.
|
Doubleheader (baseball)
|
History
|
In 1959, at least one league played a quarter of its games as classic doubleheaders. The rate declined to 10% in 1979. Eventually, eight years passed between two officially scheduled doubleheaders. Reasons for the decline include clubs' desire to maximize revenue, longer duration of games, five-day pitching rotation as opposed to four-day rotation, time management of relievers and catchers, and lack of consensus among players.
|
Doubleheader (baseball)
|
Types of doubleheaders
|
The Official Baseball Rules used by Major League Baseball (MLB) discuss doubleheaders in section 4.08. The document makes mention of "conventional" and "split" doubleheaders.
Conventional
In conventional doubleheaders, a spectator may attend both games by purchasing a single ticket. After the first game ends, a break, normally lasting 30 to 45 minutes per the Official Baseball Rules, occurs and the second game is then started. For statistical purposes, the attendance is counted only for the second game, with the first game's attendance recorded as zero.
|
Doubleheader (baseball)
|
Types of doubleheaders
|
Day
The "classic" day doubleheader consists of the first game played in the early afternoon and, following a break, the second played in the late afternoon. This was often done out of necessity in the years before many ballparks had lights. Often, if either game went into extra innings, the second game was eventually called when it grew dark.
|
Doubleheader (baseball)
|
Types of doubleheaders
|
This type of doubleheader is now more prominent in Minor League Baseball. It is now uncommon in the major leagues, even for rain makeups, since the use of stadium lights allows for night games. They are still occasionally scheduled, one example being the Tampa Bay Rays hosting the Oakland Athletics in a single-admission doubleheader starting at 1:05 p.m. on the afternoon of June 10, 2017, at Tropicana Field.
|
Doubleheader (baseball)
|
Types of doubleheaders
|
Twi-night
In a twi-night (short for "twilight-night") doubleheader, the first game is played in the late afternoon and, following a break, the second begins at night. Under the Collective Bargaining Agreement (CBA) between MLB and the Major League Baseball Players Association (MLBPA), this is allowed provided the start time of the first game is no later than 5:00 p.m. local time, although such games generally start at 4:00 p.m. This type of doubleheader is still used in the minor leagues, and occasionally in MLB as the result of a rainout.
|
Doubleheader (baseball)
|
Types of doubleheaders
|
Split
In a split or "day-night" doubleheader, the first game is played in the early afternoon and the second is played at night, with separate tickets sold for admission to each game. Such doubleheaders are favored by major-league organizations because they can charge admission for each game individually, and they most often occur as the result of a rainout, when tickets have already been sold to the individual games.
|
Doubleheader (baseball)
|
Types of doubleheaders
|
Except in special circumstances with the approval of the MLBPA, such as a makeup game resulting from a rainout, scheduling split doubleheaders is prohibited under the terms of the 2002 CBA. Exceptions have occurred; for example, on August 22, 2012, the Arizona Diamondbacks hosted the Miami Marlins in a day-night doubleheader, the first doubleheader ever played at Chase Field, which was arranged due to a scheduling error violating another section of the CBA, which prohibits 23 consecutive games without a day off.
Since the 2012 season, the CBA has allowed teams to expand their active roster by one player (currently from 26 to 27 players) for split doubleheaders, as long as those doubleheaders are scheduled with at least 48 hours' notice.
|
Doubleheader (baseball)
|
Tripleheaders
|
Three instances of a tripleheader are recorded in MLB, in which three games were played between the same two teams on the same day. These occurred between the Brooklyn Bridegrooms and Pittsburgh Innocents on September 1, 1890 (Brooklyn won all three); between the Baltimore Orioles and Louisville Colonels on September 7, 1896 (Baltimore won all three); and between the Pittsburgh Pirates and Cincinnati Reds on October 2, 1920 (Cincinnati won two of the three).
Tripleheaders are prohibited under the current CBA, except when the first game is the conclusion of a game suspended from a prior date; this would only happen in the extremely rare event that the only remaining dates between the teams are doubleheaders, with no single games left for the suspended game to precede.
|
Doubleheader (baseball)
|
Tripleheaders
|
In 2019, a Friday doubleheader at the end of the season between the Tigers and White Sox was rained out after one of the games had started but had not gone 5 innings. As a result, one of the games was moved to a doubleheader on Saturday and the other was cancelled. Had the broader suspended-game rule been in effect for 2019, a tripleheader between the Tigers and White Sox could have taken place that Saturday.
|
Doubleheader (baseball)
|
Seven-inning doubleheaders
|
Under some rulesets, games played as part of a doubleheader last seven innings each instead of the usual nine.
|
Doubleheader (baseball)
|
Seven-inning doubleheaders
|
In college and minor league baseball
College and minor league baseball typically use seven-inning doubleheaders. This applies even in the postseason; in 1994, the first game of the five-game Pacific Coast League championship series between Vancouver and Albuquerque was rained out, and the two teams played a doubleheader, seven innings each, on the originally scheduled date of the second game. In the minors, if the first game is the completion of a suspended game from a prior day, the suspended game is played to completion (seven or nine innings, whichever it was scheduled to be when it started), and the second game of the doubleheader is seven innings.
|
Doubleheader (baseball)
|
Seven-inning doubleheaders
|
In leagues which place a runner on second base at the start of extra innings, the rule applies starting in the eighth inning.
|
Doubleheader (baseball)
|
Seven-inning doubleheaders
|
In Major League Baseball, 2020–2021
After the COVID-19 pandemic delayed the start of MLB's 2020 season from its originally intended start in March to July, the league announced on July 31 that all doubleheader games during the shortened season would be scheduled for seven innings each, to reduce strain on teams' pitchers. The league and the MLBPA agreed to put this rule in place only for the 2020 season, later extending it to the 2021 season as well. The 2022 season reverted to nine-inning doubleheaders.
|
Doubleheader (baseball)
|
Seven-inning doubleheaders
|
The first major-league seven-inning doubleheader was played on August 2, 2020, between the Cincinnati Reds and the Detroit Tigers at Comerica Park, with the Reds winning both games.
|
Doubleheader (baseball)
|
Seven-inning doubleheaders
|
Statistical impact
Some major-league feats in a seven-inning game were counted as-is, while others were not. For example, a shutout was credited when it occurred in a seven-inning game; Reds pitcher Trevor Bauer threw the first seven-inning shutout under the rule.
A no-hitter was only credited if the game lasted at least nine innings (i.e., extra innings were played due to a tie score). Under the 1991 guidelines recognizing major-league no-hitters, the feat is only officially recognized when a team's pitcher (or pitchers) allows no hits in a minimum of nine innings (that is, records at least 27 outs without allowing a hit). On April 25, 2021, Madison Bumgarner of the Arizona Diamondbacks pitched a complete seven-inning game allowing no hits to the Atlanta Braves in the second game of a doubleheader, but did not receive credit for a no-hitter. Five pitchers of the Tampa Bay Rays held the Cleveland Indians hitless in a seven-inning game, the second game of a doubleheader on July 7, 2021, and also did not receive credit for a no-hitter.
|
Doubleheader (baseball)
|
Doubleheaders of note
|
The home-and-home doubleheader, in which each team hosts one game, is extremely rare, as it requires the teams' home ballparks to be in close geographical proximity. During the 20th century and before the advent of interleague play in 1997, only one instance was recorded in Major League Baseball: a Labor Day special event involving the New York Giants and Brooklyn Superbas.
|
Doubleheader (baseball)
|
Doubleheaders of note
|
September 7, 1903
Game 1: Washington Park (II): Giants 6, Superbas 4
Game 2: Polo Grounds (III): Superbas 3, Giants 0
This is the only home-and-home doubleheader known to have been part of the original major league season schedule.
Since interleague play began, the New York Mets and the New York Yankees have played home-and-home doubleheaders on three occasions. Each occasion was due to a rainout during the first series of the season; during the second series of the season, a makeup game was scheduled at the ballpark of the opposing team as part of a day-night doubleheader.
|
Doubleheader (baseball)
|
Doubleheaders of note
|
July 8, 2000
Game 1: Shea Stadium: Yankees 4, Mets 2
Game 2: Yankee Stadium (I): Yankees 4, Mets 2 (June 11 makeup)
June 28, 2003
Game 1: Yankee Stadium (I): Yankees 7, Mets 1
Game 2: Shea Stadium: Yankees 9, Mets 8 (June 21 makeup)
June 27, 2008
Game 1: Yankee Stadium (I): Mets 15, Yankees 6 (May 16 makeup)
Game 2: Shea Stadium: Yankees 9, Mets 0
On September 13, 1951, the St. Louis Cardinals hosted a doubleheader against two different teams. The first game was a 6–4 win against the New York Giants; the second game resulted in a 2–0 loss to the Boston Braves.
On September 25, 2000, the Cleveland Indians also hosted a doubleheader against two different teams. The September 10 game against the Chicago White Sox in Cleveland had been rained out. With no common days off for the remainder of the season and both teams in a postseason race, the teams agreed to play a day game in Cleveland on the same day that the Indians were to host the Minnesota Twins for a night game. The Indians defeated the White Sox 9–2 in the first game, while the Twins defeated the Indians 4–3 in the second.
On occasion, teams may play both games of a doubleheader at the same park, but with a different team designated as the home team for each game. This is usually the result of earlier postponements. For example, in 2007, snow storms in northern Ohio caused the Cleveland Indians to postpone an entire four-game series against the Seattle Mariners scheduled for April 5–8; three of the games were made up in Cleveland throughout the season, while the fourth was made up as part of a doubleheader in Seattle on September 26, with the Indians as the designated home team for the first game. The Indians won the first game as the home team, 12–4, but lost the second as the away team, 3–2.
|
Doubleheader (baseball)
|
In popular culture
|
National Baseball Hall of Fame inductee Ernie Banks, who spent his entire MLB career with the Chicago Cubs, was known for his catchphrase, "It's a beautiful day for a ballgame ... Let's play two!", expressing his wish to play a doubleheader every day out of his love of baseball.
|
Data transformation (statistics)
|
Data transformation (statistics)
|
In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set—that is, each data point zi is replaced with the transformed value yi = f(zi), where f is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs.
|
Data transformation (statistics)
|
Data transformation (statistics)
|
Nearly always, the function that is used to transform the data is invertible, and generally is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on people's incomes in some currency unit, it would be common to transform each person's income value by the logarithm function.
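A minimal sketch of such a pointwise transformation; the income figures here are invented for illustration:

```python
import math

# Hypothetical incomes in some currency unit (illustrative values only).
incomes = [18_000, 25_000, 40_000, 62_000, 95_000, 150_000, 1_200_000]

# Replace each data point z_i with y_i = f(z_i), where f is the natural log.
log_incomes = [math.log(z) for z in incomes]

# Because f is invertible, the original data can always be recovered.
recovered = [math.exp(y) for y in log_incomes]
```

The log transform is monotone, so ranks are unchanged; only the spacing between points is compressed at the high end.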
|
Data transformation (statistics)
|
Motivation
|
Guidance for how data should be transformed, or whether a transformation should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large. However, if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. If desired, the confidence interval can then be transformed back to the original scale using the inverse of the transformation that was applied to the data.Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g. square kilometers for area and the number of people for population), most of the countries would be plotted in tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g., to thousand square kilometers, or to millions of people) will not change this. 
However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph.
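The back-transformed confidence interval described above can be sketched as follows, using a small made-up sample of right-skewed income values (the data and the rough "plus or minus two standard errors" rule are illustrative, not a real analysis):

```python
import math

# Hypothetical right-skewed sample of incomes.
incomes = [12, 15, 14, 18, 22, 25, 30, 45, 60, 150]

# Build an approximate 95% CI for the mean on the log scale,
# then map the endpoints back with the inverse transform (exp).
logs = [math.log(x) for x in incomes]
n = len(logs)
mean = sum(logs) / n
var = sum((v - mean) ** 2 for v in logs) / (n - 1)
se = math.sqrt(var / n)

lo, hi = mean - 2 * se, mean + 2 * se          # interval on the log scale
lo_orig, hi_orig = math.exp(lo), math.exp(hi)  # back-transformed interval

print(round(lo_orig, 1), round(hi_orig, 1))
```

Note that the back-transformed interval is an interval for the geometric mean rather than the arithmetic mean of the original data, which is one reason the choice of transformation should be driven by the analysis goal.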
|
Data transformation (statistics)
|
Motivation
|
Another reason for applying data transformation is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon". However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by applying the reciprocal function, yielding liters per kilometer, or gallons per mile.
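With an illustrative (made-up) fuel-economy figure, the reciprocal transformation amounts to:

```python
# Reciprocal transform: fuel economy in km per liter becomes fuel
# consumption in liters per km (figure is illustrative, not real data).
km_per_liter = 12.5
liters_per_km = 1 / km_per_liter
liters_per_100km = 100 / km_per_liter  # the form commonly quoted in Europe

print(liters_per_km, liters_per_100km)  # 0.08 8.0
```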
|
Data transformation (statistics)
|
In regression
|
Data transformation may be used as a remedial measure to make data suitable for modeling with linear regression if the original data violates one or more assumptions of linear regression. For example, the simplest linear regression models assume a linear relationship between the expected value of Y (the response variable to be predicted) and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or dependent variables in the regression model to improve the linearity. For example, addition of quadratic functions of the original independent variables may lead to a linear relationship with expected value of Y, resulting in a polynomial regression model, a special case of linear regression.
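A minimal sketch of polynomial regression as described above, fitting simulated data whose true relationship is quadratic by adding an X² column to the design matrix (coefficients and noise level are invented for illustration):

```python
import numpy as np

# Simulated quadratic relationship: y = 1 + 2x + 0.5x^2 + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, size=x.size)

# Adding x**2 as a predictor keeps the model linear in its parameters,
# so ordinary least squares still applies.
X = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(coef, 2))  # close to the true [1.0, 2.0, 0.5]
```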
|
Data transformation (statistics)
|
In regression
|
Another assumption of linear regression is homoscedasticity, that is the variance of errors must be the same regardless of the values of predictors. If this assumption is violated (i.e. if the data is heteroscedastic), it may be possible to find a transformation of Y alone, or transformations of both X (the predictor variables) and Y, such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables and linear regression may therefore be applied on these.
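The effect of such a transformation can be seen in a small simulation with multiplicative noise, where the spread of Y grows with its mean on the original scale but becomes roughly constant after taking logs (the distributions and constants are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Responses at two predictor values, with multiplicative noise:
# the standard deviation grows with the mean on the original scale.
y_small = 2.0 * rng.lognormal(0, 0.3, size=5000)   # mean about 2
y_large = 20.0 * rng.lognormal(0, 0.3, size=5000)  # mean about 20

ratio_raw = np.std(y_large) / np.std(y_small)  # about 10 on the raw scale
ratio_log = np.std(np.log(y_large)) / np.std(np.log(y_small))  # about 1

print(round(ratio_raw, 1), round(ratio_log, 2))
```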
|
Data transformation (statistics)
|
In regression
|
Yet another application of data transformation is to address the problem of lack of normality in error terms. Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss–Markov theorem). However, confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality. Transformations that stabilize the variance of error terms (i.e. those that address heteroscedasticity) often also help make the error terms approximately normal.
|
Data transformation (statistics)
|
In regression
|
Examples:
Equation: Y = a + bX. Meaning: a unit increase in X is associated with an average increase of b units in Y.
Equation: log(Y) = a + bX (exponentiating both sides: Y = e^a e^(bX)). Meaning: a unit increase in X is associated with an average increase of b units in log(Y), or equivalently, Y increases on average by a multiplicative factor of e^b. For illustrative purposes, if the base-10 logarithm were used instead of the natural logarithm, and the same symbols (a and b) were used to denote the regression coefficients, then a unit increase in X would lead on average to a 10^b-fold increase in Y. If b were 1, this implies a 10-fold increase in Y for a unit increase in X.
Equation: Y = a + b log(X). Meaning: a k-fold increase in X is associated with an average increase of b log(k) units in Y. For illustrative purposes, with the base-10 logarithm, a tenfold increase in X would result in an average increase of b log10(10) = b units in Y.
Equation: log(Y) = a + b log(X) (exponentiating both sides: Y = e^a X^b). Meaning: a k-fold increase in X is associated with a k^b multiplicative change in Y on average. Thus if X doubles, Y changes by a multiplicative factor of 2^b.
Alternative: Generalized linear models (GLMs) provide a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. GLMs allow the linear model to be related to the response variable via a link function and allow the magnitude of the variance of each measurement to be a function of its predicted value.
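The multiplicative interpretation of the log(Y) = a + bX case can be checked directly; the coefficient values below are invented for illustration:

```python
import math

# If log(Y) = a + b*X, then a unit increase in X multiplies Y by e**b.
a, b = 1.0, 0.3

def y(x):
    return math.exp(a + b * x)

factor = y(5.0) / y(4.0)          # effect of one extra unit of X
print(round(factor, 4), round(math.exp(b), 4))  # the two agree
```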
|
Data transformation (statistics)
|
Common cases
|
The logarithm transformation and square root transformation are commonly used for positive data, and the multiplicative inverse transformation (reciprocal transformation) can be used for non-zero data. The power transformation is a family of transformations parameterized by a non-negative value λ that includes the logarithm, square root, and multiplicative inverse transformations as special cases. To approach data transformation systematically, it is possible to use statistical estimation techniques to estimate the parameter λ in the power transformation, thereby identifying the transformation that is approximately the most appropriate in a given setting. Since the power transformation family also includes the identity transformation, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as the Box–Cox transformation.
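A minimal sketch of this estimation idea, maximizing the Box–Cox profile log-likelihood over a grid of λ values (in practice scipy.stats.boxcox does this properly; the lognormal data here are simulated, so the estimate should land near λ = 0, i.e. close to a log transform):

```python
import numpy as np

def boxcox(y, lam):
    # Box–Cox power transform; lam = 0 is defined as the logarithm.
    return np.log(y) if lam == 0 else (y**lam - 1) / lam

def loglik(y, lam):
    # Profile log-likelihood of lambda under a normality assumption.
    t = boxcox(y, lam)
    return -len(y) / 2 * np.log(np.var(t)) + (lam - 1) * np.log(y).sum()

rng = np.random.default_rng(0)
data = rng.lognormal(2.0, 0.8, size=2000)  # right-skewed positive data

grid = np.arange(-100, 101) / 100.0        # lambdas from -1 to 1
best = grid[int(np.argmax([loglik(data, l) for l in grid]))]
print(best)  # near 0 for lognormal data
```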
|
Data transformation (statistics)
|
Common cases
|
The reciprocal transformation, some power transformations such as the Yeo–Johnson transformation, and certain other transformations such as applying the inverse hyperbolic sine, can be meaningfully applied to data that include both positive and negative values (the power transformation is invertible over all real numbers if λ is an odd integer). However, when both negative and positive values are observed, it is common to begin by adding a constant to all values, producing a set of non-negative data to which any power transformation can be applied.

A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude. Many physical and social phenomena exhibit such behavior: incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes." The logarithm also has a useful effect on ratios. If we are comparing positive quantities X and Y using the ratio X / Y, then if X < Y, the ratio is in the interval (0,1), whereas if X > Y, the ratio is in the half-line (1,∞), where a ratio of 1 corresponds to equality. In an analysis where X and Y are treated symmetrically, the log-ratio log(X / Y) is zero in the case of equality, and it has the property that if X is K times greater than Y, the log-ratio is equidistant from zero relative to the situation where Y is K times greater than X (the log-ratios are log(K) and −log(K) in these two situations).
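The symmetry of the log-ratio can be verified in a couple of lines (K is an arbitrary illustrative factor):

```python
import math

# log(X/Y) treats "X is K times Y" and "Y is K times X" symmetrically:
# the two log-ratios are log(K) and -log(K), equidistant from zero.
K = 5.0
x_over_y = math.log(K / 1.0)   # X = K, Y = 1
y_over_x = math.log(1.0 / K)   # X = 1, Y = K

print(round(x_over_y, 4), round(y_over_x, 4))
```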
|
Data transformation (statistics)
|
Common cases
|
If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞,∞).
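The logit and its inverse can be sketched directly from their definitions:

```python
import math

def logit(p):
    # Maps a proportion in the open interval (0, 1) onto the real line.
    return math.log(p / (1 - p))

def inv_logit(x):
    # Inverse (logistic) function: maps the real line back into (0, 1).
    return 1 / (1 + math.exp(-x))

print(logit(0.5), round(inv_logit(logit(0.8)), 3))
```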
|
Data transformation (statistics)
|
Common cases
|
Transforming to normality
1. It is not always necessary or desirable to transform a data set to resemble a normal distribution. However, if symmetry or normality is desired, it can often be induced through one of the power transformations.
2. A linguistic power function is distributed according to the Zipf–Mandelbrot law. The distribution is extremely spiky and leptokurtic; for this reason researchers long had to set standard statistics aside when addressing problems such as authorship attribution. Nevertheless, Gaussian statistics can be used perfectly well by applying a data transformation.
3. To assess whether normality has been achieved after transformation, any of the standard normality tests may be used. A graphical approach is usually more informative than a formal statistical test, and hence a normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed.
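One such rule of thumb is to compare the sample skewness before and after transforming; the simulated lognormal data below are strongly right-skewed, and the log transform brings the skewness close to zero:

```python
import numpy as np

def skewness(x):
    # Sample skewness: third central moment over the cubed standard deviation.
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return np.mean((x - m) ** 3) / np.std(x) ** 3

rng = np.random.default_rng(2)
data = rng.lognormal(0.0, 1.0, size=5000)  # right-skewed simulated data

print(round(skewness(data), 1), round(skewness(np.log(data)), 2))
```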
|
Data transformation (statistics)
|
Common cases
|
Transforming to a uniform distribution or an arbitrary distribution If we observe a set of n values X1, ..., Xn with no ties (i.e., there are n distinct values), we can replace Xi with the transformed value Yi = k, where k is defined such that Xi is the kth largest among all the X values. This is called the rank transform, and creates data with a perfect fit to a uniform distribution. This approach has a population analogue.
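The rank transform can be sketched in a few lines; for concreteness this version assigns ascending ranks (the kth smallest value gets rank k), which equally yields a perfect discrete-uniform fit:

```python
# Rank transform: with no ties, replacing each value by its rank
# yields a perfect fit to a discrete uniform distribution on 1..n.
data = [3.1, 0.2, 9.7, 4.4, 1.5]

ranks = {x: k for k, x in enumerate(sorted(data), start=1)}
transformed = [ranks[x] for x in data]

print(transformed)  # [3, 1, 5, 4, 2]
```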
|
Data transformation (statistics)
|
Common cases
|
Using the probability integral transform, if X is any random variable, and F is the cumulative distribution function of X, then as long as F is invertible, the random variable U = F(X) follows a uniform distribution on the unit interval [0,1].
From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If G is an invertible cumulative distribution function, and U is a uniformly distributed random variable, then the random variable G−1(U) has G as its cumulative distribution function.
Putting the two together, if X is any random variable, F is the invertible cumulative distribution function of X, and G is an invertible cumulative distribution function then the random variable G−1(F(X)) has G as its cumulative distribution function.
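Taking G to be the standard exponential distribution as an illustrative target, G(x) = 1 − e^(−x), so G⁻¹(u) = −log(1 − u), and pushing uniform draws through G⁻¹ produces exponential samples:

```python
import math
import random

# Probability integral transform in reverse: uniform draws pushed
# through the inverse CDF of Exp(1) become exponential samples.
random.seed(0)
u = [random.random() for _ in range(100_000)]
x = [-math.log(1 - v) for v in u]

mean = sum(x) / len(x)  # Exp(1) has mean 1
print(round(mean, 2))
```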
|
Data transformation (statistics)
|
Common cases
|
Variance stabilizing transformations Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability is different for data values with different expected values. As an example, in comparing different populations in the world, the variance of income tends to increase with mean income. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances.
|
Data transformation (statistics)
|
Common cases
|
A variance-stabilizing transformation aims to remove a variance-on-mean relationship, so that the variance becomes constant relative to the mean. Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box–Cox transformation for regression analysis, and the arcsine square root transformation or angular transformation for proportions (binomial data). While commonly used for statistical analysis of proportional data, the arcsine square root transformation is not recommended: logistic regression (for binomial proportions) or a logit transformation (for non-binomial proportions) is more appropriate, in particular because it reduces type II error.
|
Data transformation (statistics)
|
Transformations for multivariate data
|
Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. If data generated by a random vector X are observed as vectors Xi of observations with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, the Cholesky decomposition is used to express Σ = A A'. Then the transformed vector Yi = A−1Xi has the identity matrix as its covariance matrix.
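The Cholesky-based decorrelation above can be sketched with simulated bivariate data (the covariance matrix is an arbitrary illustrative choice); solving the triangular system applies A⁻¹ without forming the inverse explicitly:

```python
import numpy as np

# Decorrelating a random vector: with Sigma = A A' (Cholesky factor A),
# Y = A^{-1} X has (approximately) the identity covariance matrix.
rng = np.random.default_rng(4)
sigma = np.array([[4.0, 1.5],
                  [1.5, 2.0]])
A = np.linalg.cholesky(sigma)        # lower-triangular factor

X = rng.multivariate_normal([0, 0], sigma, size=100_000).T  # 2 x n
Y = np.linalg.solve(A, X)            # apply A^{-1} without inverting

print(np.round(np.cov(Y), 2))        # close to the 2x2 identity
```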
|
Spinal mobilization
|
Spinal mobilization
|
Spinal mobilization is a type of passive movement of a spinal segment or region. It is usually performed with the aim of achieving a therapeutic effect.
Spinal mobilization has been described as "a gentle, often oscillatory, passive movement applied to a spinal region or segment so as gently to increase the passive range of motion of that segment or region."
|
Spinal mobilization
|
Types of techniques
|
Spinal mobilization employs a range of techniques, or schools of approach, in delivering the passive movement. Examples include the Maitland Technique and the Mulligan Technique.
|
Project Blogger
|
Project Blogger
|
Project Blogger is an educational initiative in Ireland by Discover Science & Engineering (DSE). It provides blogging tools and an online space for secondary school students and their teachers to create blogs about their school science experiments and science interests.
Through the blogs, the students can share their experiences about science with their classmates, as well as with students from other schools across Ireland. The scheme was piloted in the 2007–08 academic year and was extended the following year.
DSE has also teamed up with Scifest for students to use Project Blogger in their SciFest projects. The students use their online science diaries to store ongoing project results, images, ideas, graphs, video and discussions.
|
Wax museum
|
Wax museum
|
A wax museum or waxworks usually consists of a collection of wax sculptures representing famous people from history and contemporary personalities exhibited in lifelike poses, wearing real clothes.
|
Wax museum
|
Wax museum
|
Some wax museums have a special section dubbed the "Chamber of Horrors", in which the more grisly exhibits are displayed. Some collections are more specialized, as, for example, collections of wax medical models once used for training medical professionals. Many museums or displays in historical houses that are not wax museums as such use wax figures as part of their displays. The origin of wax museums goes back to the early 18th century at least, and wax funeral effigies of royalty and some other figures exhibited by their tombs had essentially been tourist attractions well before that.
|
Wax museum
|
History before 1800
|
The making of life-size wax figures wearing real clothes grew out of the funeral practices of European royalty. In the Middle Ages it was the habit to carry the corpse, fully dressed, on top of the coffin at royal funerals, but this sometimes had unfortunate consequences in hot weather, and the custom grew of making an effigy in wax for this role, again wearing actual clothes so that only the head and hands needed wax models. After the funeral these were often displayed by the tomb or elsewhere in the church, and became a popular attraction for visitors, which it was often necessary to pay to view. The Westminster Abbey Museum in London has a collection of British royal funeral effigies made of varying materials going back to that of Edward III of England's wooden likeness (died 1377), as well as those of figures such as the naval hero Horatio Nelson, and Frances Stewart, Duchess of Richmond, who also had her parrot stuffed and displayed. From the funeral of Charles II in 1680 they were no longer placed on the coffin but were still made for later display. The effigy of Charles II, open-eyed and standing, was displayed over his tomb until the early 19th century, when all the Westminster effigies were removed from the abbey itself. Nelson's effigy was a pure tourist attraction, commissioned the year after his death in 1805, when he was buried not in the Abbey but in St Paul's Cathedral following a government decision that major public figures should in future be buried there. Concerned for its revenue from visitors, the Abbey decided it needed a rival attraction for admirers of Nelson.
|
Wax museum
|
History before 1800
|
In European courts including that of France the making of posed wax figures became popular. Antoine Benoist (1632–1717) was a French court painter and sculptor in wax to King Louis XIV. He exhibited forty-three wax figures of the French Royal Circle at his residence in Paris. Thereafter, the king authorized the figurines to be shown throughout France. His work became so highly regarded that James II of England invited him to visit England in 1684. There he executed works of the English king and members of his court. A seated figure of Peter the Great of Russia survives, made by an Italian artist, after the Tsar was impressed by the figures he saw at the Chateau of Versailles. The Danish court painter Johann Salomon Wahl executed figures of the Danish king and queen in about 1740. The 'Moving Wax Works of the Royal Court of England', a museum or exhibition of 140 life-size figures, some apparently with clockwork moving parts, opened by Mrs Mary in Fleet Street, London, was doing excellent business in 1711. Philippe Curtius, waxwork modeller to the French court, opened his Cabinet de Cire as a tourist attraction in Paris in 1770, which remained open until 1802. In 1783 this added a Caverne des Grands Voleurs ("Cave of the Great Thieves"), an early "Chamber of Horrors". He bequeathed his collection to his protégée Marie Tussaud, who during the French Revolution made death masks of the executed royals.
|
Wax museum
|
Notable wax museums
|
Madame Tussauds, historically associated with London, is the most famous name associated with wax museums, although it was not the earliest wax museum, as is sometimes thought. In 1835 Madame Tussaud established her first permanent exhibition in London's Baker Street. By the late 19th century most large cities had some kind of commercial wax museum, like the Musée Grévin in Paris or the Panoptikum Hamburg, and for a century these remained highly popular. In the late 20th century it became harder for them to compete with other attractions.
|
Wax museum
|
Notable wax museums
|
Today there are also Madame Tussauds in Dam Square, Amsterdam; Berlin; Madame Tussauds Hong Kong; Shanghai; and five locations in the United States: the Venetian Hotel in Las Vegas, Nevada, Times Square in New York City, Washington, D.C., Fisherman's Wharf in San Francisco and Hollywood.
Louis Tussaud's wax museum in San Antonio, Texas, is across the street from the historic Alamo. Others are located on the Canadian side of Niagara Falls, and Grand Prairie, Texas.
|
Wax museum
|
Notable wax museums
|
One of the most popular wax museums in the United States for decades was The Movieland Wax Museum in Buena Park, California, near Knott's Berry Farm. The museum opened in 1962 and through the years added many wax figures of famous show business figures. Several stars attended the unveilings of the wax incarnations. The museum closed its doors on October 31, 2005, after years of dwindling attendance.
|
Wax museum
|
Notable wax museums
|
However, the most enduring museum in the United States is the Hollywood Wax Museum located in Hollywood, California which features almost exclusively figures of movie actors displayed in settings associated with their roles in popular movies. This group of museums includes Hollywood Wax Museum Branson in Branson, Missouri along with Hollywood Wax Museum Pigeon Forge in Pigeon Forge, Tennessee and Hollywood Wax Museum Myrtle Beach in Myrtle Beach, South Carolina. With the original location having been developed in the mid-1960s, this group of museums went against the late 20th century trend of declining wax museum attendance, with the Branson location having undergone a substantial expansion and remodeling in 2008 and 2009 including an animated ride and a mirror maze.
|
Wax museum
|
Notable wax museums
|
Another popular wax museum is the Musée Conti Wax Museum in New Orleans, Louisiana, which features wax figures portraying the city's history as well as a "Haunted Dungeon" section of wax figures of famous characters from horror films and literature. This museum is currently closed as the Conti building is being converted into condos. The museum should reopen at Jazzland Theme Park some time in the future. Another popular wax museum in the U.S. is the Wax Museum at Fisherman's Wharf in San Francisco, California.
|
Wax museum
|
Notable wax museums
|
BibleWalk is a Christian wax museum in Mansfield, Ohio. It has received attention for its use of celebrity wax figures in its religious scenes, originally a cost-saving measure when new wax figures were deemed too expensive. The Royal London Wax Museum was open in downtown Victoria, British Columbia, Canada, from 1970 to 2010 in the Steamship Terminal building; it featured "royalty to rogues and the renowned." It was forced to close when the building required seismic upgrades.
|
Wax museum
|
Notable wax museums
|
The National Wax Museum in Dublin, Ireland is a wax museum which hosts well over a hundred figures. For many years it has had only one sculptor, PJ Heraty, who continued producing figures even while the museum was closed until it could be re-opened at a new location. In recent years, new wax museums have opened around the world; in 2009 the Dreamland Wax Museum opened in Gramado, in the south of Brazil.
|
Wax museum
|
Notable wax museums
|
The National Presidential Wax Museum in Keystone, South Dakota is the only wax museum in the world to feature every U.S. president. Its exhibits also include other notable figures from history such as General George Custer, Alexander Graham Bell, Thomas Edison, and Sitting Bull. Originally created by the famed sculptor Katherine Stubergh, the museum includes death and life masks of notable Hollywood celebrities including Mae West and Sid Grauman. Its most revered exhibit is a depiction of George W. Bush standing on the rubble of the World Trade Center with FDNY firefighter Bob Beckwith following the attacks of September 11, 2001.
|
Wax museum
|
Notable wax museums
|
India's first wax museum opened in December 2005 in Kanyakumari. Now relocated to Lonavala, it contains 100 wax statues of celebrities at Lonavala Square Mall. India's biggest wax museum, Mother's Wax Museum, opened in November 2014 in New Town, Kolkata. Another branch opened in July 2008 at the historical site of Old Goa with a collection of religious statues.
|
Wax museum
|
Notable wax museums
|
Madame Tussauds opened its first museum in India at New Delhi in 2017.
|
Wax museum
|
Depictions
|
Mystery of the Wax Museum
House of Wax (1953 film)
Museo del horror
Terror in the Wax Museum
Waxwork (film)
House of Wax (2005 film)
|
Slugging
|
Slugging
|
Slugging, also known as casual carpooling, is the practice of forming ad hoc, informal carpools for purposes of commuting, essentially a variation of ride-share commuting and hitchhiking. A driver picks up these non-paying passengers (known as "slugs" or "sluggers") at key locations, as having these additional passengers means that the driver can qualify to use an HOV lane or enjoy toll reduction. While the practice is most common and most publicized in the congested Washington, D.C. metropolitan area, slugging also occurs in San Francisco, Houston, and other cities.
|
Slugging
|
Background
|
In order to relieve traffic volume during the morning and evening rush hours, high-occupancy vehicle (HOV) lanes that require more than one person per automobile were introduced in many major American cities to encourage carpooling and greater use of public transport, first appearing in the Washington D.C. metropolitan area in 1975. The failure of the new lanes to relieve congestion, and frustration over failures of public-transport systems and high fuel prices, led to the creation in the 1970s of "slugging", a form of hitchhiking between strangers that is beneficial to both parties, as drivers and passengers are able to use the HOV lane for a quicker trip. While passengers are able to travel for free, or cheaper than via other modes of travel, and HOV drivers sometimes pay no tolls, "slugs are, above all, motivated by time saved, not money pocketed". Concern for the environment is not their primary motivation; Virginia drivers of hybrid automobiles are, for example, eligible to use HOV lanes with no passengers. In the Washington area—with the second-busiest traffic during rush hour in the United States and Canada as of 2010—slugging occurs on Interstates 95, 66 and 395 between Washington and northern Virginia. As of 2006, there were about 6,459 daily slugging participants there. In the San Francisco Bay Area, with the third-busiest rush hour, casual carpooling occurs on Interstate 80 between the East Bay and San Francisco. As of 1998, 8,000 to 9,000 people slugged in San Francisco daily. However, after bridge tolls were levied on carpool vehicles in 2010, casual carpooling saw a significant decline and etiquette became more uncertain. Among the effects of the COVID-19 pandemic in the San Francisco Bay Area was the end of casual carpooling in March 2020.
As of November 2022 the tradition has not resumed; although drivers continue to hope to see waiting passengers at designated pickup spots, the spontaneous nature of the program means that there is no one to restart it. Slugging also occurs in tenth-busiest Houston, at a rate of 900 daily in 2007, and in Pittsburgh. Slugging is shown to be effective in reducing vehicle travel distance as a form of ridesharing. Slugging is used more during morning commutes than evening commutes. The most common mode that slugging replaces is the transit bus. David D. Friedman's The Machinery of Freedom proposed a similar system (which he referred to as "jitney transit") in the 1970s. However, his plan assumed that passengers would be expected to pay for their transit, and that security measures such as electronic identification cards (recording the identity of both driver and passenger in a database readily available to police, in the event one or both parties disappeared) would be needed in order for people to feel safe. Although slugging is informal, ad hoc, and free, in 30 years no violence or crime was reported from Washington D.C. slugging until October 2010, when former Sergeant Major of the Army Gene McKinney struck one of his passengers with his car after they threatened to report his reckless driving to the police.
|
Slugging
|
Etymology
|
The term slug (used as both a noun and a verb) came from bus drivers who had to determine if the people waiting at the stop were genuine bus passengers or merely people wanting a free lift, in the same way that they look out for fake coins—or "slugs"—being thrown into the fare-collection box.
|
Slugging
|
General practices
|
In practice, slugging involves the creation of free, unofficial ad hoc carpool networks, often with published routes and pick-up and drop-off locations. In the morning, sluggers gather in lines at local businesses and at government-run locations such as park-and-ride facilities, bus stops, and subway stations. Drivers pull up to the queue for the route they will follow and either display a sign or call out the designated drop-off point they are willing to drive to and how many passengers they can take; in the Washington area the Pentagon—the largest place of employment in the United States, with 25,000 workers—is a popular destination. Enough riders fill the car and the driver departs. In the evening, the routes reverse. Many unofficial rules of etiquette exist, and websites allow sluggers to post warnings about those who break them. Some Washington D.C. rules are: The slug first in line gets the next ride to their destination and also gets to choose the front or back seat. Slugs should never take a ride out of turn.
|
Slugging
|
General practices
|
Drivers are not to pick up sluggers en route to or standing outside the line, a practice referred to as "body snatching".
A woman is not to be left in the line alone, for her safety.
No eating, smoking, or putting on of makeup is allowed.
The driver has full control of the radio and climate controls.
Windows may not be opened unless the driver approves.
No money is exchanged or requested, as the driver and slugs all benefit from slugging.
Driver and passengers say "Thank you" at the end.
|
Slugging
|
Government involvement
|
While local governments sometimes aid sluggers by posting signs labeled with popular destinations for people to queue at, slugging is organized by its participants and no slug line has ever been created by government. Slug lines are organized and maintained by volunteers. Government officials have become more aware of sluggers' needs when planning changes that affect their behavior, and solicit their suggestions. The Virginia Department of Transportation even includes links about slugging on its webpage.
|
Slugging
|
Other countries
|
In Jakarta, "car jockeys" are paid by commuters to ride into the center of the city to permit the use of high-occupancy vehicle lanes. In India, it is illegal for drivers to randomly pick up commuters from the public roads and there is evidence that such drivers have been fined.
In the Polish People's Republic, hitchhiking was officially supported by the government (and formalized), and in Cuba, government vehicles are obligated to take hitchhikers, but these systems have nothing to do with high-occupancy lanes.
|
Interpersonal deception theory
|
Interpersonal deception theory
|
Interpersonal deception theory (IDT) is one of a number of theories that attempts to explain how individuals handle actual (or perceived) deception at the conscious or subconscious level while engaged in face-to-face communication. The theory was put forth by David Buller and Judee Burgoon in 1996 to explore this idea that deception is an engaging process between receiver and deceiver. IDT assumes that communication is not static; it is influenced by personal goals and the meaning of the interaction as it unfolds. The sender's overt (and covert) communications are affected by the overt and covert communications of the receiver, and vice versa. IDT explores the interrelation between the sender's communicative meaning and the receiver's thoughts and behavior in deceptive exchanges.
|
Interpersonal deception theory
|
Interpersonal deception theory
|
Intentional deception requires greater cognitive exertion than truthful communication, regardless of whether the sender attempts falsification (lying), concealment (omitting material facts) or equivocation (skirting issues by changing the subject or responding indirectly).
|
Interpersonal deception theory
|
Theoretical perspective
|
IDT views deception through the lens of interpersonal communication, considering deception as an interactive process between sender and receiver. In contrast with previous studies of deception (which focused on the sender and receiver individually), IDT focuses on the dyadic and relational nature of deceptive communication. Behaviors by sender and receiver are dynamic, multifunctional, multidimensional and multi-modal.
|
Interpersonal deception theory
|
Theoretical perspective
|
Dyadic communication is communication between two people; a dyad is a group of two people between whom messages are sent and received. Relational communication is communication in which meaning is created by two people simultaneously filling the roles of sender and receiver. Dialogic activity is the active communicative language of the sender and receiver, each relying upon the other in the exchange. "Both individuals within the communicative situation are actively participating in strategies to obtain or achieve goals set by themselves. The decision to actively deceive or not, is not that of a passive nature, it is done with intent by both individuals during the conversation".
In psychotherapy and psychological counseling, dyadic, relational and dialogic activity between therapist and patient relies on honest, open communication if the patient is to recover and be capable of healthier relationships. Deception uses the same theoretical framework in reverse; the communication of one participant is deliberately false.
History
The research literature documents that human beings are poor detectors of deception. Accuracy rates for distinguishing truth from deception are only slightly above chance (54%). Observers perform slightly worse when given only visual information (52% accuracy) and better when they can hear, but not see, the target person (63%). While experts are more confident than laypersons, they are not more accurate.

IDT proposes that most individuals overestimate their ability to detect deception. In some cultures certain means of deception are acceptable while other forms are not, and this acceptance is reflected in language terms that classify, rationalize or condemn such behavior. Deception regarded as a simple white lie told to spare feelings may be deemed socially acceptable, while deception used to gain an advantage may be considered ethically questionable. It has been estimated that "deception and suspected deception arise in at least one quarter of all conversations".

Detecting deception between partners is difficult unless a partner tells an outright lie or contradicts something the other partner knows to be true. Although it is difficult to deceive a person over a long period of time, deception often occurs in day-to-day conversations between relational partners. Maintaining a deception over time is difficult because it places a significant cognitive load on the deceiver, who must recall previous statements so that the story remains consistent and believable. As a result, deceivers often leak important information both verbally and nonverbally.
In the early twentieth century, Sigmund Freud studied nonverbal cues to deception. Freud observed a patient being asked about his darkest feelings; if the patient's mouth was shut and his fingers were trembling, he was considered to be lying. Freud also noted other nonverbal cues, such as drumming one's fingers while telling a lie. More recently, scientists have attempted to establish the differences between truthful and deceptive behavior using a range of psychological and physiological approaches. In 1969, Ekman and Friesen used straightforward observation methods to identify deceptive nonverbal leakage cues, while more recently Rosenfeld et al. used magnetic resonance imaging (MRI) to detect differences between honest and deceptive responses.

In 1989, DePaulo and Kirkendol developed the Motivation Impairment Effect (MIE), which states that the harder people try to deceive others, the more likely they are to get caught. Burgoon and Floyd, however, revisited this research and proposed that deceivers are more active in their attempts to deceive than most would anticipate or expect.
IDT was developed in 1996 by David B. Buller and Judee K. Burgoon. Prior to their study, deception had not been fully considered as a communication activity; previous work had focused on formulating principles of deception, derived by evaluating the lie-detection ability of individuals observing unidirectional communication. These early studies found that "although humans are far from infallible in their efforts to diagnose lies, they are substantially better at the task than would result merely by chance."

Buller and Burgoon discount the value of highly controlled studies, usually one-way communication experiments, designed to isolate unmistakable cues that people are lying. IDT is instead based on two-way communication and is intended to describe deception as an interactive communicative process: deception is an interpersonal communication activity that requires the active participation of both deceiver and receiver. Buller and Burgoon wanted to emphasize that both parties are constantly engaged in conscious and unconscious behaviors that relay their true intentions. They initially based IDT on the four-factor model of deception developed by social psychologist Miron Zuckerman, who argues that the four components of deceit inevitably cause cognitive overload and therefore leakage. Zuckerman's four factors begin with the attempt to control information, which fosters behavior that can come across as too practiced, followed by physiological arousal as a result of deception. This arousal leads to the third factor, felt emotions, usually guilt and anxiety, which can become noticeable to an observer. Finally, the cognitive demands of maintaining a deception often lead to nonverbal leakage cues, such as increased blinking and a higher-pitched voice.
Propositions
IDT's model of interpersonal deception has 21 verifiable propositions. Based on assumptions of interpersonal communication and deception, each proposition can generate a testable hypothesis. Although some propositions originated in IDT, many are derived from earlier research. The propositions attempt to explain the cognition and behavior of sender and receiver during the process of deception, from before interaction through interaction to the outcome after interaction.
Context and relationship
IDT's explanations of interpersonal deception depend on the situation in which interaction occurs and the relationship between sender and receiver.
1. Sender and receiver cognition and behaviors vary, since deceptive communication contexts vary in access to social cues, immediacy, relationship, conversational demands and spontaneity.
2. In deceptive interchanges, sender and receiver cognition and behaviors vary, since relationships vary in familiarity (informational and behavioral) and valence.
Other factors before interaction
Individuals approach deceptive exchanges with factors such as expectancy, knowledge, goals or intentions and behaviors reflecting their communication competence. IDT posits that these factors influence the deceptive exchange.
3. Compared with truth-tellers, deceivers engage in more strategic activity designed to manage information, behavior and image and have more nonstrategic arousal cues, negative and muted affect and non-involvement.
Effects on sender's deception and fear of detection
IDT posits that factors before the interaction influence the sender's deception and fear of detection.
4. Context moderates deception; increased interaction produces greater strategic activity (information, behavior and image management) and reduced nonstrategic activity (arousal or muted affect) over time.
5. Initial expectations of honesty are related to the degree of interactivity and the relationship between sender and receiver.
6. Deceivers' fear of detection and associated strategic activity are inversely related to expectations of honesty, a function of context and relationship quality.
7. Goals and motivation influence behavior.
8. As receivers' informational, behavioral and relational familiarity increase, deceivers have a greater fear of detection and exhibit more strategic information, behavior and image management and nonstrategic leakage behavior.
9. Skilled senders convey a truthful demeanor, with more strategic behavior and less nonstrategic leakage, better than unskilled ones.
Effects on receiver cognition
IDT also posits that factors before the interaction, combined with initial behavior, affect receiver suspicion and detection accuracy.
10. Receiver judgment of sender credibility is related to receiver truth biases, context interactivity, sender encoding skills and sender deviation from expected patterns.
11. Detection accuracy is related to receiver truth biases, context interactivity, sender encoding skills, informational and behavioral familiarity, receiver decoding skills and sender deviation from expected patterns.
Interaction patterns
IDT describes receiver suspicion and sender reaction.
12. Receiver suspicion is displayed in a combination of strategic and nonstrategic behavior.
13. Senders perceive suspicion.
14. Suspicion, perceived or actual, increases senders' strategic and nonstrategic behavior.
15. Deception and suspicion displays change over time.
16. Reciprocity is the predominant interaction pattern between senders and receivers during interpersonal deception.
Outcomes
IDT posits that interaction between sender and receiver influences how credible the receiver thinks the sender is and how suspicious the sender thinks the receiver is.
17. Receiver detection accuracy, bias, and judgments of sender credibility after an interaction are functions of receiver cognition (suspicion and truth bias), receiver decoding skill and final sender behavior.
18. Sender perceived deception success is a function of final sender cognition (perceived suspicion) and receiver behavior.
Strategic and non-strategic linguistic behavior
Strategic linguistic behavior, specifically information and image management, is most relevant to language use during deception. Three sub-strategies can be used:
Reticence (reserving or restraining): withholding truthful information and/or reducing the specificity of content details. Reticence is a very common way of creating deception.
Vagueness and uncertainty: making the message evasive and ambiguous through language choices.
Non-immediacy: reducing the directness and intensity of the interaction between the communicator and the object or event communicated about, which has the effect of distancing senders from their messages.
Receiver's role
Although most people believe they can spot deception, IDT posits that they cannot. Deceivers must manage their verbal and nonverbal cues to ensure that what they are saying appears true. According to IDT, the more socially aware receivers are, the better they are at detecting deceit.
Humans have a predisposition to believe what they are told; this is referred to as a "truth bias." Under a common social agreement, people are honest with one another and believe that others will be honest with them. If a deceiver begins a deceptive exchange with an accurate statement, that statement may induce the receiver to believe the rest of the deceiver's story is also true: the sender prepares the receiver to accept his or her information as truth, even if some (or all) of the dialogue is false. If the sender repeats the same tactic, however, the receiver will become more aware that the sender is lying.

When suspicion is aroused in the receiver, it can be expressed in a variety of ways. Buller and Burgoon (1996) emphasized that there is no uniform receiver style for expressing suspicion; rather, suspicion is expressed through the variety of behaviors identified in previous research. According to Buller et al. (1991), receivers often use follow-up questions to probe their deceivers once they begin to detect deception; Buller et al. found that this did not elicit as much suspicion as probes from nonsuspicious receivers. Burgoon et al. (1995) found that some receivers engaged in a more dominant interview style, a more aggressive and "unpleasant" style of questioning that aroused suspicion on the part of the deceiver.